Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2015-11-01
The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study the wavefield characteristics in tilted transversely isotropic (TTI) media. However, it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes, based on the sampling approximation (SA) method and the least-squares (LS) method respectively, to overcome this problem. We first briefly introduce the RSFD theory, from which we derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based and LS-based RSFD schemes with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that they can effectively widen the wavenumber range over which high accuracy is maintained, compared with the TE-based RSFD scheme. Further comparisons between these two optimal schemes show that at small wavenumbers the SA-based RSFD scheme performs better, while at large wavenumbers the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based and LS-based RSFD schemes can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.
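As a minimal, hypothetical illustration of the coefficient-design idea (a 1-D staggered-grid first-derivative operator, not the authors' 2-D TTI scheme), the sketch below derives Taylor-series coefficients and compares them with a least-squares fit over a wide wavenumber band:

```python
import numpy as np

# 1-D staggered-grid first-derivative operator with M coefficient pairs:
#   f'(0) ~= (1/h) * sum_m c_m * [f((m-1/2)h) - f(-(m-1/2)h)]
# Its spectral response at normalized wavenumber kappa = k*h is
#   D(kappa) = 2 * sum_m c_m * sin((m-1/2)*kappa), ideally D(kappa) = kappa.
M = 2

# Taylor-series (TE) coefficients: match the Taylor expansion of D to kappa
A = np.array([[(m - 0.5)**(2*j + 1) for m in range(1, M + 1)] for j in range(M)])
rhs = np.array([0.5] + [0.0]*(M - 1))
c_te = np.linalg.solve(A, rhs)            # [9/8, -1/24] for M = 2

# Least-squares (LS) coefficients: fit D(kappa) ~= kappa over a wide band
kappa = np.linspace(1e-3, 0.8*np.pi, 400)
G = 2.0*np.sin(np.outer(kappa, np.arange(1, M + 1) - 0.5))
c_ls, *_ = np.linalg.lstsq(G, kappa, rcond=None)

# maximum dispersion error over the band for each coefficient set
err_te = np.max(np.abs(G @ c_te - kappa))
err_ls = np.max(np.abs(G @ c_ls - kappa))
```

The TE coefficients are exact near kappa = 0 but drift at large wavenumbers; the LS fit trades a little small-wavenumber accuracy for a much smaller maximum error over the whole band, which is the trade-off the abstract describes.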
A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.
Do, Nhu Tri; An, Beongku
2015-02-13
In this paper we propose a soft-hard combination scheme, called the SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster-based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection, which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive the closed-form expression of the probability density function of the LRT value. The SHC scheme also takes into account the different effects of large-scale fading on different users in the wide area network. The simulation results show that the SHC scheme not only provides better sensing performance than the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
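The cooperative gain behind soft combination can be sketched with a plain energy detector (a simplification; the paper's scheme uses the LRT within a cluster structure). All numbers below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 10, 2000                  # cooperating users, samples per user
snr = 10**(-15/10)               # -15 dB average SNR, as in the abstract

def energy_stats(signal_present):
    # per-user normalized energy statistic over N samples
    noise = rng.normal(size=(K, N))
    sig = np.sqrt(snr)*rng.normal(size=(K, N)) if signal_present else 0.0
    y = noise + sig
    return (y**2).mean(axis=1)

# soft combination at the fusion center: sum the users' soft statistics
trials = 200
h0 = np.array([energy_stats(False).sum() for _ in range(trials)])
h1 = np.array([energy_stats(True).sum() for _ in range(trials)])

thr = np.quantile(h0, 0.9)       # threshold set for ~10% false alarm
pd = (h1 > thr).mean()           # detection probability at -15 dB
```

Summing K soft statistics shrinks the relative noise floor by roughly sqrt(K), which is what makes detection feasible at -15 dB where a single user would fail.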
Adaptive Numerical Dissipative Control in High Order Schemes for Multi-D Non-Ideal MHD
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.
2004-01-01
The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free of numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multi-resolution wavelets (WAV) (for the above types of flow feature). These filter approaches also provide a natural and efficient way for the minimization of Div(B) numerical error. The filter scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme for the inviscid flux derivatives. If necessary, a small amount of high order linear dissipation is used to remove spurious high frequency oscillations. For example, an eighth-order centered linear dissipation (AD8) might be included in conjunction with a spatially sixth-order base scheme. The inviscid difference operator is applied twice for the viscous flux derivatives. After the completion of a full time step of the base scheme, the solution is adaptively filtered by the product of a 'flow detector' and the 'nonlinear dissipative portion' of a high-resolution shock-capturing scheme. In addition, the scheme-independent wavelet flow detector can be used in conjunction with spatially compact, spectral or spectral element type of base schemes.
The ACM and wavelet filter schemes use the dissipative portion of a second-order shock-capturing scheme with a sixth-order spatial central base scheme for both the inviscid and viscous MHD flux derivatives, and a fourth-order Runge-Kutta method for time integration.
Total Variation Diminishing (TVD) schemes of uniform accuracy
NASA Technical Reports Server (NTRS)
Hartwich, Peter M.; Hsu, Chung-Hao; Liu, C. H.
1988-01-01
Explicit second-order accurate finite-difference schemes for the approximation of hyperbolic conservation laws are presented. These schemes are nonlinear even for the constant coefficient case. They are based on first-order upwind schemes. Their accuracy is enhanced by locally replacing the first-order one-sided differences with either second-order one-sided differences or central differences or a blend thereof. The appropriate local difference stencils are selected such that they give TVD schemes of uniform second-order accuracy in the scalar, or linear systems, case. Like conventional TVD schemes, the new schemes avoid a Gibbs phenomenon at discontinuities of the solution, but they do not switch back to first-order accuracy, in the sense of truncation error, at extrema of the solution. The performance of the new schemes is demonstrated in several numerical tests.
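A minimal sketch of the underlying idea, a first-order upwind flux locally enhanced with limited second-order slopes, for the model problem u_t + u_x = 0 with periodic boundaries and a minmod blend (a generic TVD scheme under these assumptions, not the paper's exact stencil selection):

```python
import numpy as np

def minmod(a, b):
    # minmod limiter: zero at a sign change, else the smaller-magnitude slope
    return np.where(a*b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def tvd_step(u, nu):
    # one step of a second-order TVD upwind scheme for u_t + u_x = 0 (a = 1):
    # flux F_{i+1/2} = u_i + 0.5*(1 - nu)*s_i with minmod-limited slope s_i
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
    F = u + 0.5*(1.0 - nu)*s
    return u - nu*(F - np.roll(F, 1))

def total_variation(u):
    return np.abs(np.diff(np.append(u, u[0]))).sum()

u = np.where(np.arange(100) < 50, 1.0, 0.0)   # step initial condition
tv0 = total_variation(u)
for _ in range(80):
    u = tvd_step(u, nu=0.5)                   # CFL number 0.5
```

The limiter switches the one-sided difference to first order only where the slopes disagree in sign, so the total variation never grows and no new extrema (Gibbs oscillations) appear at the discontinuities.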
High-order asynchrony-tolerant finite difference schemes for partial differential equations
NASA Astrophysics Data System (ADS)
Aditya, Konduri; Donzis, Diego A.
2017-12-01
Synchronizations of processing elements (PEs) in massively parallel simulations, which arise due to communication or load imbalances between PEs, significantly affect the scalability of scientific applications. We have recently proposed a method based on finite-difference schemes to solve partial differential equations in an asynchronous fashion - synchronization between PEs is relaxed at a mathematical level. While standard schemes can maintain their stability in the presence of asynchrony, their accuracy is drastically affected. In this work, we present a general methodology to derive asynchrony-tolerant (AT) finite difference schemes of arbitrary order of accuracy, which can maintain their accuracy when synchronizations are relaxed. We show that there are several choices available in selecting a stencil to derive these schemes and discuss their effect on numerical and computational performance. We provide a simple classification of schemes based on the stencil and derive schemes that are representative of different classes. Their numerical error is rigorously analyzed within a statistical framework to obtain the overall accuracy of the solution. Results from numerical experiments are used to validate the performance of the schemes.
NASA Technical Reports Server (NTRS)
Noor, A. K.; Stephens, W. B.
1973-01-01
Several finite difference schemes are applied to the stress and free vibration analysis of homogeneous isotropic and layered orthotropic shells of revolution. The study is based on a form of the Sanders-Budiansky first-approximation linear shell theory, modified such that the effects of shear deformation and rotary inertia are included. A Fourier approach is used in which all the shell stress resultants and displacements are expanded in a Fourier series in the circumferential direction, and the governing equations reduce to ordinary differential equations in the meridional direction. While primary attention is given to finite difference schemes used in conjunction with the first order differential equation formulation, comparison is made with finite difference schemes used with other formulations. These finite difference discretization models are compared with respect to simplicity of application, convergence characteristics, and computational efficiency. Numerical studies are presented for the effects of variations in shell geometry and lamination parameters on the accuracy and convergence of the solutions obtained by the different finite difference schemes. On the basis of the present study it is shown that the mixed finite difference scheme based on the first order differential equation formulation and two interlacing grids for the different fundamental unknowns offers a number of advantages over other finite difference schemes previously reported in the literature.
Optimal reconstruction of the states in qutrit systems
NASA Astrophysics Data System (ADS)
Yan, Fei; Yang, Ming; Cao, Zhuo-Liang
2010-10-01
Based on mutually unbiased measurements, an optimal tomographic scheme for the multiqutrit states is presented explicitly. Because the reconstruction process of states based on mutually unbiased states is free of information waste, we refer to our scheme as the optimal scheme. By optimal we mean that the number of the required conditional operations reaches the minimum in this tomographic scheme for the states of qutrit systems. Special attention will be paid to how those different mutually unbiased measurements are realized; that is, how to decompose each transformation that connects each mutually unbiased basis with the standard computational basis. It is found that all those transformations can be decomposed into several basic implementable single- and two-qutrit unitary operations. For the three-qutrit system, there exist five different mutually unbiased-bases structures with different entanglement properties, so we introduce the concept of physical complexity to minimize the number of nonlocal operations needed over the five different structures. This scheme is helpful for experimental scientists to realize the most economical reconstruction of quantum states in qutrit systems.
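For prime dimensions the d+1 mutually unbiased bases admit a standard explicit construction; the sketch below builds and checks them for qutrits (d = 3). This verifies only the unbiasedness property, not the paper's decomposition into single- and two-qutrit operations:

```python
import numpy as np

d = 3
w = np.exp(2j*np.pi/d)          # primitive cube root of unity

# for prime d: the computational basis plus, for each m = 0..d-1, the basis
# whose k-th vector has components w^(m*j^2 + k*j)/sqrt(d)  (standard result)
bases = [np.eye(d, dtype=complex)]
for m in range(d):
    B = np.array([[w**((m*j*j + k*j) % d) for k in range(d)]
                  for j in range(d)]) / np.sqrt(d)
    bases.append(B)

# each basis is orthonormal ...
for B in bases:
    assert np.allclose(B.conj().T @ B, np.eye(d))
# ... and any two vectors from different bases satisfy |<a|b>|^2 = 1/d
for i1 in range(len(bases)):
    for i2 in range(i1 + 1, len(bases)):
        overlaps = np.abs(bases[i1].conj().T @ bases[i2])**2
        assert np.allclose(overlaps, 1.0/d)
```

Measuring in all four bases yields the complete, redundancy-free set of probabilities that makes the reconstruction "optimal" in the sense of the abstract.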
Studies of Inviscid Flux Schemes for Acoustics and Turbulence Problems
NASA Technical Reports Server (NTRS)
Morris, Chris
2013-01-01
Five different central difference schemes, based on a conservative differencing form of the Kennedy and Gruber skew-symmetric scheme, were compared with six different upwind schemes based on primitive variable reconstruction and the Roe flux. These eleven schemes were tested on a one-dimensional acoustic standing wave problem, the Taylor-Green vortex problem and a turbulent channel flow problem. The central schemes were generally very accurate and stable, provided the grid stretching rate was kept below 10%. At near-DNS grid resolutions, the results were comparable to reference DNS calculations. At coarser grid resolutions, the need for an LES SGS model became apparent. There was a noticeable improvement moving from CD-2 to CD-4, and higher-order schemes appear to yield clear benefits on coarser grids. The UB-7 and CU-5 upwind schemes also performed very well at near-DNS grid resolutions. The UB-5 upwind scheme does not do as well, but does appear to be suitable for well-resolved DNS. The UF-2 and UB-3 upwind schemes, which have significant dissipation over a wide spectral range, appear to be poorly suited for DNS or LES.
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its great accuracy and stability, but its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using the minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme retains the advantage of the TE method, which guarantees great accuracy at small wavenumbers, while keeping the property of the MA method that the numerical errors remain within a limited bound. Thus, it leads to great accuracy in the numerical solution of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function, and using a Remez algorithm to minimize its maximum error. Numerical analysis is made in comparison with the conventional TE-based ISFD scheme, indicating that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy, and achieve greater precision than the conventional ISFD scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation, and is more efficient than the conventional ISFD scheme for elastic modeling.
Report on Pairing-based Cryptography.
Moody, Dustin; Peralta, Rene; Perlner, Ray; Regenscheid, Andrew; Roginsky, Allen; Chen, Lily
2015-01-01
This report summarizes study results on pairing-based cryptography. The main purpose of the study is to form NIST's position on standardizing and recommending pairing-based cryptography schemes currently published in research literature and standardized in other standard bodies. The report reviews the mathematical background of pairings. This includes topics such as pairing-friendly elliptic curves and how to compute various pairings. It includes a brief introduction to existing identity-based encryption (IBE) schemes and other cryptographic schemes using pairing technology. The report provides a complete study of the current status of standard activities on pairing-based cryptographic schemes. It explores different application scenarios for pairing-based cryptography schemes. As an important aspect of adopting pairing-based schemes, the report also considers the challenges inherent in validation testing of cryptographic algorithms and modules. Based on the study, the report suggests an approach for including pairing-based cryptography schemes in the NIST cryptographic toolkit. The report also outlines several questions that will require further study if this approach is followed.
Error function attack of chaos synchronization based encryption schemes.
Wang, Xingang; Zhan, Meng; Lai, C-H; Gang, Hu
2004-03-01
Different chaos synchronization based encryption schemes are reviewed and compared from the practical point of view. As an efficient cryptanalysis tool for chaos encryption, a proposal based on the error function attack is presented systematically and used to evaluate system security. We define a quantitative measure (quality factor) of the effective applicability of a chaos encryption scheme, which takes into account the security, the encryption speed, and the robustness against channel noise. A comparison is made of several encryption schemes and it is found that a scheme based on one-way coupled chaotic map lattices performs outstandingly well, as judged by the quality factor. Copyright 2004 American Institute of Physics.
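The error function attack can be sketched on a toy additive chaotic-masking cipher (a deliberately weak, hypothetical scheme, not one of the systems reviewed): scanning a trial key and measuring the decryption residual exposes the true key as a sharp minimum, because any key mismatch makes the regenerated chaotic stream diverge.

```python
import numpy as np

def stream(r, x0, n):
    # chaotic keystream from the logistic map x <- r*x*(1-x)
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r*x*(1.0 - x)
        out[i] = x
    return out

n, x0, r_true = 50, 0.3, 3.91               # r_true is the secret key
plaintext = 0.1*np.sin(np.linspace(0.0, 6.0, n))
ciphertext = plaintext + stream(r_true, x0, n)   # additive chaotic masking

def error_function(r_trial):
    # known-plaintext residual after subtracting a trial keystream
    return np.mean(np.abs(ciphertext - stream(r_trial, x0, n) - plaintext))

grid = np.linspace(3.8, 4.0, 201)           # key scan, step 0.001
r_hat = grid[np.argmin([error_function(r) for r in grid])]
```

The error function is large and irregular everywhere except in a narrow basin around the true key, which is exactly the structure the attack exploits to rank schemes by security.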
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov
For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil to achieve similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent to achieve similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
A New Framework to Compare Mass-Flux Schemes Within the AROME Numerical Weather Prediction Model
NASA Astrophysics Data System (ADS)
Riette, Sébastien; Lac, Christine
2016-08-01
In the Application of Research to Operations at Mesoscale (AROME) numerical weather forecast model used in operations at Météo-France, five mass-flux schemes are available to parametrize shallow convection at kilometre resolution. All but one are based on the eddy-diffusivity-mass-flux approach, and differ in entrainment/detrainment, the updraft vertical velocity equation and the closure assumption. The fifth is based on a more classical mass-flux approach. Screen-level scores obtained with these schemes show few discrepancies and are not sufficient to highlight behaviour differences. Here, we describe and use a new experimental framework, able to compare and discriminate among different schemes. For a year, daily forecast experiments were conducted over small domains centred on the five French metropolitan radio-sounding locations. Cloud base, planetary boundary-layer height and normalized vertical profiles of specific humidity, potential temperature, wind speed and cloud condensate were compared with observations, and with each other. The framework allowed the behaviour of the different schemes in and above the boundary layer to be characterized. In particular, the impact of the entrainment/detrainment formulation, closure assumption and cloud scheme were clearly visible. Differences mainly concerned the transport intensity thus allowing schemes to be separated into two groups, with stronger or weaker updrafts. In the AROME model (with all interactions and the possible existence of compensating errors), evaluation diagnostics gave the advantage to the first group.
Liarokapis, Minas V; Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J; Manolakos, Elias S
2013-09-01
A learning scheme based on random forests is used to discriminate between different reach-to-grasp movements in 3-D space, based on the myoelectric activity of human muscles of the upper arm and the forearm. Task specificity for motion decoding is introduced at two different levels: the subspace to move toward, and the object to be grasped. The discrimination between the different reach-to-grasp strategies is accomplished with machine learning techniques for classification. The classification decision is then used to trigger an EMG-based, task-specific motion decoding model. Task-specific models manage to outperform "general" models, providing better estimation accuracy. Thus, the proposed scheme takes advantage of a framework incorporating both a classifier and a regressor that cooperate advantageously in order to split the task space. The proposed learning scheme can easily be applied to a series of EMG-based interfaces that must operate in real time, providing data-driven capabilities for the multiclass problems that occur in complex everyday environments.
Examination of Spectral Transformations on Spectral Mixture Analysis
NASA Astrophysics Data System (ADS)
Deng, Y.; Wu, C.
2018-04-01
While many spectral transformation techniques have been applied to spectral mixture analysis (SMA), few studies have examined their necessity and applicability. This paper focuses on exploring the differences between spectrally transformed schemes and the untransformed scheme, to find out which transformed schemes perform better in SMA. In particular, nine spectrally transformed schemes as well as the untransformed scheme were examined in two study areas. Each transformed scheme was tested 100 times using different endmember classes' spectra under the endmember model of vegetation-high albedo impervious surface area-low albedo impervious surface area-soil (V-ISAh-ISAl-S). Performance of each scheme was assessed based on the mean absolute error (MAE). A statistical analysis technique, the paired-samples t test, was applied to test the significance of the difference in mean MAEs between transformed and untransformed schemes. Results demonstrated that only the NSMA could exceed the untransformed scheme in all study areas. Some transformed schemes showed unstable performance, since they outperformed the untransformed scheme in one area but weakened the SMA result in another region.
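The SMA step itself reduces, per pixel, to a sum-to-one constrained least-squares unmixing; a small synthetic sketch (the endmember spectra and fractions are made-up numbers) together with the MAE metric used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_em = 6, 4
E = rng.uniform(0.05, 0.9, size=(n_bands, n_em))   # endmember spectra (columns)
f_true = np.array([0.4, 0.3, 0.2, 0.1])            # abundance fractions, sum to 1
pixel = E @ f_true                                  # noiseless linear mixture

# sum-to-one constrained least squares via a heavily weighted augmented row
delta = 1e3
A = np.vstack([E, delta*np.ones(n_em)])
b = np.append(pixel, delta*1.0)
f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

mae = np.mean(np.abs(pixel - E @ f_hat))           # per-band reconstruction MAE
```

A spectral transformation simply replaces E and the pixel vector with their transformed versions before this solve, which is why comparing schemes by MAE isolates the effect of the transformation alone.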
Méndez-López, María Elena; García-Frapolli, Eduardo; Pritchard, Diana J; Sánchez González, María Consuelo; Ruiz-Mallén, Isabel; Porter-Bolland, Luciana; Reyes-Garcia, Victoria
2014-12-01
In Mexico, biodiversity conservation is primarily implemented through three schemes: 1) protected areas, 2) payment-based schemes for environmental services, and 3) community-based conservation, officially recognized in some cases as Indigenous and Community Conserved Areas. In this paper we compare levels of local participation across conservation schemes. Through a survey applied to 670 households across six communities in Southeast Mexico, we document local participation during the creation, design, and implementation of the management plan of different conservation schemes. To analyze the data, we first calculated the frequency of participation at the three different stages mentioned, then created a participation index that characterizes the presence and relative intensity of local participation for each conservation scheme. Results showed that there is a low level of local participation across all the conservation schemes explored in this study. Nonetheless, the payment for environmental services scheme had the highest local participation, while the protected areas had the least. Our findings suggest that local participation in biodiversity conservation schemes is not a predictable outcome of a specific (community-based) model, thus implying that other factors might be important in determining local participation. This has implications for future strategies that seek to encourage local involvement in conservation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Gröbner Bases and Generation of Difference Schemes for Partial Differential Equations
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.; Blinkov, Yuri A.; Mozzhilkin, Vladimir V.
2006-05-01
In this paper we present an algorithmic approach to the generation of fully conservative difference schemes for linear partial differential equations. The approach is based on enlargement of the equations in their integral conservation law form by extra integral relations between unknown functions and their derivatives, and on discretization of the obtained system. The structure of the discrete system depends on numerical approximation methods for the integrals occurring in the enlarged system. As a result of the discretization, a system of linear polynomial difference equations is derived for the unknown functions and their partial derivatives. A difference scheme is constructed by elimination of all the partial derivatives. The elimination can be achieved by selecting a proper elimination ranking and by computing a Gröbner basis of the linear difference ideal generated by the polynomials in the discrete system. For these purposes we use the difference form of Janet-like Gröbner bases and their implementation in Maple. As an illustration of the described methods and algorithms, we construct a number of difference schemes for the Burgers and Falkowich-Karman equations and discuss their numerical properties.
Reliable multicast protocol specifications flow control and NACK policy
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Whetten, Brian
1995-01-01
This appendix presents the flow and congestion control schemes recommended for RMP and a NACK policy based on the whiteboard tool. Because RMP uses a primarily NACK-based error detection scheme, there is no direct feedback path through which receivers can signal losses due to low buffer space or congestion. Reliable multicast protocols also suffer from the fact that throughput for a multicast group must be divided among the members of the group. This division is usually very dynamic in nature and therefore does not lend itself well to a priori determination. These facts have led the flow and congestion control schemes of RMP to be made completely orthogonal to the protocol specification. This allows several differing schemes to be used in different environments to produce the best results. As a default, a modified sliding-window scheme based on previous algorithms is suggested and described below.
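The NACK policy can be sketched as a toy gap-detection loop (an illustration of the idea only, not the RMP wire format): the receiver infers losses from holes in the sequence-number space and requests only those packets.

```python
def transmit(packets, lost):
    # returns the packets that survive a (lossy) network hop
    return {seq: data for seq, data in packets if seq not in lost}

def find_gaps(received, highest_seq):
    # receiver-side NACK policy: report every missing sequence number
    return [s for s in range(highest_seq + 1) if s not in received]

# sender's window of unacknowledged packets (hypothetical payloads)
window = [(s, f"pkt{s}") for s in range(8)]

received = transmit(window, lost={3, 5})        # network drops packets 3 and 5
nacks = find_gaps(received, highest_seq=7)      # receiver detects the gaps
# sender retransmits only the NACKed sequence numbers
received.update(transmit([(s, f"pkt{s}") for s in nacks], lost=set()))
```

Because feedback flows only on loss, the sender gets no positive signal about receiver buffer state, which is precisely why the flow-control scheme must be layered on separately, as the appendix argues.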
NASA Astrophysics Data System (ADS)
Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun
2015-07-01
An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and also investigate the connection between a low-frequency image and the defocused image. Generally, the NSCT algorithm decomposes the detailed image information residing at different scales and in different directions into the bandpass subband coefficients. In order to correctly pick out the prefused bandpass directional coefficients, we introduce a multiscale curvature, which not only inherits the advantages of windows with different sizes, but also correctly recognizes the focused pixels in the source images; we then develop a new fusion scheme for the bandpass subband coefficients. The fused image can be obtained by inverse NSCT with the different fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.
NASA Astrophysics Data System (ADS)
Madhulatha, A.; Rajeevan, M.
2018-02-01
The main objective of the present paper is to examine the role of various parameterization schemes in simulating the evolution of a mesoscale convective system (MCS) that occurred over south-east India. Using the Weather Research and Forecasting (WRF) model, numerical experiments are conducted by considering various planetary boundary layer, microphysics, and cumulus parameterization schemes. Performances of different schemes are evaluated by examining boundary layer, reflectivity, and precipitation features of the MCS using ground-based and satellite observations. Among the various physical parameterization schemes, the Mellor-Yamada-Janjic (MYJ) boundary layer scheme is able to produce a deep boundary layer by simulating the warm temperatures necessary for storm initiation; the Thompson (THM) microphysics scheme is capable of simulating the reflectivity through a reasonable distribution of different hydrometeors during the various stages of the system; the Betts-Miller-Janjic (BMJ) cumulus scheme is able to capture the precipitation through proper representation of the convective instability associated with the MCS. The present analysis suggests that MYJ, a local turbulent kinetic energy boundary layer scheme, which accounts for strong vertical mixing; THM, a six-class hybrid moment microphysics scheme, which considers number concentration along with mixing ratio of rain hydrometeors; and BMJ, a closure cumulus scheme, which adjusts thermodynamic profiles based on climatological profiles, might have contributed to the better performance of the respective model simulations. The numerical simulation carried out using the above combination of schemes is able to capture storm initiation, propagation, surface variations, thermodynamic structure, and precipitation features reasonably well. This study clearly demonstrates that the simulation of MCS characteristics is highly sensitive to the choice of parameterization schemes.
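In WRF these scheme choices map to physics option indices in `namelist.input`; a fragment selecting the best-performing combination reported above might look as follows (option values follow common WRF-ARW conventions and are an assumption here; verify against the user's guide for the model version in use):

```fortran
&physics
 mp_physics        = 8,    ! Thompson (THM) microphysics
 bl_pbl_physics    = 2,    ! Mellor-Yamada-Janjic (MYJ) PBL
 sf_sfclay_physics = 2,    ! Eta similarity surface layer (paired with MYJ)
 cu_physics        = 2,    ! Betts-Miller-Janjic (BMJ) cumulus
/
```

Note that PBL and surface-layer options must be chosen as compatible pairs; MYJ in particular is conventionally run with the Eta similarity surface layer.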
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Carpenter, Mark H.; Lockard, David P.
2009-01-01
Recent experience in the application of an optimized, second-order, backward-difference (BDF2OPT) temporal scheme is reported. The primary focus of the work is on obtaining accurate solutions of the unsteady Reynolds-averaged Navier-Stokes equations over long periods of time for aerodynamic problems of interest. The baseline flow solver under consideration uses a particular BDF2OPT temporal scheme with a dual-time-stepping algorithm for advancing the flow solutions in time. Numerical difficulties are encountered with this scheme when the flow code is run for a large number of time steps, a behavior not seen with the standard second-order, backward-difference, temporal scheme. Based on a stability analysis, slight modifications to the BDF2OPT scheme are suggested. The performance and accuracy of this modified scheme is assessed by comparing the computational results with other numerical schemes and experimental data.
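The standard second-order backward-difference (BDF2) scheme that the optimized variant builds on can be checked for its second-order behavior on a scalar model problem (a generic sketch, not the BDF2OPT coefficients or the dual-time-stepping solver):

```python
import numpy as np

def bdf2_decay(lam, h, T):
    # standard BDF2 for y' = -lam*y:
    #   (3*y_{n+1} - 4*y_n + y_{n-1}) / (2h) = -lam*y_{n+1}
    #   => y_{n+1} = (4*y_n - y_{n-1}) / (3 + 2*h*lam)
    n = int(round(T/h))
    y_prev, y = 1.0, np.exp(-lam*h)      # exact value for the starting step
    for _ in range(n - 1):
        y_prev, y = y, (4.0*y - y_prev)/(3.0 + 2.0*h*lam)
    return y

lam, T = 1.0, 1.0
err = lambda h: abs(bdf2_decay(lam, h, T) - np.exp(-lam*T))
ratio = err(0.01)/err(0.005)             # ~4 for a second-order scheme
```

Halving the step size cuts the error by about a factor of four, confirming second-order accuracy; the optimized BDF2OPT variant reweights the history terms to shrink the error constant while keeping this order.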
A multigrid LU-SSOR scheme for approximate Newton iteration applied to the Euler equations
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Jameson, Antony
1986-01-01
A new efficient relaxation scheme in conjunction with a multigrid method is developed for the Euler equations. The LU SSOR scheme is based on a central difference scheme and does not need flux splitting for Newton iteration. Application to transonic flow shows that the new method surpasses the performance of the LU implicit scheme.
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The efficiency gains obtained using higher-order implicit Runge-Kutta schemes as compared with the second-order accurate backward difference schemes for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-method-based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
NASA Astrophysics Data System (ADS)
Chirico, G. B.; Medina, H.; Romano, N.
2014-07-01
This paper examines the potential of different algorithms, based on the Kalman filtering approach, for assimilating near-surface observations into a one-dimensional Richards equation governing soil water flow. Our specific objectives are: (i) to compare the efficiency of different Kalman filter algorithms in retrieving matric pressure head profiles when they are implemented with different numerical schemes of the Richards equation; (ii) to evaluate the performance of these algorithms when nonlinearities arise from the observation equation, i.e. when surface soil water content observations are assimilated to retrieve matric pressure head values. The study is based on a synthetic simulation of an evaporation process from a homogeneous soil column. The first objective is achieved by implementing a Standard Kalman Filter (SKF) algorithm with both an explicit finite difference scheme (EX) and a Crank-Nicolson (CN) linear finite difference scheme of the Richards equation. The Unscented Kalman Filter (UKF) and Ensemble Kalman Filter (EnKF) are applied to handle the nonlinearity of a backward Euler finite difference scheme. To accomplish the second objective, an analogous framework is applied, except that the SKF is replaced with the Extended Kalman Filter (EKF) in combination with a CN numerical scheme, so as to handle the nonlinearity of the observation equation. While the EX scheme is computationally too inefficient to be implemented in an operational assimilation scheme, the retrieval algorithm implemented with a CN scheme is found to be computationally more feasible and accurate than those implemented with the backward Euler scheme, at least for the examined one-dimensional problem. The UKF appears to be as feasible as the EnKF when one has to handle nonlinear numerical schemes or additional nonlinearities arising from the observation equation, at least for systems of small dimensionality such as the one examined in this study.
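A single EnKF analysis step of the sort used to assimilate a surface observation through a nonlinear observation operator can be sketched as follows. The three-node "profile", the tanh observation operator, and the noise levels are hypothetical stand-ins for the pressure-head-to-water-content relationship, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ens, y_obs, obs_var, H):
    """One EnKF analysis step with perturbed observations.
    ens is (n_members, n_state); H maps a state vector to the
    scalar observed quantity and may be nonlinear."""
    n = ens.shape[0]
    hx = np.array([H(m) for m in ens])                   # predicted observations
    x_mean, h_mean = ens.mean(axis=0), hx.mean()
    P_xh = (ens - x_mean).T @ (hx - h_mean) / (n - 1)    # state-obs covariance
    P_hh = np.var(hx, ddof=1) + obs_var                  # innovation variance
    K = P_xh / P_hh                                      # Kalman gain vector
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), n)
    return ens + np.outer(y_pert - hx, K)

# Toy 3-node "profile": the truth is [1, 2, 3] and only node 0 is
# observed, through a hypothetical nonlinear operator tanh(x[0]).
truth = np.array([1.0, 2.0, 3.0])
ens = truth + rng.normal(0.0, 0.2, (200, 3))
post = enkf_update(ens, np.tanh(truth[0]), 1e-4, lambda x: np.tanh(x[0]))
```

Because the gain is built from ensemble covariances, no tangent-linear observation operator is needed, which is exactly what makes the EnKF (and the UKF) attractive for the nonlinear observation equation discussed above.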
NASA Astrophysics Data System (ADS)
Ji, Yang; Chen, Hong; Tang, Hongwu
2017-06-01
A highly accurate wide-angle scheme, based on a generalized multistep scheme in the propagation direction, is developed for the finite difference beam propagation method (FD-BPM). Compared with the previously presented method, simulations show that our method yields a more accurate solution and permits a much larger step size.
Chao, Eunice; Krewski, Daniel
2008-12-01
This paper presents an exploratory evaluation of four functional components of a proposed risk-based classification scheme (RBCS) for crop-derived genetically modified (GM) foods in a concordance study. Two independent raters assigned concern levels to 20 reference GM foods using a rating form based on the proposed RBCS. The four components of evaluation were: (1) degree of concordance, (2) distribution across concern levels, (3) discriminating ability of the scheme, and (4) ease of use. At least one of the 20 reference foods was assigned to each of the possible concern levels, demonstrating the ability of the scheme to identify GM foods of different concern with respect to potential health risk. There was reasonably good concordance between the two raters for the three separate parts of the RBCS. The raters agreed that the criteria in the scheme were sufficiently clear in discriminating reference foods into different concern levels, and that with some experience, the scheme was reasonably easy to use. Specific issues and suggestions for improvements identified in the concordance study are discussed.
Fu, Yunhai; Ma, Lin; Xu, Yubin
2015-01-01
In spectrum aggregation (SA), two or more component carriers (CCs) of different bandwidths in different bands can be aggregated to support a wider transmission bandwidth. Scheduling delay is the most important design constraint for the broadband wireless trunking (BWT) system, especially under cognitive radio (CR) conditions. Current resource scheduling schemes for spectrum aggregation are not suited to meeting this delay requirement. Consequently, the authors propose a novel component carrier configuration and switching scheme for real-time traffic (RT-CCCS) to satisfy the delay requirement in the CR-based SA system. In this work, the authors consider a sensor-network-assisted CR network. The authors first introduce a resource scheduling structure for SA in the CR condition. Then the proposed scheme is analyzed in detail. Finally, simulations are carried out to verify the analysis of the proposed scheme. Simulation results show that the proposed scheme can satisfy the delay requirement in the CR-based SA system. PMID:26393594
Digital Noise Reduction: An Overview
Bentler, Ruth; Chiou, Li-Kuei
2006-01-01
Digital noise reduction schemes are used in most hearing aids currently marketed. Unlike the earlier analog schemes, these manufacturer-specific algorithms acoustically analyze the incoming signal and alter the gain/output characteristics according to predetermined rules. Although most are modulation-based schemes (i.e., differentiating speech from noise based on temporal characteristics), spectral subtraction techniques are being applied as well. The purpose of this article is to provide an overview of these schemes in terms of their differences and similarities. PMID:16959731
Multiparty Quantum Blind Signature Scheme Based on Graph States
NASA Astrophysics Data System (ADS)
Jian-Wu, Liang; Xiao-Shu, Liu; Jin-Jing, Shi; Ying, Guo
2018-05-01
A multiparty quantum blind signature scheme is proposed based on the principle of graph states, in which the unitary operations of graph state particles can be applied to generate the quantum blind signature and achieve verification. Different from classical blind signatures, which rely on mathematical difficulty, the scheme guarantees not only anonymity but also unconditional security. The analysis shows that the length of the signature generated in our scheme does not grow as the number of signers increases, and that it is easy to increase or decrease the number of signers.
NASA Astrophysics Data System (ADS)
Pötz, Walter
2017-11-01
A single-cone finite-difference lattice scheme is developed for the (2+1)-dimensional Dirac equation in the presence of general electromagnetic textures. The latter is represented on a (2+1)-dimensional staggered grid using a second-order-accurate finite difference scheme. A Peierls-Schwinger substitution to the wave function is used to introduce the electromagnetic (vector) potential into the Dirac equation. Thereby, the single-cone energy dispersion and gauge invariance are carried over from the continuum to the lattice formulation. Conservation laws and stability properties of the formal scheme are identified by comparison with the scheme for zero vector potential. The placement of magnetization terms is inferred from consistency with that for the vector potential. Based on this formal scheme, several numerical schemes are proposed and tested. Elementary examples of single-fermion transport in the presence of in-plane magnetization are given, using material parameters typical of topological insulator surfaces.
NASA Astrophysics Data System (ADS)
Wang, Jinlong; Feng, Shuo; Wu, Qihui; Zheng, Xueqiang; Xu, Yuhua; Ding, Guoru
2014-12-01
Cognitive radio (CR) is a promising technology that brings about remarkable improvement in spectrum utilization. To tackle the hidden terminal problem, cooperative spectrum sensing (CSS), which benefits from spatial diversity, has been studied extensively. Since CSS is vulnerable to attacks initiated by malicious secondary users (SUs), several secure CSS schemes based on Dempster-Shafer theory have been proposed. However, the existing works only utilize the current difference of SUs, such as the difference in SNR or similarity degree, to evaluate the trustworthiness of each SU. As the current difference is only one-sided and sometimes inaccurate, the statistical information contained in each SU's historical behavior should not be overlooked. In this article, we propose a robust CSS scheme based on Dempster-Shafer theory and trustworthiness degree calculation. It is carried out in four successive steps: basic probability assignment (BPA), trustworthiness degree calculation, selection and adjustment of BPA, and combination by the Dempster-Shafer rule. Our proposed scheme evaluates the trustworthiness degree of SUs from both the current-difference aspect and the historical-behavior aspect, and exploits Dempster-Shafer theory's potential to establish a 'soft update' approach for reputation value maintenance. It can not only differentiate malicious SUs from honest ones based on their historical behaviors but also retain the current difference for each SU to achieve better real-time performance. Extensive simulation results validate that the proposed scheme outperforms the existing ones under different attack patterns and different numbers of malicious SUs.
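The combination step at the heart of such schemes is Dempster's rule. A minimal sketch over the two-hypothesis frame {H0 (channel idle), H1 (channel occupied)} follows; the mass values are illustrative, not the paper's BPAs.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for basic probability
    assignments over subsets of a frame, encoded as frozensets.
    Mass assigned to conflicting (empty-intersection) pairs is
    renormalised away."""
    out = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict
    return {s: v / k for s, v in out.items()}

# Two SUs report BPAs over the frame {H0, H1}; the set {H0, H1}
# carries each SU's residual uncertainty.
H0, H1, ALL = frozenset({"H0"}), frozenset({"H1"}), frozenset({"H0", "H1"})
m1 = {H0: 0.6, H1: 0.1, ALL: 0.3}
m2 = {H0: 0.7, H1: 0.1, ALL: 0.2}
fused = dempster_combine(m1, m2)
```

In the scheme above this fusion is preceded by the trustworthiness-based selection and adjustment of each SU's BPA, so that a malicious SU's report carries less weight before it ever reaches the combination rule.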
Scheme for quantum state manipulation in coupled cavities
NASA Astrophysics Data System (ADS)
Lin, Jin-Zhong
By controlling the parameters of the system, an effective interaction between atoms in different cavities is achieved. Based on this interaction, a scheme to generate three-atom Greenberger-Horne-Zeilinger (GHZ) states in coupled cavities is proposed. Spontaneous emission from excited states and decay of the cavity modes can be suppressed efficiently. In addition, the scheme is robust against variation of the hopping rate between cavities.
A Numerical Model for Predicting Shoreline Changes.
1980-07-01
[Figure-list fragments: minimal shorelines for a finite-difference scheme of time step Δt (B); transport function Q(α₀) = cos α₀ sin(zα₀) for selected values of z.] The numerical model used to generate the preceding examples was based on the use of implicit finite differences. Such schemes, whether implicit or explicit (or both), are ... Figure 10(A) shows an initially straight shoreline. In any finite-difference scheme, after one time increment Δt, the shoreline is bounded below by the solid
Simple scheme to implement decoy-state reference-frame-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Chunmei; Zhu, Jianrong; Wang, Qin
2018-06-01
We propose a simple scheme to implement decoy-state reference-frame-independent quantum key distribution (RFI-QKD), where signal states are prepared in the Z, X, and Y bases, decoy states are prepared in the X and Y bases, and vacuum states are assigned no basis. Different from the original decoy-state RFI-QKD scheme, whose decoy states are prepared in the Z, X and Y bases, in our scheme decoy states are prepared only in the X and Y bases. This avoids the redundancy of decoy states in the Z basis, saves random number consumption, simplifies the encoding device of practical RFI-QKD systems, and makes the most of the finite pulses available in a short time. Numerical simulations show that, considering the finite-size effect with a reasonable number of pulses in practical scenarios, our simple decoy-state RFI-QKD scheme exhibits at least comparable or even better performance than the original decoy-state RFI-QKD scheme. In particular, in terms of resistance to the relative rotation of reference frames, our proposed scheme behaves much better than the original scheme, and thus has great potential to be adopted in current QKD systems.
Lin, Yuehe; Bennett, Wendy D.; Timchalk, Charles; Thrall, Karla D.
2004-03-02
Microanalytical systems based on a microfluidics/electrochemical detection scheme are described. Individual modules, such as microfabricated piezoelectrically actuated pumps and a microelectrochemical cell, were integrated onto portable platforms. This allowed rapid change-out and repair of individual components by incorporating the "plug and play" concepts now standard in PCs. Different integration schemes were used for construction of the microanalytical systems based on microfluidics/electrochemical detection. In one scheme, all individual modules were integrated on the surface of the standard microfluidic platform based on a plug-and-play design. A microelectrochemical flow cell integrating three electrodes in a wall-jet design was fabricated on a polymer substrate. The microelectrochemical flow cell was then plugged directly into the microfluidic platform. Another integration scheme was based on a multilayer lamination method utilizing stacked modules with different functionality to achieve a compact microanalytical device. Application of the microanalytical system to detection of lead in, for example, river water and saliva samples using stripping voltammetry is described.
An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling
NASA Astrophysics Data System (ADS)
Wang, Enjiang; Liu, Yang
2018-01-01
The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and low computational cost. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations and thus less computational cost when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor-series-expansion-based FD coefficients, we derive least-squares-optimised implicit spatial FD coefficients. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions and thus provide more accurate wavefields.
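The least-squares optimisation of spatial FD coefficients can be illustrated for an explicit first-derivative stencil (the paper's scheme is implicit; this simplified explicit version shows only the fitting idea). The coefficients are chosen so the discrete spectral response matches the exact response over a wavenumber band, rather than matching Taylor terms at zero wavenumber. The stencil half-length, fitting band, and test wavenumber below are illustrative choices.

```python
import numpy as np

def ls_fd_coeffs(M, kh_max=2.0, n_samples=200):
    """Least-squares first-derivative stencil coefficients c_m:
    minimise the misfit between the discrete spectral response
    sum_m 2*c_m*sin(m*kh) and the exact response kh over the
    band (0, kh_max]."""
    kh = np.linspace(1e-3, kh_max, n_samples)
    A = 2.0 * np.sin(np.outer(kh, np.arange(1, M + 1)))
    c, *_ = np.linalg.lstsq(A, kh, rcond=None)
    return c

def ddx(f, h, c):
    """Apply the antisymmetric stencil to a periodic sample array."""
    out = np.zeros_like(f)
    for m, cm in enumerate(c, start=1):
        out += cm * (np.roll(f, -m) - np.roll(f, m))
    return out / h

c = ls_fd_coeffs(4)

# Accuracy check on sin(k*x) at a moderately high wavenumber.
n, k = 64, 8
x = 2 * np.pi * np.arange(n) / n
h = x[1] - x[0]
err = np.max(np.abs(ddx(np.sin(k * x), h, c) - k * np.cos(k * x)))
```

Swapping the least-squares fit for a Taylor-series match at kh = 0 recovers the conventional coefficients, which are more accurate at very small wavenumbers but lose accuracy sooner as kh grows, mirroring the trade-off discussed in the abstract.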
NASA Astrophysics Data System (ADS)
Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming
2006-10-01
The optimal selection of schemes of water transportation projects is a process of choosing a relatively optimal scheme from a number of schemes of water transportation programming and management projects, which is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction range of the fuzzy preference relation matrix
Development of a new flux splitting scheme
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1991-01-01
The use of a new splitting scheme, the advection upstream splitting method (AUSM), for model aerodynamic problems where the Van Leer and Roe schemes had previously failed is discussed. The present scheme is based on a splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.
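The convective/pressure separation underlying AUSM can be sketched with the standard Van Leer-type Mach-number and pressure splittings. This is a textbook-style illustration of the splitting polynomials and the interface assembly, not the paper's full flux function.

```python
def mach_split(M):
    """Van Leer-type Mach-number splitting, M = M_plus + M_minus,
    with smooth polynomial blending in the subsonic range."""
    if abs(M) <= 1.0:
        return 0.25 * (M + 1.0) ** 2, -0.25 * (M - 1.0) ** 2
    return 0.5 * (M + abs(M)), 0.5 * (M - abs(M))

def pressure_split(p, M):
    """Companion pressure splitting, p = p_plus + p_minus."""
    if abs(M) <= 1.0:
        pp = 0.25 * p * (M + 1.0) ** 2 * (2.0 - M)
        pm = 0.25 * p * (M - 1.0) ** 2 * (2.0 + M)
        return pp, pm
    return 0.5 * p * (M + abs(M)) / M, 0.5 * p * (M - abs(M)) / M

def ausm_interface(ML, MR, pL, pR):
    """Interface advection Mach number and pressure: the convective
    part is upwinded through the split Mach numbers, while the
    pressure term is assembled separately, reflecting the
    convective/pressure separation described above."""
    MpL, _ = mach_split(ML)
    _, MmR = mach_split(MR)
    ppL, _ = pressure_split(pL, ML)
    _, pmR = pressure_split(pR, MR)
    return MpL + MmR, ppL + pmR
```

Both splittings are consistent by construction: for any M, the plus and minus parts sum back to M and p respectively, and in supersonic flow they reduce to pure upwinding.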
Unequal error control scheme for dimmable visible light communication systems
NASA Astrophysics Data System (ADS)
Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan
2017-01-01
Visible light communication (VLC), which has the advantages of a very large bandwidth, high security, and freedom from license-related restrictions and electromagnetic interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues for such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. The results of simulations performed using additive white Gaussian noise (AWGN) at different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base-layer recovery than the equal error control (EEC) scheme for different dimming target values and therefore affords robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.
Wubulihasimu, Parida; Brouwer, Werner; van Baal, Pieter
2016-08-01
In this study, aggregate-level panel data from 20 Organisation for Economic Co-operation and Development countries over three decades (1980-2009) were used to investigate the impact of hospital payment reforms on healthcare output and mortality. Hospital payment schemes were classified as fixed-budget (i.e. not directly based on activities), fee-for-service (FFS) or patient-based payment (PBP) schemes. The data were analysed using a difference-in-differences model that allows for a structural change in outcomes due to payment reform. The results suggest that FFS schemes increase the growth rate of healthcare output, whereas PBP schemes positively affect life expectancy at age 65 years. However, these results should be interpreted with caution, as they are sensitive to model specification. Copyright © 2015 John Wiley & Sons, Ltd.
CONDIF - A modified central-difference scheme for convective flows
NASA Technical Reports Server (NTRS)
Runchal, Akshai K.
1987-01-01
The paper presents a method, called CONDIF, which modifies the CDS (central-difference scheme) by introducing a controlled amount of numerical diffusion based on the local gradients. The numerical diffusion can be adjusted to be negligibly low for most problems. CONDIF results are significantly more accurate than those obtained from the hybrid scheme when the Peclet number is very high and the flow is at large angles to the grid.
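The abstract does not give CONDIF's exact switching rule, but the general idea, central differencing augmented with a locally controlled amount of numerical diffusion, can be illustrated on a steady 1D convection-diffusion problem. The Peclet-based trigger below is a simplified stand-in for CONDIF's gradient-based control, and the problem parameters are arbitrary demonstration values.

```python
import numpy as np

def solve_convection_diffusion(n, u, gamma, phi0=0.0, phi1=1.0):
    """Steady 1D convection-diffusion u*dphi/dx = gamma*d2phi/dx2
    on [0, 1], discretised with central differences plus artificial
    diffusion switched on by the cell Peclet number (a simplified
    stand-in for CONDIF's gradient-based control). Extra diffusion
    is added only when the cell Peclet number exceeds 2, the classic
    central-difference stability bound."""
    h = 1.0 / (n + 1)
    pe = abs(u) * h / gamma                  # cell Peclet number
    g_eff = gamma * max(1.0, pe / 2.0)       # controlled extra diffusion
    A = np.zeros((n, n))
    b = np.zeros(n)
    aw = g_eff / h**2 + u / (2 * h)          # west-neighbour coefficient
    ae = g_eff / h**2 - u / (2 * h)          # east-neighbour coefficient
    for i in range(n):
        A[i, i] = -2.0 * g_eff / h**2
        if i > 0:
            A[i, i - 1] = aw
        else:
            b[i] -= aw * phi0                # left Dirichlet boundary
        if i < n - 1:
            A[i, i + 1] = ae
        else:
            b[i] -= ae * phi1                # right Dirichlet boundary
    return np.linalg.solve(A, b)

# High-Peclet case where plain central differencing would oscillate.
phi = solve_convection_diffusion(n=20, u=50.0, gamma=0.1)
```

With the extra diffusion active the solution stays monotone and bounded by the boundary values; with `g_eff = gamma` the same system produces the well-known point-to-point oscillations at high Peclet number.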
A remark on the GNSS single difference model with common clock scheme for attitude determination
NASA Astrophysics Data System (ADS)
Chen, Wantong
2016-09-01
GNSS-based attitude determination is an important field of study, in which two schemes can be used to construct the actual system: the common clock scheme and the non-common clock scheme. Compared with the non-common clock scheme, the common clock scheme can strongly improve both reliability and accuracy. However, to gain these advantages, specific care must be taken in the implementation. These considerations are discussed, based on the technique for generating carrier phase measurements in GNSS receivers. A qualitative assessment of potential phase-bias contributors is also carried out. Possible technical difficulties are pointed out for the development of single-board multi-antenna GNSS attitude systems with a common clock.
NASA Astrophysics Data System (ADS)
Chao, Luo
2015-11-01
In this paper, a novel digital secure communication scheme is proposed. Different from the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems, namely their susceptibility to environmental interference. Moreover, to cope with transmission errors and data loss during communication, the proposed scheme can check and correct errors in real time. To guarantee security, a fractional-order complex chaotic system with a shifting order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.
Elsler, Dietmar; Eeckelaert, Lieven
2010-06-01
This article looks at the factors that influence the transferability of different types of occupational safety and health (OSH) economic incentives from one country to another. To review the legal, political, and cultural framework conditions for economic incentive schemes in the European Union (EU), the European Agency for Safety and Health at Work (EU-OSHA) surveyed EU member states about the state of such schemes in their countries. In addition to the survey responses, relevant information on existing schemes and their national context within the 27 EU member states was gathered through reports, articles, and databases. Following this, countries were clustered according to cross-cultural differences. Despite the apparent variations in Europe's social security systems, there is a high degree of similarity between the countries regarding the basic design criteria of the system. In addition, different kinds of incentives are used in different member states regardless of the social insurance system. When it comes to insurance incentive schemes, the fundamental difference between countries is whether the workers' compensation scheme is based on a competitive market between private insurance companies or a kind of monopoly structure, where the employers do not have the choice between several insurance companies. A clear majority of 19 of the 27 EU member states have a monopoly system. Subsidy systems, tax incentives, and insurance-based "experience rating" are theoretically possible in all EU countries. In competitive insurance markets, effort-based incentives are more difficult to achieve. A possible solution could be the introduction of long-term contracts or the creation of a common prevention fund, financed equally by all insurers.
Some implementational issues of convection schemes for finite volume formulations
NASA Technical Reports Server (NTRS)
Thakur, Siddharth; Shyy, Wei
1993-01-01
Two higher-order upwind schemes - second-order upwind and QUICK - are examined in terms of their interpretation, implementation as well as performance for a recirculating flow in a lid-driven cavity, in the context of a control volume formulation using the SIMPLE algorithm. The present formulation of these schemes is based on a unified framework wherein the first-order upwind scheme is chosen as the basis, with the remaining terms being assigned to the source term. The performance of these schemes is contrasted with the first-order upwind and second-order central difference schemes. Also addressed in this study is the issue of boundary treatment associated with these higher-order upwind schemes. Two different boundary treatments - one that uses a two-point scheme consistently within a given control volume at the boundary, and the other that maintains consistency of flux across the interior face between the adjacent control volumes - are formulated and evaluated.
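The unified "first-order upwind as basis, remaining terms as source" framework described above can be sketched for QUICK on a 1D periodic grid with positive velocity: the total convective residual is assembled as the implicit-style upwind part plus an explicit deferred correction, and verified to match the directly assembled QUICK residual. The grid size and sine test field are illustrative.

```python
import numpy as np

def face_upwind(phi):
    """First-order upwind face values for u > 0 on a periodic grid
    (face i sits between cells i and i+1)."""
    return phi

def face_quick(phi):
    """QUICK face values: quadratic upwind interpolation."""
    return (6.0 * phi + 3.0 * np.roll(phi, -1) - np.roll(phi, 1)) / 8.0

def rhs_deferred(phi, u, h):
    """Convective residual assembled as the upwind 'basis' plus an
    explicit deferred correction (QUICK minus upwind), mirroring the
    unified framework where higher-order terms go to the source."""
    fu = u * face_upwind(phi)
    corr = u * (face_quick(phi) - face_upwind(phi))
    flux = fu + corr                          # total face flux
    return -(flux - np.roll(flux, 1)) / h     # flux divergence per cell

def rhs_direct_quick(phi, u, h):
    """Convective residual with QUICK fluxes assembled directly."""
    flux = u * face_quick(phi)
    return -(flux - np.roll(flux, 1)) / h

phi = np.sin(2 * np.pi * np.arange(32) / 32)
r1 = rhs_deferred(phi, 1.0, 1.0 / 32)
r2 = rhs_direct_quick(phi, 1.0, 1.0 / 32)
```

At convergence of an outer iteration the two assemblies are algebraically identical; the point of the splitting is that only the diagonally dominant upwind part enters the implicit coefficient matrix, which keeps solvers such as SIMPLE stable.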
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel
Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility tests of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.
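A counter-based dynamic load balancing scheme of the kind evaluated here can be sketched with a shared, lock-protected counter handing out contingency cases to worker threads: whichever worker becomes idle first claims the next unprocessed case, so long-running cases do not stall a static partition. The per-case "analysis" below is a placeholder computation on toy data, not a power-flow solve.

```python
import threading

def run_contingencies(cases, n_workers=4):
    """Dispatch contingency cases to workers via a shared counter.
    Each worker atomically claims the next case index, processes it,
    and writes its result into a preallocated slot."""
    next_idx = [0]
    lock = threading.Lock()
    results = [None] * len(cases)

    def worker():
        while True:
            with lock:                 # atomically claim a case
                i = next_idx[0]
                if i >= len(cases):
                    return
                next_idx[0] += 1
            # Placeholder "analysis": severity = sum of outaged ratings.
            results[i] = sum(cases[i])

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Each case lists the MVA ratings of outaged components (toy data).
res = run_contingencies([[10, 5], [7], [3, 3, 3], [20]])
```

On a distributed-memory machine the lock-protected counter would typically be replaced by a one-sided atomic fetch-and-add, but the claiming logic is the same.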
Differences exist across insurance schemes in China post-consolidation.
Li, Yang; Zhao, Yinjun; Yi, Danhui; Wang, Xiaojun; Jiang, Yan; Wang, Yu; Liu, Xinchun; Ma, Shuangge
2017-01-01
In China, the basic insurance system consists of three schemes: the UEBMI (Urban Employee Basic Medical Insurance), URBMI (Urban Resident Basic Medical Insurance), and NCMS (New Cooperative Medical Scheme), across which significant differences have been observed. Since 2009, the central government has been experimenting with consolidating these schemes in selected areas. This study examines whether differences still exist across schemes after the consolidation. A survey was conducted in the city of Suzhou, collecting data on subjects 45 years old and above with at least one inpatient or outpatient treatment during a period of twelve months. Analysis on 583 subjects was performed comparing subjects' characteristics across insurance schemes. A resampling-based method was applied to compute the predicted gross medical cost, OOP (out-of-pocket) cost, and insurance reimbursement rate. Subjects under different insurance schemes differ in multiple aspects. For inpatient treatments, subjects under the URBMI have the highest observed and predicted gross and OOP costs, while those under the UEBMI have the lowest. For outpatient treatments, subjects under the UEBMI and URBMI have comparable costs, while those under the NCMS have much lower costs. Subjects under the NCMS also have a much lower reimbursement rate. Differences still exist across schemes in medical costs and insurance reimbursement rate post-consolidation. Further investigations are needed to identify the causes, and interventions are needed to eliminate such differences.
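The resampling-based prediction of group-level costs can be illustrated with a simple percentile bootstrap of the mean. The abstract does not specify the paper's exact resampling procedure, and the cost values below are hypothetical.

```python
import random

def bootstrap_mean_ci(costs, n_boot=2000, alpha=0.05, seed=1):
    """Resampling-based estimate of a group's expected cost: resample
    the observed costs with replacement, recompute the mean each
    time, and report the sample mean with a percentile interval."""
    rng = random.Random(seed)
    n = len(costs)
    means = sorted(
        sum(rng.choice(costs) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return sum(costs) / n, (lo, hi)

# Toy out-of-pocket inpatient costs for one insurance group
# (hypothetical values).
est, (lo, hi) = bootstrap_mean_ci([1200, 800, 3500, 950, 2100, 1700])
```

Running the same procedure per insurance scheme and comparing the resulting intervals is one way to judge whether observed cost differences across schemes exceed sampling noise.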
Karayannis, Nicholas V; Jull, Gwendolen A; Hodges, Paul W
2012-02-20
Several classification schemes, each with its own philosophy and categorizing method, subgroup low back pain (LBP) patients with the intent of guiding treatment. Physiotherapy-derived schemes usually have a movement-impairment focus, but the extent to which other biological, psychological, and social factors of pain are encompassed requires exploration. Furthermore, within the prevailing 'biological' domain, the overlap of subgrouping strategies within the orthopaedic examination remains unexplored. The aim of this study was "to review and clarify, through developer/expert survey, the theoretical basis and content of physical movement classification schemes, determine their relative reliability and similarities/differences, and to consider the extent of incorporation of the bio-psycho-social framework within the schemes". A database search for relevant articles related to LBP and subgrouping or classification was conducted. Five dominant movement-based schemes were identified: Mechanical Diagnosis and Treatment (MDT), Treatment Based Classification (TBC), Pathoanatomic Based Classification (PBC), Movement System Impairment Classification (MSI), and O'Sullivan Classification System (OCS) schemes. Data were extracted and a survey sent to the classification scheme developers/experts to clarify operational criteria, reliability, decision-making, and converging/diverging elements between schemes. Survey results were integrated into the review and approval obtained for accuracy. Considerable diversity exists between schemes in how movement informs subgrouping and in the consideration of broader neurosensory, cognitive, emotional, and behavioural dimensions of LBP. Despite differences in assessment philosophy, a common element lies in their objective to identify a movement pattern related to a pain reduction strategy.
Two dominant movement paradigms emerge: (i) loading strategies (MDT, TBC, PBC) aimed at eliciting a phenomenon of centralisation of symptoms; and (ii) modified movement strategies (MSI, OCS) targeted towards documenting the movement impairments associated with the pain state. Schemes vary on: the extent to which loading strategies are pursued; the assessment of movement dysfunction; and advocated treatment approaches. A biomechanical assessment predominates in the majority of schemes (MDT, PBC, MSI), certain psychosocial aspects (fear-avoidance) are considered in the TBC scheme, certain neurophysiologic (central versus peripherally mediated pain states) and psychosocial (cognitive and behavioural) aspects are considered in the OCS scheme.
Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho
2015-01-01
In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that has arisen from user's management of different identities and passwords. For this reason, numerous user authentication schemes that are designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.'s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.'s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.'s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties.
Improved finite difference schemes for transonic potential calculations
NASA Technical Reports Server (NTRS)
Hafez, M.; Osher, S.; Whitlow, W., Jr.
1984-01-01
Engquist and Osher (1980) have introduced a finite difference scheme for solving the transonic small disturbance equation, taking into account cases in which only compression shocks are admitted. Osher et al. (1983) studied a class of schemes for the full potential equation. It is proved that these schemes satisfy a new discrete 'entropy inequality' which rules out expansion shocks. However, the conducted analysis is restricted to steady two-dimensional flows. The present investigation is concerned with the adoption of a heuristic approach. The full potential equation in conservation form is solved with the aid of a modified artificial density method, based on flux biasing. It is shown that, with the current scheme, expansion shocks are not possible.
Simplified Interval Observer Scheme: A New Approach for Fault Diagnosis in Instruments
Martínez-Sibaja, Albino; Astorga-Zaragoza, Carlos M.; Alvarado-Lassman, Alejandro; Posada-Gómez, Rubén; Aguila-Rodríguez, Gerardo; Rodríguez-Jarquin, José P.; Adam-Medina, Manuel
2011-01-01
There are different observer-based schemes to detect and isolate faults in dynamic processes. In the case of fault diagnosis in instruments (FDI), the diagnosis schemes differ in the number of observers: the Simplified Observer Scheme (SOS) requires only one observer, which uses all the inputs and only one output, detecting faults in a single sensor; the Dedicated Observer Scheme (DOS) again uses all the inputs and just one output per observer, but employs a bank of observers capable of locating multiple faults in sensors; and the Generalized Observer Scheme (GOS) involves a reduced bank of observers, where each observer uses all the inputs and m-1 outputs, and allows the localization of single faults. This work proposes a new scheme, named the Simplified Interval Observer (SIOS-FDI), which does not require the measurement of any input and, with just one output, allows the detection of single faults in sensors. Because it does not require any input, it greatly simplifies fault diagnosis in processes in which it is difficult to measure all the inputs, as in the case of biological reactors. PMID:22346593
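The observer-based residual idea underlying these FDI schemes can be illustrated with a toy sketch. Everything below (the plant matrices, observer gain, threshold, and fault size) is made up for illustration, not taken from the paper: a Luenberger observer tracks a discrete-time plant from a single output, and a sensor fault is flagged whenever the output residual exceeds a threshold.

```python
import numpy as np

# Hypothetical discrete-time plant x[k+1] = A x[k], y[k] = C x[k].
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.1]])  # observer gain, chosen ad hoc so A - L C is stable

def run_observer(ys, threshold=0.2):
    """Return indices k where the output residual |y[k] - C x_hat[k]| > threshold."""
    x_hat = np.zeros((2, 1))
    faults = []
    for k, y in enumerate(ys):
        r = y - (C @ x_hat).item()        # output residual
        if abs(r) > threshold:
            faults.append(k)
        x_hat = A @ x_hat + L * r         # Luenberger correction step
    return faults

# Simulate the plant and inject an additive sensor fault from step 30 on.
x = np.array([[1.0], [1.0]])
ys = []
for k in range(50):
    ys.append((C @ x).item() + (0.5 if k >= 30 else 0.0))
    x = A @ x

print(run_observer(ys))  # flags the startup transient and the fault onset
```

Note the limitation this toy makes visible: once the observer starts tracking the faulty measurement, the residual shrinks again, which is one reason schemes use banks of observers or interval bounds rather than a single fixed threshold.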
PHACK: An Efficient Scheme for Selective Forwarding Attack Detection in WSNs.
Liu, Anfeng; Dong, Mianxiong; Ota, Kaoru; Long, Jun
2015-12-09
In this paper, a Per-Hop Acknowledgement (PHACK)-based scheme is proposed for each packet transmission to detect selective forwarding attacks. In our scheme, the sink and each node along the forwarding path generate an acknowledgement (ACK) message for each received packet to confirm the normal packet transmission. The scheme, in which each ACK is returned to the source node along a different routing path, can significantly increase the resilience against attacks because it prevents an attacker from compromising nodes in the return routing path, which can otherwise interrupt the return of nodes' ACK packets. For this case, the PHACK scheme also has better potential to detect abnormal packet loss and identify suspect nodes as well as better resilience against attacks. Another pivotal issue is the network lifetime of the PHACK scheme, as it generates more acknowledgements than previous ACK-based schemes. We demonstrate that the network lifetime of the PHACK scheme is not lower than that of other ACK-based schemes because the scheme just increases the energy consumption in non-hotspot areas and does not increase the energy consumption in hotspot areas. Moreover, the PHACK scheme greatly simplifies the protocol and is easy to implement. Both theoretical and simulation results are given to demonstrate the effectiveness of the proposed scheme in terms of high detection probability and the ability to identify suspect nodes.
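The per-hop acknowledgement logic can be sketched in a few lines. This toy (node names, topology, and the single-attacker assumption are hypothetical; the real PHACK scheme returns each ACK over a different routing path, which is not modeled here) shows how the first missing ACK localizes the suspect hop:

```python
# Toy sketch: the source sends a packet along a path; every node that
# receives it returns an ACK (assumed here to arrive reliably over a
# disjoint return path). The first node whose ACK is missing marks the
# suspect hop for selective-forwarding detection.
def forward(path, dropper):
    """Return the set of nodes that actually received the packet."""
    received = set()
    for node in path:
        received.add(node)
        if node == dropper:   # a selective-forwarding attacker drops it here
            break
    return received

def locate_suspect(path, acks):
    """First node on the path that never ACKed; None means normal delivery."""
    for node in path:
        if node not in acks:
            return node
    return None

path = ["n1", "n2", "n3", "n4", "sink"]
acks = forward(path, dropper="n3")   # n3 receives the packet but drops it
print(locate_suspect(path, acks))    # the node just downstream of the attacker
```

The located node is the one downstream of the dropper, so the suspect link is the hop into it; identifying the attacker itself requires the cross-checking described in the abstract.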
NASA Technical Reports Server (NTRS)
Vila, Daniel; deGoncalves, Luis Gustavo; Toll, David L.; Rozante, Jose Roberto
2008-01-01
This paper describes a comprehensive assessment of a new high-resolution, high-quality gauge-satellite-based analysis of daily precipitation over continental South America during 2004. The methodology is based on a combination of additive and multiplicative bias correction schemes designed to achieve the lowest bias with respect to the observed values. Inter-comparison and cross-validation tests were carried out for the control algorithm (the TMPA real-time algorithm) and different merging schemes: additive bias correction (ADD), ratio bias correction (RAT), and the TMPA research version, for months belonging to different seasons and for different network densities. All compared merging schemes produce better results than the control algorithm, but when finer temporal (daily) and spatial scale (regional network) gauge datasets are included in the analysis, the improvement is remarkable. The Combined Scheme (CoSch) consistently presents the best performance among the five techniques. This remains true when a degraded daily gauge network is used instead of the full dataset. The technique appears to be a suitable tool to produce real-time, high-resolution, high-quality gauge-satellite-based analyses of daily precipitation over land in regional domains.
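The additive (ADD) and ratio (RAT) corrections named above are simple to state. A minimal sketch with made-up values (the paper's actual schemes operate on gridded fields with spatial interpolation, not on flat vectors):

```python
import numpy as np

# Additive correction (ADD) shifts the satellite field by the mean
# gauge-minus-satellite difference; ratio correction (RAT) rescales it
# by the ratio of the totals. Values below are illustrative only.
gauge = np.array([12.0, 0.0, 5.0, 20.0])   # rain gauges (mm/day)
sat   = np.array([ 8.0, 1.0, 4.0, 15.0])   # collocated satellite estimates

add_corrected = sat + (gauge.mean() - sat.mean())
rat_corrected = sat * (gauge.sum() / sat.sum())

print(add_corrected.mean(), rat_corrected.mean())  # both match the gauge mean
```

Both corrections remove the mean bias by construction; they differ in how the adjustment is distributed across wet and dry points, which is why a combined scheme can outperform either alone.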
Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.
Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei
2017-04-01
Because the standard lattice Boltzmann (LB) method is formulated for Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method to represent the axisymmetric effects. Therefore, the accuracy and applicability of axisymmetric LB models depend on the forcing schemes adopted to discretize the source terms. In this study, three forcing schemes, namely, the trapezium-rule-based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scale. In particular, the finite difference interpretation of the standard LB method is extended to LB equations with source terms, and the accuracy of the different forcing schemes is then evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus do not affect the overall accuracy of the standard LB method with a general force term (i.e., only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium-rule-based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests for validating the theoretical analysis show that both the numerical stability and the accuracy of axisymmetric LB simulations are affected by the direct forcing scheme, which indicates that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.
Receiver-Coupling Schemes Based On Optimal-Estimation Theory
NASA Technical Reports Server (NTRS)
Kumar, Rajendra
1992-01-01
Two schemes for reception of weak radio signals conveying digital data via phase modulation provide for mutual coupling of multiple receivers and coherent combination of the receiver outputs. In both schemes, the optimal mutual-coupling weights are computed according to Kalman-filter theory, but the schemes differ in the manner in which the receiver outputs are transmitted and combined.
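The abstract does not give the Kalman equations themselves, so as a stand-in this sketch uses the classical maximal-ratio rule for the coherent-combination step: weighting each receiver output by its amplitude-to-noise-power ratio makes the combined SNR the sum of the per-receiver SNRs. All numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
signal = rng.choice([-1.0, 1.0], size=n)          # BPSK symbols

gains  = np.array([1.0, 0.6, 0.3])                # per-receiver amplitudes
sigmas = np.array([0.8, 0.8, 0.8])                # per-receiver noise std
outputs = gains[:, None] * signal + sigmas[:, None] * rng.standard_normal((3, n))

weights = gains / sigmas**2                       # maximal-ratio weights
combined = weights @ outputs                      # coherent combination

single_snr   = (gains[0] / sigmas[0]) ** 2        # best receiver alone
combined_snr = np.sum((gains / sigmas) ** 2)      # SNRs add under this rule
print(combined_snr / single_snr)                  # gain from combining
```

Even two weak auxiliary receivers give a 45% SNR gain here; a Kalman-based scheme would additionally track the weights as channel conditions drift.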
On the solution of evolution equations based on multigrid and explicit iterative methods
NASA Astrophysics Data System (ADS)
Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.
2015-08-01
Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
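The Chebyshev-based explicit iteration can be sketched as Richardson steps whose parameters are reciprocals of Chebyshev nodes on the operator's spectral interval. This is a generic illustration (a 1-D Laplacian stands in for the paper's 3-D parabolic operator, and the eigenvalue bounds are assumed known):

```python
import numpy as np

def chebyshev_solve(A, b, lmin, lmax, m):
    """m explicit Richardson steps with Chebyshev-optimal parameters tau_k."""
    x = np.zeros_like(b)
    for k in range(m):
        theta = np.pi * (2 * k + 1) / (2 * m)
        # tau_k = 1 / lambda_k, lambda_k a Chebyshev node on [lmin, lmax]
        tau = 2.0 / (lmax + lmin + (lmax - lmin) * np.cos(theta))
        x = x + tau * (b - A @ x)
    return x

# 1-D Dirichlet Laplacian as a stand-in for the discretized operator.
n = 10
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lmin = 2 - 2 * np.cos(np.pi / (n + 1))   # exact extreme eigenvalues
lmax = 2 + 2 * np.cos(np.pi / (n + 1))
b = np.ones(n)

x = chebyshev_solve(A, b, lmin, lmax, m=30)
print(np.linalg.norm(A @ x - b))         # residual after one 30-step sweep
```

In production codes the parameters are reordered (or a three-term recurrence is used) to avoid round-off amplification during the sweep; the naive ordering above is fine at this problem size.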
Multiple image encryption scheme based on pixel exchange operation and vector decomposition
NASA Astrophysics Data System (ADS)
Xiong, Y.; Quan, C.; Tay, C. J.
2018-02-01
We propose a new multiple image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. Scrambled images encrypted into phase information are imported using the proposed algorithm, and phase keys are obtained from the difference between scrambled images and synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as an input in a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.
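The pixel exchange step can be illustrated with a toy: two plaintext images swap pixels wherever a random binary mask is 1, and the mask plays the role of the pixel-position-matrix key, since applying it again undoes the exchange exactly. This is only a sketch of the scrambling idea; the paper's scheme additionally encodes the results into phase and passes them through the vector-decomposition/DRPE stages.

```python
import numpy as np

rng = np.random.default_rng(7)
img1 = rng.integers(0, 256, size=(4, 4))
img2 = rng.integers(0, 256, size=(4, 4))
mask = rng.integers(0, 2, size=(4, 4)).astype(bool)   # private key

def exchange(a, b, m):
    """Swap the pixels of a and b wherever mask m is True."""
    a2, b2 = a.copy(), b.copy()
    a2[m], b2[m] = b[m], a[m]
    return a2, b2

s1, s2 = exchange(img1, img2, mask)   # scrambled images
r1, r2 = exchange(s1, s2, mask)       # applying the key again decrypts
print(np.array_equal(r1, img1) and np.array_equal(r2, img2))
```

The operation is an involution, which is why the same matrix serves for both encryption and decryption.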
NASA Astrophysics Data System (ADS)
Keslake, Tim; Chipperfield, Martyn; Mann, Graham; Flemming, Johannes; Remy, Sam; Dhomse, Sandip; Morgan, Will
2016-04-01
The C-IFS (Composition Integrated Forecast System) developed under the MACC series of projects and to be continued under the Copernicus Atmospheric Monitoring System, provides global operational forecasts and re-analyses of atmospheric composition at high spatial resolution (T255, ~80km). Currently there are 2 aerosol schemes implemented within C-IFS, a mass-based scheme with externally mixed particle types and an aerosol microphysics scheme (GLOMAP-mode). The simpler mass-based scheme is the current operational system, also used in the existing system to assimilate satellite measurements of aerosol optical depth (AOD) for improved forecast capability. The microphysical GLOMAP scheme has now been implemented and evaluated in the latest C-IFS cycle alongside the mass-based scheme. The upgrade to the microphysical scheme provides for higher fidelity aerosol-radiation and aerosol-cloud interactions, accounting for global variations in size distribution and mixing state, and additional aerosol properties such as cloud condensation nuclei concentrations. The new scheme will also provide increased aerosol information when used as lateral boundary conditions for regional air quality models. Here we present a series of experiments highlighting the influence and accuracy of the two different aerosol schemes and the impact of MODIS AOD assimilation. In particular, we focus on the influence of biomass burning emissions on aerosol properties in the Amazon, comparing to ground-based and aircraft observations from the 2012 SAMBBA campaign. Biomass burning can affect regional air quality, human health, regional weather and the local energy budget. Tropical biomass burning generates particles primarily composed of particulate organic matter (POM) and black carbon (BC), the local ratio of these two different constituents often determining the properties and subsequent impacts of the aerosol particles. 
Therefore, the model's ability to capture the concentrations of these two carbonaceous aerosol types, during the tropical dry season, is essential for quantifying these wide ranging impacts. Comparisons to SAMBBA aircraft observations show that while both schemes underestimate POM and BC mass concentrations, the GLOMAP scheme provides a more accurate simulation. When satellite AOD is assimilated into the GEMS-AER scheme, the model is successfully adjusted, capturing observed mass concentrations to a good degree of accuracy.
NASA Astrophysics Data System (ADS)
Li, Wei; Huang, Zhitong; Li, Haoyue; Ji, Yuefeng
2018-04-01
Visible light communication (VLC) is a promising candidate for short-range broadband access due to its integration of advantages for both optical communication and wireless communication, whereas multi-user access is a key problem because of the intra-cell and inter-cell interferences. In addition, the non-flat channel effect results in higher losses for users in high frequency bands, which leads to unfair qualities. To solve those issues, we propose a power adaptive multi-filter carrierless amplitude and phase access (PA-MF-CAPA) scheme, and in the first step of this scheme, the MF-CAPA scheme utilizing multiple filters as different CAP dimensions is used to realize multi-user access. The character of orthogonality among the filters in different dimensions can mitigate the effect of intra-cell and inter-cell interferences. Moreover, the MF-CAPA scheme provides different channels modulated on the same frequency bands, which further increases the transmission rate. Then, the power adaptive procedure based on MF-CAPA scheme is presented to realize quality fairness. As demonstrated in our experiments, the MF-CAPA scheme yields an improved throughput compared with multi-band CAP access scheme, and the PA-MF-CAPA scheme enhances the quality fairness and further improves the throughput compared with the MF-CAPA scheme.
One-dimensional high-order compact method for solving Euler's equations
NASA Astrophysics Data System (ADS)
Mohamad, M. A. H.; Basri, S.; Basuno, B.
2012-06-01
In the field of computational fluid dynamics, many numerical algorithms have been developed to simulate inviscid, compressible flow problems. Among the most famous and relevant are those based on flux-vector splitting and Godunov-type schemes. This system was previously developed through computational studies by Mawlood [1]; however, new test cases for compressible flows, namely the shock-tube problems of receding flow and shock waves, were not investigated in that work. Thus, the objective of this study is to develop a high-order compact (HOC) finite difference solver for the one-dimensional Euler equations. Before developing the solver, a detailed investigation was conducted to assess the performance of the basic third-order compact central discretization schemes. Spatial discretization of the Euler equations is based on flux-vector splitting. Discretization of the convective flux terms is based on a hybrid flux-vector splitting, known as the advection upstream splitting method (AUSM), which combines the accuracy of flux-difference splitting and the robustness of flux-vector splitting; the AUSM scheme was then combined with the third-order compact scheme, and the resulting approximate finite difference equation was fully analyzed. For the first-order schemes in the one-dimensional problem, an explicit time-integration method is adopted. In addition, the developed and modified source code for one-dimensional flow is validated with four test cases, namely, the unsteady shock tube, quasi-one-dimensional supersonic-subsonic nozzle flow, receding flow, and shock waves in shock tubes. These results also confirm that the corresponding Riemann problems are correctly identified.
Further analysis compared the characteristics of the AUSM scheme against experimental results obtained from previous works, as well as against computational results generated by the van Leer, KFVS, and AUSMPW schemes. There is a remarkable improvement with the extension of the AUSM scheme from first-order to third-order accuracy in terms of shocks, contact discontinuities, and rarefaction waves.
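The AUSM interface flux at the heart of the solver above is compact enough to sketch in full. This is the original Liou-Steffen splitting (not the paper's compact-scheme extension): the interface Mach number is assembled from polynomial splittings of the left and right Mach numbers, the convective part is upwinded by its sign, and the pressure is split separately.

```python
import numpy as np

GAMMA = 1.4

def ausm_flux(rhoL, uL, pL, rhoR, uR, pR):
    """Liou-Steffen AUSM interface flux (mass, momentum, energy) for 1-D Euler."""
    def split(rho, u, p, sign):
        c = np.sqrt(GAMMA * p / rho)               # speed of sound
        M = u / c
        if abs(M) <= 1.0:                          # subsonic: polynomial splittings
            Ms = sign * 0.25 * (M + sign) ** 2
            ps = 0.25 * p * (M + sign) ** 2 * (2.0 - sign * M)
        else:                                      # supersonic: fully one-sided
            Ms = 0.5 * (M + sign * abs(M))
            ps = p * Ms / M
        H = c * c / (GAMMA - 1.0) + 0.5 * u * u    # total enthalpy
        phi = np.array([rho * c, rho * c * u, rho * c * H])
        return Ms, ps, phi

    ML, pLs, phiL = split(rhoL, uL, pL, +1.0)
    MR, pRs, phiR = split(rhoR, uR, pR, -1.0)
    M_half = ML + MR
    upwind = phiL if M_half >= 0.0 else phiR       # convective upwinding
    return M_half * upwind + np.array([0.0, pLs + pRs, 0.0])

# Consistency check: for a uniform state the flux reduces to the exact
# Euler flux (rho*u, rho*u^2 + p, rho*u*H).
print(ausm_flux(1.0, 0.5, 1.0, 1.0, 0.5, 1.0))
```

Embedding this flux in a higher-order reconstruction (as the paper does with third-order compact differences) changes only how the left/right states are obtained, not the flux function itself.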
Antenna Allocation in MIMO Radar with Widely Separated Antennas for Multi-Target Detection
Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong
2014-01-01
In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes. PMID:25350505
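The first scheme above casts antenna allocation as minimum-makespan scheduling. As a generic illustration (the job sizes below are made-up "illumination loads", and this is the classic longest-processing-time greedy rather than the paper's branch-and-bound or enhanced factor-2 algorithm):

```python
import heapq

def lpt_schedule(job_sizes, n_machines):
    """Greedy LPT: each job, largest first, goes to the least-loaded machine."""
    heap = [(0.0, m) for m in range(n_machines)]   # (current load, machine id)
    heapq.heapify(heap)
    assignment = {m: [] for m in range(n_machines)}
    for size in sorted(job_sizes, reverse=True):
        load, m = heapq.heappop(heap)
        assignment[m].append(size)
        heapq.heappush(heap, (load + size, m))
    makespan = max(load for load, _ in heap)
    return assignment, makespan

assignment, makespan = lpt_schedule([5, 4, 3, 3], n_machines=2)
print(makespan)   # the largest total load on any machine
```

Greedy list scheduling is a 2-approximation of the optimal makespan (LPT tightens this to 4/3), which is the kind of guarantee the "factor 2" algorithm in the abstract refers to.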
Investigation of combustion characteristics in a scramjet combustor using a modified flamelet model
NASA Astrophysics Data System (ADS)
Zhao, Guoyan; Sun, Mingbo; Wang, Hongbo; Ouyang, Hao
2018-07-01
In this study, the characteristics of supersonic combustion inside an ethylene-fueled scramjet combustor equipped with multi-cavities were investigated under different injection schemes. Experimental results showed that the flames concentrated in the cavity and in the separated boundary layer downstream of the cavity, and that they occupied the flow channel, further enhancing the bulk flow compression. The flame structure in the distributed injection scheme differed from that in the centralized injection scheme. In the numerical simulations, a modified flamelet model was introduced to account for the pressure distribution being far from homogeneous inside the scramjet combustor. Compared with the original flamelet model, numerical predictions based on the modified model showed better agreement with the experimental results, validating the reliability of the calculations. Based on the modified model, simulations with the different injection schemes were analysed. The predicted flame structure agreed reasonably with the experimental observations. The CO mass was concentrated in the cavity and in the subsonic region adjacent to the cavity shear layer, leading to intense heat release. Compared with the centralized scheme, the higher jet mixing efficiency of the distributed scheme induced intense combustion in the posterior upper cavity and downstream of the cavity. From the streamlines and isolation surfaces, the combustion at the trail of the lower cavity was suppressed, since the bulk flow downstream of the cavity is pushed down.
NASA Astrophysics Data System (ADS)
Schaefer, Semjon; Gregory, Mark; Rosenkranz, Werner
2016-11-01
We present simulative and experimental investigations of different coherent receiver designs for high-speed optical intersatellite links. We focus on frequency offset (FO) compensation in homodyne and intradyne detection systems. The considered laser communication terminal uses an optical phase-locked loop (OPLL), which ensures stable homodyne detection. However, the hardware complexity increases with the modulation order. Therefore, we show that software-based intradyne detection is an attractive alternative for OPLL-based homodyne systems. Our approach is based on digital FO and phase noise compensation, in order to achieve a more flexible coherent detection scheme. Analytic results will further show the theoretical impact of the different detection schemes on the receiver sensitivity. Finally, we compare the schemes in terms of bit error ratio measurements and optimal receiver design.
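The digital FO compensation that makes intradyne detection attractive is typically modulation-removal based. As a generic sketch (the parameters are illustrative, and this is the classic fourth-power method rather than necessarily the paper's algorithm): raising a QPSK signal to the 4th power strips the data phase, so the residual carrier appears as a single FFT tone at four times the frequency offset.

```python
import numpy as np

rng = np.random.default_rng(1)
n, fs = 4096, 1.0                        # samples, normalized sample rate
f_off = 0.01                             # true frequency offset (fraction of fs)

# QPSK symbols at pi/4 + k*pi/2, rotated by the frequency offset.
symbols = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, n) + 1j * np.pi / 4)
rx = symbols * np.exp(2j * np.pi * f_off * np.arange(n))

# Fourth power removes the QPSK modulation; the offset shows up at 4*f_off.
spec = np.fft.fft(rx ** 4)
freqs = np.fft.fftfreq(n, d=1 / fs)
f_est = freqs[np.argmax(np.abs(spec))] / 4
print(f_est)                             # close to f_off, limited by FFT bin width
```

The estimator's resolution is one FFT bin divided by four; a residual-phase tracking stage (e.g. Viterbi-Viterbi) then removes what the coarse FFT search leaves behind.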
An effective and secure key-management scheme for hierarchical access control in E-medicine system.
Odelu, Vanga; Das, Ashok Kumar; Goswami, Adrijit
2013-04-01
Recently, several hierarchical access control schemes have been proposed in the literature to provide security for e-medicine systems. However, most of them are either insecure against the man-in-the-middle attack or require high storage and computational overheads. Wu and Chen proposed a key management method to solve dynamic access control problems in a user hierarchy based on a hybrid cryptosystem. Though their scheme improves computational efficiency over Nikooghadam et al.'s approach, it suffers from large storage space for public parameters in the public domain and computational inefficiency due to costly elliptic curve point multiplication. Recently, Nikooghadam and Zakerolhosseini showed that Wu-Chen's scheme is vulnerable to the man-in-the-middle attack. In order to remedy this security weakness in Wu-Chen's scheme, they proposed a secure scheme which is again based on ECC (elliptic curve cryptography) and an efficient one-way hash function. However, their scheme incurs a huge computational cost for providing verification of public information in the public domain, since it uses an ECC digital signature, which is costly compared to a symmetric-key cryptosystem. In this paper, we propose an effective access control scheme in a user hierarchy which is based only on a symmetric-key cryptosystem and an efficient one-way hash function. We show that our scheme significantly reduces the storage space for both public and private domains, as well as the computational complexity, when compared to Wu-Chen's scheme, Nikooghadam-Zakerolhosseini's scheme, and other related schemes. Through informal and formal security analysis, we further show that our scheme is secure against different attacks, including the man-in-the-middle attack. Moreover, dynamic access control problems are also solved more efficiently than in other related schemes, making our scheme much more suitable for practical applications of e-medicine systems.
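The basic trick behind hash-based hierarchical key management can be sketched briefly. This is a hedged illustration of the general idea, not the paper's exact construction (the class names and root key are hypothetical): each subordinate class key is derived from its parent's key with one hash, so an ancestor can compute any descendant's key while the reverse would require inverting the hash.

```python
import hashlib

def derive_key(parent_key: bytes, child_id: str) -> bytes:
    """One-way derivation of a child security class key from its parent's key."""
    return hashlib.sha256(parent_key + child_id.encode()).digest()

root = b"\x00" * 32                        # key of the top security class
radiology = derive_key(root, "radiology")  # root can compute this...
scanner = derive_key(radiology, "scanner-7")
# ...but the "radiology" class cannot recover `root` from its own key.
print(scanner.hex()[:16])
```

Dynamic operations (adding or removing a class) then reduce to re-deriving the keys on the affected subtree, which is where the efficiency claims in the abstract come from.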
Symplectic partitioned Runge-Kutta scheme for Maxwell's equations
NASA Astrophysics Data System (ADS)
Huang, Zhi-Xiang; Wu, Xian-Liang
Using the symplectic partitioned Runge-Kutta (PRK) method, we construct, for the first time, a new scheme for approximating the solution to infinite-dimensional nonseparable Hamiltonian systems of Maxwell's equations. The scheme is obtained by discretizing Maxwell's equations in the time direction based on the symplectic PRK method, and then evaluating the equation in the spatial direction with a suitable finite difference approximation. Several numerical examples are presented to verify the efficiency of the scheme.
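The defining property of symplectic PRK time stepping, bounded long-time energy error, is easiest to see on a toy Hamiltonian. Below is the Stormer-Verlet scheme (the simplest symplectic PRK method) applied to the harmonic oscillator H = (p^2 + q^2)/2; this is an illustration of the class of integrator, not the paper's Maxwell discretization.

```python
def verlet(q, p, dt, steps):
    """Stormer-Verlet (kick-drift-kick) for H = (p^2 + q^2)/2."""
    for _ in range(steps):
        p -= 0.5 * dt * q      # half kick  (dH/dq = q)
        q += dt * p            # full drift (dH/dp = p)
        p -= 0.5 * dt * q      # half kick
    return q, p

q, p = verlet(1.0, 0.0, dt=0.1, steps=10_000)
energy = 0.5 * (p * p + q * q)
print(abs(energy - 0.5))       # stays O(dt^2) even after 10^4 steps
```

A non-symplectic explicit method of the same order would show secular energy drift over such a run; the bounded oscillation here is the behavior the Maxwell scheme above exploits in the time direction.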
Compiler-directed cache management in multiprocessors
NASA Technical Reports Server (NTRS)
Cheong, Hoichi; Veidenbaum, Alexander V.
1990-01-01
The necessity of finding alternatives to hardware-based cache coherence strategies for large-scale multiprocessor systems is discussed. Three different software-based strategies sharing the same goals and general approach are presented. They consist of a simple invalidation approach, a fast selective invalidation scheme, and a version control scheme. The strategies are suitable for shared-memory multiprocessor systems with interconnection networks and a large number of processors. Results of trace-driven simulations conducted on numerical benchmark routines to compare the performance of the three schemes are presented.
NASA Astrophysics Data System (ADS)
Faridatussafura, Nurzaka; Wandala, Agie
2018-05-01
The meteorological model WRF-ARW version 3.8.1 is used to simulate the heavy rainfall that occurred in Semarang on February 12th, 2015. Two different convective schemes and two different microphysics schemes in a nested configuration were chosen, and the sensitivity of those schemes in capturing the extreme weather event was tested. GFS data were used for the initial and boundary conditions. Verification of the twenty-four-hour accumulated rainfall against GSMaP satellite data shows that the combination of the Kain-Fritsch convective scheme and the Lin microphysics scheme is the best among those tested. This combination also gives the highest success ratio value in placing the high-intensity rainfall area. Based on the ROC diagram, KF-Lin shows the best performance in detecting high-intensity rainfall, although the combination still has a high bias value.
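The success ratio, ROC coordinates, and frequency bias used in this verification all come from the standard 2x2 contingency table of forecast versus observed events. A sketch with hypothetical counts (thresholded at "high-intensity" rain):

```python
# Hypothetical contingency-table counts for an event threshold.
hits, misses, false_alarms, correct_negs = 42, 18, 14, 126

pod = hits / (hits + misses)                          # probability of detection (ROC y-axis)
pofd = false_alarms / (false_alarms + correct_negs)   # prob. of false detection (ROC x-axis)
success_ratio = hits / (hits + false_alarms)          # = 1 - false alarm ratio
bias = (hits + false_alarms) / (hits + misses)        # frequency bias (>1 = overforecast)

print(pod, pofd, success_ratio, bias)
```

A scheme can sit well on the ROC diagram (high POD, low POFD) and still carry a poor frequency bias, which is exactly the KF-Lin behavior the abstract reports.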
NASA Astrophysics Data System (ADS)
Xia, Weiwei; Shen, Lianfeng
We propose two vertical handoff schemes for cellular network and wireless local area network (WLAN) integration: integrated service-based handoff (ISH) and integrated service-based handoff with queue capabilities (ISHQ). Compared with existing handoff schemes in integrated cellular/WLAN networks, the proposed schemes consider a more comprehensive set of system characteristics such as different features of voice and data services, dynamic information about the admitted calls, user mobility and vertical handoffs in two directions. The code division multiple access (CDMA) cellular network and IEEE 802.11e WLAN are taken into account in the proposed schemes. We model the integrated networks by using multi-dimensional Markov chains and the major performance measures are derived for voice and data services. The important system parameters such as thresholds to prioritize handoff voice calls and queue sizes are optimized. Numerical results demonstrate that the proposed ISHQ scheme can maximize the utilization of overall bandwidth resources with the best quality of service (QoS) provisioning for voice and data services.
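The multi-dimensional Markov models above generalize classic single-cell loss analysis. As a one-dimensional illustration only (not the paper's model), the Erlang-B recursion gives the blocking probability of a cell with c channels under offered load a; the numbers are made up:

```python
def erlang_b(a, c):
    """Blocking probability for offered load a (Erlangs) and c channels."""
    b = 1.0                       # B(a, 0) = 1
    for n in range(1, c + 1):
        b = a * b / (n + a * b)   # stable recursion B(a, n) from B(a, n-1)
    return b

print(erlang_b(a=10.0, c=15))     # about 3.7% of calls blocked
```

The schemes in the abstract extend this kind of birth-death analysis with extra state dimensions for service type, handoff queues, and guard-channel thresholds, which is why the chains become multi-dimensional.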
Threshold quantum state sharing based on entanglement swapping
NASA Astrophysics Data System (ADS)
Qin, Huawang; Tso, Raylin
2018-06-01
A threshold quantum state sharing scheme is proposed. The dealer uses quantum controlled-NOT operations to expand the d-dimensional quantum state and then uses entanglement swapping to distribute the state to a random subset of participants. The participants use single-particle measurements and unitary operations to recover the initial quantum state. In our scheme, the dealer can share different quantum states among different subsets of participants simultaneously, so the scheme is very flexible in practice.
Teleportation of Unknown Superpositions of Collective Atomic Coherent States
NASA Astrophysics Data System (ADS)
Zheng, Shi-Biao
2001-06-01
We propose a scheme to teleport an unknown superposition of two atomic coherent states with different phases. Our scheme is based on resonant and dispersive atom-field interaction, and it provides, for the first time, a possibility of teleporting macroscopic superposition states of many atoms. The project was supported by the National Natural Science Foundation of China under Grant No. 60008003.
NASA Astrophysics Data System (ADS)
Lee, Sheng-Jui; Chen, Hung-Cheng; You, Zhi-Qiang; Liu, Kuan-Lin; Chow, Tahsin J.; Chen, I.-Chia; Hsu, Chao-Ping
2010-10-01
We calculate the electron transfer (ET) rates for a series of heptacyclo[6.6.0.0^{2,6}.0^{3,13}.0^{4,11}.0^{5,9}.0^{10,14}]tetradecane (HCTD) linked donor-acceptor molecules. The electronic coupling factor was calculated by the fragment charge difference (FCD) [19] and the generalized Mulliken-Hush (GMH) schemes [20]. We found that the FCD is less prone to problems commonly seen in the GMH scheme, especially when the coupling values are small. For a 3-state case where the charge transfer (CT) state is coupled with two different locally excited (LE) states, we tested the 3-state approach for the GMH scheme [30] and found that it works well with the FCD scheme. A simplified direct diagonalization based on Rust's 3-state scheme was also proposed and tested. This simplified scheme does not require a manual assignment of the states, and it yields coupling values largely similar to those from the full Rust approach. The overall ET rates were also calculated.
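For the two-state case, the GMH coupling named above has a closed form in terms of the adiabatic energy gap, the dipole-moment difference, and the transition dipole. A sketch with illustrative input values (energies in eV, dipoles in atomic units; not data from the paper):

```python
import math

def gmh_coupling(dE, dmu, mu_tr):
    """Two-state GMH: |H_ab| = |mu_tr| * |dE| / sqrt(dmu^2 + 4 mu_tr^2)."""
    return abs(mu_tr) * abs(dE) / math.sqrt(dmu ** 2 + 4.0 * mu_tr ** 2)

print(gmh_coupling(dE=0.5, dmu=8.0, mu_tr=0.6))  # a weakly coupled pair, in eV
```

When mu_tr is small relative to dmu, the denominator is dominated by dmu and the coupling becomes sensitive to errors in the transition dipole, which is one reason the FCD scheme behaves better at small couplings, as the abstract notes.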
Universal health coverage in Latin American countries: how to improve solidarity-based schemes.
Titelman, Daniel; Cetrángolo, Oscar; Acosta, Olga Lucía
2015-04-04
In this Health Policy we examine the association between the financing structure of health systems and universal health coverage. Latin American health systems encompass a wide range of financial sources, which translate into different solidarity-based schemes that combine contributory (payroll taxes) and non-contributory (general taxes) sources of financing. To move towards universal health coverage, solidarity-based schemes must heavily rely on countries' capacity to increase public expenditure in health. Improvement of solidarity-based schemes will need the expansion of mandatory universal insurance systems and strengthening of the public sector including increased fiscal expenditure. These actions demand a new model to integrate different sources of health-sector financing, including general tax revenue, social security contributions, and private expenditure. The extent of integration achieved among these sources will be the main determinant of solidarity and universal health coverage. The basic challenges for improvement of universal health coverage are not only to spend more on health, but also to reduce the proportion of out-of-pocket spending, which will need increased fiscal resources. Copyright © 2015 Elsevier Ltd. All rights reserved.
Self-match based on polling scheme for passive optical network monitoring
NASA Astrophysics Data System (ADS)
Zhang, Xuan; Guo, Hao; Jia, Xinhong; Liao, Qinghua
2018-06-01
We propose a polling-based self-match scheme for passive optical network monitoring. Each end-user is equipped with an optical matcher that exploits only a patchcord of specific length and two different fiber Bragg gratings with 100% reflectivity. This simple and low-cost scheme can greatly simplify the final recognition processing of the network link status and reduce the sensitivity required of the photodetector. We analyze the time-domain relation between reflected pulses and establish a calculation model to evaluate the false alarm rate. The feasibility of the proposed scheme and the validity of the time-domain relation analysis are experimentally demonstrated.
NASA Astrophysics Data System (ADS)
Popescu, Mihaela; Shyy, Wei; Garbey, Marc
2005-12-01
In developing suitable numerical techniques for computational aero-acoustics, the dispersion-relation-preserving (DRP) scheme by Tam and co-workers and the optimized prefactored compact (OPC) scheme by Ashcroft and Zhang have shown desirable properties of reducing both dissipative and dispersive errors. These schemes, originally based on the finite difference, attempt to optimize the coefficients for better resolution of short waves with respect to the computational grid while maintaining pre-determined formal orders of accuracy. In the present study, finite volume formulations of both schemes are presented to better handle the nonlinearity and complex geometry encountered in many engineering applications. Linear and nonlinear wave equations, with and without viscous dissipation, have been adopted as the test problems. Highlighting the principal characteristics of the schemes and utilizing linear and nonlinear wave equations with different wavelengths as the test cases, the performance of these approaches is documented. For the linear wave equation, there is no major difference between the DRP and OPC schemes. For the nonlinear wave equations, the finite volume version of both DRP and OPC schemes offers substantially better solutions in regions of high gradient or discontinuity.
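The dispersion-preserving idea can be illustrated with the modified-wavenumber analysis that underlies such optimizations. The sketch below compares standard second- and fourth-order central-difference stencils; the coefficients are the classical Taylor-based ones, not the DRP- or OPC-optimized values.

```python
import math

def modified_wavenumber(coeffs, k_dx):
    """Effective (modified) wavenumber k~*dx of an antisymmetric central
    difference stencil  u'_i ~ (1/dx) * sum_j a_j * (u_{i+j} - u_{i-j}).
    An ideal scheme would return k_dx itself for all resolvable waves."""
    return 2.0 * sum(a * math.sin(j * k_dx) for j, a in enumerate(coeffs, start=1))

# Standard Taylor-based coefficients (a_1, a_2, ...)
second_order = (0.5,)
fourth_order = (2.0 / 3.0, -1.0 / 12.0)

k_dx = 1.0  # a moderately short wave on the grid
err2 = abs(modified_wavenumber(second_order, k_dx) - k_dx)
err4 = abs(modified_wavenumber(fourth_order, k_dx) - k_dx)
```

DRP-type schemes choose the coefficients to minimize this mismatch over a band of wavenumbers rather than to maximize the formal Taylor order.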
A Layered Searchable Encryption Scheme with Functional Components Independent of Encryption Methods
Luo, Guangchun; Qin, Ke
2014-01-01
Searchable encryption technique enables the users to securely store and search their documents over the remote semi-trusted server, which is especially suitable for protecting sensitive data in the cloud. However, various settings (based on symmetric or asymmetric encryption) and functionalities (ranked keyword query, range query, phrase query, etc.) are often realized by different methods with different searchable structures that are generally not compatible with each other, which limits the scope of application and hinders functional extension. We prove that asymmetric searchable structure could be converted to symmetric structure, and functions could be modeled separately apart from the core searchable structure. Based on this observation, we propose a layered searchable encryption (LSE) scheme, which provides compatibility, flexibility, and security for various settings and functionalities. In this scheme, the outputs of the core searchable component based on either symmetric or asymmetric setting are converted to some uniform mappings, which are then transmitted to loosely coupled functional components to further filter the results. In such a way, all functional components could directly support both symmetric and asymmetric settings. Based on LSE, we propose two representative and novel constructions for ranked keyword query (previously only available in symmetric schemes) and range query (previously only available in asymmetric schemes). PMID:24719565
Chung, Yun Won; Hwang, Ho Young
2010-01-01
In sensor networks, energy conservation is one of the most critical issues, since sensor nodes should perform a sensing task for a long time (e.g., lasting a few years) but their batteries cannot be replaced in most practical situations. For this purpose, numerous energy conservation schemes have been proposed, and duty cycling is considered the most suitable power conservation technique, where sensor nodes alternate between states having different levels of power consumption. In order to analyze the energy consumption of an energy conservation scheme based on duty cycling, it is essential to obtain the probability of each state. In this paper, we analytically derive the steady state probabilities of the sensor node states, i.e., the sleep, listen, and active states, based on traffic characteristics and timer values, i.e., the sleep timer, listen timer, and active timer. The effect of traffic characteristics and timer values on the steady state probabilities and energy consumption is analyzed in detail. Our work can provide sensor network operators a guideline for selecting appropriate timer values for efficient energy conservation. The analytical methodology developed in this paper can be extended without much difficulty to other energy conservation schemes based on duty cycling with different sensor node states. PMID:22219676
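A minimal sketch of this kind of steady-state analysis, assuming a simple cyclic renewal model in which each state's probability is its mean dwell time over the mean cycle length (the timer values, traffic probability, and power figures are illustrative, not from the paper):

```python
def steady_state_probs(t_sleep, t_listen, t_active, p_traffic):
    """Steady-state probabilities of a cyclic sleep/listen/active node.

    Renewal-reward sketch: every cycle spends t_sleep asleep and t_listen
    listening; with probability p_traffic (traffic detected while
    listening) it additionally spends t_active in the active state.
    """
    cycle = t_sleep + t_listen + p_traffic * t_active
    return {
        "sleep": t_sleep / cycle,
        "listen": t_listen / cycle,
        "active": p_traffic * t_active / cycle,
    }

def mean_power(probs, p_sleep=0.01, p_listen=10.0, p_active=25.0):
    """Average power draw (mW, illustrative figures) under the duty cycle."""
    return (probs["sleep"] * p_sleep + probs["listen"] * p_listen
            + probs["active"] * p_active)

# Timers in ms: a long sleep timer drives the average power toward p_sleep
probs = steady_state_probs(t_sleep=900.0, t_listen=100.0,
                           t_active=50.0, p_traffic=0.2)
```

The comparison of timer settings via `mean_power` mirrors the operator guideline the paper derives: lengthening the sleep timer lowers average consumption at the cost of responsiveness.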
Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang
2017-01-01
By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and robot dynamics into account, is presented and investigated for fault-tolerant motion planning of a redundant manipulator in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulations based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP, and the corresponding discrete-time recurrent neural network. PMID:28955217
Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho
2015-01-01
In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem arising from users' management of different identities and passwords. For this reason, numerous user authentication schemes designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.'s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.'s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.'s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; we then show that our proposed scheme is more secure and supports the required security properties. PMID:26709702
NASA Astrophysics Data System (ADS)
Zhao, Liang; Adhikari, Avishek; Sakurai, Kouichi
Watermarking is one of the most effective techniques for copyright protection and information hiding, and it can be applied in many fields of our society. Nowadays, image scrambling schemes are often used as one part of a watermarking algorithm to enhance security. Therefore, how to select an image scrambling scheme, and what kind of image scrambling scheme may be used for watermarking, are key problems. An evaluation method for image scrambling schemes can serve as a useful test tool for revealing the properties or flaws of a scrambling method. In this paper, a new scrambling evaluation system based on spatial distribution entropy and the centroid difference of bit-planes is presented to obtain the scrambling degree of image scrambling schemes. Our scheme is illustrated and justified through computer simulations. The experimental results (in Figs. 6 and 7) show that for a general gray-scale image, the evaluation degree of the corresponding cipher image for the first 4 significant bit-planes is nearly the same as that for all 8 bit-planes. Hence, instead of taking all 8 bit-planes of a gray-scale image, it is sufficient to take only the first 4 significant bit-planes to find the scrambling degree. This 50% reduction in the computational cost makes our scheme efficient.
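A sketch of the bit-plane selection underlying the 50% cost reduction. The block-entropy measure below is a simple stand-in of my own, since the paper's exact spatial distribution entropy and centroid difference definitions are not reproduced here.

```python
import math

def bit_planes(image, planes=4):
    """Most-significant bit-planes of an 8-bit grayscale image (2-D list).
    planes=4 keeps bits 7..4, the planes the evaluation found sufficient."""
    return [[[(pix >> b) & 1 for pix in row] for row in image]
            for b in range(7, 7 - planes, -1)]

def block_entropy(plane, block=2):
    """Shannon entropy of the distribution of ones over non-overlapping
    block x block tiles -- a simple stand-in for a spatial distribution
    entropy (not the paper's exact definition)."""
    rows, cols = len(plane), len(plane[0])
    counts = []
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            counts.append(sum(plane[r + i][c + j]
                              for i in range(block) for j in range(block)))
    total = sum(counts)
    if total == 0:
        return 0.0
    probs = [n / total for n in counts if n]
    return -sum(p * math.log2(p) for p in probs)

# Toy 8x8 gray-scale image with some structure
image = [[(r * 16 + c * 8) % 256 for c in range(8)] for r in range(8)]
planes = bit_planes(image)
entropies = [block_entropy(p) for p in planes]
```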
NASA Astrophysics Data System (ADS)
Pandey, Gavendra; Sharan, Maithili
2018-01-01
Application of atmospheric dispersion models in air quality analysis requires a proper representation of the vertical and horizontal growth of the plume. For this purpose, various schemes for the parameterization of the dispersion parameters (σ's) are described for both stable and unstable conditions. These schemes differ in their use of (i) on-site measurements, to the extent available, (ii) formulations developed for other sites and (iii) empirical relations. The performance of these schemes is evaluated in a previously developed IIT (Indian Institute of Technology) dispersion model with data from single and multiple releases conducted at the Fusion Field Trials, Dugway Proving Ground, Utah, in 2007. Qualitative and quantitative evaluation of the relative performance of all the schemes is carried out for both stable and unstable conditions in light of (i) peak/maximum concentrations and (ii) the overall concentration distribution. The blocked bootstrap resampling technique is adopted to investigate the statistical significance of the differences in performance of the schemes by computing 95% confidence limits on the parameters FB and NMSE. The various analyses based on selected statistical measures indicate consistency between the qualitative and quantitative performances of the σ schemes. The scheme based on the standard deviation of wind velocity fluctuations and Lagrangian time scales exhibits a relatively better performance in predicting the peak as well as the lateral spread.
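The evaluation step can be sketched with the standard definitions of FB and NMSE and a simple blocked bootstrap for the confidence limits (block length and data below are illustrative):

```python
import random

def fractional_bias(obs, pred):
    """FB = 2 * (mean_obs - mean_pred) / (mean_obs + mean_pred)."""
    mo, mp = sum(obs) / len(obs), sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)

def nmse(obs, pred):
    """NMSE = mean((obs - pred)^2) / (mean_obs * mean_pred)."""
    mo, mp = sum(obs) / len(obs), sum(pred) / len(pred)
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    return mse / (mo * mp)

def blocked_bootstrap_ci(obs, pred, stat, block=5, n_boot=500, seed=1):
    """95% confidence limits on a statistic by resampling whole blocks,
    which preserves the serial correlation within each block."""
    rng = random.Random(seed)
    blocks = [(obs[i:i + block], pred[i:i + block])
              for i in range(0, len(obs) - block + 1, block)]
    stats = []
    for _ in range(n_boot):
        sample = [blocks[rng.randrange(len(blocks))] for _ in range(len(blocks))]
        o = [v for b in sample for v in b[0]]
        p = [v for b in sample for v in b[1]]
        stats.append(stat(o, p))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Synthetic concentrations: a scheme that overpredicts by 10%
obs = [10.0 + 0.1 * i for i in range(50)]
pred = [o * 1.1 for o in obs]
lo, hi = blocked_bootstrap_ci(obs, pred, fractional_bias)
```

Two schemes whose FB (or NMSE) confidence intervals do not overlap differ significantly, which is the test applied in the paper.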
NASA Astrophysics Data System (ADS)
Pantano, Carlos
2005-11-01
We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical-dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, it utilizes refinement to computational advantage. The numerical method for the resolved scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions, while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).
Novel schemes for measurement-based quantum computation.
Gross, D; Eisert, J
2007-06-01
We establish a framework which allows one to construct novel schemes for measurement-based quantum computation. The technique develops tools from many-body physics, based on finitely correlated or projected entangled pair states, to go beyond the cluster-state based one-way computer. We identify resource states radically different from the cluster state, in that they exhibit nonvanishing correlations, can be prepared using nonmaximally entangling gates, or have very different local entanglement properties. In the computational models, randomness is compensated in a different manner. It is shown that there exist resource states which are locally arbitrarily close to a pure state. We comment on the possibility of tailoring computational models to specific physical systems.
Construction of Low Dissipative High Order Well-Balanced Filter Schemes for Non-Equilibrium Flows
NASA Technical Reports Server (NTRS)
Wang, Wei; Yee, H. C.; Sjogreen, Bjorn; Magin, Thierry; Shu, Chi-Wang
2009-01-01
The goal of this paper is to generalize the well-balanced approach for non-equilibrium flow studied by Wang et al. [26] to a class of low dissipative high order shock-capturing filter schemes and to explore more advantages of well-balanced schemes in reacting flows. The class of filter schemes developed by Yee et al. [30], Sjoegreen & Yee [24] and Yee & Sjoegreen [35] consists of two steps: a full time step of a spatially high order non-dissipative base scheme and an adaptive nonlinear filter containing shock-capturing dissipation. A good property of these filter schemes is that the base scheme and the filter are stand-alone modules by design. Therefore, the idea of designing a well-balanced filter scheme is straightforward, i.e., choosing a well-balanced base scheme with a well-balanced filter (both of high order). A typical class of these schemes shown in this paper is the high order central difference/predictor-corrector (PC) schemes with a high order well-balanced WENO filter. The new filter scheme with the well-balanced property combines the features of the filter method and the well-balanced property: it can preserve certain steady state solutions exactly; it is able to capture small perturbations, e.g., turbulence fluctuations; and it adaptively controls numerical dissipation. Thus it shows high accuracy, efficiency and stability in shock/turbulence interactions. Numerical examples containing 1D and 2D smooth problems, a 1D stationary contact discontinuity problem and 1D turbulence/shock interactions are included to verify the improved accuracy, in addition to the well-balanced behavior.
NASA Astrophysics Data System (ADS)
Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille C.; Moxley, Katherine M.; Moore, Kathleen; Mannel, Robert S.; Cheng, Samuel; Liu, Hong; Zheng, Bin; Qiu, Yuchen
2017-02-01
Accurate tumor segmentation is a critical step in the development of computer-aided detection (CAD) based quantitative image analysis schemes for early-stage prognostic evaluation of ovarian cancer patients. The purpose of this investigation is to assess the efficacy of several different methods for segmenting the metastatic tumors occurring in different organs of ovarian cancer patients. In this study, we developed a segmentation scheme consisting of eight different algorithms, which can be divided into three groups: 1) region growth based methods; 2) Canny operator based methods; and 3) partial differential equation (PDE) based methods. A total of 138 tumors acquired from 30 ovarian cancer patients were used to test the performance of these eight segmentation algorithms. The results demonstrate that each of the tested tumors can be successfully segmented by at least one of the eight algorithms without manual boundary correction. Furthermore, the modified region growth, classical Canny detector, fast marching, and threshold level set algorithms are suggested for the future development of ovarian cancer related CAD schemes. This study may provide a meaningful reference for developing novel quantitative image feature analysis schemes to more accurately predict the response of ovarian cancer patients to chemotherapy at an early stage.
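A minimal sketch of the region-growth idea (one of the three algorithm groups), assuming a simple mean-based membership test on a toy image; the actual CAD algorithms are considerably more elaborate.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Segment by growing from `seed`: a pixel joins the region when its
    intensity is within `tol` of the running region mean (4-connectivity)."""
    rows, cols = len(image), len(image[0])
    region = {seed}
    total = image[seed[0]][seed[1]]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                mean = total / len(region)
                if abs(image[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    queue.append((nr, nc))
    return region

# Toy image: a bright 3x3 "tumor" (value 200) on a dark background (value 20)
img = [[20] * 8 for _ in range(8)]
for r in range(2, 5):
    for c in range(2, 5):
        img[r][c] = 200
tumor = region_grow(img, seed=(3, 3), tol=30)
```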
Entropy Analysis of Kinetic Flux Vector Splitting Schemes for the Compressible Euler Equations
NASA Technical Reports Server (NTRS)
Shiuhong, Lui; Xu, Jun
1999-01-01
The Flux Vector Splitting (FVS) scheme is one group of approximate Riemann solvers for the compressible Euler equations. In this paper, the discretized entropy condition of the Kinetic Flux Vector Splitting (KFVS) scheme based on gas-kinetic theory is proved. The proof of the entropy condition involves the difference between the entropy definitions for distinguishable and indistinguishable particles.
Scheme for Implementing Teleporting an Arbitrary Tripartite Entangled State in Cavity QED
NASA Astrophysics Data System (ADS)
Wang, Xue-Wen; Peng, Zhao-Hui
2009-10-01
We propose a scheme to teleport an arbitrary tripartite entangled state in cavity QED. In this scheme, the five-qubit Brown state is chosen as the quantum channel. It is shown that the teleportation protocol can be completed perfectly with two different measurement methods. Our scheme might be realizable with present experimental technology.
Mao, Yuxing; Cheng, Tao; Zhao, Huiyuan; Shen, Na
2017-11-27
In Wireless Sensor Networks (WSNs), unlicensed users, that is, sensor nodes, have excessively exploited the unlicensed radio spectrum. Through Cognitive Radio (CR), licensed radio spectra, which are owned by licensed users, can be partly or entirely shared with unlicensed users. This paper proposes a strategic bargaining spectrum-sharing scheme for a CR-based heterogeneous WSN (HWSN). The sensors of an HWSN are heterogeneous and operate in different wireless environments, which leads to various signal-to-noise ratios (SNRs) for the same or different licensed users. Unlicensed users bargain with licensed users over the spectrum price. In each round of bargaining, licensed users are allowed to adaptively adjust their spectrum price to maximize their profits. Then, each unlicensed user makes its best response and informs licensed users by "bargaining" or "warning". Through finite rounds of bargaining, the scheme reaches a Nash bargaining solution (NBS), on which all licensed and unlicensed users agree. The simulation results demonstrate that the proposed scheme can quickly find an NBS and that all players in the game prefer to be honest. The proposed scheme outperforms existing schemes, within a certain range, in terms of fairness and trade success probability.
Bias correction of daily satellite precipitation data using genetic algorithm
NASA Astrophysics Data System (ADS)
Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.
2018-05-01
Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) is produced by blending satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process is aimed at reducing the bias of CHIRP. However, the biases of CHIRPS in statistical moments and quantile values remain high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using observed precipitation data. The scheme combines a Genetic Algorithm with a Nonlinear Power Transformation, and the results were evaluated across different seasons and elevation levels. The experimental results reveal that the scheme robustly reduces the bias in variance, with around 100% reduction, and leads to reductions of the first- and second-quantile biases. However, the bias in the third quantile is only reduced during dry months. Across elevation levels, the performance of the bias correction process differs significantly only in the skewness indicators.
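A hedged sketch of the scheme's core: fitting a nonlinear power transformation y = a * x^b with a tiny real-coded genetic algorithm so that the transformed satellite data match the target mean and variance. The GA operators, parameter ranges, and synthetic data are illustrative assumptions, not the paper's exact setup.

```python
import random

def power_transform(x, a, b):
    """Nonlinear power transformation applied element-wise."""
    return [a * (v ** b) for v in x]

def moments(x):
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x) / len(x)
    return m, var

def fitness(params, sat, obs):
    """Mismatch of mean and variance after the transform (to be minimized)."""
    a, b = params
    mt, vt = moments(power_transform(sat, a, b))
    mo, vo = moments(obs)
    return (mt - mo) ** 2 + (vt - vo) ** 2

def ga_fit(sat, obs, pop_size=40, gens=60, seed=7):
    """Tiny real-coded GA: elitism, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 3.0), rng.uniform(0.5, 2.0)) for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=lambda p: fitness(p, sat, obs))[:pop_size // 2]
        pop = list(elite)
        while len(pop) < pop_size:
            (a1, b1), (a2, b2) = rng.sample(elite, 2)
            w = rng.random()
            pop.append((w * a1 + (1 - w) * a2 + rng.gauss(0, 0.05),
                        w * b1 + (1 - w) * b2 + rng.gauss(0, 0.05)))
    return min(pop, key=lambda p: fitness(p, sat, obs))

# Synthetic example: "observed" rain is a power-law distortion of "satellite"
sat = [0.5 + 0.1 * i for i in range(40)]
obs = power_transform(sat, 1.4, 1.2)
a, b = ga_fit(sat, obs)
```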
Student’s scheme in solving mathematics problems
NASA Astrophysics Data System (ADS)
Setyaningsih, Nining; Juniati, Dwi; Suwarsono
2018-03-01
The purpose of this study was to investigate students' schemes in solving mathematics problems. Schemes are data structures for representing the concepts stored in memory. In this study, we examined them in the context of solving mathematics problems, especially on the topics of ratio and proportion. A scheme is related to problem solving in that it assumes a system is developed in the human mind by acquiring a structure in which problem solving procedures are integrated with some concepts. The data were collected through interviews and students' written work. The results of this study reveal the following students' schemes in solving ratio and proportion problems: (1) the content scheme, where students can describe the selected components of the problem according to their prior knowledge; (2) the formal scheme, where students can construct a mental model based on the components selected from the problem and can use existing schemes to build planning steps and create what will be used to solve the problem; and (3) the language scheme, where students can identify the terms or symbols of the components of the problem. Therefore, when using different strategies to solve the problems, students' schemes in solving ratio and proportion problems will also differ.
Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.
Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem
2018-01-01
In this paper, a hybrid heuristic scheme based on two different basis functions, i.e., Log Sigmoid and Bernstein Polynomial, with unknown parameters is used for solving nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to achieve the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared with, and found in close agreement with, both the exact solution and the solution obtained by the Haar Wavelet-Quasilinearization technique, which attests to the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is also conducted to investigate the stability and reliability of the presented scheme.
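A sketch of the trial-solution idea on a simpler nonlinear ODE, y' + y^2 = 0 with y(0) = 1, using a log-sigmoid expansion; a plain random search stands in for the paper's GA + IPA optimizer, and the collocation setup is an assumption.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def trial(x, params, y0=1.0):
    """Trial solution y(x) = y0 + x * N(x), where N is a 3-term log-sigmoid
    expansion with parameters (c_i, w_i, b_i); the construction satisfies
    the boundary condition y(0) = y0 exactly."""
    n = sum(c * sigmoid(w * x + b) for c, w, b in zip(*[iter(params)] * 3))
    return y0 + x * n

def fitness(params, xs, h=1e-5):
    """Global residual of y' + y^2 = 0 at the collocation points
    (a simple nonlinear stand-in for the heat transfer equations)."""
    err = 0.0
    for x in xs:
        dy = (trial(x + h, params) - trial(x - h, params)) / (2 * h)
        err += (dy + trial(x, params) ** 2) ** 2
    return err

def random_search(xs, n_params=9, iters=3000, seed=3):
    """Stand-in optimizer (the paper uses GA + an interior point algorithm).
    Starts from the zero network, i.e. trial(x) = y0 everywhere."""
    rng = random.Random(seed)
    best = [0.0] * n_params
    best_f = fitness(best, xs)
    step = 0.5
    for _ in range(iters):
        cand = [p + rng.gauss(0, step) for p in best]
        f = fitness(cand, xs)
        if f < best_f:
            best, best_f = cand, f
        step *= 0.999
    return best, best_f

xs = [0.1 * i for i in range(1, 11)]
params, resid = random_search(xs)
```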
Active synchronization between two different chaotic dynamical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maheri, M.; Arifin, N. Md; Ismail, F.
2015-05-15
In this paper we investigate the synchronization problem between two different chaotic dynamical systems based on the Lyapunov stability theorem, using nonlinear control functions. Active control schemes are used for synchronization, with the Liu system as the drive and the Rossler system as the response. Numerical simulations using Maple are presented to show the effectiveness of the proposed schemes.
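A sketch of active-control synchronization: the control input cancels the response dynamics, injects the drive dynamics, and adds linear error feedback, so the error obeys e' = -gain * e regardless of the trajectories. The system parameter values below are commonly used ones, taken here as assumptions rather than the paper's exact choices.

```python
def liu(x, a=10.0, b=40.0, c=2.5, k=1.0, h=4.0):
    """Liu drive system (parameter values are illustrative assumptions)."""
    x1, x2, x3 = x
    return [a * (x2 - x1), b * x1 - k * x1 * x3, -c * x3 + h * x1 * x1]

def rossler(y, a=0.2, b=0.2, c=5.7):
    """Rossler response system (classical parameter values)."""
    y1, y2, y3 = y
    return [-y2 - y3, y1 + a * y2, b + y3 * (y1 - c)]

def step(x, y, dt, gain=5.0):
    """One Euler step with active control u: cancel the response dynamics,
    inject the drive dynamics, add error feedback -> e' = -gain * e."""
    fx, fy = liu(x), rossler(y)
    u = [fx[i] - fy[i] - gain * (y[i] - x[i]) for i in range(3)]
    x_new = [x[i] + dt * fx[i] for i in range(3)]
    y_new = [y[i] + dt * (fy[i] + u[i]) for i in range(3)]
    return x_new, y_new

x, y = [1.0, 2.0, 3.0], [-4.0, 5.0, 1.0]
e0 = sum(abs(y[i] - x[i]) for i in range(3))
for _ in range(2000):
    x, y = step(x, y, dt=0.001)
e1 = sum(abs(y[i] - x[i]) for i in range(3))
```

Under Euler stepping the error contracts by the factor (1 - gain*dt) per step, so after 2000 steps the synchronization error is several orders of magnitude below its initial value.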
NASA Astrophysics Data System (ADS)
Deng, Shuang; Xiang, Wenting; Tian, Yangge
2009-10-01
Map coloring is a hard task even for experienced map experts. In GIS projects, maps usually need to be colored according to customer requirements, which makes the work more complex. With the development of GIS, more and more programmers who lack training in cartography join project teams, and their colored maps are even harder to bring up to customer requirements. Experience shows that customers with similar backgrounds usually have similar tastes in map coloring. We therefore developed a GIS color scheme decision-making system which can select the color schemes of similar customers from a case base for customers to select and adjust. The system is a mixed B/S and C/S system; the client side uses JSP, which makes it possible for the system developers to remotely call the color scheme cases in the database server and communicate with customers. Unlike general case-based reasoning, even very similar customers may make different selections, so it is hard to provide a single "best" option. We therefore selected the Simulated Annealing Algorithm (SAA) to arrange the order in which different color schemes are presented. Customers can also dynamically adjust the colors of certain features based on an existing case. The results show that the system can facilitate communication between designers and customers and improve the quality and efficiency of map coloring.
NASA Astrophysics Data System (ADS)
MacArt, Jonathan F.; Mueller, Michael E.
2016-12-01
Two formally second-order accurate, semi-implicit, iterative methods for the solution of scalar transport-reaction equations are developed for Direct Numerical Simulation (DNS) of low Mach number turbulent reacting flows. The first is a monolithic scheme based on a linearly implicit midpoint method utilizing an approximately factorized exact Jacobian of the transport and reaction operators. The second is an operator splitting scheme based on the Strang splitting approach. The accuracy properties of these schemes, as well as their stability, cost, and the effect of chemical mechanism size on relative performance, are assessed in two one-dimensional test configurations comprising an unsteady premixed flame and an unsteady nonpremixed ignition, which have substantially different Damköhler numbers and relative stiffness of transport to chemistry. All schemes demonstrate their formal order of accuracy in the fully-coupled convergence tests. Compared to a (non-)factorized scheme with a diagonal approximation to the chemical Jacobian, the monolithic, factorized scheme using the exact chemical Jacobian is shown to be both more stable and more economical. This is due to an improved convergence rate of the iterative procedure, and the difference between the two schemes in convergence rate grows as the time step increases. The stability properties of the Strang splitting scheme are demonstrated to outpace those of Lie splitting and monolithic schemes in simulations at high Damköhler number; however, in this regime, the monolithic scheme using the approximately factorized exact Jacobian is found to be the most economical at practical CFL numbers. The performance of the schemes is further evaluated in a simulation of a three-dimensional, spatially evolving, turbulent nonpremixed planar jet flame.
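The Lie-versus-Strang distinction can be sketched on a scalar model problem with exact sub-solvers, where the symmetric sub-step ordering recovers second-order accuracy. This is a minimal illustration of operator splitting, not the paper's transport-reaction solver.

```python
import math

# Model problem u' = -LAM*u - u**2, split into a linear "transport" part
# u' = -LAM*u and a nonlinear "reaction" part u' = -u**2, both of which
# admit exact sub-solvers.

LAM = 2.0

def transport(u, dt):          # exact flow of u' = -LAM*u
    return u * math.exp(-LAM * dt)

def reaction(u, dt):           # exact flow of u' = -u**2
    return u / (1.0 + u * dt)

def lie_step(u, dt):           # first-order Lie (sequential) splitting
    return reaction(transport(u, dt), dt)

def strang_step(u, dt):        # second-order Strang (symmetric) splitting
    return reaction(transport(reaction(u, dt / 2.0), dt), dt / 2.0)

def exact(u0, t):              # closed form for the full Bernoulli equation
    e = math.exp(-LAM * t)
    return LAM * u0 * e / (LAM + u0 * (1.0 - e))

def integrate(stepper, u0, t_end, n):
    u, dt = u0, t_end / n
    for _ in range(n):
        u = stepper(u, dt)
    return u

u0, t_end, n = 1.0, 1.0, 50
err_lie = abs(integrate(lie_step, u0, t_end, n) - exact(u0, t_end))
err_strang = abs(integrate(strang_step, u0, t_end, n) - exact(u0, t_end))
```

Halving the time step cuts the Strang error by roughly a factor of four, confirming its formal second order, while the Lie error only halves.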
NASA Astrophysics Data System (ADS)
Niu, Hailin; Zhang, Xiaotong; Liu, Qiang; Feng, Youbin; Li, Xiuhong; Zhang, Jialin; Cai, Erli
2015-12-01
The ocean surface albedo (OSA) is a deciding factor in ocean net surface shortwave radiation (ONSSR) estimation. Several OSA schemes have been proposed over time, but no conclusion has been reached on the best OSA scheme for estimating the ONSSR. On the basis of an analysis of existing OSA parameterizations, including those of Briegleb et al. (B), Taylor et al. (T), Hansen et al. (H), Jin et al. (J), Preisendorfer and Mobley (PM86), and Feng (F), this study discusses the differences in the OSA schemes' impact on ONSSR estimation under actual downward shortwave radiation (DSR). We then discuss the necessity and applicability of integrating the more complicated OSA schemes into climate models. It is concluded that the SZA and the wind speed are the two most significant factors affecting broadband OSA; thus, the different OSA parameterizations vary strongly in regions of high latitude and strong winds. The OSA schemes can lead to ONSSR differences of the order of 20 W m-2. Taylor's scheme shows the best estimate, with Feng's result just behind Taylor's. However, the accuracy of the estimated instantaneous OSA changes with local time. Jin's scheme generally performs best at noon and in the afternoon, and PM86's is the best of all in the morning, which indicates that the more complicated OSA schemes reflect the temporal variation of OSA better than the simple ones.
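For illustration, a commonly cited broadband form of Taylor et al.'s albedo fit (treated here as an assumption; the original scheme should be checked before reuse) shows the strong solar-zenith-angle dependence noted above.

```python
import math

def taylor_osa(sza_deg):
    """Broadband ocean surface albedo as a function of solar zenith angle,
    in a commonly cited form of Taylor et al.'s fit (an assumption here,
    not reproduced from the paper): alpha = 0.037 / (1.1*mu^1.4 + 0.15)."""
    mu = math.cos(math.radians(sza_deg))
    return 0.037 / (1.1 * mu ** 1.4 + 0.15)

def onssr(dsr, sza_deg):
    """Ocean net surface shortwave radiation = (1 - albedo) * DSR."""
    return (1.0 - taylor_osa(sza_deg)) * dsr

# Albedo grows sharply toward high solar zenith angles, which is why the
# scheme differences are largest at high latitudes
a_noon, a_low_sun = taylor_osa(10.0), taylor_osa(80.0)
```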
Toward functional classification of neuronal types.
Sharpee, Tatyana O
2014-09-17
How many types of neurons are there in the brain? This basic neuroscience question remains unsettled despite many decades of research. Classification schemes have been proposed based on anatomical, electrophysiological, or molecular properties. However, different schemes do not always agree with each other. This raises the question of whether one can classify neurons based on their function directly. For example, among sensory neurons, can a classification scheme be devised that is based on their role in encoding sensory stimuli? Here, theoretical arguments are outlined for how this can be achieved using information theory by looking at optimal numbers of cell types and paying attention to two key properties: correlations between inputs and noise in neural responses. This theoretical framework could help to map the hierarchical tree relating different neuronal classes within and across species. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zepka, G. D.; Pinto, O.
2010-12-01
The intent of this study is to identify the combination of convective and microphysical WRF parameterizations that best matches lightning occurrence over southeastern Brazil. Twelve thunderstorm days were simulated with the WRF model using three different convective parameterizations (Kain-Fritsch, Betts-Miller-Janjic and Grell-Devenyi ensemble) and two different microphysical schemes (Purdue-Lin and WSM6). In order to test the combinations of parameterizations at the time of lightning occurrence, a comparison was made between the WRF grid point values of surface-based Convective Available Potential Energy (CAPE), Lifted Index (LI), K-Index (KI) and equivalent potential temperature (theta-e), and the lightning locations near those grid points. Histograms were built to show the ratio of the occurrence of different values of these variables for WRF grid points associated with lightning to all WRF grid points. The first conclusion from this analysis was that the choice of microphysics did not change the results as appreciably as the choice of convective scheme. The Betts-Miller-Janjic parameterization generally had the worst skill in relating higher magnitudes of all four variables to lightning occurrence. The differences between the Kain-Fritsch and Grell-Devenyi ensemble schemes were not large, which can be attributed to the similar main assumptions of these schemes, both of which consider entrainment/detrainment processes along the cloud boundaries. After that, we examined three case studies using the combinations of convective and microphysical options without the Betts-Miller-Janjic scheme. Differently from traditional verification procedures, fields of surface-based CAPE from the WRF 10 km domain were compared to the Eta model, satellite images and lightning data. In general, the more reliable convective scheme was Kain-Fritsch, since it provided a distribution of the CAPE fields more consistent with the satellite images and lightning data.
Fairness of QoS supporting in optical burst switching
NASA Astrophysics Data System (ADS)
Xuan, Xuelei; Liu, Hua; Chen, Chunfeng; Zhang, Zhizhong
2004-04-01
In this paper we investigate the fairness problem of the offset-time-based quality of service (QoS) scheme proposed by Qiao and Dixit for optical burst switching (OBS) networks. In that scheme, QoS relies on higher-priority bursts requesting reservations further into the future; in practice, however, the base offset-times of data bursts arriving at intermediate nodes are not equal to each other. Here, a new offset-time-based QoS scheme is introduced, in which data bursts are classified according to their offset-time and isolated in the wavelength domain or time domain to achieve parallel reservation. Through simulation, it is found that this scheme achieves fairness among data bursts with different priorities.
Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.
Liu, Tao; Lin, Changyu; Djordjevic, Ivan B
2016-06-27
In this paper, we first describe a 9-symbol non-uniform signaling scheme based on Huffman coding, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme under the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization-multiplexed 9-QAM signaling scheme has the same spectral efficiency as conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.
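The Huffman construction behind such a non-uniform signaling scheme can be sketched as below. The 9-symbol probability distribution is an illustrative dyadic one, not the constellation probabilities from the paper; for a dyadic distribution the Huffman code achieves the source entropy exactly.

```python
import heapq
from itertools import count

def huffman(probs):
    """Build a binary Huffman prefix code; returns {symbol: bit string}."""
    tie = count()  # tie-breaker so heapq never compares the code dicts
    heap = [(p, next(tie), {sym: ""}) for sym, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

# Illustrative dyadic distribution over 9 symbols (an assumption for this
# sketch, not the paper's optimized 9-QAM probabilities):
probs = [1/4, 1/4, 1/8, 1/8, 1/16, 1/16, 1/16, 1/32, 1/32]
code = huffman(probs)
avg_len = sum(probs[s] * len(code[s]) for s in range(9))  # 2.8125 bits/symbol
```

Frequent symbols get short codewords and rare symbols long ones, which is how a 9-point constellation can match the average bits-per-symbol budget of a uniform 8-QAM alphabet.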
Liu, Xiaoting; Wong, Hung; Liu, Kai
2016-01-14
Despite the achievement of nearly universal social health insurance coverage for the elderly in China, inequity among different insurance schemes in health outcomes remains a major challenge for the health care system. Whether the various health insurance schemes have divergent effects on health outcomes is still a puzzle; this study investigates the empirical evidence. It employs a nationally representative survey database, the National Survey of the Aged Population in Urban/Rural China, to compare changes in health outcomes among the elderly before and after the reform. A one-way ANOVA is used to detect disparities in health care expenditures and health status among different health insurance schemes. Multiple linear regression is then applied to examine the further effects of different insurance plans on health outcomes while controlling for other social determinants. The one-way ANOVA shows that although the gaps in insurance reimbursements between the Urban Employee Basic Medical Insurance (UEBMI) and the other schemes, the New Rural Cooperative Medical Scheme (NCMS) and the Urban Residents Basic Medical Insurance (URBMI), decreased, out-of-pocket spending accounted for a larger proportion of total health care expenditures, and the disparities among the different insurances widened. The regression results suggest that UEBMI participants have better self-reported health status, physical function and psychological wellbeing than URBMI and NCMS participants and the uninsured. URBMI participants report better self-rated health than NCMS participants and uninsured people, but worse psychological wellbeing than their NCMS counterparts.
This research contributes to a transformation in health insurance studies from an emphasis on opportunity-oriented health equity, measured by coverage and healthcare accessibility, to a concern with outcome-based equity, composed of health expenditure and health status. The results indicate that fragmented health insurance schemes generate inequitable health care utilization and health outcomes for the elderly. This study re-emphasizes the importance of reforming health insurance systems based on their health outcomes rather than entitlements, which will particularly benefit the most vulnerable older groups.
Törnros, Tobias; Dorn, Helen; Reichert, Markus; Ebner-Priemer, Ulrich; Salize, Hans-Joachim; Tost, Heike; Meyer-Lindenberg, Andreas; Zipf, Alexander
2016-11-21
Self-reporting is a well-established approach within the medical and psychological sciences. In order to avoid recall bias, i.e. past events being remembered inaccurately, the reports can be filled out on a smartphone in real-time and in the natural environment. This is often referred to as ambulatory assessment and the reports are usually triggered at regular time intervals. With this sampling scheme, however, rare events (e.g. a visit to a park or recreation area) are likely to be missed. When addressing the correlation between mood and the environment, it may therefore be beneficial to include participant locations within the ambulatory assessment sampling scheme. Based on the geographical coordinates, the database query system then decides if a self-report should be triggered or not. We simulated four different ambulatory assessment sampling schemes based on movement data (coordinates by minute) from 143 voluntary participants tracked for seven consecutive days. Two location-based sampling schemes incorporating the environmental characteristics (land use and population density) at each participant's location were introduced and compared to a time-based sampling scheme triggering a report on the hour as well as to a sampling scheme incorporating physical activity. We show that location-based sampling schemes trigger a report less often, but we obtain more unique trigger positions and a greater spatial spread in comparison to sampling strategies based on time and distance. Additionally, the location-based methods trigger significantly more often at rarely visited types of land use and less often outside the study region where no underlying environmental data are available.
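The location-based trigger decision described above can be sketched as a simple rule: fire a self-report when the participant is at a rarely visited land-use class and a minimum interval has passed since the last report. The land-use classes and the 60-minute gap below are hypothetical; a real system would query a spatial land-use database with the GPS coordinate.

```python
from datetime import datetime, timedelta

# Hypothetical set of rarely visited land-use classes (an assumption for
# this sketch, not the study's actual classification).
RARE_LAND_USE = {"park", "forest", "water"}

def should_trigger(land_use, now, last_trigger, min_gap=timedelta(minutes=60)):
    """Trigger a self-report if the participant is at a rarely visited
    land-use class and enough time has passed since the last report."""
    if last_trigger is not None and now - last_trigger < min_gap:
        return False  # suppress: too soon after the previous report
    return land_use in RARE_LAND_USE

t0 = datetime(2016, 5, 1, 12, 0)
fired = should_trigger("park", t0, last_trigger=None)
suppressed = should_trigger("park", t0, t0 - timedelta(minutes=10))
```

Compared with an on-the-hour timer, this rule fires less often overall but concentrates reports at the rare environments of interest, which matches the behavior reported in the study.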
Experimental evaluation of multiprocessor cache-based error recovery
NASA Technical Reports Server (NTRS)
Janssens, Bob; Fuchs, W. K.
1991-01-01
Several variations of cache-based checkpointing for rollback error recovery in shared-memory multiprocessors have recently been developed. By modifying the cache replacement policy, these techniques use the inherent redundancy in the memory hierarchy to periodically checkpoint the computation state. Three schemes, which differ in the manner in which they avoid rollback propagation, are evaluated. By simulation with address traces from parallel applications running on an Encore Multimax shared-memory multiprocessor, the performance effect of integrating the recovery schemes into the cache coherence protocol is evaluated. The results indicate that the cache-based schemes can provide checkpointing capability with low performance overhead, but with uncontrollably high variability in the checkpoint interval.
Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.
Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin
2005-03-01
This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior (D(R) ~ c0 2^(-c1 R)), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with a low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree coding scheme outperforms JPEG2000 by about 1 dB for real images, such as Cameraman, at low rates of around 0.15 bpp.
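The prune step of such an R-D optimized binary-tree segmentation can be sketched as follows: each node fits its segment with a polynomial, and children are merged whenever the parent's Lagrangian cost D + λR is no worse than the children's total. The fixed bits-per-coefficient rate model below is a toy assumption, not the paper's bit allocation.

```python
import numpy as np

def seg_cost(x, y, degree, lam, bits_per_coef=16):
    """Distortion + lambda * rate for fitting one segment with a polynomial.
    The rate model (fixed bits per coefficient) is a toy assumption."""
    coef = np.polyfit(x, y, degree)
    dist = float(np.sum((np.polyval(coef, x) - y) ** 2))
    rate = (degree + 1) * bits_per_coef
    return dist + lam * rate

def prune(x, y, degree=1, lam=1.0, min_len=4):
    """Binary-tree split of the signal with bottom-up R-D pruning.
    Returns the list of retained segment index ranges."""
    def rec(lo, hi):
        parent = seg_cost(x[lo:hi], y[lo:hi], degree, lam)
        if hi - lo < 2 * min_len:
            return [(lo, hi)], parent
        mid = (lo + hi) // 2
        segs_l, cost_l = rec(lo, mid)
        segs_r, cost_r = rec(mid, hi)
        if cost_l + cost_r < parent:      # keep the split
            return segs_l + segs_r, cost_l + cost_r
        return [(lo, hi)], parent         # prune: merge the children
    segs, _ = rec(0, len(x))
    return segs

# Piecewise-linear signal with one breakpoint at index 32: pruning should
# collapse each smooth side into a single segment.
x = np.arange(64, dtype=float)
y = np.where(x < 32, 2 * x, -x + 96)
segs = prune(x, y, degree=1, lam=0.01)
```

The split at the breakpoint survives because it removes large distortion, while splits inside each linear piece are pruned because they only add rate.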
Genetic and economic evaluation of Japanese Black (Wagyu) cattle breeding schemes.
Kahi, A K; Hirooka, H
2005-09-01
Deterministic simulation was used to evaluate 10 breeding schemes for genetic gain and profitability in the context of maximizing returns from investment in Japanese Black cattle breeding. A breeding objective that integrated the cow-calf and feedlot segments was considered. Ten breeding schemes that differed in the records available for use as selection criteria were defined. The schemes ranged from one that used the carcass traits currently available to Japanese Black cattle breeders (Scheme 1) to one that also included linear measurements and male and female reproduction traits (Scheme 10), the latter representing the highest level of performance recording. In all breeding schemes, sires were chosen from the proportion selected during the first selection stage (performance testing), modeling a two-stage selection process. The effect on genetic gain and profitability of varying test capacity, varying the number of progeny per sire, and ultrasound scanning of live animals was examined for all breeding schemes. Breeding schemes that selected young bulls during performance testing based on additional individual traits and on information on carcass traits from their relatives generated additional genetic gain and profitability. Increasing test capacity increased genetic gain in all schemes. Profitability was optimal in Schemes 2 (similar to Scheme 1, but with selection of young bulls also based on information on carcass traits from their relatives) through 10 when 900 to 1,000 places were available for performance testing. Similarly, as the number of progeny used in the selection of sires increased, genetic gain first increased sharply and then gradually in all schemes. Profit was optimal across all breeding schemes when sires were selected based on information from 150 to 200 progeny. Additional genetic gain and profitability were generated in each breeding scheme with ultrasound scanning of live animals for carcass traits.
Ultrasound scanning of live animals was more important than the addition of any other trait to the selection criteria. These results may be used to provide guidance to Japanese Black cattle breeders.
Inferring Cirrus Size Distributions Through Satellite Remote Sensing and Microphysical Databases
NASA Technical Reports Server (NTRS)
Mitchell, David; D'Entremont, Robert P.; Lawson, R. Paul
2010-01-01
Since cirrus clouds have a substantial influence on the global energy balance that depends on their microphysical properties, climate models should strive to realistically characterize the cirrus ice particle size distribution (PSD), at least in a climatological sense. To date, airborne in situ measurements of the cirrus PSD have contained large uncertainties due to errors in measuring small ice crystals (D < 60 μm). This paper presents a method to remotely estimate the concentration of the small ice crystals relative to the larger ones using the 11- and 12-μm channels aboard several satellites. By understanding the underlying physics producing the emissivity difference between these channels, this emissivity difference can be used to infer the relative concentration of small ice crystals. This is facilitated by enlisting temperature-dependent characterizations of the PSD (i.e., PSD schemes) based on in situ measurements. An average cirrus emissivity relationship between 12 and 11 μm is developed here using the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite instrument and is used to retrieve the PSD based on six different PSD schemes. The PSDs from the measurement-based PSD schemes are compared with the corresponding retrieved PSDs to evaluate differences in small ice crystal concentrations. The retrieved PSDs generally had lower concentrations of small ice particles, with total number concentration independent of temperature. In addition, the temperature dependence of the PSD effective diameter De and fall speed Vf for these retrieved PSD schemes exhibited less variability relative to the unmodified PSD schemes. The reduced variability in the retrieved De and Vf was attributed to the lower concentrations of small ice crystals in the retrieved PSD.
MRI-based treatment planning with pseudo CT generated through atlas registration.
Uh, Jinsoo; Merchant, Thomas E; Li, Yimei; Li, Xingyu; Hua, Chiaho
2014-05-01
To evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration. A pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference. The atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787-0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%-98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). 
The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found. MRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs.
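Once the atlas CTs are deformed onto the patient's MRI, the arithmetic-mean synthesis described above reduces to a voxel-wise average in Hounsfield units. The sketch below uses small toy arrays as stand-ins for registered CT volumes; the water-HU baseline is the bulk-assignment scheme the paper compares against.

```python
import numpy as np

def mean_pseudo_ct(deformed_atlas_cts):
    """Voxel-wise arithmetic mean of deformed atlas CTs (Hounsfield units).
    Assumes the volumes are already registered to the patient's MRI."""
    stack = np.stack(deformed_atlas_cts, axis=0)
    return stack.mean(axis=0)

def water_bulk_ct(shape):
    """Baseline scheme: assign the HU of water (0) to the whole volume."""
    return np.zeros(shape)

# Toy 'volumes': 6 registered atlases with noisy HU around common anatomy.
rng = np.random.default_rng(1)
anatomy = np.full((4, 4, 4), 40.0)          # soft tissue around 40 HU
atlases = [anatomy + rng.normal(0, 20, anatomy.shape) for _ in range(6)]
pseudo = mean_pseudo_ct(atlases)
```

Averaging several registered atlases suppresses per-atlas registration and anatomy noise, which is consistent with the multiple-atlas schemes outperforming the single-atlas scheme.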
Public attitudes toward stuttering in Turkey: probability versus convenience sampling.
Ozdemir, R Sertan; St Louis, Kenneth O; Topbaş, Seyhun
2011-12-01
A Turkish translation of the Public Opinion Survey of Human Attributes-Stuttering (POSHA-S) was used to compare probability versus convenience sampling in measuring public attitudes toward stuttering. A convenience sample of adults in Eskişehir, Turkey was compared with two replicates of a school-based probability cluster sampling scheme. The two replicates of the probability sampling scheme yielded similar demographic samples, both of which differed from the convenience sample. Components of subscores on the POSHA-S were significantly different in more than half of the comparisons between convenience and probability samples, indicating important differences in public attitudes. If POSHA-S users intend to generalize to specific geographic areas, the results of this study indicate that probability sampling is a better research strategy than convenience sampling. The reader will be able to: (1) discuss the difference between convenience sampling and probability sampling; (2) describe a school-based probability sampling scheme; and (3) describe differences in POSHA-S results from convenience sampling versus probability sampling.
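A school-based probability cluster sample of the kind used here can be sketched as a two-stage draw: first sample clusters (schools) at random, then sample respondents within each chosen cluster. The sampling frame below is entirely hypothetical.

```python
import random

def cluster_sample(frame, n_clusters, per_cluster, seed=0):
    """Two-stage probability sample: pick clusters (e.g. schools) at random,
    then respondents at random within each chosen cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(frame), n_clusters)
    sample = []
    for cluster in chosen:
        members = frame[cluster]
        sample += rng.sample(members, min(per_cluster, len(members)))
    return sample

# Hypothetical frame: each school lists the adults reachable through it.
frame = {f"school_{i}": [f"s{i}_adult_{j}" for j in range(30)] for i in range(12)}
respondents = cluster_sample(frame, n_clusters=4, per_cluster=10)
```

Unlike a convenience sample, every adult in the frame has a known, nonzero chance of selection, which is what licenses generalization to the geographic area the frame covers.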
A Protocol Layer Trust-Based Intrusion Detection Scheme for Wireless Sensor Networks
Wang, Jian; Jiang, Shuai; Fapojuwo, Abraham O.
2017-01-01
This article proposes a protocol layer trust-based intrusion detection scheme for wireless sensor networks. Unlike existing work, the trust value of a sensor node is evaluated according to the deviations of key parameters at each protocol layer, considering that attacks initiated at different protocol layers will inevitably affect the parameters of the corresponding layers. For simplicity, the paper mainly considers three aspects of trustworthiness, namely physical layer trust, media access control layer trust and network layer trust. The per-layer trust metrics are then combined to determine the overall trust metric of a sensor node. The performance of the proposed intrusion detection mechanism is then analyzed using the t-distribution to derive analytical results for the false positive and false negative probabilities. Numerical analytical results, validated by simulation results, are presented for different attack scenarios. It is shown that the proposed protocol layer trust-based intrusion detection scheme outperforms a state-of-the-art scheme in terms of detection probability and false-alarm probability, demonstrating its usefulness for detecting cross-layer attacks.
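The per-layer trust combination described above can be sketched as follows. The deviation-to-trust mapping, the example parameters (RSSI, collision rate, forwarding ratio) and the equal weights are illustrative assumptions, not the paper's exact formulation.

```python
def layer_trust(observed, expected, tolerance):
    """Map a parameter's deviation from its expected value into [0, 1]:
    no deviation -> trust 1; deviation >= tolerance -> trust 0."""
    deviation = abs(observed - expected)
    return max(0.0, 1.0 - deviation / tolerance)

def node_trust(phy, mac, net, weights=(1/3, 1/3, 1/3)):
    """Weighted combination of physical, MAC and network layer trust."""
    return weights[0] * phy + weights[1] * mac + weights[2] * net

# Example: RSSI (PHY), collision rate (MAC), forwarding ratio (NET).
phy = layer_trust(observed=-62.0, expected=-60.0, tolerance=10.0)
mac = layer_trust(observed=0.05, expected=0.05, tolerance=0.2)
net = layer_trust(observed=0.40, expected=0.95, tolerance=0.6)  # dropped packets
trust = node_trust(phy, mac, net)
```

A network-layer attack such as selective forwarding pulls down only the network-layer term, yet the combined metric still flags the node, which is the point of evaluating trust per layer.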
Are Escherichia coli Pathotypes Still Relevant in the Era of Whole-Genome Sequencing?
Robins-Browne, Roy M.; Holt, Kathryn E.; Ingle, Danielle J.; Hocking, Dianna M.; Yang, Ji; Tauschek, Marija
2016-01-01
The empirical and pragmatic nature of diagnostic microbiology has given rise to several different schemes to subtype E. coli, including biotyping, serotyping, and pathotyping. These schemes have proved invaluable in identifying and tracking outbreaks, and for prognostication in individual cases of infection, but they are imprecise and potentially misleading due to the malleability and continuous evolution of E. coli. Whole genome sequencing can be used to accurately determine E. coli subtypes that are based on allelic variation or differences in gene content, such as serotyping and pathotyping. Whole genome sequencing also provides information about single nucleotide polymorphisms in the core genome of E. coli, which form the basis of sequence typing, and is more reliable than other systems for tracking the evolution and spread of individual strains. A typing scheme for E. coli based on genome sequences that includes elements of both the core and accessory genomes should reduce typing anomalies and promote understanding of how different varieties of E. coli spread and cause disease. Such a scheme could also define pathotypes more precisely than current methods.
Sensors with centroid-based common sensing scheme and their multiplexing
NASA Astrophysics Data System (ADS)
Berkcan, Ertugrul; Tiemann, Jerome J.; Brooksby, Glen W.
1993-03-01
The ability to multiplex sensors with different measurands but a common sensing scheme is important in aircraft and aircraft-engine applications; unifying the sensors into a common interface has major implications for weight, cost, and reliability. A new class of sensors based on a common sensing scheme, together with their electro-optic (E/O) interface, has been developed. The approach detects the location of the centroid of a beam of light; the set of fiber-optic sensors using this sensing scheme includes linear and rotary position, temperature, and pressure sensors, as well as duct Mach number sensors. The sensing scheme provides immunity to intensity variations of the source or those caused by environmental effects on the fiber. A spatially multiplexed common electro-optic interface for the sensors has been demonstrated with a position sensor and a temperature sensor.
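The intensity immunity claimed above follows directly from the centroid being a ratio of weighted sums, so any uniform intensity scale cancels. A minimal sketch:

```python
import numpy as np

def centroid(positions, intensity):
    """Centroid of a light spot on a detector array; the measurand is
    encoded in the centroid position, not in the absolute intensity."""
    intensity = np.asarray(intensity, dtype=float)
    return float(np.sum(positions * intensity) / np.sum(intensity))

# Gaussian spot centered at pixel 12.5; halving the received power (e.g.
# from fiber bending loss) leaves the centroid, hence the reading, unchanged.
x = np.arange(32)
spot = np.exp(-0.5 * ((x - 12.5) / 2.0) ** 2)
c_full = centroid(x, spot)
c_dim = centroid(x, 0.5 * spot)
```

Any of the listed measurands (position, temperature, pressure, Mach number) only needs a transducer that moves the spot, so one interface can read them all.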
Gas Evolution Dynamics in Godunov-Type Schemes and Analysis of Numerical Shock Instability
NASA Technical Reports Server (NTRS)
Xu, Kun
1999-01-01
In this paper we study the gas evolution dynamics of exact and approximate Riemann solvers, e.g., the Flux Vector Splitting (FVS) and Flux Difference Splitting (FDS) schemes. Since the FVS scheme and the Kinetic Flux Vector Splitting (KFVS) scheme have the same physical mechanism and similar flux functions, the weaknesses and advantages of the FVS scheme can be closely observed through analysis of the discretized KFVS scheme. The subtle dissipative mechanism of the Godunov method in the 2D case is also analyzed, and the physical reason for shock instability, i.e., the carbuncle phenomenon and odd-even decoupling, is presented.
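The flux-splitting idea underlying the FVS family can be illustrated in its simplest setting, 1D linear advection: the flux is split by wave direction and each part is differenced upwind. This is a sketch of the general FVS mechanism only, not of the paper's KFVS analysis.

```python
import numpy as np

def fvs_step(u, a, dt, dx):
    """One step of flux-vector-splitting upwinding for u_t + a u_x = 0
    on a periodic grid. The flux f = a*u splits into right-going f+
    and left-going f- parts."""
    ap = 0.5 * (a + abs(a))   # positive part of the wave speed
    am = 0.5 * (a - abs(a))   # negative part
    fp = ap * u               # f+ is differenced backward (upwind)
    fm = am * u               # f- is differenced forward (upwind)
    dudt = -(fp - np.roll(fp, 1)) / dx - (np.roll(fm, -1) - fm) / dx
    return u + dt * dudt

# Advect a square pulse to the right at CFL 0.5.
n, a, dx = 100, 1.0, 1.0 / 100
u = np.where((np.arange(n) > 20) & (np.arange(n) < 40), 1.0, 0.0)
for _ in range(50):
    u = fvs_step(u, a, dt=0.5 * dx, dx=dx)
```

The conservative form keeps the total mass exact while the one-sided differencing supplies the dissipation that FVS-type schemes are known for.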
Collision Resolution Scheme with Offset for Improved Performance of Heterogeneous WLAN
NASA Astrophysics Data System (ADS)
Upadhyay, Raksha; Vyavahare, Prakash D.; Tokekar, Sanjiv
2016-03-01
The CSMA/CA-based DCF of the 802.11 MAC layer employs a best-effort delivery model in which all stations compete for channel access with the same priority. Heterogeneous conditions result in unfairness among stations and degraded throughput; providing different priorities to different applications for the required quality of service in heterogeneous networks is therefore a challenging task. This paper proposes a collision resolution scheme built on the novel concept of an offset, which is suitable for heterogeneous networks. A station selecting its random contention value with an offset has a reduced probability of collision. An expression for the optimum value of the offset is also derived. Results show that the proposed scheme, when applied to heterogeneous networks, improves throughput and fairness over the conventional scheme, and that it also exhibits higher throughput and fairness with reduced delay in homogeneous networks.
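The effect of an offset on collisions can be sketched with a toy Monte Carlo contention model: low-priority stations draw their backoff from a window shifted by the offset, so they rarely contend for the same earliest slot as high-priority stations. Window sizes and station counts below are illustrative, not the paper's parameters or its optimum offset.

```python
import random

def collision_prob(n_high, n_low, cw=16, offset=16, trials=20000, seed=3):
    """Estimate the probability that the winning backoff slot is shared,
    when high-priority stations draw from [0, cw) and low-priority
    stations draw from [offset, offset + cw)."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(trials):
        slots = [rng.randrange(cw) for _ in range(n_high)] + \
                [offset + rng.randrange(cw) for _ in range(n_low)]
        winner = min(slots)
        if slots.count(winner) > 1:   # two or more stations transmit together
            collisions += 1
    return collisions / trials

p_offset = collision_prob(n_high=5, n_low=5, offset=16)
p_shared = collision_prob(n_high=5, n_low=5, offset=0)  # single shared window
```

Separating the contention windows both prioritizes the high-priority class and lowers the overall collision probability relative to everyone drawing from one window.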
High-order conservative finite difference GLM-MHD schemes for cell-centered MHD
NASA Astrophysics Data System (ADS)
Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi
2010-08-01
We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes as well as slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to existing approaches, it completely avoids the CPU-intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
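The parabolic part of the GLM correction can be sketched as an operator-split exponential damping of the Lagrange multiplier psi (a simplified reading of the Dedner et al. source term; variable names are illustrative):

```python
import numpy as np

def glm_parabolic_damping(psi, dt, ch, cp):
    """Operator-split parabolic step of a Dedner-style GLM correction:
    d(psi)/dt = -(ch/cp)**2 * psi, integrated exactly over dt, where ch and
    cp set the divergence-wave propagation and damping rates."""
    return psi * np.exp(-dt * (ch / cp) ** 2)
```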
Vector quantization for efficient coding of upper subbands
NASA Technical Reports Server (NTRS)
Zeng, W. J.; Huang, Y. F.
1994-01-01
This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
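The core VQ operation all three schemes build on, nearest-codeword encoding, can be sketched as follows (a generic Euclidean encoder, not the paper's specific quadtree or finite-state variants):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each input vector to the index of its nearest codeword
    (squared Euclidean distance)."""
    # pairwise squared distances, shape (n_blocks, n_codewords)
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```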
Two modified symplectic partitioned Runge-Kutta methods for solving the elastic wave equation
NASA Astrophysics Data System (ADS)
Su, Bo; Tuo, Xianguo; Xu, Ling
2017-08-01
Based on a modified strategy, two modified symplectic partitioned Runge-Kutta (PRK) methods are proposed for the temporal discretization of the elastic wave equation. The two symplectic schemes are similar in form but different in nature. After the spatial discretization of the elastic wave equation, the ordinary Hamiltonian formulation for the elastic wave equation is presented. The PRK scheme is then applied for time integration. An additional term associated with spatial discretization is inserted into the different stages of the PRK scheme. Theoretical analyses are conducted to evaluate the numerical dispersion and stability of the two novel PRK methods. A finite difference method is used to approximate the spatial derivatives, since the two schemes are independent of the spatial discretization technique used. The numerical solutions computed by the two new schemes are compared with those computed by a conventional symplectic PRK scheme. The numerical results verify the new methods and show that they are superior to conventional methods in seismic wave modeling.
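The simplest symplectic PRK integrator for a semi-discretized wave equation is the Stormer-Verlet scheme, shown here as a generic sketch (not the paper's modified PRK methods):

```python
import numpy as np

def verlet_step(u, v, dt, L):
    """One Stormer-Verlet step for the semi-discrete system u' = v, v' = L @ u,
    where L is the (negative-definite) spatial discretization matrix.
    Verlet is the classic two-stage symplectic partitioned Runge-Kutta pair."""
    v_half = v + 0.5 * dt * (L @ u)
    u_new = u + dt * v_half
    v_new = v_half + 0.5 * dt * (L @ u_new)
    return u_new, v_new
```

Because the scheme is symplectic, the discrete energy stays bounded over long integrations instead of drifting, which is the property motivating such methods in seismic modeling.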
Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve
1987-01-01
Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.
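The explicit multistage Runge-Kutta fine-grid scheme shared by the three methods can be sketched as follows; the stage coefficients here are a common textbook choice, not necessarily those of the cited codes:

```python
def rk_multistage(u0, residual, dt, alphas=(1/4, 1/3, 1/2, 1.0)):
    """Jameson-type explicit multistage Runge-Kutta update: every stage
    re-evaluates the residual but always corrects from the initial state u0."""
    u = u0
    for a in alphas:
        u = u0 - a * dt * residual(u)
    return u
```

For the linear model residual R(u) = u, this four-stage scheme reproduces the exponential decay exp(-dt) to fourth order, which is why it is a popular smoother.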
NASA Technical Reports Server (NTRS)
Myhill, Elizabeth A.; Boss, Alan P.
1993-01-01
In Boss & Myhill (1992) we described the derivation and testing of a spherical coordinate-based scheme for solving the hydrodynamic equations governing the gravitational collapse of nonisothermal, nonmagnetic, inviscid, radiative, three-dimensional protostellar clouds. Here we discuss a Cartesian coordinate-based scheme based on the same set of hydrodynamic equations. As with the spherical coordinate-based code, the Cartesian coordinate-based scheme employs explicit Eulerian methods which are both spatially and temporally second-order accurate. We begin by describing the hydrodynamic equations in Cartesian coordinates and the numerical methods used in this particular code. Following Finn & Hawley (1989), we pay special attention to the proper implementation of high-order accuracy, finite difference methods. We evaluate the ability of the Cartesian scheme to handle shock propagation problems, and through convergence testing, we show that the code is indeed second-order accurate. To compare the Cartesian scheme discussed here with the spherical coordinate-based scheme discussed in Boss & Myhill (1992), the two codes are used to calculate the standard isothermal collapse test case described by Bodenheimer & Boss (1981). We find that with the improved codes, the intermediate bar-configuration found previously disappears, and the cloud fragments directly into a binary protostellar system. Finally, we present the results from both codes of a new test for nonisothermal protostellar collapse.
A key heterogeneous structure of fractal networks based on inverse renormalization scheme
NASA Astrophysics Data System (ADS)
Bai, Yanan; Huang, Ning; Sun, Lina
2018-06-01
The self-similarity property of complex networks was found through the application of renormalization group theory. Based on this theory, network topologies can be classified into universality classes in the space of configurations. Conversely, through the inverse renormalization scheme, a given primitive structure can grow into a pure fractal network; adding different types of shortcuts then yields different characteristics of complex networks. However, the effect of the primitive structure on network structural properties has received less attention. In this paper, we introduce a degree variance index to measure the dispersion of node degrees in the primitive structure, and investigate the effect of the primitive structure on network structural properties quantified by network efficiency. Numerical simulations and theoretical analysis show that the primitive structure is a key heterogeneous structure of networks generated by the inverse renormalization scheme, whether or not shortcuts are added, and that network efficiency is positively correlated with the degree variance of the primitive structure.
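Taking the degree variance index at its name, it can be sketched as the population variance of the node degrees of the primitive structure (assuming this standard definition; the paper may normalize differently):

```python
def degree_variance(degrees):
    """Population variance of node degrees: a dispersion measure for the
    primitive structure. A regular graph scores 0; a star scores high."""
    n = len(degrees)
    mean = sum(degrees) / n
    return sum((d - mean) ** 2 for d in degrees) / n
```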
Lee, Kilhung
2010-01-01
This paper presents a medium access control and scheduling scheme for wireless sensor networks. It uses time trees for sending data from the sensor nodes to the base station. For an energy-efficient operation of the sensor network in a distributed manner, time trees are built to reduce the collision probability and to minimize the total energy required to send data to the base station. A time tree is a data gathering tree in which the base station is the root and each sensor node is either a relaying or a leaf node of the tree. Each tree operates on a different time schedule with possibly different activation rates. Through simulation, the proposed scheme that uses time trees shows better characteristics under burst traffic than the previous energy and data arrival rate based scheme.
NASA Astrophysics Data System (ADS)
Kim, Jae Wook
2013-05-01
This paper proposes a novel systematic approach for the parallelization of pentadiagonal compact finite-difference schemes and filters based on domain decomposition. The proposed approach allows a pentadiagonal banded matrix system to be split into quasi-disjoint subsystems by using a linear-algebraic transformation technique. As a result the inversion of pentadiagonal matrices can be implemented within each subdomain in an independent manner subject to a conventional halo-exchange process. The proposed matrix transformation leads to new subdomain boundary (SB) compact schemes and filters that require three halo terms to exchange with neighboring subdomains. The internode communication overhead in the present approach is equivalent to that of standard explicit schemes and filters based on seven-point discretization stencils. The new SB compact schemes and filters demand additional arithmetic operations compared to the original serial ones. However, it is shown that the additional cost becomes sufficiently low by choosing optimal sizes of their discretization stencils. Compared to earlier published results, the proposed SB compact schemes and filters successfully reduce parallelization artifacts arising from subdomain boundaries to a level sufficiently negligible for sophisticated aeroacoustic simulations without degrading parallel efficiency. The overall performance and parallel efficiency of the proposed approach are demonstrated by stringent benchmark tests.
Hybrid Upwinding for Two-Phase Flow in Heterogeneous Porous Media with Buoyancy and Capillarity
NASA Astrophysics Data System (ADS)
Hamon, F. P.; Mallison, B.; Tchelepi, H.
2016-12-01
In subsurface flow simulation, efficient discretization schemes for the partial differential equations governing multiphase flow and transport are critical. For highly heterogeneous porous media, the temporal discretization of choice is often the unconditionally stable fully implicit (backward-Euler) method. In this scheme, the simultaneous update of all the degrees of freedom requires solving large algebraic nonlinear systems at each time step using Newton's method. This is computationally expensive, especially in the presence of strong capillary effects driven by abrupt changes in porosity and permeability between different rock types. Therefore, discretization schemes that reduce the simulation cost by improving the nonlinear convergence rate are highly desirable. To speed up nonlinear convergence, we present an efficient fully implicit finite-volume scheme for immiscible two-phase flow in the presence of strong capillary forces. In this scheme, the discrete viscous, buoyancy, and capillary spatial terms are evaluated separately based on physical considerations. We build on previous work on Implicit Hybrid Upwinding (IHU) by using the upstream saturations with respect to the total velocity to compute the relative permeabilities in the viscous term, and by determining the directionality of the buoyancy term based on the phase density differences. The capillary numerical flux is decomposed into a rock- and geometry-dependent transmissibility factor, a nonlinear capillary diffusion coefficient, and an approximation of the saturation gradient. Combining the viscous, buoyancy, and capillary terms, we obtain a numerical flux that is consistent, bounded, differentiable, and monotone for homogeneous one-dimensional flow. The proposed scheme also accounts for spatially discontinuous capillary pressure functions. 
Specifically, at the interface between two rock types, the numerical scheme accurately honors the entry pressure condition by solving a local nonlinear problem to compute the numerical flux. Heterogeneous numerical tests demonstrate that this extended IHU scheme is non-oscillatory and convergent upon refinement. They also illustrate the superior accuracy and nonlinear convergence rate of the IHU scheme compared with the standard phase-based upstream weighting approach.
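The viscous part of the IHU flux described above can be sketched for a single interface (scalar placeholder mobility function; names are illustrative, and buoyancy and capillary terms are omitted):

```python
def viscous_flux_ihu(u_total, s_left, s_right, mobility):
    """Implicit Hybrid Upwinding viscous flux (sketch): the mobility (relative
    permeability) is evaluated at the saturation upstream with respect to the
    TOTAL velocity, independent of gravity or capillarity directions."""
    s_up = s_left if u_total >= 0.0 else s_right
    return u_total * mobility(s_up)
```

Evaluating the viscous term this way keeps the flux differentiable and monotone in the saturations, which is what improves Newton convergence relative to phase-based upstream weighting.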
Evaluation of the Performance of the Hybrid Lattice Boltzmann Based Numerical Flux
NASA Astrophysics Data System (ADS)
Zheng, H. W.; Shu, C.
2016-06-01
It is well known that the numerical scheme is a key factor in the stability and accuracy of a Navier-Stokes solver. Recently, a new hybrid lattice Boltzmann flux solver (HLBFS) was developed by Shu's group. It combines two different LBFS schemes by a switch function, and it solves the Boltzmann equation instead of the Euler equation. In this article, the main objective is to evaluate the ability of this HLBFS scheme with our in-house cell-centered hybrid-mesh based Navier-Stokes code. Its performance is examined on several widely used benchmark test cases. Comparisons between calculated and experimental results show that the scheme can capture the shock wave as well as resolve the boundary layer.
NASA Astrophysics Data System (ADS)
Skare, Stefan; Hedehus, Maj; Moseley, Michael E.; Li, Tie-Qiang
2000-12-01
Diffusion tensor mapping with MRI can noninvasively track neural connectivity and has great potential for neural scientific research and clinical applications. For each diffusion tensor imaging (DTI) data acquisition scheme, the diffusion tensor is related to the measured apparent diffusion coefficients (ADC) by a transformation matrix. With theoretical analysis we demonstrate that the noise performance of a DTI scheme is dependent on the condition number of the transformation matrix. To test the theoretical framework, we compared the noise performances of different DTI schemes using Monte-Carlo computer simulations and experimental DTI measurements. Both the simulation and the experimental results confirmed that the noise performances of different DTI schemes are significantly correlated with the condition number of the associated transformation matrices. We therefore applied numerical algorithms to optimize a DTI scheme by minimizing the condition number, hence improving the robustness to experimental noise. In the determination of anisotropic diffusion tensors with different orientations, MRI data acquisitions using a single optimum b value based on the mean diffusivity can produce ADC maps with regional differences in noise level. This will give rise to rotational variances of eigenvalues and anisotropy when diffusion tensor mapping is performed using a DTI scheme with a limited number of diffusion-weighting gradient directions. To reduce this type of artifact, a DTI scheme with not only a small condition number but also a large number of evenly distributed diffusion-weighting gradients in 3D is preferable.
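The transformation matrix and its condition number can be sketched for unit diffusion-gradient directions, using the standard DTI design-matrix convention (the paper's matrix may include b-value scaling):

```python
import numpy as np

def dti_design_matrix(directions):
    """Matrix relating the 6 unique tensor elements to measured ADCs.
    For a unit gradient g, the row is [gx^2, gy^2, gz^2, 2gxgy, 2gxgz, 2gygz]."""
    g = np.asarray(directions, dtype=float)
    return np.column_stack([
        g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
    ])

def scheme_condition_number(directions):
    """Condition number of a DTI scheme: the noise-amplification figure of
    merit that the abstract proposes to minimize."""
    return np.linalg.cond(dti_design_matrix(directions))
```

For the classic six-direction dual-gradient scheme the condition number evaluates to exactly 2, and adding well-spread directions generally lowers it further.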
NASA Technical Reports Server (NTRS)
Natarajan, Murali; Fairlie, T. Duncan; Dwyer Cianciolo, Alicia; Smith, Michael D.
2015-01-01
We use the mesoscale modeling capability of the Mars Weather Research and Forecasting (MarsWRF) model to study the sensitivity of the simulated Martian lower atmosphere to differences in the parameterization of the planetary boundary layer (PBL). Characterization of the Martian atmosphere and realistic representation of processes such as mixing of tracers like dust depend on how well the model reproduces the evolution of the PBL structure. MarsWRF is based on the NCAR WRF model and it retains some of the PBL schemes available in the earth version. Published studies have examined the performance of different PBL schemes in NCAR WRF with the help of observations. Currently such assessments are not feasible for Martian atmospheric models due to lack of observations. It is of interest though to study the sensitivity of the model to PBL parameterization. Typically, for standard Martian atmospheric simulations, we have used the Medium Range Forecast (MRF) PBL scheme, which considers a correction term to the vertical gradients to incorporate nonlocal effects. For this study, we have also used two other parameterizations, a non-local closure scheme called the Yonsei University (YSU) PBL scheme and a turbulent kinetic energy closure scheme called the Mellor-Yamada-Janjic (MYJ) PBL scheme. We will present intercomparisons of the near surface temperature profiles, boundary layer heights, and winds obtained from the different simulations. We plan to use available temperature observations from the Mini TES instrument onboard the rovers Spirit and Opportunity in evaluating the model results.
Watermarking protocols for authentication and ownership protection based on timestamps and holograms
NASA Astrophysics Data System (ADS)
Dittmann, Jana; Steinebach, Martin; Croce Ferri, Lucilla
2002-04-01
Digital watermarking has become an accepted technology for enabling multimedia protection schemes. One problem here is the security of these schemes. Without a suitable framework, watermarks can be replaced and manipulated. We discuss different protocols providing security against rightful-ownership attacks and other fraud attempts. We compare the characteristics of existing protocols for different media, like direct embedding or seed based, and the required attributes of the watermarking technology, like robustness or payload. We introduce two new media-independent protocol schemes for rightful ownership authentication. With the first scheme we ensure the security of digital watermarks used for ownership protection with a combination of two watermarks: a first watermark from the copyright holder and a second watermark from a Trusted Third Party (TTP). It is based on hologram embedding and the watermark consists of e.g. a company logo. As an example we use digital images and specify the properties of the embedded additional security information. We identify components necessary for the security protocol like timestamp, PKI and cryptographic algorithms. The second scheme is used for authentication. It is designed for invertible watermarking applications which require high data integrity. We combine digital signature schemes and digital watermarking to provide a publicly verifiable integrity. The original data can only be reproduced with a secret key. Both approaches provide solutions for copyright and authentication watermarking and are introduced for image data but can be easily adopted for video and audio data as well.
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
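The piecewise strategy for removable singularities can be illustrated with the first divided difference of the exponential, (exp(z) - 1)/z, switching to a short Taylor series near z = 0; the threshold and truncation order here are illustrative, not the paper's optimal choices:

```python
import math

def phi1(z, tol=1e-8):
    """First divided difference of exp at (z, 0): (exp(z) - 1)/z. The closed
    form loses digits to cancellation near the removable singularity z = 0,
    so a truncated series is used in a prescribed neighbourhood instead."""
    if abs(z) > tol ** 0.25:  # far from the singularity: closed form is safe
        return (math.exp(z) - 1.0) / z
    # series 1 + z/2 + z^2/6 + z^3/24; truncation error ~ z^4/120 < tol/120
    return 1.0 + z / 2.0 + z * z / 6.0 + z ** 3 / 24.0
```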
A fourth order accurate finite difference scheme for the computation of elastic waves
NASA Technical Reports Server (NTRS)
Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.
1986-01-01
A finite difference scheme for elastic waves is introduced. The model is based on the first-order system of equations for the velocities and stresses. The differencing is fourth-order accurate in the spatial derivatives and second-order accurate in time. The model is tested on a series of examples including the Lamb problem, scattering from plane interfaces and scattering from a fluid-elastic interface. The scheme is shown to be effective for these problems. The accuracy and stability are insensitive to the Poisson ratio. For the class of problems considered here it is found that the fourth-order scheme requires two-thirds to one-half the resolution of a typical second-order scheme to give comparable accuracy.
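A standard fourth-order centered first-derivative stencil of the kind used in such schemes, on a periodic grid for simplicity:

```python
import numpy as np

def d1_fourth_order(f, dx):
    """Fourth-order accurate centered first derivative on a periodic grid:
    f'(x_i) ~ (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 dx)."""
    return (-np.roll(f, -2) + 8 * np.roll(f, -1)
            - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12.0 * dx)
```

The leading truncation error scales as dx**4, which is what buys the coarser grids (two-thirds to one-half the resolution) cited above.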
NASA Astrophysics Data System (ADS)
Maiti, Anup Kumar; Nath Roy, Jitendra; Mukhopadhyay, Sourangshu
2007-08-01
In the field of optical computing and parallel information processing, several number systems have been used for different arithmetic and algebraic operations. An efficient conversion scheme from one number system to another is therefore very important. The modified trinary number (MTN) system has already taken a significant role in carry- and borrow-free arithmetic operations. In this communication, we propose a tree-net architecture based all-optical conversion scheme from a binary number to its MTN form. Optical switches using nonlinear material (NLM) play an important role.
Mashup Scheme Design of Map Tiles Using Lightweight Open Source Webgis Platform
NASA Astrophysics Data System (ADS)
Hu, T.; Fan, J.; He, H.; Qin, L.; Li, G.
2018-04-01
To address the difficulty of using existing commercial Geographic Information System platforms to integrate multi-source image data, this research proposes the loading of multi-source local tile data based on CesiumJS and examines the tile data organization mechanisms and spatial reference differences of the CesiumJS platform, as well as various tile data sources, such as Google Maps, Map World, and Bing Maps. Two types of tile data loading schemes have been designed for the mashup of tiles: the single data source loading scheme and the multi-data source loading scheme. The multiple sources of digital map tiles used in this paper cover two different but mainstream spatial references, the WGS84 coordinate system and the Web Mercator coordinate system. According to the experimental results, the single data source loading scheme and the multi-data source loading scheme with the same spatial coordinate system showed favorable visualization effects; however, the multi-data source loading scheme was prone to tile image deformation when loading multi-source tile data with different spatial references. The resulting method provides a low-cost and highly flexible solution for small and medium-scale GIS programs and has certain potential for practical application. The problem of deformation during the transition between different spatial references is an important topic for further research.
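The spatial-reference mismatch behind the reported tile deformation comes down to the WGS84 to Web Mercator projection, which can be sketched as:

```python
import math

def wgs84_to_web_mercator(lon_deg, lat_deg):
    """Project WGS84 longitude/latitude (degrees) to Web Mercator meters
    (spherical formula, EPSG:3857 convention). Tiles cut in one reference and
    drawn in the other are stretched in latitude, hence the deformation."""
    R = 6378137.0  # WGS84 semi-major axis used by the spherical Mercator
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4.0 + math.radians(lat_deg) / 2.0))
    return x, y
```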
ERIC Educational Resources Information Center
Douvropoulos, Theodosios G.
2012-01-01
An approximate formula for the period of pendulum motion beyond the small-amplitude regime is obtained based on physical arguments. Two schemes of different accuracy are developed: in the first, less accurate scheme, emphasis is placed on the non-quadratic form of the potential in connection to isochronism, and a specific form of a generic…
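For reference, the exact large-amplitude period has a compact closed form via the arithmetic-geometric mean, against which approximate formulas of the kind described can be checked:

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b (converges quadratically)."""
    while abs(a - b) > tol:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return a

def pendulum_period(theta0, length=1.0, g=9.81):
    """Exact pendulum period for amplitude theta0 (radians):
    T = 2*pi*sqrt(L/g) / agm(1, cos(theta0/2))."""
    return 2.0 * math.pi * math.sqrt(length / g) / agm(1.0, math.cos(theta0 / 2.0))
```

At 90 degrees of amplitude the period is already about 18% longer than the small-angle value, which is the regime the approximate formulas target.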
NASA Astrophysics Data System (ADS)
Gundreddy, Rohith Reddy; Tan, Maxine; Qui, Yuchen; Zheng, Bin
2015-03-01
The purpose of this study is to develop and test a new content-based image retrieval (CBIR) scheme that achieves higher reproducibility when implemented in an interactive computer-aided diagnosis (CAD) system without significantly reducing lesion classification performance. This is a new Fourier transform based CBIR algorithm that determines the image similarity of two regions of interest (ROI) based on the difference of the average regional image pixel value distribution in the two Fourier transform mapped images under comparison. A reference image database involving 227 ROIs depicting verified soft-tissue breast lesions was used. For each testing ROI, the queried lesion center was systematically shifted from 10 to 50 pixels to simulate the inter-user variation in querying a suspicious lesion center when using an interactive CAD system. The lesion classification performance and reproducibility under queried lesion center shifts were assessed and compared among three CBIR schemes based on the Fourier transform, mutual information and Pearson correlation. Each CBIR scheme retrieved the 10 most similar reference ROIs and computed a likelihood score of the queried ROI depicting a malignant lesion. The experimental results showed that the three CBIR schemes yielded very comparable lesion classification performance as measured by the areas under the ROC curves, with p-values greater than 0.498. However, the CBIR scheme using the Fourier transform yielded the highest invariance to both queried lesion center shift and lesion size change. This study demonstrated the feasibility of improving the robustness of interactive CAD systems by adding a new Fourier transform based image feature to CBIR schemes.
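A hypothetical distillation of a Fourier-transform based similarity measure (not the paper's exact regional pixel-distribution difference) shows why such features resist queried-center shifts: the Fourier magnitude spectrum is invariant to circular translation.

```python
import numpy as np

def fourier_similarity(roi_a, roi_b):
    """Similarity of two ROIs as the negated mean absolute difference of
    their 2D Fourier magnitude spectra. Shifting an ROI only changes the
    phase of its FFT, so the score is unchanged under circular shifts."""
    fa = np.abs(np.fft.fft2(roi_a))
    fb = np.abs(np.fft.fft2(roi_b))
    return -float(np.mean(np.abs(fa - fb)))
```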
Mapping Mangrove Density from Rapideye Data in Central America
NASA Astrophysics Data System (ADS)
Son, Nguyen-Thanh; Chen, Chi-Farn; Chen, Cheng-Ru
2017-06-01
Mangrove forests provide a wide range of socioeconomic and ecological services for coastal communities. Extensive aquaculture development of mangrove waters in many developing countries has constantly ignored the services of mangrove ecosystems, leading to unintended environmental consequences. Monitoring the current status and distribution of mangrove forests is deemed important for evaluating forest management strategies. This study aims to delineate the density distribution of mangrove forests in the Gulf of Fonseca, Central America with Rapideye data using support vector machines (SVM). The data collected in 2012 for density classification of mangrove forests were processed based on four different band combination schemes: scheme-1 (bands 1-3, 5, excluding the red-edge band 4), scheme-2 (bands 1-5), scheme-3 (bands 1-3, 5, incorporating the normalized difference vegetation index, NDVI), and scheme-4 (bands 1-3, 5, incorporating the normalized difference red-edge index, NDRI). We also tested the hypothesis that the Rapideye red-edge band could improve the classification results. Three main steps of data processing were employed: (1) data pre-processing, (2) image classification, and (3) accuracy assessment, to evaluate the contribution of the red-edge band to the accuracy of the classification results across these four schemes. The classification maps compared with the ground reference data indicated a slightly higher accuracy level for schemes 2 and 4. The overall accuracies and Kappa coefficients were 97% and 0.95 for scheme-2 and 96.9% and 0.95 for scheme-4, respectively.
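Both NDVI and the red-edge index NDRI used in schemes 3 and 4 are instances of the generic normalized difference of two bands:

```python
def normalized_difference(band_a, band_b):
    """Generic normalized difference index in [-1, 1].
    NDVI uses (NIR, red); a red-edge index substitutes the red-edge band
    for one of the two (exact band pairing per the sensor's definition)."""
    return (band_a - band_b) / (band_a + band_b)
```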
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1993-01-01
The objective of this study is to benchmark a four-engine clustered nozzle base flowfield with a computational fluid dynamics (CFD) model. The CFD model is a three-dimensional pressure-based, viscous flow formulation. An adaptive upwind scheme is employed for the spatial discretization. The upwind scheme is based on second and fourth order central differencing with adaptive artificial dissipation. Qualitative base flow features such as the reverse jet, wall jet, recompression shock, and plume-plume impingement have been captured. The computed quantitative flow properties such as the radial base pressure distribution, model centerline Mach number and static pressure variation, and base pressure characteristic curve agreed reasonably well with those of the measurement. Parametric study on the effect of grid resolution, turbulence model, inlet boundary condition and difference scheme on convective terms has been performed. The results showed that grid resolution had a strong influence on the accuracy of the base flowfield prediction.
Wavelet Based Protection Scheme for Multi Terminal Transmission System with PV and Wind Generation
NASA Astrophysics Data System (ADS)
Manju Sree, Y.; Goli, Ravi kumar; Ramaiah, V.
2017-08-01
Hybrid generation is a part of a large power system in which a number of sources, usually attached to power electronic converters, and clustered loads can operate independently of the main power system. A protection scheme against faults is crucial, since traditional overcurrent protection faces serious problems with the fault currents arising in this mode of operation. This paper adopts a new approach for the detection and discrimination of faults for multi-terminal transmission line protection in the presence of hybrid generation. A transient current based protection scheme is developed with the discrete wavelet transform. Fault indices of all phase currents at all terminals are obtained by analyzing the detail coefficients of the current signals using the bior1.5 mother wavelet. This scheme is tested for different types of faults and is found effective for the detection and discrimination of faults with various fault inception angles and fault impedances.
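A fault index of the kind described can be sketched as the energy of first-level detail (high-pass) coefficients; a Haar filter is used here as a stand-in for bior1.5, and the dyadic downsampling of a full DWT is omitted for brevity:

```python
import numpy as np

def detail_energy(current):
    """Fault index sketch: energy of first-level high-pass (detail)
    coefficients of a phase-current signal. Transients such as fault
    inception create sharp sample-to-sample jumps that high-pass filtering
    turns into large coefficients."""
    d = np.convolve(current, [0.5, -0.5], mode="valid")  # scaled differences
    return float(np.sum(d * d))
```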
NASA Astrophysics Data System (ADS)
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad
2017-07-01
This paper introduces a fractional-order total variation (FOTV) based model with three different weights in the fractional-order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, a highly non-linear partial differential equation (PDE), is obtained by minimization of the energy functional for image restoration. Two numerical schemes, namely an iterative scheme based on the dual theory and the majorization-minimization algorithm (MMA), are used. To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model by applying the trial and error method. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in visual improvement as well as an increase in the peak signal-to-noise ratio compared to corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology is slightly better than the iterative scheme.
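Discrete fractional-order derivatives of the kind underlying FOTV models are commonly built from Grunwald-Letnikov binomial weights; this is a generic sketch, and the paper's three weighted definitions may differ:

```python
def gl_weights(alpha, n):
    """First n Grunwald-Letnikov weights w_k = (-1)**k * C(alpha, k), via the
    recurrence w_k = w_{k-1} * (1 - (alpha + 1)/k). A discrete fractional
    derivative of order alpha is the weighted sum of shifted samples."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w
```

For alpha = 1 the weights collapse to the ordinary first difference [1, -1, 0, ...], so the integer-order derivative is recovered as a special case.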
Optical frequency comb based multi-band microwave frequency conversion for satellite applications.
Yang, Xinwu; Xu, Kun; Yin, Jie; Dai, Yitang; Yin, Feifei; Li, Jianqiang; Lu, Hua; Liu, Tao; Ji, Yuefeng
2014-01-13
Based on optical frequency combs (OFC), we propose an efficient and flexible multi-band frequency conversion scheme for satellite repeater applications. The underlying principle is to mix two coherent OFCs, one of which carries the input signal. By optically channelizing the mixed OFCs, the converted signal in different bands can be obtained in different channels. Alternatively, the scheme can be configured to generate multi-band local oscillators (LO) for wide distribution. Moreover, the scheme realizes simultaneous inter- and intra-band frequency conversion in a single structure and needs only three frequency-fixed microwave sources. We carry out a proof-of-concept experiment in which multiple LOs at 2 GHz, 10 GHz, 18 GHz, and 26 GHz are generated. A C-band signal of 6.1 GHz input to the proposed scheme is successfully converted to 4.1 GHz (C band), 3.9 GHz (C band) and 11.9 GHz (X band), etc. Compared with the back-to-back (B2B) case measured at 0 dBm input power, the proposed scheme shows a 9.3% error vector magnitude (EVM) degradation at each output channel. Furthermore, all channels satisfy the EVM limit over a very wide input power range.
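The inter- and intra-band conversions quoted above are consistent with ideal mixer arithmetic, e.g. the 6.1 GHz input against the 2, 10, and 18 GHz LOs:

```python
def mixer_products(f_in_ghz, f_lo_ghz):
    """Ideal-mixer output frequencies (GHz): the difference and sum products.
    A sketch of the frequency bookkeeping only, not the optical channelizer."""
    return abs(f_in_ghz - f_lo_ghz), f_in_ghz + f_lo_ghz
```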
NASA Astrophysics Data System (ADS)
Miki, Nobuhiko; Kishiyama, Yoshihisa; Higuchi, Kenichi; Sawahashi, Mamoru; Nakagawa, Masao
In the Evolved UTRA (UMTS Terrestrial Radio Access) downlink, Orthogonal Frequency Division Multiplexing (OFDM) based radio access was adopted because of its inherent immunity to multipath interference and its flexible accommodation of different spectrum arrangements. This paper presents the optimum adaptive modulation and channel coding (AMC) scheme when multiple resource blocks (RBs) are simultaneously assigned to the same user, assuming frequency- and time-domain channel-dependent scheduling in downlink OFDMA radio access with single-antenna transmission. We start by presenting selection methods for the modulation and coding scheme (MCS) employing mutual information, both for RB-common and RB-dependent modulation schemes. Simulation results show that, irrespective of the application of power adaptation to RB-dependent modulation, the improvement in the achievable throughput of the RB-dependent modulation scheme compared to that of the RB-common modulation scheme is slight, i.e., 4 to 5%. In addition, the number of required control signaling bits in the RB-dependent modulation scheme is greater than that of the RB-common modulation scheme. Therefore, we conclude that the RB-common modulation and channel coding rate scheme is preferable when multiple RBs of the same coded stream are assigned to one user in the case of single-antenna transmission.
NASA Astrophysics Data System (ADS)
Zhang, Sijin; Austin, Geoff; Sutherland-Stacey, Luke
2014-05-01
Reverse Kessler warm-rain processes were implemented within the Weather Research and Forecasting Model (WRF) and coupled with a Newtonian relaxation, or nudging, technique designed to improve quantitative precipitation forecasting (QPF) in New Zealand by making use of observed radar reflectivity and modest computing facilities. One of the reasons for developing such a scheme, rather than using 4D-Var for example, is that variational radar assimilation schemes in general, and 4D-Var in particular, require computational resources beyond the capability of most university groups and indeed some national forecasting centres of small countries like New Zealand. The new scheme adjusts the model water vapor mixing ratio profiles based on observed reflectivity at each time step within an assimilation time window. The scheme can be divided into the following steps: (i) the radar reflectivity is first converted to rain water; (ii) the rain water is used to derive the cloud water content according to the reverse Kessler scheme; (iii) the water vapor mixing ratio associated with the cloud water content is then calculated based on the saturation adjustment processes; and (iv) finally, the adjusted water vapor is nudged into the model and the model background is updated. Thirteen rainfall cases which occurred in the summer of 2011/2012 in New Zealand were used to evaluate the new scheme. Different forecast scores were calculated and showed that the new scheme was able to improve precipitation forecasts on average up to around 7 hours ahead, depending on the verification threshold.
Design and implementation of streaming media server cluster based on FFMpeg.
Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao
2015-01-01
Poor performance and network congestion are commonly observed in single-server streaming media systems. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations, and the balance among servers is maintained by a dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experimental results show that the server cluster system significantly alleviates network congestion and improves performance in comparison with the single-server system.
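Feedback-driven load balancing of this kind typically has each server report its utilization and the dispatcher route new clients to the least-loaded node. A hypothetical sketch (the weights, score formula, and function names are illustrative assumptions, not the paper's algorithm):

```python
# Hypothetical active-feedback dispatcher: servers report CPU, memory,
# and bandwidth utilization in [0, 1]; new clients go to the server
# with the lowest weighted composite load.
def load_score(cpu, mem, bw, w=(0.5, 0.3, 0.2)):
    """Weighted composite load; weights are illustrative."""
    return w[0]*cpu + w[1]*mem + w[2]*bw

def pick_server(reports):
    """reports: {name: (cpu, mem, bw)} -> name of least-loaded server."""
    return min(reports, key=lambda s: load_score(*reports[s]))

reports = {"s1": (0.9, 0.7, 0.8), "s2": (0.2, 0.3, 0.1), "s3": (0.5, 0.5, 0.5)}
print(pick_server(reports))  # s2, the least-loaded server
```

Service redirection would then hand an already-connected client the address returned by such a selection when its current server becomes overloaded.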
NASA Astrophysics Data System (ADS)
Kumar, Vivek; Raghurama Rao, S. V.
2008-04-01
Non-standard finite difference methods (NSFDMs), introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994], are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws by a novel utilization of the decoupled equations using characteristic variables. In the second part of this paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, which is an important feature for capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers-Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791-797] recently introduced an NSFDM in conservative form. This method captures shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy for this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal on Numerical Analysis 35 (6) (1998) 2250-2271], in which the accurate NSFDM is used as the basic scheme and a localized relaxation NSFDM is used as the supporting scheme, which acts like a filter. Relaxation schemes, introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications on Pure and Applied Mathematics 48 (1995) 235-276], are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term.
The relaxation parameter (λ) is chosen locally on the three-point stencil of the grid, which makes the proposed method more efficient. This composite scheme overcomes the problem of unphysical expansion shocks and captures shock waves with better accuracy than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons with popular numerical methods like the Roe scheme and ENO schemes.
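The conservation form the abstract emphasizes can be illustrated generically: a cell is updated only by the difference of interface fluxes, so total mass changes only through the boundaries. The sketch below uses a standard Lax-Friedrichs (Rusanov) flux for Burgers' equation purely as illustration; it is not the paper's NSFDM or relaxation scheme.

```python
# Generic conservative update for Burgers' equation u_t + (u^2/2)_x = 0.
# Each interior cell changes by a flux difference, so sum(u) changes only
# through boundary fluxes. (Illustrative Lax-Friedrichs flux, not the NSFDM.)
def step(u, dt, dx):
    f = [0.5 * v * v for v in u]
    a = max(abs(v) for v in u)                      # max wave speed
    flux = [0.5 * (f[i] + f[i + 1]) - 0.5 * a * (u[i + 1] - u[i])
            for i in range(len(u) - 1)]             # interface fluxes
    new = [u[0]]                                    # hold boundary cells fixed
    for i in range(1, len(u) - 1):
        new.append(u[i] - dt / dx * (flux[i] - flux[i - 1]))
    new.append(u[-1])
    return new

u = [1.0] * 10 + [0.0] * 10                         # right-moving shock data
for _ in range(5):
    u = step(u, dt=0.04, dx=0.1)
print(round(sum(u), 6))  # mass grows only by the left-boundary influx: 11.0
```

The telescoping flux differences are exactly what places captured discontinuities at the right locations, which is why the lack of conservation form in the original NSFDM mattered.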
Scheme, Erik J; Englehart, Kevin B
2013-07-01
When controlling a powered upper limb prosthesis, it is important not only to know how to move the device, but also when not to move. A novel approach to pattern recognition control, using a selective multiclass one-versus-one classification scheme, has been shown to be capable of rejecting unintended motions. This method was shown to outperform other popular classification schemes when presented with muscle contractions that did not correspond to desired actions. In this work, a 3-D Fitts' law test is proposed as a suitable alternative to using virtual limb environments for evaluating real-time myoelectric control performance. The test is used to compare the selective approach to a state-of-the-art linear discriminant analysis classification-based scheme. The framework is shown to obey Fitts' law for both control schemes, producing linear regression fits with high coefficients of determination (R^2 > 0.936). Additional performance metrics focused on quality of control are discussed and incorporated in the evaluation. Using this framework, the selective classification-based scheme is shown to produce significantly higher efficiency and completion rates, and significantly lower overshoot and stopping distances, with no significant difference in throughput.
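The throughput metric mentioned above is conventionally computed from the index of difficulty and the movement time. A minimal sketch using the common Shannon formulation (the exact formulas of the paper's 3-D protocol may differ):

```python
# Conventional Fitts' law metrics for target-acquisition tests.
import math

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(D/W + 1), in bits."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time):
    """Throughput = ID / MT, in bits per second."""
    return index_of_difficulty(distance, width) / movement_time

print(round(index_of_difficulty(7.0, 1.0), 3))  # 3.0 bits
print(round(throughput(7.0, 1.0, 1.5), 3))      # 2.0 bits/s
```

Regressing movement time against the index of difficulty across targets is what yields the linear fits (and R^2 values) reported in the abstract.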
Optimizing Scheme for Remote Preparation of Four-particle Cluster-like Entangled States
NASA Astrophysics Data System (ADS)
Wang, Dong; Ye, Liu
2011-09-01
Recently, Ma et al. (Opt. Commun. 283:2640, 2010) proposed a novel scheme for preparing a class of cluster-like entangled states based on a four-particle projective measurement. In this paper, we put forward a new and optimal scheme to realize the remote preparation of this class of cluster-like states with the aid of two bipartite partially entangled channels. Different from the previous scheme, we employ a two-particle projective measurement instead of a four-particle projective measurement during the preparation. Besides, the resource consumption of our scheme is computed, including the classical communication cost and the quantum resource consumption. Moreover, we discuss the features of our scheme and compare the resource consumption and operational complexity of the previous scheme and ours. The results show that our scheme is more economical and feasible than the previous one.
RICH: OPEN-SOURCE HYDRODYNAMIC SIMULATION ON A MOVING VORONOI MESH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yalinewich, Almog; Steinberg, Elad; Sari, Re’em
2015-02-01
We present here RICH, a state-of-the-art two-dimensional hydrodynamic code based on Godunov's method, on an unstructured moving mesh (the acronym stands for Racah Institute Computational Hydrodynamics). This code is largely based on the code AREPO. It differs from AREPO in the interpolation and time-advancement schemes, as well as in a novel parallelization scheme based on Voronoi tessellation. Using our code, we study the pros and cons of a moving mesh (in comparison to a static mesh). We also compare its accuracy to other codes. Specifically, we show that our implementation of external sources and our time-advancement scheme are more accurate and robust than AREPO's when the mesh is allowed to move. We performed a parameter study of the cell rounding mechanism (Lloyd iterations) and its effects. We find that in most cases a moving mesh gives better results than a static mesh, but this is not universally true. In the case where matter moves one way and a sound wave travels the other way (such that the wave is not moving relative to the grid), a static mesh gives better results than a moving mesh. We perform an analytic analysis for finite difference schemes which reveals that a Lagrangian simulation is better than an Eulerian simulation in the case of a highly supersonic flow. Moreover, we show that Voronoi-based moving mesh schemes suffer from a resolution-independent error due to inconsistencies between the flux calculation and the change in the area of a cell. Our code is publicly available as open source and designed in an object-oriented, user-friendly way that facilitates the incorporation of new algorithms and physical processes.
Impacts of pay for performance on the quality of primary care
Allen, T; Mason, T; Whittaker, W
2014-01-01
Increasingly, financial incentives are being used in health care as a result of increasing demand for health care coupled with fiscal pressures. Financial incentive schemes are one approach by which the system may incentivize providers of health care to improve productivity and/or adapt to better quality provision. Pay for performance (P4P) is an example of a financial incentive which seeks to link providers' payments to some measure of performance. This paper provides a discussion of the theoretical underpinnings of P4P, gives an overview of the health P4P evidence base, and provides a detailed case study of a particularly large scheme from the English National Health Service. Lessons are then drawn from the evidence base. Overall, we find that the evidence for the effectiveness of P4P for improving quality of care in primary care is mixed. This is to some extent due to the fact that the P4P schemes used in primary care are also mixed. There are many different schemes that incentivize different aspects of care in different ways and in different settings, making evaluation problematic. The Quality and Outcomes Framework in the United Kingdom is the largest example of P4P in primary care. Evidence suggests incentivized quality initially improved following the introduction of the Quality and Outcomes Framework, but this was short-lived. If P4P in primary care is to have a long-term future, the question of scheme effectiveness (perhaps incorporating the identification and assessment of potential risk factors) needs to be answered robustly. This would require that new schemes be designed from the outset to support their evaluation: control and treatment groups, coupled with before-and-after data. PMID:25061341
Error reduction program: A progress report
NASA Technical Reports Server (NTRS)
Syed, S. A.
1984-01-01
Five finite difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite volume method schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two-dimensional computer code, and their accuracy and stability were determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes depends on the angle that the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 more accurate at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.
High order cell-centered scheme totally based on cell average
NASA Astrophysics Data System (ADS)
Liu, Ze-Yu; Cai, Qing-Dong
2018-05-01
This work clarifies the concept of cell average by pointing out the differences between the cell average and the cell-centroid value, which are the averaged cell-centered value and the pointwise cell-centered value, respectively. An interpolation based on cell averages is constructed, and a high-order QUICK-like numerical scheme is designed for this interpolation. A new approach to error analysis, similar to Taylor expansion, is introduced in this work.
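The distinction the abstract draws can be made concrete in one dimension: for f(x) = x^2 on a cell of width h centered at c, the exact cell average is c^2 + h^2/12, which differs from the pointwise centroid value c^2 at second order in h. A minimal check (the function name is illustrative):

```python
# Cell average vs. cell-centroid value for f(x) = x^2 on [c - h/2, c + h/2].
def cell_average_x2(c, h):
    # Exact integral of x^2 over the cell, divided by the cell width h.
    a, b = c - h / 2, c + h / 2
    return (b**3 - a**3) / (3 * h)

c, h = 1.0, 0.1
avg = cell_average_x2(c, h)
print(avg - c**2)  # h^2/12, not zero: the two cell-centered values differ
```

Treating the two quantities as interchangeable therefore limits a scheme to second-order accuracy, which is why a high-order scheme must interpolate from true cell averages.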
Jung, Jaewook; Kim, Jiye; Choi, Younsung; Won, Dongho
2016-08-16
In wireless sensor networks (WSNs), a registered user can log in to the network and use a user authentication protocol to access data collected from the sensor nodes. Since WSNs are typically deployed in unattended environments and sensor nodes have limited resources, many researchers have made considerable efforts to design a secure and efficient user authentication process. Recently, Chen et al. proposed a secure user authentication scheme using symmetric key techniques for WSNs. They claim that their scheme assures high efficiency and security against different types of attacks. After careful analysis, however, we find that Chen et al.'s scheme is still vulnerable to smart card loss attack and is susceptible to denial of service attack, since simply comparing an entered ID with the ID stored in the smart card is not a valid form of verification. In addition, we observe that their scheme cannot preserve user anonymity. Furthermore, their scheme cannot quickly detect an incorrect password during the login phase, and this flaw incurs unnecessary communication and computational overhead. In this paper, we describe how these attacks work, and propose an enhanced anonymous user authentication and key agreement scheme based on a symmetric cryptosystem in WSNs to address all of the aforementioned vulnerabilities in Chen et al.'s scheme. Our analysis shows that the proposed scheme improves the level of security and is also more efficient than other related schemes.
Lord, Anton; Ehrlich, Stefan; Borchardt, Viola; Geisler, Daniel; Seidel, Maria; Huber, Stefanie; Murr, Julia; Walter, Martin
2016-03-30
Network-based analyses of deviant brain function have become extremely popular in psychiatric neuroimaging. Underpinning brain network analyses is the selection of appropriate regions of interest (ROIs). Although ROI selection is fundamental in network analysis, its impact on detecting disease effects remains unclear. We investigated the impact of parcellation choice when comparing results from different studies. We investigated the effects of anatomical (AAL) and literature-based (Dosenbach) parcellation schemes on comparability of group differences in 35 female patients with anorexia nervosa and 35 age- and sex-matched healthy controls. Global and local network properties, including network-based statistics (NBS), were assessed on resting state functional magnetic resonance imaging data obtained at 3T. Parcellation schemes were comparably consistent on global network properties, while NBS and local metrics differed in location, but not metric type. Location of local metric alterations varied for AAL (parietal and cingulate cortices) versus Dosenbach (insula, thalamus) parcellation approaches. However, consistency was observed for the occipital cortex. Patient-specific global network properties can be robustly observed using different parcellation schemes, while graph metrics characterizing impairments of individual nodes vary considerably. Therefore, the impact of parcellation choice on specific group differences varies depending on the level of network organization. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Unequal power allocation for JPEG transmission over MIMO systems.
Sabir, Muhammad Farooq; Bovik, Alan Conrad; Heath, Robert W
2010-02-01
With the introduction of multiple transmit and receive antennas in next-generation wireless systems, real-time image and video communication are expected to become quite common, since very high data rates will become available along with improved data reliability. New joint transmission and coding schemes that exploit the advantages of multiple antenna systems matched with source statistics are expected to be developed. Based on this idea, we present an unequal power allocation scheme for the transmission of JPEG compressed images over multiple-input multiple-output systems employing spatial multiplexing. The JPEG-compressed image is divided into different quality layers, and different layers are transmitted simultaneously from different transmit antennas using unequal transmit power, with a constraint on the total transmit power during any symbol period. Results show that our unequal power allocation scheme provides significant image quality improvement compared to different equal power allocation schemes, with peak signal-to-noise ratio gains as high as 14 dB at low signal-to-noise ratios.
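The core idea, more power to perceptually important layers under a fixed total-power budget, can be sketched with a simple proportional rule. This is a hypothetical illustration; the importance weights and the allocation rule in the paper are derived from source statistics and channel conditions, not fixed like this:

```python
# Hypothetical unequal power allocation across JPEG quality layers sent on
# different antennas, under a fixed total-power constraint per symbol period.
def allocate_power(importance, total_power):
    """Split total_power across layers in proportion to their importance."""
    s = sum(importance)
    return [total_power * w / s for w in importance]

layers = [4.0, 2.0, 1.0, 1.0]          # e.g. base/DC layer most important
p = allocate_power(layers, total_power=8.0)
print(p)  # [4.0, 2.0, 1.0, 1.0]; sums to the 8.0 budget
```

Equal power allocation is the special case of uniform importance weights, which is the baseline the scheme is compared against.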
Chang, Chung-Liang; Huang, Yi-Ming; Hong, Guo-Fong
2015-01-01
The direction of sunshine and the installation sites of environmental control facilities in a greenhouse result in different temperature and humidity levels in its various zones, so crop production quality is inconsistent. This study proposes a wireless-networked decentralized fuzzy control scheme to regulate the environmental parameters of the various culture zones within a greenhouse. The proposed scheme can create different environmental conditions for cultivating different crops in various zones and achieve diversification or standardization of crop production. A star-type wireless sensor network is utilized to communicate with each sensing node, actuator node, and control node in the various zones within the greenhouse. The fuzzy rule-based inference system regulates the environmental parameters for temperature and humidity based on real-time plant growth response data provided by a growth stage selector. The growth stage selector defines the control ranges of temperature and humidity of the various culture zones according to the leaf area of the plant, the number of leaves, and the cumulative amount of light. The experimental results show that the proposed scheme is stable and robust and provides a basis for future greenhouse applications. PMID:26569264
Physical Layer Secret-Key Generation Scheme for Transportation Security Sensor Network
Yang, Bin; Zhang, Jianfeng
2017-01-01
Wireless Sensor Networks (WSNs) are widely used in different disciplines, including transportation systems, agriculture field environment monitoring, healthcare systems, and industrial monitoring. The security challenge of the wireless communication link between sensor nodes is critical in WSNs. In this paper, we propose a new physical layer secret-key generation scheme for transportation security sensor network. The scheme is based on the cooperation of all the sensor nodes, thus avoiding the key distribution process, which increases the security of the system. Different passive and active attack models are analyzed in this paper. We also prove that when the cooperative node number is large enough, even when the eavesdropper is equipped with multiple antennas, the secret-key is still secure. Numerical results are performed to show the efficiency of the proposed scheme. PMID:28657588
On the feasibility of interoperable schemes in hand biometrics.
Morales, Aythami; González, Ester; Ferrer, Miguel A
2012-01-01
Personal recognition through hand-based biometrics has attracted the interest of many researchers in the last twenty years. A significant number of proposals based on different procedures and acquisition devices have been published in the literature. However, comparisons between devices and their interoperability have not been thoroughly studied. This paper tries to fill this gap by proposing procedures to improve the interoperability among different hand biometric schemes. The experiments were conducted on a database made up of 8,320 hand images acquired from six different hand biometric schemes, including a flat scanner, webcams at different wavelengths, high quality cameras, and contactless devices. Acquisitions on both sides of the hand were included. Our experiment includes four feature extraction methods which determine the best performance among the different scenarios for two of the most popular hand biometrics: hand shape and palm print. We propose smoothing techniques at the image and feature levels to reduce interdevice variability. Results suggest that comparative hand shape offers better performance in terms of interoperability than palm prints, but palm prints can be more effective when using similar sensors.
An Efficient Authenticated Key Transfer Scheme in Client-Server Networks
NASA Astrophysics Data System (ADS)
Shi, Runhua; Zhang, Shun
2017-10-01
In this paper, we present a novel authenticated key transfer scheme for client-server networks, which achieves two security goals: remote user authentication and session key establishment between the remote user and the server. In particular, the proposed scheme can subtly provide two fully different authentications, identity-based authentication and anonymous authentication, while the remote user holds only a single private key. Furthermore, our scheme needs to transmit only one round of messages from the remote user to the server, so it is very efficient in communication complexity. In addition, the most time-consuming computation in our scheme is elliptic curve scalar point multiplication, so it is also feasible even for mobile devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kupferman, R.
The author presents a numerical study of the axisymmetric Couette-Taylor problem using a finite difference scheme. The scheme is based on a staggered version of a second-order central-differencing method combined with a discrete Hodge projection. The use of central-differencing operators obviates the need to trace the characteristic flow associated with the hyperbolic terms. The result is a simple and efficient scheme which is readily adaptable to other geometries and to more complicated flows. The scheme exhibits competitive performance in terms of accuracy, resolution, and robustness. The numerical results agree accurately with linear stability theory and with previous numerical studies.
Multigrid schemes for viscous hypersonic flows
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Radespiel, R.
1993-01-01
Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm employs upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving two different hypersonic flow problems. Some new multigrid schemes, based on semicoarsening strategies, are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10^6.
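The two-level structure referred to above (smooth on the fine grid, correct on the coarse grid, smooth again) can be sketched for a simple model problem. The following is a minimal two-level cycle for the 1-D Poisson equation -u'' = f with a weighted Jacobi smoother; it illustrates the structure only, not the paper's upwind multistage schemes or semicoarsening:

```python
# Two-level multigrid sketch for -u'' = f on (0,1), Dirichlet boundaries.
def residual(u, f, h):
    n = len(u)
    r = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r[i] = f[i] - (2 * u[i] - left - right) / h**2
    return r

def jacobi(u, f, h, sweeps, w=2/3):
    """Weighted Jacobi: damps high-frequency error, i.e. acts as a smoother."""
    for _ in range(sweeps):
        r = residual(u, f, h)
        u = [u[i] + w * h * h / 2 * r[i] for i in range(len(u))]
    return u

def two_level(u, f, h):
    u = jacobi(u, f, h, 3)                          # pre-smooth
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2
    rc = [0.25 * r[2*j] + 0.5 * r[2*j+1] + 0.25 * r[2*j+2]
          for j in range(nc)]                       # full-weighting restriction
    ec = jacobi([0.0] * nc, rc, 2 * h, 50)          # approximate coarse solve
    e = [0.0] * len(u)                              # linear interpolation
    for j, v in enumerate(ec):
        e[2*j+1] += v
        e[2*j] += 0.5 * v
        e[2*j+2] += 0.5 * v
    u = [u[i] + e[i] for i in range(len(u))]
    return jacobi(u, f, h, 3)                       # post-smooth

n, h = 15, 1.0 / 16
f = [1.0] * n
u0 = [0.0] * n
u1 = two_level(u0, f, h)
r0 = max(abs(x) for x in residual(u0, f, h))
r1 = max(abs(x) for x in residual(u1, f, h))
print(round(r1 / r0, 3))  # well below 1: one cycle sharply cuts the residual
```

A real multigrid solver recurses the coarse solve over many levels; semicoarsening, as studied in the paper, coarsens in only one direction to handle high-aspect-ratio cells.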
NASA Astrophysics Data System (ADS)
Yu, Yuting; Cheng, Ming
2018-05-01
Considering the various possible configuration schemes of inertial measurement units (IMUs) in a strapdown inertial navigation system, we build systems from nine low-cost IMUs in both a tetrahedral skew configuration and a coaxial orthogonal configuration, and we calculate and simulate the performance index, reliability, and fault diagnosis ability of each navigation system. The analysis shows that the reliability and reconfiguration capability of the skew configuration are superior to those of the orthogonal configuration, while the performance index and fault diagnosis ability of the two systems are similar. This work provides a useful reference for the selection of configurations in engineering applications.
A new scheme of force reflecting control
NASA Technical Reports Server (NTRS)
Kim, Won S.
1992-01-01
A new scheme of force reflecting control has been developed that incorporates position-error-based force reflection and robot compliance control. The operator is provided with kinesthetic force feedback which is proportional to the position error between the operator-commanded and the actual position of the robot arm. Robot compliance control, which increases the effective compliance of the robot, is implemented by low-pass filtering the outputs of the force/torque sensor mounted on the base of the robot hand and using these signals to alter the operator's position command. This position-error-based force reflection scheme combined with shared compliance control has been implemented successfully in the Advanced Teleoperation system, which consists of dissimilar master-slave arms. Stability measurements have demonstrated unprecedentedly high force reflection gains of up to 2 or 3, even though the slave arm is much stiffer than the operator's hand holding the force reflecting hand controller. Peg-in-hole experiments were performed with eight different operating modes to evaluate the new force-reflecting control scheme. The best task performance resulted with this new control scheme.
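The two ingredients described above combine in one control step: the reflected force is proportional to the position error, and the low-pass-filtered sensor force shifts the position command. A minimal single-axis sketch (the gains, filter constant, and function names are illustrative assumptions, not the paper's values):

```python
# Sketch of position-error-based force reflection with shared compliance.
def lowpass(prev, sample, alpha=0.1):
    """First-order IIR low-pass filter of the force/torque sensor output."""
    return prev + alpha * (sample - prev)

def control_step(x_cmd, x_robot, f_sensed, f_filt,
                 k_reflect=2.0, compliance=0.01):
    f_filt = lowpass(f_filt, f_sensed)
    x_cmd_adj = x_cmd - compliance * f_filt       # compliance alters the command
    f_operator = k_reflect * (x_cmd - x_robot)    # reflection from position error
    return x_cmd_adj, f_operator, f_filt

x_adj, f_op, f_filt = control_step(x_cmd=0.10, x_robot=0.08,
                                   f_sensed=5.0, f_filt=0.0)
print(round(f_op, 3))  # 0.04: proportional to the 2 cm position error
```

Because the reflected force needs only the position error, no force signal has to be fed back to the master arm directly, which is part of why high reflection gains remain stable.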
LSB-Based Steganography Using Reflected Gray Code
NASA Astrophysics Data System (ADS)
Chen, Chang-Chu; Chang, Chin-Chen
Steganography aims to hide secret data in an innocuous cover medium for transmission, so that an attacker cannot easily recognize the presence of the secret data. Even if the stego-medium is captured by an eavesdropper, the slight distortion is hard to detect. LSB-based data hiding is one steganographic method, used to embed secret data into the least significant bits of the pixel values of a cover image. In this paper, we propose an LSB-based scheme using the reflected Gray code, which is applied to determine the embedded bit from the secret information. Following the transformation rule, the LSBs of the stego-image are not always equal to the secret bits, and our experiments show that they differ up to almost 50% of the time. According to mathematical deduction and experimental results, the proposed scheme has the same image quality and payload as the simple LSB substitution scheme. In fact, the proposed data hiding scheme in the case of the G1 (one-bit Gray code) system is equivalent to the simple LSB substitution scheme.
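The mechanism can be sketched directly: the hidden bit is read from the LSB of the pixel's reflected Gray code rather than the pixel's own LSB, and flipping the pixel's LSB flips the Gray-code LSB, so distortion stays at one gray level. This is a minimal sketch of the idea, not necessarily the paper's exact embedding procedure:

```python
# Gray-code-based LSB hiding: the secret bit is the LSB of gray(pixel).
def gray(n):
    return n ^ (n >> 1)          # binary-reflected Gray code

def embed(pixel, bit):
    # Flip the pixel's LSB only when the Gray-code LSB disagrees with the
    # secret bit; distortion is at most 1, as in simple LSB substitution.
    return pixel if (gray(pixel) & 1) == bit else pixel ^ 1

def extract(pixel):
    return gray(pixel) & 1

p = embed(182, 1)
print(p, p & 1, extract(p))  # 182 0 1: stego LSB differs from the secret bit
```

The example shows why stego-image LSBs match the secret bits only about half the time: pixel 182 already encodes the bit 1 through its Gray code even though its own LSB is 0.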
Hazard-Ranking of Agricultural Pesticides for Chronic Health Effects in Yuma County, Arizona
Sugeng, Anastasia J.; Beamer, Paloma I.; Lutz, Eric A.; Rosales, Cecilia B.
2013-01-01
With thousands of pesticides registered by the United States Environmental Protection Agency, it is not feasible to sample for all pesticides applied in agricultural communities. Hazard-ranking pesticides based on use, toxicity, and exposure potential can help prioritize community-specific pesticide hazards. This study applied hazard-ranking schemes for cancer, endocrine disruption, and reproductive/developmental toxicity in Yuma County, Arizona. An existing cancer hazard-ranking scheme was modified, and novel schemes for endocrine disruption and reproductive/developmental toxicity were developed to rank pesticide hazards. The hazard-ranking schemes accounted for pesticide use, toxicity, and exposure potential based on the chemical properties of each pesticide. Pesticides were ranked as hazards with respect to each health effect, as well as overall chronic health effects. The highest hazard-ranked pesticides for overall chronic health effects were maneb, metam sodium, trifluralin, pronamide, and bifenthrin. The relative pesticide rankings were unique for each health effect. The highest hazard-ranked pesticides differed from those most heavily applied, as well as from those previously detected in Yuma homes over a decade ago. The most hazardous pesticides for cancer in Yuma County, Arizona also differed from those identified by a previous hazard-ranking applied in California. Hazard-ranking schemes that take into account pesticide use, toxicity, and exposure potential can help prioritize the pesticides of greatest health risk in agricultural communities. This study is the first to provide pesticide hazard-rankings for endocrine disruption and reproductive/developmental toxicity based on use, toxicity, and exposure potential. These hazard-ranking schemes can be applied to other agricultural communities to prioritize community-specific pesticide hazards and help reduce health risks. PMID:23783270
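The general shape of such a ranking, combining use, toxicity, and exposure potential into one score per pesticide and sorting, can be sketched as below. The multiplicative score, weights, and values are purely illustrative assumptions; the paper defines separate scoring rules per health effect:

```python
# Hypothetical hazard-ranking sketch: one composite score per pesticide.
def hazard_score(use_kg, toxicity, exposure_potential):
    """Illustrative multiplicative combination of the three factors."""
    return use_kg * toxicity * exposure_potential

pesticides = {                       # (use in kg, toxicity, exposure), made up
    "A": (1000.0, 0.8, 0.5),
    "B": (200.0, 0.9, 0.9),
    "C": (5000.0, 0.1, 0.3),
}
ranked = sorted(pesticides,
                key=lambda name: hazard_score(*pesticides[name]),
                reverse=True)
print(ranked)  # ['A', 'B', 'C']
```

Note how "C", the most heavily applied pesticide in this toy data, is ranked last: exactly the kind of divergence between use and hazard that the abstract reports.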
NASA Technical Reports Server (NTRS)
Kidd, Chris; Matsui, Toshi; Chern, Jiundar; Mohr, Karen; Kummerow, Christian; Randel, Dave
2015-01-01
The estimation of precipitation across the globe from satellite sensors provides a key resource in the observation and understanding of our climate system. Estimates from all pertinent satellite observations are critical in providing the necessary temporal sampling. However, consistency in these estimates from instruments with different frequencies and resolutions is essential. This paper details the physically based retrieval scheme used to estimate precipitation from cross-track (XT) passive microwave (PM) sensors on board the constellation satellites of the Global Precipitation Measurement (GPM) mission. Here the Goddard profiling algorithm (GPROF), a physically based Bayesian scheme developed for conically scanning (CS) sensors, is adapted for use with XT PM sensors. The present XT GPROF scheme utilizes a model-generated database to overcome issues encountered with the observational database used by the CS scheme. The model database ensures greater consistency across meteorological regimes and surface types by providing a more comprehensive set of precipitation profiles. The database is corrected for bias against the CS database to ensure consistency in the final product. Statistical comparisons over western Europe and the United States show that the XT GPROF estimates are comparable with those from the CS scheme. Indeed, the XT estimates have higher correlations against surface radar data, while maintaining similar root-mean-square errors. Latitudinal profiles of precipitation show the XT estimates are generally comparable with the CS estimates, although in the southern midlatitudes the peak precipitation is shifted equatorward, while over the Arctic large differences are seen between the XT and the CS retrievals.
NASA Astrophysics Data System (ADS)
Pasten Zapata, Ernesto; Moggridge, Helen; Jones, Julie; Widmann, Martin
2017-04-01
Run-of-the-River (ROR) hydropower schemes are expected to be significantly affected by climate change, as they rely on the availability of river flow to generate energy. As temperature and precipitation are expected to vary in the future, the hydrological cycle will also undergo changes. Therefore, climate models based on complex physical atmospheric interactions have been developed to simulate future climate scenarios considering the atmosphere's greenhouse gas concentrations. These scenarios are classified according to the Representative Concentration Pathways (RCPs), which are defined by the concentration of greenhouse gases. This study evaluates possible scenarios for selected ROR hydropower schemes within the UK, considering three different RCPs: 2.6, 4.5 and 8.5 W/m2 for 2100 relative to pre-industrial values. The study sites cover different climate, land cover, topographic and hydropower scheme characteristics representative of the UK's heterogeneity. Precipitation and temperature outputs from state-of-the-art Regional Climate Models (RCMs) from the Euro-CORDEX project are used as input to a HEC-HMS hydrological model to simulate the future river flow available. Both uncorrected and bias-corrected RCM simulations are analyzed. The results of this project provide insight into the possible effects of climate change on power generation from ROR hydropower schemes under the different RCP scenarios, and contrast the results obtained from uncorrected and bias-corrected RCMs. This analysis can aid in adapting to climate change as well as in planning future ROR schemes in the region.
NASA Astrophysics Data System (ADS)
Jiang, Wen; Yang, Yanfu; Zhang, Qun; Sun, Yunxu; Zhong, Kangping; Zhou, Xian; Yao, Yong
2016-09-01
Frequency offset estimation (FOE) schemes based on Kalman filtering are proposed and investigated in detail via numerical simulation and experiment. The schemes consist of a modulation-phase removal stage and a Kalman filter estimation stage. In the second stage, the Kalman filters are employed to track either differential angles or differential data between two successive symbols. Several implementations of the proposed FOE scheme are compared by employing different modulation removal methods and two Kalman algorithms. The optimal FOE implementation is suggested for different operating conditions, including optical signal-to-noise ratio and the number of available data symbols.
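The second stage of the scheme — tracking the differential angle between successive symbols with a Kalman filter — can be sketched as follows. The random-walk state model, the noise variances, and the noise-free unmodulated test signal are assumptions for illustration, not the authors' configuration (for QPSK, the first stage would strip modulation, e.g. by raising symbols to the fourth power):

```python
import cmath
import math

# Sketch: scalar Kalman filter tracking the differential phase angle between
# successive (modulation-stripped) symbols. The test signal here is
# unmodulated, so the modulation-removal stage is skipped.

true_offset = 0.01   # frequency offset, in cycles per symbol
n = 2000
symbols = [cmath.exp(2j * math.pi * true_offset * k) for k in range(n)]

# Measurements: differential angle between successive symbols
diffs = [cmath.phase(symbols[k] * symbols[k - 1].conjugate()) for k in range(1, n)]

# Scalar Kalman filter with a random-walk state model (assumed variances)
x, p = 0.0, 1.0        # state estimate (radians/symbol) and its variance
q, r = 1e-8, 1e-2      # process and measurement noise variances
for z in diffs:
    p += q                    # predict
    gain = p / (p + r)        # Kalman gain
    x += gain * (z - x)       # correct with the innovation
    p *= (1.0 - gain)

estimated_offset = x / (2 * math.pi)   # back to cycles per symbol
```

With noisy symbols, the same loop averages out the measurement noise, which is the advantage over a one-shot differential estimate.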
Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet
NASA Technical Reports Server (NTRS)
Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.
2000-01-01
This paper examines the accuracy and calculation speed of the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with nonuniform mesh gives the best results. Also, the relative advantages of various methods are described when the speed of computation is an important consideration.
Unobtrusive monitoring of heart rate using a cost-effective speckle-based SI-POF remote sensor
NASA Astrophysics Data System (ADS)
Pinzón, P. J.; Montero, D. S.; Tapetado, A.; Vázquez, C.
2017-03-01
A novel speckle-based sensing technique for cost-effective heart-rate monitoring is demonstrated. This technique detects periodical changes in the spatial distribution of energy on the speckle pattern at the output of a Step-Index Polymer Optical Fiber (SI-POF) lead by using a low-cost webcam. The scheme operates in reflective configuration thus performing a centralized interrogation unit scheme. The prototype has been integrated into a mattress and its functionality has been tested with 5 different patients lying on the mattress in different positions without direct contact with the fiber sensing lead.
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.; Lytle, John K.
1989-01-01
An algebraic adaptive grid scheme based on the concept of arc equidistribution is presented. The scheme locally adjusts the grid density based on gradients of selected flow variables from either finite difference or finite volume calculations. A user-prescribed grid stretching can be specified such that control of the grid spacing can be maintained in areas of known flowfield behavior. For example, the grid can be clustered near a wall for boundary layer resolution and made coarse near the outer boundary of an external flow. A grid smoothing technique is incorporated into the adaptive grid routine, which is found to be more robust and efficient than the weight function filtering technique employed by other researchers. Since the present algebraic scheme requires no iteration or solution of differential equations, the computer time needed for grid adaptation is trivial, making the scheme useful for three-dimensional flow problems. Applications to two- and three-dimensional flow problems show that a considerable improvement in flowfield resolution can be achieved by using the proposed adaptive grid scheme. Although the scheme was developed with steady flow in mind, it is a good candidate for unsteady flow computations because of its efficiency.
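The arc-equidistribution idea can be illustrated in one dimension: choose a weight that grows with the gradient of a flow variable, then place grid points at equal increments of the weighted arc length. The weight function and the clustering parameter `alpha` below are illustrative assumptions, not the paper's exact formulation:

```python
import math

# Sketch of 1-D arc-equidistribution grid adaptation: points cluster where
# |df/dx| is large. The weight w = 1 + alpha*|df/dx| is an assumed form.

def adapt_grid(x, f, alpha=5.0):
    """Redistribute x so the weight w = 1 + alpha*|df/dx| is equidistributed."""
    n = len(x)
    # piecewise weights on each interval
    w = [1.0 + alpha * abs((f[i + 1] - f[i]) / (x[i + 1] - x[i])) for i in range(n - 1)]
    # cumulative "arc length" in the weighted metric
    s = [0.0]
    for i in range(n - 1):
        s.append(s[-1] + w[i] * (x[i + 1] - x[i]))
    total = s[-1]
    # invert the cumulative arc by linear interpolation at equal arc increments
    new_x = [x[0]]
    j = 0
    for k in range(1, n - 1):
        target = total * k / (n - 1)
        while s[j + 1] < target:
            j += 1
        t = (target - s[j]) / (s[j + 1] - s[j])
        new_x.append(x[j] + t * (x[j + 1] - x[j]))
    new_x.append(x[-1])
    return new_x

# Steep tanh front at x = 0.5 attracts grid points
n = 41
x = [i / (n - 1) for i in range(n)]
f = [math.tanh(20 * (xi - 0.5)) for xi in x]
xa = adapt_grid(x, f)
spacings = [xa[i + 1] - xa[i] for i in range(n - 1)]   # smallest near the front
```

Because this is purely algebraic (one pass of cumulative sums and interpolation, no iteration), the cost per adaptation is negligible, which matches the efficiency argument in the abstract.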
Improvement of a Privacy Authentication Scheme Based on Cloud for Medical Environment.
Chiou, Shin-Yan; Ying, Zhaoqin; Liu, Junqiang
2016-04-01
Medical systems allow patients to receive care at different hospitals. However, this entails considerable inconvenience through the need to transport patients and their medical records between hospitals. The development of Telecare Medicine Information Systems (TMIS) makes it easier for patients to seek medical treatment and to store and access medical records. However, medical data stored in TMIS is not encrypted, leaving patients' private data vulnerable to external leaks. In 2014, scholars proposed a new cloud-based medical information model and authentication scheme which would not only allow patients to remotely access medical services but also protect patient privacy. However, this scheme still fails to provide patient anonymity and message authentication. Furthermore, this scheme only stores patient medical data, without allowing patients to directly access medical advice. Therefore, we propose a new authentication scheme, which provides anonymity, unlinkability, and message authentication, and allows patients to directly and remotely consult with doctors. In addition, our proposed scheme is more efficient in terms of computation cost. The proposed system was implemented on the Android system to demonstrate its workability.
Optimal placement of fast cut back units based on the theory of cellular automata and agent
NASA Astrophysics Data System (ADS)
Yan, Jun; Yan, Feng
2017-06-01
Thermal power generation units with the fast cut back (FCB) function can supply power to the auxiliary system and maintain island operation after a major blackout, so they are excellent substitutes for traditional black-start power sources. Different placement schemes for FCB units have different influences on the subsequent restoration process. Considering the locality of the emergency dispatching rules, the unpredictability of specific dispatching instructions, and unexpected situations such as failure of transmission line energization, a novel deduction model for network reconfiguration based on the theory of cellular automata and agents is established. Several indexes are then defined for evaluating the placement schemes for FCB units. An attribute-weight determination method based on subjective and objective integration is used in combination with grey relational analysis to determine the optimal placement scheme for FCB units. The effectiveness of the proposed method is validated by the test results on the New England 10-unit 39-bus power system.
An IDS Alerts Aggregation Algorithm Based on Rough Set Theory
NASA Astrophysics Data System (ADS)
Zhang, Ru; Guo, Tao; Liu, Jianyi
2018-03-01
Within a system in which several IDSs have been deployed, a great number of alerts can be triggered by a single security event, making genuine alerts harder to find. To deal with redundant alerts, we propose a scheme based on rough set theory. Using basic concepts from rough set theory, the importance of each alert attribute is first calculated. From these attribute importances, we compute the similarity of two alerts, which is compared with a pre-defined threshold to determine whether the two alerts can be aggregated. The time interval is also taken into consideration: the allowed interval is computed individually for each alert type, since different types of alerts may have different time gaps between consecutive alerts. Finally, we apply the proposed scheme to the DARPA98 dataset; the experimental results show that our scheme can efficiently reduce the redundancy of alerts, so that administrators of security systems can avoid wasting time on useless alerts.
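The aggregation rule described above — weighted attribute similarity compared against a threshold, plus a per-type time window — can be sketched as follows. The attribute weights would come from rough-set attribute importance; here they are hard-coded assumptions, as are the threshold, the windows, and the sample alerts:

```python
# Illustrative sketch of the alert-aggregation rule: weighted attribute
# similarity vs. a threshold, with a per-alert-type time window. Weights,
# threshold, and windows below are assumed values, not computed from data.

WEIGHTS = {"src_ip": 0.35, "dst_ip": 0.35, "alert_type": 0.20, "port": 0.10}
SIM_THRESHOLD = 0.7
TIME_WINDOW = {"portscan": 60.0, "bruteforce": 300.0}   # seconds, per alert type

def similarity(a, b):
    """Weighted fraction of matching attributes between two alerts."""
    return sum(w for attr, w in WEIGHTS.items() if a[attr] == b[attr])

def can_aggregate(a, b):
    """Aggregate only if within the type's time window and similar enough."""
    if abs(a["time"] - b["time"]) > TIME_WINDOW.get(a["alert_type"], 120.0):
        return False
    return similarity(a, b) >= SIM_THRESHOLD

a1 = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "alert_type": "portscan",
      "port": 22, "time": 0.0}
a2 = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "alert_type": "portscan",
      "port": 80, "time": 30.0}
a3 = dict(a2, time=500.0)   # same attributes, but outside the time window
```

In the full scheme, the weights would be derived from the rough-set importance of each attribute rather than fixed by hand.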
Time as a Tool for Policy Analysis in Aging.
ERIC Educational Resources Information Center
Pastorello, Thomas
National policy makers have put forth different life cycle planning proposals for the more satisfying integration of education, work and leisure over the life course. This speech describes a decision making scheme, the Time Paradigm, for research-based choice among various proposals. The scheme is defined in terms of a typology of time-related…
The faithful remote preparation of general quantum states
NASA Astrophysics Data System (ADS)
Luo, Ming-Xing; Deng, Yun; Chen, Xiu-Bo; Yang, Yi-Xian
2013-01-01
This paper establishes a theoretical framework for faithful and deterministic remote state preparation, which is related to the classical Hurwitz theorem. Based on this theory, various schemes with different characteristics are then presented. Moreover, permutation groups and partially quantum resources are also discussed for faithful schemes.
An Innovative Approach to Scheme Learning Map Considering Tradeoff Multiple Objectives
ERIC Educational Resources Information Center
Lin, Yu-Shih; Chang, Yi-Chun; Chu, Chih-Ping
2016-01-01
An important issue in personalized learning is to provide learners with customized learning according to their learning characteristics. This paper focused attention on scheming a learning map as follows. The learning goal can be achieved via different pathways based on alternative materials, which have the relationships of prerequisite, dependence,…
Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding
NASA Astrophysics Data System (ADS)
Oh, Kwan-Jung; Oh, Byung Tae
2015-04-01
We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into biregions, and applies a different prediction scheme for each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and has the ability to improve the subjective rendering quality.
NASA Astrophysics Data System (ADS)
He, C.; Li, Q.; Liou, K. N.; Qi, L.; Tao, S.; Schwarz, J. P.
2015-12-01
Black carbon (BC) aging significantly affects its distributions and radiative properties, which is an important uncertainty source in estimating BC climatic effects. Global models often use a fixed aging timescale for the hydrophobic-to-hydrophilic BC conversion or a simple parameterization. We have developed and implemented a microphysics-based BC aging scheme that accounts for condensation and coagulation processes into a global 3-D chemical transport model (GEOS-Chem). Model results are systematically evaluated by comparing with the HIPPO observations across the Pacific (67°S-85°N) during 2009-2011. We find that the microphysics-based scheme substantially increases the BC aging rate over source regions as compared with the fixed aging timescale (1.2 days), due to the condensation of sulfate and secondary organic aerosols (SOA) and coagulation with pre-existing hydrophilic aerosols. However, the microphysics-based scheme slows down BC aging over Polar regions where condensation and coagulation are rather weak. We find that BC aging is primarily dominated by condensation process that accounts for ~75% of global BC aging, while the coagulation process is important over source regions where a large amount of pre-existing aerosols are available. Model results show that the fixed aging scheme tends to overestimate BC concentrations over the Pacific throughout the troposphere by a factor of 2-5 at different latitudes, while the microphysics-based scheme reduces the discrepancies by up to a factor of 2, particularly in the middle troposphere. The microphysics-based scheme developed in this work decreases BC column total concentrations at all latitudes and seasons, especially over tropical regions, leading to large improvement in model simulations. We are presently analyzing the impact of this scheme on global BC budget and lifetime, quantifying its uncertainty associated with key parameters, and investigating the effects of heterogeneous chemical oxidation on BC aging.
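The contrast between a fixed aging timescale and a microphysics-dependent one can be illustrated with a minimal exponential-decay sketch. The timescales and step sizes below are illustrative assumptions, not GEOS-Chem parameters (apart from the 1.2-day fixed timescale quoted in the abstract):

```python
import math

# Minimal sketch: hydrophobic-to-hydrophilic BC conversion as exponential
# decay with an e-folding time tau. A microphysics-based scheme effectively
# shortens tau over source regions (strong condensation/coagulation) and
# lengthens it over polar regions; the 0.5- and 5-day values are assumptions.

def age_bc(bc_phobic, tau_days, dt_days, nsteps):
    """Decay the hydrophobic BC mass over nsteps of size dt_days."""
    for _ in range(nsteps):
        bc_phobic *= math.exp(-dt_days / tau_days)
    return bc_phobic

bc0 = 1.0   # initial hydrophobic BC (normalized)
# fixed scheme: tau = 1.2 days everywhere, integrated over 3 days
fixed_remaining = age_bc(bc0, tau_days=1.2, dt_days=0.1, nsteps=30)
# microphysics-like behaviour: faster aging over source regions,
# slower over polar regions
source_remaining = age_bc(bc0, tau_days=0.5, dt_days=0.1, nsteps=30)
polar_remaining = age_bc(bc0, tau_days=5.0, dt_days=0.1, nsteps=30)
```

The point of the comparison: more hydrophobic BC remaining means less wet scavenging, which is why the fixed scheme's slower aging over source regions inflates downwind BC concentrations.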
A New Quantum Watermarking Based on Quantum Wavelet Transforms
NASA Astrophysics Data System (ADS)
Heidari, Shahrokh; Naseri, Mosayeb; Gheibi, Reza; Baghfalaki, Masoud; Rasoul Pourarian, Mohammad; Farouk, Ahmed
2017-06-01
Quantum watermarking is a technique to embed specific information, usually the owner's identification, into quantum cover data, for purposes such as copyright protection. In this paper, a new scheme for quantum watermarking based on quantum wavelet transforms is proposed, which includes scrambling, embedding and extracting procedures. The invisibility and robustness of the proposed watermarking method are confirmed by simulation. The invisibility of the scheme is examined by the peak signal-to-noise ratio (PSNR) and the histogram calculation. Furthermore, the robustness of the scheme is analyzed by the bit error rate (BER) and the two-dimensional correlation (Corr 2-D) calculation. The simulation results indicate that the proposed watermarking scheme achieves not only acceptable visual quality but also good resistance against different types of attack. Supported by Kermanshah Branch, Islamic Azad University, Kermanshah, Iran
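The invisibility metric mentioned above, PSNR, has a standard classical form, sketched below for 8-bit pixel values (the quantum wavelet embedding itself is not reproduced; the sample pixel values are assumptions):

```python
import math

# PSNR between an original cover and its watermarked version, assuming
# 8-bit samples (max value 255). Higher PSNR means a less visible watermark.

def psnr(original, watermarked, max_val=255.0):
    mse = sum((o - w) ** 2 for o, w in zip(original, watermarked)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)

cover = [100, 120, 130, 140]
marked = [101, 120, 129, 140]   # small embedding perturbation
quality = psnr(cover, marked)   # roughly 51 dB for this tiny example
```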
An exponential time-integrator scheme for steady and unsteady inviscid flows
NASA Astrophysics Data System (ADS)
Li, Shu-Jie; Luo, Li-Shi; Wang, Z. J.; Ju, Lili
2018-07-01
An exponential time-integrator scheme of second-order accuracy based on the predictor-corrector methodology, denoted PCEXP, is developed to solve multi-dimensional nonlinear partial differential equations pertaining to fluid dynamics. The effective and efficient implementation of PCEXP is realized by means of the Krylov method. The linear stability and truncation error are analyzed through a one-dimensional model equation. The proposed PCEXP scheme is applied to the Euler equations discretized with a discontinuous Galerkin method in both two and three dimensions. The effectiveness and efficiency of the PCEXP scheme are demonstrated for both steady and unsteady inviscid flows. The accuracy and efficiency of the PCEXP scheme are verified and validated through comparisons with the explicit third-order total variation diminishing Runge-Kutta scheme (TVDRK3), the implicit backward Euler (BE) and the implicit second-order backward difference formula (BDF2). For unsteady flows, the PCEXP scheme generates a temporal error much smaller than the BDF2 scheme does, while maintaining the expected acceleration at the same time. Moreover, the PCEXP scheme is also shown to achieve the computational efficiency comparable to the implicit schemes for steady flows.
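The core idea of an exponential integrator — treating the stiff linear part of the operator exactly via the matrix exponential — can be shown on a scalar problem. The sketch below is a first-order exponential Euler step, not the second-order predictor-corrector PCEXP scheme, and it omits the Krylov machinery needed for large systems:

```python
import math

# Sketch of the exponential-integrator idea on y' = lam*y + g(t):
# integrate the linear part exactly, handle the forcing with phi_1.

def phi1(z):
    """phi_1(z) = (e^z - 1)/z, with the z -> 0 limit handled."""
    return (math.exp(z) - 1.0) / z if abs(z) > 1e-12 else 1.0

def exp_euler(lam, g, y0, t_end, nsteps):
    h = t_end / nsteps
    y, t = y0, 0.0
    for _ in range(nsteps):
        y = math.exp(lam * h) * y + h * phi1(lam * h) * g(t)
        t += h
    return y

# Stiff linear test problem y' = -1000*y, y(0) = 1: the exponential
# integrator reproduces the decay exactly even with large steps, whereas
# an explicit scheme would need h < ~2/1000 just for stability.
y_num = exp_euler(-1000.0, lambda t: 0.0, 1.0, t_end=0.01, nsteps=4)
y_exact = math.exp(-1000.0 * 0.01)
```

For systems, `math.exp(lam * h)` becomes the action of a matrix exponential on a vector, which is where the Krylov approximation mentioned in the abstract comes in.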
NASA Technical Reports Server (NTRS)
Huynh, H. T.; Wang, Z. J.; Vincent, P. E.
2013-01-01
Popular high-order schemes with compact stencils for Computational Fluid Dynamics (CFD) include Discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV) methods. The recently proposed Flux Reconstruction (FR) approach or Correction Procedure using Reconstruction (CPR) is based on a differential formulation and provides a unifying framework for these high-order schemes. Here we present a brief review of recent developments for the FR/CPR schemes as well as some pacing items.
NASA Astrophysics Data System (ADS)
Lv, M.; Li, C.; Lu, H.; Yang, K.; Chen, Y.
2017-12-01
The parameterization of vegetation cover fraction (VCF) is an important component of land surface models. This paper investigates the impacts of three VCF parameterization schemes on land surface temperature (LST) simulation by the Common Land Model (CoLM) in the Tibetan Plateau (TP). The first scheme is a simple land cover (LC) based method (hereafter CTL); the second is based on remote sensing observation (hereafter RNVCF), in which multi-year climatological VCFs are derived from Moderate-resolution Imaging Spectroradiometer (MODIS) NDVI (Normalized Difference Vegetation Index); the third scheme derives VCF from the LAI simulated by the LSM and a clumping index at every model time step (hereafter SMVCF). Simulated LST and soil temperature from CoLM with the three VCF parameterization schemes were evaluated against satellite LST observations and in situ soil temperature observations, respectively, during the period 2010 to 2013. The comparison against MODIS Aqua LST indicates that (1) CTL produces large biases in all four seasons in the early afternoon (about 13:30, local solar time), with the mean bias in spring reaching 12.14 K; (2) RNVCF and SMVCF reduce the mean bias significantly, especially in spring, where the reduction is about 6.5 K. Surface soil temperature observed at 5 cm depth from three soil moisture and temperature monitoring networks is also employed to assess the skill of the three VCF schemes. The three networks, crossing the TP from west to east, have different climate and vegetation conditions. In the Ngari network, located in the western TP with an arid climate, there are no obvious differences among the three schemes. In the Naqu network, located in the central TP with a semi-arid climate, CTL shows a severe overestimation (12.1 K), but this overestimation can be reduced by 79% by RNVCF and by 87% by SMVCF. In the third, humid network (Maqu in the eastern TP), CoLM performs similarly to Naqu. However, at both the Naqu and Maqu networks, RNVCF shows significant overestimation in summer, perhaps because RNVCF ignores the growth characteristics of the vegetation (mainly grass) in these two regions. Our results demonstrate that VCF schemes have a significant influence on LSM performance, and indicate that it is important to consider vegetation growth characteristics in VCF schemes for different LCs.
NASA Astrophysics Data System (ADS)
Shukla, Chitra; Thapliyal, Kishore; Pathak, Anirban
2017-12-01
Semi-quantum protocols that allow some of the users to remain classical are proposed for a large class of problems associated with secure communication and secure multiparty computation. Specifically, semi-quantum protocols are proposed for the first time for key agreement, controlled deterministic secure communication and dialogue, and it is shown that the semi-quantum protocols for controlled deterministic secure communication and dialogue can be reduced to semi-quantum protocols for e-commerce and private comparison (socialist millionaire problem), respectively. Complementing the earlier proposed semi-quantum schemes for key distribution, secret sharing and deterministic secure communication, the set of schemes proposed here and the subsequent discussion establish that almost every secure communication and computation task that can be performed using fully quantum protocols can also be performed in a semi-quantum manner. Some of the proposed schemes are completely orthogonal-state-based, and thus fundamentally different from the existing semi-quantum schemes, which are conjugate-coding-based. The security, efficiency and applicability of the proposed schemes are discussed in detail.
Jung, Jaewook; Kim, Jiye; Choi, Younsung; Won, Dongho
2016-01-01
In wireless sensor networks (WSNs), a registered user can login to the network and use a user authentication protocol to access data collected from the sensor nodes. Since WSNs are typically deployed in unattended environments and sensor nodes have limited resources, many researchers have made considerable efforts to design a secure and efficient user authentication process. Recently, Chen et al. proposed a secure user authentication scheme using symmetric key techniques for WSNs. They claim that their scheme assures high efficiency and security against different types of attacks. After careful analysis, however, we find that Chen et al.'s scheme is still vulnerable to smart card loss attack and is susceptible to denial of service attack, since simply comparing an entered ID with the ID stored in the smart card is not a valid verification. In addition, we also observe that their scheme cannot preserve user anonymity. Furthermore, their scheme cannot quickly detect an incorrect password during the login phase, and this flaw wastes both communication and computation resources. In this paper, we describe how these attacks work, and propose an enhanced anonymous user authentication and key agreement scheme based on a symmetric cryptosystem in WSNs to address all of the aforementioned vulnerabilities in Chen et al.'s scheme. Our analysis shows that the proposed scheme improves the level of security, and is also more efficient relative to other related schemes. PMID:27537890
A dispersion minimizing scheme for the 3-D Helmholtz equation based on ray theory
NASA Astrophysics Data System (ADS)
Stolk, Christiaan C.
2016-06-01
We develop a new dispersion minimizing compact finite difference scheme for the Helmholtz equation in 2 and 3 dimensions. The scheme is based on a newly developed ray theory for difference equations. A discrete Helmholtz operator and a discrete operator to be applied to the source and the wavefields are constructed. Their coefficients are piecewise polynomial functions of hk, chosen such that phase and amplitude errors are minimal. The phase errors of the scheme are very small, approximately as small as those of the 2-D quasi-stabilized FEM method and substantially smaller than those of alternatives in 3-D, assuming the same number of gridpoints per wavelength is used. In numerical experiments, accurate solutions are obtained in constant and smoothly varying media using meshes with only five to six points per wavelength and wave propagation over hundreds of wavelengths. When used as a coarse level discretization in a multigrid method the scheme can even be used with down to three points per wavelength. Tests on 3-D examples with up to 10^8 degrees of freedom show that with a recently developed hybrid solver, the use of coarser meshes can lead to corresponding savings in computation time, resulting in good simulation times compared to the literature.
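The phase (dispersion) error that the scheme minimizes can be made concrete for the standard second-order stencil in 1-D, where it has a closed form. For u'' + k^2 u = 0 on a grid of spacing h, the central difference supports a numerical wavenumber k_h with k_h h = arccos(1 - (kh)^2/2), so the relative phase error depends only on the number of points per wavelength G (with kh = 2*pi/G). This is a textbook baseline analysis, not the paper's optimized scheme:

```python
import math

# Relative phase error of the standard 2nd-order finite-difference stencil
# for the 1-D Helmholtz equation, as a function of points per wavelength.

def relative_phase_error(points_per_wavelength):
    kh = 2 * math.pi / points_per_wavelength
    kh_numerical = math.acos(1 - kh * kh / 2)   # discrete dispersion relation
    return abs(kh_numerical - kh) / kh

err_coarse = relative_phase_error(5)    # ~5 points per wavelength: error ~8%
err_fine = relative_phase_error(20)     # well resolved: error well under 1%
```

The large error at five points per wavelength is exactly why an unoptimized scheme cannot be run on the coarse meshes the abstract reports; dispersion-minimized coefficients push the usable resolution down toward that regime.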
Niemi, Jarkko K; Heikkilä, Jaakko
2011-06-01
The participation of agricultural producers in financing losses caused by livestock epidemics has been debated in many countries. One of the issues raised is how reluctant producers are to participate voluntarily in the financing of disease losses before an outbreak occurs. This study contributes to the literature by examining whether disease losses should be financed through pre- or post-outbreak premiums or a combination of the two. A Monte Carlo simulation was employed to illustrate the costs of financing two diseases of different profiles. The profiles differed in the probability with which damage occurs and in the average damage per event. Three hypothetical financing schemes were compared based on their ability to reduce utility losses in the case of risk-neutral and risk-averse producer groups. The schemes were examined in a dynamic setting where premiums depended on the compensation history of the sector. If producers choose the preferred financing scheme based on utility losses, the results suggest that the timing of the premiums, the transaction costs of the scheme, the degree of risk aversion of the producer, and the level and volatility of premiums affect the choice of the financing scheme. Copyright © 2011 Elsevier B.V. All rights reserved.
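The two disease profiles — rare but severe versus frequent but mild — can be sketched with a small Monte Carlo simulation. All probabilities, damage levels, and the exponential damage distribution below are illustrative assumptions, not the paper's calibration:

```python
import random

# Monte Carlo sketch of total epidemic losses over a planning horizon for
# two disease profiles with the same expected annual damage but very
# different variability (which is what drives a risk-averse producer's
# preference between pre- and post-outbreak financing).

random.seed(42)

def simulate_losses(p_outbreak, mean_damage, years=20, runs=10000):
    """Total damage per simulation run over the planning horizon."""
    totals = []
    for _ in range(runs):
        total = 0.0
        for _ in range(years):
            if random.random() < p_outbreak:
                total += random.expovariate(1.0 / mean_damage)
        totals.append(total)
    return totals

rare_severe = simulate_losses(p_outbreak=0.02, mean_damage=100.0)
frequent_mild = simulate_losses(p_outbreak=0.20, mean_damage=10.0)

mean_rare = sum(rare_severe) / len(rare_severe)
mean_freq = sum(frequent_mild) / len(frequent_mild)
var_rare = sum((t - mean_rare) ** 2 for t in rare_severe) / len(rare_severe)
var_freq = sum((t - mean_freq) ** 2 for t in frequent_mild) / len(frequent_mild)
```

Both profiles have the same expected total damage, but the rare-severe profile has a much larger variance, so a risk-averse producer would value the smoothing of pre-outbreak premiums more for it.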
Emerging Security Mechanisms for Medical Cyber Physical Systems.
Kocabas, Ovunc; Soyata, Tolga; Aktas, Mehmet K
2016-01-01
The following decade will witness a surge in remote health-monitoring systems that are based on body-worn monitoring devices. These Medical Cyber Physical Systems (MCPS) will be capable of transmitting the acquired data to a private or public cloud for storage and processing. Machine learning algorithms running in the cloud and processing this data can provide decision support to healthcare professionals. There is no doubt that the security and privacy of the medical data is one of the most important concerns in designing an MCPS. In this paper, we depict the general architecture of an MCPS consisting of four layers: data acquisition, data aggregation, cloud processing, and action. Due to the differences in hardware and communication capabilities of each layer, different encryption schemes must be used to guarantee data privacy within that layer. We survey conventional and emerging encryption schemes based on their ability to provide secure storage, data sharing, and secure computation. Our detailed experimental evaluation of each scheme shows that while the emerging encryption schemes enable exciting new features such as secure sharing and secure computation, they introduce several orders-of-magnitude computational and storage overhead. We conclude our paper by outlining future research directions to improve the usability of the emerging encryption schemes in an MCPS.
NASA Astrophysics Data System (ADS)
Montane, F.; Fox, A. M.; Arellano, A. F.; Alexander, M. R.; Moore, D. J.
2016-12-01
Carbon (C) allocation to different plant tissues (leaves, stem and roots) remains a central challenge for understanding the global C cycle, as it determines C residence time. We used a diverse set of observations (AmeriFlux eddy covariance towers, biomass estimates from tree-ring data, and Leaf Area Index measurements) to compare C fluxes, pools, and Leaf Area Index (LAI) data with the Community Land Model (CLM). We ran CLM for seven temperate forests in North America (including evergreen and deciduous sites) between 1980 and 2013 using different C allocation schemes: i) standard C allocation scheme in CLM, which allocates C to the stem and leaves as a dynamic function of annual net primary productivity (NPP); ii) two fixed C allocation schemes, one representative of evergreen and the other one of deciduous forests, based on Luyssaert et al. 2007; iii) an alternative C allocation scheme, which allocated C to stem and leaves, and to stem and coarse roots, as a dynamic function of annual NPP, based on Litton et al. 2007. At our sites CLM usually overestimated gross primary production and ecosystem respiration, and underestimated net ecosystem exchange. Initial aboveground biomass in 1980 was largely overestimated for deciduous forests, whereas aboveground biomass accumulation between 1980 and 2011 was highly underestimated for both evergreen and deciduous sites due to the lower turnover rate in the sites than the one used in the model. CLM overestimated LAI in both evergreen and deciduous sites because the Leaf C-LAI relationship in the model did not match the observed Leaf C-LAI relationship in our sites. Although the different C allocation schemes gave similar results for aggregated C fluxes, they translated to important differences in long-term aboveground biomass accumulation and aboveground NPP. 
One of the alternative C allocation schemes (iii) gave more realistic stem C/leaf C ratios for deciduous forests, and substantially reduced CLM's overestimation of initial aboveground biomass and accumulated aboveground NPP for those forests. Our results suggest using different C allocation schemes for evergreen and deciduous forests. It is crucial to improve CLM in the near future to minimize data-model mismatches, and to address some of the current model structural errors and parameter uncertainties.
Progress with multigrid schemes for hypersonic flow problems
NASA Technical Reports Server (NTRS)
Radespiel, R.; Swanson, R. C.
1991-01-01
Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm uses upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10^6 and Mach numbers up to 25.
Improving the accuracy of central difference schemes
NASA Technical Reports Server (NTRS)
Turkel, Eli
1988-01-01
General difference approximations to the fluid dynamic equations require an artificial viscosity in order to converge to a steady state. This artificial viscosity serves two purposes. One is to suppress high frequency noise which is not damped by the central differences. The second is to introduce an entropy-like condition so that shocks can be captured. These viscosities require a coefficient that controls the amount of viscosity added. In the standard scheme, a scalar coefficient is used based on the spectral radius of the Jacobian of the convective flux. However, this can add too much viscosity to the slower waves. Hence, it is suggested that a matrix viscosity be used instead. This gives an appropriate viscosity for each wave component. With this matrix-valued coefficient, the central difference scheme becomes closer to upwind biased methods.
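For a scalar 1-D advection equation the scalar-coefficient version reduces to a second-difference term scaled by the wave speed (the "spectral radius" degenerates to |c|). A minimal sketch, with illustrative coefficient and grid values, of how this dissipation stabilizes an otherwise neutrally damped central difference:

```python
import numpy as np

def central_step(u, c, dt, dx, eps):
    # Central difference in space plus scalar artificial viscosity: the
    # second-difference term scaled by eps*|c| plays the role of the
    # spectral-radius-based coefficient in the abstract.
    up = np.roll(u, -1)
    um = np.roll(u, 1)
    convect = -c * (up - um) / (2 * dx)
    dissip = eps * abs(c) * (up - 2 * u + um) / dx
    return u + dt * (convect + dissip)

nx, c = 200, 1.0
dx = 1.0 / nx
dt = 0.4 * dx / c                       # CFL number 0.4
x = np.linspace(0, 1, nx, endpoint=False)
u = np.exp(-200 * (x - 0.5) ** 2)       # smooth pulse, periodic domain
mass0 = u.sum()
for _ in range(100):
    u = central_step(u, c, dt, dx, eps=0.5)
```

The conservative differencing means the total "mass" is preserved exactly on the periodic grid, while the pulse peak decays smoothly instead of developing odd-even oscillations.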
Underwater target classification using wavelet packets and neural networks.
Azimi-Sadjadi, M R; Yao, D; Huang, Q; Dobeck, G J
2000-01-01
In this paper, a new subband-based classification scheme is developed for classifying underwater mines and mine-like targets from the acoustic backscattered signals. The system consists of a feature extractor using wavelet packets in conjunction with linear predictive coding (LPC), a feature selection scheme, and a backpropagation neural-network classifier. The data set used for this study consists of the backscattered signals from six different objects: two mine-like targets and four nontargets for several aspect angles. Simulation results on ten different noisy realizations and for a signal-to-noise ratio (SNR) of 12 dB are presented. The receiver operating characteristic (ROC) curve of the classifier generated from these results demonstrated excellent classification performance of the system. The generalization ability of the trained network was demonstrated by computing the error and classification rate statistics on a large data set. A multiaspect fusion scheme was also adopted to further improve the classification performance.
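The LPC half of the feature extractor can be sketched with the standard Levinson-Durbin recursion. This is a generic illustration on a synthetic first-order autoregressive signal, not the paper's wavelet-packet pipeline or its actual feature set.

```python
import numpy as np

def lpc(signal, order):
    # Levinson-Durbin recursion: fit coefficients a so that
    # signal[n] ~= -sum_k a[k] * signal[n-k], k = 1..order
    n = len(signal)
    r = np.array([np.dot(signal[:n - k], signal[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[1:i][::-1])
        k = -acc / err                    # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1 - k * k)                # prediction error shrinks each order
    return a

# Synthetic AR(1) signal: x[n] = 0.8 x[n-1] + white noise
rng = np.random.default_rng(0)
x = np.zeros(500)
for n in range(1, 500):
    x[n] = 0.8 * x[n - 1] + rng.standard_normal()

coeffs = lpc(x, 1)    # expect approximately [1, -0.8]
```

In the paper's setting, coefficients like these would be computed per wavelet-packet subband and concatenated into the classifier's feature vector.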
All-optical virtual private network and ONUs communication in optical OFDM-based PON system.
Zhang, Chongfu; Huang, Jian; Chen, Chen; Qiu, Kun
2011-11-21
We propose and demonstrate a novel scheme that, for the first time to our knowledge, enables all-optical virtual private network (VPN) and all-optical inter-optical-network-unit (ONU) communications in an optical orthogonal frequency-division multiplexing-based passive optical network (OFDM-PON) system using subcarrier band allocation. We consider intra-VPN and inter-VPN communications, which correspond to two different cases: VPN communication among ONUs in one group and in different groups. The proposed scheme can provide enhanced security and a more flexible configuration for VPN users compared to VPNs in WDM-PON or TDM-PON systems. All-optical VPN and inter-ONU communications at 10 Gbit/s with 16 quadrature amplitude modulation (16-QAM) are demonstrated for the proposed optical OFDM-PON system. These results verify that the proposed scheme is feasible. © 2011 Optical Society of America
A Minimal Three-Dimensional Tropical Cyclone Model.
NASA Astrophysics Data System (ADS)
Zhu, Hongyan; Smith, Roger K.; Ulrich, Wolfgang
2001-07-01
A minimal 3D numerical model designed for basic studies of tropical cyclone behavior is described. The model is formulated in σ coordinates on an f or β plane and has three vertical levels, one characterizing a shallow boundary layer and the other two representing the upper and lower troposphere, respectively. It has three options for treating cumulus convection on the subgrid scale and a simple scheme for the explicit release of latent heat on the grid scale. The subgrid-scale schemes are based on the mass-flux models suggested by Arakawa and Ooyama in the late 1960s, but modified to include the effects of precipitation-cooled downdrafts. They differ from one another in the closure that determines the cloud-base mass flux. One closure is based on the assumption of boundary layer quasi-equilibrium proposed by Raymond and Emanuel. It is shown that a realistic hurricane-like vortex develops from a moderate strength initial vortex, even when the initial environment is slightly stable to deep convection. This is true for all three cumulus schemes as well as in the case where only the explicit release of latent heat is included. In all cases there is a period of gestation during which the boundary layer moisture in the inner core region increases on account of surface moisture fluxes, followed by a period of rapid deepening. Precipitation from the convection scheme dominates the explicit precipitation in the early stages of development, but this situation is reversed as the vortex matures. These findings are similar to those of Baik et al., who used the Betts-Miller parameterization scheme in an axisymmetric model with 11 levels in the vertical. The most striking difference between the model results using different convection schemes is the length of the gestation period, whereas the maximum intensity attained is similar for the three schemes.
The calculations suggest the hypothesis that the period of rapid development in tropical cyclones is accompanied by a change in the character of deep convection in the inner core region from buoyantly driven, predominantly upright convection to slantwise forced moist ascent.
Multi-objective optimization of radiotherapy: distributed Q-learning and agent-based simulation
NASA Astrophysics Data System (ADS)
Jalalimanesh, Ammar; Haghighi, Hamidreza Shahabi; Ahmadi, Abbas; Hejazian, Hossein; Soltani, Madjid
2017-09-01
Radiotherapy (RT) is among the standard techniques for the treatment of cancerous tumours, and many cancer patients are treated in this manner. Treatment planning is the most important phase in RT and plays a key role in achieving therapy quality. As the goal of RT is to irradiate the tumour with adequately high levels of radiation while sparing neighbouring healthy tissues as much as possible, it is naturally a multi-objective problem. In this study, we propose an agent-based model of vascular tumour growth and of the effects of RT. Next, we use a multi-objective distributed Q-learning algorithm to find Pareto-optimal solutions for calculating the RT dynamic dose. We consider multiple objectives, and each group of optimizer agents attempts to optimise one of them iteratively. At the end of each iteration, the agents combine their solutions to shape the Pareto front of the multi-objective problem. We propose a new approach by defining three treatment-planning schemes based on different combinations of our objectives, namely invasive, conservative and moderate. In the invasive scheme, we emphasize killing cancer cells and pay less attention to irradiation effects on normal cells. In the conservative scheme, we take more care of normal cells and try to destroy cancer cells in a less aggressive manner. The moderate scheme stands in between. For implementation, each of these schemes is handled by one agent in the MDQ-learning algorithm, and the Pareto-optimal solutions are discovered through the collaboration of agents. By applying this methodology, we could reach Pareto treatment plans through building different scenarios of tumour growth and RT. The proposed multi-objective optimisation algorithm generates robust solutions and finds the best treatment plan for different conditions.
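The temporal-difference update each optimiser agent applies can be sketched on a toy single-objective problem. The chain environment, rewards and hyperparameters below are illustrative stand-ins, not the paper's tumour model or its distributed multi-objective extension.

```python
import numpy as np

# Tiny deterministic chain MDP: states 0..4, action 0 = left, 1 = right,
# reward 1 only on reaching state 4. Each optimiser agent in the abstract
# would run an update of exactly this form against its own objective.
rng = np.random.default_rng(1)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for episode in range(300):
    s = 0
    while s != n_states - 1:
        if rng.random() < eps:
            a = int(rng.integers(n_actions))            # explore
        else:
            best = np.flatnonzero(Q[s] == Q[s].max())
            a = int(rng.choice(best))                   # greedy, random ties
        s2, r = step(s, a)
        # Standard Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

policy = np.argmax(Q, axis=1)
```

The learned greedy policy moves right from every non-terminal state, with Q-values decaying geometrically (by gamma) with distance from the reward.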
Rudasingwa, Martin; Soeters, Robert; Bossuyt, Michel
2015-01-01
To strengthen health care delivery, the Burundian Government, in collaboration with international NGOs, piloted performance-based financing (PBF) in 2006. Health facilities were assigned - using a simple matching method - to begin the PBF scheme or to continue with traditional input-based funding. Our objective was to analyse the effect of the PBF scheme on the quality of health services between 2006 and 2008. We conducted the analysis in 16 health facilities with the PBF scheme and 13 health facilities without it. We analysed the PBF effect using 58 composite quality indicators of eight health services: care management, outpatient care, maternity care, prenatal care, family planning, laboratory services, medicines management and materials management. The differences in quality improvement between the two groups of health facilities were assessed using descriptive statistics, a paired non-parametric Wilcoxon signed-rank test and a simple difference-in-difference approach at a significance level of 5%. We found an improvement in the quality of care in the PBF group and a significant deterioration in the non-PBF group in the same four health services: care management, outpatient care, maternity care, and prenatal care. The findings suggest a PBF effect of between 38 and 66 percentage points (p<0.001) in the quality scores of care management, outpatient care, prenatal care, and maternity care. We found no PBF effect on clinical support services: laboratory services, medicines management, and materials management. The PBF scheme in Burundi contributed to the improvement of the health services that were strongly under the control of medical personnel (physicians and nurses) within a short time of two years. The clinical support services that did not significantly improve were strongly under the control of laboratory technicians, pharmacists and non-medical personnel. PMID:25948432
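The difference-in-difference estimator used in the analysis is simple enough to state in a few lines. The quality scores below are illustrative numbers, not the study's data.

```python
import numpy as np

# Difference-in-difference on facility-level quality scores: the PBF effect
# is the change in the PBF group minus the change in the comparison group,
# which nets out any common time trend. Values are illustrative.
pbf_before = np.array([52.0, 48.0, 55.0])   # PBF facilities, 2006
pbf_after = np.array([80.0, 75.0, 82.0])    # PBF facilities, 2008
ctl_before = np.array([50.0, 49.0, 53.0])   # non-PBF facilities, 2006
ctl_after = np.array([44.0, 45.0, 47.0])    # non-PBF facilities, 2008

did = (pbf_after.mean() - pbf_before.mean()) - (ctl_after.mean() - ctl_before.mean())
```

With these toy numbers the estimate is positive even though both groups start at similar levels, mirroring the "improvement in PBF, deterioration in non-PBF" pattern the study reports.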
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1993-01-01
The objective of this study is to benchmark a four-engine clustered nozzle base flowfield with a computational fluid dynamics (CFD) model. The CFD model is a pressure-based, viscous flow formulation. An adaptive upwind scheme is employed for the spatial discretization. The upwind scheme is based on second- and fourth-order central differencing with adaptive artificial dissipation. Qualitative base flow features such as the reverse jet, wall jet, recompression shock, and plume-plume impingement have been captured. The computed quantitative flow properties, such as the radial base pressure distribution, model centerline Mach number and static pressure variation, and base pressure characteristic curve, agreed reasonably well with the measurements. A parametric study on the effect of grid resolution, turbulence model, inlet boundary condition and difference scheme for the convective terms has been performed. The results showed that grid resolution and turbulence model are the two primary factors that influence the accuracy of the base flowfield prediction.
NASA Astrophysics Data System (ADS)
Zhang, Chongfu; Qiu, Kun; Xu, Bo; Ling, Yun
2008-05-01
This paper proposes an all-optical label processing scheme that uses multiple optical orthogonal code sequences (MOOCS)-based optical labels for optical packet switching (OPS) (MOOCS-OPS) networks. In this scheme, each MOOCS is a permutation or combination of multiple optical orthogonal codes (MOOC) selected from multiple-group optical orthogonal codes (MGOOC). Following a comparison of different optical label processing (OLP) schemes, the principles of the MOOCS-OPS network are given and analyzed. First, theoretical analyses are used to show that MOOCS greatly enlarges the number of available optical labels compared to the previous single optical orthogonal code (SOOC) for OPS (SOOC-OPS) network. Then, the key units of the MOOCS-based optical label packets, including optical packet generation, optical label erasing, optical label extraction and optical label rewriting, are given and studied. These results verify that the proposed MOOCS-OPS scheme is feasible.
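The label-space enlargement is simple combinatorics: a single-code label offers only as many labels as there are codes, while an ordered sequence of distinct codes offers a permutation count. The code-set sizes below are illustrative, not the paper's.

```python
from math import perm

# With n optical orthogonal codes, a SOOC label space has just n labels,
# while a MOOCS label built as an ordered sequence of m distinct codes has
# P(n, m) = n! / (n - m)! labels. Sizes here are illustrative.
n_codes = 8
seq_len = 3

sooc_labels = n_codes
moocs_labels = perm(n_codes, seq_len)   # 8 * 7 * 6 orderings
```

Even for modest code sets the sequence construction multiplies the label space by orders of magnitude, which is the quantitative point of the paper's first analysis.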
Design and evaluation of nonverbal sound-based input for those with motor handicapped.
Punyabukkana, Proadpran; Chanjaradwichai, Supadaech; Suchato, Atiwong
2013-03-01
Most personal computing interfaces rely on the users' ability to use hand and arm movements to interact with on-screen graphical widgets via mainstream devices, including keyboards and mice. Without proper assistive devices, this style of input poses difficulties for motor-handicapped users. We propose a sound-based input scheme enabling users to operate Windows' Graphical User Interface by producing hums and fricatives through regular microphones. Hierarchically arranged menus are utilized so that only a minimal number of different actions is required at a time. The proposed scheme was found to be accurate and to respond promptly compared to other sound-based schemes. Being able to select from multiple item-selecting modes helped reduce the average time needed to complete tasks in the test scenarios to almost half the time needed when the tasks were performed solely through cursor movements. Still, improvements in helping users select the most appropriate modes for desired tasks should improve the overall usability of the proposed scheme.
Proof of cipher text ownership based on convergence encryption
NASA Astrophysics Data System (ADS)
Zhong, Weiwei; Liu, Zhusong
2017-08-01
Cloud storage systems save disk space and bandwidth through deduplication technology, but this technology has been targeted by security attacks: an attacker can obtain ownership of the original file from the server using only its hash value. To address these security problems and the differing security requirements of files in cloud storage systems, an efficient, information-theoretically secure proof-of-ownership scheme is proposed. The scheme protects data through convergent encryption and uses an improved block-level proof-of-ownership scheme, enabling block-level client-side deduplication to achieve efficient and secure cloud storage deduplication.
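Convergent encryption derives the key from the content itself, so identical plaintexts always deduplicate to identical ciphertexts. The sketch below uses a SHA-256 keystream as a stand-in cipher purely for illustration; a real deployment would use a proper block cipher, and the paper's block-level proof-of-ownership protocol is not shown.

```python
import hashlib

def _keystream(key, length):
    # Counter-mode keystream from SHA-256 (illustrative stand-in cipher)
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return stream[:length]

def convergent_encrypt(data):
    # Key = hash of the plaintext, so equal files yield equal ciphertexts
    key = hashlib.sha256(data).digest()
    cipher = bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
    return key, cipher

def convergent_decrypt(key, cipher):
    return bytes(a ^ b for a, b in zip(cipher, _keystream(key, len(cipher))))

k1, c1 = convergent_encrypt(b"same file contents")
k2, c2 = convergent_encrypt(b"same file contents")
```

Because `c1 == c2`, the server can deduplicate without ever seeing the plaintext; the proof-of-ownership layer then prevents a client who knows only the hash from claiming the file.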
Coded excitation for infrared non-destructive testing of carbon fiber reinforced plastics.
Mulaveesala, Ravibabu; Venkata Ghali, Subbarao
2011-05-01
This paper proposes a Barker coded excitation for defect detection using infrared non-destructive testing. The capability of the proposed excitation scheme is highlighted with a recently introduced correlation-based post-processing approach and compared with the existing phase-based analysis, taking the signal-to-noise ratio into consideration. The applicability of the proposed scheme has been experimentally validated on a carbon fiber reinforced plastic specimen containing flat-bottom holes located at different depths.
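Correlation-based processing of a Barker-coded excitation amounts to matched filtering: cross-correlating the received signal with the code compresses the long coded burst into a sharp peak with low sidelobes, raising SNR without raising peak power. A minimal sketch on synthetic data (the delay, noise level and use of the length-7 code are illustrative):

```python
import numpy as np

# Barker-7 code: its autocorrelation peaks at 7 with sidelobes of at most 1
barker7 = np.array([1, 1, 1, -1, -1, 1, -1], dtype=float)

rng = np.random.default_rng(0)
delay = 40
received = np.zeros(128)
received[delay:delay + 7] = barker7           # echo of the coded excitation
received += 0.3 * rng.standard_normal(128)    # measurement noise

# Matched filter = cross-correlation with the known code
compressed = np.correlate(received, barker7, mode="valid")
peak = int(np.argmax(compressed))
```

The compressed output peaks at the true delay even though the raw echo amplitude is comparable to the noise, which is the SNR advantage the abstract contrasts against phase-based analysis.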
Sound beam manipulation based on temperature gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, Feng; School of Physics & Electronic Engineering, Changshu Institute of Technology, Changshu 215500; Quan, Li
Previous research with temperature gradients has shown the feasibility of controlling airborne sound propagation. Here, we present a temperature-gradient-based airborne sound manipulation scheme: a cylindrical acoustic omnidirectional absorber (AOA). The proposed AOA has high absorption performance and can almost completely absorb the incident wave. Geometric acoustics is used to obtain the refractive index distributions with different radii, which are then utilized to deduce the desired temperature gradients. Since resonant units are not applied in the scheme, its working bandwidth is expected to be broadband. The scheme is temperature-tuned and easy to realize, which is of potential interest to fields such as noise control or acoustic cloaking.
Hazard-ranking of agricultural pesticides for chronic health effects in Yuma County, Arizona.
Sugeng, Anastasia J; Beamer, Paloma I; Lutz, Eric A; Rosales, Cecilia B
2013-10-01
With thousands of pesticides registered by the United States Environmental Protection Agency, it is not feasible to sample for all pesticides applied in agricultural communities. Hazard-ranking pesticides based on use, toxicity, and exposure potential can help prioritize community-specific pesticide hazards. This study applied hazard-ranking schemes for cancer, endocrine disruption, and reproductive/developmental toxicity in Yuma County, Arizona. An existing cancer hazard-ranking scheme was modified, and novel schemes for endocrine disruption and reproductive/developmental toxicity were developed to rank pesticide hazards. The hazard-ranking schemes accounted for pesticide use, toxicity, and exposure potential based on the chemical properties of each pesticide. Pesticides were ranked as hazards with respect to each health effect, as well as to overall chronic health effects. The highest hazard-ranked pesticides for overall chronic health effects were maneb, metam-sodium, trifluralin, pronamide, and bifenthrin. The relative pesticide rankings were unique for each health effect. The highest hazard-ranked pesticides differed from those most heavily applied, as well as from those previously detected in Yuma homes over a decade ago. The most hazardous pesticides for cancer in Yuma County, Arizona also differed from those in a previous hazard-ranking applied in California. Hazard-ranking schemes that take into account pesticide use, toxicity, and exposure potential can help prioritize pesticides of greatest health risk in agricultural communities. This study is the first to provide pesticide hazard-rankings for endocrine disruption and reproductive/developmental toxicity based on use, toxicity, and exposure potential. These hazard-ranking schemes can be applied to other agricultural communities to prioritize community-specific pesticide hazards and reduce health risk. Copyright © 2013 Elsevier B.V. All rights reserved.
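A hazard-ranking of this kind reduces to scoring each pesticide on the three axes and sorting. The multiplicative scoring rule and all values below are illustrative assumptions, not the study's actual weighting or data.

```python
# Hazard-ranking sketch: score = normalised use x toxicity weight x
# exposure potential, then sort descending. All numbers are illustrative.
pesticides = {
    # name: (use_kg, toxicity_weight, exposure_potential)
    "maneb":       (1200.0, 0.9, 0.8),
    "trifluralin": (900.0,  0.7, 0.9),
    "bifenthrin":  (300.0,  0.8, 0.6),
}

max_use = max(use for use, _, _ in pesticides.values())
scores = {name: (use / max_use) * tox * expo
          for name, (use, tox, expo) in pesticides.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
```

The point of the normalisation is that a heavily applied but mildly toxic compound can still outrank a highly toxic one with little use, which is why the study's top-ranked pesticides differ from the most heavily applied ones.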
Analysis of composite ablators using massively parallel computation
NASA Technical Reports Server (NTRS)
Shia, David
1995-01-01
In this work, the feasibility of using massively parallel computation to study the response of ablative materials is investigated. Explicit and implicit finite difference methods are used on a massively parallel computer, the Thinking Machines CM-5. The governing equations are a set of nonlinear partial differential equations, developed for three sample problems: (1) transpiration cooling, (2) an ablative composite plate, and (3) restrained thermal growth testing. The transpiration cooling problem is solved using a solution scheme based solely on the explicit finite difference method. The results are compared with available analytical steady-state through-thickness temperature and pressure distributions, and good agreement between the numerical and analytical solutions is found. It is also found that a solution scheme based on the explicit finite difference method has the following advantages: it incorporates complex physics easily, results in a simple algorithm, and is easily parallelizable. However, a solution scheme of this kind needs very small time steps to maintain stability. A solution scheme based on the implicit finite difference method has the advantage that it does not require very small time steps to maintain stability. However, this kind of solution scheme has the disadvantages that complex physics cannot be easily incorporated into the algorithm and that the solution scheme is difficult to parallelize. A hybrid solution scheme is then developed to combine the strengths of the explicit and implicit finite difference methods and minimize their weaknesses. This is achieved by identifying the critical time scale associated with the governing equations and applying the appropriate finite difference method according to this critical time scale. The hybrid solution scheme is then applied to the ablative composite plate and restrained thermal growth problems.
The gas storage term is included in the explicit pressure calculation of both problems. Results from the ablative composite plate problem are compared with previous numerical results which did not include the gas storage term. It is found that the through-thickness temperature distribution is not affected much by the gas storage term. However, the through-thickness pressure and stress distributions, and the extent of chemical reactions, differ from the previous numerical results. Two types of chemical reaction models are used in the restrained thermal growth testing problem: (1) pressure-independent Arrhenius-type rate equations and (2) pressure-dependent Arrhenius-type rate equations. The numerical results are compared with experimental results, and the pressure-dependent model captures the trend better than the pressure-independent one. Finally, a performance study is done on the hybrid algorithm using the ablative composite plate problem. A good speedup is found on the CM-5: for 32 CPUs, the speedup is 20. The efficiency of the algorithm is found to be a function of the size and execution time of a given problem and of the effective parallelization of the algorithm. There also appears to be an optimum number of CPUs for a given problem.
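The explicit/implicit trade-off the hybrid scheme exploits can be seen in the two basic update rules for a 1-D diffusion model problem. This sketch (grid, diffusion numbers and boundary conditions are illustrative) shows the explicit step constrained by r <= 1/2 and the implicit step running stably far beyond that limit.

```python
import numpy as np

# Explicit (FTCS) and implicit (backward Euler) steps for u_t = alpha*u_xx,
# the two building blocks a hybrid scheme switches between depending on the
# critical time scale of the problem. r = alpha*dt/dx**2.
def explicit_step(u, r):
    # cheap and trivially parallel, but stable only for r <= 0.5
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return un

def implicit_step(u, r):
    # unconditionally stable, at the price of a (tridiagonal) linear solve
    n = u.size
    A = (np.diag((1 + 2 * r) * np.ones(n - 2))
         + np.diag(-r * np.ones(n - 3), 1) + np.diag(-r * np.ones(n - 3), -1))
    b = u[1:-1].copy()
    b[0] += r * u[0]
    b[-1] += r * u[-1]
    un = u.copy()
    un[1:-1] = np.linalg.solve(A, b)
    return un

x = np.linspace(0, 1, 41)
u0 = np.sin(np.pi * x)            # fixed-zero boundaries, decaying mode
ue = explicit_step(u0, 0.4)       # within the explicit stability limit
ui = implicit_step(u0, 5.0)       # ten times beyond it, still stable
```

Both steps damp the sine mode without oscillation; the explicit variant would blow up at r = 5, which is exactly why the paper's hybrid switches methods based on the governing time scale.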
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1982-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.
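The finite-dimensional difference-equation idea can be illustrated on a scalar linear hereditary system. The sketch below uses plain Euler stepping with a piecewise-constant history buffer, an illustrative simplification in the spirit of the piecewise constant scheme described above; the coefficients are arbitrary.

```python
import numpy as np

# Approximate the delay system x'(t) = a x(t) + b x(t - r) by a finite-
# dimensional difference equation: the state vector carries the current
# value plus N samples of history at spacing h = r / N.
a, b, r = -1.0, -0.5, 1.0
N = 100
h = r / N

state = np.ones(N + 1)       # [x(t), x(t-h), ..., x(t-r)], constant history 1
xs = []
for _ in range(300):         # integrate to t = 3
    x, x_delay = state[0], state[-1]
    x_new = x + h * (a * x + b * x_delay)          # explicit Euler step
    state = np.concatenate(([x_new], state[:-1]))  # shift the history buffer
    xs.append(x_new)
```

The infinite-dimensional delay state is replaced by an (N+1)-vector, and refining N recovers the continuous dynamics, which is the convergence question the semigroup analysis addresses.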
Huang, Ai-Mei; Nguyen, Truong
2009-04-01
In this paper, we address the problem of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas, such as occlusions and deformed structures, where no motion vector is reliable enough to be used. We also propose an adaptive frame interpolation scheme for the occlusion areas based on an analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges, and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.
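A minimal flavour of neighbourhood-based motion-vector correction (not the paper's full reliability classification or correlation model): a vector flagged as unreliable is replaced by a statistic of its neighbours, here a component-wise median on an illustrative 3x3 field.

```python
import numpy as np

# 3x3 block motion field; the centre vector is an obvious outlier against
# its neighbours, the kind of vector the reliability stage would flag.
mv = np.array([
    [[1, 0], [1, 0], [1, 0]],
    [[1, 0], [9, -7], [1, 0]],
    [[1, 0], [1, 0], [1, 1]],
], dtype=float)

def correct_outlier(mv, i, j):
    # Component-wise median of the 8 surrounding vectors (illustrative
    # stand-in for the paper's correlation-weighted correction)
    neighbours = [mv[a, b] for a in range(i - 1, i + 2)
                  for b in range(j - 1, j + 2) if (a, b) != (i, j)]
    return np.median(np.array(neighbours), axis=0)

mv[1, 1] = correct_outlier(mv, 1, 1)
```

The corrected centre vector agrees with the dominant local motion, so the interpolated block no longer pulls pixels from the wrong place.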
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation
NASA Astrophysics Data System (ADS)
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least-squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, since each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
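The B-spline machinery involved can be sketched in 1-D: interpolating with the cubic B-spline basis requires a prefilter that solves for the spline coefficients (the role played by the recursive filter described above). This generic illustration uses a dense banded solve with a crude boundary rule, not the authors' least-squares-optimised filter.

```python
import numpy as np

def bspline3(t):
    # Cubic B-spline basis function, support |t| < 2
    t = abs(t)
    if t < 1:
        return (4 - 6 * t**2 + 3 * t**3) / 6
    if t < 2:
        return (2 - t) ** 3 / 6
    return 0.0

def spline_coeffs(f):
    # Prefilter: solve (c[i-1] + 4 c[i] + c[i+1]) / 6 = f[i] so that the
    # B-spline expansion interpolates the samples exactly.
    n = len(f)
    A = (np.diag(4.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
    A[0, 0] = A[-1, -1] = 5.0        # crude boundary rule: mirror end coefficient
    return np.linalg.solve(A, 6.0 * np.asarray(f, dtype=float))

def interp(f, x):
    # Evaluate the spline at a sub-sample (sub-voxel) position x
    c = spline_coeffs(f)
    i0 = int(np.floor(x))
    return sum(c[i] * bspline3(x - i)
               for i in range(max(i0 - 1, 0), min(i0 + 3, len(c))))

xs = np.arange(20, dtype=float)
f = np.sin(0.5 * xs)
val = interp(f, 7.3)                 # sub-sample query between samples 7 and 8
```

Skipping the prefilter and using the samples directly as coefficients would smooth the signal and bias the sub-voxel position estimate, which is precisely the interpolation bias the paper's filter design targets.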
Some Aspects of Essentially Nonoscillatory (ENO) Formulations for the Euler Equations, Part 3
NASA Technical Reports Server (NTRS)
Chakravarthy, Sukumar R.
1990-01-01
An essentially nonoscillatory (ENO) formulation is described for hyperbolic systems of conservation laws. ENO approaches are based on smart interpolation to avoid spurious numerical oscillations. ENO schemes are a superset of Total Variation Diminishing (TVD) schemes. In the recent past, TVD formulations were used to construct shock capturing finite difference methods. At extremum points of the solution, TVD schemes automatically reduce to being first-order accurate discretizations locally, while away from extrema they can be constructed to be of higher order accuracy. The new framework helps construct essentially non-oscillatory finite difference methods without recourse to local reductions of accuracy to first order. Thus arbitrarily high orders of accuracy can be obtained. The basic general ideas of the new approach can be specialized in several ways and one specific implementation is described based on: (1) the integral form of the conservation laws; (2) reconstruction based on the primitive functions; (3) extension to multiple dimensions in a tensor product fashion; and (4) Runge-Kutta time integration. The resulting method is fourth-order accurate in time and space and is applicable to uniform Cartesian grids. The construction of such schemes for scalar equations and systems in one and two space dimensions is described along with several examples which illustrate interesting aspects of the new approach.
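The stencil-selection heart of ENO can be shown in a few lines: grow the interpolation stencil one cell at a time toward whichever side has the smaller difference (undivided, on a uniform grid), so the stencil never crosses a discontinuity. A minimal sketch on step data; the full scheme's reconstruction from primitive functions and Runge-Kutta stages are omitted.

```python
import numpy as np

def eno_stencil(v, i, order):
    # Start from cell i; repeatedly extend toward the side whose next
    # difference is smaller in magnitude (ties go left).
    lo = hi = i
    for _ in range(order - 1):
        n = hi - lo + 1                  # current difference order
        dl = abs(np.diff(v[lo - 1:hi + 1], n=n)[0]) if lo - 1 >= 0 else np.inf
        dr = abs(np.diff(v[lo:hi + 2], n=n)[0]) if hi + 1 < len(v) else np.inf
        if dl <= dr:
            lo -= 1
        else:
            hi += 1
    return lo, hi

v = np.where(np.arange(20) < 10, 0.0, 1.0)   # step between cells 9 and 10
left_stencil = eno_stencil(v, 8, 3)          # cell just left of the jump
right_stencil = eno_stencil(v, 11, 3)        # cell just right of the jump
```

Both selected stencils stay entirely on their own side of the jump, so the high-order interpolation sees only smooth data, which is how ENO avoids the spurious oscillations TVD schemes suppress by dropping to first order.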
Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang
We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encode the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. We illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical of heterogeneous chemical reactions.
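One way to build such a surrogate, sketched here with the simplest (Haar) wavelet: transform the series, randomly permute the detail coefficients within each scale so the per-scale statistics are preserved, then invert. This is an illustrative stand-in for the idea, not the authors' encoding.

```python
import numpy as np

def haar_levels(x):
    # Multilevel orthonormal Haar transform (length must be a power of 2)
    levels = []
    a = x.copy()
    while len(a) > 1:
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail at this scale
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation
        levels.append(d)
    return a, levels

def haar_inverse(a, levels):
    for d in reversed(levels):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(256))        # stand-in microscale series
approx, details = haar_levels(x)
shuffled = [rng.permutation(d) for d in details]  # randomize within each scale
surrogate = haar_inverse(approx, shuffled)
```

Because the transform is orthonormal and the permutation only reorders coefficients within a scale, the surrogate has exactly the original's energy at every scale while its fine-scale sequence is randomized, mirroring the "statistically equivalent, randomized" property.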
Core Genome Multilocus Sequence Typing Scheme for High-Resolution Typing of Enterococcus faecium
de Been, Mark; Pinholt, Mette; Top, Janetta; Bletz, Stefan; van Schaik, Willem; Brouwer, Ellen; Rogers, Malbert; Kraat, Yvette; Bonten, Marc; Corander, Jukka; Westh, Henrik; Harmsen, Dag
2015-01-01
Enterococcus faecium, a common inhabitant of the human gut, has emerged in the last 2 decades as an important multidrug-resistant nosocomial pathogen. Since the start of the 21st century, multilocus sequence typing (MLST) has been used to study the molecular epidemiology of E. faecium. However, due to the use of a small number of genes, the resolution of MLST is limited. Whole-genome sequencing (WGS) now allows for high-resolution tracing of outbreaks, but current WGS-based approaches lack standardization, rendering them less suitable for interlaboratory prospective surveillance. To overcome this limitation, we developed a core genome MLST (cgMLST) scheme for E. faecium. cgMLST transfers genome-wide single nucleotide polymorphism (SNP) diversity into a standardized and portable allele numbering system that is far less computationally intensive than SNP-based analysis of WGS data. The E. faecium cgMLST scheme was built using 40 genome sequences that represented the diversity of the species. The scheme consists of 1,423 cgMLST target genes. To test the performance of the scheme, we performed WGS analysis of 103 outbreak isolates from five different hospitals in the Netherlands, Denmark, and Germany. The cgMLST scheme performed well in distinguishing between epidemiologically related and unrelated isolates, even between those that had the same sequence type (ST), demonstrating the higher discriminatory power of this cgMLST scheme over that of conventional MLST. We also show that in terms of resolution, the performance of the E. faecium cgMLST scheme is equivalent to that of an SNP-based approach. In conclusion, the cgMLST scheme developed in this study facilitates rapid, standardized, and high-resolution tracing of E. faecium outbreaks. PMID:26400782
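The allele-numbering idea can be sketched as follows: each distinct sequence of a core-gene target maps to a stable identifier, and isolates are compared by counting differing alleles across the scheme's targets. The gene names and sequences below are toy values, and real cgMLST uses curated allele databases rather than raw sequence hashes.

```python
import hashlib

def allele_profile(genome):
    # genome: dict mapping target-gene name -> DNA sequence.
    # Each distinct sequence gets a stable identifier (here a hash prefix,
    # standing in for a curated allele number).
    return {gene: hashlib.sha256(seq.encode()).hexdigest()[:8]
            for gene, seq in genome.items()}

def allele_distance(p1, p2):
    # Number of target genes whose alleles differ between two isolates
    return sum(p1[g] != p2[g] for g in p1)

iso_a = {"gene1": "ATGCCA", "gene2": "GGTACA", "gene3": "TTGACC"}
iso_b = {"gene1": "ATGCCA", "gene2": "GGTACA", "gene3": "TTGACC"}  # identical
iso_c = {"gene1": "ATGCCA", "gene2": "GGTTCA", "gene3": "TTGACG"}  # 2 variants

d_ab = allele_distance(allele_profile(iso_a), allele_profile(iso_b))
d_ac = allele_distance(allele_profile(iso_a), allele_profile(iso_c))
```

Comparing 1,423 allele identifiers per isolate is far cheaper than genome-wide SNP calling, which is the computational advantage the abstract emphasises, while the distance still separates related from unrelated isolates.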
Yousaf, Sidrah; Javaid, Nadeem; Qasim, Umar; Alrajeh, Nabil; Khan, Zahoor Ali; Ahmed, Mansoor
2016-02-24
In this study, we analyse incremental cooperative communication for wireless body area networks (WBANs) with different numbers of relays. Energy efficiency (EE) and the packet error rate (PER) are investigated for different schemes. We propose a new cooperative communication scheme with three-stage relaying and compare it to existing schemes. Our proposed scheme provides reliable communication with a lower PER at the cost of surplus energy consumption. Analytical expressions for the EE of the proposed three-stage cooperative communication scheme are also derived, taking into account the effect of PER. The proposed three-stage incremental cooperation is then implemented in a network layer protocol: enhanced incremental cooperative critical data transmission in emergencies for static WBANs (EInCo-CEStat). Extensive simulations are conducted to validate the proposed scheme. Results of incremental relay-based cooperative communication protocols are compared to two existing cooperative routing protocols: cooperative critical data transmission in emergencies for static WBANs (Co-CEStat) and InCo-CEStat. It is observed from the simulation results that incremental relay-based cooperation is more energy efficient than the existing conventional cooperation protocol, Co-CEStat. The results also reveal that EInCo-CEStat proves to be more reliable, with a lower PER and higher throughput, than both of the counterpart protocols. However, InCo-CEStat has less throughput with a greater stability period and network lifetime. Due to the availability of more redundant links, EInCo-CEStat achieves a reduced packet drop rate at the cost of increased energy consumption.
Spatially Common Sparsity Based Adaptive Channel Estimation and Feedback for FDD Massive MIMO
NASA Astrophysics Data System (ADS)
Gao, Zhen; Dai, Linglong; Wang, Zhaocheng; Chen, Sheng
2015-12-01
This paper proposes a spatially common sparsity based adaptive channel estimation and feedback scheme for frequency division duplex (FDD) based massive multi-input multi-output (MIMO) systems, which adapts the training overhead and pilot design to reliably estimate and feed back the downlink channel state information (CSI) with significantly reduced overhead. Specifically, a non-orthogonal downlink pilot design is first proposed, which is very different from standard orthogonal pilots. By exploiting the spatially common sparsity of massive MIMO channels, a compressive sensing (CS) based adaptive CSI acquisition scheme is proposed, where the consumed time-slot overhead adaptively depends only on the sparsity level of the channels. Additionally, a distributed sparsity adaptive matching pursuit algorithm is proposed to jointly estimate the channels of multiple subcarriers. Furthermore, by exploiting the temporal channel correlation, a closed-loop channel tracking scheme is provided, which adaptively designs the non-orthogonal pilot according to the previous channel estimate to achieve enhanced CSI acquisition. Finally, we generalize the results of the multiple-measurement-vectors case in CS and derive the Cramer-Rao lower bound of the proposed scheme, which guides the design of the non-orthogonal pilot signals for improved performance. Simulation results demonstrate that the proposed scheme outperforms its counterparts and is capable of approaching the performance bound.
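The paper's distributed sparsity adaptive matching pursuit is not reproduced here, but the underlying CS recovery idea — a few non-orthogonal pilot observations suffice to recover a sparse channel — can be illustrated with a plain orthogonal matching pursuit (OMP) sketch. The pilot matrix, dimensions, and path positions below are illustrative assumptions, not the paper's design.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = x_s
    return x_hat, sorted(support)

rng = np.random.default_rng(0)
n_pilots, n_paths = 32, 128          # far fewer pilots than candidate taps
A = rng.standard_normal((n_pilots, n_paths)) / np.sqrt(n_pilots)
x_true = np.zeros(n_paths)
x_true[[5, 40, 90]] = [1.0, -0.8, 0.5]    # 3-sparse channel
y = A @ x_true                            # noiseless pilot observation
x_hat, support = omp(A, y, sparsity=3)
```

With 32 pilot measurements and a 3-sparse channel over 128 candidate taps, the sparse support is recovered exactly in the noiseless case — the reason adaptive CS schemes can scale training overhead with sparsity level rather than antenna count.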
A study of the spreading scheme for viral marketing based on a complex network model
NASA Astrophysics Data System (ADS)
Yang, Jianmei; Yao, Canzhong; Ma, Weicheng; Chen, Guanrong
2010-02-01
Buzzword-based viral marketing, also known as digital word-of-mouth marketing, is a marketing mode attached to carriers on the Internet, which can rapidly copy marketing information at a low cost. Viral marketing actually uses a pre-existing social network whose scale is believed to be so large, and whose structure so random, that its theoretical analysis is intractable and unmanageable. There are very few reports in the literature on how to design a spreading scheme for viral marketing on real social networks according to the traditional marketing theory or the relatively new network marketing theory. Complex network theory provides a new model for the study of large-scale complex systems, using the latest developments of graph theory and computing techniques. From this perspective, the present paper extends complex network theory and modeling to the research of general viral marketing and develops a specific spreading scheme for viral marketing, and an approach to design the scheme, based on a real complex network on the QQ instant messaging system. This approach is shown to be rather universal and can be further extended to the design of various spreading schemes for viral marketing based on different instant messaging systems.
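Spreading over a contact network of this kind is commonly modeled with an independent-cascade process; the sketch below is a generic illustration of that model, not the paper's specific QQ-based scheme. The toy graph, activation probability, and seed choice are assumptions.

```python
import random

def independent_cascade(graph, seeds, p, rng):
    """Simulate one spreading run: each newly activated node gets a
    single chance to activate each inactive neighbour with probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for nb in graph.get(node, []):
                if nb not in active and rng.random() < p:
                    active.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return active

# Toy contact network (e.g. instant-messaging buddy lists).
graph = {
    "a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"],
    "d": ["b", "c", "e"], "e": ["d"],
}
rng = random.Random(42)
# Average reach of seeding the campaign at node "a", over many runs.
runs = [len(independent_cascade(graph, ["a"], 0.5, rng)) for _ in range(1000)]
mean_reach = sum(runs) / len(runs)
```

Designing a spreading scheme then amounts to choosing seeds and carriers that maximize expected reach on the measured network.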
ERIC Educational Resources Information Center
Escalié, Guillaume; Chaliès, Sébastien
2016-01-01
This longitudinal case study examines whether a school-based training scheme that brings together different categories of teacher educators (university supervisors and cooperating teachers) engenders true collective training activity and, if so, whether this collective work contributes to pre-service teacher education. The scheme grew out of a…
BARI+: A Biometric Based Distributed Key Management Approach for Wireless Body Area Networks
Muhammad, Khaliq-ur-Rahman Raazi Syed; Lee, Heejo; Lee, Sungyoung; Lee, Young-Koo
2010-01-01
Wireless body area networks (WBAN) consist of resource constrained sensing devices just like other wireless sensor networks (WSN). However, they differ from WSN in topology, scale and security requirements. Due to these differences, key management schemes designed for WSN are inefficient and unnecessarily complex when applied to WBAN. Considering the key management issue, WBAN are also different from WPAN because WBAN can use random biometric measurements as keys. We highlight the differences between WSN and WBAN and propose an efficient key management scheme, which makes use of biometrics and is specifically designed for WBAN domain. PMID:22319333
NASA Astrophysics Data System (ADS)
Shen, Qian; Bai, Yanfeng; Shi, Xiaohui; Nan, Suqin; Qu, Lijie; Li, Hengxing; Fu, Xiquan
2017-07-01
The difference in imaging quality between different ghost imaging schemes is studied by using the coherent-mode representation of partially coherent fields. It is shown that the difference mainly relies on changes in the distribution of the decomposition coefficients of the imaged object when the light source is fixed. For a newly designed imaging scheme, we only need to obtain the distribution of the decomposition coefficients and compare it with that of an existing imaging system; one can thus predict imaging quality. By choosing several typical ghost imaging systems, we theoretically and experimentally verify our results.
NASA Astrophysics Data System (ADS)
Li, Yifan; Liang, Xihui; Lin, Jianhui; Chen, Yuejian; Liu, Jianxin
2018-02-01
This paper presents a novel signal processing scheme, a feature selection based multi-scale morphological filter (MMF), for train axle bearing fault detection. In this scheme, more than 30 feature indicators of vibration signals are calculated for axle bearings in different conditions, and the features which reflect fault characteristics most effectively and representatively are selected using the max-relevance and min-redundancy principle. Then, a filtering scale selection approach for MMF based on feature selection and grey relational analysis is proposed. The feature selection based MMF method is tested on the diagnosis of artificially created damage to rolling bearings of railway trains. Experimental results show that the proposed method has a superior performance in extracting fault features of defective train axle bearings. In addition, comparisons are performed with the kurtosis criterion based MMF and the spectral kurtosis criterion based MMF. The proposed feature selection based MMF method outperforms these two methods in the detection of train axle bearing faults.
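The max-relevance min-redundancy principle can be sketched as a greedy selection loop. As a simplification, absolute Pearson correlation stands in for the mutual information usually used in mRMR; the synthetic features below (one informative, one near-duplicate, one independent) are assumptions for illustration only.

```python
import numpy as np

def mrmr(X, y, k):
    """Greedy max-relevance min-redundancy selection using |Pearson r|
    as a stand-in for mutual information (a common simplification)."""
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(rel))]          # start with the most relevant
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            # Average redundancy against already-selected features.
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            score = rel[j] - red              # relevance minus redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
n = 200
f0 = rng.standard_normal(n)               # informative feature
f1 = f0 + 0.01 * rng.standard_normal(n)   # near-duplicate of f0 (redundant)
f2 = rng.standard_normal(n)               # independent, weakly informative
y = f0 + 0.5 * f2
X = np.column_stack([f0, f1, f2])
picked = mrmr(X, y, k=2)
```

The redundancy penalty makes the selector skip the near-duplicate feature and pick the independent one instead — the behaviour that makes the selected indicator set representative rather than repetitive.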
A summary of image segmentation techniques
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly
1993-01-01
Machine vision systems are often considered to be composed of two subsystems: low-level vision and high-level vision. Low-level vision consists primarily of image processing operations performed on the input image to produce another image with more favorable characteristics. These operations may yield images with reduced noise or cause certain features of the image to be emphasized (such as edges). High-level vision includes object recognition and, at the highest level, scene interpretation. The bridge between these two subsystems is the segmentation system. Through segmentation, the enhanced input image is mapped into a description involving regions with common features which can be used by the higher-level vision tasks. There is no unified theory of image segmentation. Instead, image segmentation techniques are basically ad hoc and differ mostly in the way they emphasize one or more of the desired properties of an ideal segmenter and in the way they balance and compromise one desired property against another. These techniques can be categorized in a number of different ways: local vs. global, parallel vs. sequential, contextual vs. noncontextual, and interactive vs. automatic. In this paper, we categorize the schemes into three main groups: pixel-based, edge-based, and region-based. Pixel-based segmentation schemes classify pixels based solely on their gray levels. Edge-based schemes first detect local discontinuities (edges) and then use that information to separate the image into regions. Finally, region-based schemes start with a seed pixel (or group of pixels) and then grow or split the seed until the original image is composed of only homogeneous regions. Because a number of survey papers are already available, we do not discuss all segmentation schemes. Rather than a survey, we take the approach of a detailed overview: we focus only on the more common approaches in order to give the reader a flavor for the variety of techniques available, yet present enough details to facilitate implementation and experimentation.
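The region-based family described above can be made concrete with a minimal region-growing sketch: start from a seed pixel and absorb 4-connected neighbours whose gray level is close to the seed's. The tiny image and tolerance are assumptions for illustration.

```python
def region_grow(image, seed, tol):
    """Grow a region from a seed pixel, absorbing 4-connected
    neighbours whose gray level is within `tol` of the seed's."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if abs(image[r][c] - seed_val) > tol:
            continue
        region.add((r, c))
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

# A 4x4 gray-level image with a bright square in the top-left corner.
img = [
    [200, 200,  10,  10],
    [200, 200,  10,  10],
    [ 10,  10,  10,  10],
    [ 10,  10,  10, 200],
]
bright = region_grow(img, seed=(0, 0), tol=30)   # the 2x2 bright block
dark = region_grow(img, seed=(2, 0), tol=30)     # the connected dark area
```

Note that the isolated bright pixel at (3, 3) is not absorbed into the bright region: region growing respects connectivity, unlike pure pixel-based thresholding.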
Adaptive Packet Combining Scheme in Three State Channel Model
NASA Astrophysics Data System (ADS)
Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak
2018-01-01
Two popular packet-combining-based error correction techniques are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: the PC scheme offers better throughput than the APC scheme but suffers from a higher packet error rate. Because the wireless channel state varies randomly over time, individual application of the selective-repeat ARQ (SR ARQ) scheme, the PC scheme, or the APC scheme cannot deliver the desired throughput; better throughput can be achieved if the transmission scheme is chosen according to the channel condition. Based on this approach, an adaptive packet combining scheme has been proposed. The proposed scheme adapts to the channel condition, carrying out transmission using the PC scheme, the APC scheme, or the SR ARQ scheme to achieve better throughput. Experimentally, the error correction capability and throughput of the proposed scheme were observed to be significantly better than those of the SR ARQ, PC, and APC schemes.
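The correction step at the heart of APC is a bitwise majority vote over an odd number of received copies of the same packet. The sketch below shows only that voting step on hypothetical bit strings; a full APC receiver also handles retransmission requests and copy buffering.

```python
def majority_combine(copies):
    """Bitwise majority vote over an odd number of received copies,
    the core correction step of aggressive packet combining (APC)."""
    assert len(copies) % 2 == 1, "majority voting needs an odd count"
    n = len(copies[0])
    return "".join(
        "1" if sum(c[i] == "1" for c in copies) > len(copies) // 2 else "0"
        for i in range(n)
    )

sent = "1011001110100101"
# Three retransmissions, each corrupted in a different bit position.
rx1 = "0011001110100101"   # bit 0 flipped
rx2 = "1011001010100101"   # bit 7 flipped
rx3 = "1011001110100001"   # bit 13 flipped
decoded = majority_combine([rx1, rx2, rx3])   # recovers `sent`
```

As long as no bit position is corrupted in a majority of copies, the vote recovers the packet without a further retransmission — which is why APC trades throughput for a lower packet error rate.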
NASA Astrophysics Data System (ADS)
Vallam, P.; Qin, X. S.
2017-10-01
Anthropogenic climate change would affect the global ecosystem and has become a worldwide concern. Numerous studies have been undertaken to determine the future trends of meteorological variables at different scales, yet significant uncertainty remains in the prediction of future climates. To examine the uncertainty arising from using different schemes to downscale meteorological variables for future horizons, projections from different statistical downscaling schemes were examined. These schemes included the statistical downscaling method (SDSM), the change-factor method incorporated with LARS-WG, and the bias-corrected disaggregation (BCD) method. Global circulation models (GCMs) based on CMIP3 (HadCM3) and CMIP5 (CanESM2) were utilized to perturb the changes in the future climate. Five study sites (i.e., Alice Springs, Edmonton, Frankfurt, Miami, and Singapore) with diverse climatic conditions were chosen for examining the spatial variability of applying various statistical downscaling schemes. The study results indicated that regions experiencing heavy precipitation intensities were most likely to show divergence between the predictions from the various statistical downscaling methods. Also, the variance computed in projecting the weather extremes indicated the uncertainty derived from the selection of downscaling tools and climate models. This study could help gain an improved understanding of the features of different downscaling approaches and the overall downscaling uncertainty.
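Of the schemes compared, the change-factor (delta) method is the simplest to state: scale observed series by the GCM's ratio of future to historical means. The sketch below shows the multiplicative monthly variant on made-up numbers; real applications work per month and variable, and additive factors are used for temperature.

```python
def change_factor_downscale(obs_monthly, gcm_hist, gcm_future):
    """Multiplicative change-factor downscaling for precipitation:
    scale observed daily values by the GCM's ratio of future to
    historical monthly means."""
    factors = [f / h if h > 0 else 1.0 for f, h in zip(gcm_future, gcm_hist)]
    return [[p * factors[m] for p in month]
            for m, month in enumerate(obs_monthly)]

# Two hypothetical months of observed daily precipitation (mm).
obs = [[5.0, 0.0, 12.0], [2.0, 8.0]]
gcm_hist = [4.0, 5.0]        # historical monthly means from the GCM
gcm_future = [6.0, 4.0]      # projected monthly means
future = change_factor_downscale(obs, gcm_hist, gcm_future)
# Month 1 is scaled by 1.5, month 2 by 0.8.
```

Because the method rescales the observed series wholesale, it preserves the observed wet-day pattern but cannot change it — one source of the divergence from SDSM-type regression schemes noted above.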
NASA Astrophysics Data System (ADS)
Farrell, Patricio; Koprucki, Thomas; Fuhrmann, Jürgen
2017-10-01
We compare three thermodynamically consistent numerical fluxes known in the literature, appearing in a Voronoï finite volume discretization of the van Roosbroeck system with general charge carrier statistics. Our discussion includes an extension of the Scharfetter-Gummel scheme to non-Boltzmann (e.g. Fermi-Dirac) statistics. It is based on the analytical solution of a two-point boundary value problem obtained by projecting the continuous differential equation onto the interval between neighboring collocation points. Hence, it serves as a reference flux. The exact solution of the boundary value problem can be approximated by computationally cheaper fluxes which modify certain physical quantities. One alternative scheme averages the nonlinear diffusion (caused by the non-Boltzmann nature of the problem), another one modifies the effective density of states. To study the differences between these three schemes, we analyze the Taylor expansions, derive an error estimate, visualize the flux error and show how the schemes perform for a carefully designed p-i-n benchmark simulation. We present strong evidence that the flux discretization based on averaging the nonlinear diffusion has an edge over the scheme based on modifying the effective density of states.
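The reference point for all three fluxes discussed above is the classical Scharfetter-Gummel flux for Boltzmann statistics, built from the Bernoulli function. The sketch below uses scaled (dimensionless) units and one common sign convention; the paper's non-Boltzmann extensions modify this expression.

```python
import math

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), with the removable singularity at 0."""
    if abs(x) < 1e-10:
        return 1.0 - x / 2.0          # series expansion near 0
    return x / math.expm1(x)

def sg_flux(n_k, n_k1, dpsi):
    """Classical Scharfetter-Gummel flux between neighboring
    collocation points in scaled units; dpsi is the potential
    difference psi_{k+1} - psi_k in units of the thermal voltage."""
    return bernoulli(-dpsi) * n_k - bernoulli(dpsi) * n_k1

# Zero field: B(0) = 1, so the flux reduces to pure central diffusion.
flux0 = sg_flux(2.0, 1.0, 0.0)          # n_k - n_{k+1} = 1.0
# Strong field: the flux becomes upwinded, ~ |dpsi| times one endpoint.
flux_drift = sg_flux(2.0, 1.0, -20.0)
```

The scheme's appeal is exactly this limit behaviour: it interpolates smoothly between central diffusion and upwinded drift without a tunable parameter.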
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sciarrino, Fabio (Dipartimento di Fisica and Consorzio Nazionale Interuniversitario per le Scienze Fisiche della Materia, Universita 'La Sapienza', Rome 00185); De Martini, Francesco
The optimal phase-covariant quantum cloning machine (PQCM) broadcasts the information associated to an input qubit into a multiqubit system, exploiting a partial a priori knowledge of the input state. This additional a priori information leads to a higher fidelity than for universal cloning. The present article first analyzes different innovative schemes to implement the 1 → 3 PQCM. The method is then generalized to any 1 → M machine for an odd value of M by a theoretical approach based on the general angular momentum formalism. Finally, different experimental schemes, based either on linear or nonlinear methods and valid for single-photon polarization-encoded qubits, are discussed.
Ramesh, Tejavathu; Kumar Panda, Anup; Shiva Kumar, S
2015-07-01
In this research study, two adaptation mechanism schemes are proposed to replace the conventional proportional-integral controller (PIC) in a model reference adaptive system (MRAS) speed estimator for speed-sensorless direct torque and flux control (DTFC) of an induction motor drive (IMD). The first adaptation mechanism scheme is based on a Type-1 fuzzy logic controller (T1FLC), which is used to achieve a high-performance sensorless drive in both transient and steady-state conditions. However, Type-1 fuzzy sets are certain and unable to work effectively when a higher degree of uncertainty is present in the system, as can be caused by a sudden change in speed, different load disturbances, or process noise. Therefore, a new Type-2 fuzzy logic controller (T2FLC) based adaptation mechanism scheme is proposed, which better handles the higher degree of uncertainty, improves the performance, and is robust to various load torques and sudden changes in speed. The detailed performance of the various adaptation mechanism schemes is evaluated in a MATLAB/Simulink environment in speed-sensor and speed-sensorless modes of operation when the IMD is operating under different conditions, such as no-load, load, and sudden change in speed. To validate the different control approaches, the system is also implemented on a real-time system, and adequate results are reported for its validation. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Design of a Variational Multiscale Method for Turbulent Compressible Flows
NASA Technical Reports Server (NTRS)
Diosady, Laslo Tibor; Murman, Scott M.
2013-01-01
A spectral-element framework is presented for the simulation of subsonic compressible high-Reynolds-number flows. The focus of the work is maximizing the efficiency of the computational schemes to enable unsteady simulations with a large number of spatial and temporal degrees of freedom. A collocation scheme is combined with optimized computational kernels to provide a residual evaluation with computational cost independent of order of accuracy up to 16th order. The optimized residual routines are used to develop a low-memory implicit scheme based on a matrix-free Newton-Krylov method. A preconditioner based on the finite-difference diagonalized ADI scheme is developed which maintains the low memory of the matrix-free implicit solver, while providing improved convergence properties. Emphasis on low memory usage throughout the solver development is leveraged to implement a coupled space-time DG solver which may offer further efficiency gains through adaptivity in both space and time.
Quantum coordinated multi-point communication based on entanglement swapping
NASA Astrophysics Data System (ADS)
Du, Gang; Shang, Tao; Liu, Jian-wei
2017-05-01
In a quantum network, adjacent nodes can communicate with each other point to point by using pre-shared Einstein-Podolsky-Rosen (EPR) pairs, and furthermore remote nodes can establish entanglement channels by using quantum routing among intermediate nodes. However, with the rapid development of quantum networks, the demand for various modes of message transmission among nodes inevitably emerges. In order to realize this goal and extend quantum networks, we propose a quantum coordinated multi-point communication scheme based on entanglement swapping. The scheme takes full advantage of EPR pairs between adjacent nodes and performs multi-party entanglement swapping to transmit messages. Considering various demands of communication, all nodes work cooperatively to realize different message transmission modes, including one-to-many, many-to-one, and one-to-some. Scheme analysis shows that the proposed scheme can flexibly organize a coordinated group and efficiently use EPR resources, while meeting the basic security requirement under the condition of coordinated communication.
Positivity-preserving numerical schemes for multidimensional advection
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Macvean, M. K.; Lock, A. P.
1993-01-01
This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first-order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found; but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.
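The interplay of higher-order fluxes and limiting described above can be illustrated in one dimension with a minmod-limited MUSCL-type upwind step. This is a generic sketch of flux limiting, not UTOPIA itself (which is third-order and multidimensional); the grid, Courant number, and pulse are assumptions.

```python
def minmod(a, b):
    """Minmod limiter: zero at extrema, else the smaller slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def advect_step(u, c):
    """One conservative step of limited upwind advection on a periodic
    grid, Courant number 0 < c <= 1. The face value is the upwind cell
    plus a limited correction toward second order (MUSCL-type)."""
    n = len(u)
    flux = [0.0] * n                      # flux[i] sits at face i+1/2
    for i in range(n):
        um, u0, up = u[i - 1], u[i], u[(i + 1) % n]
        slope = minmod(u0 - um, up - u0)
        flux[i] = c * (u0 + 0.5 * (1.0 - c) * slope)
    # Conservative update: cell change = flux difference at its faces.
    return [u[i] - (flux[i] - flux[i - 1]) for i in range(n)]

u = [0.0] * 20
u[5:9] = [1.0, 1.0, 1.0, 1.0]            # square pulse
mass0 = sum(u)
for _ in range(40):
    u = advect_step(u, 0.5)
mass1 = sum(u)
positive = all(v >= -1e-12 for v in u)   # limiter preserves positivity
bounded = all(v <= 1.0 + 1e-12 for v in u)
```

The telescoping flux differences conserve total mass exactly, and the limiter keeps the solution within its initial bounds — the 1D properties whose multidimensional generalization the report identifies as the hard problem.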
A DFIG Islanding Detection Scheme Based on Reactive Power Infusion
NASA Astrophysics Data System (ADS)
Wang, M.; Liu, C.; He, G. Q.; Li, G. H.; Feng, K. H.; Sun, W. W.
2017-07-01
Much research has been done on islanding detection for photovoltaic (PV) power systems in recent years. By comparison, much less attention has been paid to islanding in wind turbines, even though wind turbines can work in islanding conditions for quite a long period, which can be harmful to equipment and cause safety hazards. This paper presents and examines a doubly fed induction generator (DFIG) islanding detection scheme based on feedback of reactive power and frequency. It uses a trigger signal for reactive power infusion, obtained by dividing the voltage total harmonic distortion (THD) by the voltage THD of the last cycle, to avoid deterioration of power quality. The scheme uses a reactive power current feedback loop to amplify the frequency differences between islanding and normal conditions. Simulation results show that the DFIG islanding detection scheme is effective.
Optical authentication based on moiré effect of nonlinear gratings in phase space
NASA Astrophysics Data System (ADS)
Liao, Meihua; He, Wenqi; Wu, Jiachen; Lu, Dajiang; Liu, Xiaoli; Peng, Xiang
2015-12-01
An optical authentication scheme based on the moiré effect of nonlinear gratings in phase space is proposed. According to the phase function relationship of the moiré effect in phase space, an arbitrary authentication image can be encoded into two nonlinear gratings which serve as the authentication lock (AL) and the authentication key (AK). The AL is stored in the authentication system while the AK is assigned to the authorized user. The authentication procedure can be performed using an optoelectronic approach, while the design process is accomplished by a digital approach. Furthermore, this optical authentication scheme can be extended for multiple users with different security levels. The proposed scheme can not only verify the legality of a user identity, but can also discriminate and control the security levels of legal users. Theoretical analysis and simulation experiments are provided to verify the feasibility and effectiveness of the proposed scheme.
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
Nsiah-Boateng, Eric; Asenso-Boadi, Francis; Dsane-Selby, Lydia; Andoh-Adjei, Francis-Xavier; Otoo, Nathaniel; Akweongo, Patricia; Aikins, Moses
2017-02-06
A robust medical claims review system is crucial for addressing fraud and abuse and ensuring the financial viability of health insurance organisations. This paper assesses the claims adjustment rate of the paper- and electronic-based claims reviews of the National Health Insurance Scheme (NHIS) in Ghana. The study was a cross-sectional comparative assessment of paper- and electronic-based claims reviews of the NHIS. Medical claims of subscribers for the year 2014 were requested from the claims directorate and analysed. Proportions of claims adjusted by the paper- and electronic-based claims reviews were determined for each type of healthcare facility. Bivariate analyses were also conducted to test for differences in claims adjustments between healthcare facility types, and between the two claims reviews. The electronic-based review made an overall adjustment of 17.0% from GHS10.09 million (USD2.64 m) claims cost whilst the paper-based review adjusted 4.9% from a total of GHS57.50 million (USD15.09 m) claims cost received, and the difference was significant (p < 0.001). However, there were no significant differences in claims cost adjustment rate between healthcare facility types by the electronic-based (p = 0.0656) and by the paper-based reviews (p = 0.6484). The electronic-based review adjusted significantly higher claims cost than the paper-based claims review. Scaling up the electronic-based review to cover claims from all accredited care providers could reduce spurious claims cost to the scheme and ensure long term financial sustainability.
NASA Astrophysics Data System (ADS)
Liu, Zhengguang; Li, Xiaoli
2018-05-01
In this article, we present a new second-order finite difference discrete scheme for a fractal mobile/immobile transport model based on an equivalent transformative Caputo formulation. The new transformative formulation takes the singular kernel away to make the integral calculation more efficient. Furthermore, this definition is also effective where α is a positive integer. Moreover, the T-Caputo derivative helps us to increase the convergence rate of the discretization of the α-order (0 < α < 1) Caputo derivative from O(τ^{2-α}) to O(τ^{3-α}), where τ is the time step. For the numerical analysis, a Crank-Nicolson finite difference scheme to solve the fractal mobile/immobile transport model is introduced and analyzed. The unconditional stability and a priori estimates of the scheme are given rigorously. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments to support our theoretical analysis.
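The baseline rate the paper improves on, O(τ^{2-α}), is that of the standard L1 approximation of the Caputo derivative. The sketch below implements that baseline (not the paper's T-Caputo scheme) and checks it against exact Caputo derivatives: for linear f the L1 scheme is exact, and for f(t) = t² the error is O(τ^{2-α}).

```python
import math

def caputo_l1(f_vals, alpha, tau):
    """L1 approximation of the Caputo derivative of order alpha
    (0 < alpha < 1) at the last grid point t_n = n*tau, with
    accuracy O(tau^{2-alpha}); f_vals = [f(0), f(tau), ..., f(n*tau)]."""
    n = len(f_vals) - 1
    coef = tau ** (-alpha) / math.gamma(2.0 - alpha)
    total = 0.0
    for k in range(n):
        # Weights b_k = (k+1)^{1-alpha} - k^{1-alpha} from piecewise-
        # linear interpolation of f inside the memory integral.
        b_k = (k + 1) ** (1.0 - alpha) - k ** (1.0 - alpha)
        total += b_k * (f_vals[n - k] - f_vals[n - k - 1])
    return coef * total

alpha, tau, n = 0.5, 0.01, 100            # evaluate at t = 1
grid = [i * tau for i in range(n + 1)]
# Exact for linear f: D^alpha t = t^{1-alpha} / Gamma(2-alpha).
approx_lin = caputo_l1(grid, alpha, tau)
exact_lin = 1.0 / math.gamma(2.0 - alpha)
# For f(t) = t^2: D^alpha t^2 = 2 t^{2-alpha} / Gamma(3-alpha), at t = 1.
approx_sq = caputo_l1([t * t for t in grid], alpha, tau)
exact_sq = 2.0 / math.gamma(3.0 - alpha)
```

The full-history sum above is exactly the memory cost that kernel transformations of the T-Caputo kind aim to tame.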
Optical sectioning in induced coherence tomography with frequency-entangled photons
NASA Astrophysics Data System (ADS)
Vallés, Adam; Jiménez, Gerard; Salazar-Serrano, Luis José; Torres, Juan P.
2018-02-01
We demonstrate a different scheme to perform optical sectioning of a sample based on the concept of induced coherence [Zou et al., Phys. Rev. Lett. 67, 318 (1991), 10.1103/PhysRevLett.67.318]. This can be viewed as a different type of optical coherence tomography scheme where the varying reflectivity of the sample along the direction of propagation of an optical beam translates into changes of the degree of first-order coherence between two beams. As a practical advantage the scheme allows probing the sample with one wavelength and measuring photons with another wavelength. In a bio-imaging scenario, this would result in a deeper penetration into the sample because of probing with longer wavelengths, while still using the optimum wavelength for detection. The scheme proposed here could achieve submicron axial resolution by making use of nonlinear parametric sources with broad spectral bandwidth emission.
Comparison of Grouping Schemes for Exposure to Total Dust in Cement Factories in Korea.
Koh, Dong-Hee; Kim, Tae-Woo; Jang, Seung Hee; Ryu, Hyang-Woo; Park, Donguk
2015-08-01
The purpose of this study was to evaluate grouping schemes for exposure to total dust in cement industry workers using non-repeated measurement data. In total, 2370 total dust measurements taken from nine Portland cement factories in 1995-2009 were analyzed. Various grouping schemes were generated based on work process, job, factory, or average exposure. To characterize the variance components of each grouping scheme, we developed mixed-effects models with a B-spline time trend incorporated as fixed effects and a grouping variable incorporated as a random effect. Using the estimated variance components, elasticity was calculated. To compare the prediction performances of different grouping schemes, 10-fold cross-validation tests were conducted, and root mean squared errors and pooled correlation coefficients were calculated for each grouping scheme. The five exposure groups created a posteriori by ranking job and factory combinations according to average dust exposure showed the best prediction performance and highest elasticity among the various grouping schemes. Our findings suggest that a grouping method based on ranking job and factory combinations would be the optimal choice in this population. Our grouping method may aid exposure assessment efforts in similar occupational settings, minimizing the misclassification of exposures. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
Control of parallel manipulators using force feedback
NASA Technical Reports Server (NTRS)
Nanua, Prabjot
1994-01-01
Two control schemes are compared for parallel robotic mechanisms actuated by hydraulic cylinders. One scheme, the 'rate based scheme', uses only position and rate information for feedback. The second scheme, the 'force based scheme', also feeds back force information. The force control scheme is shown to improve the response over the rate control scheme. It is a simple constant gain control scheme better suited to parallel mechanisms. The force control scheme can be easily modified to account for the dynamic forces on the end effector. This paper presents the results of a computer simulation of both the rate and force control schemes. The gains in the force based scheme can be individually adjusted in all three directions, whereas an adjustment in just one direction of the rate based scheme directly affects the other two directions.
NASA Astrophysics Data System (ADS)
Yakushin, Sergey S.; Wolf, Alexey A.; Dostovalov, Alexandr V.; Skvortsov, Mikhail I.; Wabnitz, Stefan; Babin, Sergey A.
2018-07-01
Fiber Bragg gratings (FBGs) with different reflection wavelengths have been inscribed in different cores of a dual-core fiber section. The effect of fiber bending on the FBG reflection spectra has been studied. Various interrogation schemes are presented, including a single-end scheme based on cross-talk between the cores that uses only standard optical components. Simultaneous interrogation of the FBGs in both cores achieves a bending sensitivity of 12.8 pm/m-1 that is free of temperature and strain influence. The technology enables the development of real-time bending sensors with high spatial resolution based on series of FBGs with different wavelengths inscribed along the multi-core fiber.
Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris
2017-12-15
Storage is important for flood mitigation and non-point source pollution control. However, finding a cost-effective design scheme for storage tanks is complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains the preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and the optimization is fast based on the preliminary scheme. The optimized scheme is better than the preliminary scheme for reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme for storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
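The second-stage search can be sketched with a minimal generalized pattern search (coordinate polling with step halving); the quadratic objective below is a hypothetical stand-in for the SWMM-based flood/TSS/cost evaluation:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-4, max_iter=500):
    """Minimal generalized pattern search: poll +/- step along each
    coordinate, accept improvements, halve the step when stuck."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# Hypothetical smooth cost over two tank volumes (not the SWMM model)
cost = lambda v: (v[0] - 3.0) ** 2 + 2.0 * (v[1] - 1.0) ** 2
best, best_cost = pattern_search(cost, [0.0, 0.0])
```

Starting from the analytical module's preliminary scheme (here, the point `[0.0, 0.0]`) shortens this polling loop considerably.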
Primdahl, Jørgen; Vesterager, Jens Peter; Finn, John A; Vlahos, George; Kristensen, Lone; Vejre, Henrik
2010-06-01
Agri-Environment Schemes (AES) to maintain or promote environmentally-friendly farming practices were implemented on about 25% of all agricultural land in the EU by 2002. This article analyses and discusses the actual and potential use of impact models in supporting the design, implementation and evaluation of AES. Impact models identify and establish the causal relationships between policy objectives and policy outcomes. We review and discuss the role of impact models at different stages in the AES policy process, and present results from a survey of impact models underlying 60 agri-environmental schemes in seven EU member states. We distinguished among three categories of impact models (quantitative, qualitative or common sense), depending on the degree of evidence in the formal scheme description, additional documents, or key person interviews. The categories of impact models used mainly depended on whether scheme objectives were related to natural resources, biodiversity or landscape. A higher proportion of schemes dealing with natural resources (primarily water) were based on quantitative impact models, compared to those concerned with biodiversity or landscape. Schemes explicitly targeted either on particular parts of individual farms or specific areas tended to be based more on quantitative impact models compared to whole-farm schemes and broad, horizontal schemes. We conclude that increased and better use of impact models has significant potential to improve efficiency and effectiveness of AES. (c) 2009 Elsevier Ltd. All rights reserved.
SU-C-207B-03: A Geometrical Constrained Chan-Vese Based Tumor Segmentation Scheme for PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, L; Zhou, Z; Wang, J
Purpose: Accurate segmentation of tumor in PET is challenging when part of tumor is connected with normal organs/tissues with no difference in intensity. Conventional segmentation methods, such as thresholding or region growing, cannot generate satisfactory results in this case. We proposed a geometrical constrained Chan-Vese based scheme to segment tumor in PET for this special case by considering the similarity between two adjacent slices. Methods: The proposed scheme performs segmentation in a slice-by-slice fashion where an accurate segmentation of one slice is used as the guidance for segmentation of the remaining slices. For a slice in which the tumor is not directly connected to organs/tissues with similar intensity values, a conventional clustering-based segmentation method under the user's guidance is used to obtain an exact tumor contour. This is set as the initial contour, and the Chan-Vese algorithm is applied for segmenting the tumor in the next adjacent slice by adding constraints on tumor size, position and shape information. This procedure is repeated until the last slice of PET containing tumor. The proposed geometrical constrained Chan-Vese based algorithm was implemented in Matlab and its performance was tested on several cervical cancer patients where cervix and bladder are connected with similar activity values. The positive predictive values (PPV) are calculated to characterize the segmentation accuracy of the proposed scheme. Results: Tumors were accurately segmented by the proposed method even when they are connected with bladder in the image with no difference in intensity. The average PPVs were 0.9571±0.0355 and 0.9894±0.0271 for 17 slices and 11 slices of PET from two patients, respectively. Conclusion: We have developed a new scheme to segment tumor in PET images for the special case that the tumor is quite similar to or connected to normal organs/tissues in the image. The proposed scheme can provide a reliable way for segmenting tumors.
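The slice-to-slice size/position constraint can be sketched as a simple acceptance test on a candidate mask against the previous slice's segmentation (the thresholds are illustrative; the actual scheme embeds such constraints in the Chan-Vese evolution):

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def consistent_with_previous(candidate, prev_mask,
                             max_area_change=0.3, max_shift=5.0):
    """Accept the candidate only if its area and centroid stay close
    to the previous slice's tumor segmentation."""
    a_prev, a_cand = prev_mask.sum(), candidate.sum()
    area_ok = abs(a_cand - a_prev) <= max_area_change * a_prev
    shift = np.linalg.norm(centroid(candidate) - centroid(prev_mask))
    return bool(area_ok and shift <= max_shift)

prev = np.zeros((10, 10), dtype=int)
prev[3:7, 3:7] = 1                      # 16-pixel tumor on slice k
good = prev.copy()                      # similar mask on slice k+1
leaked = np.ones((10, 10), dtype=int)   # mask leaked into the bladder
```

A mask that leaks into an adjacent organ of similar intensity fails the area test even though no intensity edge separates the two.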
Hackett, Julia; Glidewell, Liz; West, Robert; Carder, Paul; Doran, Tim; Foy, Robbie
2014-10-25
A range of policy initiatives have addressed inequalities in healthcare and health outcomes. Local pay-for-performance schemes for primary care have been advocated as means of enhancing clinical ownership of the quality agenda and better targeting local need compared with national schemes such as the UK Quality and Outcomes Framework (QOF). We investigated whether professionals' experience of a local scheme in one English National Health Service (NHS) former primary care trust (PCT) differed from that of the national QOF in relation to the goal of reducing inequalities. We conducted retrospective semi-structured interviews with primary care professionals implementing the scheme and those involved in its development. We purposively sampled practices with varying levels of population socio-economic deprivation and achievement. Interviews explored perceptions of the scheme and indicators, likely mechanisms of influence on practice, perceived benefits and harms, and how future schemes could be improved. We used a framework approach to analysis. Thirty-eight professionals from 16 general practices and six professionals involved in developing local indicators participated. Our findings cover four themes: ownership, credibility of the indicators, influences on behaviour, and exacerbated tensions. We found little evidence that the scheme engendered any distinctive sense of ownership or experiences different from the national scheme. Although the indicators and their evidence base were seldom actively questioned, doubts were expressed about their focus on health promotion given that eventual benefits relied upon patient action and availability of local resources. Whilst practices serving more affluent populations reported status and patient benefit as motivators for participating in the scheme, those serving more deprived populations highlighted financial reward. 
The scheme exacerbated tensions between patient and professional consultation agendas, general practitioners benefitting directly from incentives and nurses who did much of the work, and practices serving more and less affluent populations which faced different challenges in achieving targets. The contentious nature of pay-for-performance was not necessarily reduced by local adaptation. Those developing future schemes should consider differential rewards and supportive resources for practices serving more deprived populations, and employing a wider range of levers to promote professional understanding and ownership of indicators.
Development of Mycoplasma synoviae (MS) core genome multilocus sequence typing (cgMLST) scheme.
Ghanem, Mostafa; El-Gazzar, Mohamed
2018-05-01
Mycoplasma synoviae (MS) is a poultry pathogen with reported increased prevalence and virulence in recent years. MS strain identification is essential for prevention, control efforts and epidemiological outbreak investigations. Multiple multilocus based sequence typing schemes have been developed for MS, yet the resolution of these schemes could be limited for outbreak investigation. The cost of whole genome sequencing has become close to that of sequencing the seven MLST targets; however, there is no standardized method for typing MS strains based on whole genome sequences. In this paper, we propose a core genome multilocus sequence typing (cgMLST) scheme as a standardized and reproducible method for typing MS based on whole genome sequences. A diverse set of 25 MS whole genome sequences were used to identify 302 core genome genes as cgMLST targets (35.5% of the MS genome), and 44 whole genome sequences of MS isolates from six countries in four continents were typed using this scheme. cgMLST based phylogenetic trees displayed a high degree of agreement with core genome SNP based analysis and available epidemiological information. cgMLST allowed evaluation of two conventional MLST schemes of MS. The high discriminatory power of cgMLST allowed differentiation between samples of the same conventional MLST type. cgMLST represents a standardized, accurate, highly discriminatory, and reproducible method for differentiation between MS isolates. Like conventional MLST, it provides a stable and expandable nomenclature, allowing for comparing and sharing the typing results between different laboratories worldwide. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
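The core idea of cgMLST typing can be sketched as hashing each core-gene sequence to an allele identifier and counting allele differences between profiles (gene names and sequences below are made up for illustration):

```python
import hashlib

def allele_id(sequence):
    """Stable allele identifier derived from a gene's sequence, so any
    laboratory computing it from the same sequence gets the same ID."""
    return hashlib.sha1(sequence.upper().encode()).hexdigest()[:8]

def cgmlst_profile(core_genes):
    """core_genes: mapping of core-gene name -> nucleotide sequence."""
    return {gene: allele_id(seq) for gene, seq in core_genes.items()}

def allelic_distance(p1, p2):
    """Number of shared loci at which the two profiles differ."""
    return sum(p1[g] != p2[g] for g in set(p1) & set(p2))

isolate_a = cgmlst_profile({"gene1": "ATGC", "gene2": "GGCC", "gene3": "TTAA"})
isolate_b = cgmlst_profile({"gene1": "ATGC", "gene2": "GGCT", "gene3": "TTAA"})
```

Hash-based allele naming is what makes the nomenclature stable and expandable across laboratories; the real scheme uses 302 core-gene targets rather than three.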
Guo, Xi; Wang, Min; Wang, Lu; Wang, Yao; Chen, Tingting; Wu, Pan; Chen, Min; Liu, Bin; Feng, Lu
2018-01-01
Serotyping based on surface polysaccharide antigens is important for the clinical detection and epidemiological surveillance of pathogens. Polysaccharide gene clusters (PSgcs) are typically responsible for the diversity of bacterial surface polysaccharides. Through whole-genome sequencing and analysis, eight putative PSgc types were identified in 23 Enterobacter aerogenes strains from several geographic areas, allowing us to present the first molecular serotyping system for E. aerogenes. A conventional antigenic scheme was also established and correlated well with the molecular serotyping system that was based on PSgc genetic variation, indicating that PSgc-based molecular typing and immunological serology provide equally valid results. Further, a multiplex Luminex-based array was developed, and a double-blind test was conducted with 97 clinical specimens from Shanghai, China, to validate our array. The results of these analyses indicated that strains containing PSgc4 and PSgc7 comprised the predominant groups. We then examined 86 publicly available E. aerogenes strain genomes and identified an additional seven novel PSgc types, with PSgc10 being the most abundant type. In total, our study identified 15 PSgc types in E. aerogenes, providing the basis for a molecular serotyping scheme. From these results, differing epidemic patterns were identified between strains that were predominant in different regions. Our study highlights the feasibility and reliability of a serotyping system based on PSgc diversity, and for the first time, presents a molecular serotyping system, as well as an antigenic scheme for E. aerogenes, providing the basis for molecular diagnostics and epidemiological surveillance of this important emerging pathogen. PMID:29616012
On fuzzy semantic similarity measure for DNA coding.
Ahmad, Muneer; Jung, Low Tang; Bhuiyan, Md Al-Amin
2016-02-01
A coding measure scheme numerically translates the DNA sequence to a time domain signal for protein coding regions identification. A number of coding measure schemes based on numerology, geometry, fixed mapping, statistical characteristics and chemical attributes of nucleotides have been proposed in recent decades. Such coding measure schemes lack the biologically meaningful aspects of nucleotide data and hence do not significantly discriminate coding regions from non-coding regions. This paper presents a novel fuzzy semantic similarity measure (FSSM) coding scheme centering on FSSM codons' clustering and genetic code context of nucleotides. Certain natural characteristics of nucleotides i.e. appearance as a unique combination of triplets, preserving special structure and occurrence, and ability to own and share density distributions in codons have been exploited in FSSM. The nucleotides' fuzzy behaviors, semantic similarities and defuzzification based on the center of gravity of nucleotides revealed a strong correlation between nucleotides in codons. The proposed FSSM coding scheme attains a significant enhancement in coding regions identification i.e. 36-133% as compared to other existing coding measure schemes tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. Copyright © 2015 Elsevier Ltd. All rights reserved.
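The center-of-gravity defuzzification step that FSSM relies on reduces a fuzzy membership profile to a crisp value; a minimal sketch:

```python
import numpy as np

def centroid_defuzzify(values, memberships):
    """Center-of-gravity defuzzification: the membership-weighted
    mean of the support values."""
    x = np.asarray(values, dtype=float)
    mu = np.asarray(memberships, dtype=float)
    return float((x * mu).sum() / mu.sum())
```

This is the generic defuzzification operator only; FSSM's membership functions over nucleotide densities in codons are specific to the paper.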
Entropy Splitting for High Order Numerical Simulation of Compressible Turbulence
NASA Technical Reports Server (NTRS)
Sandham, N. D.; Yee, H. C.; Kwak, Dochan (Technical Monitor)
2000-01-01
A stable high order numerical scheme for direct numerical simulation (DNS) of shock-free compressible turbulence is presented. The method is applicable to general geometries. It contains no upwinding, artificial dissipation, or filtering. Instead the method relies on the stabilizing mechanisms of an appropriate conditioning of the governing equations and the use of compatible spatial difference operators for the interior points (interior scheme) as well as the boundary points (boundary scheme). An entropy splitting approach splits the inviscid flux derivatives into conservative and non-conservative portions. The spatial difference operators satisfy a summation by parts condition leading to a stable scheme (combined interior and boundary schemes) for the initial boundary value problem using a generalized energy estimate. A Laplacian formulation of the viscous and heat conduction terms on the right hand side of the Navier-Stokes equations is used to ensure that any tendency to odd-even decoupling associated with central schemes can be countered by the fluid viscosity. A special formulation of the continuity equation is used, based on similar arguments. The resulting methods are able to minimize spurious high frequency oscillation producing nonlinear instability associated with pure central schemes, especially for long time integration simulation such as DNS. For validation purposes, the methods are tested in a DNS of compressible turbulent plane channel flow at a friction Mach number of 0.1 where a very accurate turbulence data base exists. It is demonstrated that the methods are robust in terms of grid resolution, and in good agreement with incompressible channel data, as expected at this Mach number. Accurate turbulence statistics can be obtained with moderate grid sizes. Stability limits on the range of the splitting parameter are determined from numerical tests.
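The compatible interior/boundary difference operators can be illustrated with the classical second-order summation-by-parts first-derivative matrix (central differencing in the interior, one-sided closures at the two boundary points); the scheme in the paper uses higher-order analogues of this construction:

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Second-order SBP first-derivative operator: central differences
    in the interior with first-order one-sided boundary closures."""
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5, 0.5
    D[0, 0], D[0, 1] = -1.0, 1.0
    D[-1, -2], D[-1, -1] = -1.0, 1.0
    return D / h

h = 0.1
x = h * np.arange(6)
D = sbp_first_derivative(6, h)
dfdx = D @ (2.0 * x + 1.0)   # exact for linear functions
```

The summation-by-parts property of such operators is what allows the generalized energy estimate that makes the combined interior and boundary scheme stable.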
Sanjeevi, Sathish K P; Zarghami, Ahad; Padding, Johan T
2018-04-01
Various curved no-slip boundary conditions available in the literature improve the accuracy of lattice Boltzmann simulations compared to the traditional staircase approximation of curved geometries. Usually, the required unknown distribution functions emerging from the solid nodes are computed based on the known distribution functions using interpolation or extrapolation schemes. On using such curved boundary schemes, there will be mass loss or gain at each time step during the simulations, especially apparent at high Reynolds numbers, which is called mass leakage. Such an issue becomes severe in periodic flows, where the mass leakage accumulation would affect the computed flow fields over time. In this paper, we examine mass leakage of the most well-known curved boundary treatments for high-Reynolds-number flows. Apart from the existing schemes, we also test different forced mass conservation schemes and a constant density scheme. The capability of each scheme is investigated and, finally, recommendations for choosing a proper boundary condition scheme are given for stable and accurate simulations.
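Mass leakage and a forced-conservation fix can be sketched directly: track the relative drift of the total mass, and rescale the distribution functions to restore the initial mass (a generic uniform correction for illustration, not any specific scheme from the paper):

```python
import numpy as np

def relative_mass_drift(total_mass_history):
    """Relative drift of total mass over time, the quantity that
    accumulates when a curved boundary scheme leaks mass."""
    m = np.asarray(total_mass_history, dtype=float)
    return (m - m[0]) / m[0]

def enforce_total_mass(f, m0):
    """Uniformly rescale distribution functions so total mass is m0."""
    return f * (m0 / f.sum())

f = np.array([0.4, 0.3, 0.2])      # toy distribution functions, sum 0.9
f_fixed = enforce_total_mass(f, 1.0)
```

In a periodic flow the drift returned by `relative_mass_drift` grows monotonically without some such correction, which is why the accumulation matters there most.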
Ranak, M S A Noman; Azad, Saiful; Nor, Nur Nadiah Hanim Binti Mohd; Zamli, Kamal Z
2017-01-01
Due to recent advancements and appealing applications, the purchase rate of smart devices is increasing rapidly. In parallel, security-related threats and attacks on these devices are also increasing. As a result, a considerable number of attacks have been noted in the recent past. To resist these attacks, many password-based authentication schemes have been proposed. However, most of these schemes are not screen size independent, whereas smart devices come in different sizes. Specifically, they are not suitable for miniature smart devices due to the small screen size and/or lack of full sized keyboards. In this paper, we propose a new screen size independent password-based authentication scheme, which also offers an affordable defense against shoulder surfing, brute force, and smudge attacks. In the proposed scheme, the Press Touch (PT)-a.k.a., Force Touch in Apple's MacBook, Apple Watch, ZTE's Axon 7 phone; 3D Touch in iPhone 6 and 7; and so on-is transformed into a new type of code, named Press Touch Code (PTC). We design and implement three variants of it, namely mono-PTC, multi-PTC, and multi-PTC with Grid, on the Android Operating System. An in-lab experiment and a comprehensive survey of 105 participants demonstrate the effectiveness of the proposed scheme.
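The press-to-code idea behind mono-PTC can be sketched by quantizing a sequence of normalized press forces into a two-symbol code (the threshold and symbol names here are illustrative, not the paper's encoding):

```python
def press_touch_code(forces, threshold=0.5):
    """Map normalized press forces to a code string: 'S' for a soft
    press below the threshold, 'H' for a hard press at or above it."""
    return "".join("H" if f >= threshold else "S" for f in forces)

def authenticate(entered_forces, stored_code):
    """Accept only if the entered press sequence reproduces the code."""
    return press_touch_code(entered_forces) == stored_code

stored = press_touch_code([0.2, 0.9, 0.7])   # user enrolls the code 'SHH'
```

Because the secret lives in the press pressure rather than in on-screen positions, an observer watching the screen (shoulder surfing) or examining smudges learns little, and no full-sized keyboard is needed.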
A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes
Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.
2017-02-05
Implicit–Explicit (IMEX) schemes are widely used for time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
Adaptive Q–V Scheme for the Voltage Control of a DFIG-Based Wind Power Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jinho; Seok, Jul-Ki; Muljadi, Eduard
Wind generators within a wind power plant (WPP) will produce different amounts of active power because of the wake effect, and therefore, they have different reactive power capabilities. This paper proposes an adaptive reactive power to the voltage (Q-V) scheme for the voltage control of a doubly fed induction generator (DFIG)-based WPP. In the proposed scheme, the WPP controller uses a voltage control mode and sends a voltage error signal to each DFIG. The DFIG controller also employs a voltage control mode utilizing the adaptive Q-V characteristics depending on the reactive power capability such that a DFIG with a larger reactive power capability will inject more reactive power to ensure fast voltage recovery. Test results indicate that the proposed scheme can recover the voltage within a short time, even for a grid fault with a small short-circuit ratio, by making use of the available reactive power of a WPP and differentiating the reactive power injection in proportion to the reactive power capability. This will, therefore, help to reduce the additional reactive power and ensure fast voltage recovery.
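The proportional-sharing idea, where a DFIG with more reactive headroom injects more, can be sketched as a gain that scales with the available capability and an output limit at that capability (the gain constant and values are illustrative, not the published control law):

```python
def dfig_q_injection(v_err, q_avail, k0=2.0):
    """Reactive power command for one DFIG: the Q-V gain is
    proportional to its available reactive capability, and the
    output is clipped to that capability."""
    q = k0 * q_avail * v_err
    return max(-q_avail, min(q_avail, q))
```

For the same voltage error, a lightly loaded turbine (larger `q_avail`) contributes a proportionally larger share of the plant's reactive response, which is the mechanism behind the fast voltage recovery reported.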
A New Scheme for the Design of Hilbert Transform Pairs of Biorthogonal Wavelet Bases
NASA Astrophysics Data System (ADS)
Shi, Hongli; Luo, Shuqian
2010-12-01
In designing the Hilbert transform pairs of biorthogonal wavelet bases, it has been shown that the requirements of the equal-magnitude responses and the half-sample phase offset on the lowpass filters are the necessary and sufficient condition. In this paper, the relationship between the phase offset and the vanishing moment difference of biorthogonal scaling filters is derived, which implies a simple way to choose the vanishing moments so that the phase response requirement can be satisfied structurally. The magnitude response requirement is approximately achieved by a constrained optimization procedure, where the objective function and constraints are all expressed in terms of the auxiliary filters of scaling filters rather than the scaling filters directly. Generally, the calculation burden in the design implementation will be less than that of the current schemes. The integral of magnitude response difference between the primal and dual scaling filters has been chosen as the objective function, which expresses the magnitude response requirements in the whole frequency range. Two design examples illustrate that the biorthogonal wavelet bases designed by the proposed scheme are very close to Hilbert transform pairs.
NASA Astrophysics Data System (ADS)
Miller, V. M.; Semiatin, S. L.; Szczepanski, C.; Pilchak, A. L.
2018-06-01
The ability to predict the evolution of crystallographic texture during hot work of titanium alloys in the α + β temperature regime is of great significance to numerous engineering disciplines; however, research efforts are complicated by the rapid changes in phase volume fractions and flow stresses with temperature in addition to topological considerations. The viscoplastic self-consistent (VPSC) polycrystal plasticity model is employed to simulate deformation in the two phase field. Newly developed parameter selection schemes utilizing automated optimization based on two different error metrics are considered. In the first optimization scheme, which is commonly used in the literature, the VPSC parameters are selected based on the quality of fit between experimental and simulated flow curves at six hot-working temperatures. Under the second newly developed scheme, parameters are selected to minimize the difference between the simulated and experimentally measured α textures after accounting for the β → α transformation upon cooling. It is demonstrated that both methods result in good qualitative matches for the experimental α phase texture, but texture-based optimization results in a substantially better quantitative orientation distribution function match.
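The flow-curve-matching calibration can be illustrated with a toy one-pair fit: recover sigma = K * rate^m from measured flow stresses by linear least squares in log space (the actual VPSC calibration optimizes many more parameters against curves at six temperatures):

```python
import numpy as np

def fit_power_law_flow(strain_rates, stresses):
    """Least-squares fit of sigma = K * rate**m in log-log space."""
    m, log_k = np.polyfit(np.log(strain_rates), np.log(stresses), 1)
    return np.exp(log_k), m

rates = np.array([0.1, 1.0, 10.0])
stresses = 5.0 * rates ** 0.2          # synthetic, noise-free flow data
K, m = fit_power_law_flow(rates, stresses)
```

The texture-based scheme replaces this stress misfit with a misfit between simulated and measured orientation distributions, which is why it matches the texture itself more closely.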
DOE Office of Scientific and Technical Information (OSTI.GOV)
Usman, Yasir; Kim, Jinho; Muljadi, Eduard
Wake effects cause wind turbine generators (WTGs) within a wind power plant (WPP) to produce different levels of active power and, consequently, to have different reactive power capabilities. Further, the impedance between a WTG and the point of interconnection (POI), which depends on the distance between them, impacts the WPP's reactive power injection capability at the POI. This paper proposes a voltage control scheme for a WPP based on the available reactive current of the doubly-fed induction generators (DFIGs) and its impacts on the POI to improve the reactive power injection capability of the WPP. In this paper, a design strategy for modifying the gain of the DFIG controller is suggested, and the comprehensive properties of these control gains are investigated. In the proposed scheme, the WPP controller, which operates in a voltage control mode, sends the command signal to the DFIGs based on the voltage difference at the POI. The DFIG controllers, which operate in a voltage control mode, employ a proportional controller with a limiter. The gain of the proportional controller is adjusted depending on the available reactive current of the DFIG and the series impedance between the DFIG and the POI. The performance of the proposed scheme is validated for various disturbances such as a reactive load connection and grid fault using an EMTP-RV simulator. Furthermore, simulation results demonstrate that the proposed scheme promptly recovers the POI voltage by injecting more reactive power after a disturbance than the conventional scheme.
NASA Astrophysics Data System (ADS)
Li, Xin; Zeng, Mingjian; Wang, Yuan; Wang, Wenlan; Wu, Haiying; Mei, Haixia
2016-10-01
Different choices of control variables in variational assimilation can bring about different influences on the analyzed atmospheric state. Based on the WRF model's three-dimensional variational assimilation system, this study compares the behavior of two momentum control variable options, streamfunction and velocity potential (ψ-χ) versus horizontal wind components (U-V), in radar wind data assimilation for a squall line case that occurred in Jiangsu Province on 24 August 2014. The wind increment from the single observation test shows that the ψ-χ control variable scheme produces negative increments in the neighborhood around the observation point because streamfunction and velocity potential preserve integrals of velocity. On the contrary, the U-V control variable scheme objectively reflects the information of the observation itself. Furthermore, radial velocity data from 17 Doppler radars in eastern China are assimilated. Compared to the impact of conventional observations, the assimilation of radar radial velocity based on the U-V control variable scheme significantly improves the mesoscale dynamic field in the initial condition. The enhanced low-level jet stream, water vapor convergence and low-level wind shear result in better squall line forecasting. However, the ψ-χ control variable scheme generates a discontinuous wind field and unrealistic convergence/divergence in the analyzed field, which lead to a degraded precipitation forecast.
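The difference between the two momentum control variables comes down to the derivative relationship between the streamfunction and the winds; a minimal finite-difference sketch of u = -dpsi/dy, v = dpsi/dx shows how a single-point psi increment spreads into the surrounding wind field rather than staying at the observation point:

```python
import numpy as np

def winds_from_streamfunction(psi, spacing=1.0):
    """Nondivergent winds from a streamfunction on a regular grid:
    u = -dpsi/dy, v = dpsi/dx (rows are y, columns are x)."""
    dpsi_dy, dpsi_dx = np.gradient(psi, spacing)
    return -dpsi_dy, dpsi_dx

# A single-point psi increment produces wind increments at the
# neighboring points, and none at the point itself.
psi = np.zeros((5, 5))
psi[2, 2] = 1.0
u, v = winds_from_streamfunction(psi)
```

With U-V control variables the increment would instead sit exactly where the observation is, which is the behavior the single observation test in the paper contrasts.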
NASA Astrophysics Data System (ADS)
Zhang, Yuchao; Gan, Chaoqin; Gou, Kaiyu; Xu, Anni; Ma, Jiamin
2018-01-01
A DBA scheme based on a load balance algorithm (LBA) and a wavelength recycle mechanism (WRM) for multi-wavelength upstream transmission is proposed in this paper. According to their 1 Gbps and 10 Gbps line rates, ONUs are grouped into different VPONs. To facilitate wavelength management, a resource pool is proposed to record wavelength states. To enable quantitative analysis, a mathematical model describing the metro-access network (MAN) environment is presented. For the 10G-EPON upstream, the load balance algorithm is designed to ensure fair load distribution among 10G-OLTs. For the 1G-EPON upstream, the wavelength recycle mechanism is designed to share the remaining wavelengths. Finally, the effectiveness of the proposed scheme is demonstrated by simulation and analysis.
A controlled variation scheme for convection treatment in pressure-based algorithm
NASA Technical Reports Server (NTRS)
Shyy, Wei; Thakur, Siddharth; Tucker, Kevin
1993-01-01
Convection effects and source terms are two primary sources of difficulty in computing the turbulent reacting flows typically encountered in propulsion devices. The present work aims to elucidate the individual as well as the collective roles of convection and source terms in the fluid flow equations, and to devise appropriate treatments and implementations to improve our current capability of predicting such flows. A controlled variation scheme (CVS) has been under development in the context of a pressure-based algorithm; it adaptively regulates the amount of numerical diffusivity, relative to a central difference scheme, according to the variation in the local flow field. Both the basic concepts and a pragmatic assessment are presented to highlight the status of this work.
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite-dimensional state equation has been approximated by a finite-dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation and the other involving spline functions, are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589.
Usman, Yasir; Kim, Jinho; Muljadi, Eduard; ...
2016-01-01
Wake effects cause wind turbine generators (WTGs) within a wind power plant (WPP) to produce different levels of active power and, consequently, different reactive power capabilities. Further, the impedance between a WTG and the point of interconnection (POI), which depends on the distance between them, impacts the WPP's reactive power injection capability at the POI. This paper proposes a voltage control scheme for a WPP based on the available reactive current of the doubly-fed induction generators (DFIGs) and its impact at the POI to improve the reactive power injection capability of the WPP. A design strategy for modifying the gain of the DFIG controller is suggested, and the comprehensive properties of these control gains are investigated. In the proposed scheme, the WPP controller, which operates in a voltage control mode, sends the command signal to the DFIGs based on the voltage difference at the POI. The DFIG controllers, which operate in a voltage control mode, employ a proportional controller with a limiter. The gain of the proportional controller is adjusted depending on the available reactive current of the DFIG and the series impedance between the DFIG and the POI. The performance of the proposed scheme is validated for various disturbances, such as a reactive load connection and a grid fault, using an EMTP-RV simulator. Furthermore, simulation results demonstrate that the proposed scheme promptly recovers the POI voltage by injecting more reactive power after a disturbance than the conventional scheme.
NASA Astrophysics Data System (ADS)
Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Schweiger, Gustav
2009-05-01
A novel technique for the label-free analysis of nanoparticles, including biomolecules, using optical microcavity resonances of whispering-gallery-type modes is being developed. Schemes based on a microsphere laser-melted onto the tip of a standard single-mode optical fiber, as well as on a free-microsphere matrix, have been developed. Using a calibration principle of ultra-high-resolution spectroscopy based on such a scheme, the method is being adapted for microbial applications. The sensitivity of the developed schemes to refractive index changes has been tested by monitoring the magnitude of the spectral shift of the whispering-gallery modes. Water solutions of ethanol, glucose, vitamin C and biotin have been used. Other schemes using similar principles (stand-alone, array and matrix microsphere resonators, and liquid-core optical ring resonators) are also under development. The influence of the coupling gap on energy coupling, resonance quality and frequency of the whispering-gallery modes has been investigated. An optimum gap for sensing applications has been identified at half-maximum energy coupling, where both the Q factor and the coupling efficiency are high and the resonance frequency is little affected by gap variation. The developed schemes have been demonstrated to be a promising technology platform for sensitive, lab-on-chip sensors, which can be used to develop diagnostic tools for different biological molecules (e.g., proteins, oligonucleotides, oligosaccharides, lipids, small molecules, viral particles and cells) and in different experimental contexts (e.g., proteomics, genomics, drug discovery, and membrane studies).
CT liver volumetry using geodesic active contour segmentation with a level-set algorithm
NASA Astrophysics Data System (ADS)
Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard
2010-03-01
Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on five steps. First, an anisotropic smoothing filter was applied to portal-venous-phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. Using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface to fit the liver boundary more precisely. The liver volume was calculated from the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. The automated liver volumes obtained were compared with those manually traced by a radiologist, used as the gold standard. The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetry based on the automated scheme agreed excellently with the gold-standard manual volumetry (intra-class correlation coefficient of 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
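The speed-function step of such a pipeline can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a Gaussian filter stands in for the anisotropic smoothing, and the function name, `alpha` parameter, and synthetic phantom are assumptions:

```python
import numpy as np
from scipy import ndimage

def boundary_speed(image, sigma=2.0, alpha=1.0):
    """Edge-stopping speed map g = 1 / (1 + alpha * |grad I|^2).

    g is near 1 in flat regions and drops at strong edges, so a
    propagating front (fast marching, then a geodesic active contour)
    slows down at the organ boundary. Gaussian smoothing here is a
    simple stand-in for the anisotropic filter in the abstract.
    """
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    grad_mag2 = gx**2 + gy**2
    return 1.0 / (1.0 + alpha * grad_mag2)

# Synthetic "CT slice": a bright disk (organ) on a dark background.
yy, xx = np.mgrid[:64, :64]
phantom = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15**2).astype(float)
g = boundary_speed(phantom)
```

The front evolves fastest in the disk interior (g close to 1) and is held back at the disk rim, which is what lets the level-set refinement lock onto the boundary.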
Update schemes of multi-velocity floor field cellular automaton for pedestrian dynamics
NASA Astrophysics Data System (ADS)
Luo, Lin; Fu, Zhijian; Cheng, Han; Yang, Lizhong
2018-02-01
Modeling pedestrian movement is an interesting problem in both statistical physics and computational physics. Update schemes of cellular automaton (CA) models for pedestrian dynamics govern the schedule of pedestrian movement. Different update schemes make the models behave in different ways, which should be carefully recalibrated. In this paper, we therefore investigated the influence of four different update schemes, namely the parallel/synchronous scheme, the random scheme, the ordered-sequential scheme and the shuffled scheme, on pedestrian dynamics. A multi-velocity floor field cellular automaton (FFCA) accounting for changes in pedestrians' moving properties along walking paths and for the heterogeneity of pedestrians' walking abilities was used. Only the parallel scheme requires collision detection and resolution, which makes it behave quite differently from the other update schemes. For pedestrian evacuation under the parallel scheme, the evacuation time is enlarged and the differences in pedestrians' walking abilities are better reflected. In front of a bottleneck, such as an exit, the parallel scheme leads to a longer congestion period and a more dispersive density distribution. The exit flow and the space-time distributions of density and velocity show significant discrepancies across the four update schemes when simulating pedestrian flow with high desired velocity. Update schemes may have no influence on pedestrians' simulated tendency to follow others, but the sequential and shuffled update schemes may enhance the effect of pedestrians' familiarity with their environments.
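The distinction between the update schemes can be made concrete with a toy 1-D sketch (illustrative only; the paper's FFCA is 2-D and multi-velocity, and the collision rule here, lower agent id wins, is an assumption):

```python
import random

def step(positions, desired_moves, scheme, rng):
    """One update step of a toy 1-D pedestrian CA under different schemes.

    positions: dict agent -> cell; desired_moves: dict agent -> target cell.
    'parallel' applies all moves at once and must resolve conflicts
    (here the lower-id agent wins a contested cell); the sequential
    schemes never create conflicts because occupancy is re-checked
    move by move.
    """
    occupied = set(positions.values())
    if scheme == "parallel":
        claims = {}
        for a in sorted(positions):           # conflict resolution by id
            t = desired_moves[a]
            if t not in occupied and t not in claims:
                claims[t] = a
        for t, a in claims.items():
            positions[a] = t
    else:
        order = sorted(positions)             # ordered-sequential default
        if scheme == "random":
            order = [rng.choice(order) for _ in order]   # with replacement
        elif scheme == "shuffled":
            order = rng.sample(order, len(order))        # a new permutation
        for a in order:
            t = desired_moves[a]
            if t not in set(positions.values()):
                positions[a] = t
    return positions

# Two agents both want cell 2: under the parallel scheme the conflict
# is resolved explicitly and exactly one of them advances.
rng = random.Random(0)
pos = step({0: 0, 1: 1}, {0: 2, 1: 2}, "parallel", rng)
```

Under the sequential schemes the same situation never becomes a conflict: whichever agent is updated first takes the cell, and the second simply finds it occupied.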
Inferring product healthfulness from nutrition labelling. The influence of reference points.
van Herpen, Erica; Hieke, Sophie; van Trijp, Hans C M
2014-01-01
Despite considerable research on nutrition labelling, it has proven difficult to find a front-of-pack label which is informative about product healthfulness across various situations. This study examines the ability of different types of nutrition labelling schemes (multiple traffic light label, nutrition table, GDA, logo) to communicate product healthfulness (a) across different product categories, (b) between options from the same product category, and (c) when viewed in isolation and in comparison with another product. Results of two experiments in Germany and The Netherlands show that a labelling scheme with reference point information at the nutrient level (e.g., the traffic light label) can achieve all three objectives. Although other types of labelling schemes are also capable of communicating healthfulness, labelling schemes lacking reference point information (e.g., nutrition tables) are less effective when no comparison product is available, and labelling schemes based on overall product healthfulness within the category (e.g., logos) can diminish consumers' ability to differentiate between categories, leading to a potential misinterpretation of product healthfulness. None of the labels affected food preferences.
Inferring Product Healthfulness from Nutrition Labelling: The Influence of Reference Points.
Herpen, Erica van; Hieke, Sophie; van Trijp, Hans C M
2013-10-25
Despite considerable research on nutrition labelling, it has proven difficult to find a front-of-pack label which is informative about product healthfulness across various situations. This study examines the ability of different types of nutrition labelling schemes (multiple traffic light label, nutrition table, GDA, logo) to communicate product healthfulness (a) across different product categories, (b) between options from the same product category, and (c) when viewed in isolation and in comparison with another product. Results of two experiments in Germany and The Netherlands show that a labelling scheme with reference point information at the nutrient level (e.g., the traffic light label) can achieve all three objectives. Although other types of labelling schemes are also capable of communicating healthfulness, labelling schemes lacking reference point information (e.g., nutrition tables) are less effective when no comparison product is available, and labelling schemes based on overall product healthfulness within the category (e.g., logos) can diminish consumers' ability to differentiate between categories, leading to a potential misinterpretation of product healthfulness. None of the labels affected food preferences. Copyright © 2013. Published by Elsevier Ltd.
Schemes of detecting nuclear spin correlations by dynamical decoupling based quantum sensing
NASA Astrophysics Data System (ADS)
Ma, Wen-Long; Liu, Ren-Bao
Single-molecule sensitivity of nuclear magnetic resonance (NMR) and angstrom resolution of magnetic resonance imaging (MRI) are among the greatest challenges in magnetic microscopy. Recent developments in dynamical decoupling (DD) enhanced diamond quantum sensing have enabled NMR of single nuclear spins and nanoscale NMR. Similar to conventional NMR and MRI, current DD-based quantum sensing utilizes the frequency fingerprints of target nuclear spins. Such schemes, however, cannot resolve different nuclear spins that have the same noise frequency, or differentiate different types of correlations in nuclear spin clusters. Here we show that the first limitation can be overcome by using wavefunction fingerprints of target nuclear spins, which are much more sensitive than the frequency fingerprints to the weak hyperfine interaction between the targets and a sensor, while the second can be overcome by a new design of two-dimensional DD sequences composed of two sets of periodic DD sequences with different periods, which can be independently set to match two different transition frequencies. Our schemes not only offer an approach to breaking the resolution limit set by frequency gradients in conventional MRI, but also provide a standard approach to correlation spectroscopy for single-molecule NMR.
The Benard problem: A comparison of finite difference and spectral collocation eigenvalue solutions
NASA Technical Reports Server (NTRS)
Skarda, J. Raymond Lee; Mccaughan, Frances E.; Fitzmaurice, Nessan
1995-01-01
The application of spectral methods, using a Chebyshev collocation scheme, to solve hydrodynamic stability problems is demonstrated on the Benard problem. Implementation of the Chebyshev collocation formulation is described. The performance of the spectral scheme is compared with that of a second-order finite difference scheme. An exact solution to the Marangoni-Benard problem is used to evaluate the performance of both schemes. The error of the spectral scheme is at least seven orders of magnitude smaller than the finite difference error for a grid resolution of N = 15 (number of points used). The performance of the spectral formulation far exceeded that of the finite difference formulation for this problem. The spectral scheme required only slightly more effort to set up than the second-order finite difference scheme, which suggests that the spectral scheme may actually be faster to implement than higher-order finite difference schemes.
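The accuracy gap between the two discretizations is easy to reproduce on a simpler task, differentiating a smooth function. A sketch using the standard Chebyshev differentiation matrix (following Trefethen's well-known `cheb` construction; the comparison problem is an illustration, not the Marangoni-Benard eigenproblem of the paper):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and nodes x_j = cos(j*pi/N)
    (Trefethen, 'Spectral Methods in MATLAB', program cheb)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for diagonal
    return D, x

# Spectral derivative of exp(x) on [-1, 1] with only 16 nodes.
D, x = cheb(15)
spec_err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))

# Second-order central differences on a much finer uniform grid.
xx = np.linspace(-1.0, 1.0, 200)
h = xx[1] - xx[0]
fd = (np.exp(xx[2:]) - np.exp(xx[:-2])) / (2.0 * h)
fd_err = np.max(np.abs(fd - np.exp(xx[1:-1])))
```

Even with far fewer points, the spectral derivative is many orders of magnitude more accurate than the O(h^2) finite difference, mirroring the seven-order gap reported at N = 15.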
Yu, Shidi; Liu, Xiao; Liu, Anfeng; Xiong, Naixue; Cai, Zhiping; Wang, Tian
2018-05-10
Thanks to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after updating their program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty-cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay. (2) Since the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can instead be reduced, because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and the network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which does not affect the network lifetime, to nodes at different distances from the code source, and then provides an algorithm to construct a broadcast backbone.
Finally, a comprehensive performance analysis and simulation results show that the proposed ABRCD scheme performs better in different broadcast situations. Compared to previous schemes, the transmission delay is reduced by 41.11% to 78.42%, the number of broadcasts is reduced by 36.18% to 94.27%, and the energy utilization ratio is improved by up to 583.42%, while the network lifetime can be prolonged by up to 274.99%.
An Adaption Broadcast Radius-Based Code Dissemination Scheme for Low Energy Wireless Sensor Networks
Yu, Shidi; Liu, Xiao; Cai, Zhiping; Wang, Tian
2018-01-01
Thanks to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can acquire new functions after updating their program codes. The issue of disseminating program codes to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty-cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes: (1) with a larger broadcast radius, program codes can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay. (2) Since the ABRCD scheme adopts a larger broadcast radius for some nodes, program codes can be transmitted to more nodes in one broadcast transmission, diminishing the number of broadcasts. (3) The larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is enlarged only in areas with an energy surplus, and energy consumption in the hot-spots can instead be reduced, because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption can almost reach a balance and the network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which does not affect the network lifetime, to nodes at different distances from the code source, and then provides an algorithm to construct a broadcast backbone.
Finally, a comprehensive performance analysis and simulation results show that the proposed ABRCD scheme performs better in different broadcast situations. Compared to previous schemes, the transmission delay is reduced by 41.11% to 78.42%, the number of broadcasts is reduced by 36.18% to 94.27%, and the energy utilization ratio is improved by up to 583.42%, while the network lifetime can be prolonged by up to 274.99%. PMID:29748525
Visual privacy by context: proposal and evaluation of a level-based visualisation scheme.
Padilla-López, José Ramón; Chaaraoui, Alexandros Andre; Gu, Feng; Flórez-Revuelta, Francisco
2015-06-04
Privacy in image and video data has become an important subject since cameras are being installed in an increasing number of public and private spaces. Specifically, in assisted living, intelligent monitoring based on computer vision can allow one to provide risk detection and support services that increase people's autonomy at home. In the present work, a level-based visualisation scheme is proposed to provide visual privacy when human intervention is necessary, such as at telerehabilitation and safety assessment applications. Visualisation levels are dynamically selected based on the previously modelled context. In this way, different levels of protection can be provided, maintaining the necessary intelligibility required for the applications. Furthermore, a case study of a living room, where a top-view camera is installed, is presented. Finally, the performed survey-based evaluation indicates the degree of protection provided by the different visualisation models, as well as the personal privacy preferences and valuations of the users.
NASA Astrophysics Data System (ADS)
Ren, Jingye; Zhang, Fang; Wang, Yuying; Collins, Don; Fan, Xinxin; Jin, Xiaoai; Xu, Weiqi; Sun, Yele; Cribb, Maureen; Li, Zhanqing
2018-05-01
Understanding the impacts of aerosol chemical composition and mixing state on cloud condensation nuclei (CCN) activity in polluted areas is crucial for accurately predicting CCN number concentrations (N_CCN). In this study, we predict N_CCN under five assumed schemes of aerosol chemical composition and mixing state, based on field measurements in Beijing during the winter of 2016. Our results show that the best closure is achieved with the assumption of size-dependent chemical composition, for which sulfate, nitrate, secondary organic aerosols, and aged black carbon are internally mixed with each other but externally mixed with primary organic aerosol and fresh black carbon (external-internal size-resolved, abbreviated as the EI-SR scheme). The resulting ratios of predicted-to-measured N_CCN (R_CCN_p/m) were 0.90-0.98 under both clean and polluted conditions. The assumption of an internal mixture with bulk chemical composition (INT-BK scheme) shows good closure, with R_CCN_p/m of 1.0-1.16 under clean conditions, implying that it is adequate for CCN prediction in continental clean regions. On polluted days, assuming the aerosol is internally mixed with size-dependent chemical composition (INT-SR scheme) achieves better closure than the INT-BK scheme, due to the heterogeneity and variation in particle composition at different sizes. The improved closure achieved using the EI-SR and INT-SR assumptions highlights the importance of measuring size-resolved chemical composition for CCN predictions in polluted regions. N_CCN is significantly underestimated (with R_CCN_p/m of 0.66-0.75) when using the schemes of external mixtures with bulk (EXT-BK scheme) or size-resolved composition (EXT-SR scheme), implying that primary particles experience rapid aging and physical mixing processes in urban Beijing. However, our results show that the aerosol mixing state plays a minor role in CCN prediction when κ_org exceeds 0.1.
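The internal-mixture assumptions rest on the standard κ-Köhler framework: a volume-weighted hygroscopicity for the mixed particle, then a critical supersaturation per dry diameter. A minimal sketch (the κ values, temperature, and constants are illustrative textbook choices, not the paper's measured inputs):

```python
import math

def kappa_mix(volume_fractions, kappas):
    """ZSR volume-weighted hygroscopicity of an internally mixed
    particle -- the mixing rule behind INT-BK / INT-SR type closures."""
    return sum(f * k for f, k in zip(volume_fractions, kappas))

def critical_supersaturation(kappa, d_dry, T=298.15):
    """kappa-Koehler critical supersaturation (as a fraction) for a dry
    diameter d_dry in metres; the approximation holds for kappa >~ 0.1."""
    sigma_w = 0.072    # surface tension of water, J m^-2
    M_w = 0.018015     # molar mass of water, kg mol^-1
    rho_w = 997.0      # density of water, kg m^-3
    R = 8.314          # gas constant, J mol^-1 K^-1
    A = 4.0 * sigma_w * M_w / (R * T * rho_w)
    return math.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry**3))

# Equal volumes of sulfate (kappa ~ 0.6) and organics (kappa ~ 0.1):
k = kappa_mix([0.5, 0.5], [0.6, 0.1])        # -> 0.35
sc = critical_supersaturation(k, 100e-9)      # ~0.2% for a 100 nm particle
```

A particle activates (counts toward N_CCN) whenever the instrument supersaturation exceeds `sc`, which is why composition and mixing-state assumptions propagate directly into the predicted-to-measured ratios.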
NASA Astrophysics Data System (ADS)
Piatkowski, Marian; Müthing, Steffen; Bastian, Peter
2018-03-01
In this paper we consider discontinuous Galerkin (DG) methods for the incompressible Navier-Stokes equations in the framework of projection methods. In particular, we employ symmetric interior penalty DG methods within the second-order rotational incremental pressure correction scheme. The focus of the paper is threefold: (i) we propose a modified upwind scheme based on the Vijayasundaram numerical flux that has favourable properties in the context of DG; (ii) we present a novel postprocessing technique in the Helmholtz projection step based on an H(div) reconstruction of the pressure correction that is computed locally, is a projection in the discrete setting, and ensures that the projected velocity satisfies the discrete continuity equation exactly, which in turn provides local mass conservation of the projected velocity; (iii) numerical results demonstrate the properties of the scheme for different polynomial degrees, applied to two-dimensional problems with known solution as well as large-scale three-dimensional problems. In particular, we address second-order convergence in time of the splitting scheme as well as its long-time stability.
Convergence Acceleration for Multistage Time-Stepping Schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli L.; Rossow, C-C; Vasta, V. N.
2006-01-01
The convergence of a Runge-Kutta (RK) scheme with multigrid is accelerated by preconditioning with a fully implicit operator. With the extended stability of the Runge-Kutta scheme, CFL numbers as high as 1000 can be used. The implicit preconditioner addresses the stiffness in the discrete equations associated with stretched meshes. Numerical dissipation operators (based on the Roe scheme, a matrix formulation, and the CUSP scheme) as well as the number of RK stages are considered in evaluating the RK/implicit scheme. Both the numerical and computational efficiency of the scheme with the different dissipation operators are discussed. The RK/implicit scheme is used to solve the two-dimensional (2-D) and three-dimensional (3-D) compressible, Reynolds-averaged Navier-Stokes equations. In two dimensions, turbulent flows over an airfoil at subsonic and transonic conditions are computed. The effects of mesh cell aspect ratio on convergence are investigated for Reynolds numbers between 5.7 x 10^6 and 100.0 x 10^6. Results are also obtained for a transonic wing flow. For both 2-D and 3-D problems, the computational time of a well-tuned standard RK scheme is reduced by at least a factor of four.
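The multistage RK smoother at the heart of such solvers is compact enough to sketch. This is a generic low-storage form with a common Jameson-style 5-stage coefficient set, assumed for illustration rather than taken from the paper (which pairs the stages with an implicit preconditioner):

```python
import math

def multistage_rk(residual, u0, dt, coeffs=(0.25, 1.0 / 6.0, 0.375, 0.5, 1.0)):
    """One step of a low-storage multistage Runge-Kutta smoother:
    u^(k) = u^(0) - a_k * dt * R(u^(k-1)), k = 1..5.
    Only the initial state and the latest stage are stored."""
    u = u0
    for a in coeffs:
        u = u0 - a * dt * residual(u)
    return u

# Model problem du/dt = -u (residual R(u) = u): one step should
# closely track the exact decay factor exp(-dt).
u1 = multistage_rk(lambda u: u, 1.0, 0.1)
```

For this scalar model problem the five stages reproduce the exponential to a few parts in 10^5 at dt = 0.1; in the flow solver the same stage structure is what the implicit operator preconditions to reach CFL numbers near 1000.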
An online outlier identification and removal scheme for improving fault detection performance.
Ferdowsi, Hasan; Jagannathan, Sarangapani; Zawodniok, Maciej
2014-05-01
Measured data or states for a nonlinear dynamic system are usually contaminated by outliers. Identifying and removing outliers makes the data (or system states) more trustworthy and reliable, since outliers in the measured data (or states) can cause missed or false alarms during fault diagnosis. In addition, faults can make the system states nonstationary, which calls for a novel analytical model-based fault detection (FD) framework. In this paper, an online outlier identification and removal (OIR) scheme is proposed for a nonlinear dynamic system. Since the dynamics of the system can experience unknown changes due to faults, traditional observer-based techniques cannot be used to remove the outliers. The OIR scheme uses a neural network (NN) to estimate the actual system states from measured system states involving outliers. With this method, outlier detection is performed online at each time instant by finding the difference between the estimated and the measured states and comparing its median with its standard deviation over a moving time window. The NN weight update law in OIR is designed such that detected outliers have no effect on the state estimation, which is subsequently used for model-based fault diagnosis. In addition, since the OIR estimator cannot distinguish between faulty and healthy operating conditions, a separate model-based observer is designed for fault diagnosis, which uses the OIR scheme as a preprocessing unit to improve the FD performance. The stability analysis of both the OIR and fault diagnosis schemes is presented. Finally, a three-tank benchmark system and a simple linear system are used to verify the proposed scheme in simulations, and the scheme is then applied on an axial piston pump testbed. The scheme can be applied to nonlinear systems whose dynamics and underlying distribution of states are subject to change due to both unknown faults and operating conditions.
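The moving-window residual test can be sketched in isolation. A minimal illustration, assuming a precomputed state estimate in place of the paper's neural network, with `window` and `k` as hypothetical tuning parameters:

```python
import numpy as np

def flag_outliers(measured, estimated, window=20, k=3.0):
    """Flag sample i as an outlier when its residual deviates from the
    moving-window median by more than k moving-window standard
    deviations. The residual is (measured - estimated); in the OIR
    scheme the estimate would come from the NN state estimator."""
    r = np.asarray(measured, float) - np.asarray(estimated, float)
    flags = np.zeros(r.shape, dtype=bool)
    for i in range(len(r)):
        w = r[max(0, i - window + 1): i + 1]     # trailing window
        med, sd = np.median(w), w.std()
        if sd > 0 and abs(r[i] - med) > k * sd:
            flags[i] = True
    return flags

# Zero residual everywhere except one injected spike at sample 50.
measured = [0.0] * 100
measured[50] = 10.0
flags = flag_outliers(measured, [0.0] * 100)
```

The median keeps the reference level robust while the spike itself is in the window, so only the contaminated sample is flagged, not its neighbours.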
Beamforming Based Full-Duplex for Millimeter-Wave Communication
Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen
2016-01-01
In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256
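The zero-forcing idea for self-interference can be sketched with a generic narrowband null-space projection. This is an assumed textbook construction, not the paper's joint Tx/Rx algorithm; `h_d` (desired channel) and `H_si` (self-interference channel) are illustrative names:

```python
import numpy as np

def zf_tx_beamformer(h_d, H_si):
    """Zero-forcing transmit beamformer: project the matched-filter
    direction onto the null space of the self-interference channel
    H_si, so the beam carries signal power toward h_d while placing a
    null at the local receiver. Generic narrowband sketch."""
    # Orthonormal basis of null(H_si) from the SVD.
    U, s, Vh = np.linalg.svd(H_si)
    rank = int(np.sum(s > 1e-10))
    null_basis = Vh[rank:].conj().T           # columns span null(H_si)
    proj = null_basis @ null_basis.conj().T   # Hermitian projector
    w = proj @ h_d.conj()
    return w / np.linalg.norm(w)

# 8 Tx antennas, a 2-antenna local receiver observing the SI channel.
rng = np.random.default_rng(0)
H_si = rng.standard_normal((2, 8)) + 1j * rng.standard_normal((2, 8))
h_d = rng.standard_normal(8) + 1j * rng.standard_normal(8)
w = zf_tx_beamformer(h_d, H_si)
```

With 8 transmit antennas and a rank-2 SI channel, six dimensions remain for signal beamforming, so the SI null costs only a modest fraction of the array gain.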
A study of design approach of spreading schemes for viral marketing based on human dynamics
NASA Astrophysics Data System (ADS)
Yang, Jianmei; Zhuang, Dong; Xie, Weicong; Chen, Guangrong
2013-12-01
Before launching a real viral marketing campaign, a spreading scheme needs to be designed through simulation. Based on a categorization of spreading patterns in the real world and in models, we point out that existing research (especially Yang et al. (2010), Ref. [16]) implicitly assumes that if a user decides to post a received message (is activated), he/she will take the reposting action promptly (Prompt Action After Activation, or PAAA). A careful analysis of a real dataset, however, shows that the observed time differences between action and activation exhibit a heavy-tailed distribution. A simulation model for the heavy-tailed pattern is then proposed and run. Similarities and differences of the spreading processes between the heavy-tailed and PAAA patterns are analyzed. Consequently, a more practical design approach for spreading schemes for viral marketing on the QQ platform is proposed. The design approach can be extended and applied to contexts with non-heavy-tailed patterns and to viral marketing on other instant messaging platforms.
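In a simulation, the difference between the two patterns comes down to how the activation-to-action delay is drawn. A minimal sketch using inverse-CDF sampling from a Pareto distribution (the shape and scale values are illustrative assumptions, not fitted to the QQ dataset):

```python
import random

def activation_to_action_delay(alpha=2.5, t_min=1.0, rng=None):
    """Draw a heavy-tailed (Pareto) delay between a user being
    activated (deciding to repost) and actually reposting.
    Inverse-CDF sampling: t = t_min * u**(-1/(alpha-1)), giving a
    density tail ~ t**(-alpha). The PAAA assumption of earlier
    models corresponds to a delay of zero for every user."""
    rng = rng or random
    u = rng.random()
    return t_min * u ** (-1.0 / (alpha - 1.0))

rng = random.Random(1)
delays = [activation_to_action_delay(rng=rng) for _ in range(10000)]
```

Most draws land near `t_min`, but the heavy tail guarantees occasional very long pauses, which is what stretches the simulated cascade relative to the PAAA pattern.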
Genetic algorithms with memory- and elitism-based immigrants in dynamic environments.
Yang, Shengxiang
2008-01-01
In recent years the genetic algorithm community has shown a growing interest in studying dynamic optimization problems. Several approaches have been devised. The random immigrants and memory schemes are two major ones. The random immigrants scheme addresses dynamic environments by maintaining the population diversity while the memory scheme aims to adapt genetic algorithms quickly to new environments by reusing historical information. This paper investigates a hybrid memory and random immigrants scheme, called memory-based immigrants, and a hybrid elitism and random immigrants scheme, called elitism-based immigrants, for genetic algorithms in dynamic environments. In these schemes, the best individual from memory or the elite from the previous generation is retrieved as the base to create immigrants into the population by mutation. This way, not only can diversity be maintained but it is done more efficiently to adapt genetic algorithms to the current environment. Based on a series of systematically constructed dynamic problems, experiments are carried out to compare genetic algorithms with the memory-based and elitism-based immigrants schemes against genetic algorithms with traditional memory and random immigrants schemes and a hybrid memory and multi-population scheme. The sensitivity analysis regarding some key parameters is also carried out. Experimental results show that the memory-based and elitism-based immigrants schemes efficiently improve the performance of genetic algorithms in dynamic environments.
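The core of the elitism-based immigrants scheme fits in a few lines. A minimal sketch for bit-string individuals (the replacement policy, mutation rate, and function names are illustrative choices, not the paper's exact settings):

```python
import random

def elitism_based_immigrants(population, fitness, n_immigrants, p_mut, rng):
    """Replace the worst individuals with mutated copies of the current
    elite. Mutating the elite keeps the immigrants close to the best
    known solution while still injecting diversity, which is the idea
    behind the elitism-based immigrants scheme; the memory-based
    variant would mutate the best individual retrieved from memory
    instead."""
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[0]
    immigrants = [
        [bit ^ 1 if rng.random() < p_mut else bit for bit in elite]
        for _ in range(n_immigrants)
    ]
    # Keep the fitter part of the population, drop the worst.
    return ranked[: len(population) - n_immigrants] + immigrants

rng = random.Random(0)
pop = [[0] * 10 for _ in range(8)]
pop[0] = [1] * 10                     # one strong individual (the elite)
new_pop = elitism_based_immigrants(pop, sum, n_immigrants=3, p_mut=0.1, rng=rng)
```

Because the elite itself survives in the kept portion and the immigrants are small perturbations of it, the population tracks a moved optimum faster than with purely random immigrants.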
LSB-based Steganography Using Reflected Gray Code for Color Quantum Images
NASA Astrophysics Data System (ADS)
Li, Panchi; Lu, Aiping
2018-02-01
At present, classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. It is therefore meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography scheme using the reflected Gray code for color quantum images, with an embedding capacity of up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. Of the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using the reflected Gray code to determine the embedded bit from the secret information. Following this transformation rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences amount to almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms the previous schemes in the literature in terms of embedding capacity.
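The binary-reflected Gray code the scheme relies on can be computed classically as follows. This is only the classical bit mapping, not the quantum circuit, and the 3-bit ordering shown is a generic property of the code rather than the paper's embedding procedure.

```python
def to_gray(n: int) -> int:
    # Binary-reflected Gray code: adjacent integers differ in exactly one bit.
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    # Invert the Gray mapping by folding the higher bits back down.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# The 3-bit values 0..7 map to the Gray codewords 0,1,3,2,6,7,5,4:
gray_order = [to_gray(i) for i in range(8)]
```

The single-bit-change property is what makes Gray ordering attractive for LSB embedding: moving to an adjacent codeword perturbs only one channel's LSB.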
NASA Astrophysics Data System (ADS)
Kou, Yanbin; Liu, Siming; Zhang, Weiheng; Shen, Guansheng; Tian, Huiping
2017-03-01
We present a dynamic capacity allocation mechanism based on Quality of Service (QoS) for different mobile users (MUs) in 60 GHz radio-over-fiber (RoF) local access networks. The proposed mechanism is capable of collecting the request information of MUs to build a full list of MU capacity demands and service types at the Central Office (CO). A hybrid algorithm is introduced to implement the capacity allocation, satisfying the requirements of different MUs at different network traffic loads. Compared with the weight dynamic frames assignment (WDFA) scheme, the hybrid scheme keeps high-priority MUs at low delay while simultaneously maintaining a packet loss rate below 1%. At the same time, low-priority MUs achieve relatively better performance.
An Unequal Secure Encryption Scheme for H.264/AVC Video Compression Standard
NASA Astrophysics Data System (ADS)
Fan, Yibo; Wang, Jidong; Ikenaga, Takeshi; Tsunoo, Yukiyasu; Goto, Satoshi
H.264/AVC is the newest video coding standard. It has many new features that can easily be exploited for video encryption. In this paper, we propose a new scheme for video encryption of the H.264/AVC video compression standard. We define Unequal Secure Encryption (USE) as an approach that applies different encryption schemes (with different security strengths) to different parts of the compressed video data. The USE scheme comprises two parts: video data classification and unequal secure video data encryption. First, we classify the video data into two partitions: an important data partition and an unimportant data partition. The important data partition is small in size and receives strong protection, while the unimportant data partition is large in size and receives weaker protection. Second, we use AES as a block cipher to encrypt the important data partition and LEX as a stream cipher to encrypt the unimportant data partition. AES is the most widely used symmetric cipher and ensures high security. LEX is a new stream cipher based on AES whose computational cost is much lower. In this way, our scheme achieves both high security and low computational cost. Besides the USE scheme, we propose a low-cost design of a hybrid AES/LEX encryption module. Our experimental results show that the computational cost of the USE scheme is low (about 25% of naive encryption at Level 0 with VEA used). The hardware cost of the hybrid AES/LEX module is 4678 gates, and the AES encryption throughput is about 50 Mbps.
NASA Astrophysics Data System (ADS)
Felfelani, F.; Pokhrel, Y. N.
2017-12-01
In this study, we use in-situ observations and satellite data of soil moisture and groundwater to improve the irrigation and groundwater parameterizations in version 4.5 of the Community Land Model (CLM). The irrigation application trigger, which is based on a soil moisture deficit mechanism, is enhanced by integrating soil moisture observations and data from the Soil Moisture Active Passive (SMAP) mission, available since 2015. Further, we incorporate different irrigation application mechanisms based on schemes used in various other land surface models (LSMs) and carry out a sensitivity analysis using point simulations at two different irrigated sites in Mead, Nebraska, where data from the AmeriFlux observational network are available. We then conduct regional simulations over the entire High Plains region and evaluate model results with the available county-scale irrigation water use data. Finally, we present results of groundwater simulations by implementing a simple pumping scheme based on our previous studies. Results from implementing the irrigation parameterizations used in various LSMs show relatively large differences in the vertical soil moisture profile (e.g., 0.2 m3/m3) at point scale, which largely diminish when averaged over relatively large regions (e.g., 0.04 m3/m3 for the High Plains region). The original irrigation module in CLM 4.5 tends to overestimate soil moisture content compared to both point observations and SMAP, and results from the improved scheme linked with the groundwater pumping scheme show better agreement with the observations.
NASA Astrophysics Data System (ADS)
Kutty, Govindan; Muraleedharan, Rohit; Kesarkar, Amit P.
2018-03-01
Uncertainties in numerical weather prediction models are generally not well represented in ensemble-based data assimilation (DA) systems. The performance of an ensemble-based DA system becomes suboptimal if the sources of error are undersampled in the forecast system. The present study examines the effect of accounting for model error treatments in a hybrid ensemble transform Kalman filter/three-dimensional variational (3DVAR) DA system (hybrid) on the track forecast of two tropical cyclones, viz. Hudhud and Thane, formed over the Bay of Bengal, using the Advanced Research Weather Research and Forecasting (ARW-WRF) model. We investigated the effect of two types of model error treatment schemes and their combination on the hybrid DA system: (i) a multiphysics approach, which uses different combinations of cumulus, microphysics and planetary boundary layer schemes; (ii) the stochastic kinetic energy backscatter (SKEB) scheme, which perturbs the horizontal wind and potential temperature tendencies; and (iii) a combination of both multiphysics and SKEB schemes. Substantial improvements are noticed in the track positions of both cyclones when flow-dependent ensemble covariance is used in the 3DVAR framework. Explicit model error representation is found to be beneficial in treating underdispersive ensembles. Among the model error schemes used in this study, the combination of multiphysics and SKEB schemes outperformed the other two, with improved track forecasts for both tropical cyclones.
Eze, J I; Correia-Gomes, C; Borobia-Belsué, J; Tucker, A W; Sparrow, D; Strachan, D W; Gunn, G J
2015-01-01
Surveillance of animal diseases provides information essential for the protection of animal health and ultimately public health. The voluntary pig health schemes, implemented in the United Kingdom, are integrated systems which capture information on different macroscopic disease conditions detected in slaughtered pigs. Many of these conditions have been associated with a reduction in performance traits and consequent increases in production costs. The schemes are the Wholesome Pigs Scotland in Scotland, the BPEX Pig Health Scheme in England and Wales and the Pig Regen Ltd. health and welfare checks done in Northern Ireland. This report sets out to compare the prevalence of four respiratory conditions (enzootic pneumonia-like lesions, pleurisy, pleuropneumonia lesions and abscesses in the lung) assessed by these three pig health schemes. The seasonal variations and year trends associated with the conditions in each scheme are presented. The paper also highlights the differences in prevalence for each condition across these schemes and areas where further research is needed. A general increase in the prevalence of enzootic pneumonia-like lesions has been observed in Scotland, England and Wales since 2009, while a general decrease was observed in Northern Ireland over the years of the scheme. Pleurisy prevalence has increased since 2010 in all three schemes, whilst pleuropneumonia has been decreasing. The prevalence of abscesses in the lung has decreased in England, Wales and Northern Ireland but has increased in Scotland. This analysis highlights the value of surveillance schemes based on abattoir pathology monitoring of four respiratory lesions. The outputs at scheme level have significant value as indicators of endemic and emerging disease, and for producers and herd veterinarians in planning and evaluating herd health control programs when comparing individual farm results with national averages.
Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang
2016-01-01
Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady-state visual evoked potentials (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in such systems, EEG data have typically been divided into groups and analyzed by separate processing procedures. As a result, the interactive effects are ignored when different types of BCI tasks are executed simultaneously. In this work, we propose an improved tensor-based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and a support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also properly capture the interactive effects of simultaneous tasks. Therefore, it has great potential for use in hybrid BCI. PMID:26880873
Iteration and superposition encryption scheme for image sequences based on multi-dimensional keys
NASA Astrophysics Data System (ADS)
Han, Chao; Shen, Yuzhen; Ma, Wenlin
2017-12-01
An iteration and superposition encryption scheme for image sequences based on multi-dimensional keys is proposed for high-security, high-capacity and low-noise information transmission. The multiple images to be encrypted are transformed into phase-only images with an iterative algorithm and then encrypted by different random phases, respectively. The encrypted phase-only images undergo inverse Fourier transforms, generating new object functions. The new functions are located in different blocks and zero-padded for a sparse distribution; they then propagate to a specific region at different distances by angular spectrum diffraction and are superposed to form a single image. The single image is multiplied by a random phase in the frequency domain, and then the phase part of the frequency spectrum is truncated while the amplitude information is retained. The random phases, propagation distances and truncated phase information in the frequency domain are employed as multi-dimensional keys. The iteration processing and sparse distribution greatly reduce the crosstalk among the multiple encrypted images, while the superposition of image sequences greatly improves the capacity of the encrypted information. Several numerical experiments based on a designed optical system demonstrate that the proposed scheme can increase the encrypted information capacity and keep image transmission at a high security level.
Effects of Planetary Boundary Layer Parameterizations on CWRF Regional Climate Simulation
NASA Astrophysics Data System (ADS)
Liu, S.; Liang, X.
2011-12-01
Planetary Boundary Layer (PBL) parameterizations incorporated in CWRF (Climate extension of the Weather Research and Forecasting model) are first evaluated by comparing simulated PBL heights with observations. Among the 10 evaluated PBL schemes, 2 (CAM, UW) are new in CWRF while the other 8 are original WRF schemes. MYJ, QNSE and UW determine the PBL heights based on turbulent kinetic energy (TKE) profiles, while others (YSU, ACM, GFS, CAM, TEMF) use bulk Richardson number criteria. All TKE-based schemes (MYJ, MYNN, QNSE, UW, BouLac) substantially underestimate convective or residual PBL heights from noon toward evening, while others (ACM, CAM, YSU) capture the observed diurnal cycle well, except for the GFS, which systematically overestimates it. These differences among the schemes are representative over most areas of the simulation domain, suggesting systematic behaviors of the parameterizations. The lower PBL heights simulated by the QNSE and MYJ are consistent with their smaller Bowen ratios and heavier rainfall, while the higher PBL tops of the GFS correspond to warmer surface temperatures. The effects of PBL parameterizations on CWRF regional climate simulation are then compared. The QNSE PBL scheme yields systematically heavier rainfall almost everywhere and throughout the year; this is identified with a much greater surface Bowen ratio (smaller sensible versus larger latent heating) and wetter soil moisture than other PBL schemes. Its predecessor, the MYJ scheme, shares the same deficiency to a lesser degree. For temperature, the performance of the QNSE and MYJ schemes remains poor, with substantially larger rms errors in all seasons. The GFS PBL scheme also produces large warm biases. Pronounced sensitivities to the PBL schemes are also found in winter and spring over most areas except the southern U.S.
(Southeast, Gulf States, NAM); excluding the outliers (QNSE, MYJ, GFS) that cause extreme biases of -6 to +3°C, the differences among the schemes are still visible (±2°C), with the CAM generally more realistic. The QNSE, MYJ, GFS and BouLac PBL parameterizations are identified as obvious outliers in overall performance in representing precipitation, surface air temperature or PBL height variations. Their poor performance may result from deficiencies in physical formulations, dependences on applicable scales, or problematic numerical implementations, and requires detailed future investigation to isolate the actual cause.
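As background on the bulk Richardson criterion mentioned above, a minimal sketch: the PBL top is diagnosed as the lowest model level at which the bulk Richardson number exceeds a critical value (commonly around 0.25). The exact threshold and formulation differ between schemes, so the function below is illustrative rather than any particular scheme's implementation.

```python
G = 9.81  # gravitational acceleration, m s^-2

def pbl_height_bulk_richardson(z, theta_v, u, v, ric=0.25):
    """Lowest level where the bulk Richardson number exceeds ric.

    z: heights above ground (m); theta_v: virtual potential temperature (K);
    u, v: wind components (m/s). Surface values are taken from index 0.
    """
    for k in range(1, len(z)):
        wind2 = u[k] ** 2 + v[k] ** 2 or 1e-9  # guard against zero wind
        rib = G * (theta_v[k] - theta_v[0]) * (z[k] - z[0]) / (theta_v[0] * wind2)
        if rib > ric:
            return z[k]
    return z[-1]  # criterion never met within the profile
```

TKE-based schemes instead diagnose the PBL top from where turbulent kinetic energy falls below a threshold, which is one source of the systematic differences the abstract reports.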
NASA Astrophysics Data System (ADS)
Zhang, Chongfu; Xiao, Nengwu; Chen, Chen; Yuan, Weicheng; Qiu, Kun
2016-02-01
We propose an energy-efficient orthogonal frequency division multiplexing-based passive optical network (OFDM-PON) using adaptive sleep-mode control and dynamic bandwidth allocation. In this scheme, a bidirectional centralized algorithm named receiver and transmitter accurate sleep control and dynamic bandwidth allocation (RTASC-DBA), which has an overall bandwidth scheduling policy, is employed to enhance the energy efficiency of the OFDM-PON. The RTASC-DBA algorithm is used in an optical line terminal (OLT) to control the sleep mode of an optical network unit (ONU) and guarantee the quality of service of different services of the OFDM-PON. The obtained results show that, by using the proposed scheme, the average power consumption of the ONU is reduced by ˜40% when the normalized ONU load is less than 80%, compared with the average power consumption without the proposed scheme.
Temporal abstraction for the analysis of intensive care information
NASA Astrophysics Data System (ADS)
Hadad, Alejandro J.; Evin, Diego A.; Drozdowicz, Bartolomé; Chiotti, Omar
2007-11-01
This paper proposes a scheme for the analysis of time-stamped series data from multiple monitoring devices of intensive care units, using Temporal Abstraction concepts. The scheme aims to obtain a description of the evolution of the patient's state in an unsupervised way. The case study is based on a dataset clinically classified as Pulmonary Edema. For this dataset, a trend-based Temporal Abstraction mechanism is proposed by means of a Behaviours Base of time-stamped series, which is then used in a classification step. Combining this approach with the introduction of expert knowledge using Fuzzy Logic, and with multivariate analysis by means of Self-Organizing Maps, a state characterization model is obtained. This model can be extended to different patient groups and states. The proposed scheme makes it possible to obtain descriptions of the intermediate states through which the patient passes, which could be used to anticipate alert situations.
Performance Optimization of Priority Assisted CSMA/CA Mechanism of 802.15.6 under Saturation Regime
Shakir, Mustafa; Rehman, Obaid Ur; Rahim, Mudassir; Alrajeh, Nabil; Khan, Zahoor Ali; Khan, Mahmood Ashraf; Niaz, Iftikhar Azim; Javaid, Nadeem
2016-01-01
Due to recent developments in the field of Wireless Sensor Networks (WSNs), Wireless Body Area Networks (WBANs) have become a major area of interest for developers and researchers. The human body exhibits postural mobility, which causes distance variations, and the status of connections amongst sensors changes from time to time. One of the major requirements of WBAN is to prolong the network lifetime without compromising other performance measures, i.e., delay, throughput and bandwidth efficiency. Node prioritization is one possible way to obtain optimum performance in WBAN. The IEEE 802.15.6 CSMA/CA standard separates nodes with different user priorities based on Contention Window (CW) size: a smaller CW size is assigned to higher-priority nodes. This standard helps to reduce delay; however, it is not energy efficient. In this paper, we propose a hybrid node prioritization scheme based on IEEE 802.15.6 CSMA/CA to reduce energy consumption and maximize network lifetime. In this scheme, optimum performance is achieved by prioritizing nodes based on CW size as well as power within each user priority. Our proposed scheme reduces the average backoff time for channel access due to CW-based prioritization. Additionally, power-based prioritization within a user priority helps to minimize the required number of retransmissions. Furthermore, we compare our scheme with the IEEE 802.15.6 CSMA/CA standard (CW-assisted node prioritization) and power-assisted node prioritization under postural mobility in WBAN. Mathematical expressions are derived to determine an accurate analytical model for throughput, delay, bandwidth efficiency, energy consumption and lifetime for each node prioritization scheme. To validate the analytical model, we performed simulations in the OMNeT++/MiXiM framework.
Analytical and simulation results show that our proposed hybrid node prioritization scheme outperforms other node prioritization schemes in terms of average network delay, average throughput, average bandwidth efficiency and network lifetime. PMID:27598167
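The CW-based prioritization described above can be sketched as follows. The CWmin/CWmax pairs per user priority follow the IEEE 802.15.6 table as commonly reported (treat the exact values as an assumption to be checked against the specification), and the doubling rule here is a simplification of the standard's behavior, where CW doubles on even-numbered failed attempts up to CWmax.

```python
import random

# (CWmin, CWmax) per user priority, UP0 (lowest) .. UP7 (highest priority).
CW_TABLE = {0: (16, 64), 1: (16, 32), 2: (8, 32), 3: (8, 16),
            4: (4, 16), 5: (4, 8), 6: (2, 8), 7: (1, 4)}

def backoff_counter(priority, failures=0):
    """Draw a backoff counter for a node of the given user priority."""
    cwmin, cwmax = CW_TABLE[priority]
    # CW doubles on every even-numbered failed attempt, capped at CWmax.
    cw = min(cwmin * 2 ** (failures // 2), cwmax)
    return random.randint(1, cw)
```

A higher-priority node draws from a smaller window, so its expected backoff, and hence its channel-access delay, is lower; the paper's hybrid scheme additionally ranks nodes by transmit power within each priority class.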
Numerical viscosity and the entropy condition for conservative difference schemes
NASA Technical Reports Server (NTRS)
Tadmor, E.
1983-01-01
Consider a scalar, nonlinear conservative difference scheme satisfying the entropy condition. It is shown that difference schemes containing more numerical viscosity will necessarily converge to the unique, physically relevant weak solution of the approximated conservation equation. In particular, entropy satisfying convergence follows for E schemes - those containing more numerical viscosity than Godunov's scheme.
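As a concrete illustration of the result above, the sketch below applies the Lax-Friedrichs scheme, whose numerical viscosity coefficient Q = 1 dominates Godunov's, making it an E scheme, to a Burgers Riemann problem whose entropy solution is a rarefaction fan. Grid size, CFL number and step count are arbitrary choices for the demonstration.

```python
import numpy as np

def lax_friedrichs_burgers(u, lam, steps):
    # Conservative Lax-Friedrichs update for u_t + (u^2/2)_x = 0.
    # Its viscosity coefficient Q = 1 exceeds Godunov's, so it is an E scheme
    # and converges to the entropy-satisfying weak solution.
    flux = lambda w: 0.5 * w**2
    for _ in range(steps):
        up, um = np.roll(u, -1), np.roll(u, 1)
        u = 0.5 * (up + um) - 0.5 * lam * (flux(up) - flux(um))
    return u

x = np.linspace(-1.0, 1.0, 201)
u0 = np.where(x < 0, -1.0, 1.0)   # Riemann data: entropy solution is a fan
u = lax_friedrichs_burgers(u0.copy(), lam=0.5, steps=80)
# The profile spreads into a monotone rarefaction rather than persisting as
# an entropy-violating stationary expansion shock.
```

A scheme with too little numerical viscosity can instead lock onto the stationary expansion shock, which is exactly the failure mode the entropy condition rules out.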
You, Siming; Wang, Wei; Dai, Yanjun; Tong, Yen Wah; Wang, Chi-Hwa
2016-10-01
The compositions of food wastes and their co-gasification producer gas were compared with existing data for sewage sludge. Results showed that food wastes are more favorable than sewage sludge for co-gasification in terms of residue generation and energy output. Two decentralized gasification-based schemes were proposed to dispose of the sewage sludge and food wastes in Singapore. A Monte Carlo simulation-based cost-benefit analysis was conducted to compare the proposed schemes with the existing incineration-based scheme. The gasification-based schemes were found to be financially superior to the incineration-based scheme in terms of net present value (NPV), benefit-cost ratio (BCR), and internal rate of return (IRR). Sensitivity analysis was conducted to suggest effective measures to improve the economics of the schemes.
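A Monte Carlo cost-benefit calculation of this kind can be sketched as below. All cash-flow figures, the discount rate, and the choice of a normal perturbation on the annual benefit are invented for illustration; they are not the paper's data or distributions.

```python
import random

def npv(rate, cashflows):
    # Net present value of yearly cash flows, year 0 first.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def monte_carlo_npv(capex, annual_benefit, annual_cost, years, rate,
                    benefit_sd, n=10_000, seed=42):
    """Sample the NPV distribution under uncertain annual benefits."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        b = rng.gauss(annual_benefit, benefit_sd)   # perturbed benefit
        flows = [-capex] + [b - annual_cost] * years
        results.append(npv(rate, flows))
    return results

vals = monte_carlo_npv(capex=5e6, annual_benefit=9e5, annual_cost=2e5,
                       years=15, rate=0.05, benefit_sd=1.5e5)
p_positive = sum(v > 0 for v in vals) / len(vals)
```

The resulting distribution supports statements like "the scheme has an X% probability of a positive NPV", which is the kind of comparison the abstract draws between the gasification and incineration options; BCR and IRR can be sampled from the same cash-flow draws.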
Compact high order schemes with gradient-direction derivatives for absorbing boundary conditions
NASA Astrophysics Data System (ADS)
Gordon, Dan; Gordon, Rachel; Turkel, Eli
2015-09-01
We consider several compact high order absorbing boundary conditions (ABCs) for the Helmholtz equation in three dimensions. A technique called "the gradient method" (GM) for ABCs is also introduced and combined with the high order ABCs. GM is based on the principle of using directional derivatives in the direction of the wavefront propagation. The new ABCs are used together with the recently introduced compact sixth order finite difference scheme for variable wave numbers. Experiments on problems with known analytic solutions produced very accurate results, demonstrating the efficacy of the high order schemes, particularly when combined with GM. The new ABCs are then applied to the SEG/EAGE Salt model, showing the advantages of the new schemes.
Convergence Acceleration of Runge-Kutta Schemes for Solving the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Swanson, Roy C., Jr.; Turkel, Eli; Rossow, C.-C.
2007-01-01
The convergence of a Runge-Kutta (RK) scheme with multigrid is accelerated by preconditioning with a fully implicit operator. With the extended stability of the Runge-Kutta scheme, CFL numbers as high as 1000 can be used. The implicit preconditioner addresses the stiffness in the discrete equations associated with stretched meshes. This RK/implicit scheme is used as a smoother for multigrid. Fourier analysis is applied to determine damping properties. Numerical dissipation operators based on the Roe scheme, a matrix dissipation, and the CUSP scheme are considered in evaluating the RK/implicit scheme. In addition, the effect of the number of RK stages is examined. Both the numerical and computational efficiency of the scheme with the different dissipation operators are discussed. The RK/implicit scheme is used to solve the two-dimensional (2-D) and three-dimensional (3-D) compressible, Reynolds-averaged Navier-Stokes equations. Turbulent flows over an airfoil and wing at subsonic and transonic conditions are computed. The effects of the cell aspect ratio on convergence are investigated for Reynolds numbers between 5.7 x 10(exp 6) and 100 x 10(exp 6). It is demonstrated that the implicit preconditioner can reduce the computational time of a well-tuned standard RK scheme by a factor between four and ten.
NASA Astrophysics Data System (ADS)
Ise, Takeshi; Litton, Creighton M.; Giardina, Christian P.; Ito, Akihiko
2010-12-01
Partitioning of gross primary production (GPP) to aboveground versus belowground, to growth versus respiration, and to short versus long-lived tissues exerts a strong influence on ecosystem structure and function, with potentially large implications for the global carbon budget. A recent meta-analysis of forest ecosystems suggests that carbon partitioning to leaves, stems, and roots varies consistently with GPP and that the ratio of net primary production (NPP) to GPP is conservative across environmental gradients. To examine influences of carbon partitioning schemes employed by global ecosystem models, we used this meta-analysis-based model and a satellite-based (MODIS) terrestrial GPP data set to estimate global woody NPP and equilibrium biomass, and then compared it to two process-based ecosystem models (Biome-BGC and VISIT) using the same GPP data set. We hypothesized that different carbon partitioning schemes would result in large differences in global estimates of woody NPP and equilibrium biomass. Woody NPP estimated by Biome-BGC and VISIT was 25% and 29% higher than the meta-analysis-based model for boreal forests, with smaller differences in temperate and tropics. Global equilibrium woody biomass, calculated from model-specific NPP estimates and a single set of tissue turnover rates, was 48 and 226 Pg C higher for Biome-BGC and VISIT compared to the meta-analysis-based model, reflecting differences in carbon partitioning to structural versus metabolically active tissues. In summary, we found that different carbon partitioning schemes resulted in large variations in estimates of global woody carbon flux and storage, indicating that stand-level controls on carbon partitioning are not yet accurately represented in ecosystem models.
NASA Astrophysics Data System (ADS)
Raeli, Alice; Bergmann, Michel; Iollo, Angelo
2018-02-01
We consider problems governed by a linear elliptic equation with varying coefficients across internal interfaces. The solution and its normal derivative can undergo significant variations through these internal boundaries. We present a compact finite-difference scheme on a tree-based adaptive grid that can be efficiently solved using a natively parallel data structure. The main idea is to optimize the truncation error of the discretization scheme as a function of the local grid configuration to achieve second-order accuracy. Numerical illustrations are presented in two and three-dimensional configurations.
Huang, Qingchao; Liu, Dachang; Chen, Yinfang; Wang, Yuehui; Tan, Jun; Chen, Wei; Liu, Jianguo; Zhu, Ninghua
2018-05-14
A secure free-space optical (S-FSO) communication system based on a data fragmentation multipath transmission (DFMT) scheme is proposed and demonstrated for enhancing the security of FSO communications. By fragmenting the transmitted data and simultaneously distributing the data fragments over different atmospheric channels, the S-FSO communication system can effectively protect confidential messages from being eavesdropped. A field experiment of S-FSO communication between two buildings has been successfully undertaken, and the experimental results demonstrate the feasibility of the scheme. The transmission distance is 50 m and the maximum throughput is 1 Gb/s. We also established a theoretical model to analyze the security performance of the S-FSO communication system. To the best of our knowledge, this is the first application of a DFMT scheme in an FSO communication system.
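The data-fragmentation idea can be sketched classically: bytes are split round-robin over the parallel atmospheric channels, so a tap on any single channel recovers only an interleaved fraction of the message. The byte-level interleaving below is an assumption for illustration; the paper's actual fragmentation format is not reproduced.

```python
def fragment(data: bytes, n_channels: int) -> list:
    # Round-robin fragmentation across parallel free-space channels.
    return [data[i::n_channels] for i in range(n_channels)]

def reassemble(fragments: list) -> bytes:
    # Re-interleave the per-channel fragments at the receiver.
    n = len(fragments)
    out = bytearray(sum(len(f) for f in fragments))
    for i, frag in enumerate(fragments):
        out[i::n] = frag
    return bytes(out)

parts = fragment(b"confidential message", 3)   # one fragment per channel
```

An eavesdropper would need simultaneous access to all atmospheric paths to reconstruct the plaintext, which is the security property the field experiment demonstrates.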
Bit-rate transparent DPSK demodulation scheme based on injection locking FP-LD
NASA Astrophysics Data System (ADS)
Feng, Hanlin; Xiao, Shilin; Yi, Lilin; Zhou, Zhao; Yang, Pei; Shi, Jie
2013-05-01
We propose and demonstrate a bit-rate-transparent differential phase-shift keying (DPSK) demodulation scheme based on injection locking of a multiple-quantum-well (MQW) strained InGaAsP FP-LD. By utilizing the frequency deviation generated by phase modulation and the unstable injection locking state of a Fabry-Perot laser diode (FP-LD), DPSK to polarization shift-keying (PolSK) and PolSK to intensity modulation (IM) format conversions are realized. We analyze the bit error rate (BER) performance of this demodulation scheme. Experimental results show that different longitudinal modes, bit rates and seeding powers influence the demodulation performance. We achieve error-free DPSK signal demodulation at bit rates of 10 Gbit/s, 5 Gbit/s, 2.5 Gbit/s and 1.25 Gbit/s with the same demodulation setting.
Runge-Kutta methods combined with compact difference schemes for the unsteady Euler equations
NASA Technical Reports Server (NTRS)
Yu, Sheng-Tao
1992-01-01
Recent developments using compact difference schemes to solve the Navier-Stokes equations show spectral-like accuracy. A study was made of the numerical characteristics of various combinations of Runge-Kutta (RK) methods and compact difference schemes for calculating the unsteady Euler equations. The accuracy of finite difference schemes is assessed based on evaluations of the dissipative error. The objectives are to reduce the numerical damping and, at the same time, preserve numerical stability. While this approach has had tremendous success for steady flows, the numerical characteristics of unsteady calculations remain largely unclear. For unsteady flows, in addition to the dissipative errors, the phase velocity and harmonic content of the numerical results are of concern. As a result of the discretization procedure, the simulated unsteady flow motions actually propagate in a dispersive numerical medium. Consequently, the dispersion characteristics of the numerical schemes, which relate the phase velocity and wave number, may greatly impact the numerical accuracy. The aim is to assess the numerical accuracy of the simulated results. To this end, Fourier analysis is used to provide the dispersive correlations of various numerical schemes. First, a detailed investigation of the existing RK methods is carried out. A generalized form of an N-step RK method is derived. With this generalized form, criteria are derived for three- and four-step RK methods to be third- and fourth-order time accurate for nonlinear equations, e.g., the flow equations. These criteria are then applied to commonly used RK methods such as Jameson's 3-step and 4-step schemes and Wray's algorithm to identify the accuracy of the methods. For the spatial discretization, compact difference schemes are presented. The schemes are formulated in operator form to render them suitable for Fourier analysis. The performance of the numerical methods is shown by numerical examples.
These examples are described in detail. The third case is a two-dimensional simulation of a Lamb vortex in a uniform flow. This calculation provides a realistic assessment of various finite difference schemes in terms of the conservation of the vortex strength and the harmonic content after travelling a substantial distance. The numerical implementation of Giles' non-reflective equations coupled with the characteristic equations as the boundary condition is discussed in detail. Finally, the single-vortex calculation is extended to simulate vortex pairing. For distances between two vortices less than a threshold value, numerical results show crisp resolution of the vortex merging.
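The Fourier (dispersion) analysis described above amounts to computing each scheme's modified wavenumber: substituting a mode exp(ikx) into the difference operator and reading off the effective wavenumber it propagates. The sketch below uses three textbook first-derivative discretizations, not the paper's specific operators.

```python
import numpy as np

def modified_wavenumber(kdx, scheme):
    """Modified wavenumber k*dx seen by a discrete first-derivative operator."""
    if scheme == "central2":    # standard 2nd-order central difference
        return np.sin(kdx)
    if scheme == "central4":    # 4th-order central difference
        return (8 * np.sin(kdx) - np.sin(2 * kdx)) / 6
    if scheme == "compact4":    # 4th-order Pade-type compact scheme
        return 3 * np.sin(kdx) / (2 + np.cos(kdx))
    raise ValueError(scheme)

kdx = np.linspace(0.01, np.pi, 200)
# Phase-speed ratio c*/c = (k*dx)/kdx; a value of 1 means dispersion-free
# propagation. Track the worst-case error over the resolvable range.
err = {s: np.max(np.abs(modified_wavenumber(kdx, s) / kdx - 1)[kdx <= np.pi / 2])
       for s in ("central2", "central4", "compact4")}
```

The compact scheme tracks the exact wavenumber much further into the high-wavenumber range than the explicit differences of the same stencil width, which is the "spectral-like" behavior the abstract refers to.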
Survey of Header Compression Techniques
NASA Technical Reports Server (NTRS)
Ishac, Joseph
2001-01-01
This report provides a summary of several different header compression techniques. The techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust Header Compression (ROHC); and (4) the header compression techniques in RFC 2507 and RFC 2508. The methodology for compression and error correction in these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's scheme nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). Under link conditions of low delay and low error, all of the schemes perform as expected. However, based on their methodologies, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer increased loss propagation should the link have a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high-delay environments by avoiding delta encoding between packets; thus, loss propagation is avoided. However, SCPS is still affected by an increased BER, since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC 2507 and RFC 2508 perform well for non-TCP connections in poor conditions. RFC 2507 improves on Van Jacobson's performance with TCP connections through various techniques, but still suffers a performance hit with poor link properties. RFC 2507 also offers the ability to send TCP data without delta encoding, similar to what SCPS offers.
ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) to the headers and improves the compression schemes to provide better tolerance of conditions with a high BER.
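The delta-encoding idea underlying Van Jacobson-style compression, and the loss propagation it risks, can be sketched as follows (the field names and values are hypothetical, not taken from RFC 1144):

```python
# Toy delta encoding of header fields: transmit only the fields that changed,
# as differences against the previously sent header.
def compress(prev, cur):
    return {k: cur[k] - prev[k] for k in cur if cur[k] != prev[k]}

def decompress(prev, deltas):
    out = dict(prev)
    for k, d in deltas.items():
        out[k] += d
    return out

base = {"seq": 1000, "ack": 2000, "win": 5840}
nxt  = {"seq": 1460, "ack": 2000, "win": 5840}

deltas = compress(base, nxt)          # only {"seq": 460} goes on the wire
assert decompress(base, deltas) == nxt

# If the delta packet is lost, the receiver's context stays at `base`, and
# every later delta decompresses against the wrong state -- this is the
# loss propagation that high-delay, high-BER links make frequent.
```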
Enhanced DNA Sensing via Catalytic Aggregation of Gold Nanoparticles
Huttanus, Herbert M.; Graugnard, Elton; Yurke, Bernard; Knowlton, William B.; Kuang, Wan; Hughes, William L.; Lee, Jeunghoon
2014-01-01
A catalytic colorimetric detection scheme that incorporates a DNA-based hybridization chain reaction into gold nanoparticles was designed and tested. While direct aggregation forms an inter-particle linkage from only one target DNA strand, catalytic aggregation forms multiple linkages from a single target DNA strand. Gold nanoparticles were functionalized with thiol-modified DNA strands capable of undergoing hybridization chain reactions. The changes in their absorption spectra were measured at different times and target concentrations and compared against direct aggregation. Catalytic aggregation showed a multifold increase in sensitivity at low target concentrations when compared to direct aggregation. Gel electrophoresis was performed to compare DNA hybridization reactions in the catalytic and direct aggregation schemes, and product formation was confirmed in the catalytic aggregation scheme at low target concentrations. The catalytic aggregation scheme also showed high target specificity. This application of a DNA reaction network to gold nanoparticle-based colorimetric detection enables highly sensitive, field-deployable colorimetric readout systems capable of detecting a variety of biomolecules. PMID:23891867
UMDR: Multi-Path Routing Protocol for Underwater Ad Hoc Networks with Directional Antenna
NASA Astrophysics Data System (ADS)
Yang, Jianmin; Liu, Songzuo; Liu, Qipei; Qiao, Gang
2018-01-01
This paper presents a new routing scheme for underwater ad hoc networks based on directional antennas. Ad hoc networks with directional antennas have become a hot research topic because space reuse may increase network capacity. At present, researchers have applied traditional self-organizing routing protocols (such as DSR, AODV) [1] [2] to this type of network, with routing based on the shortest-path metric. However, such routing schemes often suffer from long transmission delays and frequent link breakage along the intermediate nodes of the selected route. This is caused by a unique feature of directional transmission, often called “deafness”. In this paper, we take a different approach to explore the advantages of space reuse through multipath routing. This paper examines the validity of the conventional routing scheme in underwater ad hoc networks with directional antennas, and presents a special design of a multipath routing algorithm for directional transmission. The experimental results show a significant performance improvement in throughput and latency.
Assessment of numerical techniques for unsteady flow calculations
NASA Technical Reports Server (NTRS)
Hsieh, Kwang-Chung
1989-01-01
The characteristics of unsteady flow motions have long been a serious concern in the study of various fluid dynamic and combustion problems. With the advancement of computer resources, numerical approaches to these problems appear to be feasible. The objective of this paper is to assess the accuracy of several numerical schemes for unsteady flow calculations. In the present study, Fourier error analysis is performed for various numerical schemes based on a two-dimensional wave equation. Four methods selected from the error analysis are then adopted for further assessment. Model problems include unsteady quasi-one-dimensional inviscid flows, two-dimensional wave propagation, and unsteady two-dimensional inviscid flows. According to the comparison between numerical and exact solutions, although the second-order upwind scheme captures the unsteady flow and wave motions quite well, it is relatively more dissipative than the sixth-order central difference scheme. Among the various numerical approaches tested in this paper, the best-performing combination is the Runge-Kutta method for time integration with sixth-order central differencing for the spatial discretization.
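The dissipation contrast reported above can be seen directly from a von Neumann (Fourier) sketch: for the model advection equation, the second-order upwind operator's symbol has a nonzero real part (numerical dissipation), while the sixth-order central operator's symbol is purely imaginary (dispersive only). Grid spacing h = 1 is assumed.

```python
import cmath, math

def upwind2_symbol(theta):
    # second-order upwind approximation of d/dx (flow left to right, h = 1):
    #   (3 u_i - 4 u_{i-1} + u_{i-2}) / 2
    # evaluated on the Fourier mode u_j = exp(i j theta)
    return (3 - 4 * cmath.exp(-1j * theta) + cmath.exp(-2j * theta)) / 2

def central6_symbol(theta):
    # sixth-order central approximation of d/dx (h = 1):
    #   (45 (u_{i+1}-u_{i-1}) - 9 (u_{i+2}-u_{i-2}) + (u_{i+3}-u_{i-3})) / 60
    return 1j * (90 * math.sin(theta) - 18 * math.sin(2 * theta)
                 + 2 * math.sin(3 * theta)) / 60

theta = 1.0
print("upwind2 real part :", upwind2_symbol(theta).real)   # > 0: dissipative
print("central6 real part:", central6_symbol(theta).real)  # 0: no dissipation
```

The imaginary parts of both symbols approximate the exact wavenumber, so both schemes are consistent; only the upwind scheme damps the solution.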
Code-Time Diversity for Direct Sequence Spread Spectrum Systems
Hassan, A. Y.
2014-01-01
Time diversity is achieved in direct sequence spread spectrum by receiving different faded, delayed copies of the transmitted symbols from different uncorrelated channel paths when the transmission signal bandwidth is greater than the coherence bandwidth of the channel. In this paper, a new time diversity scheme, called code-time diversity, is proposed for spread spectrum systems. In this new scheme, N spreading codes are used to transmit one data symbol over N successive symbol intervals. The diversity order in the proposed scheme equals the number of spreading codes N multiplied by the number of uncorrelated channel paths L. The paper presents the transmitted signal model. Two demodulator structures are proposed based on the received signal models for Rayleigh flat and frequency-selective fading channels. The probability of error in the proposed diversity scheme is also calculated for the same two fading channels. Finally, simulation results are presented and compared with those of maximal ratio combining (MRC) and multiple-input multiple-output (MIMO) systems. PMID:24982925
A Novel Scheme for an Energy Efficient Internet of Things Based on Wireless Sensor Networks.
Rani, Shalli; Talwar, Rajneesh; Malhotra, Jyoteesh; Ahmed, Syed Hassan; Sarkar, Mahasweta; Song, Houbing
2015-11-12
One of the emerging networking standards that bridges the gap between the physical world and the cyber one is the Internet of Things (IoT). In the IoT, smart objects communicate with each other, data are gathered, and certain requests of users are satisfied by different queried data. The development of energy-efficient schemes for the IoT is a challenging issue: as the IoT becomes more complex due to its large scale, the current techniques of wireless sensor networks (WSNs) cannot be applied directly to it. To achieve a green networked IoT, this paper addresses energy efficiency issues by proposing a novel deployment scheme. This scheme introduces: (1) a hierarchical network design; (2) a model for the energy-efficient IoT; (3) a minimum energy consumption transmission algorithm to implement the optimal model. The simulation results show that the new scheme is more energy efficient and flexible than traditional WSN schemes and consequently can be implemented for efficient communication in the IoT.
High Order Schemes in Bats-R-US for Faster and More Accurate Predictions
NASA Astrophysics Data System (ADS)
Chen, Y.; Toth, G.; Gombosi, T. I.
2014-12-01
BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second-order accurate TVD schemes combined with block-based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented the fifth-order accurate finite difference schemes CWENO5 and MP5 for uniform Cartesian grids. Now the high-order schemes have been extended to generalized coordinates, including spherical grids, and to non-uniform AMR grids with dynamic regridding. We present numerical tests that verify the preservation of the free-stream solution and high-order accuracy, as well as robust oscillation-free behavior near discontinuities. We apply the new high-order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second-order scheme with much less computational resources. This is especially important for space weather prediction, which requires faster-than-real-time code execution.
LES of Temporally Evolving Mixing Layers by Three High Order Schemes
NASA Astrophysics Data System (ADS)
Yee, H.; Sjögreen, B.; Hadjadj, A.
2011-10-01
The performance of three high-order shock-capturing schemes is compared for large eddy simulations (LES) of temporally evolving mixing layers for different convective Mach numbers (Mc) ranging from the quasi-incompressible regime to the highly compressible supersonic regime. The considered high-order schemes are fifth-order WENO (WENO5), seventh-order WENO (WENO7), and the associated eighth-order central spatial base scheme with the dissipative portion of WENO7 as a nonlinear post-processing filter step (WENO7fi). This high-order nonlinear filter method (Yee & Sjögreen 2009) is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets, and turbulence with strong shocks with minimum tuning of scheme parameters. The LES results by WENO7fi using the same scheme parameters agree well with the experimental results of Barone et al. (2006) and with published direct numerical simulations (DNS) by Rogers & Moser (1994) and Pantano & Sarkar (2002), whereas results by WENO5 and WENO7 compare poorly with the experimental data and DNS computations.
Unequal Probability Marking Approach to Enhance Security of Traceback Scheme in Tree-Based WSNs.
Huang, Changqin; Ma, Ming; Liu, Xiao; Liu, Anfeng; Zuo, Zhengbang
2017-06-17
Fog (from core to edge) computing is a newly emerging computing platform which utilizes a large number of network devices at the edge of a network to provide ubiquitous computing, and thus has great development potential. However, the issue of security poses an important challenge for fog computing. In particular, the Internet of Things (IoT) that constitutes the fog computing platform is crucial for preserving the security of the huge number of wireless sensors, which are vulnerable to attack. In this paper, a new unequal probability marking approach is proposed to enhance the security performance of logging and migration traceback (LM) schemes in tree-based wireless sensor networks (WSNs). The main contribution of this paper is to overcome the deficiency of the LM scheme, which achieves a high network lifetime but requires large storage space. In the unequal probability marking logging and migration (UPLM) scheme of this paper, different marking probabilities are adopted for different nodes according to their distances to the sink. A large marking probability is assigned to nodes in remote areas (areas at a long distance from the sink), while a small marking probability is applied to nodes in nearby areas (areas at a short distance from the sink). This reduces the consumption of storage and energy in addition to enhancing the security performance, lifetime, and storage capacity. Marking information is migrated to nodes at a longer distance from the sink to increase the amount of stored marking information, thus enhancing the security performance in the process of migration. The experimental simulation shows that for general tree-based WSNs, the UPLM scheme proposed in this paper can store 1.12-1.28 times the amount of marking information that the equal probability marking approach achieves, and has 1.15-1.26 times the storage utilization efficiency of other schemes.
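The distance-dependent marking rule can be sketched as a monotone function of a node's distance to the sink; the linear form and the endpoint probabilities below are illustrative assumptions, not the paper's actual parameters:

```python
def marking_probability(dist, dist_max, p_near=0.05, p_far=0.5):
    # Nodes far from the sink (remote areas) mark with a higher probability
    # than nodes near the sink, as in the UPLM idea; the numbers are made up.
    frac = min(max(dist / dist_max, 0.0), 1.0)
    return p_near + (p_far - p_near) * frac

for d in (10, 50, 90):
    print(f"distance {d:3d}: marking probability {marking_probability(d, 100):.3f}")
```

Nodes near the sink relay traffic for the whole subtree, so giving them a small marking probability is what relieves their storage and energy load.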
A broadcast-based key agreement scheme using set reconciliation for wireless body area networks.
Ali, Aftab; Khan, Farrukh Aslam
2014-05-01
Information and communication technologies have thrived over the last few years. Healthcare systems have also benefited from this progression. A wireless body area network (WBAN) consists of small, low-power sensors used to monitor human physiological values remotely, which enables physicians to remotely monitor the health of patients. Communication security in WBANs is essential because it involves human physiological data. Key agreement and authentication are the primary issues in the security of WBANs. To agree upon a common key, the nodes exchange information with each other using wireless communication. This information exchange process must be secure enough, or the exchanged information should be minimized to a level such that, if an information leak occurs, it does not affect the overall system. Most of the existing solutions to this problem exchange too much information for the sake of key agreement; obtaining this information is sufficient for an attacker to reproduce the key. Set reconciliation is a technique used to reconcile two similar sets held by two different hosts with minimal communication complexity. This paper presents a broadcast-based key agreement scheme using set reconciliation for secure communication in WBANs. The proposed scheme allows the neighboring nodes to agree upon a common key with the personal server (PS), generated from the electrocardiogram (EKG) feature set of the host body. Minimal information is exchanged in a broadcast manner, and even if every node is missing a different subset, by reconciling the feature sets, the whole network will still agree upon a single common key. Because of the limited information exchange, even if an attacker obtains the information, he/she will not be able to reproduce the key. The proposed scheme mitigates replay, selective forwarding, and denial-of-service attacks using a challenge-response authentication mechanism.
The simulation results show that the proposed scheme performs well in terms of security, communication overhead, and running time complexity compared to the existing EKG-based key agreement scheme.
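A deliberately simplified toy of the broadcast-based idea: the personal server broadcasts digests of its EKG features, each node keeps the features it also holds, and both sides derive the same key from the common subset. Real set-reconciliation protocols (e.g., characteristic-polynomial methods) exchange far less than a full digest list, and real EKG features are not small integers; both simplifications are assumptions here.

```python
import hashlib

def digest(feature):
    # one-way digest of a single feature value
    return hashlib.sha256(str(feature).encode()).hexdigest()

def derive_key(features):
    # order-independent key derived from a set of features
    blob = ",".join(sorted(digest(f) for f in features))
    return hashlib.sha256(blob.encode()).hexdigest()

ps_features   = {412, 733, 901, 655, 128}   # hypothetical EKG feature values
node_features = {412, 733, 901, 655, 777}   # almost the same set, one mismatch

# The PS broadcasts only digests; the node keeps the features it also holds.
broadcast = {digest(f) for f in ps_features}
common = {f for f in node_features if digest(f) in broadcast}

# Both sides now agree on the common subset and hence on the same key.
assert derive_key(common) == derive_key({412, 655, 733, 901})
```

An eavesdropper who captures the broadcast sees only digests, not the feature values themselves, which is the intuition behind limiting what the exchange reveals.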
NASA Astrophysics Data System (ADS)
Guimberteau, M.; Ducharne, A.; Ciais, P.; Boisier, J. P.; Peng, S.; De Weirdt, M.; Verbeeck, H.
2014-06-01
This study analyzes the performance of the two soil hydrology schemes of the land surface model ORCHIDEE in estimating Amazonian hydrology and phenology for five major sub-basins (Xingu, Tapajós, Madeira, Solimões and Negro), during the 29-year period 1980-2008. A simple 2-layer scheme with a bucket topped by an evaporative layer is compared to an 11-layer diffusion scheme. The soil schemes are coupled with a river routing module and a process model of plant physiology, phenology and carbon dynamics. The simulated water budget and vegetation functioning components are compared with several data sets at sub-basin scale. The use of the 11-layer soil diffusion scheme does not significantly change the Amazonian water budget simulation when compared to the 2-layer soil scheme (+3.1 and -3.0% in evapotranspiration and river discharge, respectively). However, the higher water-holding capacity of the soil and the physically based representation of runoff and drainage in the 11-layer soil diffusion scheme result in more dynamic soil water storage variation and improved simulation of the total terrestrial water storage when compared to GRACE satellite estimates. The greater soil water storage within the 11-layer scheme also results in increased dry-season evapotranspiration (+0.5 mm d-1, +17%) and improves river discharge simulation in the southeastern sub-basins such as the Xingu. Evapotranspiration over this sub-basin is sustained during the whole dry season with the 11-layer soil diffusion scheme, whereas the 2-layer scheme limits it after only 2 dry months. Lower plant drought stress simulated by the 11-layer soil diffusion scheme leads to better simulation of the seasonal cycle of photosynthesis (GPP) when compared to a GPP data-driven model based on eddy covariance and satellite greenness measurements. 
A dry-season length between 4 and 7 months over the entire Amazon Basin is found to be critical in distinguishing differences in hydrological feedbacks between the soil and the vegetation cover simulated by the two soil schemes. On average, the multilayer soil diffusion scheme provides little improvement in simulated hydrology over the wet tropical Amazonian sub-basins, but a more significant improvement is found over the drier sub-basins. The use of a multilayer soil diffusion scheme might become critical for assessments of future hydrological changes, especially in southern regions of the Amazon Basin where longer dry seasons and more severe droughts are expected in the next century.
Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Kumar, Neeraj
2015-11-01
In the last few years, numerous remote user authentication and session key agreement schemes have been put forward for Telecare Medical Information Systems, where the patient and the medical server exchange medical information over the Internet. We have found that most of these schemes are not usable in practice due to known security weaknesses. It is also worth noting that an unrestricted number of patients log in to the single medical server across the globe; therefore, the computation and maintenance overhead would be high and the server may fail to provide services. In this article, we design a medical system architecture and a standard mutual authentication scheme for a single medical server, where the patient can securely exchange medical data with the doctor(s) via a trusted central medical server over any insecure network. We then explore the security of the scheme and its resilience to attacks. Moreover, we formally validated the proposed scheme through simulation using the Automated Validation of Internet Security Protocols and Applications (AVISPA) software, whose outcomes confirm that the scheme is protected against active and passive attacks. The performance comparison demonstrates that the proposed scheme has a lower communication cost than the existing schemes in the literature, while its computation cost is nearly equal to theirs. The proposed scheme is not only resilient to different security attacks, but also provides efficient login, mutual authentication, session key agreement, verification, and password update phases, along with password recovery.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-09-15
We present two PMD compensation schemes suitable for use in multilevel (M ≥ 2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded OFDM and coherent detection. When used in combination with girth-10 LDPC codes, these schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.
Gutierrez, Hialy; Shewade, Ashwini; Dai, Minghan; Mendoza-Arana, Pedro; Gómez-Dantés, Octavio; Jain, Nishant; Khonelidze, Irma; Nabyonga-Orem, Juliet; Saleh, Karima; Teerawattananon, Yot; Nishtar, Sania; Hornberger, John
2015-08-01
Lessons learned by countries that have successfully implemented coverage schemes for health services may be valuable for other countries, especially low- and middle-income countries (LMICs), which likewise are seeking to provide/expand coverage. The research team surveyed experts in population health management from LMICs for information on characteristics of health care coverage schemes and factors that influenced decision-making processes. The level of coverage provided by the different schemes varied. Nearly all the health care coverage schemes involved various representatives and stakeholders in their decision-making processes. Maternal and child health, cardiovascular diseases, cancer, and HIV were among the highest priorities guiding coverage development decisions. Evidence used to inform coverage decisions included medical literature, regional and global epidemiology, and coverage policies of other coverage schemes. Funding was the most commonly reported reason for restricting coverage. This exploratory study provides an overview of health care coverage schemes from participating LMICs and contributes to the scarce evidence base on coverage decision making. Sharing knowledge and experiences among LMICs can support efforts to establish systems for accessible, affordable, and equitable health care.
Access and accounting schemes of wireless broadband
NASA Astrophysics Data System (ADS)
Zhang, Jian; Huang, Benxiong; Wang, Yan; Yu, Xing
2004-04-01
In this paper, two wireless broadband access and accounting schemes are introduced. They differ in the client and access router modules. In one scheme, the Secure Shell (SSH) protocol is used in the access system; the SSH server performs authentication based on private key cryptography. The advantages of this scheme are the security of the user's information and sophisticated access control. In the other scheme, the Secure Sockets Layer (SSL) protocol is used in the access system, employing public key cryptography. Nowadays, web browsers generally combine HTTP with the SSL protocol, and we use SSL to encrypt the data between the clients and the access router. The schemes are the same in the RADIUS server part. Remote Authentication Dial In User Service (RADIUS), a client/server security protocol, is becoming the standard authentication/accounting protocol for Internet access; it is explained in a flow chart. In our scheme, the access router serves as the client to the RADIUS server.
ESS-FH: Enhanced Security Scheme for Fast Handover in Hierarchical Mobile IPv6
NASA Astrophysics Data System (ADS)
You, Ilsun; Lee, Jong-Hyouk; Sakurai, Kouichi; Hori, Yoshiaki
Fast Handover for Hierarchical Mobile IPv6 (F-HMIPv6), which combines the advantages of Fast Handover for Mobile IPv6 (FMIPv6) and Hierarchical Mobile IPv6 (HMIPv6), achieves superior performance in terms of handover latency and signaling overhead compared with previously developed mobility protocols. However, without being secured, F-HMIPv6 is vulnerable to various security threats. In 2007, Kang and Park proposed a security scheme that is seamlessly integrated into F-HMIPv6. In this paper, we reveal that Kang-Park's scheme cannot defend against Denial of Service (DoS) and redirect attacks while relying largely on the group key. We then propose an Enhanced Security Scheme for F-HMIPv6 (ESS-FH) that achieves strong key exchange and key independence, and addresses the weaknesses of Kang-Park's scheme. More importantly, it enables fast handover between different MAP domains. The proposed scheme is formally verified based on BAN logic, and its handover latency is analyzed and compared with that of Kang-Park's scheme.
Design of an image encryption scheme based on a multiple chaotic map
NASA Astrophysics Data System (ADS)
Tong, Xiao-Jun
2013-07-01
To address the problems that chaos degenerates under limited computer precision and that the Cat map has a small key space, this paper presents a chaotic map based on topological conjugacy, whose chaotic characteristics are proved by the Devaney definition. To produce a large key space, a Cat map named the block Cat map is also designed for the permutation process based on multi-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by a different chaotic map. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis with respect to the key and plaintext are introduced to test the security of the new image encryption scheme. Through comparison of the proposed scheme with the AES, DES, and Logistic encryption methods, we conclude that the image encryption method solves the problem of the low precision of one-dimensional chaotic functions and offers higher speed and higher security.
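A classical toy of the permutation-substitution structure, with a one-dimensional logistic map standing in for the paper's topologically conjugate map and block Cat map (a simplification made here for brevity):

```python
def logistic(x, r=3.99):
    # logistic map in its chaotic regime
    return r * x * (1.0 - x)

def chaos_stream(seed, n):
    x, out = seed, []
    for _ in range(n):
        x = logistic(x)
        out.append(x)
    return out

def perm_order(n, seed):
    # permutation stage: sort pixel indices by successive chaotic iterates
    keys = chaos_stream(seed, n)
    return sorted(range(n), key=lambda i: keys[i])

def encrypt(pix, perm_seed, sub_seed):
    order = perm_order(len(pix), perm_seed)                 # permutation
    shuffled = [pix[i] for i in order]
    ks = [int(v * 256) % 256 for v in chaos_stream(sub_seed, len(pix))]
    return [p ^ k for p, k in zip(shuffled, ks)]            # substitution (XOR)

def decrypt(cip, perm_seed, sub_seed):
    ks = [int(v * 256) % 256 for v in chaos_stream(sub_seed, len(cip))]
    shuffled = [c ^ k for c, k in zip(cip, ks)]
    out = [0] * len(cip)
    for pos, i in enumerate(perm_order(len(cip), perm_seed)):
        out[i] = shuffled[pos]                              # invert permutation
    return out

pix = [12, 255, 0, 87, 87, 200]        # toy "image" pixels
cip = encrypt(pix, 0.31, 0.62)
assert decrypt(cip, 0.31, 0.62) == pix
```

The two seeds play the role of the scheme's keys, one per stage; the paper's point is that driving each stage with a different, higher-dimensional chaotic map enlarges the key space and avoids the precision-induced degeneration this toy still suffers from.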
Single-pixel computational ghost imaging with helicity-dependent metasurface hologram.
Liu, Hong-Chao; Yang, Biao; Guo, Qinghua; Shi, Jinhui; Guan, Chunying; Zheng, Guoxing; Mühlenbernd, Holger; Li, Guixin; Zentgraf, Thomas; Zhang, Shuang
2017-09-01
Different optical imaging techniques are based on different characteristics of light. By controlling the abrupt phase discontinuities with different polarized incident light, a metasurface can host a phase-only and helicity-dependent hologram. In contrast, ghost imaging (GI) is an indirect imaging modality to retrieve the object information from the correlation of the light intensity fluctuations. We report single-pixel computational GI with a high-efficiency reflective metasurface in both simulations and experiments. Playing a fascinating role in switching the GI target with different polarized light, the metasurface hologram generates helicity-dependent reconstructed ghost images and successfully introduces an additional security lock in a proposed optical encryption scheme based on the GI. The robustness of our encryption scheme is further verified with the vulnerability test. Building the first bridge between the metasurface hologram and the GI, our work paves the way to integrate their applications in the fields of optical communications, imaging technology, and security.
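The correlation step at the heart of computational GI can be sketched in one dimension: each random pattern yields a single "bucket" intensity, and correlating the bucket values with the patterns recovers the object (the object, pattern count, and noise-free detector here are arbitrary simplifications):

```python
import random

random.seed(7)
N, M = 16, 4000                     # pixels in the 1-D object, number of patterns
obj = [1 if i in (3, 8, 9) else 0 for i in range(N)]   # hypothetical object

patterns = [[random.random() for _ in range(N)] for _ in range(M)]
# single-pixel ("bucket") detector: total light transmitted by the object
bucket = [sum(p[i] * obj[i] for i in range(N)) for p in patterns]

mean_b = sum(bucket) / M
mean_p = [sum(p[i] for p in patterns) / M for i in range(N)]
# second-order correlation G(i) = <S * p_i> - <S><p_i>
G = [sum(bucket[m] * patterns[m][i] for m in range(M)) / M - mean_b * mean_p[i]
     for i in range(N)]

recovered = sorted(range(N), key=lambda i: -G[i])[:3]
assert set(recovered) == {3, 8, 9}    # the bright pixels are found
```

In the paper the metasurface hologram determines which pattern set (and hence which ghost image) a given incident helicity produces, which is what supplies the extra encryption lock.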
Fuzzy Matching Based on Gray-scale Difference for Quantum Images
NASA Astrophysics Data System (ADS)
Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia
2018-05-01
Quantum image processing has recently emerged as an essential topic in practical tasks, e.g., real-time image matching. Previous studies have shown that quantum superposition and entanglement can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image that is very similar to the template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images by thresholding. If all of the obtained gray-scale differences are not greater than the threshold value, it indicates a successful fuzzy matching of the quantum images. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at a low cost and also enables an exponentially significant speedup via quantum parallel computation.
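The thresholded gray-scale-difference test has a direct classical analogue, sketched below for one-dimensional images (the quantum-parallel evaluation of all shifts at once is the part this sketch cannot reproduce):

```python
def fuzzy_match(reference, template, threshold):
    # Slide the template over the reference; a position matches if every
    # per-pixel gray-scale difference is no greater than the threshold.
    n, m = len(reference), len(template)
    hits = []
    for s in range(n - m + 1):
        if all(abs(reference[s + i] - template[i]) <= threshold
               for i in range(m)):
            hits.append(s)
    return hits

ref = [10, 12, 200, 202, 198, 11, 13]   # toy gray-scale reference image
tpl = [201, 200, 199]                   # toy template
print(fuzzy_match(ref, tpl, 3))         # positions where all diffs <= 3
```

With threshold 3 only offset 2 matches; a threshold of 0 would demand exact equality and reject it, which is the "fuzzy" relaxation the scheme provides.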
A high-order vertex-based central ENO finite-volume scheme for three-dimensional compressible flows
Charest, Marc R.J.; Canfield, Thomas R.; Morgan, Nathaniel R.; ...
2015-03-11
High-order discretization methods offer the potential to reduce the computational cost associated with modeling compressible flows. However, it is difficult to obtain accurate high-order discretizations of conservation laws that do not produce spurious oscillations near discontinuities, especially on multi-dimensional unstructured meshes. A novel, high-order, central essentially non-oscillatory (CENO) finite-volume method that does not have these difficulties is proposed for tetrahedral meshes. The proposed unstructured method is vertex-based, which differs from existing cell-based CENO formulations, and uses a hybrid reconstruction procedure that switches between two different solution representations. It applies a high-order k-exact reconstruction in smooth regions and a limited linear reconstruction when discontinuities are encountered. Both reconstructions use a single, central stencil for all variables, making the application of CENO to arbitrary unstructured meshes relatively straightforward. The new approach was applied to the conservation equations governing compressible flows and assessed in terms of accuracy and computational cost. For all problems considered, which included various function reconstructions and idealized flows, CENO demonstrated excellent reliability and robustness. Up to fifth-order accuracy was achieved in smooth regions and essentially non-oscillatory solutions were obtained near discontinuities. The high-order schemes were also more computationally efficient for high-accuracy solutions, i.e., they took less wall time than the lower-order schemes to achieve a desired level of error. In one particular case, it took a factor of 24 less wall-time to obtain a given level of error with the fourth-order CENO scheme than to obtain the same error with the second-order scheme.
NASA Technical Reports Server (NTRS)
Abramopoulos, Frank
1988-01-01
The conditions under which finite difference schemes for the shallow water equations can conserve both total energy and potential enstrophy are considered. A method of deriving such schemes using operator formalism is developed. Several such schemes are derived for the A-, B- and C-grids. The derived schemes include second-order schemes and pseudo-fourth-order schemes. The simplest B-grid pseudo-fourth-order schemes are presented.
NASA Astrophysics Data System (ADS)
Ge, Yongbin; Cao, Fujun
2011-05-01
In this paper, a multigrid method based on the high order compact (HOC) difference scheme on nonuniform grids, proposed by Kalita et al. [J.C. Kalita, A.K. Dass, D.C. Dalal, A transformation-free HOC scheme for steady convection-diffusion on non-uniform grids, Int. J. Numer. Methods Fluids 44 (2004) 33-53], is developed to solve the two-dimensional (2D) convection diffusion equation. The HOC scheme does not involve any grid transformation to map the nonuniform grids to uniform grids; consequently, the multigrid method is new for solving the discrete system arising from the difference equation on nonuniform grids. The corresponding multigrid projection and interpolation operators are constructed by the area ratio. Some boundary layer and local singularity problems are used to demonstrate the superiority of the present method. Numerical results show that the multigrid method with the HOC scheme on nonuniform grids achieves nearly the same convergence rate as on uniform grids, and the computed solution on nonuniform grids retains fourth-order accuracy, whereas uniform grids yield very poor solutions for problems with very steep boundary layers or strong local singularities. The present method is also applied to solve the 2D incompressible Navier-Stokes equations using the stream function-vorticity formulation, and the numerical solutions of the lid-driven cavity flow problem are obtained and compared with solutions available in the literature.
A blind reversible robust watermarking scheme for relational databases.
Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen
2013-01-01
Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm referred to as "histogram shifting of adjacent pixel difference" (APD) is used to obtain reversibility. The proposed scheme can successfully detect 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example, alteration attacks, deletion attacks, mix-match attacks, and sorting attacks.
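The reversibility of histogram shifting can be illustrated on a 1-D sequence: differences above a chosen peak bin are shifted right by one to free the neighboring bin, and one payload bit is embedded per peak-valued difference. This is a simplified stand-in for the APD algorithm, not the paper's exact construction, and it assumes no overflow at the value-range boundaries:

```python
def embed(seq, bits, peak=0):
    """Embed bits reversibly via histogram shifting of adjacent differences."""
    out = [seq[0]]
    it = iter(bits)
    for i in range(1, len(seq)):
        d = seq[i] - seq[i - 1]
        if d > peak:
            d += 1                 # shift: frees the bin right of the peak
        elif d == peak:
            d += next(it, 0)       # embed one bit in the peak bin
        out.append(out[-1] + d)
    return out

def extract(marked, peak=0):
    """Recover the payload bits and the original sequence from the marked one."""
    bits, seq = [], [marked[0]]
    for i in range(1, len(marked)):
        d = marked[i] - marked[i - 1]
        if d == peak:
            bits.append(0)
        elif d == peak + 1:
            bits.append(1)
            d -= 1                 # restore the peak-valued difference
        elif d > peak + 1:
            d -= 1                 # undo the shift
        seq.append(seq[-1] + d)
    return bits, seq
```

Because every modified difference maps back uniquely, extraction restores the host data bit-exactly, which is the property the watermarking scheme relies on.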
Multiswitching compound antisynchronization of four chaotic systems
NASA Astrophysics Data System (ADS)
Khan, Ayub; Khattar, Dinesh; Prajapati, Nitish
2017-12-01
Based on a three-drive one-response system model, in this article the authors investigate a novel synchronization scheme for a class of chaotic systems. The new scheme, multiswitching compound antisynchronization (MSCoAS), is a notable extension of earlier multiswitching schemes concerning only a one-drive one-response system model. The concept of multiswitching synchronization is extended to the compound synchronization scheme such that the state variables of three drive systems antisynchronize with different state variables of the response system simultaneously. A study involving multiswitching of three drive systems and one response system is the first of its kind. Various switched modified function projective antisynchronization schemes are obtained as special cases of MSCoAS for a suitable choice of scaling factors. Using suitable controllers and Lyapunov stability theory, a sufficient condition is obtained to achieve MSCoAS between four chaotic systems, and the corresponding theoretical proof is given. Numerical simulations are performed using the Lorenz system in MATLAB to demonstrate the validity of the presented method.
Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high-speed turbulent combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis of, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well-known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage in the Runge-Kutta time stepping.
One could also imagine a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a previously introduced wavelet technique as the sensor. Here, the method is briefly described with selected numerical experiments.
Mixed finite-difference scheme for analysis of simply supported thick plates.
NASA Technical Reports Server (NTRS)
Noor, A. K.
1973-01-01
A mixed finite-difference scheme is presented for the stress and free vibration analysis of simply supported nonhomogeneous and layered orthotropic thick plates. The analytical formulation is based on the linear, three-dimensional theory of orthotropic elasticity and a Fourier approach is used to reduce the governing equations to six first-order ordinary differential equations in the thickness coordinate. The governing equations possess a symmetric coefficient matrix and are free of derivatives of the elastic characteristics of the plate. In the finite difference discretization two interlacing grids are used for the different fundamental unknowns in such a way as to reduce both the local discretization error and the bandwidth of the resulting finite-difference field equations. Numerical studies are presented for the effects of reducing the interior and boundary discretization errors and of mesh refinement on the accuracy and convergence of solutions. It is shown that the proposed scheme, in addition to a number of other advantages, leads to highly accurate results, even when a small number of finite difference intervals is used.
Numerical experiments with a symmetric high-resolution shock-capturing scheme
NASA Technical Reports Server (NTRS)
Yee, H. C.
1986-01-01
Characteristic-based explicit and implicit total variation diminishing (TVD) schemes for the two-dimensional compressible Euler equations have recently been developed. This is a generalization of recent work of Roe and Davis to a wider class of symmetric (non-upwind) TVD schemes other than Lax-Wendroff. The Roe and Davis schemes can be viewed as a subset of the class of explicit methods. The main properties of the present class of schemes are that they can be implicit, and, when steady-state calculations are sought, the numerical solution is independent of the time step. In a recent paper, a comparison of a linearized form of the present implicit symmetric TVD scheme with an implicit upwind TVD scheme originally developed by Harten and modified by Yee was given. Results favored the symmetric method. It was found that the latter is just as accurate as the upwind method while requiring less computational effort. Currently, more numerical experiments are being conducted on time-accurate calculations and on the effect of grid topology, numerical boundary condition procedures, and different flow conditions on the behavior of the method for steady-state applications. The purpose here is to report experiences with this type of scheme and give guidelines for its use.
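To give a concrete flavor of how a limited scheme suppresses oscillations, here is a generic minmod-limited second-order update for linear advection with periodic boundaries. It is a textbook TVD sketch, not the symmetric scheme of the paper:

```python
def minmod(a, b):
    """Pick the smaller-magnitude slope; return 0 across an extremum."""
    if a * b <= 0:
        return 0.0
    return min(a, b) if a > 0 else max(a, b)

def tvd_step(u, c):
    """One step of limited second-order upwinding for u_t + a u_x = 0,
    CFL number c in (0, 1], positive wave speed, periodic boundaries."""
    n = len(u)
    # limited cell slopes
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # f[i] is the flux at interface i-1/2: upwind value plus limited correction
    f = [u[i - 1] + 0.5 * (1 - c) * s[i - 1] for i in range(n)]
    return [u[i] - c * (f[(i + 1) % n] - f[i]) for i in range(n)]
```

The limiter switches the scheme to first-order upwinding at extrema and jumps, which is what keeps the total variation from growing; the symmetric (non-upwind) TVD family discussed in the abstract achieves the same property with centered fluxes.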
Effectiveness Information and Institutional Change: An Exploratory Analysis.
ERIC Educational Resources Information Center
Ewell, Peter T.
Factors that affect the implementation of information-based improvements in college instruction and decision-making are considered, based on a conceptual scheme for comparing information-based change efforts. Based on a student outcomes project, eight brief case studies of public colleges illustrate different patterns leading to successful use of…
Outgassing rate analysis of a velvet cathode and a carbon fiber cathode
NASA Astrophysics Data System (ADS)
Li, An-Kun; Fan, Yu-Wei; Qian, Bao-Liang; Zhang, Zi-cheng; Xun, Tao
2017-11-01
In this paper, the outgassing rates of a carbon fiber array cathode and a polymer velvet cathode are tested and discussed. Two different measurement methods are used in the experiments. In one scheme, a method based on dynamic equilibrium of pressure is used: the cathode works in repetitive mode in a vacuum diode, a dynamic equilibrium pressure is reached when the outgassing capacity in the chamber equals the pumping capacity of the pump, and the outgassing rate can be determined from this equilibrium pressure. In the other scheme, a method based on static equilibrium of pressure is used: the cathode works in a closed vacuum chamber (a hard tube), and the outgassing rate can be calculated from the difference between the pressure in the chamber before and after cathode operation. The outgassing rate is analyzed from the real-time pressure evolution data, which are measured using a magnetron gauge in both schemes. The outgassing rates of the carbon fiber array cathode and the velvet cathode are 7.3 ± 0.4 neutrals/electron and 85 ± 5 neutrals/electron in the first scheme, and 9 ± 0.5 neutrals/electron and 98 ± 7 neutrals/electron in the second scheme. The results of both schemes show that the outgassing rate of the carbon fiber array cathode is an order of magnitude smaller than that of the velvet cathode under similar conditions, which indicates that the carbon fiber array cathode is a promising replacement for the velvet cathode in applications such as magnetically insulated transmission line oscillators and relativistic magnetrons.
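The static-equilibrium calculation reduces to the ideal gas law: the pressure rise in a known volume gives the number of released molecules, which is then divided by the number of emitted electrons. A sketch with illustrative numbers (not the paper's chamber parameters):

```python
K_B = 1.380649e-23        # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

def outgassing_rate(delta_p, volume, temperature, charge_per_pulse, pulses):
    """Neutrals released per emitted electron, static pressure-rise method.
    delta_p [Pa], volume [m^3], temperature [K], charge_per_pulse [C]."""
    n_gas = delta_p * volume / (K_B * temperature)      # ideal gas law: N = pV/kT
    n_electrons = charge_per_pulse * pulses / E_CHARGE  # total emitted charge / e
    return n_gas / n_electrons
```

The dynamic method replaces the pressure rise with a balance between the outgassing throughput and the pump speed at the equilibrium pressure, but the per-electron normalization is the same.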
Laser-phased-array beam steering based on crystal fiber
NASA Astrophysics Data System (ADS)
Yang, Deng-cai; Zhao, Si-si; Wang, Da-yong; Wang, Zhi-yong; Zhang, Xiao-fei
2011-06-01
A laser-phased-array system provides an elegant means of achieving inertia-free, high-resolution, rapid and random beam steering. In a laser-phased-array system, phase control is the most important factor affecting system performance. A novel scheme is presented in this paper: beam steering is accomplished using a crystal fiber array in which the length difference between adjacent fibers is fixed. The phase difference between adjacent fibers determines the direction of the output beam. When the wavelength of the input fiber laser is tuned, the phase difference between adjacent elements changes; therefore, the laser beam direction changes and beam steering is accomplished. In this article, based on the proposed scheme, the steering angle of the laser beam is calculated and analyzed theoretically. Moreover, the far-field steering beam quality is discussed.
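Under a simple grating model, the residual optical path difference between adjacent fibers (taken modulo one wavelength) sets the steering direction. A sketch assuming an effective fiber index and element pitch; the specific values are illustrative, not from the paper:

```python
import math

def steering_angle(wavelength, delta_len, pitch, n_eff=1.45):
    """Far-field steering angle (rad) of a fiber phased array.
    delta_len: fixed physical length increment between adjacent fibers [m];
    pitch: element spacing [m]. Assumes the residual path difference <= pitch."""
    opd = (n_eff * delta_len) % wavelength  # residual optical path difference
    return math.asin(opd / pitch)           # grating equation: pitch*sin(theta) = opd
```

Because the optical path difference n_eff*delta_len spans many wavelengths, a tiny wavelength tune sweeps the residual through a full period and hence steers the beam, which is the mechanism the abstract describes.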
Performance Analysis of Transmit Diversity Systems with Multiple Antenna Replacement
NASA Astrophysics Data System (ADS)
Park, Ki-Hong; Yang, Hong-Chuan; Ko, Young-Chai
Transmit diversity systems based on orthogonal space-time block coding (OSTBC) usually suffer from rate loss and power spreading. A proper antenna selection scheme can help to utilize the transmit antennas and transmission power in such systems more effectively. In this paper, we propose a new antenna selection scheme for such systems based on the idea of antenna switching. In particular, to reduce the number of pilot channels and RF chains, the transmitter replaces the antennas with the lowest received SNR by unused ones whenever the output SNR of the space-time decoder at the receiver falls below a certain threshold. With this new scheme, not only is the number of pilot channels and RF chains to be implemented decreased, but the average amount of feedback information is also reduced. To analyze the performance of this scheme, we derive an exact closed form for the probability density function (PDF) of the received SNR. We show through numerical examples that the proposed scheme offers better performance than traditional OSTBC systems using all available transmit antennas, with a small amount of feedback information. We also examine the effect of different antenna configurations and feedback delay.
NMRPipe: a multidimensional spectral processing system based on UNIX pipes.
Delaglio, F; Grzesiek, S; Vuister, G W; Zhu, G; Pfeifer, J; Bax, A
1995-11-01
The NMRPipe system is a UNIX software environment of processing, graphics, and analysis tools designed to meet current routine and research-oriented multidimensional processing requirements, and to anticipate and accommodate future demands and developments. The system is based on UNIX pipes, which allow programs running simultaneously to exchange streams of data under user control. In an NMRPipe processing scheme, a stream of spectral data flows through a pipeline of processing programs, each of which performs one component of the overall scheme, such as Fourier transformation or linear prediction. Complete multidimensional processing schemes are constructed as simple UNIX shell scripts. The processing modules themselves maintain and exploit accurate records of data sizes, detection modes, and calibration information in all dimensions, so that schemes can be constructed without the need to explicitly define or anticipate data sizes or storage details of real and imaginary channels during processing. The asynchronous pipeline scheme provides other substantial advantages, including high flexibility, favorable processing speeds, choice of both all-in-memory and disk-bound processing, easy adaptation to different data formats, simpler software development and maintenance, and the ability to distribute processing tasks on multi-CPU computers and computer networks.
Key Management Scheme Based on Route Planning of Mobile Sink in Wireless Sensor Networks.
Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Jiang, Shengming; Chen, Wei
2016-01-29
In many wireless sensor network application scenarios, the key management scheme with a Mobile Sink (MS) should be fully investigated. This paper proposes a key management scheme based on dynamic clustering and optimal route choice for the MS. The concept of the Traveling Salesman Problem with Neighbor areas (TSPN) in dynamic clustering for data exchange is proposed, and a selection probability is used in MS route planning. The proposed scheme extends static key management to dynamic key management by considering the dynamic clustering and mobility of MSs, which can effectively balance the total energy consumption during network activities. Considering the different resources available to the member nodes and the sink node, the session key between a cluster head and the MS is established by a modified elliptic curve Diffie-Hellman (ECDH) key exchange algorithm, and the session key between a member node and its cluster head is built with a binary symmetric polynomial. By analyzing the security of data storage, data transfer and the mechanism of dynamic key management, we show that the proposed scheme improves the resilience of the network's key management system while satisfying higher connectivity and storage efficiency.
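The member-node key establishment can be sketched with a symmetric bivariate polynomial over a prime field: each node holds the share f(id, ·), and any two nodes compute the same pairwise key because f(x, y) = f(y, x). A toy sketch; the tiny prime and coefficients are illustrative only and far below real security parameters:

```python
def poly_eval(coeffs, x, y, p):
    """Evaluate sum_{i,j} c[i][j] * x^i * y^j mod p."""
    return sum(c * pow(x, i, p) * pow(y, j, p)
               for i, row in enumerate(coeffs)
               for j, c in enumerate(row)) % p

P = 7919  # toy field prime (illustrative)
# symmetric coefficient matrix (c[i][j] == c[j][i]) held by the key server
COEFFS = [[5, 11, 2],
          [11, 3, 8],
          [2, 8, 6]]

def node_share(node_id):
    """A node's share is f(node_id, y); evaluating it at a peer's id yields the pairwise key."""
    return lambda other_id: poly_eval(COEFFS, node_id, other_id, P)
```

Symmetry of the coefficient matrix is what guarantees both endpoints derive the same key; a t-degree polynomial resists coalitions of up to t compromised nodes, which motivates pairing it with ECDH for the more capable cluster-head/MS link.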
An Adaptive Ship Detection Scheme for Spaceborne SAR Imagery
Leng, Xiangguang; Ji, Kefeng; Zhou, Shilin; Xing, Xiangwei; Zou, Huanxin
2016-01-01
With the rapid development of spaceborne synthetic aperture radar (SAR) and the increasing need of ship detection, research on adaptive ship detection in spaceborne SAR imagery is of great importance. Focusing on practical problems of ship detection, this paper presents a highly adaptive ship detection scheme for spaceborne SAR imagery. It is able to process a wide range of sensors, imaging modes and resolutions. Two main stages are identified in this paper, namely: ship candidate detection and ship discrimination. Firstly, this paper proposes an adaptive land masking method using ship size and pixel size. Secondly, taking into account the imaging mode, incidence angle, and polarization channel of SAR imagery, it implements adaptive ship candidate detection in spaceborne SAR imagery by applying different strategies to different resolution SAR images. Finally, aiming at different types of typical false alarms, this paper proposes a comprehensive ship discrimination method in spaceborne SAR imagery based on confidence level and complexity analysis. Experimental results based on RADARSAT-1, RADARSAT-2, TerraSAR-X, RS-1, and RS-3 images demonstrate that the adaptive scheme proposed in this paper is able to detect ship targets in a fast, efficient and robust way. PMID:27563902
R2NA: Received Signal Strength (RSS) Ratio-Based Node Authentication for Body Area Network
Wu, Yang; Wang, Kai; Sun, Yongmei; Ji, Yuefeng
2013-01-01
The body area network (BAN) is an emerging branch of wireless sensor networks for personalized applications. The services in BAN usually have high security requirements, especially for medical diagnosis. One of the fundamental ways to ensure security in BAN is to provide node authentication. Traditional research using cryptography relies on prior secrets shared among nodes, which leads to high resource cost. In addition, most existing non-cryptographic solutions exploit out-of-band (OOB) channels, but they need additional hardware support or significant modifications to the system software. To avoid these problems, this paper presents a proximity-based node authentication scheme, which only uses the wireless modules already equipped on sensors. With only one sensor and one control unit (CU) in the BAN, we can detect a unique physical-layer characteristic, namely, the difference between the received signal strength (RSS) measured on different devices in the BAN. Based on this difference, we can tell whether the sender is close enough to be legitimate. We validate our scheme through both theoretical analysis and experiments, which are conducted on real Shimmer nodes. The results demonstrate that our proposed scheme has good security performance.
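The physical intuition is that a sender very close to one on-body receiver produces a large RSS gap between the two receivers, while a distant attacker sees nearly equal path lengths to both. A minimal sketch using a log-distance path-loss model as a stand-in for real measurements (all parameter values are illustrative, not from the paper):

```python
import math

def rss_from_distance(d, tx_power_dbm=0.0, path_loss_exp=2.0, ref_loss_db=40.0):
    """Log-distance path-loss model: RSS in dBm at distance d meters."""
    return tx_power_dbm - ref_loss_db - 10.0 * path_loss_exp * math.log10(d)

def is_proximate(rss_at_sensor, rss_at_cu, threshold_db=10.0):
    """Accept the sender as legitimate when the RSS gap between the two
    on-body receivers exceeds the threshold (large gap implies proximity)."""
    return abs(rss_at_sensor - rss_at_cu) >= threshold_db
```

Working on the *difference* of two RSS readings also cancels the transmitter's (unknown) power, which is why the ratio/difference is a more robust proximity feature than a single absolute RSS value.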
Lee, Tian-Fu; Liu, Chuan-Ming
2013-06-01
A smart-card based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors and the medicine information systems to establish a secure communication platform through public networks. Zhu recently presented an improved authentication scheme to address a weakness in the scheme of Wei et al., which cannot resist off-line password guessing attacks. This investigation shows that Zhu's improved scheme has flaws of its own: the authentication procedure cannot execute correctly, and the scheme is vulnerable to parallel session attacks. Additionally, an enhanced authentication scheme based on Zhu's scheme is proposed. The enhanced scheme not only avoids the weaknesses of the original scheme, but also provides user anonymity and authenticated key agreement for secure data communications.
Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming
2016-01-01
With the growing security requirements of networks, biometric authentication schemes applied in the multi-server environment have become more crucial and widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme, which is built on our cryptanalysis of the scheme of Mishra et al. Informal and formal security analyses of our scheme are given, demonstrating that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, some of which are not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more security properties and lower computation cost. It is therefore more appropriate for practical applications in remote distributed networks.
NASA Technical Reports Server (NTRS)
Wang, Shugong; Liang, Xu
2013-01-01
A new approach is presented in this paper to effectively obtain parameter estimates for the Multiscale Kalman Smoother (MKS) algorithm. This new approach has demonstrated promising potential for deriving better data products from data of different spatial scales and precisions. Our new approach employs a multi-objective (MO) parameter estimation scheme (called the MO scheme hereafter), rather than the conventional maximum likelihood scheme (called the ML scheme), to estimate the MKS parameters. Unlike the ML scheme, the MO scheme is not built solely on strict statistical assumptions about prediction errors and observation errors; rather, it directly associates the fused data of multiple scales with multiple objective functions when searching for the best parameter estimates for MKS through optimization. In the MO scheme, objective functions are defined to enforce consistency between the fused data at multiple scales and the input data at their original scales in terms of spatial patterns and magnitudes. The new approach is evaluated through a Monte Carlo experiment and a series of comparison analyses using synthetic precipitation data. Our results show that the MKS-fused precipitation performs better with the MO scheme than with the ML scheme. In particular, improvements over the ML scheme are significant for the fused precipitation at fine spatial resolutions. This is mainly because more criteria and constraints are involved in the MO scheme than in the ML scheme. The weakness of the original ML scheme, which blindly puts more weight on the data associated with finer resolutions, is overcome in our new approach.
Implementation of the high-order schemes QUICK and LECUSSO in the COMMIX-1C Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakai, K.; Sun, J.G.; Sha, W.T.
Multidimensional analysis computer programs based on the finite volume method, such as COMMIX-1C, have been commonly used to simulate thermal-hydraulic phenomena in engineering systems such as nuclear reactors. In COMMIX-1C, first-order schemes with respect to both space and time are used. In many situations, such as flow recirculations and stratifications with steep gradients of the velocity and temperature fields, however, high-order difference schemes are necessary for an accurate prediction of the fields. For these reasons, two second-order finite difference numerical schemes, QUICK (Quadratic Upstream Interpolation for Convective Kinematics) and LECUSSO (Local Exact Consistent Upwind Scheme of Second Order), have been implemented in the COMMIX-1C computer code. The formulations were derived for general three-dimensional flows with nonuniform grid sizes. Numerical oscillation analyses for QUICK and LECUSSO were performed. To damp the unphysical oscillations which occur in calculations with high-order schemes at high mesh Reynolds numbers, a new FRAM (Filtering Remedy and Methodology) scheme was developed and implemented. To be consistent with the high-order schemes, the pressure equation and the boundary conditions for all the conservation equations were also modified to be of second order. The new capabilities in the code are listed. Test calculations were performed to validate the implementation of the high-order schemes. They include the test of the one-dimensional nonlinear Burgers equation, two-dimensional scalar transport in two impinging streams, von Karman vortex shedding, shear driven cavity flow, Couette flow, and circular pipe flow. The calculated results were compared with available data; the agreement is good.
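The QUICK face interpolation fits a quadratic through the two upstream cells and one downstream cell; on a uniform grid with positive face velocity it reduces to fixed weights. A minimal sketch of that uniform-grid case (the COMMIX-1C implementation additionally handles nonuniform grids and general 3-D flows):

```python
def quick_face(phi_uu, phi_u, phi_d):
    """QUICK face value on a uniform grid, positive face velocity:
    phi_uu = far upstream cell, phi_u = upstream cell, phi_d = downstream cell.
    Weights come from the quadratic fit evaluated at the face."""
    return 0.375 * phi_d + 0.75 * phi_u - 0.125 * phi_uu
```

Because the stencil is a quadratic fit, QUICK reproduces linear and quadratic fields exactly at the face, which is the source of its second-order (and better) convective accuracy; the FRAM filter mentioned above is then needed to damp the overshoots QUICK produces at high mesh Reynolds numbers.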
Introducing the MIT Regional Climate Model (MRCM)
NASA Astrophysics Data System (ADS)
Eltahir, Elfatih A. B.; Winter, Jonathan M.; Marcella, Marc P.; Gianotti, Rebecca L.; Im, Eun-Soon
2013-04-01
During the last decade, researchers at MIT have worked on improving the skill of the Regional Climate Model version 3 (RegCM3) in simulating climate over different regions through the incorporation of new physical schemes or modification of original schemes. The MIT Regional Climate Model (MRCM) features several modifications over RegCM3, including coupling of the Integrated Biosphere Simulator (IBIS), a new surface albedo assignment method, a new convective cloud and rainfall auto-conversion scheme, and a modified boundary layer height and cloud scheme. Here, we introduce the MRCM and briefly describe the major model modifications relative to RegCM3 and their impact on model performance. The most significant difference relative to the original RegCM3 configuration is the coupling of the Integrated Biosphere Simulator (IBIS) land-surface scheme (Winter et al., 2009). Based on simulations using IBIS over North America, the Maritime Continent, Southwest Asia and West Africa, we demonstrate that the use of IBIS as the land surface scheme results in better representation of surface energy and water budgets in comparison to BATS. Furthermore, the addition of a new irrigation scheme to IBIS makes it possible to investigate the effects of irrigation over any region. A new surface albedo assignment method used together with IBIS brings further improvement in simulations of surface radiation (Marcella and Eltahir, 2013). Another important feature of the MRCM is the introduction of a new convective cloud and rainfall auto-conversion scheme (Gianotti and Eltahir, 2013). This modification brings more physical realism into an important component of the model and succeeds in simulating convective-radiative feedback, improving model performance across several radiation fields and rainfall characteristics. Other features of MRCM, such as the modified boundary layer height and cloud scheme and the improvements in the dust emission and transport representations, will also be discussed.
ID-based encryption scheme with revocation
NASA Astrophysics Data System (ADS)
Othman, Hafizul Azrie; Ismail, Eddie Shahril
2017-04-01
In 2015, Meshram proposed an efficient ID-based cryptographic encryption scheme based on the difficulty of solving the discrete logarithm and integer factorization problems. The scheme is pairing-free and was claimed to be secure against adaptive chosen plaintext attacks (CPA). Later, Tan et al. proved that the scheme was insecure by presenting a method to recover the secret master key and to obtain the prime factorization of the modulus n. In this paper, we propose a new pairing-free ID-based encryption scheme with revocation based on Meshram's ID-based encryption scheme, which is also secure against Tan et al.'s attacks.
Ding, Chao; Yang, Lijun; Wu, Meng
2017-01-01
Due to the unattended nature and poor security guarantee of the wireless sensor networks (WSNs), adversaries can easily make replicas of compromised nodes, and place them throughout the network to launch various types of attacks. Such an attack is dangerous because it enables the adversaries to control large numbers of nodes and extend the damage of attacks to most of the network with quite limited cost. To stop the node replica attack, we propose a location similarity-based detection scheme using deployment knowledge. Compared with prior solutions, our scheme provides extra functionalities that prevent replicas from generating false location claims without deploying resource-consuming localization techniques on the resource-constraint sensor nodes. We evaluate the security performance of our proposal under different attack strategies through heuristic analysis, and show that our scheme achieves secure and robust replica detection by increasing the cost of node replication. Additionally, we evaluate the impact of network environment on the proposed scheme through theoretic analysis and simulation experiments, and indicate that our scheme achieves effectiveness and efficiency with substantially lower communication, computational, and storage overhead than prior works under different situations and attack strategies. PMID:28098846
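The core of location-based replica detection can be sketched as a consistency check on location claims: the same node ID observed at mutually distant positions indicates a replica. A toy sketch; the criterion and distance bound are illustrative, and the paper's scheme additionally uses deployment knowledge to validate the claims themselves:

```python
import math

def detect_replicas(claims, max_dist=1.0):
    """Flag node IDs whose location claims lie farther apart than a single
    node plausibly could be from its first observed position.
    claims: iterable of (node_id, (x, y)) pairs."""
    seen = {}        # first observed location per node ID
    replicas = set()
    for node_id, (x, y) in claims:
        if node_id in seen:
            px, py = seen[node_id]
            if math.hypot(x - px, y - py) > max_dist:
                replicas.add(node_id)  # same ID, implausibly distant claim
        else:
            seen[node_id] = (x, y)
    return replicas
```

This check costs only a dictionary lookup per claim, which reflects the abstract's point that replica detection can work with low communication, computational, and storage overhead on resource-constrained nodes.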
On Multi-Dimensional Unstructured Mesh Adaption
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
1999-01-01
Anisotropic unstructured mesh adaption is developed for a truly multi-dimensional upwind fluctuation splitting scheme, as applied to scalar advection-diffusion. The adaption is performed locally using edge swapping, point insertion/deletion, and nodal displacements. Comparisons are made against the current state of the art for aggressive anisotropic unstructured adaption, which is based on a posteriori error estimates. Application of both schemes to model problems, with features representative of compressible gas dynamics, shows the present method to be superior to the a posteriori adaption for linear advection. The performance of the two methods is more similar when applied to nonlinear advection, with a difference in the treatment of shocks. The a posteriori adaption can excessively cluster points at a shock, while the present multi-dimensional scheme tends to merely align with a shock, using fewer nodes. As a consequence of this alignment tendency, an implementation of eigenvalue limiting for the suppression of expansion shocks is developed for the multi-dimensional distribution scheme. The differences in the treatment of shocks by the adaption schemes, along with the inherently low levels of artificial dissipation in the fluctuation splitting solver, suggest the present method is a strong candidate for applications to compressible gas dynamics.
Crystal collimator systems for high energy frontier
NASA Astrophysics Data System (ADS)
Sytov, A. I.; Tikhomirov, V. V.; Lobko, A. S.
2017-07-01
Crystalline collimators can potentially improve considerably the cleaning performance of the collimator systems presently built around amorphous collimators. A crystal-based collimation scheme relying on channeling deflection of particles in bent crystals has been proposed and extensively studied both theoretically and experimentally. However, since the efficiency of particle capture into the channeling regime does not exceed ninety percent, this collimation scheme partly suffers from the same leakage problems as the schemes using amorphous collimators. To further improve the cleaning efficiency of the crystal-based collimation system and meet the requirements of the FCC, we suggest here a double-crystal collimation scheme, in which a second crystal is introduced to enhance the deflection of particles that escape capture into the channeling regime in the first crystal. The effect of multiple volume reflection in one bent crystal, and the same effect in a sequence of crystals, is simulated and compared for different crystal numbers and materials at an energy of 50 TeV. To also enhance the efficiency of the first crystal of the suggested double-crystal scheme, we propose two methods: increasing the probability of particle capture into the channeling regime on the first crystal passage by fabricating a cut in the crystal, and amplifying the deflection of nonchanneled particles through multiple volume reflection in one bent crystal, accompanied by particle channeling along a skew plane. We simulate both of these methods for the 50 TeV FCC energy.
A secure biometrics-based authentication scheme for telecare medicine information systems.
Yan, Xiaopeng; Li, Weiheng; Li, Ping; Wang, Jiantao; Hao, Xinhong; Gong, Peng
2013-10-01
The telecare medicine information system (TMIS) allows patients and doctors to access medical services or medical information at remote sites, and therefore offers great convenience. To safeguard patients' privacy, authentication schemes for the TMIS have attracted wide attention. Recently, Tan proposed an efficient biometrics-based authentication scheme for the TMIS and claimed that it withstands various attacks. However, in this paper, we point out that Tan's scheme is vulnerable to the denial-of-service attack. To enhance security, we also propose an improved scheme based on Tan's work. Security and performance analysis shows that our scheme not only overcomes the weaknesses of Tan's scheme but also performs better.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.
2010-01-01
High resolution weather forecast models with explicit prediction of hydrometeor type, size distribution, and fall speed may be useful in the development of precipitation retrievals, by providing representative characteristics of frozen hydrometeors. Several single or double-moment microphysics schemes are currently available within the Weather Research and Forecasting (WRF) model, allowing for the prediction of up to three ice species. Each scheme incorporates different assumptions regarding the characteristics of their ice classes, particularly in terms of size distribution, density, and fall speed. In addition to the prediction of hydrometeor content, these schemes must accurately represent the vertical profile of water vapor to account for possible attenuation, along with the size distribution, density, and shape characteristics of ice crystals that are relevant to microwave scattering. An evaluation of a particular scheme requires the availability of field campaign measurements. The Canadian CloudSat/CALIPSO Validation Project (C3VP) obtained measurements of ice crystal shapes, size distributions, fall speeds, and precipitation during several intensive observation periods. In this study, C3VP observations obtained during the 22 January 2007 synoptic-scale snowfall event are compared against WRF model output, based upon forecasts using four single-moment and two double-moment schemes available as of version 3.1. Schemes are compared against aircraft observations by examining differences in size distribution, density, and content. In addition to direct measurements from aircraft probes, simulated precipitation can also be converted to equivalent, remotely sensed characteristics through the use of the NASA Goddard Satellite Data Simulator Unit. Outputs from high resolution forecasts are compared against radar and satellite observations emphasizing differences in assumed crystal shape and size distribution characteristics.
A study of the response of nonlinear springs
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Knott, T. W.; Johnson, E. R.
1991-01-01
The phases of developing a methodology for studying the response of a spring-reinforced arch subjected to a point load are discussed. The arch is simply supported at its ends, with both the spring and the point load assumed to be at midspan. The spring is present to offset the snap-through behavior normally associated with arches, and to provide a structure that responds with constant resistance over a finite displacement. The phases discussed consist of the following: (1) development of the closed-form solution for the shallow arch case; (2) development of a finite difference analysis to study (shallow) arches; and (3) development of a finite element analysis for studying more general shallow and nonshallow arches. The two numerical analyses rely on a continuation scheme to move the solution past limit points and onto bifurcated paths, both characteristics being common to the arch problem. An eigenvalue method is used as the continuation scheme. The finite difference analysis is based on a mixed formulation (force and displacement variables) of the governing equations. The governing equations for the mixed formulation are in first-order form, making the finite difference implementation convenient. However, the mixed formulation is not well suited to the eigenvalue continuation scheme, which motivated the displacement-based finite element analysis. Both the finite difference and the finite element analyses are compared with the closed-form shallow arch solution. Agreement is excellent, except for the potential problems with the finite difference analysis and the continuation scheme. Agreement between the finite element analysis and another investigator's numerical analysis for deep arches is also good.
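The continuation idea above can be illustrated on a far simpler problem. The sketch below is a generic pseudo-arclength continuation (not the authors' eigenvalue-based scheme) applied to an assumed scalar snap-through model, lam = u^3 - u, whose equilibrium path has two limit points where continuation in the load lam alone breaks down; the model and all names are illustrative.

```python
import numpy as np

# Illustrative scalar snap-through model (a stand-in for the arch
# equilibrium equations): residual f(u, lam) = lam - (u**3 - u).
# The path lam = u^3 - u has limit points at u = +/- 1/sqrt(3).

def f(u, lam):
    return lam - (u**3 - u)

def arclength_step(u0, lam0, du, dlam, ds, tol=1e-12):
    """One pseudo-arclength step: solve f(u, lam) = 0 together with the
    constraint du*(u - u0) + dlam*(lam - lam0) - ds = 0 by Newton."""
    u, lam = u0 + du * ds, lam0 + dlam * ds      # tangent predictor
    for _ in range(50):
        r = np.array([f(u, lam),
                      du * (u - u0) + dlam * (lam - lam0) - ds])
        if np.linalg.norm(r) < tol:
            break
        J = np.array([[-(3 * u**2 - 1), 1.0],    # [df/du, df/dlam]
                      [du, dlam]])
        step = np.linalg.solve(J, -r)
        u, lam = u + step[0], lam + step[1]
    return u, lam

# Trace the path starting well before the first limit point.
path = [(-1.5, (-1.5)**3 - (-1.5))]
for _ in range(120):
    u0, lam0 = path[-1]
    t = np.array([1.0, 3 * u0**2 - 1])           # path tangent (du, dlam)
    t /= np.linalg.norm(t)
    path.append(arclength_step(u0, lam0, t[0], t[1], ds=0.05))
```

Because the constraint fixes the step length along the tangent rather than along lam, the iteration passes smoothly through both folds; the paper's analyses need the same ingredient, with vectors of unknowns in place of the scalar u.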
Carlson, Josh J; Sullivan, Sean D; Garrison, Louis P; Neumann, Peter J; Veenstra, David L
2010-08-01
To identify, categorize and examine performance-based health outcomes reimbursement schemes for medical technology, we performed a review of such schemes over the past 10 years (7/98-10/09) using publicly available databases, web and grey literature searches, and input from healthcare reimbursement experts. We developed a taxonomy of scheme types by inductively organizing the schemes identified according to the timing, execution, and health outcomes measured in the schemes. Our search yielded 34 coverage-with-evidence-development schemes, 10 conditional treatment continuation schemes, and 14 performance-linked reimbursement schemes. The majority of schemes are in Europe and Australia, with an increasing number in Canada and the U.S. These schemes have the potential to alter the reimbursement and pricing landscape for medical technology, but significant challenges, including high transaction costs and insufficient information systems, may limit their long-term impact. Future studies regarding experiences and outcomes of implemented schemes are necessary. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
Performance Analysis of Cluster Formation in Wireless Sensor Networks.
Montiel, Edgar Romo; Rivero-Angeles, Mario E; Rubino, Gerardo; Molina-Lozano, Heron; Menchaca-Mendez, Rolando; Menchaca-Mendez, Ricardo
2017-12-13
Cluster-based wireless sensor networks have been extensively used in the literature to achieve considerable reductions in energy consumption. However, two aspects of such systems have been largely overlooked: the transmission probability used during the cluster formation phase, and the way in which cluster heads are selected. Both of these issues have an important impact on the performance of the system. For the former, it is common to assume that sensor nodes in a cluster-based Wireless Sensor Network (WSN) use a fixed transmission probability to send control data in order to build the clusters. However, due to the highly variable conditions experienced by these networks, a fixed transmission probability may lead to extra energy consumption. In view of this, three different transmission probability strategies are studied: optimal, fixed and adaptive. In this context, we also investigate cluster head selection schemes; specifically, we consider two intelligent schemes based on the fuzzy C-means and k-medoids algorithms, and a random selection with no intelligence. We show that the use of intelligent schemes greatly improves the performance of the system, but their use entails higher complexity and selection delay. The main performance metrics considered in this work are energy consumption, successful transmission probability and cluster formation latency. As an additional feature of this work, we study the effect of errors in the wireless channel and the impact on the performance of the system under the different transmission probability schemes.
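As a rough illustration of the intelligent cluster-head selection discussed above, the sketch below runs a naive PAM-style k-medoids pass over synthetic node positions; the field size, node count, and distance-based cost proxy are assumptions for illustration, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.uniform(0.0, 100.0, size=(60, 2))   # sensor positions in a 100 m x 100 m field

def total_cost(nodes, heads):
    # Sum of each node's distance to its nearest cluster head,
    # a crude proxy for intra-cluster transmission energy.
    d = np.linalg.norm(nodes[:, None, :] - nodes[heads][None, :, :], axis=2)
    return d.min(axis=1).sum()

def k_medoids(nodes, k, sweeps=100):
    """Naive PAM-style k-medoids: greedily swap a head for a non-head
    whenever the swap lowers the total distance cost."""
    heads = list(rng.choice(len(nodes), size=k, replace=False))
    for _ in range(sweeps):
        improved = False
        for i in range(k):
            for c in range(len(nodes)):
                if c in heads:
                    continue
                trial = heads.copy()
                trial[i] = c
                if total_cost(nodes, trial) < total_cost(nodes, heads):
                    heads, improved = trial, True
        if not improved:                 # no single swap helps: local optimum
            break
    return heads

smart = k_medoids(nodes, k=4)
random_pick = list(rng.choice(len(nodes), size=4, replace=False))
```

The greedy pass converges to a configuration where no single swap helps; its O(k n^2) work per sweep is one concrete face of the "higher complexity and selection delay" the abstract attributes to the intelligent schemes.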
Higher-Order Compact Schemes for Numerical Simulation of Incompressible Flows
NASA Technical Reports Server (NTRS)
Wilson, Robert V.; Demuren, Ayodeji O.; Carpenter, Mark
1998-01-01
A higher order accurate numerical procedure has been developed for solving incompressible Navier-Stokes equations for 2D or 3D fluid flow problems. It is based on low-storage Runge-Kutta schemes for temporal discretization and fourth and sixth order compact finite-difference schemes for spatial discretization. The particular difficulty of satisfying the divergence-free velocity field required in incompressible fluid flow is resolved by solving a Poisson equation for pressure. It is demonstrated that for consistent global accuracy, it is necessary to employ the same order of accuracy in the discretization of the Poisson equation. Special care is also required to achieve the formal temporal accuracy of the Runge-Kutta schemes. The accuracy of the present procedure is demonstrated by application to several pertinent benchmark problems.
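The flavor of the compact spatial schemes mentioned above can be seen in the classical fourth-order Pade approximation of the first derivative, sketched below on a periodic grid. This is a generic textbook compact scheme, not the paper's exact discretization, and a dense solve stands in for the cyclic tridiagonal solver a production code would use.

```python
import numpy as np

def compact_first_derivative(f, h):
    """Fourth-order compact (Pade) first derivative on a periodic grid:
        f'_{i-1} + 4 f'_i + f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / h,
    solved as one cyclic tridiagonal system for all f'_i at once."""
    n = len(f)
    A = np.zeros((n, n))
    rhs = np.empty(n)
    for i in range(n):
        A[i, (i - 1) % n] = 1.0
        A[i, i] = 4.0
        A[i, (i + 1) % n] = 1.0
        rhs[i] = 3.0 * (f[(i + 1) % n] - f[(i - 1) % n]) / h
    return np.linalg.solve(A, rhs)

def max_error(n):
    # Error against the exact derivative of sin(x) on [0, 2*pi).
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    d = compact_first_derivative(np.sin(x), x[1] - x[0])
    return np.max(np.abs(d - np.cos(x)))
```

Halving h should cut the error by roughly 2^4 = 16; the abstract's point is that this rate survives globally only if the pressure Poisson equation is discretized to the same order.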
Ensemble Manifold Rank Preserving for Acceleration-Based Human Activity Recognition.
Tao, Dapeng; Jin, Lianwen; Yuan, Yuan; Xue, Yang
2016-06-01
With the rapid development of mobile devices and pervasive computing technologies, acceleration-based human activity recognition, a difficult yet essential problem in mobile apps, has received intensive attention recently. Different acceleration signals representing different activities, or even the same activity, have different attributes, which causes difficulties in normalizing the signals. We thus cannot directly compare these signals with each other, because they are embedded in a nonmetric space. Therefore, we present a nonmetric scheme that retains discriminative and robust frequency-domain information by developing a novel ensemble manifold rank preserving (EMRP) algorithm. EMRP simultaneously considers three aspects: 1) it encodes the local geometry using the ranking order information of intraclass samples distributed on local patches; 2) it keeps the discriminative information by maximizing the margin between samples of different classes; and 3) it finds the optimal linear combination of the alignment matrices to approximate the intrinsic manifold lying in the data. Experiments are conducted on the South China University of Technology naturalistic 3-D acceleration-based activity dataset and the naturalistic mobile-devices-based human activity dataset to demonstrate the robustness and effectiveness of the new nonmetric scheme for acceleration-based human activity recognition.
Mishra, Dheerendra
2015-03-01
Smart card based authentication and key agreement schemes for telecare medicine information systems (TMIS) enable doctors, nurses, patients and health visitors to use smart cards for secure login to medical information systems. In recent years, several authentication and key agreement schemes have been proposed to present secure and efficient solutions for TMIS. Most of the existing authentication schemes for TMIS either have higher computation overhead or are vulnerable to attacks. To reduce the computational overhead and enhance security, Lee recently proposed an authentication and key agreement scheme using chaotic maps for TMIS. Xu et al. also proposed a password-based authentication and key agreement scheme for TMIS using elliptic curve cryptography. Both schemes are more efficient than conventional public key cryptography based schemes, and both are important as efficient solutions for TMIS. We analyze the security of both Lee's scheme and Xu et al.'s scheme. Unfortunately, we identify that both schemes are vulnerable to the denial of service attack. To understand the security failures of these cryptographic schemes, which is the key to patching existing schemes and designing future ones, we demonstrate the security loopholes of Lee's scheme and Xu et al.'s scheme in this paper.
Lu, Yanrong; Li, Lixiang; Peng, Haipeng; Xie, Dong; Yang, Yixian
2015-06-01
Telecare Medicine Information Systems (TMISs) provide an efficient communication platform through which patients can access health-care delivery services via the Internet or mobile networks. Authentication becomes essential when a remote patient logs in to the telecare server. Recently, many extended chaotic maps based authentication schemes using smart cards for TMISs have been proposed. Li et al. proposed a secure smart-card-based authentication scheme for TMISs using extended chaotic maps, building on Lee's and Jiang et al.'s schemes. In this study, we show that Li et al.'s scheme still has weaknesses, such as violation of session key security, vulnerability to the user impersonation attack, and lack of local verification. To overcome these flaws, we propose a chaotic maps and smart cards based password authentication scheme applying a biometrics technique and hash function operations. Through informal and formal security analyses, we demonstrate that our scheme is resilient to possible known attacks, including the attacks found in Li et al.'s scheme. Compared with the previous authentication schemes, the proposed scheme is more secure and efficient and hence more practical for telemedical environments.
Exact density functional and wave function embedding schemes based on orbital localization
NASA Astrophysics Data System (ADS)
Hégely, Bence; Nagy, Péter R.; Ferenczy, György G.; Kállay, Mihály
2016-08-01
Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.
Enhanced smartcard-based password-authenticated key agreement using extended chaotic maps.
Lee, Tian-Fu; Hsiao, Chia-Hung; Hwang, Shi-Han; Lin, Tsung-Hung
2017-01-01
A smartcard based password-authenticated key agreement scheme enables a legal user to log in to a remote authentication server and access remote services through public networks using a weak password and a smart card. Lin recently presented an improved chaotic maps-based password-authenticated key agreement scheme that used smartcards to eliminate the weaknesses of the scheme of Guo and Chang, which does not provide strong user anonymity and violates session key security. However, the improved scheme of Lin does not ensure the freshness and validity of messages, so it still fails to withstand denial-of-service and privileged-insider attacks. Additionally, a single malicious participant can predetermine the session key, so the improved scheme does not exhibit the contributory property of key agreements. This investigation discusses these weaknesses and proposes an enhanced smartcard-based password-authenticated key agreement scheme that utilizes extended chaotic maps. The session security of this enhanced scheme is based on the extended chaotic map-based Diffie-Hellman problem, and is proven in the real-or-random and the sequence-of-games models. Moreover, the enhanced scheme ensures the freshness of communicated messages by appending timestamps, and thereby avoids the weaknesses in previous schemes.
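The extended chaotic map-based Diffie-Hellman problem mentioned above rests on the semigroup property of Chebyshev polynomials, T_a(T_b(x)) = T_ab(x). The sketch below shows only that key-agreement core over a deliberately tiny prime field; the parameters are toy assumptions, and the actual scheme additionally involves smartcards, hashing, and timestamps.

```python
P = 1_000_003   # toy prime; a real deployment would use a far larger modulus

def cheb(n, x, p=P):
    """Chebyshev polynomial T_n(x) mod p via fast exponentiation of the
    2x2 companion matrix of the recurrence T_{k+1} = 2x T_k - T_{k-1},
    with T_0 = 1 and T_1 = x."""
    if n == 0:
        return 1 % p

    def mul(A, B):  # 2x2 matrix product mod p
        return (((A[0][0]*B[0][0] + A[0][1]*B[1][0]) % p,
                 (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % p),
                ((A[1][0]*B[0][0] + A[1][1]*B[1][0]) % p,
                 (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % p))

    M = ((2 * x % p, p - 1), (1, 0))     # p - 1 plays the role of -1 mod p
    R = ((1, 0), (0, 1))                 # (T_n, T_{n-1}) = M^(n-1) (T_1, T_0)
    e = n - 1
    while e:
        if e & 1:
            R = mul(R, M)
        M = mul(M, M)
        e >>= 1
    return (R[0][0] * x + R[0][1]) % p

# Key agreement via the semigroup property T_a(T_b(x)) = T_ab(x) (mod p).
x = 42                                   # public seed
a, b = 12345, 67890                      # the two parties' secrets
A_pub, B_pub = cheb(a, x), cheb(b, x)    # exchanged in the clear
shared_a = cheb(a, B_pub)                # first party's shared key
shared_b = cheb(b, A_pub)                # second party's shared key
```

Since T_a(T_b(x)) = T_ab(x) is a polynomial identity over the integers, it also holds modulo p, so both parties arrive at the same value.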
Zhu, Jianping; Tao, Zhengsu; Lv, Chunfeng
2012-01-01
Studies of the IEEE 802.15.4 Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) scheme have received considerable attention recently, with most of these studies focusing on homogeneous or saturated traffic. Two novel transmission schemes, OSTS and BSTS (One Service a Time Scheme and Bulk Service a Time Scheme), are proposed in this paper to improve the behavior of time-critical buffered networks with heterogeneous unsaturated traffic. First, we propose a model which contains two modified semi-Markov chains and a macro-Markov chain, combined with the theory of M/G/1/K queues, to evaluate the characteristics of these two improved CSMA/CA schemes, in which traffic arrivals and accessing packets are given non-preemptive priority over each other instead of strict prioritization. Then, throughput, packet delay and energy consumption of unsaturated, unacknowledged IEEE 802.15.4 beacon-enabled networks are predicted from an overall point of view which takes the dependent interactions of different types of nodes into account. Moreover, performance comparisons of these two schemes with other non-priority schemes are also presented. Analysis and simulation results show that the delay and fairness of our schemes are superior to those of other schemes, while throughput and energy efficiency are superior in more heterogeneous situations. Comprehensive simulations demonstrate that the analysis results of these models match well with the simulation results.
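A much simpler slotted-contention model (a stand-in, not the paper's M/G/1/K semi-Markov analysis) already shows why a fixed access probability is fragile under heterogeneous load: the chance of a collision-free slot with n contenders peaks at p = 1/n.

```python
def success_prob(n, p):
    """Probability that exactly one of n contending nodes transmits in a
    slot, i.e. the slot carries a packet without collision."""
    return n * p * (1 - p) ** (n - 1)

# A p tuned for 10 nodes loses badly when the load quadruples,
# while adapting p = 1/n tracks the per-slot optimum.
fixed_p = 0.1
results = {n: (success_prob(n, fixed_p), success_prob(n, 1.0 / n))
           for n in (5, 10, 40)}
```

With 40 contenders, the fixed p = 0.1 wastes most slots on collisions, while p = 1/40 recovers the usual 1/e-like success rate; adaptive schemes exist precisely to track this moving optimum.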
NASA Astrophysics Data System (ADS)
Zhang, Jinfang; Yan, Xiaoqing; Wang, Hongfu
2018-02-01
With the rapid development of renewable energy in Northwest China, curtailment is becoming more and more severe owing to the lack of regulation capability and sufficient transmission capacity. Building on the existing HVDC projects, exploring a hybrid transmission mode that bundles thermal power with renewable power is therefore necessary and important. This paper proposes a method for determining the optimal combination of thermal power and renewable energy on HVDC lines, based on multi-scheme comparison. After establishing a mathematical model of the electric power balance in time-series mode, ten different schemes are simulated to identify the most suitable one. Using the proposed discrimination criteria, including generation device utilization hours, the proportion of renewable electricity, and the curtailment level, a recommended scheme is found. The results also validate the effectiveness of the method.
NASA Astrophysics Data System (ADS)
Tanohata, Naoki; Seki, Hirokazu
This paper describes a novel drive control scheme for electric power assisted wheelchairs based on neural network learning of human wheelchair operation characteristics. The electric power assisted wheelchair, which enhances the drive force of the operator with electric motors, is expected to be widely used as a mobility support system for elderly and disabled people. However, some handicapped people with paralysis of the muscles on one side of the body cannot maneuver the wheelchair as desired because of the difference between the right and left input forces. Therefore, this study proposes a neural network learning system for such human wheelchair operation characteristics and a drive control scheme with variable distribution and assistance ratios. Driving experiments are performed to confirm the effectiveness of the proposed control system.
Methods of separation of variables in turbulence theory
NASA Technical Reports Server (NTRS)
Tsuge, S.
1978-01-01
Two schemes for closing the turbulent moment equations are proposed, both of which separate the double-correlation equations into single-point equations. The first is based on neglecting the triple correlation, leading to an equation that differs from the small-perturbation gasdynamic equations in that the separation constant appears as the frequency. Grid-produced turbulence is described in this light as time-independent, cylindrically isotropic turbulence. Application to wall turbulence, guided by a new asymptotic method for the Orr-Sommerfeld equation, reveals a neutrally stable mode of essentially three-dimensional nature. The second closure scheme is based on an assumed identity of the separated variables through which the triple and quadruple correlations are formed. The resulting equation adds, to its counterpart in the first scheme, a nonlinear convolution integral in frequency that describes the role of the triple correlation in direct energy cascading.
NASA Astrophysics Data System (ADS)
Antón, M.; Kroon, M.; López, M.; Vilaplana, J. M.; Bañón, M.; van der A, R.; Veefkind, J. P.; Stammes, P.; Alados-Arboledas, L.
2011-11-01
This article focuses on the validation of the total ozone column (TOC) data set acquired by the Global Ozone Monitoring Experiment (GOME) and the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) satellite remote sensing instruments using the Total Ozone Retrieval Scheme for the GOME Instrument Based on the Ozone Monitoring Instrument (TOGOMI) and Total Ozone Retrieval Scheme for the SCIAMACHY Instrument Based on the Ozone Monitoring Instrument (TOSOMI) retrieval algorithms developed by the Royal Netherlands Meteorological Institute. In this analysis, spatially colocated, daily averaged ground-based observations performed by five well-calibrated Brewer spectrophotometers on the Iberian Peninsula are used. The period of study runs from January 2004 to December 2009. The agreement between satellite and ground-based TOC data is excellent (R2 higher than 0.94). Nevertheless, the TOC data derived from both satellite instruments underestimate the ground-based data. On average, this underestimation is 1.1% for GOME and 1.3% for SCIAMACHY. The SCIAMACHY-Brewer TOC differences show a significant solar zenith angle (SZA) dependence, which causes a systematic seasonal dependence. By contrast, the GOME-Brewer TOC differences show no significant SZA dependence and hence no seasonality, even though they are processed with exactly the same algorithm. The satellite-Brewer TOC differences for the two satellite instruments show a clear and similar dependence on the viewing zenith angle under cloudy conditions. In addition, both the GOME-Brewer and SCIAMACHY-Brewer TOC differences reveal a very similar behavior with respect to the satellite cloud properties, namely cloud fraction and cloud top pressure, which originate from the same cloud algorithm (Fast Retrieval Scheme for Clouds from the Oxygen A-Band (FRESCO+)) in both the TOSOMI and TOGOMI retrieval algorithms.
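On synthetic numbers, the comparison statistics quoted above (percent underestimation and R2) reduce to a few lines; the generated values below are assumptions chosen only to mimic a roughly 1% low bias, not actual Brewer or satellite data.

```python
import numpy as np

rng = np.random.default_rng(1)
ground = rng.uniform(280.0, 380.0, 200)                  # Brewer-like TOC, Dobson units
satellite = 0.989 * ground + rng.normal(0.0, 3.0, 200)   # ~1.1% low, plus retrieval noise

rel_diff = 100.0 * (satellite - ground) / ground   # percent difference per day
mean_bias = rel_diff.mean()                        # negative => satellite underestimates
r2 = np.corrcoef(satellite, ground)[0, 1] ** 2
```

A solar-zenith-angle dependence like the one reported for SCIAMACHY would appear as a trend of rel_diff against SZA rather than in the mean bias alone.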
Hyun, Eugin; Jin, Young-Seok; Lee, Jong-Hun
2016-01-01
For an automotive pedestrian detection radar system, fast-ramp based 2D range-Doppler Frequency Modulated Continuous Wave (FMCW) radar is effective for distinguishing between moving targets and unwanted clutter. However, when a weak moving target such as a pedestrian exists together with strong clutter, the pedestrian may be masked by the side-lobe of the clutter even though they are notably separated in the Doppler dimension. To prevent this problem, one popular solution is the use of a windowing scheme with a weighting function. However, this method leads to a spread spectrum, so a pedestrian with weak signal power and slow Doppler may also be masked by the main-lobe of the clutter. With a fast-ramp based FMCW radar, if the target is moving, the complex spectrum of the range Fast Fourier Transform (FFT) changes with a constant phase difference over ramps. In contrast, the clutter exhibits constant phase irrespective of the ramps. Based on this fact, in this paper we propose a pedestrian detection scheme for highly cluttered environments using a coherent phase difference method. By detecting the coherent phase difference in the complex spectrum of the range-FFT, we first extract the range profile of the moving pedestrians. Then, through the Doppler FFT, we obtain the 2D range-Doppler map for only the pedestrian. To test the proposed detection scheme, we have developed a real-time data logging system with a 24 GHz FMCW transceiver. In laboratory tests, we verified that the signal processing results from the proposed method were much better than those expected from the conventional 2D FFT-based detection method. PMID:26805835
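The coherent phase difference idea above can be sketched with synthetic signals: a static clutter bin keeps a constant phase across ramps, while a moving target's phase advances by a fixed step per ramp, so thresholding the ramp-to-ramp phase change (after gating out empty bins) isolates the mover. All signal parameters below are invented for the illustration, not taken from the 24 GHz system.

```python
import numpy as np

N, K = 128, 64                      # samples per ramp, number of ramps
n = np.arange(N)                    # fast-time sample index
k = np.arange(K)[:, None]           # ramp (slow-time) index

# Strong static clutter at range bin 10, weak mover at range bin 25
# whose phase advances by 0.3 rad per ramp (its Doppler).
beat = 5.0 * np.exp(2j * np.pi * 10 * n / N) \
     + 0.2 * np.exp(2j * np.pi * 25 * n / N) * np.exp(1j * 0.3 * k)

range_fft = np.fft.fft(beat, axis=1)            # range spectrum, one row per ramp

# Ramp-to-ramp phase change per range bin; gate out near-empty bins first.
phase_step = np.angle(range_fft[1:] * np.conj(range_fft[:-1]))
power = np.abs(range_fft).mean(axis=0)
occupied = power > 0.01 * power.max()
moving_bins = np.where(occupied & (np.abs(phase_step.mean(axis=0)) > 0.1))[0]
```

The clutter bin survives the power gate but fails the phase test, while the weak pedestrian-like target passes both, so the subsequent Doppler FFT can be restricted to the selected bins.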
Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming
2016-01-01
With the security requirements of networks, biometric-based authentication schemes applied in the multi-server environment have become more crucial and are widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme, built on the cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, demonstrating that it satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, including features not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more security properties and lower computation cost, making it more appropriate for practical applications in remote distributed networks. PMID:26866606
Similar and yet so different: cash-for-care in six European countries' long-term care policies.
Da Roit, Barbara; Le Bihan, Blanche
2010-09-01
In response to increasing care needs, the reform or development of long-term care (LTC) systems has become a prominent policy issue in all European countries. Cash-for-care schemes, which provide allowances to dependent people instead of services, represent a key policy aimed at ensuring choice, fostering family care, developing care markets, and containing costs. A detailed analysis of policy documents and regulations, together with a systematic review of existing studies, was used to investigate the differences among six European countries (Austria, France, Germany, Italy, the Netherlands, and Sweden). The rationale and evolution of their various cash-for-care schemes within the framework of their LTC systems were also explored. While most of the literature presents cash-for-care schemes as a common trend in the reforms that began in the 1990s and often treats them separately from the overarching LTC policies, this article argues that the policy context, timing, and specific regulation of the new schemes have created different visions of care and care work that in turn have given rise to distinct LTC configurations. A new typology of long-term care configurations is proposed based on the inclusiveness of the system, the role of cash-for-care schemes and their specific regulations, and the views of informal care and the care work that they require. © 2010 Milbank Memorial Fund. Published by Wiley Periodicals Inc.
NASA Astrophysics Data System (ADS)
Abramov, E. Y.; Sopov, V. I.
2017-10-01
Using the example of a traction network area with highly asymmetric power supply parameters, this study presents a sequence for the comparative assessment of power losses in a DC traction network under parallel and traditional separate operating modes of the traction substation feeders. Experimental measurements were carried out under both modes of operation. Calculation results based on statistical processing showed that power losses decrease in the contact network and increase in the feeders. These changes proved to be critical, which demonstrates the significance of the potential effects of converting traction network areas to parallel feeder operation. An analytical method for calculating the average power losses under different feeding schemes of the traction network was developed; on its basis, the dependences of the relative losses were obtained by varying the difference in feeder voltages. The calculation results showed that a transition to a two-sided feeding scheme is not justified for the considered traction network area. A larger reduction in the total power loss can be obtained with a smaller difference in the feeders' resistance and/or a more symmetrical sectioning scheme of the contact network.
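The baseline effect under comparison can be illustrated with a toy resistive model: with equal substation voltages, two-sided feeding splits the load current and lowers ohmic losses, which is exactly the benefit that feeder voltage differences and asymmetric resistances can erase. A sketch under the equal-voltage assumption (all values hypothetical):

```python
def feeder_losses(i_load, r_left, r_right, two_sided):
    """Ohmic losses in the contact line for a load current i_load drawn at a
    point with resistances r_left / r_right to the two substations.
    Toy model: equal substation voltages assumed, so two-sided feeding acts
    as a simple current divider."""
    if not two_sided:
        return i_load ** 2 * r_left                    # all current from one side
    i_l = i_load * r_right / (r_left + r_right)        # current divider
    i_r = i_load - i_l
    return i_l ** 2 * r_left + i_r ** 2 * r_right

# 100 A train, 0.1 ohm to the left substation, 0.3 ohm to the right one
print(feeder_losses(100.0, 0.1, 0.3, False))   # 1000.0 W, one-sided
print(feeder_losses(100.0, 0.1, 0.3, True))    # 750.0 W, two-sided
```

Under the paper's conditions (unequal feeder voltages driving circulating currents), this idealized saving can vanish, which is why the study finds the two-sided scheme unjustified for its network area.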
NASA Astrophysics Data System (ADS)
Bensiali, Bouchra; Bodi, Kowsik; Ciraolo, Guido; Ghendrih, Philippe; Liandrat, Jacques
2013-03-01
In this work, we compare different interpolation operators in the context of particle tracking, with an emphasis on situations involving velocity fields with steep gradients. Since, in this case, most classical methods give rise to the Gibbs phenomenon (the generation of oscillations near discontinuities), we present new methods for particle tracking based on subdivision schemes, especially the Piecewise Parabolic Harmonic (PPH) scheme, which has shown its advantage in image processing in the presence of strong contrasts. First, an analytic univariate case with a discontinuous velocity field is considered in order to highlight the effect of the Gibbs phenomenon on trajectory calculation. Theoretical results are provided. Then, we show, regardless of the interpolation method, the need for a conservative approach when integrating a conservative problem with a velocity field deriving from a potential. Finally, the PPH scheme is applied in a more realistic case of a time-dependent potential encountered in the edge turbulence of magnetically confined plasmas, to compare the propagation of density structures (turbulence bursts) with the dynamics of test particles. This study highlights the difference between particle transport and density transport in turbulent fields.
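The Gibbs-suppressing property of the PPH rule can be seen on step data: the classical 4-point (cubic) midpoint rule undershoots next to a jump, while the PPH rule, which replaces the arithmetic mean of second differences with a harmonic mean, does not. A sketch assuming the standard midpoint formulas (check the paper for the exact variant used):

```python
def lagrange_mid(f0, f1, f2, f3):
    """Classical 4-point (cubic) interpolation of the midpoint between f1 and f2."""
    return (-f0 + 9 * f1 + 9 * f2 - f3) / 16

def pph_mid(f0, f1, f2, f3):
    """PPH midpoint rule: second differences enter through a harmonic mean,
    which vanishes when their signs disagree, suppressing Gibbs oscillations."""
    d1 = f0 - 2 * f1 + f2
    d2 = f1 - 2 * f2 + f3
    h = 2 * d1 * d2 / (d1 + d2) if d1 * d2 > 0 else 0.0   # harmonic mean
    return (f1 + f2) / 2 - h / 8

# Step data (0, 0, 0, 1): interpolate the cell just left of the jump
print(lagrange_mid(0, 0, 0, 1))   # -0.0625  (undershoot, below the data range)
print(pph_mid(0, 0, 0, 1))        # 0.0      (stays within the data range)
```

For smooth data both rules agree to high order; the nonlinearity only activates near discontinuities.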
Adaptive Intuitionistic Fuzzy Enhancement of Brain Tumor MR Images
NASA Astrophysics Data System (ADS)
Deng, He; Deng, Wankai; Sun, Xianping; Ye, Chaohui; Zhou, Xin
2016-10-01
Image enhancement techniques are able to improve the contrast and visual quality of magnetic resonance (MR) images. However, conventional methods cannot make up for the deficiencies of the respective brain tumor MR imaging modes. In this paper, we propose an adaptive intuitionistic fuzzy sets-based scheme, called AIFE, which takes information provided by different MR acquisitions and tries to enhance the normal and abnormal structural regions of the brain while displaying the enhanced results as a single image. The AIFE scheme first separates an input image into several sub-images, then divides each sub-image into object and background areas. After that, different novel fuzzification, hyperbolization and defuzzification operations are implemented on each object/background area, and finally an enhanced result is achieved via nonlinear fusion operators. The fuzzy implementations can be processed in parallel. Real data experiments demonstrate that the AIFE scheme not only effectively fuses information from images acquired with different MR sequences into a single image, but also achieves better enhancement performance than conventional baseline algorithms. This indicates that the proposed AIFE scheme has potential for improving the detection and diagnosis of brain tumors.
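The fuzzify/transform/defuzzify pipeline can be illustrated with the classical intensification (INT) operator; the AIFE scheme's intuitionistic fuzzification and hyperbolization operators are more elaborate, so this is only a structural sketch:

```python
import numpy as np

def fuzzy_enhance(img):
    """Minimal fuzzy contrast enhancement: map intensities to a membership
    function on [0, 1], apply the classical INT intensification operator
    (push values away from 0.5), then map back. Illustrative only; AIFE's
    intuitionistic operators and sub-image handling differ."""
    lo, hi = img.min(), img.max()
    mu = (img - lo) / (hi - lo)                              # fuzzification
    mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2) # INT operator
    return lo + mu * (hi - lo)                               # defuzzification

img = np.array([[80.0, 100.0, 120.0],
                [90.0, 110.0, 130.0],
                [85.0, 105.0, 125.0]])
out = fuzzy_enhance(img)
```

The output keeps the original intensity range while increasing the spread (contrast) of mid-range values.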
Mentasti, Massimo; Tewolde, Rediat; Aslett, Martin; Harris, Simon R.; Afshar, Baharak; Underwood, Anthony; Harrison, Timothy G.
2016-01-01
Sequence-based typing (SBT), analogous to multilocus sequence typing (MLST), is the current “gold standard” typing method for investigation of legionellosis outbreaks caused by Legionella pneumophila. However, as common sequence types (STs) cause many infections, some investigations remain unresolved. In this study, various whole-genome sequencing (WGS)-based methods were evaluated according to published guidelines, including (i) a single nucleotide polymorphism (SNP)-based method, (ii) extended MLST using different numbers of genes, (iii) determination of gene presence or absence, and (iv) a kmer-based method. L. pneumophila serogroup 1 isolates (n = 106) from the standard “typing panel,” previously used by the European Society for Clinical Microbiology Study Group on Legionella Infections (ESGLI), were tested together with another 229 isolates. Over 98% of isolates were considered typeable using the SNP- and kmer-based methods. Percentages of isolates with complete extended MLST profiles ranged from 99.1% (50 genes) to 86.8% (1,455 genes), while only 41.5% produced a full profile with the gene presence/absence scheme. Replicates demonstrated that all methods offer 100% reproducibility. Indices of discrimination range from 0.972 (ribosomal MLST) to 0.999 (SNP based), and all values were higher than that achieved with SBT (0.940). Epidemiological concordance is generally inversely related to discriminatory power. We propose that an extended MLST scheme with ∼50 genes provides optimal epidemiological concordance while substantially improving the discrimination offered by SBT and can be used as part of a hierarchical typing scheme that should maintain backwards compatibility and increase discrimination where necessary. This analysis will be useful for the ESGLI to design a scheme that has the potential to become the new gold standard typing method for L. pneumophila. PMID:27280420
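The quoted indices of discrimination are typically Simpson's index of diversity applied to the typing results: the probability that two randomly chosen isolates receive different types. A minimal sketch (toy type labels):

```python
from collections import Counter

def simpson_discrimination(types):
    """Simpson's index of discrimination:
    D = 1 - sum_j n_j (n_j - 1) / (N (N - 1)),
    the probability that two isolates drawn without replacement differ in type."""
    counts = Counter(types).values()
    n = sum(counts)
    return 1 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Toy typing result: six isolates falling into three types
print(simpson_discrimination(["ST1", "ST1", "ST1", "ST47", "ST47", "ST62"]))  # ~0.733
```

A method assigning every isolate its own type scores 1.0; a method assigning one type to all scores 0.0, which frames the 0.940 (SBT) to 0.999 (SNP-based) range reported above.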
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podgorsak, A; Bednarek, D; Rudin, S
2016-06-15
Purpose: To successfully implement and operate a photon counting scheme on an electron multiplying charge-coupled device (EMCCD) based micro-CT system. Methods: We built an EMCCD based micro-CT system and implemented a photon counting scheme. EMCCD detectors use avalanche transfer registries to multiply the input signal far above the readout noise floor. Due to intrinsic differences in the pixel array, using a global threshold for photon counting is not optimal. To address this shortcoming, we generated a threshold array based on sixty dark fields (no x-ray exposure). We calculated an average matrix and a variance matrix of the dark field sequence. The average matrix was used for the offset correction while the variance matrix was used to set individual pixel thresholds for the photon counting scheme. Three hundred photon counting frames were added for each projection and 360 projections were acquired for each object. The system was used to scan various objects followed by reconstruction using an FDK algorithm. Results: Examination of the projection images and reconstructed slices of the objects indicated clear interior detail free of beam hardening artifacts. This suggests successful implementation of the photon counting scheme on our EMCCD based micro-CT system. Conclusion: This work indicates that it is possible to implement and operate a photon counting scheme on an EMCCD based micro-CT system, suggesting that these devices might be able to operate at very low x-ray exposures in a photon counting mode. Such devices could have future implications in clinical CT protocols. NIH Grant R01EB002873; Toshiba Medical Systems Corp.
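The per-pixel thresholding described in Methods can be sketched as follows; frame sizes, noise levels, photon amplitude and the 4-sigma threshold factor are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
h = w = 8
dark = rng.normal(100.0, 5.0, size=(60, h, w))   # 60 dark fields (no x-ray exposure)
offset = dark.mean(axis=0)                       # per-pixel offset correction
sigma = dark.std(axis=0)                         # per-pixel noise estimate
thresh = offset + 4.0 * sigma                    # individual pixel thresholds (k = 4 assumed)

counts = np.zeros((h, w))
for _ in range(300):                             # 300 photon counting frames per projection
    frame = rng.normal(100.0, 5.0, size=(h, w))
    if rng.random() < 0.2:                       # photons hit pixel (3, 3) in ~20% of frames
        frame[3, 3] += 60.0
    counts += frame > thresh                     # count threshold crossings per pixel
```

Because each pixel's threshold tracks its own noise, the illuminated pixel accumulates counts while quiet pixels register essentially none, which is the point of using the variance matrix instead of a global threshold.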
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Fei; Tao, Ye; Zhao, Haifeng
Time-resolved X-ray absorption spectroscopy (TR-XAS), based on the laser-pump/X-ray-probe method, is powerful in capturing the change of the geometrical and electronic structure of the absorbing atom upon excitation. TR-XAS data analysis is generally performed on the laser-on minus laser-off difference spectrum. Here, a new analysis scheme is presented for the TR-XAS difference fitting in both the extended X-ray absorption fine-structure (EXAFS) and the X-ray absorption near-edge structure (XANES) regions. R-space EXAFS difference fitting could quickly provide the main quantitative structure change of the first shell. The XANES fitting part introduces a global non-derivative optimization algorithm and optimizes the local structure change in a flexible way where both the core XAS calculation package and the search method in the fitting shell are changeable. The scheme was applied to the TR-XAS difference analysis of the Fe(phen)3 spin crossover complex and yielded reliable distance change and excitation population.
NASA Astrophysics Data System (ADS)
Tian, Jiyang; Liu, Jia; Wang, Jianhua; Li, Chuanzhe; Yu, Fuliang; Chu, Zhigang
2017-07-01
Mesoscale Numerical Weather Prediction systems can provide rainfall products at high resolutions in space and time, playing an increasingly important role in water management and flood forecasting. The Weather Research and Forecasting (WRF) model is one of the most popular mesoscale systems and has been extensively used in research and practice. However, for hydrologists, an unsolved question must be addressed before each model application in a different target area: how should the most appropriate combinations of physical parameterisations be selected from the vast WRF library to provide the best downscaled rainfall? In this study, the WRF model was applied with 12 designed schemes combining different physical parameterisations, including microphysics, radiation, planetary boundary layer (PBL), land-surface model (LSM) and cumulus parameterisations. The selected study areas are two semi-humid and semi-arid catchments located in the Daqinghe River basin, Northern China. The performance of WRF with the different parameterisation schemes is tested by simulating eight typical 24-h storm events with different evenness in space and time. In addition to the cumulative rainfall amount, the spatial and temporal patterns of the simulated rainfall are evaluated based on a two-dimensional composed verification statistic. Among the 12 parameterisation schemes, Scheme 4 outperforms the others with the best average performance in simulating rainfall totals and temporal patterns; in contrast, Scheme 6 is generally a good choice for simulating spatial rainfall distributions. Regarding the individual parameterisations, Single-Moment 6 (WSM6), Yonsei University (YSU), and Kain-Fritsch (KF) or Grell-Devenyi (GD) are better choices for the microphysics, PBL and cumulus parameterisations, respectively, in the study area.
These findings provide helpful information for WRF rainfall downscaling in semi-humid and semi-arid areas. The methodologies to design and test the combination schemes of parameterisations can also be regarded as a reference for generating ensembles in numerical rainfall predictions using the WRF model.
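The abstract does not spell out the two-dimensional composed verification statistic, so the following is only an illustrative stand-in: a composite score mixing the relative bias of storm totals with pattern correlation, which is enough to rank a well-timed simulation above a mistimed one:

```python
import numpy as np

def composite_score(sim, obs):
    """Toy composite verification statistic: relative bias of the event total
    plus (1 - correlation) of the rainfall pattern; lower is better. This is
    an illustrative stand-in, not the statistic used in the study."""
    bias = abs(sim.sum() - obs.sum()) / obs.sum()
    r = np.corrcoef(sim.ravel(), obs.ravel())[0, 1]
    return bias + (1.0 - r)

obs = np.array([0.0, 2.0, 8.0, 5.0, 1.0])    # observed hourly rainfall (mm), invented
good = np.array([0.0, 3.0, 7.0, 5.0, 1.0])   # right timing, small errors
bad = np.array([5.0, 1.0, 0.0, 2.0, 8.0])    # same total, wrong timing
```

Both simulations have zero total bias, but only the correlation term separates them, which is why a composed statistic is needed beyond cumulative amounts.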
Cost-Efficient and Multi-Functional Secure Aggregation in Large Scale Distributed Application
Zhang, Ping; Li, Wenjun; Sun, Hua
2016-01-01
Secure aggregation is an essential component of modern distributed applications and data mining platforms. Aggregated statistical results are typically adopted in constructing a data cube for data analysis at multiple abstraction levels in data warehouse platforms. Generating different types of statistical results efficiently at the same time (referred to as multi-functional support) is a fundamental requirement in practice. However, most of the existing schemes support a very limited number of statistics. Securely obtaining typical statistical results simultaneously in a distributed system, without recovering the original data, is still an open problem. In this paper, we present SEDAR, a SEcure Data Aggregation scheme under the Range segmentation model. The range segmentation model is proposed to reduce the communication cost by capturing the data characteristics, with a different aggregation strategy for each range. For raw data in the dominant range, SEDAR encodes them into well-defined vectors that are value-preserving and order-preserving, and thus provides the basis for multi-functional aggregation. A homomorphic encryption scheme is used to achieve data privacy. We also present two enhanced versions: a Random-based SEDAR (REDAR) and a Compression-based SEDAR (CEDAR). Both can significantly reduce communication cost, at the expense of lower security and lower accuracy, respectively. Experimental evaluations, based on six different scenes of real data, show that all of them achieve excellent performance in cost and accuracy. PMID:27551747
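The idea of encoding raw readings into vectors whose component-wise sum still reveals several statistics can be sketched with a bucket-count (histogram) coding; the names and the coding itself are ours, and SEDAR's value- and order-preserving code is more refined:

```python
def encode(value, lo, hi, bins):
    """Encode one reading as a bucket indicator vector (simplified stand-in
    for SEDAR's value- and order-preserving coding)."""
    vec = [0] * bins
    idx = min(bins - 1, int((value - lo) * bins / (hi - lo)))
    vec[idx] = 1
    return vec

def aggregate(vectors):
    """Component-wise sum; under additively homomorphic encryption this is
    the only operation the untrusted aggregator needs to perform."""
    return [sum(col) for col in zip(*vectors)]

def decode_stats(agg, lo, hi, bins):
    """Recover count, (approximate) sum, min and max from a single aggregate."""
    width = (hi - lo) / bins
    centers = [lo + (i + 0.5) * width for i in range(bins)]
    count = sum(agg)
    total = sum(c * n for c, n in zip(centers, agg))
    mn = next(c for c, n in zip(centers, agg) if n)
    mx = next(c for c, n in reversed(list(zip(centers, agg))) if n)
    return count, total, mn, mx

agg = aggregate([encode(v, 0, 10, 10) for v in [1, 2, 9]])
print(decode_stats(agg, 0, 10, 10))   # (3, 13.5, 1.5, 9.5)
```

One aggregated vector simultaneously yields count, an approximate sum, and order statistics, which is the "multi-functional" property, at the cost of quantization error set by the bucket width.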
Systematic comparison of jet energy-loss schemes in a realistic hydrodynamic medium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bass, Steffen A.; Majumder, Abhijit; Gale, Charles
2009-02-15
We perform a systematic comparison of three different jet energy-loss approaches. These include the Armesto-Salgado-Wiedemann scheme based on the approach of Baier-Dokshitzer-Mueller-Peigne-Schiff and Zakharov (BDMPS-Z/ASW), the higher twist (HT) approach and a scheme based on the Arnold-Moore-Yaffe (AMY) approach. In this comparison, an identical medium evolution is utilized for all three approaches: this entails not only the use of the same realistic three-dimensional relativistic fluid dynamics (RFD) simulation, but also the use of identical initial parton-distribution functions and final fragmentation functions. We are, thus, in a unique position not only to isolate fundamental differences between the various approaches but also to make rigorous calculations for different experimental measurements using state-of-the-art components. All three approaches are reduced to versions containing only one free tunable parameter, which is then related to the well-known transport coefficient q̂. We find that the parameters of all three calculations can be adjusted to provide a good description of inclusive data on R_AA vs transverse momentum. However, we do observe slight differences in their predictions for the centrality and azimuthal angular dependence of R_AA vs p_T. We also note that the values of the transport coefficient q̂ required by the three approaches to describe the data differ significantly.
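The observable being compared, the nuclear modification factor, is the AA yield scaled by the number of binary collisions times the pp yield; a one-line sketch with invented numbers:

```python
def r_aa(yield_aa, yield_pp, n_coll):
    """Nuclear modification factor R_AA = (AA yield) / (N_coll * pp yield).
    R_AA < 1 indicates suppression, e.g. from partonic energy loss."""
    return yield_aa / (n_coll * yield_pp)

# Invented yields: a measured AA yield five times below binary-collision scaling
print(r_aa(40.0, 10.0, 20.0))   # 0.2
```

The energy-loss schemes differ in how they predict this ratio's dependence on centrality and azimuthal angle, even when each is tuned to the same inclusive R_AA.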
3D deblending of simultaneous source data based on 3D multi-scale shaping operator
NASA Astrophysics Data System (ADS)
Zu, Shaohuan; Zhou, Hui; Mao, Weijian; Gong, Fei; Huang, Weilin
2018-04-01
We propose an iterative three-dimensional (3D) deblending scheme using a 3D multi-scale shaping operator to separate 3D simultaneous source data. The proposed scheme is based on the property that the signal is coherent, whereas the interference is incoherent, in some domains, e.g., the common receiver domain and the common midpoint domain. In a two-dimensional (2D) blended record, the coherency difference between signal and interference exists in only one spatial direction. Compared with 2D deblending, 3D deblending can take more sparsity constraints into consideration to obtain better performance; e.g., in a 3D common receiver gather, the coherency difference exists in two spatial directions. Furthermore, owing to their different levels of coherency, signal and interference are distributed across different scales of the curvelet domain. In both 2D and 3D blended records, most of the coherent signal is located in the coarse-scale curvelet domain, while most of the incoherent interference is distributed in the fine-scale curvelet domain. Since the scale difference is larger in 3D deblending, we apply the multi-scale shaping scheme to further improve the 3D deblending performance. We evaluate the performance of 3D and 2D deblending with the multi-scale and global shaping operators, respectively. One synthetic and one field data example demonstrate the advantage of 3D deblending with the 3D multi-scale shaping operator.
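The shaping-regularized iteration can be sketched in 1-D, with hard thresholding of Fourier coefficients standing in for the multi-scale curvelet shaping operator (sizes, amplitudes and the relaxation factor are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n)        # coherent event: energy in few Fourier bins
interference = np.zeros(n)
interference[rng.integers(0, n, 8)] = 3.0     # incoherent blending noise: isolated spikes
blended = signal + interference

def shape(x, frac=0.02):
    """Shaping operator: keep only the strongest Fourier coefficients
    (a 1-D stand-in for the paper's multi-scale curvelet shaping)."""
    X = np.fft.fft(x)
    k = max(1, int(frac * len(x)))
    thr = np.sort(np.abs(X))[-k]
    X[np.abs(X) < thr] = 0.0
    return np.fft.ifft(X).real

est = np.zeros(n)
for _ in range(30):
    est = shape(est + 0.5 * (blended - est))  # d_{k+1} = S[d_k + lambda * (d_obs - d_k)]

err0 = np.linalg.norm(blended - signal)       # error before deblending
err = np.linalg.norm(est - signal)            # error after iterative shaping
```

Because the coherent signal concentrates in few transform coefficients while the spiky interference spreads across many, the thresholded iteration retains the signal and rejects most of the interference, which is the mechanism the curvelet-domain scheme exploits at multiple scales.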
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
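The GEBE idea of statically arranging elements into groups with no inter-element coupling can be sketched with a greedy pass over the element connectivity (the grouping heuristic is ours, for illustration):

```python
def gebe_groups(elements):
    """Greedy static grouping for grouped element-by-element (GEBE) processing:
    elements within one group share no node, so their contributions can be
    accumulated in parallel without write conflicts."""
    groups = []  # list of (element list, set of nodes used by the group)
    for elem in elements:
        nodes = set(elem)
        for members, used in groups:
            if not nodes & used:          # no inter-element coupling within the group
                members.append(elem)
                used |= nodes
                break
        else:
            groups.append(([elem], set(nodes)))
    return [members for members, _ in groups]

# Four 1-D elements sharing end nodes split into two conflict-free groups
print(gebe_groups([(0, 1), (1, 2), (2, 3), (3, 4)]))  # [[(0, 1), (2, 3)], [(1, 2), (3, 4)]]
```

Each group can then be processed as one fully parallel sweep, which is the parallelization the GEBE formulation makes explicit.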
The controlled growth method - A tool for structural optimization
NASA Technical Reports Server (NTRS)
Hajela, P.; Sobieszczanski-Sobieski, J.
1981-01-01
An adaptive design variable linking scheme in an NLP-based optimization algorithm is proposed and evaluated for feasibility of application. The present scheme, based on an intuitive effectiveness measure for each variable, differs from existing methodology in that a single dominant variable controls the growth of all others in a prescribed optimization cycle. The proposed method is implemented for truss assemblies and a wing box structure under stress, displacement and frequency constraints. A substantial reduction in computational time, even more so for structures under multiple load conditions, coupled with a minimal accompanying loss in accuracy, validates the algorithm.
Visual Privacy by Context: Proposal and Evaluation of a Level-Based Visualisation Scheme
Padilla-López, José Ramón; Chaaraoui, Alexandros Andre; Gu, Feng; Flórez-Revuelta, Francisco
2015-01-01
Privacy in image and video data has become an important subject since cameras are being installed in an increasing number of public and private spaces. Specifically, in assisted living, intelligent monitoring based on computer vision can allow one to provide risk detection and support services that increase people's autonomy at home. In the present work, a level-based visualisation scheme is proposed to provide visual privacy when human intervention is necessary, such as at telerehabilitation and safety assessment applications. Visualisation levels are dynamically selected based on the previously modelled context. In this way, different levels of protection can be provided, maintaining the necessary intelligibility required for the applications. Furthermore, a case study of a living room, where a top-view camera is installed, is presented. Finally, the performed survey-based evaluation indicates the degree of protection provided by the different visualisation models, as well as the personal privacy preferences and valuations of the users. PMID:26053746
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Chunguang; Zheng, Chuantao; Dong, Lei
A ppb-level mid-infrared ethane (C2H6) sensor was developed using a continuous-wave, thermoelectrically cooled, distributed feedback interband cascade laser emitting at 3.34 μm and a miniature dense patterned multipass gas cell with a 54.6-m optical path length. The performance of the sensor was investigated using two different techniques based on the tunable interband cascade laser: direct absorption spectroscopy (DAS) and second-harmonic wavelength modulation spectroscopy (2f-WMS). Three measurement schemes, DAS, WMS and quasi-simultaneous DAS and WMS, were realized based on the same optical sensor core. A detection limit of ~7.92 ppbv with a precision of ±30 ppbv for the separate DAS scheme with an averaging time of 1 s and a detection limit of ~1.19 ppbv with a precision of about ±4 ppbv for the separate WMS scheme with a 4-s averaging time were achieved. An Allan–Werle variance analysis indicated that the precisions can be further improved to 777 pptv @ 166 s for the separate DAS scheme and 269 pptv @ 108 s for the WMS scheme, respectively. For the quasi-simultaneous DAS and WMS scheme, both the 2f signal and the direct absorption signal were simultaneously extracted using a LabVIEW platform, and four C2H6 samples (0, 30, 60 and 90 ppbv with nitrogen as the balance gas) were used as the target gases to assess the sensor performance. A detailed comparison of the three measurement schemes is reported. Atmospheric C2H6 measurements on the Rice University campus and a field test at a compressed natural gas station in Houston, TX, were conducted to evaluate the performance of the sensor system as a robust and reliable field-deployable sensor system.
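The Allan–Werle analysis behind the quoted precisions can be sketched for a white-noise-limited sensor, where the Allan deviation falls as the square root of the averaging time (noise level and series length are invented):

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation for an averaging factor of m samples:
    sqrt(0.5 * mean of squared differences of consecutive m-sample means)."""
    k = len(y) // m
    means = y[:k * m].reshape(k, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

rng = np.random.default_rng(2)
# Simulated white-noise-limited concentration readings (ppbv), 1-s samples
series = 30.0 + 4.0 * rng.standard_normal(4000)
adev1 = allan_deviation(series, 1)      # precision at 1-s averaging
adev100 = allan_deviation(series, 100)  # precision after 100-s averaging
```

In a real sensor the curve eventually turns up as drift dominates, which is why the reported optima sit at finite averaging times (166 s for DAS, 108 s for WMS).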
An efficient and provable secure revocable identity-based encryption scheme.
Wang, Changji; Li, Yuan; Xia, Xiaonan; Zheng, Kangjia
2014-01-01
Revocation functionality is necessary and crucial to identity-based cryptosystems. Revocable identity-based encryption (RIBE) has attracted considerable attention in recent years; many RIBE schemes have been proposed in the literature, but most have been shown to be either insecure or inefficient. In this paper, we propose a new scalable RIBE scheme with decryption key exposure resilience by combining Lewko and Waters' identity-based encryption scheme with the complete subtree method, and we prove our RIBE scheme to be semantically secure using the dual system encryption methodology. Compared to existing scalable and semantically secure RIBE schemes, our proposed scheme is more efficient in terms of ciphertext size, public parameter size and decryption cost, at the price of a slightly looser security reduction. To the best of our knowledge, this is the first construction of a scalable and semantically secure RIBE scheme with constant-size public system parameters.
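The complete subtree method used above for revocation can be illustrated with a small sketch: users sit at the leaves of a binary tree, and the key-update cover is the minimal set of subtree roots containing every non-revoked leaf and no revoked leaf. The heap indexing and function names are ours, not from the paper:

```python
def cs_cover(num_leaves, revoked):
    """Complete-subtree cover: minimal set of subtree roots that together
    contain every non-revoked leaf and no revoked leaf.

    Nodes are heap-indexed: root = 1, children of i are 2i and 2i+1,
    leaves are num_leaves .. 2*num_leaves - 1 (num_leaves a power of two).
    """
    if not revoked:
        return [1]  # nobody revoked: the whole tree is one subtree
    # Steiner tree: all ancestors of revoked leaves (including the leaves).
    steiner = set()
    for leaf in revoked:
        node = leaf
        while node >= 1:
            steiner.add(node)
            node //= 2
    # Cover = children of Steiner nodes that fall outside the Steiner tree.
    cover = []
    for node in steiner:
        for child in (2 * node, 2 * node + 1):
            if child < 2 * num_leaves and child not in steiner:
                cover.append(child)
    return sorted(cover)
```

For 8 users with leaf 8 revoked, the cover is the three subtrees rooted at nodes 3, 5 and 9, which together span leaves 9 through 15; key updates are published only for cover nodes.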
Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling
2016-01-01
Ciphertext-policy attribute-based encryption (CP-ABE) addresses the problem of access control, while keyword-based searchable encryption addresses the problem of quickly finding the files a user is interested in within cloud storage. Designing a scheme that is both searchable and attribute-based is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme, the attribute revocation and grant processes are delegated to a proxy server, and multiple attributes can be revoked and granted simultaneously. Moreover, keyword search is supported. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it is also semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model.
Numerical solution of modified differential equations based on symmetry preservation.
Ozbenli, Ersin; Vedula, Prakash
2017-12-01
In this paper, we propose a method to construct invariant finite-difference schemes for solution of partial differential equations (PDEs) via consideration of modified forms of the underlying PDEs. The invariant schemes, which preserve Lie symmetries, are obtained based on the method of equivariant moving frames. While it is often difficult to construct invariant numerical schemes for PDEs due to complicated symmetry groups associated with cumbersome discrete variable transformations, we note that symmetries associated with more convenient transformations can often be obtained by appropriately modifying the original PDEs. In some cases, modifications to the original PDEs are also found to be useful in order to avoid trivial solutions that might arise from particular selections of moving frames. In our proposed method, modified forms of PDEs can be obtained either by addition of perturbation terms to the original PDEs or through defect correction procedures. These additional terms, whose primary purpose is to enable symmetries with more convenient transformations, are then removed from the system by considering moving frames for which these specific terms go to zero. Further, we explore selection of appropriate moving frames that result in improvement in accuracy of invariant numerical schemes based on modified PDEs. The proposed method is tested using the linear advection equation (in one and two dimensions) and the inviscid Burgers' equation. Results obtained for these test cases indicate that numerical schemes derived from the proposed method perform significantly better than existing schemes not only by virtue of improvement in numerical accuracy but also due to preservation of qualitative properties or symmetries of the underlying differential equations.
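As context for the linear advection test case mentioned above, here is a minimal non-invariant baseline: a first-order upwind scheme for the 1D equation u_t + a u_x = 0 with periodic boundaries. The paper's symmetry-preserving schemes are not reproduced here; this sketch only fixes the reference problem they are compared against:

```python
def upwind_step(u, c):
    """One step of first-order upwind for u_t + a*u_x = 0 (a > 0) with
    periodic boundary conditions; c = a*dt/dx is the CFL number (c <= 1)."""
    n = len(u)
    # Backward (upwind) difference: information travels from the left.
    return [u[i] - c * (u[i] - u[(i - 1) % n]) for i in range(n)]

# At c = 1 the scheme is exact: the profile shifts one cell per step.
u0 = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
u1 = upwind_step(u0, 1.0)
```

For c < 1 the same scheme introduces numerical diffusion, which is exactly the kind of qualitative defect the invariant schemes in the abstract aim to reduce.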
Tehranchi, Amirhossein; Morandotti, Roberto; Kashyap, Raman
2011-11-07
High-efficiency ultra-broadband wavelength converters based on double-pass quasi-phase-matched cascaded sum and difference frequency generation, including engineered chirped gratings in lossy lithium niobate waveguides, are numerically investigated and compared to their single-pass counterparts, assuming a large twin-pump wavelength difference of 75 nm. Instead of uniform gratings, few-section chirped gratings of the same length, in which the period changes by a small constant amount between uniform sections, are proposed to flatten the response and increase the mean efficiency. The common critical period shift and the minimum number of sections are determined for both the single-pass and double-pass schemes; for the latter, the efficiency is remarkably higher in a low-loss waveguide. It is also verified that, for the same waveguide length and power, the efficiency enhancement expected from using the double-pass scheme instead of the single-pass one is lost once the waveguide loss rises above a certain value. For the double-pass scheme, criteria are presented for the design of the low-loss waveguide length and the assignment of power to the pumps to achieve the desired efficiency, bandwidth and ripple in optimum 3-section chirped-gratings-based devices. Efficient conversion with flattop bandwidths > 84 nm for lengths < 3 cm can be obtained.
Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Wang, Peng
2017-01-01
In emergency management of chemical contingency spills, the efficiency of emergency rescue depends strongly on a reasonable assignment of the available emergency materials to the related risk sources. In this study, an emergency material scheduling model (EMSM) with time-effective and cost-effective objectives is developed to coordinate both allocation and scheduling of the emergency materials. An improved genetic algorithm (IGA), which includes a revision operation for the EMSM, is proposed to identify emergency material scheduling schemes. Scenario analysis based on the analytic hierarchy process (AHP) is then used to evaluate the optimal emergency rescue scheme under different pollution conditions associated with different threat degrees. The whole framework is applied to a computational experiment based on the south-to-north water transfer project in China. The results demonstrate that the developed method not only guarantees that the emergency rescue satisfies the requirements of chemical contingency spills but also helps decision makers identify appropriate emergency material scheduling schemes that balance the time-effective and cost-effective objectives.
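The AHP step used above for scenario evaluation can be sketched with the common column-normalization approximation of the priority vector; the two-objective comparison matrix below is hypothetical, not taken from the study:

```python
def ahp_weights(pairwise):
    """Approximate AHP priority vector: normalize each column of the
    pairwise-comparison matrix, then average across each row."""
    n = len(pairwise)
    col_sums = [sum(pairwise[r][c] for r in range(n)) for c in range(n)]
    norm = [[pairwise[r][c] / col_sums[c] for c in range(n)] for r in range(n)]
    return [sum(norm[r]) / n for r in range(n)]

# Hypothetical judgment: time-effectiveness rated three times as
# important as cost-effectiveness (a standard 1-9 Saaty-scale entry).
w = ahp_weights([[1.0, 3.0],
                 [1.0 / 3.0, 1.0]])
```

The resulting weights can then score candidate scheduling schemes under each threat-degree scenario.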
NASA Astrophysics Data System (ADS)
Liu, Changying; Wu, Xinyuan
2017-07-01
In this paper we explore arbitrarily high-order Lagrange collocation-type time-stepping schemes for effectively solving high-dimensional nonlinear Klein-Gordon equations with different boundary conditions. We begin with one-dimensional periodic boundary problems and first formulate an abstract ordinary differential equation (ODE) on a suitable infinite-dimensional function space based on the operator spectrum theory. We then introduce an operator-variation-of-constants formula which is essential for the derivation of our arbitrarily high-order Lagrange collocation-type time-stepping schemes for the nonlinear abstract ODE. The nonlinear stability and convergence are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix under some suitable smoothness assumptions. With regard to the two-dimensional Dirichlet or Neumann boundary problems, our new time-stepping schemes coupled with discrete Fast Sine/Cosine Transformations can be applied to simulate the two-dimensional nonlinear Klein-Gordon equations effectively. All essential features of the methodology are present in the one-dimensional and two-dimensional cases, although the schemes analysed extend equally well to higher-dimensional cases. The numerical simulation is implemented and the numerical results clearly demonstrate the advantage and effectiveness of our new schemes in comparison with the existing numerical methods for solving nonlinear Klein-Gordon equations in the literature.
Cao, Cong; Duan, Yu-Wen; Chen, Xi; Zhang, Ru; Wang, Tie-Jun; Wang, Chuan
2017-07-24
A quantum router is a key element needed for the construction of future complex quantum networks. However, quantum routing with photons, and its inverse, quantum decoupling, are difficult to implement because photons do not interact, or interact only very weakly, in nonlinear media. In this paper, we investigate the possibility of implementing photonic quantum routing based on effects in cavity quantum electrodynamics, and present a scheme for single-photon quantum routing controlled by another photon using a hybrid system consisting of a single nitrogen-vacancy (NV) center coupled with a whispering-gallery-mode resonator-waveguide structure. Different from the cases in which classical information is used to control the path of quantum signals, both the control and signal photons are quantum in our implementation. Compared with the probabilistic quantum routing protocols based on linear optics, our scheme is deterministic and also scalable to multiple photons. We also present a scheme for single-photon quantum decoupling from an initial state with polarization and spatial-mode encoding, which implements an inverse operation to the quantum routing. We discuss the feasibility of our schemes by considering current or near-future techniques, and show that both schemes can operate effectively in the bad-cavity regime. We believe that the schemes could be key building blocks for future complex quantum networks and large-scale quantum information processing.
Wang, Weiguo; Liu, Xiaohong; Xie, Shaocheng; ...
2009-07-23
Here, cloud properties have been simulated with a new double-moment microphysics scheme under the framework of the single-column version of NCAR Community Atmospheric Model version 3 (CAM3). For comparison, the same simulation was made with the standard single-moment microphysics scheme of CAM3. Results from both simulations compared favorably with observations during the Tropical Warm Pool–International Cloud Experiment by the U.S. Department of Energy Atmospheric Radiation Measurement Program in terms of the temporal variation and vertical distribution of cloud fraction and cloud condensate. Major differences between the two simulations are in the magnitude and distribution of ice water content within the mixed-phase cloud during the monsoon period, though the total frozen water (snow plus ice) contents are similar. The ice mass content in the mixed-phase cloud from the new scheme is larger than that from the standard scheme, and ice water content extends 2 km further downward, which is in better agreement with observations. The dependence of the frozen water mass fraction on temperature from the new scheme is also in better agreement with available observations. Outgoing longwave radiation (OLR) at the top of the atmosphere (TOA) from the simulation with the new scheme is, in general, larger than that with the standard scheme, while the surface downward longwave radiation is similar. Sensitivity tests suggest that different treatments of the ice crystal effective radius contribute significantly to the difference in the calculations of TOA OLR, in addition to cloud water path. Numerical experiments show that cloud properties in the new scheme can respond reasonably to changes in the concentration of aerosols and emphasize the importance of correctly simulating aerosol effects in climate models for aerosol-cloud interactions. Further evaluation, especially for ice cloud properties based on in-situ data, is needed.
A provably-secure ECC-based authentication scheme for wireless sensor networks.
Nam, Junghyun; Kim, Moonseong; Paik, Juryon; Lee, Youngsook; Won, Dongho
2014-11-06
A smart-card-based user authentication scheme for wireless sensor networks (in short, a SUA-WSN scheme) is designed to restrict access to the sensor data only to users who are in possession of both a smart card and the corresponding password. While a significant number of SUA-WSN schemes have been suggested in recent years, their intended security properties lack formal definitions and proofs in a widely-accepted model. One consequence is that SUA-WSN schemes insecure against various attacks have proliferated. In this paper, we devise a security model for the analysis of SUA-WSN schemes by extending the widely-accepted model of Bellare, Pointcheval and Rogaway (2000). Our model provides formal definitions of authenticated key exchange and user anonymity while capturing side-channel attacks, as well as other common attacks. We also propose a new SUA-WSN scheme based on elliptic curve cryptography (ECC), and prove its security properties in our extended model. To the best of our knowledge, our proposed scheme is the first SUA-WSN scheme that provably achieves both authenticated key exchange and user anonymity. Our scheme is also computationally competitive with other ECC-based (non-provably secure) schemes.
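As background to the elliptic-curve operations underlying such ECC-based schemes, here is a toy sketch of scalar multiplication and a Diffie-Hellman-style shared secret over a tiny textbook curve. This is illustration only; it is unrelated to the paper's concrete protocol and far too small for real security:

```python
# Toy curve y^2 = x^3 + 2x + 2 (mod 17) with generator G = (5, 1).
P_MOD = 17
A = 2
INF = None  # point at infinity

def ec_add(p, q):
    """Group law on the curve (affine coordinates)."""
    if p is INF:
        return q
    if q is INF:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF  # p + (-p)
    if p == q:  # doubling: slope of the tangent line
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:       # addition: slope of the chord
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, p):
    """Double-and-add scalar multiplication k*p."""
    acc = INF
    while k:
        if k & 1:
            acc = ec_add(acc, p)
        p = ec_add(p, p)
        k >>= 1
    return acc

G = (5, 1)
# Both parties derive the same shared point: a*(b*G) == b*(a*G).
shared_a = ec_mul(3, ec_mul(7, G))
shared_b = ec_mul(7, ec_mul(3, G))
```

Real SUA-WSN schemes use standardized curves with ~256-bit fields; the three-argument `pow` modular inverse requires Python 3.8+.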
NASA Astrophysics Data System (ADS)
Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars
2018-02-01
The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. 
The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s, or the midpoint scheme with a time step of 100 s, for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
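The accuracy gap between scheme orders discussed above can be illustrated on a scalar test ODE. The sketch below compares the midpoint (second-order) and classical fourth-order Runge-Kutta steps; it does not reproduce the MPTRAC trajectory code:

```python
import math

def integrate(f, y0, t_end, h, step):
    """March y' = f(t, y) from t=0 to t_end with fixed step h."""
    t, y = 0.0, y0
    while t < t_end - 1e-12:
        y = step(f, t, y, h)
        t += h
    return y

def midpoint(f, t, y, h):  # second-order Runge-Kutta (midpoint rule)
    return y + h * f(t + h / 2, y + h / 2 * f(t, y))

def rk4(f, t, y, h):       # classical fourth-order Runge-Kutta
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Test problem y' = y, y(0) = 1, exact solution e^t at t = 1.
f = lambda t, y: y
err_mid = abs(integrate(f, 1.0, 1.0, 0.01, midpoint) - math.e)
err_rk4 = abs(integrate(f, 1.0, 1.0, 0.01, rk4) - math.e)
```

The fourth-order step costs four function evaluations against two for the midpoint rule, which is exactly the accuracy-versus-cost trade-off behind the recommended time steps above.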
NASA Astrophysics Data System (ADS)
Kadhem, Hasan; Amagasa, Toshiyuki; Kitagawa, Hiroyuki
Encryption can provide strong security for sensitive data against inside and outside attacks. This is especially true in the “Database as Service” model, where confidentiality and privacy are important issues for the client. In fact, existing encryption approaches are vulnerable to a statistical attack because each value is encrypted to another fixed value. This paper presents a novel database encryption scheme called MV-OPES (Multivalued Order-Preserving Encryption Scheme), which allows privacy-preserving queries over encrypted databases with an improved security level. Our idea is to encrypt a value to different multiple values to prevent statistical attacks. At the same time, MV-OPES preserves the order of the integer values to allow comparison operations to be directly applied on encrypted data. Using calculated distance (range), we propose a novel method that allows a join query between relations based on inequality over encrypted values. We also present techniques to offload query execution load to a database server as much as possible, thereby making better use of server resources in a database outsourcing environment. Our scheme can easily be integrated with current database systems as it is designed to work with existing indexing structures. It is robust against statistical attack and the estimation of true values. MV-OPES experiments show that security for sensitive data can be achieved with reasonable overhead, establishing the practicability of the scheme.
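The core idea of mapping one plaintext to many order-preserving ciphertexts can be sketched as a toy interval scheme: each plaintext owns a disjoint ciphertext interval, and every encryption picks a random point inside it. This simplification omits MV-OPES's actual construction and is not secure; all names and parameters below are ours:

```python
import random

def mv_encrypt(value, key_seed, spread=1000):
    """Toy multivalued order-preserving encryption: plaintext v maps to a
    random ciphertext in the disjoint interval [v*spread, (v+1)*spread),
    so repeated encryptions of v differ but order is always preserved.
    A secret shift derived from key_seed hides absolute positions."""
    offset = random.randrange(spread)                    # fresh per call
    shift = random.Random(key_seed).randrange(10 ** 6)   # deterministic secret
    return value * spread + offset + shift

def mv_decrypt(cipher, key_seed, spread=1000):
    shift = random.Random(key_seed).randrange(10 ** 6)
    return (cipher - shift) // spread

c1 = mv_encrypt(10, key_seed=42)
c2 = mv_encrypt(10, key_seed=42)  # usually differs from c1
c3 = mv_encrypt(11, key_seed=42)  # always greater than any ciphertext of 10
```

Because intervals are disjoint and increasing, range predicates and inequality joins can be evaluated directly on ciphertexts, which is what lets the server execute comparison queries without decrypting.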
NASA Astrophysics Data System (ADS)
Liu, Xiyao; Lou, Jieting; Wang, Yifan; Du, Jingyu; Zou, Beiji; Chen, Yan
2018-03-01
Authentication and copyright identification are two critical security issues for medical images. Although zero-watermarking schemes can provide durable, reliable and distortion-free protection for medical images, the existing zero-watermarking schemes for medical images still face two problems. On one hand, they rarely consider distinguishability among medical images, which is critical because different medical images are sometimes similar to each other. On the other hand, their robustness against geometric attacks, such as cropping, rotation and flipping, is insufficient. In this study, a novel discriminative and robust zero-watermarking (DRZW) scheme is proposed to address these two problems. In DRZW, content-based features of medical images are first extracted based on the completed local binary pattern (CLBP) operator to ensure distinguishability and robustness, especially against geometric attacks. Then, master shares and ownership shares are generated from the content-based features and watermark according to (2,2) visual cryptography. Finally, the ownership shares are stored for authentication and copyright identification. For queried medical images, their content-based features are extracted and master shares are generated. Their watermarks for authentication and copyright identification are recovered by stacking the generated master shares and stored ownership shares. 200 medical images of 5 different types are collected as the testing data, and our experimental results demonstrate that DRZW ensures both the accuracy and reliability of authentication and copyright identification. When fixing the false positive rate to 1.00%, the average value of false negative rates by using DRZW is only 1.75% under 20 common attacks with different parameters.
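The master-share/ownership-share step can be illustrated with a simplified XOR variant of (2,2) secret sharing. The paper builds its shares from CLBP features via visual cryptography; the random bit watermark and function names below are illustrative assumptions:

```python
import random

def make_shares(watermark_bits, seed=7):
    """Split a binary watermark into two shares: a random master share and
    an ownership share. Neither share alone reveals the watermark;
    XOR-stacking both recovers it exactly (a simplified XOR variant of
    (2,2) visual cryptography)."""
    rng = random.Random(seed)
    master = [rng.randrange(2) for _ in watermark_bits]        # random master share
    owner = [m ^ b for m, b in zip(master, watermark_bits)]    # ownership share
    return master, owner

def recover(master, owner):
    """Stack the two shares (XOR) to recover the watermark."""
    return [m ^ o for m, o in zip(master, owner)]

wm = [1, 0, 1, 1, 0, 0, 1, 0]
master_share, owner_share = make_shares(wm)
```

In DRZW, the master share would instead be derived from the queried image's content-based features, so recovery succeeds only when the query image matches the registered one.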
A Novel Passive Tracking Scheme Exploiting Geometric and Intercept Theorems
Zhou, Biao; Sun, Chao; Ahn, Deockhyeon; Kim, Youngok
2018-01-01
Passive tracking aims to track targets without assistant devices, that is, device-free targets. Passive tracking based on Radio Frequency (RF) Tomography in wireless sensor networks has recently been addressed as an emerging field. The passive tracking scheme using geometric theorems (GTs) is one of the most popular RF Tomography schemes, because the GT-based method can effectively mitigate the demand for a high density of wireless nodes. In the GT-based tracking scheme, the tracking scenario is considered as a two-dimensional geometric topology, and geometric theorems are then applied to estimate crossing points (CPs) of the device-free target on line-of-sight links (LOSLs), which reveal the target’s trajectory information in a discrete form. In this paper, we review existing GT-based tracking schemes, and then propose a novel passive tracking scheme by exploiting the Intercept Theorem (IT). To create an IT-based CP estimation scheme available in the noisy non-parallel LOSL situation, we develop the equal-ratio traverse (ERT) method. Finally, we analyze properties of three GT-based tracking algorithms, and the performance of these schemes is evaluated experimentally under various trajectories, node densities, and noisy topologies. Analysis of experimental results shows that tracking schemes exploiting geometric theorems can achieve remarkable positioning accuracy even under a rather low density of wireless nodes. Moreover, the proposed IT scheme can provide generally finer tracking accuracy under even lower node density and noisier topologies, in comparison to other schemes.
Rahman, Md Mostafizur; Fattah, Shaikh Anowarul
2017-01-01
In view of the recent increase in brain-computer interface (BCI) based applications, efficient classification of various mental tasks has become increasingly important. Effective classification requires an efficient feature extraction scheme, and the proposed method exploits the interchannel relationship among electroencephalogram (EEG) data for this purpose. The correlation obtained from different combinations of channels is expected to differ between mental tasks, which can be exploited to extract distinctive features. The empirical mode decomposition (EMD) technique is applied to a test EEG signal obtained from a channel, which provides a number of intrinsic mode functions (IMFs), and correlation coefficients are extracted from interchannel IMF data. Simultaneously, different statistical features are obtained from each IMF. Finally, the feature matrix is formed from the interchannel correlation features and intrachannel statistical features of the selected IMFs of the EEG signal. Different kernels of the support vector machine (SVM) classifier are used to carry out the classification task. An EEG dataset containing ten different combinations of five mental tasks is utilized to demonstrate the classification performance, and a very high level of accuracy is achieved by the proposed scheme compared to existing methods.
Optical vector network analyzer based on double-sideband modulation.
Jun, Wen; Wang, Ling; Yang, Chengwu; Li, Ming; Zhu, Ning Hua; Guo, Jinjin; Xiong, Liangming; Li, Wei
2017-11-01
We report an optical vector network analyzer (OVNA) based on double-sideband (DSB) modulation using a dual-parallel Mach-Zehnder modulator. The device under test (DUT) is measured twice with different modulation schemes. By post-processing the measurement results, the response of the DUT can be obtained accurately. Since DSB modulation is used in our approach, the measurement range is doubled compared with conventional single-sideband (SSB) modulation-based OVNA. Moreover, the measurement accuracy is improved by eliminating the even-order sidebands. The key advantage of the proposed scheme is that the measurement of a DUT with bandpass response can also be simply realized, which is a big challenge for the SSB-based OVNA. The proposed method is theoretically and experimentally demonstrated.
Projection methods for incompressible flow problems with WENO finite difference schemes
NASA Astrophysics Data System (ADS)
de Frutos, Javier; John, Volker; Novo, Julia
2016-03-01
Weighted essentially non-oscillatory (WENO) finite difference schemes have been recommended in a competitive study of discretizations for scalar evolutionary convection-diffusion equations [20]. This paper explores the applicability of these schemes for the simulation of incompressible flows. To this end, WENO schemes are used in several non-incremental and incremental projection methods for the incompressible Navier-Stokes equations. Velocity and pressure are discretized on the same grid. A pressure stabilization Petrov-Galerkin (PSPG) type of stabilization is introduced in the incremental schemes to account for the violation of the discrete inf-sup condition. Algorithmic aspects of the proposed schemes are discussed. The schemes are studied on several examples with different features. It is shown that the WENO finite difference idea can be transferred to the simulation of incompressible flows. Some shortcomings of the methods, which are due to the splitting in projection schemes, also become obvious.
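For reference, the fifth-order WENO-JS reconstruction underlying such schemes can be sketched as follows. This is the generic textbook formulation (Jiang-Shu smoothness indicators, ideal weights 1/10, 6/10, 3/10), not the paper's exact discretization:

```python
def weno5(a, b, c, d, e, eps=1e-6):
    """WENO-JS fifth-order reconstruction of the interface value u_{i+1/2}
    from the cell values (a, b, c, d, e) = u_{i-2} .. u_{i+2}."""
    # Candidate third-order reconstructions on the three sub-stencils.
    p0 = (2 * a - 7 * b + 11 * c) / 6.0
    p1 = (-b + 5 * c + 2 * d) / 6.0
    p2 = (2 * c + 5 * d - e) / 6.0
    # Jiang-Shu smoothness indicators: large where the sub-stencil is rough.
    b0 = 13 / 12.0 * (a - 2 * b + c) ** 2 + 0.25 * (a - 4 * b + 3 * c) ** 2
    b1 = 13 / 12.0 * (b - 2 * c + d) ** 2 + 0.25 * (b - d) ** 2
    b2 = 13 / 12.0 * (c - 2 * d + e) ** 2 + 0.25 * (3 * c - 4 * d + e) ** 2
    # Nonlinear weights built from the ideal linear weights (0.1, 0.6, 0.3).
    w0 = 0.1 / (eps + b0) ** 2
    w1 = 0.6 / (eps + b1) ** 2
    w2 = 0.3 / (eps + b2) ** 2
    s = w0 + w1 + w2
    return (w0 * p0 + w1 * p1 + w2 * p2) / s
```

On smooth data the weights approach the ideal values and the stencil is fifth-order accurate; near a discontinuity the rough sub-stencils are down-weighted, which suppresses spurious oscillations.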
An Improved Biometrics-Based Remote User Authentication Scheme with User Anonymity
Khan, Muhammad Khurram; Kumari, Saru
2013-01-01
The authors review the biometrics-based user authentication scheme proposed by An in 2012. The authors show that there exist loopholes in the scheme which are detrimental to its security. Therefore, the authors propose an improved scheme eradicating the flaws of An's scheme. A detailed security analysis of the proposed scheme is then presented, followed by an efficiency comparison. The proposed scheme not only withstands the security problems found in An's scheme but also provides some extra features with the mere addition of only two hash operations. The proposed scheme allows a user to freely change his password and also provides user anonymity with untraceability.
Provably secure identity-based identification and signature schemes from code assumptions
Song, Bo; Zhao, Yiming
2017-01-01
Code-based cryptography is one of the few alternatives believed to remain secure in a post-quantum world. Meanwhile, identity-based identification and signature (IBI/IBS) schemes are two of the most fundamental cryptographic primitives, so several code-based IBI/IBS schemes have been proposed. However, as research on coding theory has deepened, the security reductions and efficiency of such schemes have been invalidated and challenged. In this paper, we construct IBI/IBS schemes from code assumptions that are provably secure against impersonation under active and concurrent attacks, using a provably secure code-based signature technique proposed by Preetha, Vasant and Rangan (the PVR signature) and a security-enhancing Or-proof technique. We also present the parallel-PVR technique to decrease parameter values while maintaining the standard security level. Compared to other code-based IBI/IBS schemes, our schemes not only achieve preferable public parameter size, private key size, communication cost and signature length due to better parameter choices, but are also provably secure.
Crypto-Watermarking of Transmitted Medical Images.
Al-Haj, Ali; Mohammad, Ahmad; Amer, Alaa'
2017-02-01
Telemedicine is a booming healthcare practice that has facilitated the exchange of medical data and expertise between healthcare entities. However, the widespread use of telemedicine applications requires a secured scheme to guarantee confidentiality and verify authenticity and integrity of exchanged medical data. In this paper, we describe a region-based, crypto-watermarking algorithm capable of providing confidentiality, authenticity, and integrity for medical images of different modalities. The proposed algorithm provides authenticity by embedding robust watermarks in images' region of non-interest using SVD in the DWT domain. Integrity is provided in two levels: strict integrity implemented by a cryptographic hash watermark, and content-based integrity implemented by a symmetric encryption-based tamper localization scheme. Confidentiality is achieved as a byproduct of hiding patient's data in the image. Performance of the algorithm was evaluated with respect to imperceptibility, robustness, capacity, and tamper localization, using different medical images. The results showed the effectiveness of the algorithm in providing security for telemedicine applications.
Tong, Yitian; Zhou, Qian; Han, Daming; Li, Baiyu; Xie, Weilin; Liu, Zhangweiyi; Qin, Jie; Wang, Xiaocheng; Dong, Yi; Hu, Weisheng
2016-08-15
A photonics-based scheme is presented for generating wideband, phase-stable chirped microwave signals based on two phase-locked combs with fixed and agile repetition rates. By tuning the difference between the two combs' repetition rates and extracting comb tones of different orders, a wideband linearly frequency-chirped microwave signal with flexible carrier frequency and chirp range is obtained. Owing to the scheme of dual-heterodyne phase transfer and a phase-locked loop, the extrinsic phase drift and noise induced by the separated optical paths are detected and suppressed efficiently. Linearly frequency-chirped microwave signals from 5 to 15 GHz and from 237 to 247 GHz with 30 ms duration are achieved, respectively, corresponding to a time-bandwidth product of 3×10^8. Linearity errors (RMS) of less than 1.3×10^-5 are also obtained.
Rate and power efficient image compressed sensing and transmission
NASA Astrophysics Data System (ADS)
Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan
2016-01-01
This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
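The closed-form bit allocation that the Lagrange-multiplier/KKT analysis yields can be sketched under the textbook high-rate quantizer model, where each sub-band's distortion is var_i * 2^(-2*b_i); this is an illustrative reconstruction under those assumptions (real-valued bit counts, no clipping), not the authors' exact formulation:

```python
import math

def allocate_bits(variances, total_bits):
    """Closed-form Lagrangian bit allocation for the high-rate model
    D_i = var_i * 2**(-2*b_i), minimizing sum(D_i) s.t. sum(b_i) = total_bits.
    Solution: b_i = B/N + 0.5*log2(var_i / geometric_mean(variances)).
    (A practical coder would clip the result to non-negative integers.)"""
    n = len(variances)
    log_gm = sum(math.log2(v) for v in variances) / n  # log2 of geometric mean
    return [total_bits / n + 0.5 * (math.log2(v) - log_gm) for v in variances]

# higher-variance sub-bands receive more bits; totals meet the budget
bits = allocate_bits([16.0, 4.0, 1.0], 12)
```

A hallmark of this optimality condition is that the resulting per-band distortions are equalized, which is easy to verify on the example above.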
Mishra, Dheerendra; Srinivas, Jangirala; Mukhopadhyay, Sourav
2014-10-01
Advances in network technology provide new ways to utilize telecare medicine information systems (TMIS) for patient care. However, TMIS faces various attacks, as its services are provided over public networks. Recently, Jiang et al. proposed a chaotic map-based remote user authentication scheme for TMIS. Their scheme has the merits of low cost and session key agreement based on chaos theory, and it enhances the security of the system by resisting various attacks. In this paper, we analyze the security of Jiang et al.'s scheme and demonstrate that it is vulnerable to a denial of service attack. Moreover, we demonstrate flaws in the password change phase of their scheme. We then propose a new chaotic map-based anonymous user authentication scheme for TMIS that overcomes the weaknesses of Jiang et al.'s scheme while retaining its original merits. We also show that our scheme is secure against various known attacks, including those found in Jiang et al.'s scheme. The proposed scheme is comparable in terms of communication and computational overheads with Jiang et al.'s scheme and other related existing schemes. Moreover, we demonstrate the validity of the proposed scheme through BAN (Burrows, Abadi, and Needham) logic.
Pan, Yao; Chen, Shanquan; Chen, Manli; Zhang, Pei; Long, Qian; Xiang, Li; Lucas, Henry
2016-01-27
Health inequity is an important issue all around the world. The Chinese basic medical security system comprises three major insurance schemes, namely the Urban Employee Basic Medical Insurance (UEBMI), the Urban Resident Basic Medical Insurance (URBMI), and the New Cooperative Medical Scheme (NCMS). Little research has been conducted to look into the disparity in payments among the health insurance schemes in China. In this study, we aimed to evaluate the disparity in reimbursements for tuberculosis (TB) care among the abovementioned health insurance schemes. This study uses a World Health Organization (WHO) framework to analyze the disparities and equity relating to the three dimensions of health insurance: population coverage, the range of services covered, and the extent to which costs are covered. Each of the health insurance scheme's policies were categorized and analyzed. An analysis of the claims database of all hospitalizations reimbursed from 2010 to 2012 in three counties of Yichang city (YC), which included 1506 discharges, was conducted to identify the differences in reimbursement rates and out-of-pocket (OOP) expenses among the health insurance schemes. Tuberculosis patients had various inpatient expenses depending on which scheme they were covered by (TB patients covered by the NCMS have less inpatient expenses than those who were covered by the URBMI, who have less inpatient expenses than those covered by the UEBMI). We found a significant horizontal inequity of healthcare utilization among the lower socioeconomic groups. In terms of financial inequity, TB patients who earned less paid more. The NCMS provides modest financial protection, based on income. Overall, TB patients from lower socioeconomic groups were the most vulnerable. There are large disparities in reimbursement for TB care among the three health insurance schemes and this, in turn, hampers TB control. 
Reducing the gap in health outcomes between the three health insurance schemes in China should be a focus of TB care and control. Achieving equity through integrated policies that avoid discrimination is likely to be effective.
Research to Assembly Scheme for Satellite Deck Based on Robot Flexibility Control Principle
NASA Astrophysics Data System (ADS)
Guo, Tao; Hu, Ruiqin; Xiao, Zhengyi; Zhao, Jingjing; Fang, Zhikai
2018-03-01
Deck assembly is a critical quality control point in the final satellite assembly process; cable extrusion and structural collision problems during assembly directly affect the development quality and schedule of a satellite. To address the problems existing in the deck assembly process, an assembly scheme for satellite decks based on the robot flexibility control principle is proposed in this paper. The scheme is first introduced; key technologies of end force perception and flexible docking control in the scheme are then studied; next, the implementation process of the assembly scheme is described in detail; finally, an actual application case of the assembly scheme is given. Results show that, compared with the traditional assembly scheme, the assembly scheme for satellite decks based on the robot flexibility control principle has clear advantages in work efficiency, reliability, universality, and related aspects.
Local sensory control of a dexterous end effector
NASA Technical Reports Server (NTRS)
Pinto, Victor H.; Everett, Louis J.; Driels, Morris
1990-01-01
A numerical scheme was developed to solve the inverse kinematics for a user-defined manipulator. The scheme was based on a nonlinear least-squares technique that determines the joint variables by minimizing the difference between the target end effector pose and the actual end effector pose. The scheme was adapted to a dexterous hand in which the joints are either prismatic or revolute and the fingers are treated as open kinematic chains. Feasible solutions were obtained using a three-fingered dexterous hand. An algorithm to estimate the position and orientation of a pre-grasped object was also developed. The algorithm was based on triangulation using an ideal sensor and a spherical object model. By choosing the object to be a sphere, only the position of the object frame was important. Based on these simplifications, a minimum of three sensors is needed to find the position of a sphere. A two-dimensional example of determining the position of a circle coordinate frame using a two-fingered dexterous hand was presented.
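A minimal sketch of such a nonlinear least-squares inverse-kinematics solve, for a hypothetical planar finger with two revolute joints; the unit link lengths, the damped Gauss-Newton iteration, and the singularity nudge are illustrative assumptions, not the report's actual manipulator model:

```python
import math

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-revolute-joint finger."""
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return x, y

def ik_least_squares(target, q0=(0.3, 0.3), iters=100, damping=0.5):
    """Damped Gauss-Newton minimization of ||fk(q) - target||^2."""
    q = list(q0)
    for _ in range(iters):
        x, y = fk(q)
        ex, ey = x - target[0], y - target[1]
        s1, c1 = math.sin(q[0]), math.cos(q[0])
        s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
        # analytic Jacobian of the fingertip position
        j00, j01 = -s1 - s12, -s12
        j10, j11 = c1 + c12, c12
        det = j00 * j11 - j01 * j10      # = l1*l2*sin(q1)
        if abs(det) < 1e-9:              # nudge away from the straight-arm singularity
            q[1] += 0.1
            continue
        # Gauss-Newton step: solve J * dq = e, then move against it
        dq0 = (j11 * ex - j01 * ey) / det
        dq1 = (-j10 * ex + j00 * ey) / det
        q[0] -= damping * dq0
        q[1] -= damping * dq1
    return q

q = ik_least_squares((1.2, 0.8))
# the recovered joint angles reproduce the reachable target fingertip position
```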
NASA Astrophysics Data System (ADS)
Lin, Guofen; Hong, Hanshu; Xia, Yunhao; Sun, Zhixin
2017-10-01
Attribute-based encryption (ABE) is an interesting cryptographic technique for flexible access control in cloud data sharing. However, some open challenges hinder its practical application. In previous schemes, all attributes are treated as having the same status, which is not the case in most practical scenarios. Meanwhile, the size of the access policy increases dramatically as its expressiveness grows. In addition, current research rarely notes that mobile front-end devices, such as smartphones, have limited computational performance, while ABE requires a large amount of bilinear pairing computation. In this paper, we propose a key-policy weighted attribute-based encryption scheme without bilinear pairing computation (KP-WABE-WB) for secure access control in cloud data sharing. A simple weighted mechanism is presented to describe the different importance of each attribute. We introduce a novel construction of ABE that executes no bilinear pairing computation. Compared to previous schemes, our scheme achieves better performance in the expressiveness of its access policy and in computational efficiency.
The PLUTO code for astrophysical gasdynamics.
NASA Astrophysics Data System (ADS)
Mignone, A.
Present numerical codes appeal to a consolidated theory based on finite-difference and Godunov-type schemes. In this context we have developed a versatile numerical code, PLUTO, suitable for the solution of high-Mach-number flows in 1, 2, and 3 spatial dimensions and in different systems of coordinates. Different hydrodynamic modules and algorithms may be independently selected to properly describe Newtonian, relativistic, MHD, or relativistic MHD fluids. The modular structure exploits a general framework for integrating a system of conservation laws, built on modern Godunov-type shock-capturing schemes. The code is freely distributed under the GNU public license and is available for download to the astrophysical community at the URL http://plutocode.to.astro.it.
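The conservative Godunov-type updates that such codes build on can be illustrated in their simplest form: a first-order upwind (Godunov) step for linear advection on a periodic grid. This is only an illustrative sketch; PLUTO itself uses far more elaborate reconstructions and Riemann solvers:

```python
def godunov_advection(u, a, dt, dx, steps):
    """First-order Godunov (upwind) update for u_t + a*u_x = 0, a > 0, on a
    periodic grid: u_i <- u_i - (dt/dx) * (F_{i+1/2} - F_{i-1/2}) with the
    upwind interface flux F_{i-1/2} = a * u_{i-1}."""
    n = len(u)
    for _ in range(steps):
        flux = [a * u[(i - 1) % n] for i in range(n)]       # flux[i] = F_{i-1/2}
        u = [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]
    return u

u0 = [0.0] * 10
u0[3] = 1.0
# with CFL = a*dt/dx = 1 the pulse circles the periodic grid exactly once
u1 = godunov_advection(u0, a=1.0, dt=0.1, dx=0.1, steps=10)
```

Because the update is written in flux form, the total of u is conserved to machine precision regardless of the time step.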
A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.
Wang, Shangping; Ye, Jian; Zhang, Yaling
2018-01-01
Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive that is well suited to cloud data storage because of its fine-grained access control. A keyword-based searchable encryption scheme enables users to quickly find interesting data stored in the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, a combination of an attribute-based encryption scheme and a keyword searchable encryption scheme. The new scheme supports user attribute updates: in particular, when a user's attribute needs to be updated, only that user's secret key component related to the attribute is updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource the operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven semantically secure against chosen ciphertext-policy and chosen plaintext attacks in the general bilinear group model, and it is also proven semantically secure against chosen keyword attacks under the bilinear Diffie-Hellman (BDH) assumption.
Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun
2016-01-01
In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in such networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool that can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential-arithmetic-based SGKD (E-SGKD) schemes reduce the storage overhead to a constant and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism for achieving E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD; it has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency, and satisfactory security. Simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show an application of our proposed scheme.
On central-difference and upwind schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1990-01-01
A class of numerical dissipation models for central-difference schemes constructed with second- and fourth-difference terms is considered. The notion of matrix dissipation associated with upwind schemes is used to establish improved shock capturing capability for these models. In addition, conditions are given that guarantee that such dissipation models produce a Total Variation Diminishing (TVD) scheme. Appropriate switches for this type of model to ensure satisfaction of the TVD property are presented. Significant improvements in the accuracy of a central-difference scheme are demonstrated by computing both inviscid and viscous transonic airfoil flows.
An automated multi-scale network-based scheme for detection and location of seismic sources
NASA Astrophysics Data System (ADS)
Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.
2017-12-01
We present a recently developed method, BackTrackBB (Poiata et al. 2016), that allows imaging energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale, frequency-selective coherence in the wave field recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the non-stationary time series by means of higher-order statistics or energy envelope characteristic functions. This signal processing is designed to detect signal transients in time - of different scales and a priori unknown predominant frequency - potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and robustness of the detection and location step. The initial detection and location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme, exploiting the 3-component records, makes use of P- and S-phase characteristic functions extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one-year-long precursory phase of the 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.
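The core backprojection idea, stacking station-pair time-delay misfits over a grid of candidate sources so that the unknown origin time cancels, can be sketched as follows. This is a toy 2D grid search with a uniform velocity and a simple misfit-to-score mapping, all illustrative stand-ins for BackTrackBB's actual likelihood functions:

```python
import math

def locate(stations, arrivals, v, grid):
    """Grid-search backprojection sketch: station-pair delays cancel the
    unknown origin time; the per-pair score term is an illustrative
    stand-in for a time-delay likelihood function."""
    def tt(src, sta):
        return math.dist(src, sta) / v   # travel time at uniform speed v
    best, best_score = None, -1.0
    for g in grid:
        score = 0.0
        for i in range(len(stations)):
            for j in range(i + 1, len(stations)):
                dt_obs = arrivals[i] - arrivals[j]
                dt_pred = tt(g, stations[i]) - tt(g, stations[j])
                score += 1.0 / (1.0 + abs(dt_obs - dt_pred))
        if score > best_score:
            best, best_score = g, score
    return best

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
source = (3.0, 7.0)
arrivals = [math.dist(source, s) / 5.0 for s in stations]   # synthetic picks
grid = [(float(x), float(y)) for x in range(11) for y in range(11)]
best = locate(stations, arrivals, 5.0, grid)
# the stacked imaging function peaks at the true epicenter
```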
Teacher argumentation in the secondary science classroom: Images of two modes of scientific inquiry
NASA Astrophysics Data System (ADS)
Gray, Ron E.
The purpose of this exploratory study was to examine scientific arguments constructed by secondary science teachers during instruction. The analysis focused on how arguments constructed by teachers differed based on the mode of inquiry underlying the topic. Specifically, how did the structure and content of arguments differ between experimentally and historically based topics? In addition, what factors mediate these differences? Four highly experienced high school science teachers were observed daily during instructional units for both experimental and historical science topics. Data sources include classroom observations, field notes, reflective memos, classroom artifacts, a nature of science survey, and teacher interviews. The arguments were analyzed for structure and content using Toulmin's argumentation pattern and Walton's schemes for presumptive reasoning revealing specific patterns of use between the two modes of inquiry. Interview data was analyzed to determine possible factors mediating these patterns. The results of this study reveal that highly experienced teachers present arguments to their students that, while simple in structure, reveal authentic images of science based on experimental and historical modes of inquiry. Structural analysis of the data revealed a common trend toward a greater amount of scientific data used to evidence knowledge claims in the historical science units. The presumptive reasoning analysis revealed that, while some presumptive reasoning schemes remained stable across the two units (e.g. 'causal inferences' and 'sign' schemes), others revealed different patterns of use including the 'analogy', 'evidence to hypothesis', 'example', and 'expert opinion' schemes. Finally, examination of the interview and survey data revealed five specific factors mediating the arguments constructed by the teachers: view of the nature of science, nature of the topic, teacher personal factors, view of students, and pedagogical decisions. 
These factors influenced both the structure and use of presumptive reasoning in the arguments. The results have implications for classroom practice, teacher education, and further research.
A third-order gas-kinetic CPR method for the Euler and Navier-Stokes equations on triangular meshes
NASA Astrophysics Data System (ADS)
Zhang, Chao; Li, Qibing; Fu, Song; Wang, Z. J.
2018-06-01
A third-order accurate gas-kinetic scheme based on the correction procedure via reconstruction (CPR) framework is developed for the Euler and Navier-Stokes equations on triangular meshes. The scheme combines the accuracy and efficiency of the CPR formulation with the multidimensional characteristics and robustness of the gas-kinetic flux solver. Compared with high-order finite-volume gas-kinetic methods, the current scheme is more compact and efficient because it avoids wide stencils on unstructured meshes. Unlike the traditional CPR method, where the inviscid and viscous terms are treated differently, the inviscid and viscous fluxes in the current scheme are coupled and computed uniformly through the kinetic evolution model. In addition, the present scheme adopts a fully coupled spatial and temporal gas distribution function for the flux evaluation, achieving high-order accuracy in both space and time within a single step. Numerical tests with a wide range of flow problems, from nearly incompressible to supersonic flows with strong shocks, for both inviscid and viscous problems, demonstrate the high accuracy and efficiency of the present scheme.
Comparative Study of Three High Order Schemes for LES of Temporally Evolving Mixing Layers
NASA Technical Reports Server (NTRS)
Yee, Helen M. C.; Sjogreen, Biorn Axel; Hadjadj, C.
2012-01-01
Three high order shock-capturing schemes are compared for large eddy simulations (LES) of temporally evolving mixing layers (TML) for different convective Mach numbers (Mc) ranging from the quasi-incompressible regime to highly compressible supersonic regime. The considered high order schemes are fifth-order WENO (WENO5), seventh-order WENO (WENO7) and the associated eighth-order central spatial base scheme with the dissipative portion of WENO7 as a nonlinear post-processing filter step (WENO7fi). This high order nonlinear filter method (H.C. Yee and B. Sjogreen, Proceedings of ICOSAHOM09, June 22-26, 2009, Trondheim, Norway) is designed for accurate and efficient simulations of shock-free compressible turbulence, turbulence with shocklets and turbulence with strong shocks with minimum tuning of scheme parameters. The LES results by WENO7fi using the same scheme parameter agree well with experimental results of Barone et al. (2006), and published direct numerical simulations (DNS) work of Rogers & Moser (1994) and Pantano & Sarkar (2002), whereas results by WENO5 and WENO7 compare poorly with experimental data and DNS computations.
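The fifth-order WENO reconstruction underlying WENO5 can be sketched for a single left-biased interface value, using the classic Jiang-Shu smoothness indicators and weights; the seventh-order and nonlinear-filter variants compared in the abstract extend this same building block:

```python
def weno5_left(u):
    """Jiang-Shu WENO5 reconstruction of the left-biased interface value
    u_{i+1/2} from the five cell averages u = [u_{i-2}, ..., u_{i+2}]."""
    eps = 1e-6
    a, b, c, d, e = u
    # candidate third-order reconstructions on the three sub-stencils
    p0 = (2*a - 7*b + 11*c) / 6.0
    p1 = (-b + 5*c + 2*d) / 6.0
    p2 = (2*c + 5*d - e) / 6.0
    # smoothness indicators (large where a sub-stencil crosses a discontinuity)
    b0 = 13/12*(a - 2*b + c)**2 + 0.25*(a - 4*b + 3*c)**2
    b1 = 13/12*(b - 2*c + d)**2 + 0.25*(b - d)**2
    b2 = 13/12*(c - 2*d + e)**2 + 0.25*(3*c - 4*d + e)**2
    # nonlinear weights biased toward the smoothest sub-stencils
    alpha = [0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2]
    s = sum(alpha)
    w = [x / s for x in alpha]
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

For smooth (here, linear) data all three sub-stencils agree and the reconstruction is exact; across a step the weights collapse onto the smooth sub-stencil, suppressing oscillations.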
The scheme and research of TV series multidimensional comprehensive evaluation on cross-platform
NASA Astrophysics Data System (ADS)
Chai, Jianping; Bai, Xuesong; Zhou, Hongjun; Yin, Fulian
2016-10-01
To address the shortcomings of the traditional comprehensive evaluation system for TV programs, namely its single data source, its neglect of new media, and the high time cost and difficulty of conducting surveys, a new evaluation method for TV series is proposed in this paper, taking a cross-platform, multidimensional perspective on post-broadcast evaluation. The scheme takes data collected directly from cable television and the Internet as its research objects. Based on the TOPSIS principle, the data are preprocessed and computed into primary indicators that reflect different profiles of TV series viewing. After reasonable weighting and summation by six methods (PCA, AHP, etc.), the primary indicators form composite indices for different channels or websites. The scheme avoids the inefficiency and difficulty of surveys and manual scoring; at the same time, it not only reflects different dimensions of viewing but also combines TV media and new media, completing a cross-platform multidimensional comprehensive evaluation of TV series.
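The TOPSIS step, ranking alternatives by their closeness to the ideal and anti-ideal solutions, can be sketched as follows; this treats all criteria as benefit-type and abstracts the six weighting methods mentioned in the abstract into a given weight vector, so it is illustrative rather than the paper's exact procedure:

```python
import math

def topsis(matrix, weights):
    """TOPSIS closeness scores for rows of a decision matrix (alternatives x
    benefit criteria): vector-normalize, weight, then score each alternative
    by d_minus / (d_plus + d_minus) against the ideal/anti-ideal points."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j]**2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) for j in range(n)]   # best per criterion
    worst = [min(v[i][j] for i in range(m)) for j in range(n)]   # worst per criterion
    scores = []
    for i in range(m):
        d_plus = math.sqrt(sum((v[i][j] - ideal[j])**2 for j in range(n)))
        d_minus = math.sqrt(sum((v[i][j] - worst[j])**2 for j in range(n)))
        scores.append(d_minus / (d_plus + d_minus))
    return scores

# a dominant alternative scores 1.0, a dominated one 0.0
scores = topsis([[9, 8, 7], [5, 5, 5], [1, 2, 3]], [0.5, 0.3, 0.2])
```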
Efficient Low Dissipative High Order Schemes for Multiscale MHD Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, Helen C.; Mansour, Nagi (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes for the compressible Euler and Navier-Stokes equations has been developed and verified by the authors and collaborators. These schemes are suitable for the problems in question. Basically, the scheme consists of sixth-order or higher non-dissipative spatial difference operators as the base scheme. To control the amount of numerical dissipation, multiresolution wavelets are used as sensors to adaptively limit the amount and to aid the selection and/or blending of the appropriate types of numerical dissipation to be used. Magnetohydrodynamics (MHD) waves play a key role in drag reduction in highly maneuverable high speed combat aircraft, in space weather forecasting, and in the understanding of the dynamics of the evolution of our solar system and the main sequence stars. Although there exist a few well-studied second and third-order high-resolution shock-capturing schemes for the MHD in the literature, these schemes are too diffusive and not practical for turbulence/combustion MHD flows. On the other hand, extension of higher than third-order high-resolution schemes to the MHD system of equations is not straightforward. Unlike the hydrodynamic equations, the inviscid MHD system is non-strictly hyperbolic with non-convex fluxes. The wave structures and shock types are different from their hydrodynamic counterparts. Many of the non-traditional hydrodynamic shocks are not fully understood. 
Consequently, reliable and highly accurate numerical schemes for multiscale MHD equations pose a great challenge to algorithm development. In addition, controlling the numerical error of the divergence free condition of the magnetic fields for high order methods has been a stumbling block. Lower order methods are not practical for the astrophysical problems in question. We propose to extend our hydrodynamics schemes to the MHD equations with several desired properties over commonly used MHD schemes.
A united event grand canonical Monte Carlo study of partially doped polyaniline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byshkin, M. S., E-mail: mbyshkin@unisa.it, E-mail: gmilano@unisa.it; Correa, A.; Buonocore, F.
2013-12-28
A Grand Canonical Monte Carlo scheme based on united events combining protonation/deprotonation and insertion/deletion of HCl molecules is proposed for the generation of polyaniline structures at intermediate doping levels between 0% (PANI EB) and 100% (PANI ES). A procedure based on this scheme and subsequent structure relaxations using molecular dynamics is described and validated. Using the proposed scheme and the corresponding procedure, atomistic models of amorphous PANI-HCl structures were generated and studied at different doping levels. Density, structure factors, and solubility parameters were calculated. Their values agree well with available experimental data. The interactions of HCl with PANI have been studied and the distribution of their energies has been analyzed. The procedure has also been extended to the generation of PANI models including adsorbed water, and the effect of inclusion of water molecules on PANI properties has also been modeled and discussed. The protocol described here is general, and the proposed United Event Grand Canonical Monte Carlo scheme can be easily extended to similar polymeric materials used in gas sensing and to other systems involving adsorption and chemical reaction steps.
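The grand canonical insertion/deletion moves underlying such a scheme can be sketched for the simplest case, an ideal gas with zero interaction energy; the united protonation/deprotonation events and the HCl energetics of the paper are omitted, and z_vol stands in for exp(beta*mu)*V/Lambda^3:

```python
import random

def gcmc_ideal_gas(z_vol, steps, seed=1):
    """Bare-bones grand canonical Monte Carlo: attempt insertion or deletion
    with equal probability, using the standard GCMC acceptance rules
    (insert: min(1, zV/(N+1)); delete: min(1, N/zV)). Returns the
    time-averaged particle number after a burn-in of steps//2."""
    rng = random.Random(seed)
    n, total, count = 0, 0, 0
    for step in range(steps):
        if rng.random() < 0.5:                           # attempt insertion
            if rng.random() < min(1.0, z_vol / (n + 1)):
                n += 1
        elif n > 0:                                      # attempt deletion
            if rng.random() < min(1.0, n / z_vol):
                n -= 1
        if step >= steps // 2:                           # average after burn-in
            total += n
            count += 1
    return total / count

avg = gcmc_ideal_gas(10.0, 20000)
# for an ideal gas the stationary distribution is Poisson with mean z*V
```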
Performance Analysis of a Wind Turbine Driven Swash Plate Pump for Large Scale Offshore Applications
NASA Astrophysics Data System (ADS)
Buhagiar, D.; Sant, T.
2014-12-01
This paper deals with the performance modelling and analysis of offshore wind-turbine-driven hydraulic pumps. The concept consists of an open-loop hydraulic system with the rotor main shaft directly coupled to a swash plate pump that supplies pressurised sea water. A mathematical model is derived to capture the steady-state behaviour of the entire system. A simplified model of the pump is implemented together with different control scheme options for regulating the rotor shaft power. A new control scheme is investigated, based on the combined use of hydraulic pressure and pitch control. Using a steady-state analysis, the study shows how the adoption of alternative control schemes in the wind turbine-hydraulic pump system may result in higher energy yields than those from a conventional system with an electrical generator and standard pitch control for power regulation. This is particularly the case for the new control scheme investigated in this study, which is based on the combined use of pressure and rotor blade pitch control.
Identification of Piecewise Linear Uniform Motion Blur
NASA Astrophysics Data System (ADS)
Patanukhom, Karn; Nishihara, Akinori
A motion blur identification scheme is proposed for nonlinear uniform motion blurs approximated by piecewise linear models consisting of more than one linear motion component. The proposed scheme includes three modules: a motion direction estimator, a motion length estimator, and a motion combination selector. To identify the motion directions, the scheme relies on trial restorations using directional forward ramp motion blurs along different directions, together with an analysis of directional information in the frequency domain via a Radon transform. Autocorrelation functions of image derivatives along several directions are employed to estimate the motion lengths. A proper motion combination is identified by analyzing local autocorrelation functions of the non-flat component of the trial restored results. Experimental examples with simulated and real-world blurred images are given to demonstrate the promising performance of the proposed scheme.
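The autocorrelation-based length estimation can be illustrated in one dimension: for a length-L box (uniform motion) blur, the autocorrelation of the blurred signal's derivative has its strongest negative peak at lag L. This is a simplified 1-D stand-in for the directional image-derivative analysis in the abstract:

```python
import random

def estimate_blur_length(signal, max_lag):
    """Estimate a 1-D uniform (box) blur length from the derivative's
    autocorrelation: the most negative ACF lag marks the blur length."""
    d = [signal[i] - signal[i - 1] for i in range(1, len(signal))]
    mean = sum(d) / len(d)
    d = [x - mean for x in d]
    def acf(k):
        return sum(d[i] * d[i + k] for i in range(len(d) - k))
    return min(range(1, max_lag + 1), key=acf)   # most negative ACF lag

rng = random.Random(0)
sharp = [rng.gauss(0.0, 1.0) for _ in range(20000)]   # synthetic sharp signal
L = 7
blurred = [sum(sharp[i - j] for j in range(L)) / L for i in range(L, len(sharp))]
L_hat = estimate_blur_length(blurred, 15)
# the estimator recovers the box-blur length
```

The effect is easy to see analytically: the derivative of a length-L box blur is (x[n] - x[n-L]) / L, so for white input its autocorrelation is negative exactly at lag L and near zero elsewhere.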
Influence diagnostics in meta-regression model.
Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua
2017-09-01
This paper studies influence diagnostics in the meta-regression model, including case deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficients and the heterogeneity variance, and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analyses based on the case-weight, response, covariate, and within-variance perturbation schemes are explored. We introduce a method that simultaneously perturbs the responses, covariates, and within-variances to obtain a local influence measure, which has the advantage of being able to compare the influence magnitudes of influential studies across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.
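The DerSimonian-Laird heterogeneity estimator named in the abstract can be sketched for the intercept-only case (no moderators); the full meta-regression version replaces the weighted mean below with a weighted least-squares fit on the study covariates:

```python
def dersimonian_laird(effects, variances):
    """Method-of-moments (DerSimonian-Laird) estimate of the between-study
    variance tau^2 from study effect sizes and within-study variances,
    plus the resulting random-effects pooled estimate."""
    w = [1.0 / v for v in variances]
    sw, sw2 = sum(w), sum(x * x for x in w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    k = len(effects)
    tau2 = max(0.0, (q - (k - 1)) / (sw - sw2 / sw))
    # random-effects pooling re-weights with tau^2 added to each variance
    w_re = [1.0 / (v + tau2) for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return tau2, mu

# identical effects imply no estimated heterogeneity
tau2, mu = dersimonian_laird([1.0, 1.0, 1.0], [1.0, 1.0, 1.0])
```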
NASA Astrophysics Data System (ADS)
Chen, Zhou; Tong, Qiu-Nan; Zhang, Cong-Cong; Hu, Zhan
2015-04-01
Identification of acetone and its two isomers, and control of their ionization and dissociation processes, are performed using a dual-mass-spectrometer scheme. The scheme employs two time-of-flight mass spectrometers to simultaneously acquire the mass spectra of two different molecules under irradiation by identically shaped femtosecond laser pulses. The optimal laser pulses are found using a closed-loop learning method based on a genetic algorithm. Compared with the mass spectra of the two isomers obtained with the transform-limited pulse, those obtained under irradiation by the optimal laser pulse show large differences, and the various reaction pathways of the two molecules are selectively controlled. The experimental results demonstrate that the scheme is effective and useful in studies of two molecules having common mass peaks, for which a traditional single mass spectrometer is unfeasible. Project supported by the National Basic Research Program of China (Grant No. 2013CB922200) and the National Natural Science Foundation of China (Grant No. 11374124).
Non-linear hydrodynamical evolution of rotating relativistic stars: numerical methods and code tests
NASA Astrophysics Data System (ADS)
Font, José A.; Stergioulas, Nikolaos; Kokkotas, Kostas D.
2000-04-01
We present numerical hydrodynamical evolutions of rapidly rotating relativistic stars, using an axisymmetric, non-linear relativistic hydrodynamics code. We use four different high-resolution shock-capturing (HRSC) finite-difference schemes (based on approximate Riemann solvers) and compare their accuracy in preserving uniformly rotating stationary initial configurations in long-term evolutions. Among these four schemes, we find that the third-order piecewise parabolic method scheme is superior in maintaining the initial rotation law in long-term evolutions, especially near the surface of the star. It is further shown that HRSC schemes are suitable for the evolution of perturbed neutron stars and for the accurate identification (via Fourier transforms) of normal modes of oscillation. This is demonstrated for radial and quadrupolar pulsations in the non-rotating limit, where we find good agreement with frequencies obtained with a linear perturbation code. The code can be used for studying small-amplitude or non-linear pulsations of differentially rotating neutron stars, while our present results serve as testbed computations for three-dimensional general-relativistic evolution codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul M. Torrens; Atsushi Nara; Xun Li
2012-01-01
Human movement is a significant ingredient of many social, environmental, and technical systems, yet the importance of movement is often discounted in considering systems complexity. Movement is commonly abstracted in agent-based modeling (which is perhaps the primary methodological vehicle for modeling complex systems), despite the influence of movement upon information exchange and adaptation in a system. In particular, agent-based models of urban pedestrians often treat movement in proxy form at the expense of faithfully treating movement behavior with realistic agency. There exists little consensus about which method is appropriate for representing movement in agent-based schemes. In this paper, we examine popularly used methods to drive movement in agent-based models, first by introducing a methodology that can flexibly handle many representations of movement at many different scales, and second by introducing a suite of tools to benchmark agent movement between models and against real-world trajectory data. We find that most popular movement schemes do a relatively poor job of representing movement, but that some schemes may well be 'good enough' for some applications. We also discuss potential avenues for improving the representation of movement in agent-based frameworks.
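One of the simplest benchmarks for comparing simulated movement against real-world trajectory data is the average displacement error between time-aligned tracks. This short sketch is an illustrative assumption, not one of the paper's benchmark tools:

```python
import numpy as np

def average_displacement_error(traj_a, traj_b):
    """Mean Euclidean distance between two time-aligned trajectories,
    each given as a sequence of (x, y) positions."""
    a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    return float(np.linalg.norm(a - b, axis=1).mean())
```

A simulated pedestrian track and an observed GPS track sampled at the same timestamps can be compared directly; lower values indicate a more faithful movement scheme.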
Xu, Y.; Xia, J.; Miller, R.D.
2007-01-01
The need for incorporating the traction-free condition at the air-earth boundary in finite-difference modeling of seismic wave propagation has been discussed widely. A new implementation has been developed for simulating elastic wave propagation in which the free-surface condition is replaced by an explicit acoustic-elastic boundary. Detailed comparisons of seismograms with different implementations of the air-earth boundary were undertaken using the (2,2) (finite-difference operators second order in time and space) and the (2,6) (second order in time and sixth order in space) standard staggered-grid (SSG) schemes. Methods used in these comparisons to define the air-earth boundary included the stress image method (SIM), the heterogeneous approach, the scheme of modifying material properties based on a transversely isotropic medium approach, the acoustic-elastic boundary approach, and an analytical approach. The proposed method achieves the same or higher accuracy of modeled body waves relative to the SIM. Rayleigh waves calculated using the explicit acoustic-elastic boundary approach differ slightly from those calculated using the SIM. Numerical results indicate that when using the (2,2) SSG scheme for the SIM and our new method, a spatial sampling of 16 points per minimum wavelength is sufficient to achieve 90% accuracy, and 32 points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. When using the (2,6) SSG scheme for the two methods, a spatial sampling of eight points per minimum wavelength achieves 95% accuracy in modeled Rayleigh waves. Our proposed method is physically reasonable and, based on dispersion analysis of simulated seismograms from a layered half-space model, is highly accurate. As a bonus, it is easy to program and slightly faster than the SIM. © 2007 Society of Exploration Geophysicists.
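For orientation, the (2,2) SSG update can be sketched in one dimension. This toy scheme (velocity on integer nodes, stress on half nodes) omits the air-earth boundary treatment that is the subject of the paper; names and parameters are illustrative.

```python
import numpy as np

def staggered_step(v, s, rho, mu, dt, dx):
    """One (2,2) staggered-grid update for the 1-D elastic system
    rho * v_t = ds/dx,  s_t = mu * dv/dx,
    with velocity v on integer nodes and stress s on half nodes."""
    s += dt * mu * np.diff(v) / dx            # stress update at half nodes
    v[1:-1] += (dt / rho) * np.diff(s) / dx   # interior velocity update
    return v, s
```

With shear speed c = sqrt(mu/rho), stability requires c*dt/dx <= 1 for this stencil; boundary nodes are simply frozen here, whereas the paper's point is precisely how to treat the free surface properly.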
Zhu, Jianping; Tao, Zhengsu; Lv, Chunfeng
2012-01-01
Studies of the IEEE 802.15.4 Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) scheme have received considerable attention recently, with most of these studies focusing on homogeneous or saturated traffic. Two novel transmission schemes, OSTS and BSTS (One Service a Time Scheme and Bulk Service a Time Scheme), are proposed in this paper to improve the behavior of time-critical buffered networks with heterogeneous unsaturated traffic. First, we propose a model that combines two modified semi-Markov chains and a macro-Markov chain with the theory of M/G/1/K queues to evaluate the characteristics of these two improved CSMA/CA schemes, in which arriving traffic and accessing packets are granted non-preemptive priority over each other instead of strict prioritization. Then, the throughput, packet delay, and energy consumption of unsaturated, unacknowledged IEEE 802.15.4 beacon-enabled networks are predicted from an overall point of view that takes the dependent interactions of different types of nodes into account. Moreover, performance comparisons of these two schemes with other non-priority schemes are presented. Analysis and simulation results show that the delay and fairness of our schemes are superior to those of other schemes, while throughput and energy efficiency are superior in more heterogeneous situations. Comprehensive simulations demonstrate that the analysis results of these models match the simulation results well. PMID:22666076
A Game Theoretic Framework for Power Control in Wireless Sensor Networks (POSTPRINT)
2010-02-01
which the sensor nodes compute based on past observations. Correspondingly, Pe can only be estimated; for example, with a noncoherent FSK modula...bit error probability for the link (i → j) is given by some inverse function of j. For example, with the noncoherent FSK modulation scheme, Pe = 0.5e...show the results for two different modulation schemes: DPSK and noncoherent PSK. As expected, with improvement in channel condition, i.e., with increase
Implementation of density-based solver for all speeds in the framework of OpenFOAM
NASA Astrophysics Data System (ADS)
Shen, Chun; Sun, Fengxian; Xia, Xinlin
2014-10-01
In the framework of the open-source CFD code OpenFOAM, a density-based solver for flows at all speeds is developed. In this solver the preconditioned all-speed AUSM+(P) scheme is adopted, and a dual-time scheme is implemented to handle the unsteady process. Parallel computation can be employed to accelerate the solution. Different interface reconstruction algorithms are implemented, and their accuracy with respect to convection is compared. Three benchmark tests, lid-driven cavity flow, flow over a bump, and flow over a forward-facing step, are presented to show the accuracy of the AUSM+(P) solver for low-speed incompressible flow, transonic flow, and supersonic/hypersonic flow. First, for the lid-driven cavity flow, the computational results obtained by the different interface reconstruction algorithms are compared. The one-dimensional reconstruction scheme adopted in this solver possesses high accuracy, and the solver can effectively capture the features of low-speed incompressible flow. Then, via the test cases of flow over a bump and over a forward-facing step, the ability to capture the characteristics of transonic and supersonic/hypersonic flows is confirmed. The forward-facing step proves to be the most challenging case for the preconditioned solvers with and without the dual-time scheme. Nonetheless, the solvers described in this paper reproduce the main features of this flow, including the evolution of the initial transient.
NASA Technical Reports Server (NTRS)
Yefet, Amir; Petropoulos, Peter G.
1999-01-01
We consider a divergence-free non-dissipative fourth-order explicit staggered finite difference scheme for the hyperbolic Maxwell's equations. Special one-sided difference operators are derived in order to implement the scheme near metal boundaries and dielectric interfaces. Numerical results show the scheme is long-time stable, and is fourth-order convergent over complex domains that include dielectric interfaces and perfectly conducting surfaces. We also examine the scheme's behavior near metal surfaces that are not aligned with the grid axes, and compare its accuracy to that obtained by the Yee scheme.
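A staggered fourth-order first-derivative operator of the kind used in such schemes can be written compactly. The sketch below gives only the standard interior stencil; the paper's special one-sided operators near metal boundaries and dielectric interfaces are not reproduced.

```python
import numpy as np

def d1_staggered4(f, dx):
    """Fourth-order staggered first derivative: the output samples live
    halfway between the input samples (interior points only).
    D f(i+1/2) = [27(f_{i+1} - f_i) - (f_{i+2} - f_{i-1})] / (24 dx)."""
    return (27.0 * (f[2:-1] - f[1:-2]) - (f[3:] - f[:-3])) / (24.0 * dx)
```

Applied to velocity samples it yields derivatives at the stress points and vice versa, which is exactly the layout a staggered Maxwell or elastic update needs.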
NASA Astrophysics Data System (ADS)
Sjögreen, Björn; Yee, H. C.
2018-07-01
The Sjogreen and Yee [31] high-order entropy conservative numerical method for compressible gas dynamics is extended to include discontinuities and also to the equations of ideal magnetohydrodynamics (MHD). The basic idea is based on Tadmor's [40] original work for inviscid perfect-gas flows. Four formulations of the MHD equations are considered: (a) the conservative MHD, (b) the Godunov [14] non-conservative form, (c) the Janhunen [19] MHD with magnetic-field source terms, and (d) the MHD with source terms of Brackbill and Barnes [5]. Three forms of high-order entropy numerical fluxes for the MHD in the finite-difference framework are constructed. They are based on an extension of the low-order form of Chandrashekar and Klingenberg [9] and on two modifications of the Winters and Gassner [49] numerical fluxes. For flows containing discontinuities and multiscale turbulence fluctuations, the high-order entropy conservative numerical flux is developed as the new base scheme under the Yee and Sjogreen [31] and Kotov et al. [21,22] high-order nonlinear filter approach. The added nonlinear filter step on the high-order centered entropy conservative spatial base scheme is utilized only at isolated computational regions, while maintaining high accuracy almost everywhere for long-time integration of unsteady flows and for DNS and LES of turbulence. Representative test cases for both smooth flows and problems containing discontinuities in gas dynamics and ideal MHD are included. The results illustrate the improved stability obtained by using the high-order entropy conservative numerical flux as the base scheme instead of the pure high-order central scheme.
Comba, Peter; Martin, Bodo; Sanyal, Avik; Stephan, Holger
2013-08-21
A QSPR scheme for the computation of lipophilicities of ⁶⁴Cu complexes was developed with a training set of 24 tetraazamacrocyclic and bispidine-based Cu(II) compounds and their experimentally available 1-octanol-water distribution coefficients. A minimum number of physically meaningful parameters were used in the scheme, primarily based on data available from molecular mechanics calculations, using an established force field for Cu(II) complexes and a recently developed scheme for the calculation of fluctuating atomic charges. The developed model was also applied to an independent validation set and was found to accurately predict the distribution coefficients of potential ⁶⁴Cu positron emission tomography (PET) systems. A possible next step would be the development of a QSAR-based biodistribution model to track the uptake of imaging agents in different organs and tissues of the body. It is expected that such simple, empirical models of lipophilicity and biodistribution will be very useful in the design and virtual screening of PET imaging agents.
The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal
NASA Astrophysics Data System (ADS)
Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis
2016-08-01
The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full-waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimate may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppressing the noise in the PRBS response. The focus is on the thresholding scheme used to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions and then, inspired by the thresholding scheme of wavelet analysis, applies an adaptive interval thresholding that sets to zero all components of the intrinsic mode functions that fall below a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has good capability in denoising and detail preservation.
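The interval-thresholding step can be sketched as follows, assuming the intrinsic mode functions have already been obtained by empirical mode decomposition. The per-IMF noise levels `sigma` and factor `k` are illustrative; within each zero-crossing interval the whole interval is kept or zeroed according to its extremum, which avoids the discontinuities of sample-wise thresholding.

```python
import numpy as np

def interval_threshold(imfs, sigma, k=3.0):
    """Interval thresholding of intrinsic mode functions (IMFs): each
    zero-crossing interval is kept whole if its extremum exceeds k*sigma,
    and zeroed otherwise; the denoised signal is the sum of cleaned IMFs."""
    cleaned = []
    for imf, s in zip(imfs, sigma):
        thr = k * s
        keep = np.zeros_like(imf)
        zc = np.where(imf[:-1] * imf[1:] < 0)[0] + 1   # sign-change indices
        bounds = np.concatenate(([0], zc, [len(imf)]))
        for a, b in zip(bounds[:-1], bounds[1:]):
            if np.max(np.abs(imf[a:b])) > thr:         # interval survives
                keep[a:b] = imf[a:b]
        cleaned.append(keep)
    return np.sum(cleaned, axis=0)
```

In practice `sigma` per IMF is estimated from the data (for instance from the median absolute deviation of the finest IMF), which is where the adaptivity described above enters.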
Versteeg, Bart; Bruisten, Sylvia M; van der Ende, Arie; Pannekoek, Yvonne
2016-04-18
Chlamydia trachomatis infections remain the most common bacterial sexually transmitted infection worldwide. To gain more insight into the epidemiology and transmission of C. trachomatis, several schemes of multilocus sequence typing (MLST) have been developed. We investigated the clustering of C. trachomatis strains derived from men who have sex with men (MSM) and heterosexuals using an MLST scheme based on 7 housekeeping genes (MLST-7), adapted for clinical specimens, and a high-resolution MLST scheme based on 6 polymorphic genes, including ompA (hr-MLST-6). Specimens from 100 C. trachomatis-infected MSM and 100 heterosexual women were randomly selected from previous studies and sequenced. We adapted the MLST-7 scheme to a nested assay suitable for direct typing of clinical specimens. All selected specimens were typed using both the adapted MLST-7 scheme and the hr-MLST-6 scheme. Clustering of C. trachomatis strains derived from MSM and heterosexuals was assessed using minimum spanning tree analysis. Sufficient chlamydial DNA was present in 188 of the 200 (94 %) selected samples. Using the adapted MLST-7 scheme, full MLST profiles were obtained for 187 of 188 tested specimens, a success rate of 99.5 %. Of these 187 specimens, 91 (48.7 %) were from MSM and 96 (51.3 %) from heterosexuals. We detected 21 sequence types (STs) using the adapted MLST-7 scheme and 79 STs using the hr-MLST-6 scheme. Minimum spanning tree analysis of the MLST-7 data showed no reflection of separate transmission in MSM and heterosexual hosts. Moreover, typing using the hr-MLST-6 scheme identified genetically related clusters within each of the clusters identified by the MLST-7 scheme. Thus, no distinct transmission of C. trachomatis could be observed in MSM and heterosexuals using the adapted MLST-7 scheme, in contrast to the hr-MLST-6 scheme.
A Difference Scheme with Autocontrol Artificial Viscosity to Predict Ablated Nosetip Shape
1989-09-29
FTD-ID(RS)T-0640-89, Foreign Technology Division: A Difference Scheme with Autocontrol Artificial Viscosity to Predict Ablated Nosetip Shape, by Yang Maozhao (China Aerodynamic Research and Development Centre). Translation dated September 1989; microfiche no. FTD-89-C-000800.
A novel 'Gold on Gold' biosensing scheme for an on-fiber immunoassay
NASA Astrophysics Data System (ADS)
Punjabi, N.; Satija, J.; Mukherji, S.
2015-05-01
In this paper, we propose a novel 'gold on gold' biosensing scheme for an absorbance-based fiber-optic biosensor. First, a self-assembled monolayer of gold nanoparticles is formed at the sensing region of the fiber-optic probe by incubating an amino-silanized probe in a colloidal gold solution. Thereafter, the receptor moieties, i.e., human immunoglobulin G (HIgG), were immobilized using standard alkanethiol and classic carbodiimide coupling chemistry. Finally, biosensing experiments were performed with different concentrations of the gold nanoparticle-tagged analyte, i.e., goat anti-human immunoglobulin G (Nanogold-GaHIgG). The sensor response was more than five-fold higher than that of a control bioassay in which the sensor matrix was devoid of the gold nanoparticle film. The response was also ~10 times higher than that of the FITC-tagged scheme and ~14.5 times better than the untagged scheme. This novel scheme thus shows potential for improving the limit of detection of fiber-optic biosensors.
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for color images is developed to recover edges and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term into blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of various parametric blur structures, and this information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
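Alternating minimization of this kind can be illustrated in a stripped-down 1-D setting, where each half-step minimizes a Tikhonov-regularized quadratic cost in closed form via the FFT. This is only a generic sketch of the alternating structure, not the paper's regularization operators, and all parameters are illustrative.

```python
import numpy as np

def alternating_deconv(y, n_iter=30, lam=1e-2, mu=1e-2):
    """Alternating minimization for blind deconvolution of a 1-D signal
    y = k * x + noise: the image step and the blur step each solve a
    Tikhonov-regularized least-squares subproblem in closed form in the
    Fourier domain (circular convolution assumed)."""
    Y = np.fft.fft(y)
    K = np.ones_like(Y)                               # initial blur: identity
    for _ in range(n_iter):
        X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)   # image-domain step
        K = np.conj(X) * Y / (np.abs(X) ** 2 + mu)    # blur-domain step
    x = np.fft.ifft(X).real
    k = np.fft.ifft(K).real
    return x, k / k.sum()                             # unit-gain blur
```

The 2-D color case adds exactly the interchannel and spectral regularizers discussed above; the alternating closed-form structure is the same.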
Nirmal Raja, K; Maraline Beno, M
2017-07-01
In wireless sensor networks (WSNs), security is a major issue, and several network security schemes have been proposed in the literature. Malicious nodes degrade network performance, and the network can be made vulnerable by a Sybil attack: when a node illicitly asserts multiple identities or claims fake IDs, the WSN suffers from a Sybil attack. This attack threatens the WSN in data aggregation, system synchronization, routing, fair resource allocation, and misbehavior detection. Hence, this research is carried out to prevent Sybil attacks and increase network performance. This paper presents a novel security mechanism based on the Fujisaki-Okamoto algorithm, together with an application of the work. The Fujisaki-Okamoto (FO) algorithm is an ID-based cryptographic scheme that provides strong authentication against Sybil attacks. The scheme is simulated using Network Simulator 2 (NS2). For the proposed scheme, key broadcasting, the time taken for different key sizes, energy consumption, packet delivery ratio, and throughput were analyzed.
A Maple package for computing Gröbner bases for linear recurrence relations
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.; Robertz, Daniel
2006-04-01
A Maple package for computing Gröbner bases of linear difference ideals is described. The underlying algorithm is based on Janet and Janet-like monomial divisions associated with finite difference operators. The package can be used, for example, for automatic generation of difference schemes for linear partial differential equations and for reduction of multiloop Feynman integrals. These two possible applications are illustrated by simple examples of the Laplace equation and a one-loop scalar integral of propagator type.
Optimized Controller Design for a 12-Pulse Voltage Source Converter Based HVDC System
NASA Astrophysics Data System (ADS)
Agarwal, Ruchi; Singh, Sanjeev
2017-12-01
The paper proposes an optimized controller design scheme for power quality improvement in a 12-pulse voltage source converter based high voltage direct current system. The proposed scheme is a hybrid combination of the golden-section search and successive linear search methods. The paper aims at reducing the number of current sensors and optimizing the controller. The voltage and current controller parameters are selected for optimization because of their impact on power quality. The proposed algorithm optimizes an objective function composed of current harmonic distortion, power factor, and DC voltage ripple. The detailed design and modeling of the complete system are discussed, and its simulation is carried out in the MATLAB-Simulink environment. The obtained results demonstrate the effectiveness of the proposed scheme under different transient conditions such as load perturbation, non-linear load, voltage sag, and a tapped load fault under one-phase-open condition at both points of common coupling.
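Golden-section search, one half of the proposed hybrid, is a standard derivative-free method for minimizing a unimodal function on an interval. A minimal sketch (with illustrative defaults; in the paper the function would be the composite power-quality objective):

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Golden-section search for the minimizer of a unimodal f on [a, b]:
    the bracket shrinks by the factor 1/phi each iteration."""
    inv_phi = (math.sqrt(5) - 1) / 2      # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                   # minimizer lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                             # minimizer lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2
```

Because each objective evaluation in the HVDC setting means a full simulation run, the bracketing structure (one new evaluation per iteration, after caching) is what makes this method attractive for controller tuning.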
Toward privacy-preserving JPEG image retrieval
NASA Astrophysics Data System (ADS)
Cheng, Hang; Wang, Jingyue; Wang, Meiqing; Zhong, Shangping
2017-07-01
This paper proposes a privacy-preserving retrieval scheme for JPEG images based on local variance. Three parties are involved in the scheme: the content owner, the server, and the authorized user. The content owner encrypts JPEG images for privacy protection by jointly using permutation cipher and stream cipher, and then, the encrypted versions are uploaded to the server. With an encrypted query image provided by an authorized user, the server may extract blockwise local variances in different directions without knowing the plaintext content. After that, it can calculate the similarity between the encrypted query image and each encrypted database image by a local variance-based feature comparison mechanism. The authorized user with the encryption key can decrypt the returned encrypted images with plaintext content similar to the query image. The experimental results show that the proposed scheme not only provides effective privacy-preserving retrieval service but also ensures both format compliance and file size preservation for encrypted JPEG images.
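The blockwise local-variance feature can be illustrated on plaintext images. In the paper the server computes directional local variances directly on encrypted JPEG data; this sketch only shows the variance-feature and similarity idea, with the block size and the similarity mapping chosen arbitrarily.

```python
import numpy as np

def blockwise_variance_feature(img, block=8):
    """Feature vector of per-block pixel variances for a grayscale image
    (edges are cropped to a multiple of the block size)."""
    h, w = img.shape
    h, w = h - h % block, w - w % block
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.var(axis=(1, 3)).ravel()

def similarity(f1, f2):
    """Map the mean absolute feature distance into (0, 1]; identical
    feature vectors score exactly 1."""
    return 1.0 / (1.0 + np.abs(f1 - f2).mean())
```

Ranking database images by this score against the query's feature vector gives the retrieval ordering; the paper's contribution is doing the variance extraction without decrypting.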
A RONI Based Visible Watermarking Approach for Medical Image Authentication.
Thanki, Rohit; Borra, Surekha; Dwivedi, Vedvyas; Borisagar, Komal
2017-08-09
Nowadays medical data in the form of image files are often exchanged between different hospitals for use in telemedicine and diagnosis. Visible watermarking is extensively used for intellectual property identification of such medical images, but it leads to serious issues if proper regions for watermark insertion are not identified. In this paper, Region of Non-Interest (RONI) based visible watermarking for medical image authentication is proposed. In this technique, the RONI of the cover medical image is first identified using a Human Visual System (HVS) model. The watermark logo is then visibly inserted into the RONI of the cover medical image to obtain the watermarked medical image. Finally, the watermarked medical image is compared with the original medical image to measure the imperceptibility and authenticity of the proposed scheme. The experimental results showed that the proposed scheme reduces computational complexity and improves the PSNR when compared to many existing schemes.
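The visible insertion step amounts to alpha-blending the logo into the identified region. The sketch below assumes the RONI has already been located (the paper uses an HVS model for that step); the blending factor and region coordinates are illustrative.

```python
import numpy as np

def insert_visible_watermark(cover, logo, top, left, alpha=0.35):
    """Alpha-blend a grayscale watermark logo into a rectangular region
    of the cover image (the region standing in for a located RONI)."""
    out = cover.astype(float).copy()
    h, w = logo.shape
    roi = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (1 - alpha) * roi + alpha * logo
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because pixels outside the region are untouched, imperceptibility metrics such as PSNR computed over the diagnostically relevant area are unaffected, which is the point of restricting insertion to the RONI.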
Human visual system-based color image steganography using the contourlet transform
NASA Astrophysics Data System (ADS)
Abdul, W.; Carré, P.; Gaborit, P.
2010-01-01
We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function (CSF) to control the strength of insertion of the hidden information in a perceptually uniform color space. The CIELAB color space is used as it is well suited for steganographic applications: any change in the CIELAB color space has a corresponding effect on the human visual system, which matters because a steganographic scheme must be undetectable by the human visual system (HVS). The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions, as it can be molded with respect to the human perception of different frequencies in an image. The imperceptibility of the steganographic scheme with respect to the color perception of the HVS is evaluated using standard methods such as the structural similarity (SSIM) index and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.
NASA Astrophysics Data System (ADS)
Han, Y.; Misra, S.
2018-04-01
Multi-frequency measurements of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, are commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases, in which different relaxation models are coupled into the scheme, and is then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to 3 orders of magnitude of variation around the true values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper, irrespective of the type of measured EM property and the type of relaxation model. Notably, a jump-out step and a jump-back-in step are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor the physical bounds of the model parameters. The proposed inversion scheme can be easily used to process various types of EM measurements without major changes to the scheme.
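A bounded least-squares inversion of this kind can be sketched for Pelton's complex resistivity model. The paper implements a bounded Levenberg algorithm with jump-out and jump-back-in steps; the sketch below substitutes SciPy's bounded trust-region solver as a stand-in, and all bounds and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def pelton(params, w):
    """Pelton complex resistivity: rho0, chargeability m, time constant
    tau, and frequency exponent c, at angular frequencies w."""
    rho0, m, tau, c = params
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * w * tau) ** c)))

def fit_pelton(w, data, x0):
    """Bounded least-squares inversion of multi-frequency complex data,
    stacking real and imaginary residuals into one real vector."""
    def resid(p):
        r = pelton(p, w) - data
        return np.concatenate([r.real, r.imag])
    lb = [1e-6, 0.0, 1e-9, 0.0]          # illustrative physical bounds
    ub = [1e6, 1.0, 1e3, 1.0]
    return least_squares(resid, x0, bounds=(lb, ub), method='trf').x
```

Swapping `pelton` for a Cole-Cole or other relaxation model changes only the forward function, which is the modularity the unified scheme above exploits.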
NASA Technical Reports Server (NTRS)
Lang, Stephen E.; Tao, Wei-Kuo; Chern, Jiun-Dar; Wu, Di; Li, Xiaowen
2015-01-01
Numerous cloud microphysical schemes designed for cloud and mesoscale models are currently in use, ranging from simple bulk to multi-moment, multi-class, and explicit bin schemes. This study details the benefits of adding a fourth ice class (hail) to an already improved 3-class ice bulk microphysics scheme developed for the Goddard Cumulus Ensemble model based on Rutledge and Hobbs (1983, 1984). Besides the addition and modification of several hail processes from Lin et al. (1983), further modifications were made to the 3-ice processes, including allowing greater ice supersaturation and mitigating spurious evaporation/sublimation in the saturation adjustment scheme, allowing graupel/hail to become snow via vapor growth and hail to become graupel via riming, and including a rain evaporation correction and a vapor diffusivity factor. The improved 3-ice snow/graupel size-mapping schemes were adjusted to be more stable at higher mixing ratios and to increase the aggregation effect for snow. A snow density mapping was also added. The new scheme was applied to an intense continental squall line and to a weaker, loosely organized continental case using three different hail intercepts. Peak simulated reflectivities agree well with radar for both cases and were better than earlier 3-ice versions when using a moderate and a large intercept for hail, respectively. Simulated reflectivity distributions versus height were also improved relative to radar in both cases compared to earlier 3-ice versions. The bin-based rain evaporation correction affected the squall line case more but did not change the overall agreement in reflectivity distributions.
NASA Astrophysics Data System (ADS)
Pilan, N.; Antoni, V.; De Lorenzi, A.; Chitarin, G.; Veltri, P.; Sartori, E.
2016-02-01
A scheme for a neutral beam injector (NBI), based on electrostatic acceleration and magneto-static deflection of negative ions, is proposed and analyzed in terms of feasibility and performance. The scheme is based on the deflection of a high-energy (2 MeV), high-current (some tens of amperes) negative ion beam by a large magnetic deflector placed between the beam source (BS) and the neutralizer. This scheme has the potential to solve two key issues that at present limit the applicability of an NBI to a fusion reactor: the maximum achievable acceleration voltage and the direct exposure of the BS to the flux of neutrons and radiation coming from the reactor. To address these two issues, the magnetic deflector screens the BS from direct exposure to radiation and neutrons, so that the voltage insulation between the electrostatic accelerator and the grounded vessel can be enhanced by using compressed SF6 instead of vacuum, allowing the negative ions to be accelerated to energies higher than 1 MeV. By solving the beam transport for different magnetic deflector properties, an optimum scheme has been found that is shown to be effective in guaranteeing both the steering effect and the beam aiming.
[GIS and scenario analysis aid to water pollution control planning of river basin].
Wang, Shao-ping; Cheng, Sheng-tong; Jia, Hai-feng; Ou, Zhi-dan; Tan, Bin
2004-07-01
The forward and backward algorithms for watershed water pollution control planning are summarized in this paper, together with their advantages and shortcomings. Spatial databases of water environmental function regions, pollution sources, monitoring sections, and sewer outlets were built on the ArcGIS 8.1 platform in a case study of the Ganjiang valley, Jiangxi province. Based on the principles of the forward algorithm, four scenarios were designed for watershed pollution control. Under these scenarios, ten sets of planning schemes were generated to implement cascaded pollution source control. The investment costs of sewage treatment for these schemes were estimated by means of a series of cost functions; with pollution source prediction, the water quality was modeled with a CSTR model for each planning scheme. The modeled results of the different planning schemes were visualized through GIS to aid decision-making. With the investment cost and water quality attainment as decision criteria, and based on an analysis of the economically bearable capacity for water pollution control in the Ganjiang river basin, two optimized schemes were proposed. The research shows that GIS technology and scenario analysis can provide good guidance on the synthesis, integrity, and sustainability aspects of river basin water quality planning.
NASA Astrophysics Data System (ADS)
Schroeder, Sascha Thorsten; Costa, Ana; Obé, Elisabeth
In recent years, fuel cell based micro-combined heat and power (mCHP) has received increasing attention due to its potential contribution to European energy policy goals, i.e., sustainability, competitiveness and security of supply. Besides technical advances, regulatory frameworks and ownership structures are of crucial importance for achieving greater diffusion of the technology in residential applications. This paper analyses the interplay of policy and ownership structures for the future deployment of mCHP, examining three country cases: Denmark, France and Portugal. Firstly, the implications of different kinds of support schemes for investment risk and the diffusion of a technology are explained conceptually. Secondly, ownership arrangements are addressed. Then, a cross-country comparison of present support schemes for mCHP and competing technologies discusses the national implementation of European legislation in Denmark, France and Portugal. Finally, the resulting implications of ownership arrangements for the choice of support scheme are explained. From a conceptual point of view, investment support, feed-in tariffs and price premiums are the most appropriate schemes for fuel cell mCHP, and this finding can be used for improved analysis of operational strategies. The interaction of this plethora of elements necessitates careful balancing from both a private- and a socio-economic point of view.
Dynamic Droop–Based Inertial Control of a Doubly-Fed Induction Generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Min; Muljadi, Eduard; Park, Jung-Wook
2016-07-01
If a large disturbance occurs in a power grid, two auxiliary loops have traditionally been used for the inertial control of a wind turbine generator: a droop loop and a rate of change of frequency (ROCOF) loop. Because their gains are fixed, it is difficult to determine gains suitable for all grid and wind conditions. This paper proposes a dynamic droop-based inertial control scheme for a doubly-fed induction generator (DFIG). The scheme aims to improve the frequency nadir (FN) and ensure stable operation of a DFIG. To achieve the first goal, the scheme uses a droop loop, but it dynamically changes the loop's gain based on the ROCOF to release a large amount of kinetic energy during the initial stage of a disturbance. To do this, a shaping function that relates the droop to the ROCOF is used. To achieve the second goal, different shaping functions, which depend on rotor speeds, are used to give a large contribution in high wind conditions and to prevent over-deceleration in low wind conditions during inertial control. The performance of the proposed scheme was investigated under various wind conditions using an EMTP-RV simulator. The results indicate that the scheme improves the FN and ensures stable operation of a DFIG.
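The dynamic-droop idea can be sketched as a gain that grows with |ROCOF| early in a disturbance and is scaled down at low rotor speeds to avoid over-deceleration. The exponential and linear shaping forms and every constant below are assumptions for illustration, not the paper's actual shaping functions:

```python
import math

def dynamic_droop_gain(rocof, rotor_speed, k_max=40.0, rocof_scale=0.5,
                       w_min=0.7, w_rated=1.2):
    """Hypothetical shaping function: the droop gain increases with |ROCOF|
    so more kinetic energy is released in the initial stage of a disturbance,
    and it is scaled down at low rotor speeds (all units are per-unit)."""
    rocof_term = 1.0 - math.exp(-abs(rocof) / rocof_scale)   # in [0, 1)
    speed_term = max(0.0, min(1.0, (rotor_speed - w_min) / (w_rated - w_min)))
    return k_max * rocof_term * speed_term

def inertial_power(rocof, freq_dev, rotor_speed):
    """Droop-loop power command: gain times (negative) frequency deviation,
    so an under-frequency event produces extra active power."""
    return dynamic_droop_gain(rocof, rotor_speed) * (-freq_dev)
```

The two qualitative properties the abstract describes fall out directly: a larger |ROCOF| raises the gain, and a rotor at its minimum speed contributes nothing.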
Sharing Regional Cooperative Gains From Reusing Effluent for Irrigation
NASA Astrophysics Data System (ADS)
Dinar, Ariel; Yaron, Dan; Kannai, Yakar
1986-03-01
This paper is concerned with the allocation of costs and benefits from regional cooperation in the reuse of municipal effluent for irrigation in the Ramla region of Israel. An efficient regional solution provides the maximal regional income, which has to be redistributed among the town and several farms. Different allocations based on marginal cost pricing and on schemes from cooperative game theory, such as the core, the Shapley value, the generalized Shapley value, and the nucleolus, are applied. The town and farm A receive the main additional gains under all allocation schemes presented. Advantages and disadvantages of these allocation schemes are examined in order to suggest a fair and acceptable allocation of the regional cooperative gains. Although no single method emerged as preferred, marginal cost pricing was found to be unacceptable to the participants. The conclusion is that the theory of cooperative games can provide guidelines for comparing the different solutions.
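The game-theoretic allocations compared above can be illustrated with a toy three-player game (a town T and two farms, A and B); the characteristic-function incomes are hypothetical, not the Ramla-region data. A direct permutation-based Shapley computation:

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value: average each player's marginal contribution
    over all orderings. v maps a frozenset of players to coalition income."""
    phi = {p: 0.0 for p in players}
    count = 0
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
        count += 1
    return {p: val / count for p, val in phi.items()}

# Hypothetical reuse game: only coalitions containing the town create income,
# and the town-farm A partnership is the most productive.
v = {frozenset(): 0, frozenset({"T"}): 0, frozenset({"A"}): 0,
     frozenset({"B"}): 0, frozenset({"T", "A"}): 60,
     frozenset({"T", "B"}): 30, frozenset({"A", "B"}): 0,
     frozenset({"T", "A", "B"}): 72}

phi = shapley(["T", "A", "B"], v)
```

By construction the values sum to the grand-coalition income (efficiency), and T and farm A receive the largest shares, mirroring the pattern reported above.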
A 3D staggered-grid finite difference scheme for poroelastic wave equation
NASA Astrophysics Data System (ADS)
Zhang, Yijie; Gao, Jinghuai
2014-10-01
Three dimensional numerical modeling has been a viable tool for understanding wave propagation in real media. The poroelastic media can better describe the phenomena of hydrocarbon reservoirs than acoustic and elastic media. However, the numerical modeling in 3D poroelastic media demands significantly more computational capacity, including both computational time and memory. In this paper, we present a 3D poroelastic staggered-grid finite difference (SFD) scheme. During the procedure, parallel computing is implemented to reduce the computational time. Parallelization is based on domain decomposition, and communication between processors is performed using message passing interface (MPI). Parallel analysis shows that the parallelized SFD scheme significantly improves the simulation efficiency and 3D decomposition in domain is the most efficient. We also analyze the numerical dispersion and stability condition of the 3D poroelastic SFD method. Numerical results show that the 3D numerical simulation can provide a real description of wave propagation.
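The velocity-stress staggered-grid update at the heart of such schemes can be shown in its simplest 1D elastic form (second-order leapfrog); this is only the structural skeleton, not the paper's 3D poroelastic MPI implementation, and all parameter values are illustrative:

```python
import math

def staggered_1d(nx=256, nt=200, dx=1.0, dt=0.5, rho=1.0, mu=1.0):
    """1D velocity-stress staggered-grid finite difference (2nd order).
    Velocity v lives on integer nodes, stress s on half nodes; the two
    fields are updated in a leapfrog fashion."""
    v = [0.0] * nx
    # initial Gaussian stress pulse in the middle of the grid
    s = [math.exp(-0.05 * (i - nx // 2) ** 2) for i in range(nx)]
    for _ in range(nt):
        for i in range(nx - 1):                      # velocity update
            v[i] += dt / rho * (s[i + 1] - s[i]) / dx
        for i in range(1, nx):                       # stress update
            s[i] += dt * mu * (v[i] - v[i - 1]) / dx
    return v, s
```

With c = sqrt(mu/rho) = 1 and dt/dx = 0.5 the CFL condition is satisfied; the initial pulse splits into two travelling waves, so after 200 steps the left-going front sits near cell 28. The 3D poroelastic version adds more field components and, as in the paper, a domain decomposition across MPI ranks.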
An improved biometrics-based authentication scheme for telecare medical information systems.
Guo, Dianli; Wen, Qiaoyan; Li, Wenmin; Zhang, Hua; Jin, Zhengping
2015-03-01
Telecare medical information systems (TMIS) offer healthcare delivery services, and patients can conveniently acquire their desired medical services through public networks. The protection of patients' privacy and data confidentiality is significant. Very recently, Mishra et al. proposed a biometrics-based authentication scheme for telecare medical information systems. Their scheme can protect user privacy and was believed to resist a range of network attacks. In this paper, we analyze Mishra et al.'s scheme and identify that it is insecure against known session key attacks and impersonation attacks. We therefore present a modified biometrics-based authentication scheme for TMIS that eliminates these faults. Besides, we demonstrate the completeness of the proposed scheme through BAN logic. Compared to related schemes, our protocol provides stronger security and is more practical.
Optimized Energy Harvesting, Cluster-Head Selection and Channel Allocation for IoTs in Smart Cities
Aslam, Saleem; Hasan, Najam Ul; Jang, Ju Wook; Lee, Kyung-Geun
2016-01-01
This paper highlights three critical aspects of the internet of things (IoTs), namely (1) energy efficiency, (2) energy balancing and (3) quality of service (QoS) and presents three novel schemes for addressing these aspects. For energy efficiency, a novel radio frequency (RF) energy-harvesting scheme is presented in which each IoT device is associated with the best possible RF source in order to maximize the overall energy that the IoT devices harvest. For energy balancing, the IoT devices in close proximity are clustered together and then an IoT device with the highest residual energy is selected as a cluster head (CH) on a rotational basis. Once the CH is selected, it assigns channels to the IoT devices to report their data using a novel integer linear program (ILP)-based channel allocation scheme by satisfying their desired QoS. To evaluate the presented schemes, exhaustive simulations are carried out by varying different parameters, including the number of IoT devices, the number of harvesting sources, the distance between RF sources and IoT devices and the primary user (PU) activity of different channels. The simulation results demonstrate that our proposed schemes perform better than the existing ones. PMID:27918424
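The cluster-head rotation and channel assignment steps above can be sketched as follows; the greedy first-fit assignment is only a stand-in for the paper's ILP formulation, and the device names, residual energies and PU-occupied channels are made up:

```python
def select_cluster_head(devices):
    """Pick the IoT device with the highest residual energy as cluster head
    (re-running this each round gives the rotational CH selection).
    devices: dict name -> residual energy (J)."""
    return max(devices, key=devices.get)

def assign_channels(members, channels, pu_active):
    """Greedy stand-in for the ILP channel allocation: give each cluster
    member the first channel that is neither occupied by a primary user
    nor already claimed by another member."""
    assignment, used = {}, set()
    for m in members:
        for ch in channels:
            if ch not in pu_active and ch not in used:
                assignment[m] = ch
                used.add(ch)
                break
    return assignment

devices = {"d1": 0.8, "d2": 1.5, "d3": 1.1}          # residual energies
ch_head = select_cluster_head(devices)
members = [d for d in devices if d != ch_head]
alloc = assign_channels(members, channels=[1, 2, 3, 4], pu_active={1, 3})
```

The real scheme optimizes the assignment jointly under QoS constraints; the greedy version simply shows the data flow from energy readings to a PU-aware channel map.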
NASA Astrophysics Data System (ADS)
Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.
2017-07-01
In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The assimilation results are validated according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The results show that: (1) the IAU 50 scheme performs as well as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in the estimation of dynamical variables in dynamically active regions; (3) with a sufficient number of observations and good error specification, the impact of the IAU schemes is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to different model integration times and to instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on the one hand allows better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in dynamically active regions.
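The probabilistic validation above relies on the CRPS, whose empirical (ensemble) estimator is easy to state and is independent of the assimilation setup:

```python
def crps_empirical(ensemble, obs):
    """Empirical CRPS for a scalar observation y and ensemble {x_i}:
    CRPS = E|X - y| - 0.5 * E|X - X'|   (lower is better).
    The first term rewards accuracy, the second accounts for ensemble
    spread, which is what makes the score decomposable into reliability
    and resolution components."""
    m = len(ensemble)
    term1 = sum(abs(x - obs) for x in ensemble) / m
    term2 = sum(abs(xi - xj) for xi in ensemble for xj in ensemble) / (m * m)
    return term1 - 0.5 * term2
```

A perfect deterministic forecast scores zero, for a single member the CRPS reduces to the absolute error, and a sharp ensemble centred on the observation beats a diffuse one.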
Self-adaptive relevance feedback based on multilevel image content analysis
NASA Astrophysics Data System (ADS)
Gao, Yongying; Zhang, Yujin; Fu, Yu
2001-01-01
In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is key to improving querying. Among the related techniques, relevance feedback has become a hot research topic because it uses information from the user to refine the querying results. In practice, many methods have been proposed to achieve relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods, our scheme provides a self-adaptive operation. First, based on multilevel image content analysis, the images the user marks as relevant are automatically analyzed at different levels and the query is modified according to the analysis results. Second, to make the procedure more convenient for the user, relevance feedback can be conducted with or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results obtained with our self-adaptive relevance feedback are given.
Finite Difference Schemes as Algebraic Correspondences between Layers
NASA Astrophysics Data System (ADS)
Malykh, Mikhail; Sevastianov, Leonid
2018-02-01
For some differential equations, especially the Riccati equation, new finite difference schemes are suggested. These schemes define projective correspondences between the layers. Calculations using these schemes can be extended beyond the movable singularities of the exact solution without any error accumulation.
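For the Riccati equation y' = 1 + y^2 (exact solution y = tan t), a scheme of this kind is a Möbius map between layers. The step below uses the tangent addition law, so the layer-to-layer correspondence stays projective and the computation passes cleanly through the movable pole of tan at t = π/2. This particular map is an illustrative instance of the idea, not necessarily the authors' exact scheme:

```python
import math

def riccati_step(y, h):
    """One layer-to-layer Möbius (projective) step for y' = 1 + y^2.
    Since the exact flow is tan(arctan(y) + h), the rational map
    y -> (y + tan h) / (1 - y * tan h) reproduces it exactly and remains a
    well-defined projective correspondence even when y passes through
    infinity (the movable pole of the solution)."""
    t = math.tan(h)
    return (y + t) / (1.0 - y * t)

def integrate(y0, h, n):
    """Advance n layers of step h from the initial value y0."""
    y = y0
    for _ in range(n):
        y = riccati_step(y, h)
    return y
```

Starting from y(0) = 0, n steps of size h land exactly on tan(nh), including values of nh beyond the pole at π/2, which a conventional explicit scheme cannot cross.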
Involution and Difference Schemes for the Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.; Blinkov, Yuri A.
In the present paper we consider the Navier-Stokes equations for two-dimensional viscous incompressible fluid flows and apply to these equations our earlier designed general algorithmic approach to the generation of finite-difference schemes. In doing so, we first complete the Navier-Stokes equations to involution by computing their Janet basis and discretize this basis by converting it into the integral conservation law form. Then we again complete the obtained difference system to involution, eliminating the partial derivatives and extracting the minimal Gröbner basis from the Janet basis. The elements in the obtained difference Gröbner basis that do not contain partial derivatives of the dependent variables compose a conservative difference scheme. By exploiting arbitrariness in the numerical integration approximation we derive two finite-difference schemes that are similar to the classical scheme by Harlow and Welch. Each of the two schemes is characterized by a 5×5 stencil on an orthogonal and uniform grid. We also demonstrate how an inconsistent difference scheme with a 3×3 stencil is generated by an inappropriate numerical approximation of the underlying integrals.
NASA Astrophysics Data System (ADS)
Zhang, Yue; Zhu, Lianhua; Wang, Ruijie; Guo, Zhaoli
2018-05-01
Recently a discrete unified gas kinetic scheme (DUGKS) in a finite-volume formulation based on the Boltzmann model equation has been developed for gas flows in all flow regimes. The original DUGKS is designed for flows of single-species gases. In this work, we extend the DUGKS to flows of binary gas mixtures of Maxwell molecules based on the Andries-Aoki-Perthame kinetic model [P. Andries et al., J. Stat. Phys. 106, 993 (2002), 10.1023/A:1014033703134]. A particular feature of the method is that the flux at each cell interface is evaluated based on the characteristic solution of the kinetic equation itself; thus the numerical dissipation is low in comparison with that using direct reconstruction. Furthermore, the implicit treatment of the collision term enables the time step to be free from the restriction of the relaxation time. Unlike the DUGKS for single-species flows, a nonlinear system must be solved to determine the interaction parameters appearing in the equilibrium distribution function, which can be obtained analytically for Maxwell molecules. Several tests are performed to validate the scheme, including the shock structure problem under different Mach numbers and molar concentrations, the channel flow driven by a small gradient of pressure, temperature, or concentration, the plane Couette flow, and the shear driven cavity flow under different mass ratios and molar concentrations. The results are compared with those from other reliable numerical methods. The results show that the proposed scheme is an effective and reliable method for binary gas mixtures in all flow regimes.
Computerized Detection of Lung Nodules by Means of “Virtual Dual-Energy” Radiography
Chen, Sheng; Suzuki, Kenji
2014-01-01
Major challenges in current computer-aided detection (CADe) schemes for nodule detection in chest radiographs (CXRs) are to detect nodules that overlap with ribs and/or clavicles and to reduce the frequent false positives (FPs) caused by ribs. Detection of such nodules by a CADe scheme is very important, because radiologists are likely to miss such subtle nodules. Our purpose in this study was to develop a CADe scheme with improved sensitivity and specificity by use of “virtual dual-energy” (VDE) CXRs where ribs and clavicles are suppressed with massive-training artificial neural networks (MTANNs). To reduce rib-induced FPs and detect nodules overlapping with ribs, we incorporated the VDE technology in our CADe scheme. The VDE technology suppressed rib and clavicle opacities in CXRs while maintaining soft-tissue opacity by use of the MTANN technique that had been trained with real dual-energy imaging. Our scheme detected nodule candidates on VDE images by use of a morphologic filtering technique. Sixty morphologic and gray-level-based features were extracted from each candidate from both original and VDE CXRs. A nonlinear support vector classifier was employed for classification of the nodule candidates. A publicly available database containing 140 nodules in 140 CXRs and 93 normal CXRs was used for testing our CADe scheme. All nodules were confirmed by computed tomography examinations, and the average size of the nodules was 17.8 mm. Thirty percent (42/140) of the nodules were rated “extremely subtle” or “very subtle” by a radiologist. The original scheme without VDE technology achieved a sensitivity of 78.6% (110/140) with 5 (1165/233) FPs per image. 
By use of the VDE technology, more nodules overlapping with ribs or clavicles were detected and the sensitivity was improved substantially to 85.0% (119/140) at the same FP rate in a leave-one-out cross-validation test, whereas the FP rate was reduced to 2.5 (583/233) per image at the same sensitivity level as the original CADe scheme obtained (the difference between the specificities of the original and the VDE-based CADe schemes was statistically significant). In particular, the sensitivity of our VDE-based CADe scheme for subtle nodules (66.7% = 28/42) was statistically significantly higher than that of the original CADe scheme (57.1% = 24/42). Therefore, by use of VDE technology, the sensitivity and specificity of our CADe scheme for detection of nodules, especially subtle nodules, in CXRs were improved substantially. PMID:23193306
Comparison of different pairing fluctuation approaches to BCS-BEC crossover
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levin, Kathryn; Chen Qijin; Zhejiang Institute of Modern Physics and Department of Physics, Zhejiang University, Hangzhou, Zhejiang 310027
2010-02-15
The subject of BCS-Bose-Einstein condensation (BEC) crossover is particularly exciting because of its realization in ultracold atomic Fermi gases and its possible relevance to high temperature superconductors. In this paper we review the body of theoretical work on this subject, which represents a natural extension of the seminal papers by Leggett and by Nozieres and Schmitt-Rink (NSR). The former addressed only the ground state, now known as the 'BCS-Leggett' wave-function, and the key contributions of the latter pertain to calculations of the superfluid transition temperature T_c. These two papers have given rise to two main and, importantly, distinct theoretical schools in the BCS-BEC crossover literature. The first of these extends the BCS-Leggett ground state to finite temperature and the second extends the NSR scheme away from T_c both in the superfluid and normal phases. It is now rather widely accepted that these extensions of NSR produce a different ground state than that first introduced by Leggett. This observation provides a central motivation for the present paper, which seeks to clarify the distinctions between the two approaches. Our analysis shows how the NSR-based approach views the bosonic contributions more completely but treats the fermions as 'quasi-free'. By contrast, the BCS-Leggett based approach treats the fermionic contributions more completely but treats the bosons as 'quasi-free'. In a related fashion, the NSR-based schemes approach the crossover between BCS and BEC by starting from the BEC limit, and the BCS-Leggett based scheme approaches this crossover by starting from the BCS limit. Ultimately, one would like to combine these two schemes. There are, however, many difficult problems to surmount in any attempt to bridge the gap between the two theory classes. In this paper we review the strengths and weaknesses of both approaches.
The flexibility of the BCS-Leggett based approach and its ease of handling make it widely used in T = 0 applications, whereas the NSR-based schemes tend to be widely used at T ≠ 0. To reach a full understanding, it is important in the future to invest effort in investigating in more detail the T = 0 aspects of NSR-based theory and at the same time the T ≠ 0 aspects of BCS-Leggett theory.
Teleportation-based realization of an optical quantum two-qubit entangling gate.
Gao, Wei-Bo; Goebel, Alexander M; Lu, Chao-Yang; Dai, Han-Ning; Wagenknecht, Claudia; Zhang, Qiang; Zhao, Bo; Peng, Cheng-Zhi; Chen, Zeng-Bing; Chen, Yu-Ao; Pan, Jian-Wei
2010-12-07
In recent years, there has been heightened interest in quantum teleportation, which allows for the transfer of unknown quantum states over arbitrary distances. Quantum teleportation not only serves as an essential ingredient in long-distance quantum communication, but also provides enabling technologies for practical quantum computation. Of particular interest is the scheme proposed by D. Gottesman and I. L. Chuang [(1999) Nature 402:390-393], showing that quantum gates can be implemented by teleporting qubits with the help of some special entangled states. Therefore, the construction of a quantum computer can be simply based on some multiparticle entangled states, Bell-state measurements, and single-qubit operations. The feasibility of this scheme relaxes experimental constraints on realizing universal quantum computation. Using two different methods, we demonstrate the smallest nontrivial module in such a scheme--a teleportation-based quantum entangling gate for two different photonic qubits. One uses a high-fidelity six-photon interferometer to realize controlled-NOT gates, and the other uses four-photon hyperentanglement to realize controlled-Phase gates. The results clearly demonstrate the working principles and the entangling capability of the gates. Our experiment represents an important step toward the realization of practical quantum computers and could lead to many further applications in linear optics quantum information processing.
NASA Astrophysics Data System (ADS)
Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.
2018-02-01
Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can require long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme is developed. The scheme is based on using an sCMOS camera and FPI sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup and the signal-to-noise ratio (SNR) was measured. A comparison is made between the SNR of PA signals detected using (1) a photodiode in a conventional raster-scanning detection scheme and (2) an sCMOS camera in the parallelised detection scheme. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.
NASA Astrophysics Data System (ADS)
Guo, Kai; Xie, Yongjie; Ye, Hu; Zhang, Song; Li, Yunfei
2018-04-01
Because the shape of a stratospheric airship is uncertain and this uncertainty raises safety concerns, surface reconstruction and surface deformation monitoring of the airship were conducted based on laser scanning technology, and a √3-subdivision scheme based on Shepard interpolation was developed. The proposed scheme was then compared with the original √3-subdivision scheme. The results show that it reduces surface shrinkage and the number of narrow triangles while keeping sharp features, so surface reconstruction and surface deformation monitoring of the airship can be conducted precisely.
Pseudo color ghost coding imaging with pseudo thermal light
NASA Astrophysics Data System (ADS)
Duan, De-yang; Xia, Yun-jie
2018-04-01
We present a new pseudo color imaging scheme, named pseudo color ghost coding imaging, based on ghost imaging but with a multiwavelength source modulated by a spatial light modulator. In contrast to conventional pseudo color imaging, where there are no nondegenerate-wavelength spatial correlations yielding extra monochromatic images, the degenerate-wavelength and nondegenerate-wavelength spatial correlations between the idler beam and the signal beam can be obtained simultaneously. This scheme can obtain a more colorful image of higher quality than conventional pseudo color coding techniques. More importantly, a significant advantage of the scheme over conventional pseudo color coding imaging techniques is that images with different colors can be obtained without changing the light source or the spatial filter.
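The spatial-correlation computation underlying ghost imaging can be sketched for a single wavelength: the image is recovered as G(x) = <B I(x)> − <B><I(x)>, the covariance between the bucket (single-pixel) detection B and the reference pattern intensity I(x). The pattern statistics and the toy one-dimensional object below are illustrative:

```python
import random

def ghost_image(obj, n_patterns=4000, seed=7):
    """Computational ghost imaging sketch: correlate random illumination
    patterns with bucket detections. The covariance
    G(x) = <B * I(x)> - <B><I(x)> is proportional to the object
    transmittance T(x) for statistically independent pattern pixels."""
    random.seed(seed)
    npix = len(obj)
    sum_b, sum_i, sum_bi = 0.0, [0.0] * npix, [0.0] * npix
    for _ in range(n_patterns):
        pattern = [random.random() for _ in range(npix)]
        bucket = sum(p * t for p, t in zip(pattern, obj))  # total detected light
        sum_b += bucket
        for x in range(npix):
            sum_i[x] += pattern[x]
            sum_bi[x] += bucket * pattern[x]
    n = n_patterns
    return [sum_bi[x] / n - (sum_b / n) * (sum_i[x] / n) for x in range(npix)]

obj = [0, 0, 1, 0, 0, 2, 0, 0]   # toy 1-D "object" transmittance
recon = ghost_image(obj)
```

The multiwavelength scheme above runs such correlations for degenerate and nondegenerate wavelength pairs simultaneously; this sketch shows only the single-channel building block.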
Fault Analysis and Detection in Microgrids with High PV Penetration
DOE Office of Scientific and Technical Information (OSTI.GOV)
El Khatib, Mohamed; Hernandez Alvidrez, Javier; Ellis, Abraham
In this report we focus on analyzing the behaviour of current-controlled PV inverters under faults in order to develop fault detection schemes for microgrids with high PV penetration. An inverter model suitable for steady-state fault studies is presented, and the impact of PV inverters on two protection elements is analyzed. The studied protection elements are the superimposed-quantities-based directional element and the negative sequence directional element. Additionally, several non-overcurrent fault detection schemes are discussed in this report for microgrids with high PV penetration. A detailed time-domain simulation study is presented to assess the performance of the presented fault detection schemes under different microgrid modes of operation.
NASA Astrophysics Data System (ADS)
Xiao, Dan; Li, Xiaowei; Liu, Su-Juan; Wang, Qiong-Hua
2018-03-01
In this paper, a new scheme of multiple-image encryption and display based on computer-generated holography (CGH) and maximum length cellular automata (MLCA) is presented. In this scheme, a computer-generated hologram carrying the information of the three primitive images is first generated by a modified Gerchberg-Saxton (GS) iterative algorithm using three different fractional orders in the fractional Fourier domain. The hologram is then encrypted using an MLCA mask. The ciphertext can be decrypted given the fractional orders and the MLCA rules. Numerical simulations and experimental display results verify the validity and feasibility of the proposed scheme.
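The cellular-automata masking stage can be illustrated with a simple XOR keystream generated by an elementary CA; rule 90 is used here purely as a stand-in for the paper's maximum length CA, and the seed and plaintext bits are arbitrary. Because XOR masking is self-inverse, applying the same mask decrypts:

```python
def ca_mask(seed_row, steps):
    """Evolve a rule-90 cellular automaton (each cell becomes the XOR of
    its two neighbours, periodic boundary) and concatenate the rows into
    a keystream bit list. Rule 90 is an illustrative stand-in for the
    maximum length CA used in the paper."""
    row, stream = list(seed_row), []
    n = len(row)
    for _ in range(steps):
        stream.extend(row)
        row = [row[(i - 1) % n] ^ row[(i + 1) % n] for i in range(n)]
    return stream

def xor_cipher(bits, mask):
    """Encrypt or decrypt by XOR with the CA keystream (self-inverse)."""
    return [b ^ m for b, m in zip(bits, mask)]

plain = [1, 0, 1, 1, 0, 0, 1, 0] * 4          # stand-in for hologram bits
mask = ca_mask([1, 0, 0, 1, 1, 0, 1, 0], steps=4)
cipher = xor_cipher(plain, mask)
recovered = xor_cipher(cipher, mask)
```

In the full scheme the masked data are the hologram values, and the decryption additionally requires the correct fractional Fourier orders to reconstruct the images.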
Dynamic principle for ensemble control tools.
Samoletov, A; Vasiev, B
2017-11-28
Dynamical equations describing physical systems in contact with a thermal bath are commonly extended by mathematical tools called "thermostats." These tools are designed for sampling ensembles in statistical mechanics. Here we propose a dynamic principle underlying a range of thermostats; it is derived from fundamental laws of statistical physics and ensures invariance of the canonical measure. The principle covers both stochastic and deterministic thermostat schemes, and it has a clear advantage over a range of proposed and widely used schemes that rest on formal mathematical reasoning alone. Following the derivation of the proposed principle, we show its generality and illustrate its applications, including the design of temperature control tools that differ from the Nosé-Hoover-Langevin scheme.
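As a minimal concrete example of a stochastic thermostat that preserves the canonical measure, consider an underdamped Langevin integrator for a 1-D harmonic oscillator; all parameters are illustrative, and this is not the Nosé-Hoover-Langevin scheme discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters: unit mass, friction, and temperature.
kT, gamma, dt, m = 1.0, 1.0, 0.01, 1.0
steps = 200_000
noise = rng.normal(size=steps)

x, v, ke = 0.0, 0.0, 0.0
for i in range(steps):
    f = -x  # harmonic force with spring constant 1
    # Euler-Maruyama step of the underdamped Langevin equation:
    # dv = (f/m - gamma*v) dt + sqrt(2*gamma*kT/m) dW
    v += (f / m - gamma * v) * dt + np.sqrt(2 * gamma * kT * dt / m) * noise[i]
    x += v * dt
    ke += 0.5 * m * v * v

avg_ke = ke / steps
print(avg_ke)  # equipartition: should hover near kT/2 = 0.5
```

The time-averaged kinetic energy settling near kT/2 is the signature of canonical sampling that any valid thermostat, stochastic or deterministic, must reproduce.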
A hybrid deep learning approach to predict malignancy of breast lesions using mammograms
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Heidari, Morteza; Mirniaharikandehei, Seyedehnafiseh; Gong, Jing; Qian, Wei; Qiu, Yuchen; Zheng, Bin
2018-03-01
Applying deep learning technology to the medical imaging informatics field has recently attracted extensive research interest. However, the limited size of medical image datasets often reduces the performance and robustness of deep learning based computer-aided detection and/or diagnosis (CAD) schemes. To address this challenge, this study develops and evaluates a new hybrid deep learning based CAD approach to predict the likelihood that a breast lesion detected on a mammogram is malignant. In this approach, a deep convolutional neural network (CNN) was first pre-trained on the ImageNet dataset and served as a feature extractor. A pseudo-color region of interest (ROI) method was used to generate ROIs with RGB channels from the mammographic images as input to the pre-trained network. The transferred CNN features from different layers were then obtained, and a linear support vector machine (SVM) was trained for the prediction task. Applied to a dataset of 301 suspicious breast lesions with a leave-one-case-out validation method, the areas under the ROC curve (AUC) were 0.762 and 0.792 for the traditional CAD scheme and the proposed deep learning based CAD scheme, respectively. An ensemble classifier combining the classification scores of the two schemes yielded an improved AUC of 0.813. The results demonstrate the feasibility and potentially improved performance of this hybrid deep learning approach for developing CAD schemes from relatively small medical image datasets.
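The final stage of such a pipeline, a linear classifier trained on fixed transferred features, can be sketched as below. The synthetic features stand in for CNN activations, logistic regression stands in for the paper's linear SVM, and all dimensions and data are made up.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for transferred CNN features (dimensions are made up).
n, d = 300, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

# Logistic regression by gradient descent (a stand-in for the linear SVM).
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
    w -= 0.1 * X.T @ (p - y) / n

# AUC via the Mann-Whitney rank statistic.
scores = X @ w
pos, neg = scores[y == 1], scores[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(auc)  # near-separable synthetic data, so the AUC is high
```

The key design point is that only this cheap linear stage is trained on the small medical dataset, while the expensive feature extractor is reused from ImageNet.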
Design and evaluation of sparse quantization index modulation watermarking schemes
NASA Astrophysics Data System (ADS)
Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter
2008-08-01
In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution, and compact storage. The most crucial disadvantages are unauthorized copying and copyright infringement, through which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, reproduction in the digital case is simple and exact. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images based on selective quantization of the coefficients of a wavelet-transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of deploying error correction codes on the most promising configurations is examined. The utilization of BCH codes (Bose, Ray-Chaudhuri, Hocquenghem) results in improved robustness as long as the capacity of the error codes is not exceeded (cliff effect).
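Scalar quantization-index modulation, the building block of the schemes evaluated here, can be sketched as two interleaved quantizers; the step size and the use of raw Gaussian samples (rather than actual wavelet coefficients) are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(4)

delta = 4.0  # quantization step (illustrative)

def qim_embed(x, bits):
    """Quantize x onto the lattice selected by each bit (dither 0 or delta/2)."""
    d = bits * delta / 2.0
    return delta * np.round((x - d) / delta) + d

def qim_extract(x):
    """Decode each sample to the bit whose lattice lies closer."""
    d0 = np.abs(x - qim_embed(x, np.zeros_like(x)))
    d1 = np.abs(x - qim_embed(x, np.ones_like(x)))
    return (d1 < d0).astype(int)

coeffs = rng.normal(0.0, 10.0, size=64)   # stand-ins for wavelet coefficients
bits = rng.integers(0, 2, size=64)
marked = qim_embed(coeffs, bits)

# Any perturbation smaller than delta/4 leaves the watermark decodable.
noisy = marked + rng.uniform(-0.9, 0.9, size=64)
print((qim_extract(noisy) == bits).all())  # True: noise stays below delta/4
```

The step size trades embedding distortion (at most delta/2 per coefficient) against robustness (tolerating noise up to delta/4), which is exactly the trade-off the attack experiments in the paper probe.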
Finite difference schemes for long-time integration
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1993-01-01
Finite difference schemes for the evaluation of first and second derivatives are presented. These second-order compact schemes were designed for long-time integration of evolution equations by solving a quadratic constrained minimization problem. The quadratic cost function measures the global truncation error while taking into account the initial data. The resulting schemes allow integration times four or more times longer than similar previously studied schemes. A similar approach was used to obtain improved integration schemes.
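For context, the accuracy trade-off such designs optimize is visible even with plain central-difference stencils; the sketch below compares a second- and a fourth-order first-derivative stencil on a periodic grid (a generic example, not the paper's optimized compact schemes).

```python
import numpy as np

# Periodic grid so np.roll implements the wrap-around stencil exactly.
N = 64
h = 2 * np.pi / N
x = h * np.arange(N)
f = np.sin(x)

# 2nd-order central difference for f'.
d2 = (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)
# 4th-order central difference for f'.
d4 = (-np.roll(f, -2) + 8 * np.roll(f, -1)
      - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12 * h)

exact = np.cos(x)
err2 = np.max(np.abs(d2 - exact))
err4 = np.max(np.abs(d4 - exact))
print(err2, err4)  # the wider stencil is far more accurate at the same h
```

Truncation-error constants like these accumulate over long-time integration, which is why minimizing the global (rather than local) error pays off for the schemes described above.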
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashyralyev, Allaberen; Okur, Ulker
In the present paper, the Crank-Nicolson difference scheme for the numerical solution of the stochastic parabolic equation with a dependent operator coefficient is considered. A theorem on convergence estimates for the solution of this difference scheme is established. In applications, convergence estimates for the solutions of difference schemes for the numerical solution of three mixed problems for parabolic equations are obtained. Numerical results are given.
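The deterministic core of the Crank-Nicolson scheme can be sketched on the model heat equation u_t = u_xx with zero Dirichlet boundaries; the stochastic terms and operator coefficients treated in the paper are omitted from this sketch.

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on [0, 1], u(0) = u(1) = 0.
N, dt, T = 50, 0.001, 0.1
h = 1.0 / N
x = np.linspace(0, 1, N + 1)
u = np.sin(np.pi * x)  # initial condition with known exact decay

# Second-difference operator on the interior nodes.
main = -2.0 * np.ones(N - 1)
off = np.ones(N - 2)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

I = np.eye(N - 1)
L = I - 0.5 * dt * A  # implicit (left) operator
R = I + 0.5 * dt * A  # explicit (right) operator

for _ in range(round(T / dt)):
    u[1:-1] = np.linalg.solve(L, R @ u[1:-1])

exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
print(err)  # second-order accurate in both dt and h
```

Averaging the spatial operator between time levels is what gives the scheme its second-order accuracy in time while remaining unconditionally stable.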
NASA Astrophysics Data System (ADS)
Tan, Maxine; Aghaei, Faranak; Wang, Yunzhi; Qian, Wei; Zheng, Bin
2016-03-01
Current commercialized CAD schemes have high false-positive (FP) detection rates and correlate highly with radiologists in positive lesion detection. We therefore recently investigated a new approach to improve the efficacy of applying CAD to assist radiologists in reading and interpreting screening mammograms. Namely, we developed a new global-feature-based CAD scheme that can cue a warning sign on cases at high risk of being positive. In this study, we investigate the possibility of fusing global (case-based) scores with local (lesion-based) CAD scores using an adaptive cueing method. We hypothesize that global features, extracted from the whole breast region, differ from and can supplement the locally extracted features computed from the segmented lesion regions only. On a large and diverse full-field digital mammography (FFDM) testing dataset of 785 cases (347 negative and 438 cancer cases with masses only), we ran our lesion-based and case-based CAD schemes "as is" on the whole dataset. To assess the supplementary information provided by the global features, we used an adaptive cueing method to adjust the original CAD-generated detection score (Sorg) of a detected suspicious mass region based on the computed case-based score (Scase) of the case associated with that region. With this method, better sensitivity was obtained at lower FP rates (<= 1 FP per image): increases in sensitivity (in the FROC curves) of up to 6.7% and 8.2% were obtained for the ROI-based and case-based results, respectively.
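The paper's adaptive cueing rule is not reproduced here, but the general idea of adjusting a region-level score with a case-level score can be sketched with a simple, entirely hypothetical linear blend:

```python
import numpy as np

def adaptive_cueing(s_org, s_case, alpha=0.3):
    """Blend each region-level score with the case-level score.

    alpha is a hypothetical blending weight, not a parameter from the paper.
    """
    return (1 - alpha) * s_org + alpha * s_case

s_org = np.array([0.4, 0.7, 0.2])  # region-level CAD scores (made up)
s_case = 0.9                       # case-level score flags this case as high risk
print(adaptive_cueing(s_org, s_case))  # every region score is nudged upward
```

The effect illustrated here is the one the study exploits: a high-risk case score raises suspicious-region scores, improving sensitivity at a fixed FP rate.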
An Identity-Based Anti-Quantum Privacy-Preserving Blind Authentication in Wireless Sensor Networks.
Zhu, Hongfei; Tan, Yu-An; Zhu, Liehuang; Wang, Xianmin; Zhang, Quanxin; Li, Yuanzhang
2018-05-22
With the development of wireless sensor networks, IoT devices are crucial for the Smart City; these devices change people's lives through systems such as e-payment and e-voting. However, in these two systems the state-of-the-art authentication protocols, based on traditional number theory, cannot withstand a quantum computer attack. In order to protect user privacy and guarantee the trustworthiness of big data, we propose a new identity-based blind signature scheme based on the number theory research unit (NTRU) lattice. The scheme uses a rejection sampling theorem instead of constructing a trapdoor, does not depend on a complex public key infrastructure, and can resist quantum computer attacks. We then design an e-payment protocol using the proposed scheme. Furthermore, we prove that our scheme is secure in the random oracle model and satisfies confidentiality, integrity, and non-repudiation. Finally, we demonstrate that the proposed scheme outperforms traditional identity-based blind signature schemes in signing and verification speed, and outperforms other lattice-based blind signatures in signing speed, verification speed, and signing secret key size.
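The rejection-sampling idea the scheme relies on (in Lyubashevsky-style lattice signatures it makes the output distribution independent of the secret key) can be illustrated in its generic form; the target and proposal densities below are arbitrary and unrelated to the actual signature scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

def rejection_sample(n, f, g_sample, g_pdf, M):
    """Draw n samples from the density proportional to f, given f <= M * g."""
    out = []
    while len(out) < n:
        x = g_sample()
        if rng.uniform() < f(x) / (M * g_pdf(x)):  # accept with prob f/(M*g)
            out.append(x)
    return np.array(out)

# Target: a standard normal truncated to [-3, 3]; proposal: uniform on [-3, 3].
f = lambda x: np.exp(-x * x / 2.0)
samples = rejection_sample(5000, f,
                           lambda: rng.uniform(-3.0, 3.0),
                           lambda x: 1.0 / 6.0, M=6.0)
print(samples.mean(), samples.std())  # roughly 0 and just under 1
```

Accepted samples follow the target density regardless of how the proposal was generated; in the signature setting this is what strips any statistical dependence on the secret from the published signatures.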
Critical analysis of fragment-orbital DFT schemes for the calculation of electronic coupling values
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schober, Christoph; Reuter, Karsten; Oberhofer, Harald, E-mail: harald.oberhofer@ch.tum.de
2016-02-07
We present a critical analysis of the popular fragment-orbital density-functional theory (FO-DFT) scheme for the calculation of electronic coupling values. We discuss the characteristics of different possible formulations or “flavors” of the scheme, which differ by the number of electrons in the calculation of the fragments and the construction of the Hamiltonian. In addition to two previously described variants based on neutral fragments, we present a third version taking a different route to the approximate diabatic state by explicitly considering charged fragments. In applying these FO-DFT flavors to the two molecular test sets HAB7 (electron transfer) and HAB11 (hole transfer), we find that our new scheme gives improved electronic couplings for HAB7 (−6.2% decrease in mean relative signed error) and greatly improved electronic couplings for HAB11 (−15.3% decrease in mean relative signed error). A systematic investigation of the influence of exact exchange on the electronic coupling values shows that the use of hybrid functionals in FO-DFT calculations improves the electronic couplings, giving values close to or even better than more sophisticated constrained DFT calculations. Comparing the accuracy and computational cost of each variant, we devise simple rules to choose the best possible flavor depending on the task. For accuracy, our new scheme with charged-fragment calculations performs best, while the variant with neutral fragments is numerically more efficient at reasonable accuracy.
An efficient scheme for automatic web pages categorization using the support vector machine
NASA Astrophysics Data System (ADS)
Bhalla, Vinod Kumar; Kumar, Neeraj
2016-07-01
In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages within a fraction of a second, which requires efficient categorization of web page contents. Manually categorizing billions of web pages with high accuracy is infeasible, and most existing techniques reported in the literature are semi-automatic and cannot reach a high level of accuracy. This paper therefore proposes an automatic categorization of web pages into domain categories based on the identification of specific and relevant features of the web pages. In the proposed scheme, features are first extracted and evaluated, and the feature set is then filtered for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed. Feature extraction and weight assignment rely on a domain-specific keyword list compiled from various domain pages; the keyword list is reduced on the basis of keyword ids, and stemming of keywords and tag text is applied to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated with a machine learning method combining feature extraction and statistical analysis, using a support vector machine kernel as the classification tool. The results confirm the effectiveness of the proposed scheme in terms of accuracy across different categories of web pages.
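The keyword-based feature extraction step can be sketched as follows; the keyword list, toy suffix-stripping stemmer, and sample page are invented for illustration, and the paper's DOM-based extraction tool is far richer.

```python
import re
from collections import Counter

# Hypothetical domain-specific keyword list (the paper compiles its own).
keywords = ["processor", "memory", "graphic", "battery"]

def features(html, keywords):
    """Count stemmed keyword occurrences in the visible text of a page."""
    text = re.sub(r"<[^>]+>", " ", html).lower()   # crude tag stripping
    tokens = re.findall(r"[a-z]+", text)
    stems = [t.rstrip("s") for t in tokens]        # toy stemmer
    counts = Counter(stems)
    return [counts[k] for k in keywords]

page = ("<html><body><h1>Laptop review</h1><p>Fast processors, "
        "16GB memory and great graphics. Battery: 10h.</p></body></html>")
print(features(page, keywords))  # one hit per keyword in this sample page
```

The resulting count vector (optionally weighted, as the paper describes) is what the SVM consumes for categorization.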
Wavelet-based scalable L-infinity-oriented compression.
Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter
2006-09-01
Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L(infinity)-oriented compression, or at most provide a very limited number of potential L(infinity) bit-stream truncation points. We propose a new multidimensional wavelet-based L(infinity)-constrained scalable coding framework that generates a fully embedded L(infinity)-oriented bit stream while retaining the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L(infinity) coding sense.
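The defining property of L-infinity-oriented coding, a hard bound on the per-sample reconstruction error, can be illustrated with a uniform quantizer of odd step 2d+1, which guarantees a maximum absolute error of d on integer data; this generic sketch is unrelated to the paper's embedded wavelet codec.

```python
import numpy as np

rng = np.random.default_rng(6)

def quantize(x, d):
    """Uniform quantization with step 2d+1 (odd step keeps integer symmetry)."""
    return np.round(x / (2 * d + 1)).astype(int)

def dequantize(q, d):
    return q * (2 * d + 1)

x = rng.integers(-100, 101, size=1000)  # stand-ins for integer coefficients
for d in (1, 3, 7):
    err = np.max(np.abs(dequantize(quantize(x, d), d) - x))
    print(d, err)  # the reconstruction error never exceeds d
```

An embedded L-infinity codec refines such bounds progressively, so that truncating the bit stream at any point still yields a known worst-case error, rather than only a known mean-squared error.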