Sample records for correction scheme based

  1. An Inherent-Optical-Property-Centered Approach to Correct the Angular Effects in Water-Leaving Radiance

    DTIC Science & Technology

    2011-07-01

10%. These results demonstrate that the IOP-based BRDF correction scheme (which is composed of the Rrs model along with the IOP retrieval...distribution was averaged over 10 min 5. Validation of the IOP-Based BRDF Correction Scheme The IOP-based BRDF correction scheme is applied to both...oceanic and coastal waters were very consistent qualitatively and quantitatively and thus validate the IOP-based BRDF correction system, at least

  2. Joint Schemes for Physical Layer Security and Error Correction

    ERIC Educational Resources Information Center

    Adamo, Oluwayomi

    2011-01-01

The major challenges facing resource-constrained wireless devices are error resilience, security and speed. Three joint schemes are presented in this research, which can be broadly divided into error-correction-based and cipher-based. The error-correction-based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A…

  3. Adaptive Packet Combining Scheme in Three State Channel Model

    NASA Astrophysics Data System (ADS)

    Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak

    2018-01-01

The two popular packet-combining-based error correction techniques are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: the PC scheme has better throughput than the APC scheme but suffers from a higher packet error rate. Because the state of a wireless channel changes randomly over time, individual application of the SR ARQ, PC or APC scheme cannot deliver the desired throughput; better throughput can be achieved if the transmission scheme is chosen according to the channel condition. Based on this approach, an adaptive packet combining scheme has been proposed that switches among the PC, APC and SR ARQ schemes according to the channel condition. Experimentally, it was observed that the error correction capability and throughput of the proposed scheme were significantly better than those of the SR ARQ, PC and APC schemes.
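
    A minimal sketch of the bitwise majority voting that underlies APC, assuming three retransmitted copies of the same packet; the function name and test data are illustrative, and the paper's adaptive switching among SR ARQ, PC and APC is not modelled here.

    ```python
    # Bitwise majority vote over three received copies of a packet: any bit
    # position where at most one copy is corrupted is recovered correctly.
    def apc_majority(copies: list[bytes]) -> bytes:
        assert len(copies) == 3 and len(set(map(len, copies))) == 1
        out = bytearray()
        for b0, b1, b2 in zip(*copies):
            # A bit is 1 in the output if it is 1 in at least two copies.
            out.append((b0 & b1) | (b1 & b2) | (b0 & b2))
        return bytes(out)

    sent = b"\x5a\xc3"
    received = [b"\x5b\xc3", b"\x5a\xc2", b"\x5a\xc3"]  # independent bit errors
    assert apc_majority(received) == sent
    ```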

  4. Asynchronous error-correcting secure communication scheme based on fractional-order shifting chaotic system

    NASA Astrophysics Data System (ADS)

    Chao, Luo

    2015-11-01

In this paper, a novel digital secure communication scheme is proposed. Unlike the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems, namely their susceptibility to environmental interference. Moreover, the proposed scheme can check and correct, in real time, the transmission errors and data loss that arise in the process of communication. To guarantee security, a fractional-order complex chaotic system with a shifting order is used to modulate the transmitted signal, which exhibits high nonlinearity and complexity in both the frequency and time domains. Numerical simulations demonstrate the effectiveness and feasibility of the scheme.

  5. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

In this paper, we address the problem of unreliable motion vectors, which cause visual artifacts but cannot be detected from high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct these unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction and frame interpolation stages. Since our method gradually corrects unreliable motion vectors according to their reliability, we can effectively discover areas where no reliable motion is available, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on an analysis of the surrounding motion distribution. As a result, frames interpolated with the proposed scheme have clearer structure edges, and ghost artifacts are greatly reduced. Experimental results show that our interpolated results have better visual quality than those of other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.
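
    One building block of such processing can be sketched as replacing a motion vector flagged unreliable with the component-wise median of its reliable neighbours. This is a simplified stand-in for the paper's correlation-based classification and gradual correction; the array layout and the 8-neighbourhood are assumptions.

    ```python
    import numpy as np

    # Toy correction step: overwrite each unreliable motion vector with the
    # component-wise median of the reliable vectors in its 3x3 neighbourhood.
    def correct_mv(mv: np.ndarray, reliable: np.ndarray) -> np.ndarray:
        """mv: (H, W, 2) float motion field; reliable: (H, W) boolean mask."""
        out = mv.copy()
        H, W = reliable.shape
        for y in range(H):
            for x in range(W):
                if reliable[y, x]:
                    continue
                ys, xs = slice(max(0, y - 1), y + 2), slice(max(0, x - 1), x + 2)
                mask = reliable[ys, xs]
                if mask.any():  # leave the vector untouched if no neighbour is reliable
                    out[y, x] = np.median(mv[ys, xs][mask], axis=0)
        return out
    ```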

  6. Efficacy of distortion correction on diffusion imaging: comparison of FSL eddy and eddy_correct using 30 and 60 directions diffusion encoding.

    PubMed

    Yamada, Haruyasu; Abe, Osamu; Shizukuishi, Takashi; Kikuta, Junko; Shinozaki, Takahiro; Dezawa, Ko; Nagano, Akira; Matsuda, Masayuki; Haradome, Hiroki; Imamura, Yoshiki

    2014-01-01

Diffusion imaging is a unique noninvasive tool for assessing brain white matter trajectory and integrity in vivo. However, the technique suffers from spatial distortion and signal pileup or dropout originating from local susceptibility gradients and eddy currents. Although several methods exist to mitigate these problems, most are applicable to either susceptibility- or eddy-current-induced distortion alone, with a few exceptions. The present study compared the correction efficiency of the FSL tools "eddy_correct" and the combination of "eddy" and "topup" in terms of diffusion-derived fractional anisotropy (FA). Brain diffusion images were acquired from 10 healthy subjects using 30- and 60-direction encoding schemes based on electrostatic repulsive forces. For the 30-direction encoding, two sets of diffusion images were acquired with the same parameters, except for the phase-encode blips, which had opposing polarities along the anteroposterior direction. For the 60-direction encoding, non-diffusion-weighted and diffusion-weighted images were obtained with forward phase-encoding blips, and non-diffusion-weighted images with the same parameters, except for the phase-encode blips, which had opposing polarities. FA images with and without distortion correction were compared in a voxel-wise manner with tract-based spatial statistics. Images corrected with eddy and topup had higher FA values than images left uncorrected or corrected with eddy_correct using trilinear (the FSL default) or spline interpolation in most white matter skeletons, for both encoding schemes. Furthermore, the 60-direction encoding scheme was superior to the 30-direction scheme, as measured by increased FA values, despite a comparable acquisition time. This study supports the combination of eddy and topup as a superior correction tool in diffusion imaging over the eddy_correct tool, especially with trilinear interpolation, using the 60-direction encoding scheme.
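
    For readers who want to reproduce the comparison, the two pipelines can be driven from Python roughly as below. The command names (eddy_correct, topup, eddy) are the documented FSL tools; all file names are placeholders, and the exact options should be checked against the FSL documentation for the installed version.

    ```python
    import subprocess

    # Pipeline A: legacy eddy_correct (affine registration to volume 0).
    subprocess.run(["eddy_correct", "dwi.nii.gz", "dwi_ec.nii.gz", "0"],
                   check=True)

    # Pipeline B: susceptibility + eddy-current correction with topup and
    # eddy, using b=0 images acquired with opposing phase-encode polarities.
    subprocess.run(["topup", "--imain=b0_blip_updown.nii.gz",
                    "--datain=acqparams.txt", "--config=b02b0.cnf",
                    "--out=topup_out"], check=True)
    subprocess.run(["eddy", "--imain=dwi.nii.gz", "--mask=brain_mask.nii.gz",
                    "--acqp=acqparams.txt", "--index=index.txt",
                    "--bvecs=bvecs", "--bvals=bvals",
                    "--topup=topup_out", "--out=dwi_eddy"], check=True)
    ```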

  7. Implementation of an approximate self-energy correction scheme in the orthogonalized linear combination of atomic orbitals method of band-structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Z.; Ching, W.Y.

Based on the Sterne-Inkson model for the self-energy correction to the single-particle energy in the local-density approximation (LDA), we have implemented an approximate energy- and k-dependent GW correction scheme to the orthogonalized linear-combination-of-atomic-orbitals-based local-density calculation for insulators. In contrast to the approach of Jenkins, Srivastava, and Inkson, we evaluate the on-site exchange integrals using the LDA Bloch functions throughout the Brillouin zone. By using a k-weighted band gap E_g…

  8. An analog gamma correction scheme for high dynamic range CMOS logarithmic image sensors.

    PubMed

    Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi

    2014-12-15

In this paper, a novel analog gamma correction scheme for a logarithmic image sensor, dedicated to minimizing the quantization noise of high-dynamic-range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase, while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and by experimental results measured on our designed test structure, which is fabricated in a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process.
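
    The motivation can be checked numerically: applying gamma digitally after a uniform ADC amplifies quantization noise in dark regions, whereas folding the gamma curve into the conversion, as the non-linear VCO-based ADC does, keeps the error at the half-LSB level. The bit depth and gamma value below are illustrative assumptions, not the paper's design values.

    ```python
    import numpy as np

    gamma, bits = 1 / 2.2, 10
    levels = 2 ** bits
    x = np.linspace(1e-4, 1.0, 100_000)      # normalized sensor output

    # (a) uniform ADC first, digital gamma correction afterwards
    q_lin = np.round(x * (levels - 1)) / (levels - 1)
    err_digital = np.abs(q_lin ** gamma - x ** gamma)

    # (b) gamma folded into the conversion, uniform quantization afterwards
    q_gam = np.round(x ** gamma * (levels - 1)) / (levels - 1)
    err_analog = np.abs(q_gam - x ** gamma)

    print(f"max error, digital gamma: {err_digital.max():.4f}")  # large in dark tones
    print(f"max error, gamma in ADC : {err_analog.max():.4f}")   # ~ half an LSB
    ```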

  9. An Analog Gamma Correction Scheme for High Dynamic Range CMOS Logarithmic Image Sensors

    PubMed Central

    Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi

    2014-01-01

In this paper, a novel analog gamma correction scheme for a logarithmic image sensor, dedicated to minimizing the quantization noise of high-dynamic-range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during the analog-to-digital conversion. As a result, the quantization noise does not increase, while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and by experimental results measured on our designed test structure, which is fabricated in a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process. PMID:25517692

  10. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit

    2008-12-01

Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  11. Efficacy of Distortion Correction on Diffusion Imaging: Comparison of FSL Eddy and Eddy_Correct Using 30 and 60 Directions Diffusion Encoding

    PubMed Central

    Yamada, Haruyasu; Abe, Osamu; Shizukuishi, Takashi; Kikuta, Junko; Shinozaki, Takahiro; Dezawa, Ko; Nagano, Akira; Matsuda, Masayuki; Haradome, Hiroki; Imamura, Yoshiki

    2014-01-01

Diffusion imaging is a unique noninvasive tool for assessing brain white matter trajectory and integrity in vivo. However, the technique suffers from spatial distortion and signal pileup or dropout originating from local susceptibility gradients and eddy currents. Although several methods exist to mitigate these problems, most are applicable to either susceptibility- or eddy-current-induced distortion alone, with a few exceptions. The present study compared the correction efficiency of the FSL tools "eddy_correct" and the combination of "eddy" and "topup" in terms of diffusion-derived fractional anisotropy (FA). Brain diffusion images were acquired from 10 healthy subjects using 30- and 60-direction encoding schemes based on electrostatic repulsive forces. For the 30-direction encoding, two sets of diffusion images were acquired with the same parameters, except for the phase-encode blips, which had opposing polarities along the anteroposterior direction. For the 60-direction encoding, non-diffusion-weighted and diffusion-weighted images were obtained with forward phase-encoding blips, and non-diffusion-weighted images with the same parameters, except for the phase-encode blips, which had opposing polarities. FA images with and without distortion correction were compared in a voxel-wise manner with tract-based spatial statistics. Images corrected with eddy and topup had higher FA values than images left uncorrected or corrected with eddy_correct using trilinear (the FSL default) or spline interpolation in most white matter skeletons, for both encoding schemes. Furthermore, the 60-direction encoding scheme was superior to the 30-direction scheme, as measured by increased FA values, despite a comparable acquisition time. This study supports the combination of eddy and topup as a superior correction tool in diffusion imaging over the eddy_correct tool, especially with trilinear interpolation, using the 60-direction encoding scheme. PMID:25405472

  12. Computational technique for stepwise quantitative assessment of equation correctness

    NASA Astrophysics Data System (ADS)

    Othman, Nuru'l Izzah; Bakar, Zainab Abu

    2017-04-01

Many of the computer-aided mathematics assessment systems available today can implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested on worked solutions to linear algebraic equations in one variable. 350 working schemes comprising 1385 responses were collected using a marking-engine prototype developed from the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
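
    A hedged sketch of the multiset comparison at the heart of such a technique: each response is reduced to a bag of tokens and scored by multiset overlap. The character-level tokenizer and the scoring formula below are illustrative stand-ins for the paper's document model, not its exact definitions.

    ```python
    from collections import Counter

    def tokenize(equation: str) -> Counter:
        # Illustrative: treat each non-space character as a token.
        return Counter(equation.replace(" ", ""))

    def similarity(expected: str, response: str) -> float:
        """Multiset overlap in [0, 1]: 1.0 means structurally identical."""
        a, b = tokenize(expected), tokenize(response)
        overlap = sum((a & b).values())      # multiset intersection size
        return 2 * overlap / (sum(a.values()) + sum(b.values()))

    print(similarity("2x+4=10", "2x=10-4"))  # partial structural credit
    print(similarity("2x=6", "2x=6"))        # 1.0
    ```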

  13. Simple wavefront correction framework for two-photon microscopy of in-vivo brain

    PubMed Central

    Galwaduge, P. T.; Kim, S. H.; Grosberg, L. E.; Hillman, E. M. C.

    2015-01-01

    We present an easily implemented wavefront correction scheme that has been specifically designed for in-vivo brain imaging. The system can be implemented with a single liquid crystal spatial light modulator (LCSLM), which makes it compatible with existing patterned illumination setups, and provides measurable signal improvements even after a few seconds of optimization. The optimization scheme is signal-based and does not require exogenous guide-stars, repeated image acquisition or beam constraint. The unconstrained beam approach allows the use of Zernike functions for aberration correction and Hadamard functions for scattering correction. Low order corrections performed in mouse brain were found to be valid up to hundreds of microns away from the correction location. PMID:26309763

  14. Correction of phase errors in quantitative water-fat imaging using a monopolar time-interleaved multi-echo gradient echo sequence.

    PubMed

    Ruschke, Stefan; Eggers, Holger; Kooijman, Hendrik; Diefenbach, Maximilian N; Baum, Thomas; Haase, Axel; Rummeny, Ernst J; Hu, Houchun H; Karampinos, Dimitrios C

    2017-09-01

To propose a phase error correction scheme for monopolar time-interleaved multi-echo gradient echo water-fat imaging that allows accurate and robust complex-based quantification of the proton density fat fraction (PDFF). A three-step phase correction scheme is proposed to address (a) a phase term induced by echo misalignments, which can be measured with a reference scan using reversed readout polarity; (b) a phase term induced by the concomitant gradient field, which can be predicted from the gradient waveforms; and (c) a phase offset between time-interleaved echo trains. Simulations were carried out to characterize the concomitant-gradient-field-induced PDFF bias and the performance of estimating the phase offset between time-interleaved echo trains. Phantom experiments and in vivo liver and thigh imaging were performed to study the relevance of each of the three phase correction steps to PDFF accuracy and robustness. The simulation, phantom and in vivo results showed, in agreement with the theory, an echo-time-dependent PDFF bias introduced by the three phase error sources. The proposed phase correction scheme was found to provide accurate PDFF estimation independent of the employed echo time combination. Complex-based time-interleaved water-fat imaging was found to give accurate and robust PDFF measurements after applying the proposed phase error correction scheme. Magn Reson Med 78:984-996, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  15. Robot-Arm Dynamic Control by Computer

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Tarn, Tzyh J.; Chen, Yilong J.

    1987-01-01

Feedforward and feedback schemes linearize responses to control inputs. Method for control of robot arm based on computed nonlinear feedback and state transformations to linearize system and decouple robot end-effector motions along each of the Cartesian axes, augmented with optimal scheme for correction of errors in workspace. Major new feature of control method: optimal error-correction loop operates directly on task level and not on joint-servocontrol level.

  16. Bias correction of daily satellite precipitation data using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.

    2018-05-01

The Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) product is produced by blending the satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process aims to reduce the bias of CHIRP. However, the biases of CHIRPS in statistical moments and quantile values remain high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using observed precipitation data. The scheme combines a Genetic Algorithm with a Nonlinear Power Transformation, and the results were evaluated across seasons and elevation levels. The experimental results reveal that the scheme robustly reduces the bias in variance (around 100% reduction) and leads to a reduction of the first- and second-quantile biases. However, the bias in the third quantile is reduced only during dry months. Across elevation levels, the performance of the bias correction process differs significantly only in the skewness indicator.
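
    A minimal sketch of the idea, assuming a power-law transform y = a·x^b whose parameters are found by an evolutionary search that matches the gauge mean and variance; SciPy's differential evolution stands in for the paper's genetic algorithm, and the rainfall data are synthetic placeholders.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(0)
    gauge = rng.gamma(2.0, 5.0, 1000)                 # "observed" daily rainfall
    sat = 0.6 * gauge ** 1.2 + rng.normal(0, 1, 1000).clip(0)  # biased satellite

    def cost(params):
        # Penalize mismatch of the first two statistical moments.
        a, b = params
        corrected = a * sat ** b
        return ((corrected.mean() - gauge.mean()) ** 2
                + (corrected.var() - gauge.var()) ** 2)

    res = differential_evolution(cost, bounds=[(0.1, 5.0), (0.2, 2.0)], seed=0)
    a, b = res.x
    print(f"a={a:.3f}, b={b:.3f}, residual cost={res.fun:.4f}")
    ```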

  17. Statistical Evaluation of Combined Daily Gauge Observations and Rainfall Satellite Estimations over Continental South America

    NASA Technical Reports Server (NTRS)

    Vila, Daniel; deGoncalves, Luis Gustavo; Toll, David L.; Rozante, Jose Roberto

    2008-01-01

This paper describes a comprehensive assessment of a new high-resolution, high-quality gauge-satellite-based analysis of daily precipitation over continental South America during 2004. The methodology is based on a combination of additive and multiplicative bias correction schemes chosen to obtain the lowest bias with respect to the observed values. Intercomparison and cross-validation tests were carried out for the control algorithm (the TMPA real-time algorithm) and different merging schemes: additive bias correction (ADD), ratio bias correction (RAT) and the TMPA research version, for months belonging to different seasons and for different network densities. All of the merging schemes produce better results than the control algorithm, but when finer temporal (daily) and spatial (regional network) gauge datasets are included in the analysis, the improvement is remarkable. The Combined Scheme (CoSch) consistently presents the best performance among the five techniques, and this remains true when a degraded daily gauge network is used instead of the full dataset. The technique appears to be a suitable tool for producing real-time, high-resolution, high-quality gauge-satellite-based analyses of daily precipitation over land in regional domains.
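
    The two elementary corrections compared above reduce to one line each; a minimal sketch with synthetic collocated satellite and gauge values follows (function names and data are illustrative).

    ```python
    import numpy as np

    def add_correction(sat, gauge_at_sat):
        # ADD: shift the satellite field by the mean gauge-satellite difference.
        return sat + (gauge_at_sat.mean() - sat.mean())

    def rat_correction(sat, gauge_at_sat):
        # RAT: rescale the satellite field by the mean gauge/satellite ratio.
        return sat * (gauge_at_sat.mean() / sat.mean())

    sat = np.array([2.0, 5.0, 9.0, 1.0])     # satellite estimates (mm/day)
    gauge = np.array([3.1, 6.2, 10.5, 1.8])  # collocated gauge values (mm/day)

    print("ADD:", add_correction(sat, gauge))
    print("RAT:", rat_correction(sat, gauge))
    ```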

  18. Rate-distortion optimized tree-structured compression algorithms for piecewise polynomial images.

    PubMed

    Shukla, Rahul; Dragotti, Pier Luigi; Do, Minh N; Vetterli, Martin

    2005-03-01

This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune and join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. The scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior, D(R) ∼ c₀2^(−c₁R), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree coding scheme also achieves an exponentially decaying R-D behavior for the polygonal image model, composed of a white polygon-shaped object against a uniform black background, with a low computational cost of O(N log N). Again, the key is an R-D optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree coding scheme outperforms JPEG2000 by about 1 dB for real images, such as cameraman, at low rates of around 0.15 bpp.
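
    A compact sketch of the one-dimensional prune step, assuming least-squares polynomial fits per segment and a fixed Lagrangian rate cost per leaf; the join step and the paper's actual bit allocation are omitted, and all constants are illustrative.

    ```python
    import numpy as np

    DEG, LAM, RATE = 1, 0.05, 16      # poly degree, multiplier, bits per leaf

    def fit_cost(seg):
        # Lagrangian cost D + lambda*R of approximating this segment as a leaf.
        t = np.arange(len(seg))
        coef = np.polyfit(t, seg, DEG)
        d = float(np.sum((np.polyval(coef, t) - seg) ** 2))
        return d + LAM * RATE

    def prune(seg, depth=4):
        """Return (cost, n_leaves): keep children only if they beat the leaf."""
        leaf = fit_cost(seg)
        if depth == 0 or len(seg) < 2 * (DEG + 1):
            return leaf, 1
        mid = len(seg) // 2
        (cl, nl), (cr, nr) = prune(seg[:mid], depth - 1), prune(seg[mid:], depth - 1)
        return (cl + cr, nl + nr) if cl + cr < leaf else (leaf, 1)

    t = np.linspace(0, 1, 256)
    signal = np.where(t < 0.37, 2 * t, 1.0 - t)   # piecewise linear test signal
    cost, leaves = prune(signal)
    print(f"pruned tree: {leaves} leaves, Lagrangian cost {cost:.4f}")
    ```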

  19. A secure smart-card based authentication and key agreement scheme for telecare medicine information systems.

    PubMed

    Lee, Tian-Fu; Liu, Chuan-Ming

    2013-06-01

A smart-card-based authentication scheme for telecare medicine information systems enables patients, doctors, nurses, health visitors and the medicine information systems to establish a secure communication platform over public networks. Zhu recently presented an improved authentication scheme intended to fix a weakness in the authentication scheme of Wei et al., which cannot resist off-line password guessing attacks. This investigation shows that Zhu's improved scheme has faults of its own: the authentication scheme cannot execute correctly and is vulnerable to parallel session attacks. Additionally, an enhanced authentication scheme based on Zhu's scheme is proposed. The enhanced scheme not only avoids the weakness of the original scheme, but also provides user anonymity and authenticated key agreement for secure data communication.

  20. Relativistic density functional theory with picture-change corrected electron density based on infinite-order Douglas-Kroll-Hess method

    NASA Astrophysics Data System (ADS)

    Oyama, Takuro; Ikabata, Yasuhiro; Seino, Junji; Nakai, Hiromi

    2017-07-01

    This Letter proposes a density functional treatment based on the two-component relativistic scheme at the infinite-order Douglas-Kroll-Hess (IODKH) level. The exchange-correlation energy and potential are calculated using the electron density based on the picture-change corrected density operator transformed by the IODKH method. Numerical assessments indicated that the picture-change uncorrected density functional terms generate significant errors, on the order of hartree for heavy atoms. The present scheme was found to reproduce the energetics in the four-component treatment with high accuracy.

  1. Heuristic pattern correction scheme using adaptively trained generalized regression neural networks.

    PubMed

    Hoya, T; Chambers, J A

    2001-01-01

In many pattern classification problems, an intelligent neural system is required that can learn newly encountered but misclassified patterns incrementally, while maintaining good classification performance on the past patterns stored in the network. In this paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both a network growing and a dual-stage shrinking mechanism. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. The redundancy introduced in the growing phase is then removed by the dual-stage network shrinking. Both long- and short-term memory models, motivated by biological studies of the brain, are considered in the network shrinking. The learning capability of the proposed scheme is investigated through extensive simulation studies.
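
    A hedged sketch of the growing phase, with the GRNN realized as a Nadaraya-Watson kernel classifier: misclassified incoming patterns are added as new centres until the incoming batch is classified correctly. The kernel width and data are illustrative assumptions, and the dual-stage shrinking phase is omitted.

    ```python
    import numpy as np

    SIGMA = 0.5  # illustrative kernel width

    def grnn_predict(centres, labels, x):
        # Gaussian-kernel soft vote of all stored centres, per class.
        w = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2 * SIGMA ** 2))
        scores = np.bincount(labels, weights=w)
        return int(np.argmax(scores))

    def grow(centres, labels, new_x, new_y):
        """Add misclassified patterns until the incoming batch is correct."""
        changed = True
        while changed:
            changed = False
            for x, y in zip(new_x, new_y):
                if grnn_predict(centres, labels, x) != y:
                    centres = np.vstack([centres, x])
                    labels = np.append(labels, y)
                    changed = True
        return centres, labels

    rng = np.random.default_rng(1)
    c0 = rng.normal(0, 1, (10, 2)); l0 = (c0[:, 0] > 0).astype(int)
    nx = rng.normal(0, 1, (20, 2)); ny = (nx[:, 0] > 0).astype(int)
    c1, l1 = grow(c0, l0, nx, ny)
    print(len(c1) - len(c0), "patterns added")
    ```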

  2. A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application

    PubMed Central

    Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang

    2018-01-01

Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing studies of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Moreover, traditional integrated systems with a KF-based correction scheme are susceptible to measurement errors, which decreases the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF)-based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with a unified reference ellipsoid Earth model to improve navigation accuracy in middle-high latitude regions for marine applications. Firstly, to unify the Earth models, the mechanization of the grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with traditional algorithms, the proposed navigation algorithm effectively improves navigation performance in middle-high latitude regions through the unified Earth models and the ARKF-based hybrid-correction scheme. PMID:29373549

  3. Multifocus image fusion scheme based on the multiscale curvature in nonsubsampled contourlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun

    2015-07-01

An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and investigate the connection between the low-frequency image and the defocused image. The NSCT decomposes the detailed image information residing at different scales and in different directions into the bandpass subband coefficients. In order to correctly select the prefused bandpass directional coefficients, we introduce the multiscale curvature, which not only inherits the advantages of windows of different sizes but also correctly recognizes the focused pixels in the source images, and we then develop a new fusion scheme for the bandpass subband coefficients. The fused image is obtained by inverse NSCT of the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.

  4. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

Purpose: Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods: Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results: In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions: While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
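
    A rough sketch of the whole-brain polynomial correction (WBPC) idea on synthetic data: fit a low-order 2D polynomial to the phase (velocity) values in static tissue and subtract the fitted background from the whole field. The polynomial order, mask and field values are assumptions for illustration, not the study's protocol.

    ```python
    import numpy as np

    def wbpc(phase: np.ndarray, static_mask: np.ndarray, order: int = 2):
        H, W = phase.shape
        yy, xx = np.mgrid[0:H, 0:W]
        y, x = yy / H, xx / W
        # Design matrix of monomials x**i * y**j with i + j <= order.
        terms = [x**i * y**j for i in range(order + 1)
                 for j in range(order + 1 - i)]
        A = np.stack([t[static_mask] for t in terms], axis=1)
        coef, *_ = np.linalg.lstsq(A, phase[static_mask], rcond=None)
        background = sum(c * t for c, t in zip(coef, terms))
        return phase - background

    rng = np.random.default_rng(2)
    truth = np.zeros((64, 64)); truth[28:36, 28:36] = 5.0   # "vessel" velocities
    yy, xx = np.mgrid[0:64, 0:64] / 64.0
    eddy = 0.3 + 0.8 * xx - 0.5 * yy * xx                   # smooth phase offset
    measured = truth + eddy + rng.normal(0, 0.01, (64, 64))
    mask = truth == 0                                        # static tissue
    corrected = wbpc(measured, mask)
    print("residual background RMS:",
          float(np.sqrt(np.mean(corrected[mask] ** 2))))
    ```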

  5. Comparing multilayer brain networks between groups: Introducing graph metrics and recommendations.

    PubMed

    Mandke, Kanad; Meier, Jil; Brookes, Matthew J; O'Dea, Reuben D; Van Mieghem, Piet; Stam, Cornelis J; Hillebrand, Arjan; Tewarie, Prejaas

    2018-02-01

There is an increasing awareness of the advantages of multi-modal neuroimaging. Networks obtained from different modalities are usually treated in isolation, which is, however, contradictory to accumulating evidence that these networks show non-trivial interdependencies. Even networks obtained from a single modality, such as frequency-band-specific functional networks measured with magnetoencephalography (MEG), are often treated independently. Here, we discuss how a multilayer network framework allows for the integration of multiple networks into a single network description and how graph metrics can be applied to quantify multilayer network organisation for group comparison. We analyse how well-known biases for single-layer networks, such as effects of group differences in link density and/or average connectivity, influence multilayer networks, and we compare four schemes that aim to correct for such biases: the minimum spanning tree (MST), effective graph resistance cost minimisation, efficiency cost optimisation (ECO) and a normalisation scheme based on singular value decomposition (SVD). These schemes can be applied to the layers independently or to the multilayer network as a whole. For correction applied to whole multilayer networks, only the SVD showed sufficient bias correction. For correction applied to individual layers, three schemes (ECO, MST, SVD) could correct for biases. By using generative models as well as empirical MEG and functional magnetic resonance imaging (fMRI) data, we further demonstrated that all schemes were sensitive enough to identify network topology when the original networks were perturbed. In conclusion, uncorrected multilayer network analysis leads to biases. These biases may differ between centres and studies and could consequently lead to unreproducible results, in a similar manner as for single-layer networks. We therefore recommend using correction schemes prior to multilayer network analysis for group comparisons. Copyright © 2017 Elsevier Inc. All rights reserved.
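
    Of the four schemes, the MST is the simplest to sketch: reduce a weighted connectivity matrix to its N−1-edge backbone so that link density is identical across groups. The conversion from connectivity to distance below is one common convention, not necessarily the paper's.

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    def mst_backbone(conn: np.ndarray) -> np.ndarray:
        """conn: symmetric non-negative connectivity matrix, zero diagonal."""
        dist = 1.0 / (conn + 1e-12)          # strong links -> short edges
        np.fill_diagonal(dist, 0.0)           # csgraph treats 0 as "no edge"
        tree = minimum_spanning_tree(dist).toarray()
        backbone = (tree + tree.T) > 0        # symmetrize; N-1 links remain
        return backbone.astype(float)

    rng = np.random.default_rng(3)
    w = rng.random((8, 8)); conn = (w + w.T) / 2; np.fill_diagonal(conn, 0)
    print(int(mst_backbone(conn).sum() / 2), "edges in the MST")  # 7 for N = 8
    ```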

  6. Local bounds preserving stabilization for continuous Galerkin discretization of hyperbolic systems

    NASA Astrophysics Data System (ADS)

    Mabuza, Sibusiso; Shadid, John N.; Kuzmin, Dmitri

    2018-05-01

    The objective of this paper is to present a local bounds preserving stabilized finite element scheme for hyperbolic systems on unstructured meshes based on continuous Galerkin (CG) discretization in space. A CG semi-discrete scheme with low order artificial dissipation that satisfies the local extremum diminishing (LED) condition for systems is used to discretize a system of conservation equations in space. The low order artificial diffusion is based on approximate Riemann solvers for hyperbolic conservation laws. In this case we consider both Rusanov and Roe artificial diffusion operators. In the Rusanov case, two designs are considered, a nodal based diffusion operator and a local projection stabilization operator. The result is a discretization that is LED and has first order convergence behavior. To achieve high resolution, limited antidiffusion is added back to the semi-discrete form where the limiter is constructed from a linearity preserving local projection stabilization operator. The procedure follows the algebraic flux correction procedure usually used in flux corrected transport algorithms. To further deal with phase errors (or terracing) common in FCT type methods, high order background dissipation is added to the antidiffusive correction. The resulting stabilized semi-discrete scheme can be discretized in time using a wide variety of time integrators. Numerical examples involving nonlinear scalar Burgers equation, and several shock hydrodynamics simulations for the Euler system are considered to demonstrate the performance of the method. For time discretization, Crank-Nicolson scheme and backward Euler scheme are utilized.

  7. Experimental Assessment and Enhancement of Planar Laser-Induced Fluorescence Measurements of Nitric Oxide in an Inverse Diffusion Flame

    NASA Technical Reports Server (NTRS)

    Partridge, William P.; Laurendeau, Normand M.

    1997-01-01

    We have experimentally assessed the quantitative nature of planar laser-induced fluorescence (PLIF) measurements of NO concentration in a unique atmospheric pressure, laminar, axial inverse diffusion flame (IDF). The PLIF measurements were assessed relative to a two-dimensional array of separate laser saturated fluorescence (LSF) measurements. We demonstrated and evaluated several experimentally-based procedures for enhancing the quantitative nature of PLIF concentration images. Because these experimentally-based PLIF correction schemes require only the ability to make PLIF and LSF measurements, they produce a more broadly applicable PLIF diagnostic compared to numerically-based correction schemes. We experimentally assessed the influence of interferences on both narrow-band and broad-band fluorescence measurements at atmospheric and high pressures. Optimum excitation and detection schemes were determined for the LSF and PLIF measurements. Single-input and multiple-input, experimentally-based PLIF enhancement procedures were developed for application in test environments with both negligible and significant quench-dependent error gradients. Each experimentally-based procedure provides an enhancement of approximately 50% in the quantitative nature of the PLIF measurements, and results in concentration images nominally as quantitative as LSF point measurements. These correction procedures can be applied to other species, including radicals, for which no experimental data are available from which to implement numerically-based PLIF enhancement procedures.

  8. Cryptanalysis and improvement of Yan et al.'s biometric-based authentication scheme for telecare medicine information systems.

    PubMed

    Mishra, Dheerendra; Mukhopadhyay, Sourav; Chaturvedi, Ankita; Kumari, Saru; Khan, Muhammad Khurram

    2014-06-01

Remote user authentication is desirable for a telecare medicine information system (TMIS) to ensure the safety, security and integrity of data transmitted over the public channel. In 2013, Tan presented a biometric-based remote user authentication scheme and claimed that it is secure. Recently, Yan et al. demonstrated some drawbacks in Tan's scheme and proposed an improved scheme to eliminate them. We analyze Yan et al.'s scheme and find that it is vulnerable to off-line password guessing attacks and does not protect anonymity. Moreover, in their scheme the login and password change phases are inefficient at verifying the correctness of input, and the inefficiency of the password change phase can cause a denial-of-service attack. Further, we design an improved scheme for TMIS that eliminates the drawbacks of Yan et al.'s scheme.

  9. Measurement-free implementations of small-scale surface codes for quantum-dot qubits

    NASA Astrophysics Data System (ADS)

    Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.

    2018-01-01

The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the order of 10⁻² for a one-dimensional architecture that only corrects bit-flip errors and 10⁻⁴ for a two-dimensional architecture that corrects bit- and phase-flip errors.
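
    For bit-flip errors, the measurement-free correction step can be followed classically: two ancillas compute the parity checks and Toffoli-style logic flips the faulty data qubit, after which the ancillas are reset rather than measured. This toy model of the 3-qubit repetition code ignores phase errors and the architectures actually simulated in the paper.

    ```python
    # Classical simulation of measurement-free correction for the 3-qubit
    # bit-flip (repetition) code.
    def correct(data: list[int]) -> list[int]:
        d0, d1, d2 = data
        a0 = d0 ^ d1                 # syndrome ancilla 1: parity of qubits 0, 1
        a1 = d1 ^ d2                 # syndrome ancilla 2: parity of qubits 1, 2
        d0 ^= a0 & (1 ^ a1)          # Toffoli-style: flip d0 iff a0=1, a1=0
        d1 ^= a0 & a1                # Toffoli-style: flip d1 iff a0=1, a1=1
        d2 ^= (1 ^ a0) & a1          # Toffoli-style: flip d2 iff a0=0, a1=1
        return [d0, d1, d2]          # ancillas would now be reset, not measured

    for flip in range(3):            # any single bit-flip error is corrected
        noisy = [1, 1, 1]; noisy[flip] ^= 1
        assert correct(noisy) == [1, 1, 1]
    print("all single bit-flip errors corrected without measurement")
    ```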

  10. Edge-based nonlinear diffusion for finite element approximations of convection-diffusion equations and its relation to algebraic flux-correction schemes.

    PubMed

    Barrenechea, Gabriel R; Burman, Erik; Karakatsani, Fotini

    2017-01-01

For the approximation of convection-diffusion equations using piecewise affine continuous finite elements, a new edge-based nonlinear diffusion operator is proposed that makes the scheme satisfy a discrete maximum principle. The diffusion operator is shown to be Lipschitz continuous and linearity preserving. Using these properties we provide a full stability and error analysis, which, in the diffusion dominated regime, shows existence, uniqueness and optimal convergence. Then the algebraic flux correction method is recalled and we show that the present method can be interpreted as an algebraic flux correction method for a particular definition of the flux limiters. The performance of the method is illustrated on some numerical test cases in two space dimensions.

  11. A stable and high-order accurate discontinuous Galerkin based splitting method for the incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Piatkowski, Marian; Müthing, Steffen; Bastian, Peter

    2018-03-01

In this paper we consider discontinuous Galerkin (DG) methods for the incompressible Navier-Stokes equations in the framework of projection methods. In particular we employ symmetric interior penalty DG methods within the second-order rotational incremental pressure correction scheme. The major focus of the paper is threefold: i) We propose a modified upwind scheme based on the Vijayasundaram numerical flux that has favourable properties in the context of DG. ii) We present a novel postprocessing technique in the Helmholtz projection step based on H(div) reconstruction of the pressure correction that is computed locally, is a projection in the discrete setting and ensures that the projected velocity satisfies the discrete continuity equation exactly. As a consequence it also provides local mass conservation of the projected velocity. iii) Numerical results demonstrate the properties of the scheme for different polynomial degrees applied to two-dimensional problems with known solution as well as large-scale three-dimensional problems. In particular we address second-order convergence in time of the splitting scheme as well as its long-time stability.

  12. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    PubMed Central

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-01-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol−1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol−1). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning. PMID:24320250

  13. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: an accurate correction scheme for electrostatic finite-size effects.

    PubMed

    Rocklin, Gabriel J; Mobley, David L; Dill, Ken A; Hünenberger, Philippe H

    2013-11-14

The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol⁻¹) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol⁻¹). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning.

  14. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    NASA Astrophysics Data System (ADS)

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-11-01

The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol⁻¹) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol⁻¹). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning.
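
    As a hedged illustration of why the raw charging free energies depend on the box size, the sketch below evaluates only the simplest ingredient of such corrections: the Wigner self-interaction term of a net charge in a periodic cubic box screened by a uniform solvent. The full analytical scheme of the paper additionally involves Poisson-Boltzmann terms for the solute cavity and a discrete-solvent correction, so these numbers are not the paper's corrections, and sign conventions vary between formulations.

    ```python
    import math

    XI_EW = -2.837297        # cubic-lattice Wigner constant (dimensionless)
    E0 = 8.8541878128e-12    # vacuum permittivity, F/m
    E_CHARGE = 1.602176634e-19
    NA = 6.02214076e23

    def wigner_term_kj_per_mol(q_e: float, box_nm: float, eps_s: float) -> float:
        # Leading periodicity term ~ -xi_EW * q^2 / (8*pi*eps0*eps_s*L).
        q, L = q_e * E_CHARGE, box_nm * 1e-9
        dG = -XI_EW * q ** 2 / (8 * math.pi * E0 * eps_s * L)   # joules
        return dG * NA / 1000.0

    # Size dependence for a +1 e solute in water (eps_s ~ 78), box edges as in the study
    for L in (7.42, 9.0, 11.02):
        print(f"L = {L:5.2f} nm: {wigner_term_kj_per_mol(1.0, L, 78.0):6.3f} kJ/mol")
    ```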

  15. Security and Correctness Analysis on Privacy-Preserving k-Means Clustering Schemes

    NASA Astrophysics Data System (ADS)

    Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi

Due to the fast development of the Internet and related IT technologies, it is becoming easier and easier to access large amounts of data. k-means clustering is a powerful and frequently used technique in data mining, and many research papers on privacy-preserving k-means clustering have been published. In this paper, we analyze the existing privacy-preserving k-means clustering schemes based on cryptographic techniques. We show that those schemes cause privacy breaches and cannot output correct results due to faults in the protocol construction. Furthermore, we analyze our own proposal as an option for mitigating these problems, although it still leaks intermediate information during the computation.

  16. Deficiencies of the cryptography based on multiple-parameter fractional Fourier transform.

    PubMed

    Ran, Qiwen; Zhang, Haiying; Zhang, Jin; Tan, Liying; Ma, Jing

    2009-06-01

Methods of image encryption based on the fractional Fourier transform have an incipient flaw in security. We show that such schemes suffer from the deficiency that one group of encryption keys admits many groups of keys that correctly decrypt the encrypted image, for several reasons. In some schemes, many factors produce the deficiencies, such as the encryption scheme based on the multiple-parameter fractional Fourier transform [Opt. Lett. 33, 581 (2008)]. A modified method is proposed to avoid all of these deficiencies. Security and reliability are greatly improved without increasing the complexity of the encryption process. (c) 2009 Optical Society of America.

  17. Palmprint Based Multidimensional Fuzzy Vault Scheme

    PubMed Central

    Liu, Hailun; Sun, Dongmei; Xiong, Ke; Qiu, Zhengding

    2014-01-01

The fuzzy vault scheme (FVS) is one of the most popular biometric cryptosystems for biometric template protection. However, the error correcting code (ECC) used in the FVS is not appropriate for dealing with real-valued biometric intraclass variance. In this paper, we propose a multidimensional fuzzy vault scheme (MDFVS) in which a general subspace error-tolerant mechanism is designed and embedded into the FVS to handle intraclass variance. Palmprint is one of the most important biometrics; to protect palmprint templates, a palmprint-based MDFVS implementation is also presented. Experimental results show that the proposed scheme not only deals with intraclass variance effectively but also maintains accuracy while enhancing security. PMID:24892094

  18. A secure and efficient password-based user authentication scheme using smart cards for the integrated EPR information system.

    PubMed

    Lee, Tian-Fu; Chang, I-Pin; Lin, Tsung-Hung; Wang, Ching-Cheng

    2013-06-01

The integrated EPR information system supports convenient and rapid e-medicine services. A secure and efficient authentication scheme for the integrated EPR information system safeguards patients' electronic patient records (EPRs) and helps health care workers and medical personnel rapidly make correct clinical decisions. Recently, Wu et al. proposed an efficient password-based user authentication scheme using smart cards for the integrated EPR information system, and claimed that it could resist various malicious attacks. However, their scheme is still vulnerable to lost smart card and stolen verifier attacks. This investigation discusses these weaknesses and proposes a secure and efficient authentication scheme for the integrated EPR information system as an alternative. Compared with related approaches, the proposed scheme not only has a lower computational cost and requires no verifier tables for storing users' secrets, but also solves the security problems of previous schemes and withstands possible attacks.

  19. A study of pressure-based methodology for resonant flows in non-linear combustion instabilities

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; Pindera, M. Z.; Przekwas, A. J.; Tucker, K.

    1992-01-01

This paper presents a systematic assessment of a large variety of spatial and temporal differencing schemes on nonstaggered grids using pressure-based methods for problems involving fast transient flows. The observation from the present study is that, for steady-state flow problems, pressure-based methods can be very competitive with density-based methods. For transient flow problems, pressure-based methods utilizing the same differencing scheme are less accurate, even though the wave speeds are correctly predicted.

  20. Shear-layer correction after Amiet under consideration of additional temperature gradient. Working diagrams for correction of signals

    NASA Technical Reports Server (NTRS)

    Dobrzynski, W.

    1984-01-01

Amiet's correction scheme for sound-wave transmission through shear layers is extended to incorporate the additional effect of a temperature difference between the flow field and the surrounding medium at rest. Within a parameter regime typical of acoustic measurements in wind tunnels, the amplitude and angle corrections are calculated and plotted systematically to provide a data base for the test engineer.

  1. Multi-Objective Memetic Search for Robust Motion and Distortion Correction in Diffusion MRI.

    PubMed

    Hering, Jan; Wolf, Ivo; Maier-Hein, Klaus H

    2016-10-01

    Effective image-based artifact correction is an essential step in the analysis of diffusion MR images. Many current approaches are based on retrospective registration, which becomes challenging in the realm of high b-values and low signal-to-noise ratio, rendering the corresponding correction schemes more and more ineffective. We propose a novel registration scheme based on memetic search optimization that allows for simultaneous exploitation of different signal intensity relationships between the images, leading to more robust registration results. We demonstrate the increased robustness and efficacy of our method on simulated as well as in vivo datasets. In contrast to state-of-the-art methods, the median target registration error (TRE) stayed below the voxel size even for high b-values (3000 s·mm⁻² and higher) and low SNR conditions. We also demonstrate the increased precision in diffusion-derived quantities by evaluating Neurite Orientation Dispersion and Density Imaging (NODDI) derived measures on an in vivo dataset with severe motion artifacts. These promising results will potentially inspire further studies on metaheuristic optimization in diffusion MRI artifact correction and image registration in general.

  2. Improvement of forecast skill for severe weather by merging radar-based extrapolation and storm-scale NWP corrected forecast

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming

    2015-03-01

    The primary objective of this study is to improve the performance of deterministic high-resolution forecasts of heavy rainfall caused by severe storms by merging a radar-based extrapolation scheme with a storm-scale Numerical Weather Prediction (NWP) model. The effectiveness of the Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model named the Advanced Regional Prediction System (ARPS) for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Bias corrections were then performed to improve the accuracy of the ARPS forecasts. Finally, the corrected ARPS forecast and the radar-based extrapolation were optimally merged by using a hyperbolic tangent weight scheme. The comparison of forecast skill between MTaRE and ARPS at a high spatial resolution of 0.01° × 0.01° and a high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for lead times under 20 min and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for lead times beyond 50 min. Bias correction significantly improved the ARPS forecasts in terms of MAE and index of agreement, although the CSI of the corrected ARPS forecasts was similar to that of the uncorrected ones. Moreover, optimal merging using the hyperbolic tangent weight scheme further improved the forecast accuracy and stability.
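
    The merging step lends itself to a compact illustration. The sketch below (numpy) blends an extrapolation field with an NWP field through a hyperbolic-tangent weight that hands control from radar to model as lead time grows; the midpoint t0 and steepness k are illustrative placeholders, not the paper's fitted values.

    ```python
    # Hedged sketch of tanh-weighted merging of a radar extrapolation nowcast
    # with a bias-corrected NWP forecast. Weight parameters are assumptions.
    import numpy as np

    def merge(extrap, nwp, lead_min, t0=40.0, k=0.1):
        """Blend two rain fields (mm/h) at a given lead time in minutes."""
        w_nwp = 0.5 * (1.0 + np.tanh(k * (lead_min - t0)))  # 0 -> 1 with lead time
        return (1.0 - w_nwp) * extrap + w_nwp * nwp

    # Example: at a 20-min lead time the blend still favors the extrapolation.
    extrap = np.random.gamma(2.0, 1.5, size=(100, 100))
    nwp = np.random.gamma(2.0, 1.5, size=(100, 100))
    print(merge(extrap, nwp, lead_min=20).shape)
    ```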

  3. SPECTRAL CORRECTION FACTORS FOR CONVENTIONAL NEUTRON DOSE METERS USED IN HIGH-ENERGY NEUTRON ENVIRONMENTS-IMPROVED AND EXTENDED RESULTS BASED ON A COMPLETE SURVEY OF ALL NEUTRON SPECTRA IN IAEA-TRS-403.

    PubMed

    Oparaji, U; Tsai, Y H; Liu, Y C; Lee, K W; Patelli, E; Sheu, R J

    2017-06-01

    This paper presents improved and extended results of our previous study on corrections for conventional neutron dose meters used in environments with high-energy neutrons (En > 10 MeV). Conventional moderated-type neutron dose meters tend to underestimate the dose contribution of high-energy neutrons because of the opposite trends of dose conversion coefficients and detection efficiencies as the neutron energy increases. A practical correction scheme was proposed based on an analysis of hundreds of neutron spectra in the IAEA-TRS-403 report. By comparing 252Cf-calibrated dose responses with reference values derived from fluence-to-dose conversion coefficients, this study provides recommendations for neutron field characterization and the corresponding dose correction factors. Further sensitivity studies confirm the appropriateness of the proposed scheme and indicate that (1) the spectral correction factors are nearly independent of the selection of the three commonly used calibration sources 252Cf, 241Am-Be and 239Pu-Be; (2) the derived correction factors for Bonner spheres of various sizes (6"-9") are similar in trend and (3) practical high-energy neutron indexes based on measurements can be established to facilitate the application of these correction factors in workplaces.
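
    The underlying arithmetic is a ratio of two spectrum foldings. The sketch below (numpy, with entirely placeholder spectra, response functions and calibration constant) computes a correction factor as the reference dose, obtained by folding the spectrum with fluence-to-dose conversion coefficients, divided by the calibrated instrument reading.

    ```python
    # Illustrative spectral correction factor for a moderated neutron dose
    # meter. All spectra, responses and the calibration constant are
    # placeholders, not IAEA-TRS-403 data.
    import numpy as np

    energies = np.logspace(-8, 2, 60)                    # MeV (illustrative grid)
    phi = np.exp(-((np.log10(energies) + 1) ** 2))       # placeholder spectrum
    h = 10.0 + 400.0 * energies / (1.0 + energies)       # placeholder h*(10)
    R = 8.0 * energies ** 0.1                            # placeholder response, a.u.
    calib = 0.02                                         # a.u. per pSv, 252Cf (assumed)

    reference_dose = np.trapz(phi * h, energies)         # fold with dose coefficients
    reading = np.trapz(phi * R, energies) / calib        # calibrated instrument reading
    correction_factor = reference_dose / reading
    print(f"spectral correction factor: {correction_factor:.2f}")
    ```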

  4. Simple aerosol correction technique based on the spectral relationships of the aerosol multiple-scattering reflectances for atmospheric correction over the oceans.

    PubMed

    Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram

    2016-12-26

    An estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), developed for the processing of Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes the reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to the visible (VIS) bands using the SSE. However, it applies the weight value directly to all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance is non-linearly related to the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is itself non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR with no residual errors for the selected aerosol models. It then spectrally extrapolates the reflectance contribution from the NIR to the visible bands for each selected model using the SRAMS. To assess the performance of the algorithm with regard to errors in the water reflectance at the surface or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with GOCI data. From simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. From in situ match-ups, the mean errors were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.

  5. LDPC-PPM Coding Scheme for Optical Communication

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
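
    To make the modulation side concrete, the sketch below (numpy; photon rates are illustrative assumptions) maps blocks of coded bits onto 16-ary PPM symbols, passes them through a Poisson slot-count channel, and demodulates by maximum count; the LDPC encoding and decoding stages are omitted.

    ```python
    # Minimal PPM-over-Poisson-channel sketch for the modulation half of an
    # LDPC-PPM link. Signal and background photon rates are assumed values.
    import numpy as np

    rng = np.random.default_rng(0)
    M = 16                      # PPM order (log2(M) = 4 bits per symbol)
    ns, nb = 5.0, 0.2           # mean signal / background photons per slot

    bits = rng.integers(0, 2, 4 * 100)                  # stand-in for coded bits
    symbols = bits.reshape(-1, 4) @ (1 << np.arange(3, -1, -1))  # 4 bits -> 0..15

    counts = rng.poisson(nb, size=(symbols.size, M))    # background in every slot
    counts[np.arange(symbols.size), symbols] += rng.poisson(ns, symbols.size)

    est = counts.argmax(axis=1)                         # max-count slot decision
    print("symbol error rate:", np.mean(est != symbols))
    ```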

  6. Self-shading associated with a skylight-blocked approach system for the measurement of water-leaving radiance and its correction.

    PubMed

    Shang, Zhehai; Lee, Zhongping; Dong, Qiang; Wei, Jianwei

    2017-09-01

    Self-shading associated with a skylight-blocked approach (SBA) system for the measurement of water-leaving radiance (L_w) and its correction [Appl. Opt. 52, 1693 (2013), doi:10.1364/AO.52.001693] is characterized by Monte Carlo simulations, and it is found that this error is in a range of ∼1%-20% under most water properties and solar positions. A model for estimating this shading error is further developed, and eventually a scheme to correct this error based on the shaded measurements is proposed and evaluated. It is found that the shade-corrected value in the visible domain is within 3% of the true value, which indicates that we can obtain not only high-precision but also high-accuracy L_w in the field with the SBA scheme.

  7. QR code based noise-free optical encryption and decryption of a gray scale image

    NASA Astrophysics Data System (ADS)

    Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-03-01

    In optical encryption systems, speckle noise is one major challenge in obtaining high-quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have optically encrypted a few characters or a short expression employing QR codes. This paper proposes, for the first time, a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal that the proposed method can encrypt and decrypt an input image correctly.
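
    A common optical-encryption model that such schemes are built on is double random phase encoding (DRPE). The numpy sketch below applies DRPE to a random binary array standing in for a QR code (real QR generation and reading with a QR library are omitted) to show the exact encrypt/decrypt symmetry; it is a generic model, not the paper's full pipeline.

    ```python
    # Double random phase encoding (DRPE) sketch: encrypt with two random
    # phase masks (input plane and Fourier plane), decrypt with their
    # conjugates. The binary array is a placeholder for a QR code.
    import numpy as np

    rng = np.random.default_rng(1)
    qr = rng.integers(0, 2, (64, 64)).astype(float)    # placeholder "QR code"

    p1 = np.exp(2j * np.pi * rng.random(qr.shape))     # input-plane phase mask
    p2 = np.exp(2j * np.pi * rng.random(qr.shape))     # Fourier-plane phase mask

    encrypted = np.fft.ifft2(np.fft.fft2(qr * p1) * p2)
    decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(p2)) * np.conj(p1)

    print(np.allclose(decrypted.real, qr))             # exact recovery: True
    ```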

  8. Error-correcting pairs for a public-key cryptosystem

    NASA Astrophysics Data System (ADS)

    Pellikaan, Ruud; Márquez-Corbella, Irene

    2017-06-01

    Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based cryptography, multivariate cryptography and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient public-key encryption schemes, with exceptionally strong security guarantees and other desirable properties that still resist attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes were proposed in order to reduce the key size; some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high-performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these public-key cryptosystems is based not only on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to retrieve an error-correcting pair efficiently. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair.

  9. Two-out-of-two color matching based visual cryptography schemes.

    PubMed

    Machizaud, Jacques; Fournel, Thierry

    2012-09-24

    Visual cryptography, which consists in sharing a secret message between transparencies, has been extended to color prints. In this paper, we propose a new visual cryptography scheme based on color matching. The stacked printed media reveal a uniformly colored message decoded by the human visual system. In contrast with previous color visual cryptography schemes, the proposed one enables sharing images without pixel expansion and detecting a forgery, as the color of the message is kept secret. In order to correctly print the colors on the media and to increase the security of the scheme, we use spectral models developed for color reproduction, describing printed colors from an optical point of view.
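
    For contrast with the expansion-free color scheme proposed here, the sketch below (numpy) implements the classic Naor-Shamir two-out-of-two construction for a binary image, which does incur 2x pixel expansion: stacking the two shares (a pixelwise OR) turns every black secret pixel fully black while white pixels keep one white subpixel.

    ```python
    # Classic 2-out-of-2 visual cryptography for a binary image (1 = black).
    # Each secret pixel expands to a pair of subpixels: complementary pairs
    # encode black, identical pairs encode white.
    import numpy as np

    rng = np.random.default_rng(2)
    secret = rng.integers(0, 2, (32, 32))

    r = rng.integers(0, 2, secret.shape)          # one random bit per pixel
    share1 = np.stack([r, 1 - r], axis=-1)        # subpixel pair [r, 1-r]
    share2 = np.where(secret[..., None] == 1, 1 - share1, share1)

    stacked = share1 | share2                     # stacking = pixelwise OR
    # black pixels stack to [1, 1]; white pixels keep one white subpixel
    assert np.all(stacked.sum(axis=-1) == 1 + secret)
    ```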

  10. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks

    PubMed Central

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-01-01

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitations of sensor nodes. Network coding can increase network throughput dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristics inherent in WSN and L1 optimization, we propose a novel scheme that successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristics inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of social characteristics, complement each other and can correct propagated errors even when the error fraction reaches 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668
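
    The L1 step can be phrased as a small linear program. The scipy sketch below (random real-valued parity-check matrix, standing in for the paper's construction) recovers a sparse error vector e from y = x + e by minimizing ||e||_1 subject to He = Hy, using the standard split e = u - v with u, v >= 0.

    ```python
    # L1 (basis-pursuit) error recovery sketch. H is a random stand-in for a
    # parity-check matrix; the codeword is the trivial null-space element.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(3)
    n, m, k = 60, 30, 4                      # length, checks, number of errors
    H = rng.standard_normal((m, n))
    e = np.zeros(n)
    e[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = np.zeros(n) + e                      # received word = codeword + error

    c = np.ones(2 * n)                       # minimize sum(u) + sum(v) = ||e||_1
    A_eq = np.hstack([H, -H])                # H(u - v) = H y
    res = linprog(c, A_eq=A_eq, b_eq=H @ y, bounds=[(0, None)] * (2 * n))
    e_hat = res.x[:n] - res.x[n:]
    print("max recovery error:", np.abs(e_hat - e).max())
    ```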

  12. Threshold quantum secret sharing based on single qubit

    NASA Astrophysics Data System (ADS)

    Lu, Changbin; Miao, Fuyou; Meng, Keju; Yu, Yue

    2018-03-01

    Based on a unitary phase shift operation on a single qubit in association with Shamir's (t, n) secret sharing, a (t, n) threshold quantum secret sharing scheme (or (t, n)-QSS) is proposed to share both classical information and quantum states. The scheme uses decoy photons to prevent eavesdropping and employs the secret in Shamir's scheme as the private value to guarantee the correctness of secret reconstruction. Analyses show it is resistant to the typical intercept-and-resend attack, the entangle-and-measure attack and participant attacks such as the entanglement-swapping attack. Moreover, it is easier to realize physically and more practical in applications than related schemes. By the method in our scheme, new (t, n)-QSS schemes can easily be constructed using other classical (t, n) secret sharing schemes.
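
    The classical ingredient is worth spelling out. The sketch below implements Shamir's (t, n) sharing over a prime field in plain Python: shares are points on a random degree-(t-1) polynomial with p(0) equal to the secret, and any t of them recover the secret by Lagrange interpolation at zero.

    ```python
    # Shamir (t, n) secret sharing over a prime field.
    import random

    P = 2 ** 127 - 1  # a Mersenne prime field order

    def share(secret, t, n):
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def p(x):
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % P
            return acc
        return [(i, p(i)) for i in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers p(0) = secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(123456789, t=3, n=5)
    print(reconstruct(shares[:3]) == 123456789)   # any 3 of 5 suffice: True
    ```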

  13. Multiple-correction hybrid k-exact schemes for high-order compressible RANS-LES simulations on fully unstructured grids

    NASA Astrophysics Data System (ADS)

    Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe

    2017-12-01

    A Godunov-type unstructured finite volume method suitable for highly compressible, turbulent, scale-resolving simulations around complex geometries is constructed by using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth order are investigated, with focus on their resolvability in terms of the number of mesh points required to resolve a given wavelength accurately. Afterwards, with the aim of achieving the best possible trade-off between accuracy, computational cost and robustness in view of industrial flow computations, we focus more specifically on the third-order accurate scheme of the family and modify its numerical flux locally in order to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted in order to guarantee numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations. In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks, vortical structures and complex geometries.
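
    The flux-switching idea can be boiled down to one interface. The sketch below (linear advection in 1-D, with an assumed tanh switch shape and threshold rather than the paper's analytical function) blends an upwind and a centered flux according to a local grid Reynolds number.

    ```python
    # Hybrid upwind/centered flux sketch driven by a local grid Reynolds
    # number. Switch shape and threshold are illustrative assumptions.
    import numpy as np

    def hybrid_flux(u_left, u_right, a, dx, nu, re_star=2.0):
        """Numerical flux for f(u) = a*u at one cell interface."""
        upwind = a * (u_left if a > 0 else u_right)
        centered = 0.5 * a * (u_left + u_right)
        re_grid = abs(a) * dx / nu                      # local grid Reynolds number
        s = 0.5 * (1.0 + np.tanh(re_grid - re_star))    # 0 = centered, 1 = upwind
        return s * upwind + (1.0 - s) * centered

    # Well-resolved interface (low Re_grid): flux is mostly the centered one.
    print(hybrid_flux(1.0, 0.5, a=1.0, dx=0.01, nu=0.05))
    ```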

  14. Analytical and numerical analysis of frictional damage in quasi brittle materials

    NASA Astrophysics Data System (ADS)

    Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.

    2016-07-01

    Frictional sliding and crack growth are the two main dissipation processes in quasi-brittle materials. The frictional sliding along closed cracks is the origin of macroscopic plastic deformation, while the crack growth induces material damage. The main difficulty of modeling is to account for the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed, but there are so far no analytical solutions, even for simple loading paths, against which such algorithms can be validated. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi-brittle materials. The model is formulated by combining a linear homogenization procedure based on the Mori-Tanaka scheme with the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations are developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are implemented. The first one involves a coupled friction/damage correction scheme, which is consistent with the coupled nature of the constitutive model. The second one contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupled correction scheme is efficient and guarantees systematic numerical convergence.

  15. Intelligent Power Swing Detection Scheme to Prevent False Relay Tripping Using S-Transform

    NASA Astrophysics Data System (ADS)

    Mohamad, Nor Z.; Abidin, Ahmad F.; Musirin, Ismail

    2014-06-01

    Distance relays are equipped with an out-of-step tripping scheme to ensure correct operation during power swings. The out-of-step condition results from an unstable power swing. It requires proper detection of the power swing to initiate a tripping signal, followed by separation of the unstable part from the rest of the power system. Distinguishing unstable swings from stable swings is a challenging task. This paper presents an intelligent approach to detect power swings based on the S-Transform signal processing tool. The proposed scheme is based on S-Transform features of the active power at the distance relay measurement point. It is demonstrated that the proposed scheme is able to detect and discriminate unstable swings from stable swings occurring in the system. To ascertain the validity of the proposed scheme, simulations were carried out on the IEEE 39-bus system and its performance was compared with a wavelet transform-based power swing detection scheme.
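
    For readers unfamiliar with the S-Transform, the sketch below gives a minimal discrete implementation via its standard frequency-domain form, S[n, :] = IFFT(X[(m + n) mod N] · exp(-2π²m²/n²)), applied to a synthetic active-power signal; the paper's detection features and thresholds are not reproduced.

    ```python
    # Minimal discrete S-transform (Stockwell transform). A power swing would
    # appear as a slow, smooth modulation in the low-frequency rows of |S|.
    import numpy as np

    def stockwell(x):
        N = len(x)
        X = np.fft.fft(x)
        S = np.zeros((N // 2 + 1, N), dtype=complex)
        S[0, :] = np.mean(x)                  # zero-frequency row by convention
        m = np.arange(N)
        m[m > N // 2] -= N                    # centered frequency offsets
        for n in range(1, N // 2 + 1):
            gauss = np.exp(-2.0 * np.pi ** 2 * m ** 2 / n ** 2)
            S[n, :] = np.fft.ifft(np.roll(X, -n) * gauss)
        return S

    t = np.linspace(0, 1, 256, endpoint=False)
    p = 1.0 + 0.3 * np.sin(2 * np.pi * 2 * t)  # stand-in for swinging active power
    print(np.abs(stockwell(p)).shape)          # (129, 256) time-frequency map
    ```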

  16. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.

  17. Potential Energy Surface for Large Barrierless Reaction Systems: Application to the Kinetic Calculations of the Dissociation of Alkanes and the Reverse Recombination Reactions.

    PubMed

    Yao, Qian; Cao, Xiao-Mei; Zong, Wen-Gang; Sun, Xiao-Hui; Li, Ze-Rong; Li, Xiang-Yuan

    2018-05-31

    The isodesmic reaction method is applied to calculate the potential energy surface (PES) along the reaction coordinate and the rate constants of barrierless reactions, namely the unimolecular dissociation reactions of alkanes into two alkyl radicals and their reverse recombination reactions. The reaction class is divided into 10 subclasses depending on the type of carbon atoms in the reaction centers. A correction scheme based on isodesmic reaction theory is proposed to correct the PESs at the UB3LYP/6-31+G(d,p) level. To validate the accuracy of this scheme, the PESs at the B3LYP level and the corrected PESs are compared with the PESs at the CASPT2/aug-cc-pVTZ level for 13 representative reactions; the deviations of the PESs at the B3LYP level are up to 35.18 kcal/mol and are reduced to within 2 kcal/mol after correction, indicating that the PESs for barrierless reactions in a subclass can be calculated with meaningful accuracy at a low level of ab initio theory using our correction scheme. High-pressure-limit rate constants and pressure-dependent rate constants of these reactions are calculated based on the corrected PESs, and the results show that the pressure dependence of the rate constants cannot be ignored, especially at high temperatures. Furthermore, the impact of molecular size on the pressure-dependent rate constants of the decomposition reactions of alkanes and their reverse reactions is studied. The present work provides an effective method for generating meaningfully accurate PESs for large molecular systems.

  18. Detection and Attribution of Simulated Climatic Extreme Events and Impacts: High Sensitivity to Bias Correction

    NASA Astrophysics Data System (ADS)

    Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.

    2015-12-01

    Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies, most of which have been criticized for physical inconsistency and non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in (i) climatological variables and (ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to the bias correction scheme, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme, built on a large regional climate model ensemble generated by the distributed weather@home setup [1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias-correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties. The present study constitutes a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/
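
    As a point of reference for the kind of bias correction being scrutinized, the numpy sketch below implements plain empirical quantile mapping (a routine scheme, not the ensemble-based resampling proposed in this work): each model value is mapped through the model's empirical CDF onto the observed distribution.

    ```python
    # Empirical quantile-mapping bias correction on synthetic data.
    import numpy as np

    def quantile_map(model, obs, values):
        """Map `values` from the model distribution onto the observed one."""
        q = np.searchsorted(np.sort(model), values) / len(model)  # empirical CDF
        return np.quantile(obs, np.clip(q, 0.0, 1.0))

    rng = np.random.default_rng(4)
    obs = rng.gamma(2.0, 2.0, 5000)             # "observed" climate variable
    model = rng.gamma(2.0, 2.5, 5000) + 1.0     # biased model output
    corrected = quantile_map(model, obs, model[:100])
    print(obs.mean(), model.mean(), corrected.mean())  # bias largely removed
    ```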

  19. Measurement and compensation schemes for the pulse front distortion of ultra-intensity ultra-short laser pulses

    NASA Astrophysics Data System (ADS)

    Wu, Fenxiang; Xu, Yi; Yu, Linpeng; Yang, Xiaojun; Li, Wenkai; Lu, Jun; Leng, Yuxin

    2016-11-01

    Pulse front distortion (PFD) is mainly induced by chromatic aberration in femtosecond high-peak-power laser systems; it temporally distorts the pulse in the focus and therefore decreases the peak intensity. A novel measurement scheme is proposed to directly measure the PFD of ultra-intense ultra-short laser pulses, which works without any extra effort to obtain a reference pulse and also greatly reduces the size of the optical elements required for the measurement. The measured PFD in an experimental 200 TW/27 fs laser system is in good agreement with the calculated result, which demonstrates the validity and feasibility of the method. In addition, a simple compensation scheme based on the combination of a concave lens and a parabolic lens is designed and proposed to correct the PFD. Theoretical calculation shows that the PFD of the above experimental laser system can be almost completely corrected by using this compensator with proper parameters.

  20. Implementation and benchmark of a long-range corrected functional in the density functional based tight-binding method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lutsker, V.; Niehaus, T. A., E-mail: thomas.niehaus@physik.uni-regensburg.de; Aradi, B.

    2015-11-14

    Bridging the gap between first-principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data.

  1. High-Order Methods for Computational Fluid Dynamics: A Brief Review of Compact Differential Formulations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.; Wang, Z. J.; Vincent, P. E.

    2013-01-01

    Popular high-order schemes with compact stencils for Computational Fluid Dynamics (CFD) include Discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV) methods. The recently proposed Flux Reconstruction (FR) approach or Correction Procedure using Reconstruction (CPR) is based on a differential formulation and provides a unifying framework for these high-order schemes. Here we present a brief review of recent developments for the FR/CPR schemes as well as some pacing items.

  2. Studies of Several New Modifications of Aggressive Packet Combining to Achieve Higher Throughput, Based on Correction Capability of Disjoint Error Vectors

    NASA Astrophysics Data System (ADS)

    Chakraborty, Swarnendu Kumar; Goswami, Rajat Subhra; Bhunia, Chandan Tilak; Bhunia, Abhinandan

    2016-06-01

    The aggressive packet combining (APC) scheme is well established in the literature. Several modifications have been studied earlier for improving throughput. In this paper, three new modifications of APC are proposed. The performance of the proposed modified APC is studied by simulation and reported here. A hybrid scheme is proposed for achieving higher throughput, and the disjoint factor of conventional APC is compared with that of the proposed schemes.
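
    The combining core of APC is simple enough to show directly. The numpy sketch below keeps three erroneous copies of a packet and takes a bitwise majority vote, which corrects every bit position corrupted in at most one copy; the proposed modifications and disjoint-error-vector logic are not reproduced.

    ```python
    # Bitwise majority voting over three received copies of the same packet,
    # the core operation of aggressive packet combining.
    import numpy as np

    rng = np.random.default_rng(5)
    packet = rng.integers(0, 2, 64)

    def noisy(p, ber=0.05):
        """Flip each bit independently with probability `ber`."""
        return p ^ (rng.random(p.size) < ber).astype(int)

    copies = np.stack([noisy(packet) for _ in range(3)])
    decoded = (copies.sum(axis=0) >= 2).astype(int)   # bitwise majority
    print("residual bit errors:", int((decoded != packet).sum()))
    ```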

  3. An Orbit And Dispersion Correction Scheme for the PEP II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Donald, M.; Shoaee, H.

    2011-09-01

    To achieve optimum luminosity in a storage ring it is vital to control the residual vertical dispersion. In the original PEP storage ring, a scheme to control the residual dispersion function was implemented using the ring orbit as the controlling element, the 'best' orbit not necessarily being the one that gives the lowest vertical dispersion. A similar scheme has been implemented in both the on-line control code and the simulation code LEGO. The method involves finding the response matrices (the sensitivity of the orbit/dispersion at each Beam-Position-Monitor (BPM) to each orbit corrector) and solving in a least-squares sense for minimum orbit, dispersion function or both. The optimum solution is usually a subset of the full least-squares solution. A scheme for simultaneously correcting the orbits and dispersion has been implemented in the simulation code and on-line control system for PEP-II. The scheme is based on the eigenvector decomposition method. An important ingredient of the scheme is to choose the optimum eigenvectors that minimize the orbit, dispersion and corrector strength. Simulations indicate this to be a very effective way to control the vertical residual dispersion.
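
    The eigenvector-decomposition step amounts to a truncated-SVD least-squares solve. The numpy sketch below stacks orbit and dispersion response matrices (randomly generated stand-ins for measured ones), weights the two objectives, and keeps only the leading singular vectors so that corrector strengths stay small.

    ```python
    # Truncated-SVD solve for simultaneous orbit and dispersion correction.
    # Response matrices and readings are random placeholders.
    import numpy as np

    rng = np.random.default_rng(6)
    n_bpm, n_corr = 40, 20
    R_orbit = rng.standard_normal((n_bpm, n_corr))
    R_disp = rng.standard_normal((n_bpm, n_corr))
    orbit, disp = rng.standard_normal(n_bpm), rng.standard_normal(n_bpm)

    w = 0.5                                     # relative weight of dispersion
    A = np.vstack([R_orbit, w * R_disp])
    b = np.concatenate([orbit, w * disp])

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 12                                      # number of eigenvectors kept
    theta = -Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])   # corrector strengths
    print("orbit norm after correction:", np.linalg.norm(orbit + R_orbit @ theta))
    ```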

  4. A modified non-binary LDPC scheme based on watermark symbols in high speed optical transmission systems

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo

    2016-04-01

    We introduce a watermark non-binary low-density parity-check (NB-LDPC) code scheme, which can estimate the time-varying noise variance by using prior information from watermark symbols, to improve the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 1e-6 and a 36.8-81% reduction in the number of iterations. The proposed scheme thus shows great potential in terms of error correction performance and decoding efficiency.
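
    The estimation idea in miniature: interleave known pilot (watermark) symbols into the stream and estimate the local noise variance from the residuals at those positions, so the decoder's channel values can track a time-varying channel. The numpy sketch below uses BPSK rather than the paper's non-binary setting; all parameters are illustrative.

    ```python
    # Pilot-based tracking of a time-varying noise variance (BPSK stand-in
    # for the non-binary case; decoder LLR computation omitted).
    import numpy as np

    rng = np.random.default_rng(7)
    n, period = 4000, 20
    tx = 1.0 - 2.0 * rng.integers(0, 2, n)      # BPSK symbols +/-1
    tx[::period] = 1.0                          # known watermark positions
    sigma_true = 0.5 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, n))
    rx = tx + sigma_true * rng.standard_normal(n)

    resid = rx[::period] - 1.0                  # known transmitted value is +1
    sigma2_hat = np.convolve(resid ** 2, np.ones(25) / 25, mode="same")
    print("estimated sigma^2:", sigma2_hat.mean(), "true:", (sigma_true ** 2).mean())
    ```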

  5. TripSense: A Trust-Based Vehicular Platoon Crowdsensing Scheme with Privacy Preservation in VANETs

    PubMed Central

    Hu, Hao; Lu, Rongxing; Huang, Cheng; Zhang, Zonghua

    2016-01-01

    In this paper, we propose a trust-based vehicular platoon crowdsensing scheme, named TripSense, in VANET. The proposed TripSense scheme introduces a trust-based system to evaluate vehicles' sensing abilities and then selects the more capable vehicles in order to improve the accuracy of sensing results. In addition, the sensing tasks are accomplished by platoon member vehicles and preprocessed by platoon head vehicles before the data are uploaded to the server. Hence, it is less time-consuming and more efficient than schemes in which the data are submitted by individual platoon member vehicles, and is therefore more suitable for ephemeral networks like VANET. Moreover, our proposed TripSense scheme integrates unlinkable pseudo-ID techniques to achieve PM vehicle identity privacy, and employs a privacy-preserving sensing vehicle selection scheme that does not involve the PM vehicle's trust score, in order to keep its location private. Detailed security analysis shows that our proposed TripSense scheme not only achieves the desirable privacy requirements but also resists attacks launched by adversaries. In addition, extensive simulations are conducted to show the correctness and effectiveness of our proposed scheme. PMID:27258287

  7. Power corrections in the N -jettiness subtraction scheme

    DOE PAGES

    Boughezal, Radja; Liu, Xiaohui; Petriello, Frank

    2017-03-30

    We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both $q\bar{q}$ and gg initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss what features of our techniques extend to processes containing final-state jets.

  8. Automatic Approach for Lung Segmentation with Juxta-Pleural Nodules from Thoracic CT Based on Contour Tracing and Correction.

    PubMed

    Wang, Jinke; Guo, Haoyan

    2016-01-01

    This paper presents a fully automatic framework for lung segmentation, in which the juxta-pleural nodule problem is brought into strong focus. The proposed scheme consists of three phases: skin boundary detection, rough segmentation of the lung contour, and pulmonary parenchyma refinement. Firstly, the chest skin boundary is extracted through image alignment, morphology operations, and connected-region analysis. Secondly, diagonal-based border tracing is implemented for lung contour segmentation, with a maximum-cost-path algorithm used for separating the left and right lungs. Finally, by arc-based border smoothing and concavity-based border correction, the refined pulmonary parenchyma is obtained. The proposed scheme is evaluated on 45 volumes of chest scans, with a volume difference (VD) of 11.15 ± 69.63 cm³, volume overlap error (VOE) of 3.5057 ± 1.3719%, average surface distance (ASD) of 0.7917 ± 0.2741 mm, root mean square distance (RMSD) of 1.6957 ± 0.6568 mm, maximum symmetric absolute surface distance (MSD) of 21.3430 ± 8.1743 mm, and an average time cost of 2 seconds per image. The preliminary results on accuracy and complexity show that our scheme is a promising tool for lung segmentation with juxta-pleural nodules.

  9. Solar multi-conjugate adaptive optics based on high order ground layer adaptive optics and low order high altitude correction.

    PubMed

    Zhang, Lanqiang; Guo, Youming; Rao, Changhui

    2017-02-20

    Multi-conjugate adaptive optics (MCAO) is the most promising technique currently developed to enlarge the corrected field of view of adaptive optics for astronomy. In this paper, we propose a new configuration of solar MCAO based on high-order ground layer adaptive optics and low-order high-altitude correction, which results in a homogeneous correction effect across the whole field of view. An individual high-order multi-direction Shack-Hartmann wavefront sensor is employed in the configuration to detect the ground layer turbulence for low-altitude correction. A second, low-order multi-direction Shack-Hartmann wavefront sensor supplies the wavefront information caused by the high layers' turbulence, through atmospheric tomography, for high-altitude correction. Simulation results based on the system design at the 1-meter New Vacuum Solar Telescope show that the correction uniformity of the new scheme is clearly improved compared to the conventional solar MCAO configuration.

  10. Practical scheme for error control using feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarovar, Mohan; Milburn, Gerard J.; Ahn, Charlene

    2004-05-01

    We describe a scheme for quantum-error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn et al., Phys. Rev. A 65, 042301 (2001)] is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.

  11. Benefits of a 4th Ice Class in the Simulated Radar Reflectivities of Convective Systems Using a Bulk Microphysics Scheme

    NASA Technical Reports Server (NTRS)

    Lang, Stephen E.; Tao, Wei-Kuo; Chern, Jiun-Dar; Wu, Di; Li, Xiaowen

    2015-01-01

    Numerous cloud microphysical schemes designed for cloud and mesoscale models are currently in use, ranging from simple bulk and multi-moment, multi-class schemes to explicit bin schemes. This study details the benefits of adding a 4th ice class (hail) to an already improved 3-class ice bulk microphysics scheme developed for the Goddard Cumulus Ensemble model based on Rutledge and Hobbs (1983, 1984). Besides the addition and modification of several hail processes from Lin et al. (1983), further modifications were made to the 3-ice processes, including allowing greater ice supersaturation and mitigating spurious evaporation/sublimation in the saturation adjustment scheme, allowing graupel/hail to become snow via vapor growth and hail to become graupel via riming, and including a rain evaporation correction and a vapor diffusivity factor. The improved 3-ice snow/graupel size-mapping schemes were adjusted to be more stable at higher mixing ratios and to increase the aggregation effect for snow. A snow density mapping was also added. The new scheme was applied to an intense continental squall line and a weaker, loosely organized continental case using three different hail intercepts. Peak simulated reflectivities agree well with radar for both the intense and the weaker case, and were better than earlier 3-ice versions when using a moderate and a large intercept for hail, respectively. Simulated reflectivity distributions versus height also agreed better with radar in both cases compared to earlier 3-ice versions. The bin-based rain evaporation correction affected the squall line case more but did not change the overall agreement in reflectivity distributions.

  13. Low- and high-order accurate boundary conditions: From Stokes to Darcy porous flow modeled with standard and improved Brinkman lattice Boltzmann schemes

    NASA Astrophysics Data System (ADS)

    Silva, Goncalo; Talon, Laurent; Ginzburg, Irina

    2017-04-01

    The present contribution focuses on the accuracy of reflection-type boundary conditions in the Stokes-Brinkman-Darcy modeling of porous flows solved with the lattice Boltzmann method (LBM), which we operate with the two-relaxation-time (TRT) collision and the Brinkman-force based scheme (BF), called BF-TRT scheme. In parallel, we compare it with the Stokes-Brinkman-Darcy linear finite element method (FEM) where the Dirichlet boundary conditions are enforced on grid vertices. In bulk, both BF-TRT and FEM share the same defect: in their discretization a correction to the modeled Brinkman equation appears, given by the discrete Laplacian of the velocity-proportional resistance force. This correction modifies the effective Brinkman viscosity, playing a crucial role in the triggering of spurious oscillations in the bulk solution. While the exact form of this defect is available in lattice-aligned, straight or diagonal, flows; in arbitrary flow/lattice orientations its approximation is constructed. At boundaries, we verify that such a Brinkman viscosity correction has an even more harmful impact. Already at the first order, it shifts the location of the no-slip wall condition supported by traditional LBM boundary schemes, such as the bounce-back rule. For that reason, this work develops a new class of boundary schemes to prescribe the Dirichlet velocity condition at an arbitrary wall/boundary-node distance and that supports a higher order accuracy in the accommodation of the TRT-Brinkman solutions. For their modeling, we consider the standard BF scheme and its improved version, called IBF; this latter is generalized in this work to suppress or to reduce the viscosity correction in arbitrarily oriented flows. Our framework extends the one- and two-point families of linear and parabolic link-wise boundary schemes, respectively called B-LI and B-MLI, which avoid the interference of the Brinkman viscosity correction in their closure relations. The performance of LBM and FEM is thoroughly evaluated in three benchmark tests, which are run throughout three distinctive permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which brings in as new challenge the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise due to a deficient accommodation of the bulk solution by the low-accurate boundary scheme. The third problem considers a porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of grid resolution and the TRT free collision parameter over the accuracy and the quality of the velocity field, spanning from Stokes to Darcy permeability regimes. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing hardworking body-fitted meshes.

  14. Computational method for the correction of proximity effect in electron-beam lithography (Poster Paper)

    NASA Astrophysics Data System (ADS)

    Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas

    1992-07-01

    Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme, shown in Figure 1, has been devised to eliminate this problem by deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size of 0.5 micrometers, exposed on a MEBES tool at 10 keV in 0.2 micrometers of PMMA resist on a silicon substrate.
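
    The pattern-domain iteration can be sketched in a few lines. Below, exposure is modeled as convolution of the dose map with a Gaussian point spread function (scipy.ndimage; the blur width, step size and pattern are illustrative assumptions), and the dose is updated by steepest descent on the squared exposure error with non-negative clipping.

    ```python
    # Steepest-descent deconvolution sketch for proximity-effect dose
    # correction: minimize ||PSF * d - target||^2 over non-negative doses.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    target = np.zeros((128, 128))
    target[48:80, 60:68] = 1.0                   # a narrow line feature, notionally

    def forward(dose, sigma=3.0):
        return gaussian_filter(dose, sigma)      # PSF * dose

    dose = target.copy()
    for _ in range(200):                         # steepest-descent iterations
        resid = forward(dose) - target
        grad = forward(resid)                    # Gaussian PSF is self-adjoint
        dose = np.clip(dose - 1.5 * grad, 0.0, None)

    print("max exposure error:", np.abs(forward(dose) - target).max())
    ```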

  15. Security enhancement of a biometric based authentication scheme for telecare medicine information systems with nonce.

    PubMed

    Mishra, Dheerendra; Mukhopadhyay, Sourav; Kumari, Saru; Khan, Muhammad Khurram; Chaturvedi, Ankita

    2014-05-01

    Telecare medicine information systems (TMIS) provide a platform to deliver clinical services door to door. The technological advances in mobile computing are enhancing the quality of healthcare, and a user can access these services using a mobile device. However, the user and the telecare system communicate via public channels in these online services, which increases the security risk. Therefore, it must be ensured that only authorized users access the system and that each user interacts with the correct system. Mutual authentication provides the way to achieve this. Existing schemes are either vulnerable to attacks or have a high computational cost, whereas a scalable authentication scheme for mobile devices should be both secure and efficient. Recently, Awasthi and Srivastava presented a biometric-based authentication scheme for TMIS with nonce. Their scheme only requires the computation of hash and XOR functions and thus fits TMIS. However, we observe that Awasthi and Srivastava's scheme does not achieve an efficient password change phase. Moreover, their scheme does not resist off-line password guessing attacks. We therefore propose an improvement of Awasthi and Srivastava's scheme with the aim of removing these drawbacks.
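
    To give a flavor of what hash-and-XOR protocols compute (a generic sketch, not Awasthi and Srivastava's actual protocol; all message fields are illustrative), the Python below shows a card masking its identity with a hash of a shared secret and a fresh nonce, and the server unmasking and verifying.

    ```python
    # Generic hash/XOR challenge-response sketch with hypothetical fields.
    import hashlib, os

    def h(*parts):
        return hashlib.sha256(b"|".join(parts)).digest()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    secret = os.urandom(32)                     # shared card/server secret
    identity = b"patient-0001"

    # Card side: fresh nonce, masked identity, authenticator tag.
    nonce = os.urandom(16)
    mask = h(secret, nonce)
    msg = (nonce, xor(identity.ljust(32, b"\0"), mask), h(identity, secret, nonce))

    # Server side: unmask the identity and verify the tag.
    nonce_r, masked, tag = msg
    ident = xor(masked, h(secret, nonce_r)).rstrip(b"\0")
    print(tag == h(ident, secret, nonce_r))     # True for a legitimate card
    ```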

  16. The role of the van der Waals interactions in the adsorption of anthracene and pentacene on the Ag(111) surface

    NASA Astrophysics Data System (ADS)

    Morbec, Juliana M.; Kratzer, Peter

    2017-01-01

    Using first-principles calculations based on density-functional theory (DFT), we investigated the effects of the van der Waals (vdW) interactions on the structural and electronic properties of anthracene and pentacene adsorbed on the Ag(111) surface. We found that the inclusion of vdW corrections strongly affects the binding of both anthracene/Ag(111) and pentacene/Ag(111), yielding adsorption heights and energies more consistent with the experimental results than standard DFT calculations with generalized gradient approximation (GGA). For anthracene/Ag(111) the effect of the vdW interactions is even more dramatic: we found that "pure" DFT-GGA calculations (without including vdW corrections) result in preference for a tilted configuration, in contrast to the experimental observations of flat-lying adsorption; including vdW corrections, on the other hand, alters the binding geometry of anthracene/Ag(111), favoring the flat configuration. The electronic structure obtained using a self-consistent vdW scheme was found to be nearly indistinguishable from the conventional DFT electronic structure once the correct vdW geometry is employed for these physisorbed systems. Moreover, we show that a vdW correction scheme based on a hybrid functional DFT calculation (HSE) results in an improved description of the highest occupied molecular level of the adsorbed molecules.

  17. Verifier-based three-party authentication schemes using extended chaotic maps for data exchange in telecare medicine information systems.

    PubMed

    Lee, Tian-Fu

    2014-12-01

    Telecare medicine information systems provide a communication platform for accessing remote medical resources through public networks, and help health care workers and medical personnel rapidly make correct clinical decisions and treatments. An authentication scheme for data exchange in telecare medicine information systems enables legal users in hospitals and medical institutes to establish a secure channel and exchange electronic medical records or electronic health records securely and efficiently. This investigation develops an efficient and secure verifier-based three-party authentication scheme using extended chaotic maps for data exchange in telecare medicine information systems. The proposed scheme does not require the server's public keys and avoids the time-consuming modular exponentiations and elliptic curve scalar multiplications used in previous related approaches. Additionally, the proposed scheme is proven secure in the random oracle model, and realizes the lower bounds on messages and rounds in communications. Compared to related verifier-based approaches, the proposed scheme not only possesses higher security, but also has a lower computational cost and fewer transmissions.
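
    The algebraic property that extended chaotic-map schemes rest on is the semigroup relation of Chebyshev polynomials, T_r(T_s(x)) = T_rs(x), which enables Diffie-Hellman-style key agreement. The sketch below verifies it numerically over the reals; practical schemes use the extended maps over finite fields for security.

    ```python
    # Chebyshev polynomial semigroup property, the basis of chaotic-map
    # key agreement: both parties derive the same T_{rs}(x).
    import numpy as np

    def T(n, x):
        return np.cos(n * np.arccos(x))    # Chebyshev polynomial of degree n

    x = 0.3                                # public parameter
    r, s = 17, 29                          # private keys of the two parties
    print(np.isclose(T(r, T(s, x)), T(r * s, x)))   # True: shared key matches
    ```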

  18. Classification of ring artifacts for their effective removal using type adaptive correction schemes.

    PubMed

    Anas, Emran Mohammad Abu; Lee, Soo Yeol; Hasan, Kamrul

    2011-06-01

    High-resolution tomographic images acquired with a digital X-ray detector are often degraded by so-called ring artifacts. In this paper, a detailed analysis covering the classification, detection and correction of these ring artifacts is presented. First, a novel idea for classifying rings into two categories, namely type I and type II rings, is proposed based on their statistical characteristics: defective detector elements and dusty scintillator screens give rise to type I rings, while mis-calibrated detector elements lead to type II rings. Unlike conventional approaches, we emphasize separate detection and correction schemes for each type of ring for effective removal. For the detection of type I rings, the histogram of the responses of the detector elements is used, and a modified fast image inpainting algorithm is adopted to correct the responses of the defective pixels. To detect type II rings, a simple filtering scheme based on the fast Fourier transform (FFT) is first applied to smooth the sum curve derived from the type I ring-corrected projection data; the difference between the sum curve and its smoothed version is then used to detect their positions. Then, to remove the constant bias suffered by the responses of the mis-calibrated detector elements with view angle, an estimated dc shift is subtracted from them. The performance of the proposed algorithm is evaluated using real micro-CT images and is compared with three recently reported algorithms. The results demonstrate superior performance of the proposed technique as compared to the techniques reported in the literature. Copyright © 2011 Elsevier Ltd. All rights reserved.
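
    The type II detection and correction steps can be sketched compactly. The following is a simplified reading of the pipeline described in the abstract, with illustrative tuning values for the cutoff and threshold:

```python
import numpy as np

def type2_ring_correction(proj, cutoff=10, thresh=3.0):
    """proj: sinogram of shape (views, detector_elements). Detect detector
    elements whose summed response deviates from a low-pass (FFT) smoothed
    sum curve, then subtract the estimated constant bias from them.
    `cutoff` and `thresh` are illustrative tuning values."""
    sum_curve = proj.sum(axis=0)                 # response summed over views
    spec = np.fft.rfft(sum_curve)
    spec[cutoff:] = 0.0                          # keep low frequencies only
    smooth = np.fft.irfft(spec, n=sum_curve.size)
    diff = sum_curve - smooth                    # type II rings stand out here
    bad = np.abs(diff) > thresh * diff.std()     # mis-calibrated elements
    out = proj.astype(float).copy()
    out[:, bad] -= diff[bad] / proj.shape[0]     # remove per-view dc shift
    return out, np.where(bad)[0]
```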

  19. Application of a net-based baseline correction scheme to strong-motion records of the 2011 Mw 9.0 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Wang, Rongjiang; Zhang, Yong; Walter, Thomas R.

    2014-06-01

    The description of static displacements associated with earthquakes is traditionally achieved using GPS, EDM or InSAR data. In addition, displacement histories can be derived from strong-motion records, allowing an improvement of geodetic networks at a high sampling rate and a better physical understanding of earthquake processes. Strong-motion records require a correction procedure appropriate for baseline shifts that may be caused by rotational motion, tilting and other instrumental effects. Common methods use an empirical bilinear correction on the velocity seismograms integrated from the strong-motion records. In this study, we overcome the weaknesses of an empirically based bilinear baseline correction scheme by using a net-based criterion to select the timing parameters. This idea is based on the physical principle that low-frequency seismic waveforms at neighbouring stations are coherent if the interstation distance is much smaller than the distance to the seismic source. For a dense strong-motion network, it is plausible to select the timing parameters so that the correlation coefficient between the velocity seismograms of two neighbouring stations is maximized after the baseline correction. We applied this new concept to the KiK-Net and K-Net strong-motion data available for the 2011 Mw 9.0 Tohoku earthquake. We compared the derived coseismic static displacement with high-quality GPS data, and with the results obtained using empirical methods. The results show that the proposed net-based approach is feasible and more robust than the individual empirical approaches. The outliers caused by unknown problems in the measurement system can be easily detected and quantified.
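
    A toy version of the net-based criterion can be written down directly: apply a candidate bilinear baseline correction to the velocity seismograms of two neighbouring stations and keep the timing parameters that maximize their correlation. The sketch below assumes a single station pair and a precomputed candidate list; the fitting of the baseline slopes is schematic, not the paper's exact procedure:

```python
import numpy as np

def baseline_correct(vel, t, t1, t2):
    """Remove a bilinear baseline from a velocity seismogram: zero before t1,
    one linear ramp on [t1, t2], and the trend fitted to the data after t2.
    A schematic stand-in for the empirical corrections cited in the paper."""
    v = vel.copy()
    tail = t >= t2
    a_f, b_f = np.polyfit(t[tail], v[tail], 1)           # trend after t2
    v[tail] -= a_f * t[tail] + b_f
    mid = (t >= t1) & (t < t2)
    ramp = (a_f * t2 + b_f) * (t[mid] - t1) / (t2 - t1)  # connect 0 -> v(t2)
    v[mid] -= ramp
    return v

def pick_timing(vel_a, vel_b, t, candidates):
    """Net-based criterion: choose (t1, t2) maximizing the correlation of the
    corrected velocity seismograms of two neighbouring stations."""
    best, best_cc = None, -np.inf
    for t1, t2 in candidates:
        cc = np.corrcoef(baseline_correct(vel_a, t, t1, t2),
                         baseline_correct(vel_b, t, t1, t2))[0, 1]
        if cc > best_cc:
            best, best_cc = (t1, t2), cc
    return best, best_cc
```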

  20. Laser line illumination scheme allowing the reduction of background signal and the correction of absorption heterogeneities effects for fluorescence reflectance imaging.

    PubMed

    Fantoni, Frédéric; Hervé, Lionel; Poher, Vincent; Gioux, Sylvain; Mars, Jérôme I; Dinten, Jean-Marc

    2015-10-01

    Intraoperative fluorescence imaging in reflectance geometry is an attractive imaging modality, as it allows noninvasive monitoring of fluorescence-targeted tumors located below the tissue surface. Drawbacks of this technique are background fluorescence, which decreases the contrast, and absorption heterogeneities, which lead to misinterpretations concerning fluorescence concentrations. We propose a correction technique based on a laser line scanning illumination scheme. We scan the medium with the laser line and acquire, at each position of the line, both fluorescence and excitation images. We then exploit the relationship between the excitation intensity profile and the background fluorescence profile to predict the amount of signal to subtract from the fluorescence images to obtain a better contrast. As the light absorption information is contained in both fluorescence and excitation images, this method also permits us to correct the effects of absorption heterogeneities. The technique has been validated in simulations and experiments. Fluorescent inclusions are observed in several configurations at depths ranging from 1 mm to 1 cm. Results obtained with this technique are compared with those obtained with a classical wide-field detection scheme for contrast enhancement, and with a fluorescence-by-excitation ratio approach for absorption correction.

  1. Symmetric weak ternary quantum homomorphic encryption schemes

    NASA Astrophysics Data System (ADS)

    Wang, Yuqi; She, Kun; Luo, Qingbin; Yang, Fan; Zhao, Chao

    2016-03-01

    Based on a ternary quantum logic circuit, four symmetric weak ternary quantum homomorphic encryption (QHE) schemes were proposed. First, for a one-qutrit rotation gate, a QHE scheme was constructed. Second, in view of the synthesis of a general 3 × 3 unitary transformation, another one-qutrit QHE scheme was proposed. Third, according to the one-qutrit scheme, the two-qutrit QHE scheme for the generalized controlled X (GCX(m,n)) gate was constructed and further generalized to the n-qutrit unitary matrix case. Finally, the security of these schemes was analyzed in two respects. It can be concluded that an attacker can correctly guess the encryption key with a maximum probability p_k = 1/3^{3n}, so these schemes can well protect the privacy of users' data. Moreover, these schemes can be well integrated into the future quantum remote server architecture, and thus the computational security of the users' private quantum information can be well protected in a distributed computing environment.

  2. Calculation of Derivative Thermodynamic Hydration and Aqueous Partial Molar Properties of Ions Based on Atomistic Simulations.

    PubMed

    Dahlgren, Björn; Reif, Maria M; Hünenberger, Philippe H; Hansen, Niels

    2012-10-09

    The raw ionic solvation free energies calculated on the basis of atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions and treatment of electrostatic interactions used during these simulations. However, as shown recently [Kastenholz, M. A.; Hünenberger, P. H. J. Chem. Phys. 2006, 124, 224501 and Reif, M. M.; Hünenberger, P. H. J. Chem. Phys. 2011, 134, 144104], the application of an appropriate correction scheme allows for a conversion of the methodology-dependent raw data into methodology-independent results. In this work, methodology-independent derivative thermodynamic hydration and aqueous partial molar properties are calculated for the Na+ and Cl- ions at P° = 1 bar and T° = 298.15 K, based on the SPC water model and on ion-solvent Lennard-Jones interaction coefficients previously reoptimized against experimental hydration free energies. The hydration parameters considered are the hydration free energy and enthalpy. The aqueous partial molar parameters considered are the partial molar entropy, volume, heat capacity, volume-compressibility, and volume-expansivity. Two alternative calculation methods are employed to access these properties. Method I relies on the difference in average volume and energy between two aqueous systems involving the same number of water molecules, either in the absence or in the presence of the ion, along with variations of these differences corresponding to finite pressure or/and temperature changes. Method II relies on the calculation of the hydration free energy of the ion, along with variations of this free energy corresponding to finite pressure or/and temperature changes. Both methods are used considering two distinct variants in the application of the correction scheme. In variant A, the raw values from the simulations are corrected after the application of finite differences in pressure or/and temperature, based on correction terms specifically designed for derivative parameters at P° and T°. In variant B, these raw values are corrected prior to differentiation, based on corresponding correction terms appropriate for the different simulation pressures P and temperatures T. The results corresponding to the different calculation schemes show that, except for the hydration free energy itself, accurate methodological independence and quantitative agreement with even the most reliable experimental parameters (ion-pair properties) are not yet reached. Nevertheless, approximate internal consistency and qualitative agreement with experimental results can be achieved, but only when an appropriate correction scheme is applied, along with a careful consideration of standard-state issues. In this sense, the main merit of the present study is to set a clear framework for these types of calculations and to point toward directions for future improvements, with the ultimate goal of reaching a consistent and quantitative description of single-ion hydration thermodynamics in molecular dynamics simulations.
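
    As a schematic of the Method-I style of calculation: the raw partial molar volume follows from the difference in average volume between the ion-containing and pure-water systems, and derivative properties follow from finite differences over simulations at shifted pressures or temperatures. All numbers below are hypothetical placeholders, not values from the paper:

```python
# Hypothetical average volumes (nm^3) from four simulations: with and
# without the ion, at two pressures bracketing P° = 1 bar.
V_ion = {"P_lo": 121.42, "P_hi": 121.37}   # N waters + one ion
V_wat = {"P_lo": 121.40, "P_hi": 121.36}   # N waters only

# Raw partial molar volume: difference in average volume at the low pressure.
v_bar = V_ion["P_lo"] - V_wat["P_lo"]

# Its pressure derivative (related to the partial molar volume-compressibility)
# follows from a finite difference in P across the two simulated pressures.
dP = 200.0  # bar, spacing between the two simulated pressures (illustrative)
dv_dP = ((V_ion["P_hi"] - V_wat["P_hi"])
         - (V_ion["P_lo"] - V_wat["P_lo"])) / dP
```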

  3. A Robust and Effective Smart-Card-Based Remote User Authentication Mechanism Using Hash Function

    PubMed Central

    Odelu, Vanga; Goswami, Adrijit

    2014-01-01

    In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and for mutual authentication the login user likewise validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using the password, the biometrics, and the smart card have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using smart card. Our scheme is efficient, because it uses only the efficient one-way hash function and bitwise XOR operations. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform the simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme efficiently supports the password change phase, which is always performed locally and correctly without contacting the remote server. In addition, our scheme performs significantly better than other existing schemes in terms of communication, computational overheads, security, and features provided. PMID:24892078

  4. A robust and effective smart-card-based remote user authentication mechanism using hash function.

    PubMed

    Das, Ashok Kumar; Odelu, Vanga; Goswami, Adrijit

    2014-01-01

    In a remote user authentication scheme, a remote server verifies whether a login user is genuine and trustworthy, and for mutual authentication the login user likewise validates whether the remote server is genuine and trustworthy. Several remote user authentication schemes using the password, the biometrics, and the smart card have been proposed in the literature. However, most schemes proposed in the literature are either computationally expensive or insecure against several known attacks. In this paper, we aim to propose a new robust and effective password-based remote user authentication scheme using smart card. Our scheme is efficient, because it uses only the efficient one-way hash function and bitwise XOR operations. Through rigorous informal and formal security analysis, we show that our scheme is secure against possible known attacks. We perform the simulation for the formal security analysis using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to ensure that our scheme is secure against passive and active attacks. Furthermore, our scheme efficiently supports the password change phase, which is always performed locally and correctly without contacting the remote server. In addition, our scheme performs significantly better than other existing schemes in terms of communication, computational overheads, security, and features provided.

  5. Five-wave-packet quantum error correction based on continuous-variable cluster entanglement

    PubMed Central

    Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi

    2015-01-01

    Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, enabling fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous-variable cluster entangled state of light are used as five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, so any error appearing in the remaining two channels never affects the output state, i.e., the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395

  6. Iterative Correction Scheme Based on Discrete Cosine Transform and L1 Regularization for Fluorescence Molecular Tomography With Background Fluorescence.

    PubMed

    Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen

    2016-06-01

    High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT) because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, a 3-D discrete cosine transform is adopted to filter the intermediate results, and a sparsity constraint step based on L1 regularization is then applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results and the detail preservation and noise suppression of the L1 regularization.
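
    One plausible reading of the per-iteration correction step is sketched below: suppress the smoothest DCT components of the intermediate volume (standing in for the smooth background fluorescence) and then apply an L1 proximal step. The cutoff `k` and threshold `lam` are illustrative, not values from the paper:

```python
import numpy as np
from scipy.fft import dctn, idctn

def soft_threshold(x, lam):
    """Proximal step enforcing the L1 sparsity constraint."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def correct_intermediate(x, k=3, lam=1e-3):
    """Filter one intermediate 3-D FMT reconstruction: zero the lowest-
    frequency DCT coefficients (a stand-in for smooth background), then
    apply the L1 step. `k` and `lam` are illustrative placeholders."""
    c = dctn(x, norm="ortho")
    c[:k, :k, :k] = 0.0          # remove the smoothest background components
    x_filt = idctn(c, norm="ortho")
    return soft_threshold(x_filt, lam)
```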

  7. Smoothing and the second law

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1987-01-01

    The technique of obtaining second-order, oscillation-free, total-variation-diminishing (TVD) scalar difference schemes by adding a limited diffusive flux ('smoothing') to a second-order centered scheme is explored. It is shown that such schemes do not always converge to the correct physical answer. The approach presented here is to construct schemes that numerically satisfy the second law of thermodynamics on a cell-by-cell basis. Such schemes can only converge to the correct physical solution and in some cases can be shown to be TVD. An explicit scheme with this property and second-order spatial accuracy was found to have an extremely restrictive time-step limitation. Switching to an implicit scheme removed the time-step limitation.
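
    For context, a minimal sketch of the class of schemes being examined: a limited slope turns a second-order flux into a TVD one for scalar advection. This is a generic minmod-limited MUSCL-type construction written under the assumptions of a uniform periodic grid and 0 < c <= 1, not the entropy-based scheme the paper develops:

```python
import numpy as np

def minmod(a, b):
    """Limited slope: zero at extrema, smallest slope elsewhere."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, c):
    """One step for u_t + a u_x = 0 on a periodic grid, CFL number
    c = a*dt/dx with 0 < c <= 1. The limiter adds just enough dissipation
    near extrema to keep the second-order flux oscillation-free."""
    um, up = np.roll(u, 1), np.roll(u, -1)
    slope = minmod(u - um, up - u)
    flux = u + 0.5 * (1.0 - c) * slope      # interface flux F_{i+1/2}
    return u - c * (flux - np.roll(flux, 1))
```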

  8. High-order flux correction/finite difference schemes for strand grids

    NASA Astrophysics Data System (ADS)

    Katz, Aaron; Work, Dalon

    2015-02-01

    A novel high-order method combining unstructured flux correction along body surfaces and high-order finite differences normal to surfaces is formulated for unsteady viscous flows on strand grids. The flux correction algorithm is applied in each unstructured layer of the strand grid, and the layers are then coupled together via a source term containing derivatives in the strand direction. Strand-direction derivatives are approximated to high-order via summation-by-parts operators for first derivatives and second derivatives with variable coefficients. We show how this procedure allows for the proper truncation error canceling properties required for the flux correction scheme. The resulting scheme possesses third-order design accuracy, but often exhibits fourth-order accuracy when higher-order derivatives are employed in the strand direction, especially for highly viscous flows. We prove discrete conservation for the new scheme and time stability in the absence of the flux correction terms. Results in two dimensions are presented that demonstrate improvements in accuracy with minimal computational and algorithmic overhead over traditional second-order algorithms.

  9. Future efficiency of run of the river hydropower schemes based on climate change scenarios: case study in UK catchments

    NASA Astrophysics Data System (ADS)

    Pasten Zapata, Ernesto; Moggridge, Helen; Jones, Julie; Widmann, Martin

    2017-04-01

    Run-of-the-River (ROR) hydropower schemes are expected to be significantly affected by climate change, as they rely on the availability of river flow to generate energy. As temperature and precipitation are expected to vary in the future, the hydrological cycle will also undergo changes. Therefore, climate models based on complex physical atmospheric interactions have been developed to simulate future climate scenarios as a function of the atmosphere's greenhouse gas concentrations. These scenarios are classified according to Representative Concentration Pathways (RCPs), which are defined by the concentration of greenhouse gases. This study evaluates possible scenarios for selected ROR hydropower schemes within the UK, considering three different RCPs: 2.6, 4.5 and 8.5 W/m2 for 2100 relative to pre-industrial values. The study sites cover different climate, land cover, topographic and hydropower scheme characteristics representative of the UK's heterogeneity. Precipitation and temperature outputs from state-of-the-art Regional Climate Models (RCMs) from the Euro-CORDEX project are used as input to a HEC-HMS hydrological model to simulate the future river flow available. Both uncorrected and bias-corrected RCM simulations are analyzed. The results of this project provide insight into the possible effects of climate change on power generation from ROR hydropower schemes under the different RCP scenarios, and contrast the results obtained from uncorrected and bias-corrected RCMs. This analysis can aid in adaptation to climate change as well as the planning of future ROR schemes in the region.
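
    The abstract does not state which bias-correction method is applied to the RCM output; one common choice is empirical quantile mapping, sketched here with hypothetical arrays of historical model output, observations, and future projections:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile-mapping bias correction: find the quantile at
    which each future model value sits in the historical model climate and
    map it onto the same quantile of the observed distribution."""
    model_sorted = np.sort(model_hist)
    obs_sorted = np.sort(obs_hist)
    # non-exceedance probability of each future value in the model climate
    p = np.searchsorted(model_sorted, model_future) / len(model_sorted)
    p = np.clip(p, 0.0, 1.0)
    return np.quantile(obs_sorted, p)
```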

  10. A genetic fuzzy analytical hierarchy process based projection pursuit method for selecting schemes of water transportation projects

    NASA Astrophysics Data System (ADS)

    Jin, Juliang; Li, Lei; Wang, Wensheng; Zhang, Ming

    2006-10-01

    The optimal selection of schemes of water transportation projects is the process of choosing a relatively optimal scheme from a number of candidate water transportation programming and management schemes, and is of importance in both theory and practice in water resource systems engineering. In order to achieve consistency and eliminate the dimensions of fuzzy qualitative and fuzzy quantitative evaluation indexes, to determine the weights of the indexes objectively, and to increase the differences among the comprehensive evaluation index values of water transportation project schemes, a projection pursuit method, named FPRM-PP for short, was developed in this work for selecting the optimal water transportation project scheme based on the fuzzy preference relation matrix. The research results show that FPRM-PP is intuitive and practical, the correction range of the fuzzy preference relation matrix A it produces is relatively small, and the result obtained is both stable and accurate; FPRM-PP can therefore be widely used in the optimal selection of different multi-factor decision-making schemes.

  11. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real-world scenes as we see them every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, displaying more information requires supporting technologies, such as digital compression, to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair, an approach inspired by the Lifting Scheme (LS). The novelty of our work is that the prediction step is replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
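
    The lifting idea that this hybrid step generalizes is compact enough to sketch. Below is one Haar lifting level (generic wavelet lifting, not the paper's disparity-compensated transform): 'predict' plays the role of estimating one signal from the other, and only the residual is stored.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet via lifting: split, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict: residual after prediction
    approx = even + 0.5 * detail   # update: preserves the running mean
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Invert the lifting steps in reverse order; reconstruction is exact."""
    even = approx - 0.5 * detail
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```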

  12. BP artificial neural network based wave front correction for sensor-less free space optics communication

    NASA Astrophysics Data System (ADS)

    Li, Zhaokun; Zhao, Xiaohui

    2017-02-01

    The sensor-less adaptive optics (AO) is one of the most promising methods to compensate strong wave-front disturbance in free space optics communication (FSO). In this study, the back propagation (BP) artificial neural network is applied to a sensor-less AO system to design a distortion correction scheme. Compared with other model-based approaches, this method needs only one or a few online measurements to correct the wave-front distortion, which enhances the real-time capacity of the system and largely improves the Strehl ratio (SR). Necessary comparisons in numerical simulation with other model-based and model-free correction methods proposed in Refs. [6,8,9,10] are given to show the validity and advantage of the proposed method.

  13. Evaluation of a Multigrid Scheme for the Incompressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.

    2004-01-01

    A fast multigrid solver for the steady, incompressible Navier-Stokes equations is presented. The multigrid solver is based upon a factorizable discrete scheme for the velocity-pressure form of the Navier-Stokes equations. This scheme correctly distinguishes between the advection-diffusion and elliptic parts of the operator, allowing efficient smoothers to be constructed. To evaluate the multigrid algorithm, solutions are computed for flow over a flat plate, parabola, and a Karman-Trefftz airfoil. Both nonlifting and lifting airfoil flows are considered, with a Reynolds number range of 200 to 800. Convergence and accuracy of the algorithm are discussed. Using Gauss-Seidel line relaxation in alternating directions, multigrid convergence behavior approaching that of O(N) methods is achieved. The computational efficiency of the numerical scheme is compared with that of Runge-Kutta and implicit upwind based multigrid methods.

  14. Higgs boson decay into b-quarks at NNLO accuracy

    NASA Astrophysics Data System (ADS)

    Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Tramontano, Francesco; Trócsányi, Zoltán

    2015-04-01

    We compute the fully differential decay rate of the Standard Model Higgs boson into b-quarks at next-to-next-to-leading order (NNLO) accuracy in αs. We employ a general subtraction scheme developed for the calculation of higher order perturbative corrections to QCD jet cross sections, which is based on the universal infrared factorization properties of QCD squared matrix elements. We show that the subtractions render the various contributions to the NNLO correction finite. In particular, we demonstrate analytically that the sum of integrated subtraction terms correctly reproduces the infrared poles of the two-loop double virtual contribution to this process. We present illustrative differential distributions obtained by implementing the method in a parton level Monte Carlo program. The basic ingredients of our subtraction scheme, used here for the first time to compute a physical observable, are universal and can be employed for the computation of more involved processes.

  15. Analysis of an ABE Scheme with Verifiable Outsourced Decryption.

    PubMed

    Liao, Yongjian; He, Yichuan; Li, Fagen; Jiang, Shaoquan; Zhou, Shijie

    2018-01-10

    Attribute-based encryption (ABE) is a popular cryptographic technology for protecting the security of users' data in cloud computing. In order to reduce its decryption cost, outsourcing the decryption of ciphertexts is an available method, which enables users to outsource a large number of decryption operations to the cloud service provider. To guarantee the correctness of the transformed ciphertexts computed by the cloud server via the outsourced decryption, it is necessary to check the correctness of the outsourced decryption to ensure security for the data of users. Recently, Li et al. proposed an ABE scheme with full verifiability of the outsourced decryption (ABE-VOD) for authorized and unauthorized users, which can simultaneously check the correctness of the transformed ciphertext for both of them. However, in this paper we show that their ABE-VOD scheme does not achieve the results they claimed, such as finding all invalid ciphertexts and checking the correctness of the transformed ciphertext for the authorized user via checking it for the unauthorized user. We first construct invalid ciphertexts which can pass the validity check in the decryption algorithm, which means their "verify-then-decrypt" technique fails. Next, we show that checking the validity of the outsourced decryption for authorized users via checking it for unauthorized users is not always correct: there exist invalid ciphertexts which pass the validity check for the unauthorized user but fail it for the authorized user.

  16. Analysis of an ABE Scheme with Verifiable Outsourced Decryption

    PubMed Central

    He, Yichuan; Li, Fagen; Jiang, Shaoquan; Zhou, Shijie

    2018-01-01

    Attribute-based encryption (ABE) is a popular cryptographic technology for protecting the security of users' data in cloud computing. In order to reduce its decryption cost, outsourcing the decryption of ciphertexts is an available method, which enables users to outsource a large number of decryption operations to the cloud service provider. To guarantee the correctness of the transformed ciphertexts computed by the cloud server via the outsourced decryption, it is necessary to check the correctness of the outsourced decryption to ensure security for the data of users. Recently, Li et al. proposed an ABE scheme with full verifiability of the outsourced decryption (ABE-VOD) for authorized and unauthorized users, which can simultaneously check the correctness of the transformed ciphertext for both of them. However, in this paper we show that their ABE-VOD scheme does not achieve the results they claimed, such as finding all invalid ciphertexts and checking the correctness of the transformed ciphertext for the authorized user via checking it for the unauthorized user. We first construct invalid ciphertexts which can pass the validity check in the decryption algorithm, which means their "verify-then-decrypt" technique fails. Next, we show that checking the validity of the outsourced decryption for authorized users via checking it for unauthorized users is not always correct: there exist invalid ciphertexts which pass the validity check for the unauthorized user but fail it for the authorized user. PMID:29320418

  17. Smoothing and the second law

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1986-01-01

    The technique of obtaining second-order, oscillation-free, total-variation-diminishing (TVD) scalar difference schemes by adding a limited diffusion flux (smoothing) to a second-order centered scheme is explored. It is shown that such schemes do not always converge to the correct physical answer. The approach presented here is to construct schemes that numerically satisfy the second law of thermodynamics on a cell-by-cell basis. Such schemes can only converge to the correct physical solution and in some cases can be shown to be TVD. An explicit scheme with this property and second-order spatial accuracy was found to have an extremely restrictive time-step limitation (Δt < Δx²). Switching to an implicit scheme removed the time-step limitation.

  18. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random-error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1·l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with degree m1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
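
    The building block of such an analysis is the probability that a single inner-code block decodes correctly on a binary symmetric channel, which is a binomial tail sum. A sketch, with made-up code parameters for illustration:

```python
from math import comb

def p_correct(n: int, t: int, eps: float) -> float:
    """Probability that a block of n bits sent over a binary symmetric
    channel with bit-error rate eps suffers at most t errors, so that
    bounded-distance decoding of a t-error-correcting code succeeds."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(t + 1))

# e.g. a hypothetical inner code with n = 63 correcting t = 7 errors,
# at the harsh bit-error rates quoted in the abstract:
for eps in (1e-1, 1e-2):
    print(eps, p_correct(63, 7, eps))
```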

  19. RSA and its Correctness through Modular Arithmetic

    NASA Astrophysics Data System (ADS)

    Meelu, Punita; Malik, Sitender

    2010-11-01

    To ensure the security of business applications, the business sector uses public key cryptographic systems (PKCS). An RSA system generally belongs to the category of PKCS used for both encryption and authentication. This paper gives an introduction to RSA through its encryption and decryption schemes, the mathematical background, including the theorems needed to combine modular equations, and the correctness of RSA. In short, this paper explains some of the mathematical concepts that RSA is based on, and then provides a complete proof that RSA works correctly. The correctness of RSA can be proved for the combined process of encryption and decryption using the Chinese Remainder Theorem (CRT) and Euler's theorem. However, there is no mathematical proof that RSA is secure; everyone takes that on trust!
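
    The correctness property itself is easy to exhibit numerically. A toy RSA round with small primes (insecure parameters, purely to illustrate that decryption inverts encryption because ed ≡ 1 (mod φ(n)) and, by the Euler/CRT argument, m^(ed) ≡ m (mod n)):

```python
# Toy RSA key generation with small primes (illustration only).
p, q = 61, 53
n = p * q                  # modulus: 3233
phi = (p - 1) * (q - 1)    # Euler totient: 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e

m = 65                     # a message with 0 <= m < n
c = pow(m, e, n)           # encryption: c = m^e mod n
assert pow(c, d, n) == m   # decryption recovers m exactly
```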

  20. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.

    PubMed

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
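
    To make the setup concrete, here is a minimal sketch of the "standard" approach the paper critiques: an Euler (Euler-Maruyama) discretization in which any excursion past the natural boundary is reset to it. The drift and diffusion functions below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_with_reset(x0, drift, diff, dt, steps, boundary=0.0):
    """Euler discretization of dX = drift(X) dt + diff(X) dW with the
    'standard' treatment of a natural boundary: any excursion into the
    forbidden region x < boundary is reset to the boundary. The paper
    shows this resetting injects a spurious force near the boundary."""
    x = x0
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x = x + drift(x) * dt + diff(x) * dw
        if x < boundary:        # trajectory entered the forbidden region
            x = boundary        # naive reset to the boundary
    return x

# Example: a diffusion on [0, 1] whose noise vanishes at the boundaries.
end = euler_with_reset(0.1, lambda x: 0.0,
                       lambda x: np.sqrt(max(x * (1 - x), 0.0)),
                       dt=1e-4, steps=10_000)
```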

  1. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries

    NASA Astrophysics Data System (ADS)

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.

  2. Autonomous Quantum Error Correction with Application to Quantum Metrology

    NASA Astrophysics Data System (ADS)

    Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.

    2017-04-01

    We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin flips or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion, without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a built-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high-precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  3. Unitary reconstruction of secret for stabilizer-based quantum secret sharing

    NASA Astrophysics Data System (ADS)

    Matsumoto, Ryutaroh

    2017-08-01

    We propose a unitary procedure to reconstruct the quantum secret for a quantum secret sharing scheme constructed from stabilizer quantum error-correcting codes. Erasure correcting procedures for stabilizer codes need to add missing shares for reconstruction of the quantum secret, while unitary reconstruction procedures for a certain class of quantum secret sharing schemes are known to work without adding missing shares. The proposed procedure also works without adding missing shares.

  4. Soft sensor based composition estimation and controller design for an ideal reactive distillation column.

    PubMed

    Vijaya Raghavan, S R; Radhakrishnan, T K; Srinivasan, K

    2011-01-01

    In this research work, the authors present the design and implementation of a recurrent neural network (RNN) based inferential state estimation scheme for an ideal reactive distillation column, together with decentralized PI controllers. The reactive distillation process is controlled by controlling the composition, which is estimated from the available temperature measurements using a type of RNN called a Time Delayed Neural Network (TDNN). The performance of the RNN-based state estimation scheme under both open-loop and closed-loop conditions has been compared with a standard Extended Kalman Filter (EKF) and a Feedforward Neural Network (FNN). Online training/correction is performed for both the RNN and FNN schemes every ten minutes, whenever new un-trained measurements are available from a conventional composition analyzer. The RNN shows better state estimation capability than the other state estimation schemes in terms of qualitative and quantitative performance indices. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Improved Iterative Decoding of Network-Channel Codes for Multiple-Access Relay Channel.

    PubMed

    Majumder, Saikat; Verma, Shrish

    2015-01-01

    Cooperative communication using relay nodes is one of the most effective means of exploiting space diversity for low-cost nodes in wireless networks. In cooperative communication, users, besides communicating their own information, also relay the information of other users. In this paper we investigate a scheme where cooperation is achieved using a common relay node which performs network coding to provide space diversity for two information nodes transmitting to a base station. We propose a scheme which uses a Reed-Solomon error-correcting code for encoding the information bits at the user nodes and a convolutional code as the network code, instead of XOR-based network coding. Based on this encoder, we propose iterative soft decoding of the joint network-channel code by treating it as a concatenated Reed-Solomon convolutional code. Simulation results show significant improvement in performance compared to an existing scheme based on compound codes.

  6. A novel forward projection-based metal artifact reduction method for flat-detector computed tomography.

    PubMed

    Prell, Daniel; Kyriakou, Yiannis; Beister, Marcel; Kalender, Willi A

    2009-11-07

    Metallic implants generate streak-like artifacts in flat-detector computed tomography (FD-CT) reconstructed volumetric images. This study presents a novel method for reducing these disturbing artifacts by inserting discarded information into the original raw data using a three-step correction procedure that works directly with each detector element. Computation times are minimized by completely implementing the correction process on graphics processing units (GPUs). First, the original volume is corrected using a three-dimensional interpolation scheme in the raw-data domain, followed by a second reconstruction. This metal artifact-reduced volume is then segmented into three materials, i.e. air, soft tissue and bone, using a threshold-based algorithm. Subsequently, a forward projection of the obtained tissue-class model substitutes the missing or corrupted attenuation values directly for each flat-detector element that contains attenuation values corresponding to metal parts, followed by a final reconstruction. Experiments using tissue-equivalent phantoms showed a significant reduction of metal artifacts (deviations of CT values after correction, compared to measurements without metallic inserts, typically reduced to below 20 HU; differences in image noise to below 5 HU) caused by the implants, and no significant resolution losses even in areas close to the inserts. To cover a variety of different cases, cadaver measurements and clinical images in the knee, head and spine region were used to investigate the effectiveness and applicability of our method. A comparison to a three-dimensional interpolation correction showed that the new approach outperformed interpolation schemes. Correction times are minimized, and initial and corrected images are made available at almost the same time (12.7 s for the initial reconstruction, 46.2 s for the final corrected image, compared to 114.1 s and 355.1 s on central processing units (CPUs)).

  7. Development of a three-dimensional high-order strand-grids approach

    NASA Astrophysics Data System (ADS)

    Tong, Oisin

    Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and the Menter SST turbulence model, which blends the k-epsilon and k-omega turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third-order accuracy for low and high-Reynolds number flows. Cost studies show that in three dimensions, the third-order flux correction scheme requires only 30% more walltime than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level set domain through min/max flow. This approach is combined with a curvature-based strand shortening strategy in order to qualitatively improve strand grid mesh quality.

  8. Hybrid architecture for encoded measurement-based quantum computation

    PubMed Central

    Zwerger, M.; Briegel, H. J.; Dür, W.

    2014-01-01

    We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states, where within the considered error model we find a threshold of the order of 10% local noise per particle for fault-tolerant quantum computation and quantum communication. PMID:24946906

  9. A single-stage flux-corrected transport algorithm for high-order finite-volume methods

    DOE PAGES

    Chaplin, Christopher; Colella, Phillip

    2017-05-08

    We present a new limiter method for solving the advection equation using a high-order, finite-volume discretization. The limiter is based on the flux-corrected transport algorithm. Here, we modify the classical algorithm by introducing a new computation for solution bounds at smooth extrema, as well as improving the preconstraint on the high-order fluxes. We compute the high-order fluxes via a method-of-lines approach with fourth-order Runge-Kutta as the time integrator. For computing low-order fluxes, we select the corner-transport upwind method due to its improved stability over donor-cell upwind. Several spatial differencing schemes are investigated for the high-order flux computation, including centered-difference and upwind schemes. We show that the upwind schemes perform well on account of the dissipation of high-wavenumber components. The new limiter method retains high-order accuracy for smooth solutions and accurately captures fronts in discontinuous solutions. Further, we need only apply the limiter once per complete time step.

  10. A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging

    PubMed Central

    Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.

    2014-01-01

    Purpose: Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods: Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results: As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion: The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
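
    The flavour of a voxel-wise, subtraction-based correction from two independent acquisitions can be sketched as follows. This is a generic noise-bias correction built from the ideas in the abstract, not the exact estimator derived in the paper:

```python
import numpy as np

def corrected_magnitude(s1, s2):
    """Voxel-wise noise correction from two independent acquisitions s1, s2
    of the same diffusion-weighted image. Half the squared difference
    estimates the per-voxel noise power, which is subtracted from the mean
    squared magnitude before taking the square root. Note the subtraction
    reduces algebraically to sqrt(s1 * s2), whose squared expectation equals
    the true signal power when the noise in the two acquisitions is
    independent and zero-mean."""
    noise_power = 0.5 * (s1 - s2) ** 2
    mean_square = 0.5 * (s1 ** 2 + s2 ** 2)
    return np.sqrt(np.maximum(mean_square - noise_power, 0.0))
```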

  11. Nonuniformity correction algorithm with efficient pixel offset estimation for infrared focal plane arrays.

    PubMed

    Orżanowski, Tomasz

    2016-01-01

    This paper presents an infrared focal plane array (IRFPA) response nonuniformity correction (NUC) algorithm which is easy to implement in hardware. The proposed NUC algorithm is based on a linear correction scheme with an efficient method for updating the pixel offset correction coefficients. The new approach to IRFPA response nonuniformity correction uses the change in pixel response between the actual operating conditions and the reference ones, determined by means of a shutter, to compensate for the temporal drift of the pixel offsets. Moreover, it also removes any optics shading effect from the output image. To show the efficiency of the proposed NUC algorithm, some test results for a microbolometer IRFPA are presented.
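
    A linear NUC with a shutter-driven offset update reduces, per pixel, to something like the sketch below. The array names and the exact update rule are illustrative, not taken from the paper:

```python
import numpy as np

def nuc_correct(raw, gain, offset_ref, shutter_ref, shutter_now):
    """Linear nonuniformity correction with a shutter-based offset update:
    per-pixel gains come from an earlier calibration, while the offset is
    the reference offset shifted by the change in shutter (uniform-scene)
    response between the reference and current operating conditions."""
    offset = offset_ref + (shutter_now - shutter_ref)  # drift-compensated
    return gain * (raw - offset)

# All arrays share the sensor shape, e.g. a 640 x 480 focal plane:
shape = (480, 640)
raw = np.random.rand(*shape)          # current scene frame (placeholder)
gain = np.ones(shape)                 # calibration gains
offset_ref = np.zeros(shape)          # calibration offsets
shutter_ref = np.zeros(shape)         # shutter frame at calibration
shutter_now = 0.01 * np.ones(shape)   # shutter frame now (drifted)
corrected = nuc_correct(raw, gain, offset_ref, shutter_ref, shutter_now)
```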

  12. Embedded feature ranking for ensemble MLP classifiers.

    PubMed

    Windeatt, Terry; Duangsoithong, Rakkrit; Smith, Raymond

    2011-06-01

    A feature ranking scheme for multilayer perceptron (MLP) ensembles is proposed, along with a stopping criterion based upon the out-of-bootstrap estimate. To solve multi-class problems feature ranking is combined with modified error-correcting output coding. Experimental results on benchmark data demonstrate the versatility of the MLP base classifier in removing irrelevant features.

  13. Coherent control of molecular alignment of homonuclear diatomic molecules by analytically designed laser pulses.

    PubMed

    Zou, Shiyang; Sanz, Cristina; Balint-Kurti, Gabriel G

    2008-09-28

    We present an analytic scheme for designing laser pulses to manipulate the field-free molecular alignment of a homonuclear diatomic molecule. The scheme is based on the use of a generalized pulse-area theorem and makes use of pulses constructed around two-photon resonant frequencies. In the proposed scheme, the populations and relative phases of the rovibrational states of the molecule are independently controlled utilizing changes in the laser intensity and in the carrier-envelope phase difference, respectively. This allows us to create the correct coherent superposition of rovibrational states needed to achieve optimal molecular alignment. The validity and efficiency of the scheme are demonstrated by explicit application to the H2 molecule. The analytically designed laser pulses are tested by exact numerical solutions of the time-dependent Schrödinger equation including laser-molecule interactions to all orders of the field strength. The design of a sequence of pulses to further enhance molecular alignment is also discussed and tested. It is found that the rotating wave approximation used in the analytic design of the laser pulses leads to small errors in the prediction of the relative phase of the rotational states. It is further shown how these errors may be easily corrected.

  14. A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur

    2009-07-01

    For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (for Eddy Diffusivity/Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, and is specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover and the projection on the non-conservative variables is processed by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme to represent three different boundary layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and conserve a realistic evolution of stratocumulus (EUROCS/FIRE).

  15. APC-PC Combined Scheme in Gilbert Two State Model: Proposal and Study

    NASA Astrophysics Data System (ADS)

    Bulo, Yaka; Saring, Yang; Bhunia, Chandan Tilak

    2017-04-01

    In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted by transmission errors caused by the channel. However, an erroneous packet may contain both erroneous bits and correct bits, and hence may still carry useful information; the receiver may be able to combine this information from multiple erroneous copies to recover the correct packet. Packet combining (PC) is a simple and elegant scheme of error correction in the transmitted packet, in which two received copies are XORed to obtain the bit locations of erroneous bits. Thereafter, the packet is corrected by bit inversion of the bits located as erroneous. Aggressive packet combining (APC) is a logical extension of PC, primarily designed for wireless communication with the objective of correcting errors with low latency. PC offers higher throughput than APC, but PC does not correct double-bit errors if they occur in the same bit location of the erroneous copies of the packet. A hybrid technique is proposed to utilize the advantages of both APC and PC while attempting to remove the limitations of both. In the proposed technique, the application of APC-PC on the Gilbert two-state model has been studied. The simulation results show that the proposed technique offers better throughput than conventional APC and a lower packet error rate than the PC scheme.
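
    The PC step at the heart of both schemes is easy to sketch: XOR two received copies, treat differing bit positions as error candidates, and search over inversions until an integrity check passes. A toy version, where the `is_valid` callback stands in for a CRC or similar receiver-side check:

```python
from itertools import product

def packet_combine(copy1: bytes, copy2: bytes, is_valid):
    """Basic PC step: XOR the two received copies to locate candidate error
    bits, then try bit inversions until the integrity check passes."""
    diff = [(i, b) for i in range(len(copy1))
            for b in range(8) if (copy1[i] ^ copy2[i]) >> b & 1]
    for flips in product((0, 1), repeat=len(diff)):  # which copy is right?
        candidate = bytearray(copy1)
        for flip, (i, b) in zip(flips, diff):
            if flip:
                candidate[i] ^= 1 << b
        if is_valid(bytes(candidate)):
            return bytes(candidate)
    # Fails for e.g. a double error in the same bit position of both copies,
    # which is exactly the PC limitation the hybrid scheme targets.
    return None
```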

  16. Equivalence between the Energy Stable Flux Reconstruction and Filtered Discontinuous Galerkin Schemes

    NASA Astrophysics Data System (ADS)

    Zwanenburg, Philip; Nadarajah, Siva

    2016-02-01

    The aim of this paper is to demonstrate the equivalence between filtered Discontinuous Galerkin (DG) schemes and the Energy Stable Flux Reconstruction (ESFR) schemes, expanding on previous demonstrations in 1D [1] and for straight-sided elements in 3D [2]. We first derive the DG and ESFR schemes in strong form and compare the respective flux penalization terms while highlighting the implications of the fundamental assumptions for stability in the ESFR formulations, notably that all ESFR scheme correction fields can be interpreted as modally filtered DG correction fields. We present the result in the general context of all higher dimensional curvilinear element formulations. Through a demonstration that there exists a weak form of the ESFR schemes which is both discretely and analytically equivalent to the strong form, we then extend the results obtained for the strong formulations to demonstrate that ESFR schemes can be interpreted as a DG scheme in weak form where discontinuous edge flux is substituted for numerical edge flux correction. Theoretical derivations are then verified with numerical results obtained from a 2D Euler testcase with curved boundaries. Given the current choice of high-order DG-type schemes and the question as to which might be best to use for a specific application, the main significance of this work is the bridge that it provides between them. Clearly outlining the similarities between the schemes results in the important conclusion that it is always less efficient to use ESFR schemes, as opposed to the weak DG scheme, when solving problems implicitly.

  17. Presumptive identification of streptococci with a new test system.

    PubMed Central

    Facklam, R R; Thacker, L G; Fox, B; Eriquez, L

    1982-01-01

    A test is described that could replace bacitracin susceptibility for presumptive identification of group A streptococci as well as 6.5% NaCl agar tolerance for presumptive identification of enterococcal streptococci. The L-pyrrolidonyl-beta-naphthylamide test, based on hydrolysis of pyrrolidonyl-beta-naphthylamide, was used in conjunction with the CAMP and bile-esculin tests to presumptively identify the streptococci. Among the beta-hemolytic streptococci, 98% of 50 group A, 98% of 46 group B, and 100% of 70 strains that were not group A, B, or D were correctly identified by the new presumptive test scheme. Among the non-beta-hemolytic streptococci, 96% of 74 group D enterococcal, 100% of 30 group D nonenterococcal, and 82% of 112 viridans strains were correctly identified by the new presumptive test scheme. PMID:7050157

  18. Is QR code an optimal data container in optical encryption systems from an error-correction coding perspective?

    PubMed

    Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia

    2018-01-01

    Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerically simulated optical cryptosystems.
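    The record's figure of merit, average channel capacity, is only named above. As a minimal illustration of the underlying quantity (assuming, purely for illustration, a memoryless binary symmetric channel, which the nonlocally distributed speckle noise of a real optical cryptosystem is not), the per-bit capacity at crossover probability p is C = 1 - H2(p):

        import math

        def bsc_capacity(p):
            """Per-bit capacity C = 1 - H2(p) of a binary symmetric channel
            with crossover probability p, where H2 is the binary entropy."""
            if p in (0.0, 1.0):
                return 1.0
            h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
            return 1.0 - h2

    A code whose decoder tolerates crossover probability p while storing k payload bits per n coded bits is efficient when k/n approaches this bound; the paper's comparison of BCH against Reed-Solomon can be read in these terms.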

  19. Dry Bias and Variability in Vaisala RS80-H Radiosondes: The ARM Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, David D.; Lesht, B. M.; Clough, Shepard A.

    2003-01-02

    Thousands of comparisons between total precipitable water vapor (PWV) obtained from radiosonde (Vaisala RS80-H) profiles and PWV retrieved from a collocated microwave radiometer (MWR) were made at the Atmospheric Radiation Measurement (ARM) Program's Southern Great Plains Cloud and Radiation Testbed (SGP/CART) site in northern Oklahoma from 1994 to 2000. These comparisons show that the RS80-H radiosonde has an approximate 5% dry bias compared to the MWR. This observation is consistent with interpretations of Vaisala RS80 radiosonde data obtained during the Tropical Ocean and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA/COARE). In addition to the dry bias, analysis of the PWV comparisons, as well as of data obtained from dual-sonde soundings done at the SGP, shows that the calibration of the radiosonde humidity measurements varies considerably, both when the radiosondes come from different calibration batches and when the radiosondes come from the same calibration batch. This variability can result in peak-to-peak differences between radiosondes of greater than 25% in PWV. Because accurate representation of the vertical profile of water vapor is critical for ARM's science objectives, we have developed an empirical method for correcting the radiosonde humidity profiles that is based on a constant scaling factor. By using an independent set of observations and radiative transfer models to test the correction, we show that the constant humidity scaling method appears both to improve the accuracy and to reduce the uncertainty of the radiosonde data. We also used the ARM data to examine a different, physically based correction scheme that was developed recently by scientists from Vaisala and the National Center for Atmospheric Research (NCAR). This scheme, which addresses the dry bias problem as well as other calibration-related problems with the RS80-H sensor, results in excellent agreement between the PWV retrieved from the MWR and that integrated from the corrected radiosonde. However, because the physically based correction scheme does not address the apparently random calibration variations we observe, it does not reduce the variability either between radiosonde calibration batches or within individual calibration batches.
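    A minimal sketch of the constant scaling-factor correction (assuming PWV is linear in the humidity variable being scaled, e.g. the water vapor mixing ratio; all names are illustrative):

        def scale_humidity_profile(mixing_ratio, pwv_sonde, pwv_mwr):
            """Constant scaling-factor humidity correction (sketch): scale the
            whole radiosonde humidity profile so that its integrated PWV
            matches the collocated MWR retrieval."""
            factor = pwv_mwr / pwv_sonde
            return [q * factor for q in mixing_ratio]

    Because PWV is, to good approximation, a vertical integral that is linear in the mixing ratio, one multiplicative factor per sounding suffices, which is what makes the method robust to the batch-to-batch calibration variability described above.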

  20. A secure and robust password-based remote user authentication scheme using smart cards for the integrated EPR information system.

    PubMed

    Das, Ashok Kumar

    2015-03-01

    An integrated EPR (Electronic Patient Record) information system of all the patients provides medical institutions and academia with most of the patients' information in detail for them to make corrective and clinical decisions in order to maintain and analyze patients' health. In such a system, illegal access must be restricted and information theft during transmission over the insecure Internet must be prevented. Lee et al. proposed an efficient password-based remote user authentication scheme using smart cards for the integrated EPR information system. Their scheme is very efficient due to its use of one-way hash functions and bitwise exclusive-or (XOR) operations. However, in this paper, we show that although their scheme is very efficient, it has three security weaknesses: (1) design flaws in the password change phase, (2) failure to protect against privileged insider attacks, and (3) lack of formal security verification. We also find that another recently proposed scheme, Wen's scheme, has the same security drawbacks as Lee et al.'s scheme. In order to remedy the security weaknesses found in Lee et al.'s and Wen's schemes, we propose a secure and efficient password-based remote user authentication scheme using smart cards for the integrated EPR information system. We show that our scheme is also efficient compared to Lee et al.'s and Wen's schemes, as it only uses one-way hash functions and bitwise XOR operations. Through security analysis, we show that our scheme is secure against possible known attacks. Furthermore, we simulate our scheme for formal security verification using the widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool and show that it is secure against passive and active attacks.

  1. Real-time intraoperative fluorescence imaging system using light-absorption correction.

    PubMed

    Themelis, George; Yoo, Jung Sun; Soh, Kwang-Sup; Schulz, Ralf; Ntziachristos, Vasilis

    2009-01-01

    We present a novel fluorescence imaging system developed for real-time interventional imaging applications. The system implements a correction scheme that improves the accuracy of epi-illumination fluorescence images in the presence of light intensity variation in tissues. The implementation is based on the use of three cameras operating in parallel, utilizing a common lens, which allows for the concurrent collection of color, fluorescence, and light attenuation images at the excitation wavelength from the same field of view. The correction is based on a ratio approach of fluorescence over light attenuation images. Color images and video are used for surgical guidance and for registration with the corrected fluorescence images. We showcase the performance metrics of this system on phantoms and animals, and discuss the advantages over conventional epi-illumination systems developed for real-time applications and the limits of validity of corrected epi-illumination fluorescence imaging.
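    The correction itself is a per-pixel ratio. A minimal sketch (NumPy; the epsilon guard against division by zero is an added assumption):

        import numpy as np

        def correct_fluorescence(fluo_img, excitation_img, eps=1e-6):
            """Ratio-based correction (sketch): divide the fluorescence image
            by the concurrently acquired light-attenuation image at the
            excitation wavelength, pixel by pixel."""
            return fluo_img / np.clip(excitation_img, eps, None)

    Because all three cameras share one lens and field of view, the two images are registered by construction and the division can be applied frame by frame in real time.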

  2. On the security of two remote user authentication schemes for telecare medical information systems.

    PubMed

    Kim, Kee-Won; Lee, Jae-Dong

    2014-05-01

    The telecare medical information systems (TMISs) support convenient and rapid health-care services. A secure and efficient authentication scheme for a TMIS safeguards patients' electronic patient records (EPRs) and helps health-care workers and medical personnel rapidly make correct clinical decisions. Recently, Kumari et al. proposed a password-based user authentication scheme using smart cards for TMISs, and claimed that the proposed scheme could resist various malicious attacks. However, we point out that their scheme is still vulnerable to smart card loss attacks and cannot provide forward secrecy. Subsequently, Das and Goswami proposed a secure and efficient uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care. They simulated their scheme for formal security verification using the widely accepted automated validation of Internet security protocols and applications (AVISPA) tool to ensure that their scheme is secure against passive and active attacks. However, we show that their scheme is still vulnerable to smart card loss attacks and cannot provide the forward secrecy property. The proposed cryptanalysis discourages any use of the two schemes under investigation in practice and reveals some subtleties and challenges in designing this type of scheme.

  3. An Extended Chaotic Maps-Based Three-Party Password-Authenticated Key Agreement with User Anonymity

    PubMed Central

    Lu, Yanrong; Li, Lixiang; Zhang, Hao; Yang, Yixian

    2016-01-01

    User anonymity is one of the key security features of an authenticated key agreement, especially for communicating messages via an insecure network. Owing to the better properties and higher performance of chaotic theory, chaotic maps have been introduced into security schemes, and hence numerous key agreement schemes have been put forward based on chaotic maps. Recently, Xie et al. released an enhanced scheme building on Farash et al.'s scheme and claimed that their improvements could withstand the security loopholes pointed out in the scheme of Farash et al., i.e., achieve resistance to off-line password guessing and user impersonation attacks. Nevertheless, through our careful analysis, the improvements released by Xie et al. still could not solve the problems of Farash et al.'s scheme. Besides, Xie et al.'s improvements failed to achieve user anonymity and session key security. With the purpose of eliminating the security risks of the scheme of Xie et al., we design an anonymous password-based three-party authenticated key agreement under chaotic maps. Both the formal analysis and the formal security verification using AVISPA are presented. Also, BAN logic is used to show the correctness of the enhancements. Furthermore, we demonstrate that the design thwarts most of the common attacks. We also make a comparison between recent chaotic-maps-based schemes and our enhancements in terms of performance. PMID:27101305

  4. Elucidation of molecular kinetic schemes from macroscopic traces using system identification

    PubMed Central

    González-Maeso, Javier; Sealfon, Stuart C.; Galocha-Iragüen, Belén; Brezina, Vladimir

    2017-01-01

    Overall cellular responses to biologically-relevant stimuli are mediated by networks of simpler lower-level processes. Although information about some of these processes can now be obtained by visualizing and recording events at the molecular level, this is still possible only in especially favorable cases. Therefore the development of methods to extract the dynamics and relationships between the different lower-level (microscopic) processes from the overall (macroscopic) response remains a crucial challenge in the understanding of many aspects of physiology. Here we have devised a hybrid computational-analytical method to accomplish this task, the SYStems-based MOLecular kinetic scheme Extractor (SYSMOLE). SYSMOLE utilizes system-identification input-output analysis to obtain a transfer function between the stimulus and the overall cellular response in the Laplace-transformed domain. It then derives a Markov-chain state molecular kinetic scheme uniquely associated with the transfer function by means of a classification procedure and an analytical step that imposes general biological constraints. We first tested SYSMOLE with synthetic data and evaluated its performance in terms of its rate of convergence to the correct molecular kinetic scheme and its robustness to noise. We then examined its performance on real experimental traces by analyzing macroscopic calcium-current traces elicited by membrane depolarization. SYSMOLE derived the correct, previously known molecular kinetic scheme describing the activation and inactivation of the underlying calcium channels and correctly identified the accepted mechanism of action of nifedipine, a calcium-channel blocker clinically used in patients with cardiovascular disease. Finally, we applied SYSMOLE to study the pharmacology of a new class of glutamate antipsychotic drugs and their crosstalk mechanism through a heteromeric complex of G protein-coupled receptors. Our results indicate that our methodology can be successfully applied to accurately derive molecular kinetic schemes from experimental macroscopic traces, and we anticipate that it may be useful in the study of a wide variety of biological systems. PMID:28192423

  5. Atmospheric correction for satellite-based volcanic ash mapping and retrievals using "split window" IR data from GOES and AVHRR

    NASA Astrophysics Data System (ADS)

    Yu, Tianxu; Rose, William I.; Prata, A. J.

    2002-08-01

    Volcanic ash in volcanic clouds can be mapped in two dimensions using two-band thermal infrared data available from meteorological satellites. Wen and Rose [1994] developed an algorithm that allows retrieval of the effective particle size, the optical depth of the volcanic cloud, and the mass of fine ash in the cloud. Both the mapping and the retrieval scheme are less accurate in the humid tropical atmosphere. In this study we devised and tested a scheme for atmospheric correction of volcanic ash mapping and retrievals. The scheme utilizes infrared (IR) brightness temperature (BT) information in two infrared channels (both between 10 and 12.5 μm) and the brightness temperature differences (BTD) to estimate the amount of BTD shift caused by lower tropospheric water vapor. It is supported by the moderate resolution transmission (MODTRAN) analysis. The discrimination of volcanic clouds in the new scheme also uses both BT and BTD data but corrects for the effects of the water vapor. The new scheme is demonstrated and compared with the old scheme using two well-documented examples: (1) the 18 August 1992 volcanic cloud of Crater Peak, Mount Spurr, Alaska, and (2) the 26 December 1997 volcanic cloud from Soufriere Hills, Montserrat. The Spurr example represents a relatively "dry" subarctic atmospheric condition. The new scheme sees a volcanic cloud that is about 50% larger than the old. The mean optical depth and effective radii of cloud particles are lower by 22% and 9%, and the fine ash mass in the cloud is 14% higher. The Montserrat cloud is much smaller than Spurr and is more sensitive to atmospheric moisture. It also was located in a moist tropical atmosphere. For the Montserrat example the new scheme shows larger differences, with the area of the volcanic cloud being about 5.5 times larger, the optical depth and effective radii of particles lower by 56% and 28%, and the total fine particle mass in the cloud increased by 53%. The new scheme can be automated and can contribute to more accurate remote volcanic ash detection. More tests are needed to find the best way to estimate the water vapor effects in real time.
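    A schematic of the split-window test (hedged: in the paper the water-vapor shift is estimated from the two-channel brightness temperatures with MODTRAN support, whereas here it enters as a given offset, and all names are illustrative):

        import numpy as np

        def ash_mask(bt11, bt12, wv_shift=0.0):
            """Split-window volcanic ash flag (sketch). Silicate ash drives
            BTD = BT(11 um) - BT(12 um) negative, while lower-tropospheric
            water vapor shifts BTD upward; the corrected scheme therefore
            tests BTD against the estimated shift instead of against zero."""
            btd = np.asarray(bt11) - np.asarray(bt12)
            return btd < wv_shift

    With wv_shift = 0 this reduces to the uncorrected discrimination; a positive shift recovers ash pixels that humidity would otherwise mask, which is why the corrected scheme sees larger clouds in the moist Montserrat case.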

  6. An investigation of error characteristics and coding performance

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1992-01-01

    The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink is studied. The EOS transmits picture frame data to the ground via the Tracking and Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian, which may result in non-random errors output by the demodulator. That is, the downlink channel cannot be modeled as a simple memoryless Gaussian-noise channel. From previous experience, it is believed that these errors are bursty. The research proceeded by developing a computer-based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN has been written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.

  7. Corrections to the General (2,4) and (4,4) FDTD Schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meierbachtol, Collin S.; Smith, William S.; Shao, Xuan-Min

    The sampling weights associated with two general higher order FDTD schemes were derived by Smith, et al. and published in an IEEE Transactions on Antennas and Propagation article in 2012. Inconsistencies between governing equations and their resulting solutions were discovered within the article. In an effort to track down the root cause of these inconsistencies, the full three-dimensional, higher order FDTD dispersion relation was re-derived using Mathematica. During this process, two errors were identified in the article. Both errors are highlighted in this document. The corrected sampling weights are also provided. Finally, the original stability limits provided for both schemes are corrected and presented in a more precise form. It is recommended that any future implementations of the two general higher order schemes provided in the Smith, et al. 2012 article should instead use the sampling weights and stability conditions listed in this document.

  8. MRI non-uniformity correction through interleaved bias estimation and B-spline deformation with a template.

    PubMed

    Fletcher, E; Carmichael, O; Decarli, C

    2012-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.

  9. MRI Non-Uniformity Correction Through Interleaved Bias Estimation and B-Spline Deformation with a Template*

    PubMed Central

    Fletcher, E.; Carmichael, O.; DeCarli, C.

    2013-01-01

    We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer’s disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions. PMID:23365843

  10. Loss Tolerance in One-Way Quantum Computation via Counterfactual Error Correction

    NASA Astrophysics Data System (ADS)

    Varnava, Michael; Browne, Daniel E.; Rudolph, Terry

    2006-09-01

    We introduce a scheme for fault tolerantly dealing with losses (or other “leakage” errors) in cluster state computation that can tolerate up to 50% qubit loss. This is achieved passively using an adaptive strategy of measurement—no coherent measurements or coherent correction is required. Since the scheme relies on inferring information about what would have been the outcome of a measurement had one been able to carry it out, we call this counterfactual error correction.

  11. Error determination of a successive correction type objective analysis scheme. [for surface meteorological data

    NASA Technical Reports Server (NTRS)

    Smith, D. R.; Leslie, F. W.

    1984-01-01

    The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a successive correction type scheme for the analysis of surface meteorological data. The scheme is subjected to a series of experiments to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple pass technique increases the accuracy of the analysis. Furthermore, the tests suggest appropriate values for the analysis parameters in resolving disturbances for the data set used in this investigation.
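    The abstract does not spell out PROAM's weighting function; the sketch below implements the classical Cressman successive-correction loop as a stand-in (the nearest-node background interpolation and all identifiers are simplifying assumptions):

        import numpy as np

        def successive_correction(grid_xy, grid_vals, obs_xy, obs_vals, radii):
            """Successive-correction analysis (Cressman-style sketch): each
            pass spreads observation increments to grid nodes with
            distance-dependent weights, using a shrinking influence radius
            to add progressively finer scales."""
            grid_xy = np.asarray(grid_xy, float)
            grid_vals = np.asarray(grid_vals, float).copy()
            obs_xy = np.asarray(obs_xy, float)
            obs_vals = np.asarray(obs_vals, float)
            for R in radii:  # e.g. [500.0, 250.0, 100.0]
                # background value at each observation (nearest node, for brevity)
                nearest = ((obs_xy[:, None, :] - grid_xy[None, :, :]) ** 2).sum(-1).argmin(1)
                increments = obs_vals - grid_vals[nearest]
                # Cressman weights between every grid node and every observation
                d2 = ((grid_xy[:, None, :] - obs_xy[None, :, :]) ** 2).sum(-1)
                w = np.where(d2 < R * R, (R * R - d2) / (R * R + d2), 0.0)
                den = w.sum(axis=1)
                corr = (w * increments).sum(axis=1) / np.where(den > 0, den, 1.0)
                grid_vals += np.where(den > 0, corr, 0.0)
            return grid_vals

    Each pass with a smaller radius lets the analysis recover progressively finer-scale disturbances, consistent with the finding above that multiple passes increase accuracy.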

  12. Fish tracking by combining motion based segmentation and particle filtering

    NASA Astrophysics Data System (ADS)

    Bichot, E.; Mascarilla, L.; Courtellemont, P.

    2006-01-01

    In this paper, we suggest a new importance sampling scheme to improve a particle-filtering-based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate hypotheses from particle filtering to blobs whose motion is similar to that of the target. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy to update the target model improve the performance of particle filtering in complex situations of occlusion compared to a simple Bootstrap approach, as shown by our experiments on real fish tank sequences.

  13. Source-Adaptation-Based Wireless Video Transport: A Cross-Layer Approach

    NASA Astrophysics Data System (ADS)

    Qu, Qi; Pei, Yong; Modestino, James W.; Tian, Xusheng

    2006-12-01

    Real-time packet video transmission over wireless networks is expected to experience bursty packet losses that can cause substantial degradation to the transmitted video quality. In wireless networks, channel state information is hard to obtain in a reliable and timely manner due to the rapid change of wireless environments. However, the source motion information is always available and can be obtained easily and accurately from video sequences. Therefore, in this paper, we propose a novel cross-layer framework that exploits only the motion information inherent in video sequences and efficiently combines a packetization scheme, a cross-layer forward error correction (FEC)-based unequal error protection (UEP) scheme, an intracoding rate selection scheme as well as a novel intraframe interleaving scheme. Our objective and subjective results demonstrate that the proposed approach is very effective in dealing with the bursty packet losses occurring on wireless networks without incurring any additional implementation complexity or delay. Thus, the simplicity of our proposed system has important implications for the implementation of a practical real-time video transmission system.

  14. Nagy-Soper Subtraction: a Review

    NASA Astrophysics Data System (ADS)

    Robens, Tania

    2013-07-01

    We review an alternative NLO subtraction scheme, based on the splitting kernels of an improved parton shower, that promises to facilitate the inclusion of higher-order corrections into Monte Carlo event generators. We give expressions for the scheme for massless emitters and point to work on the extension to massive cases. As an example, we show results for the C parameter of the process e+e- → 3 jets at NLO, which have recently been published as a verification of this scheme. We also provide analytic expressions for integrated counterterms that have not been presented in previous work, and comment on the possibility of analytic approximations for the remaining numerical integrals.

  15. Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.

    PubMed

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei

    2017-04-01

    Because the standard lattice Boltzmann (LB) method is proposed for Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method for representing the axisymmetric effects. Therefore, the accuracy and applicability of the axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium-rule-based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations at the diffusive scale. In particular, the finite difference interpretation of the standard LB method is extended to the LB equations with source terms, and the accuracy of the different forcing schemes is then evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus would not affect the overall accuracy of the standard LB method with a general force term (i.e., when only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium-rule-based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests applied for validating the theoretical analysis show that both the numerical stability and the accuracy of the axisymmetric LB simulations are affected by the direct forcing scheme, indicating that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.
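    In schematic form (standard LB notation assumed here, not the paper's exact equations: f_i are the particle populations, e_i the discrete velocities, Omega_i the collision term, S_i the discrete source), the three ways of discretizing the source term along a characteristic compare as

        f_i(x+e_i\Delta t,\,t+\Delta t) - f_i(x,t) - \Omega_i(x,t) =
        \begin{cases}
          \Delta t\, S_i(x,t) & \text{direct forcing},\\
          \tfrac{\Delta t}{2}\,[\,S_i(x,t) + S_i(x+e_i\Delta t,\,t+\Delta t)\,] & \text{trapezium rule},\\
          \Delta t\, S_i\big(x+\tfrac{1}{2}e_i\Delta t,\,t+\tfrac{1}{2}\Delta t\big) & \text{semi-implicit centered}.
        \end{cases}

    The last two are centered (second-order) along the characteristic, consistent with the finding above that they avoid the discrete lattice effects, whereas the one-sided direct forcing leaves first-order terms that corrupt the axisymmetric macroscopic equations.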

  16. Angular spectral framework to test full corrections of paraxial solutions.

    PubMed

    Mahillo-Isla, R; González-Morales, M J

    2015-07-01

    Different correction methods for paraxial solutions have been used when such solutions extend out of the paraxial regime. Authors have used correction methods guided either by their experience or by some educated hypothesis pertinent to the particular problem they were tackling. This article provides a framework for classifying full wave correction schemes. Thus, for a given solution of the paraxial wave equation, we can select the best correction scheme among those available. Some common correction methods are considered and evaluated within the proposed scope. Another remarkable contribution is the set of necessary conditions that two solutions of the Helmholtz equation must satisfy to accept a common solution of the parabolic wave equation as a paraxial approximation of both solutions.

  17. Bio-inspired adaptive feedback error learning architecture for motor control.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo

    2012-10-01

    This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, namely a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (the LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme by comparing the obtained performance with alternative approaches. Then, we show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller. We also evaluate how the scheme scales for simulated plants with a high number of degrees of freedom (7 DOFs).

  18. Measurement-based quantum communication with resource states generated by entanglement purification

    NASA Astrophysics Data System (ADS)

    Wallnöfer, J.; Dür, W.

    2017-01-01

    We investigate measurement-based quantum communication with noisy resource states that are generated by entanglement purification. We consider the transmission of encoded information via noisy quantum channels using a measurement-based implementation of encoding, error correction, and decoding. We show that such an approach offers advantages over direct transmission, gate-based error correction, and measurement-based schemes with direct generation of resource states. We analyze the noise structure of resource states generated by entanglement purification and show that a local error model, i.e., noise acting independently on all qubits of the resource state, is a good approximation in general, and provides an exact description for Greenberger-Horne-Zeilinger states. The latter are resources for a measurement-based implementation of error-correction codes for bit-flip or phase-flip errors. This provides an approach to link the recently found very high thresholds for fault-tolerant measurement-based quantum information processing based on local error models for resource states with error thresholds for gate-based computational models.

  19. Software Design Description for the HYbrid Coordinate Ocean Model (HYCOM), Version 2.2

    DTIC Science & Technology

    2009-02-12


  20. Optimal scan strategy for mega-pixel and kilo-gray-level OLED-on-silicon microdisplay.

    PubMed

    Ji, Yuan; Ran, Feng; Ji, Weigui; Xu, Meihua; Chen, Zhangjing; Jiang, Yuxi; Shen, Weixin

    2012-06-10

    The digital pixel driving scheme makes organic light-emitting diode (OLED) microdisplays more immune to pixel luminance variations and simplifies the circuit architecture and design flow compared to the analog pixel driving scheme. Additionally, it is easily applied in fully digital systems. However, the data bottleneck becomes a notable problem as the numbers of pixels and gray levels grow dramatically. This paper discusses the ability of digital driving to achieve kilo-gray levels for megapixel displays. The optimal scan strategy is proposed for creating ultra-high gray levels and increasing light efficiency and contrast ratio. Two correction schemes are discussed to improve the gray level linearity. A 1280×1024×3 OLED-on-silicon microdisplay with 4096 gray levels is designed based on the optimal scan strategy. The circuit driver is integrated in the silicon backplane chip in a 0.35 μm 3.3 V-6 V dual-voltage, one-polysilicon-layer, four-metal-layer (1P4M) complementary metal-oxide semiconductor (CMOS) process with custom top metal. The design aspects of the optimal scan controller are also discussed. The test results show that the gray level linearity of the correction schemes for the optimal scan strategy is acceptable to the human eye.

  1. 3D early embryogenesis image filtering by nonlinear partial differential equations.

    PubMed

    Krivá, Z; Mikula, K; Peyriéras, N; Rizzi, B; Sarti, A; Stasová, O

    2010-08-01

    We present nonlinear diffusion equations, numerical schemes to solve them, and their application for filtering 3D images obtained from laser scanning microscopy (LSM) of living zebrafish embryos, with the goal of identifying the optimal filtering method and its parameters. In large-scale applications dealing with the analysis of 3D+time embryogenesis images, an important objective is the correct detection of the number and position of cell nuclei, yielding the spatio-temporal cell lineage tree of embryogenesis. Filtering is the first and necessary step of the image analysis chain and must lead to correct results, removing the noise, sharpening the nuclei edges, and correcting the acquisition errors related to spuriously connected subregions. In this paper we study such properties for the regularized Perona-Malik model and for the generalized mean curvature flow equations in the level-set formulation. A comparison with other nonlinear diffusion filters, like tensor anisotropic diffusion and Beltrami flow, is also included. All numerical schemes are based on the same discretization principles, i.e., the finite volume method in space and a semi-implicit scheme in time, for solving nonlinear partial differential equations. These numerical schemes are unconditionally stable, fast, and naturally parallelizable. The filtering results are evaluated and compared, first using the mean Hausdorff distance between a gold standard and different isosurfaces of the original and filtered data. Then, the number of isosurface connected components in a region of interest (ROI) detected in the original data and after filtering is compared with the corresponding correct number of nuclei in the gold standard. Such analysis proves the robustness and reliability of edge-preserving nonlinear diffusion filtering for this type of data and leads to finding the optimal filtering parameters for the studied models and numerical schemes. Further comparisons address the ability to split very close objects that are artificially connected due to acquisition errors intrinsically linked to the physics of LSM. In all studied aspects, the nonlinear diffusion filter called geodesic mean curvature flow (GMCF) has the best performance.

  2. Fits of weak annihilation and hard spectator scattering corrections in B_{u,d} → VV decays

    NASA Astrophysics Data System (ADS)

    Chang, Qin; Li, Xiao-Nan; Sun, Jun-Feng; Yang, Yue-Ling

    2016-10-01

    In this paper, the contributions of weak annihilation and hard spectator scattering in B → ρK*, K*K̄*, φK*, ρρ and φφ decays are investigated within the framework of quantum chromodynamics factorization. Using the available experimental data, we perform χ² analyses of the end-point parameters in four cases based on the topology-dependent and polarization-dependent parameterization schemes. The fitted results indicate that: (i) in the topology-dependent scheme, the relation (ρ_A^i, φ_A^i) …

  3. Using concatenated quantum codes for universal fault-tolerant quantum gates.

    PubMed

    Jochym-O'Connor, Tomas; Laflamme, Raymond

    2014-01-10

    We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.

  4. Compression of digital images over local area networks. Appendix 1: Item 3. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Gorjala, Bhargavi

    1991-01-01

    Differential Pulse Code Modulation (DPCM) has been used with speech for many years. It has not been as successful for images because of poor edge performance. The only corruption in DPCM is quantizer error, but this corruption becomes quite large in the region of an edge because of the abrupt changes in the statistics of the signal. We introduce two improved DPCM schemes: Edge-Correcting DPCM and Edge-Preserving Differential Coding. These two coding schemes detect edges and take action to correct them. In the Edge-Correcting scheme, the quantizer error for an edge is encoded using a recursive quantizer with entropy coding and sent to the receiver as side information. In the Edge-Preserving scheme, when the quantizer input falls in the overload region, the quantizer error is encoded and sent to the receiver repeatedly until the quantizer input falls in the inner levels. These coding schemes therefore increase the bit rate in the region of an edge and require variable rate channels. We implement these two variable rate coding schemes on a token ring network. The timed token protocol supports two classes of messages: asynchronous and synchronous. The synchronous class provides a pre-allocated bandwidth and guaranteed response time. The remaining bandwidth is dynamically allocated to the asynchronous class. The Edge-Correcting DPCM is simulated by carrying the edge information under the asynchronous class. For the simulation of the Edge-Preserving scheme, the amount of information sent each time is fixed, but the length of the packet, or the bit rate for that packet, is chosen depending on the available capacity. The performance of the network and the performance of the image coding algorithms are studied.
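    A minimal plain-DPCM loop makes the edge problem visible: near an edge the prediction error exceeds the quantizer's inner range (overload), which is exactly where the two schemes above spend extra bits. Sketch, with all identifiers illustrative:

        def dpcm_encode(samples, quantize, dequantize):
            """Plain DPCM (sketch): predict each sample by the previous
            reconstruction, quantize the prediction error, and mirror the
            decoder's reconstruction so encoder and decoder stay in sync."""
            recon = 0.0
            codes, residuals = [], []
            for x in samples:
                code = quantize(x - recon)        # overloads at sharp edges
                codes.append(code)
                recon += dequantize(code)         # decoder-side reconstruction
                residuals.append(abs(x - recon))  # large residual flags an edge
            return codes, residuals

    The Edge-Correcting variant would entropy-code the flagged residual and ship it as side information under the asynchronous message class, while the Edge-Preserving variant re-encodes and resends until the quantizer input falls back into the inner levels.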

  5. Dynamically protected cat-qubits: a new paradigm for universal quantum computation

    NASA Astrophysics Data System (ADS)

    Mirrahimi, Mazyar; Leghtas, Zaki; Albert, Victor V.; Touzard, Steven; Schoelkopf, Robert J.; Jiang, Liang; Devoret, Michel H.

    2014-04-01

    We present a new hardware-efficient paradigm for universal quantum computation which is based on encoding, protecting and manipulating quantum information in a quantum harmonic oscillator. This proposal exploits multi-photon driven dissipative processes to encode quantum information in logical bases composed of Schrödinger cat states. More precisely, we consider two schemes. In a first scheme, a two-photon driven dissipative process is used to stabilize a logical qubit basis of two-component Schrödinger cat states. While such a scheme ensures a protection of the logical qubit against the photon dephasing errors, the prominent error channel of single-photon loss induces bit-flip type errors that cannot be corrected. Therefore, we consider a second scheme based on a four-photon driven dissipative process which leads to the choice of four-component Schrödinger cat states as the logical qubit. Such a logical qubit can be protected against single-photon loss by continuous photon number parity measurements. Next, applying some specific Hamiltonians, we provide a set of universal quantum gates on the encoded qubits of each of the two schemes. In particular, we illustrate how these operations can be rendered fault-tolerant with respect to various decoherence channels of participating quantum systems. Finally, we also propose experimental schemes based on quantum superconducting circuits and inspired by methods used in Josephson parametric amplification, which should allow one to achieve these driven dissipative processes along with the Hamiltonians ensuring the universal operations in an efficient manner.

  6. Correction tool for Active Shape Model based lumbar muscle segmentation.

    PubMed

    Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio

    2015-08-01

    In the clinical environment, the accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. Therefore, these tools must provide faster corrections with a low number of interactions and a user-independent solution. In this work we present a new interactive method for correcting image segmentations. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free-form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The developed method has been implemented into a software tool and evaluated for the task of lumbar muscle segmentation from magnetic resonance images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result within an average Dice coefficient of 0.92±0.03.

  7. BossPro: a biometrics-based obfuscation scheme for software protection

    NASA Astrophysics Data System (ADS)

    Kuseler, Torben; Lami, Ihsan A.; Al-Assam, Hisham

    2013-05-01

    This paper proposes to integrate biometric-based key generation into an obfuscated interpretation algorithm to protect authentication application software from illegitimate use or reverse-engineering. This is especially necessary for mCommerce because application programs on mobile devices, such as smartphones and tablet PCs, are typically open to misuse by hackers. Therefore, the scheme proposed in this paper ensures that a correct interpretation/execution of the obfuscated program code of the authentication application requires a valid biometrically generated key of the actual person to be authenticated, in real time. Without this key, the real semantics of the program cannot be understood by an attacker even if he/she gains access to this application code. Furthermore, the security provided by this scheme can be a vital aspect in protecting any application running on mobile devices, which are increasingly used to perform business, financial, or other security-related applications but are easily lost or stolen. The scheme starts by creating a personalized copy of any application based on the biometric key generated during an enrolment process with the authenticator, as well as a nonce created at the time of communication between the client and the authenticator. The obfuscated code is then shipped to the client's mobile device and integrated with biometric data extracted from the client in real time to form the unlocking key during execution. The novelty of this scheme is achieved by the close binding of the application program to the biometric key of the client, thus making the application unusable for others. Trials and experimental results on biometric key generation, based on clients' faces, and an implemented scheme prototype, based on the Android emulator, prove the concept and novelty of this proposed scheme.

  8. Research on the Application of Fast-steering Mirror in Stellar Interferometer

    NASA Astrophysics Data System (ADS)

    Mei, R.; Hu, Z. W.; Xu, T.; Sun, C. S.

    2017-07-01

    Owing to its high resolution and fast response frequency, the fast-steering mirror (FSM) is widely utilized in stellar interferometers to correct wavefront tilt caused by atmospheric turbulence and internal instrumental vibration. In this study, the non-coplanar error between the FSM and the actuator deflection axis, introduced by manufacture, assembly, and adjustment, is analyzed. Via a numerical method, the additional optical path difference (OPD) caused by the above factors is studied, and its effects on the tracking accuracy of the stellar interferometer are also discussed. On the other hand, the starlight parallelism between the beams of the two arms is one of the main factors in the loss of fringe visibility. By analyzing the influence of wavefront tilt caused by atmospheric turbulence on fringe visibility, a simple and efficient real-time correction scheme for starlight parallelism, based on a single array detector, is proposed. The feasibility of this scheme is demonstrated by laboratory experiment. The results show that, after correction by the fast-steering mirror, the starlight parallelism preliminarily meets the wavefront-tilt requirement of the stellar interferometer.

  9. Stokes space modulation format classification based on non-iterative clustering algorithm for coherent optical receivers.

    PubMed

    Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui

    2017-02-06

    A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Proof-of-concept experiments are finally implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
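    The "two simple parameters" are not spelled out in the abstract; the standard density-peaks pair (local density rho and distance delta to the nearest point of higher density) is a natural reading, sketched here as an assumption:

        import numpy as np

        def density_peak_params(points, dc):
            """Per-point quantities of density-peak clustering (sketch):
            rho = number of neighbors within cutoff distance dc;
            delta = distance to the nearest point of higher density.
            Points with large rho AND large delta are cluster centers."""
            points = np.asarray(points, float)
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            rho = (d < dc).sum(axis=1) - 1        # exclude the point itself
            delta = np.empty(len(points))
            for i in range(len(points)):
                higher = rho > rho[i]
                delta[i] = d[i, higher].min() if higher.any() else d[i].max()
            return rho, delta

    Both quantities come from a single pairwise-distance pass with no iteration, which is consistent with the moderate time complexity reported above.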

  10. Teleportation-based continuous variable quantum cryptography

    NASA Astrophysics Data System (ADS)

    Luiz, F. S.; Rigolin, Gustavo

    2017-03-01

    We present a continuous variable (CV) quantum key distribution (QKD) scheme based on the CV quantum teleportation of coherent states that yields a raw secret key made up of discrete variables for both Alice and Bob. This protocol preserves the efficient detection schemes of current CV technology (no single-photon detection techniques) and, at the same time, has efficient error correction and privacy amplification schemes due to the binary modulation of the key. We show that for a certain type of incoherent attack, it is secure for almost any value of the transmittance of the optical line used by Alice to share entangled two-mode squeezed states with Bob (no 3 dB or 50% loss limitation characteristic of beam splitting attacks). The present CVQKD protocol works deterministically (no postselection needed) with efficient direct reconciliation techniques (no reverse reconciliation) in order to generate a secure key and beyond the 50% loss case at the incoherent attack level.

  11. Investigation on Beam-Blocker-Based Scatter Correction Method for Improving CT Number Accuracy

    NASA Astrophysics Data System (ADS)

    Lee, Hoyeon; Min, Jonghwan; Lee, Taewon; Pua, Rizza; Sabir, Sohail; Yoon, Kown-Ha; Kim, Hokyung; Cho, Seungryong

    2017-03-01

    Cone-beam computed tomography (CBCT) is gaining widespread use in various medical and industrial applications but suffers from a substantially larger amount of scatter than conventional diagnostic CT, resulting in relatively poor image quality. Various methods that can reduce and/or correct for the scatter in CBCT have therefore been developed. Scatter correction using a beam-blocker has been considered a direct measurement-based approach, providing accurate scatter estimation from the data in the shadows of the beam-blocker. To the best of our knowledge, there has been no report on the significance of the scatter from the beam-blocker itself in such correction methods. In this paper, we identified the scatter from the beam-blocker that is detected in the object-free projection data, investigated its influence on the accuracy of CBCT reconstructed images, and developed a scatter correction scheme that accounts for this scatter as well as the scatter from the scanned object.
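    A minimal sketch of the measurement-based correction (the row-wise linear interpolation and the mask layout are assumptions; the scatter contributed by the blocker itself, which the paper identifies, is omitted here):

        import numpy as np

        def scatter_correct(projection, blocker_mask):
            """Beam-blocker scatter correction (sketch): the signal recorded
            in the blocker shadows is taken as scatter, interpolated across
            the open regions row by row, and subtracted from the projection.
            Assumes every row contains at least one shadow pixel."""
            rows, cols = projection.shape
            corrected = np.empty_like(projection, dtype=float)
            x = np.arange(cols)
            for r in range(rows):
                shadow = blocker_mask[r]
                scatter = np.interp(x, x[shadow], projection[r, shadow])
                corrected[r] = np.clip(projection[r] - scatter, 0.0, None)
            return corrected

    The paper's point is precisely that the shadow signal is not pure object scatter: the blocker scatters too, and accounting for that extra term is what improves the CT number accuracy.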

  12. Upwind schemes and bifurcating solutions in real gas computations

    NASA Technical Reports Server (NTRS)

    Suresh, Ambady; Liou, Meng-Sing

    1992-01-01

    The area of high speed flow is seeing a renewed interest due to advanced propulsion concepts such as the National Aerospace Plane (NASP), the Space Shuttle, and future civil transport concepts. Upwind schemes to solve such flows have become increasingly popular in the last decade due to their excellent shock capturing properties. In the first part of this paper the authors present the extension of the Osher scheme to equilibrium and non-equilibrium gases. For simplicity, the source terms are treated explicitly. Computations based on the above scheme are presented to demonstrate the feasibility, accuracy and efficiency of the proposed scheme. One of the test problems is a Chapman-Jouguet detonation problem for which numerical solutions have been known to bifurcate into spurious weak detonation solutions on coarse grids. Results indicate that the numerical solution obtained depends both on the upwinding scheme used and the limiter employed to obtain second order accuracy. For example, the Osher scheme gives the correct CJ solution when the superbee limiter is used, but gives the spurious solution when the Van Leer limiter is used. With the Roe scheme the spurious solution is obtained for all limiters.

  13. An Improved and Secure Anonymous Biometric-Based User Authentication with Key Agreement Scheme for the Integrated EPR Information System.

    PubMed

    Jung, Jaewook; Kang, Dongwoo; Lee, Donghoon; Won, Dongho

    2017-01-01

    Nowadays, many hospitals and medical institutes employ an authentication protocol within electronic patient records (EPR) services in order to provide protected electronic transactions in e-medicine systems. In order to establish efficient and robust health care services, numerous studies have been carried out on authentication protocols. Recently, Li et al. proposed a user authenticated key agreement scheme for EPR information systems, arguing that their scheme is able to resist various types of attacks and preserve diverse security properties. However, this scheme possesses critical vulnerabilities. First, the scheme cannot prevent off-line password guessing attacks and server spoofing attacks, and cannot preserve user identity. Second, there is no password verification process, with the failure to identify the correct password at the beginning of the login phase. Third, the mechanism of password change is incompetent, in that it induces inefficient communication with the server to change a user password. Therefore, we suggest an upgraded version of the user authenticated key agreement scheme that provides enhanced security. Our security and performance analysis shows that, compared to other related schemes, our scheme not only improves the security level but also ensures efficiency.

  14. An Improved and Secure Anonymous Biometric-Based User Authentication with Key Agreement Scheme for the Integrated EPR Information System

    PubMed Central

    Kang, Dongwoo; Lee, Donghoon; Won, Dongho

    2017-01-01

    Nowadays, many hospitals and medical institutes employ an authentication protocol within electronic patient records (EPR) services in order to provide protected electronic transactions in e-medicine systems. In order to establish efficient and robust health care services, numerous studies have been carried out on authentication protocols. Recently, Li et al. proposed a user authenticated key agreement scheme for EPR information systems, arguing that their scheme is able to resist various types of attacks and preserve diverse security properties. However, this scheme possesses critical vulnerabilities. First, the scheme cannot prevent off-line password guessing attacks and server spoofing attacks, and cannot preserve user identity. Second, there is no password verification process, with the failure to identify the correct password at the beginning of the login phase. Third, the mechanism of password change is incompetent, in that it induces inefficient communication with the server to change a user password. Therefore, we suggest an upgraded version of the user authenticated key agreement scheme that provides enhanced security. Our security and performance analysis shows that, compared to other related schemes, our scheme not only improves the security level but also ensures efficiency. PMID:28046075

  15. Both channel coding and wavefront correction on the turbulence mitigation of optical communications using orbital angular momentum multiplexing

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Zou, Li; Gong, Longyan; Cheng, Weiwen; Zheng, Baoyu; Chen, Hanwu

    2016-10-01

    A free-space optical (FSO) communication link with multiplexed orbital angular momentum (OAM) modes has been demonstrated to largely enhance the system capacity without a corresponding increase in spectral bandwidth, but the performance of the link is unavoidably degraded by atmospheric turbulence (AT). In this paper, we propose a turbulence mitigation scheme to improve the AT tolerance of the OAM-multiplexed FSO communication link using both channel coding and wavefront correction. In the scheme, we first utilize a wavefront correction method to mitigate the phase distortion, and then use a channel code to further correct the errors in each OAM mode. The improvement in AT tolerance is discussed relative to the performance of the link with or without channel coding/wavefront correction. The results show that the bit error rate performance is improved greatly. The detrimental effect of AT on the OAM-multiplexed FSO communication link can be removed by the proposed scheme even in the relatively strong turbulence regime, such as C_n² = 3.6 × 10⁻¹⁴ m⁻²/³.

  16. On the development of OpenFOAM solvers based on explicit and implicit high-order Runge-Kutta schemes for incompressible flows with heat transfer

    NASA Astrophysics Data System (ADS)

    D'Alessandro, Valerio; Binci, Lorenzo; Montelpare, Sergio; Ricci, Renato

    2018-01-01

    Open-source CFD codes provide suitable environments for implementing and testing low-dissipative algorithms typically used to simulate turbulence. In this research work we developed CFD solvers for incompressible flows based on high-order explicit and diagonally implicit Runge-Kutta (RK) schemes for time integration. In particular, an iterated PISO-like procedure based on Rhie-Chow correction was used to handle pressure-velocity coupling within each implicit RK stage. For the explicit approach, a projected scheme was used to avoid the "checker-board" effect. The above-mentioned approaches were also extended to flow problems involving heat transfer. It is worth noting that the numerical technology available in the OpenFOAM library was used for space discretization. In this work, we additionally explore the reliability and effectiveness of the proposed implementations by computing several unsteady flow benchmarks; we also show that the numerical diffusion due to the time integration approach is completely canceled using the solution techniques proposed here.

  17. High fidelity quantum teleportation assistance with quantum neural network

    NASA Astrophysics Data System (ADS)

    Huang, Chunhui; Wu, Bichun

    2014-09-01

    In this paper, a high fidelity scheme of quantum teleportation based on a quantum neural network (QNN) is proposed. The QNN is composed of multi-bit control-not gates. The quantum teleportation of a qubit state via two-qubit entangled channels is investigated by solving the master equation with Lindblad operators in a noisy environment. To ensure the security of quantum teleportation, indirect training of the QNN is employed: only 10% of the teleported information is extracted for the training of the QNN parameters. The outputs are then corrected by the other QNN at Bob's side. We build a random series of numbers ranging over [0, π] as inputs and simulate the properties of our teleportation scheme. The results show that the fidelity of the quantum teleportation system is significantly improved, approaching 1, by the error correction of the QNN. This illustrates that the distortion can be eliminated and that high-fidelity quantum teleportation can be implemented.

  18. On the Difference Between Additive and Subtractive QM/MM Calculations

    PubMed Central

    Cao, Lili; Ryde, Ulf

    2018-01-01

    The combined quantum mechanical (QM) and molecular mechanical (MM) approach (QM/MM) is a popular method to study reactions in biochemical macromolecules. Even if the general procedure of using QM for a small, but interesting part of the system and MM for the rest is common to all approaches, the details of the implementations vary extensively, especially the treatment of the interface between the two systems. For example, QM/MM can use either additive or subtractive schemes, of which the former is often said to be preferable, although the two schemes are often mixed up with mechanical and electrostatic embedding. In this article, we clarify the similarities and differences of the two approaches. We show that inherently, the two approaches should be identical and in practice require the same sets of parameters. However, the subtractive scheme provides an opportunity to correct errors introduced by the truncation of the QM system, i.e., the link atoms, but such corrections require additional MM parameters for the QM system. We describe and test three types of link-atom correction, viz. for van der Waals, electrostatic, and bonded interactions. The calculations show that electrostatic and bonded link-atom corrections often give rise to problems in the geometries and energies. The van der Waals link-atom corrections are quite small and give results similar to a pure additive QM/MM scheme. Therefore, both approaches can be recommended. PMID:29666794

  19. Comparison of different Aethalometer correction schemes and a reference multi-wavelength absorption technique for ambient aerosol data

    NASA Astrophysics Data System (ADS)

    Saturno, Jorge; Pöhlker, Christopher; Massabò, Dario; Brito, Joel; Carbone, Samara; Cheng, Yafang; Chi, Xuguang; Ditas, Florian; Hrabě de Angelis, Isabella; Morán-Zuloaga, Daniel; Pöhlker, Mira L.; Rizzo, Luciana V.; Walter, David; Wang, Qiaoqiao; Artaxo, Paulo; Prati, Paolo; Andreae, Meinrat O.

    2017-08-01

    Deriving absorption coefficients from Aethalometer attenuation data requires different corrections to compensate for artifacts related to filter-loading effects, scattering by filter fibers, and scattering by aerosol particles. In this study, two different correction schemes were applied to seven-wavelength Aethalometer data, using multi-angle absorption photometer (MAAP) data as a reference absorption measurement at 637 nm. The compensation algorithms were compared to five-wavelength offline absorption measurements obtained with a multi-wavelength absorbance analyzer (MWAA), which serves as a multiple-wavelength reference measurement. The online measurements took place in the Amazon rainforest, from the wet-to-dry transition season to the dry season (June-September 2014). The mean absorption coefficient (at 637 nm) during this period was 1.8 ± 2.1 Mm-1, with a maximum of 15.9 Mm-1. Under these conditions, the filter-loading compensation was negligible. One of the correction schemes was found to artificially increase the short-wavelength absorption coefficients. It was found that accounting for the aerosol optical properties in the scattering compensation significantly affects the absorption Ångström exponent (åABS) retrievals. Proper Aethalometer data compensation schemes are crucial to retrieve the correct åABS, which is commonly implemented in brown carbon contribution calculations. Additionally, we found that the wavelength dependence of uncompensated Aethalometer attenuation data significantly correlates with the åABS retrieved from offline MWAA measurements.

  20. On the difference between additive and subtractive QM/MM calculations

    NASA Astrophysics Data System (ADS)

    Cao, Lili; Ryde, Ulf

    2018-04-01

    The combined quantum mechanical (QM) and molecular mechanical (MM) approach (QM/MM) is a popular method to study reactions in biochemical macromolecules. Even if the general procedure of using QM for a small, but interesting part of the system and MM for the rest is common to all approaches, the details of the implementations vary extensively, especially the treatment of the interface between the two systems. For example, QM/MM can use either additive or subtractive schemes, of which the former is often said to be preferable, although the two schemes are often mixed up with mechanical and electrostatic embedding. In this article, we clarify the similarities and differences of the two approaches. We show that inherently, the two approaches should be identical and in practice require the same sets of parameters. However, the subtractive scheme provides an opportunity to correct errors introduced by the truncation of the QM system, i.e. the link atoms, but such corrections require additional MM parameters for the QM system. We describe and test three types of link-atom correction, viz. for van der Waals, electrostatic and bonded interactions. The calculations show that electrostatic and bonded link-atom corrections often give rise to problems in the geometries and energies. The van der Waals link-atom corrections are quite small and give results similar to a pure additive QM/MM scheme. Therefore, both approaches can be recommended.

  1. Convergence of generalized MUSCL schemes

    NASA Technical Reports Server (NTRS)

    Osher, S.

    1984-01-01

    Semi-discrete generalizations of the second order extension of Godunov's scheme, known as the MUSCL scheme, are constructed, starting with any three point E scheme. They are used to approximate scalar conservation laws in one space dimension. For convex conservation laws, each member of a wide class is proven to be a convergent approximation to the correct physical solution. Comparison with another class of high resolution convergent schemes is made.

  2. Bias-correction and Spatial Disaggregation for Climate Change Impact Assessments at a basin scale

    NASA Astrophysics Data System (ADS)

    Nyunt, Cho; Koike, Toshio; Yamamoto, Akio; Nemoto, Toshihoro; Kitsuregawa, Masaru

    2013-04-01

    Basin-scale climate change impact studies mainly rely on general circulation models (GCMs) run under the relevant emission scenarios. Realistic and reliable GCM data are crucial for national-scale or basin-scale impact and vulnerability assessments aimed at building a safe society under climate change. However, GCMs fail to simulate regional climate features, owing to imprecise parameterization schemes in atmospheric physics and their coarse resolution. This study describes how to exclude unsatisfactory GCMs with respect to the basin of interest, how to minimize the biases of GCM precipitation through statistical bias correction, and how to apply a spatial disaggregation scheme, a kind of downscaling, within a basin. GCM rejection uses the regional climate features of seasonal evolution as a benchmark and depends mainly on the spatial correlation and root mean square error of precipitation and atmospheric variables over the target region. The Global Precipitation Climatology Project (GPCP) and the Japanese 25-year Reanalysis Project (JRA-25) are used as references in evaluating the spatial pattern and error of each GCM. The statistical bias-correction scheme addresses three main flaws of GCM precipitation: too many low-intensity drizzle days with no dry days, underestimation of heavy rainfall, and misrepresented inter-annual variability of the local climate. Biases in heavy rainfall are corrected by fitting a generalized Pareto distribution (GPD) to a peak-over-threshold series. The rain-day frequency error is fixed by rank-order statistics, and the seasonal-variation problem is addressed by fitting a gamma distribution in each month to in-situ stations versus the corresponding GCM grids. By applying the proposed bias-correction technique to all in-situ stations and their respective GCM grids, an easy and effective downscaling process for impact studies at the basin scale is accomplished. The proposed method has been examined for applicability to basins in various climate regions all over the world, and the biases were controlled well in all of them. The bias-corrected and downscaled GCM precipitation is then ready to drive the Water and Energy Budget based Distributed Hydrological Model (WEB-DHM) for analysing streamflow change or water availability of a target basin under near-future climate change, and it can also support interdisciplinary studies of drought, flood, food, and health. In summary, an effective and comprehensive statistical bias-correction method was established to bridge the gap from GCM scale to basin scale without difficulty, providing more reliable information for sound river-management decisions and a more resilient society.
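
    To make the precipitation-correction step concrete, here is a minimal sketch of monthly gamma-based quantile mapping with a drizzle cut-off, under the illustrative assumption of a fixed 0.1 mm wet-day threshold; the paper additionally fits a generalized Pareto distribution above a peak-over-threshold level for heavy rainfall, which is omitted here.

        # Sketch: wet-day frequency fix plus gamma-to-gamma quantile mapping.
        import numpy as np
        from scipy import stats

        def bias_correct_month(gcm, obs, wet_mm=0.1):
            # 1. Drizzle removal: threshold GCM so wet-day counts match the gauges.
            n_wet = int(np.sum(obs > wet_mm))
            thresh = np.sort(gcm)[::-1][n_wet - 1] if n_wet > 0 else np.inf
            wet = gcm >= thresh
            # 2. Fit gamma distributions to wet-day amounts (model and observed).
            a_m, _, s_m = stats.gamma.fit(gcm[wet], floc=0)
            a_o, _, s_o = stats.gamma.fit(obs[obs > wet_mm], floc=0)
            # 3. Quantile mapping: model CDF composed with observed inverse CDF.
            out = np.zeros_like(gcm)
            out[wet] = stats.gamma.ppf(stats.gamma.cdf(gcm[wet], a_m, scale=s_m),
                                       a_o, scale=s_o)
            return out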

  3. A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.

    PubMed

    Ahn, C B; Cho, Z H

    1987-01-01

    A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each performed by inverse multiplication of the estimated phase error. The first-order error is estimated from the phase of the autocorrelation calculated from the complex-valued, phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order-corrected image. Since all correction procedures are performed in the spatial domain after data acquisition is complete, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-involved NMR imaging techniques, including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging. Experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.
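
    A minimal 1-D numpy sketch of the two estimation steps, assuming the phase error is well modeled as phi0 + phi1*x (the bin count is an illustrative choice):

        import numpy as np

        def phase_correct(img):
            # First order: the lag-1 autocorrelation of the complex image
            # has phase approximately equal to the linear coefficient phi1.
            x = np.arange(img.size)
            phi1 = np.angle(np.sum(img[1:] * np.conj(img[:-1])))
            img1 = img * np.exp(-1j * phi1 * x)
            # Zero order: take the most populated bin of the phase histogram.
            hist, edges = np.histogram(np.angle(img1), bins=64, range=(-np.pi, np.pi))
            k = np.argmax(hist)
            phi0 = 0.5 * (edges[k] + edges[k + 1])
            return img1 * np.exp(-1j * phi0)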

  4. A numerical study of the steady scalar convective diffusion equation for small viscosity

    NASA Technical Reports Server (NTRS)

    Giles, M. B.; Rose, M. E.

    1983-01-01

    A time-independent convection diffusion equation is studied by means of a compact finite difference scheme and numerical solutions are compared to the analytic inviscid solutions. The correct internal and external boundary layer behavior is observed, due to an inherent feature of the scheme which automatically produces upwind differencing in inviscid regions and the correct viscous behavior in viscous regions.

  5. Correction: All-solid-state Z-scheme system arrays of Fe2V4O13/RGO/CdS for visible light-driving photocatalytic CO2 reduction into renewable hydrocarbon fuel.

    PubMed

    Li, Ping; Zhou, Yong; Li, Haijin; Xu, Qinfeng; Meng, Xianguang; Wang, Xiaoyong; Xiao, Min; Zou, Zhigang

    2015-01-31

    Correction for 'All-solid-state Z-scheme system arrays of Fe2V4O13/RGO/CdS for visible light-driving photocatalytic CO2 reduction into renewable hydrocarbon fuel' by Ping Li et al., Chem. Commun., 2015, 51, 800-803.

  6. A family of chaotic pure analog coding schemes based on baker's map function

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun

    2015-12-01

    This paper considers a family of pure analog coding schemes constructed from dynamic systems governed by chaotic functions: the baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold errors (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
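
    For reference, the underlying map is easy to state; the sketch below iterates the standard 2-D baker's map to generate a chaotic trajectory from an analog source value (the paper's mirrored and single-input variants modify this basic construction):

        def bakers_map(x, y):
            # One iteration of the baker's map on the unit square.
            if x < 0.5:
                return 2.0 * x, 0.5 * y
            return 2.0 * x - 1.0, 0.5 * (y + 1.0)

        def encode(source, n_iter=8):
            # Analog codeword: the x-trajectory of (source, source) under the map.
            x, y = source, source
            codeword = []
            for _ in range(n_iter):
                x, y = bakers_map(x, y)
                codeword.append(x)
            return codeword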

  7. A Computational Scheme To Evaluate Hamaker Constants of Molecules with Practical Size and Anisotropy.

    PubMed

    Hongo, Kenta; Maezono, Ryo

    2017-11-14

    We propose a computational scheme to evaluate Hamaker constants, A, of molecules with practical sizes and anisotropies. Given the increasing feasibility of diffusion Monte Carlo (DMC) methods for evaluating the binding curves from which the constants are extracted, we discuss how to treat the averaging over anisotropy and how to correct the bias due to nonadditivity. We have developed a computational procedure for dealing with the anisotropy and reducing statistical errors and biases in DMC evaluations, based on possible validations of the predicted A. We applied the scheme to the cyclohexasilane molecule, Si6H12, used in "printed electronics" fabrication, obtaining A ≈ 105 ± 2 zJ, which lies in a plausible range supported by other possible extrapolations. The scheme provided here opens a way to use handy ab initio evaluations to predict wettabilities, in the form of materials informatics, over a broader range of molecules.

  8. Application Of Multi-grid Method On China Seas' Temperature Forecast

    NASA Astrophysics Data System (ADS)

    Li, W.; Xie, Y.; He, Z.; Liu, K.; Han, G.; Ma, J.; Li, D.

    2006-12-01

    Correlation scales have been used for decades in traditional three-dimensional variational (3D-Var) data assimilation to estimate the background error covariance for numerical forecasts and reanalyses of the atmosphere and ocean. However, this scheme still has some drawbacks. First, the correlation scales are difficult to determine accurately. Second, the positive definiteness of the first-guess error covariance matrix cannot be guaranteed unless the correlation scales are sufficiently small. Xie et al. (2005) indicated that a traditional 3D-Var corrects only errors at certain wavelengths and that its accuracy depends on the accuracy of the first-guess covariance. In general, short-wavelength errors cannot be corrected well until long-wavelength errors are corrected, so an inaccurate first-guess covariance may mistakenly treat long-wave errors as short-wave ones and produce an erroneous analysis. For the purpose of quickly minimizing the errors of long and short waves successively, a new 3D-Var data assimilation scheme, called the multi-grid data assimilation scheme, is proposed in this paper. By assimilating shipboard SST and temperature profiles into a numerical model of the China Seas, we applied this scheme in a two-month data assimilation and forecast experiment with favorable results. Compared with the traditional 3D-Var scheme, the new scheme has higher forecast accuracy and a lower forecast root-mean-square (RMS) error. Furthermore, the scheme was applied to assimilate shipboard SST, AVHRR Pathfinder Version 5.0 SST, and temperature profiles simultaneously in a ten-month forecast experiment on the sea temperature of the China Seas, which also yielded successful forecasts. In particular, the new scheme demonstrated great numerical efficiency in these analyses.
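
    The coarse-to-fine idea can be illustrated with a toy 1-D least-squares analysis in which each grid level fits only the residual left by the coarser levels; the grid sizes and hat-function basis below are illustrative assumptions, not the operational implementation:

        import numpy as np

        def multigrid_analysis(x_obs, y_obs, levels=(3, 9, 33)):
            # Fit corrections on successively finer grids: long waves first.
            increments, resid = [], y_obs.astype(float).copy()
            for n in levels:
                nodes = np.linspace(x_obs.min(), x_obs.max(), n)
                # Hat-function (linear interpolation) basis on this level.
                basis = np.array([np.interp(x_obs, nodes, np.eye(n)[k])
                                  for k in range(n)]).T
                coef, *_ = np.linalg.lstsq(basis, resid, rcond=None)
                increments.append((nodes, coef))
                resid -= basis @ coef
            return lambda x: sum(np.interp(x, nd, cf) for nd, cf in increments)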

  9. A third-order gas-kinetic CPR method for the Euler and Navier-Stokes equations on triangular meshes

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Li, Qibing; Fu, Song; Wang, Z. J.

    2018-06-01

    A third-order accurate gas-kinetic scheme based on the correction procedure via reconstruction (CPR) framework is developed for the Euler and Navier-Stokes equations on triangular meshes. The scheme combines the accuracy and efficiency of the CPR formulation with the multidimensional characteristics and robustness of the gas-kinetic flux solver. Comparing with high-order finite volume gas-kinetic methods, the current scheme is more compact and efficient by avoiding wide stencils on unstructured meshes. Unlike the traditional CPR method where the inviscid and viscous terms are treated differently, the inviscid and viscous fluxes in the current scheme are coupled and computed uniformly through the kinetic evolution model. In addition, the present scheme adopts a fully coupled spatial and temporal gas distribution function for the flux evaluation, achieving high-order accuracy in both space and time within a single step. Numerical tests with a wide range of flow problems, from nearly incompressible to supersonic flows with strong shocks, for both inviscid and viscous problems, demonstrate the high accuracy and efficiency of the present scheme.

  10. Security Analysis and Improvement of 'a More Secure Anonymous User Authentication Scheme for the Integrated EPR Information System'.

    PubMed

    Islam, S K Hafizul; Khan, Muhammad Khurram; Li, Xiong

    2015-01-01

    Over the past few years, secure and privacy-preserving user authentication schemes have become an integral part of healthcare system applications. Recently, Wen designed an improved user authentication system, building on the Lee et al. scheme for the integrated electronic patient record (EPR) information system, which is analyzed in this study. We have found that Wen's scheme still has the following inefficiencies: (1) the correctness of identity and password is not verified during the login and password change phases; (2) it is vulnerable to impersonation attack and privileged-insider attack; (3) it is designed without the revocation of lost/stolen smart cards; (4) the explicit key confirmation and no key control properties are absent; and (5) a user cannot update his/her password without the help of the server and a secure channel. We then propose an enhanced two-factor user authentication system based on the intractability of the quadratic residue problem (QRP) in the multiplicative group. Our scheme provides more security features and functionality than other schemes found in the literature.

  11. Security Analysis and Improvement of ‘a More Secure Anonymous User Authentication Scheme for the Integrated EPR Information System’

    PubMed Central

    Islam, SK Hafizul; Khan, Muhammad Khurram; Li, Xiong

    2015-01-01

    Over the past few years, secure and privacy-preserving user authentication schemes have become an integral part of healthcare system applications. Recently, Wen designed an improved user authentication system, building on the Lee et al. scheme for the integrated electronic patient record (EPR) information system, which is analyzed in this study. We have found that Wen’s scheme still has the following inefficiencies: (1) the correctness of identity and password is not verified during the login and password change phases; (2) it is vulnerable to impersonation attack and privileged-insider attack; (3) it is designed without the revocation of lost/stolen smart cards; (4) the explicit key confirmation and no key control properties are absent; and (5) a user cannot update his/her password without the help of the server and a secure channel. We then propose an enhanced two-factor user authentication system based on the intractability of the quadratic residue problem (QRP) in the multiplicative group. Our scheme provides more security features and functionality than other schemes found in the literature. PMID:26263401

  12. Flux-corrected transport algorithms for continuous Galerkin methods based on high order Bernstein finite elements

    NASA Astrophysics Data System (ADS)

    Lohmann, Christoph; Kuzmin, Dmitri; Shadid, John N.; Mabuza, Sibusiso

    2017-09-01

    This work extends the flux-corrected transport (FCT) methodology to arbitrary order continuous finite element discretizations of scalar conservation laws on simplex meshes. Using Bernstein polynomials as local basis functions, we constrain the total variation of the numerical solution by imposing local discrete maximum principles on the Bézier net. The design of accuracy-preserving FCT schemes for high order Bernstein-Bézier finite elements requires the development of new algorithms and/or generalization of limiting techniques tailored for linear and multilinear Lagrange elements. In this paper, we propose (i) a new discrete upwinding strategy leading to local extremum bounded low order approximations with compact stencils, (ii) high order variational stabilization based on the difference between two gradient approximations, and (iii) new localized limiting techniques for antidiffusive element contributions. The optional use of a smoothness indicator, based on a second derivative test, makes it possible to potentially avoid unnecessary limiting at smooth extrema and achieve optimal convergence rates for problems with smooth solutions. The accuracy of the proposed schemes is assessed in numerical studies for the linear transport equation in 1D and 2D.

  13. Post-processing through linear regression

    NASA Astrophysics Data System (ADS)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast, and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial-condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best-member OLS with noise). At long lead times, the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred.
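
    The simplest of the compared schemes, OLS post-processing, amounts to the following sketch (the training data here are illustrative):

        import numpy as np

        def fit_ols_correction(forecasts, observations):
            # Regress observations on raw forecasts: obs ~ a + b * forecast.
            b, a = np.polyfit(forecasts, observations, 1)   # slope, intercept
            return a, b

        train_fc = np.array([2.1, 4.0, 5.9, 8.2, 9.8])
        train_ob = np.array([1.5, 3.8, 5.2, 7.9, 9.1])
        a, b = fit_ols_correction(train_fc, train_ob)
        corrected = a + b * 6.5    # post-processed value for a new raw forecast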

  14. Monotonic Derivative Correction for Calculation of Supersonic Flows

    ERIC Educational Resources Information Center

    Bulat, Pavel V.; Volkov, Konstantin N.

    2016-01-01

    Aim of the study: This study examines numerical methods for solving the problems in gas dynamics, which are based on an exact or approximate solution to the problem of breakdown of an arbitrary discontinuity (the Riemann problem). Results: Comparative analysis of finite difference schemes for the Euler equations integration is conducted on the…

  15. Verification in Referral-Based Crowdsourcing

    PubMed Central

    Naroditskiy, Victor; Rahwan, Iyad; Cebrian, Manuel; Jennings, Nicholas R.

    2012-01-01

    Online social networks offer unprecedented potential for rallying a large number of people to accomplish a given task. Here we focus on information gathering tasks where rare information is sought through “referral-based crowdsourcing”: the information request is propagated recursively through invitations among members of a social network. Whereas previous work analyzed incentives for the referral process in a setting with only correct reports, misreporting is known to be both pervasive in crowdsourcing applications, and difficult/costly to filter out. A motivating example for our work is the DARPA Red Balloon Challenge where the level of misreporting was very high. In order to undertake a formal study of verification, we introduce a model where agents can exert costly effort to perform verification and false reports can be penalized. This is the first model of verification and it provides many directions for future research, which we point out. Our main theoretical result is the compensation scheme that minimizes the cost of retrieving the correct answer. Notably, this optimal compensation scheme coincides with the winning strategy of the Red Balloon Challenge. PMID:23071530

  16. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Inviscid Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes, namely two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options of the lowest complexity and second-order discretization errors. On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with the weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with the weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.

  17. PET/CT detectability and classification of simulated pulmonary lesions using an SUV correction scheme

    NASA Astrophysics Data System (ADS)

    Morrow, Andrew N.; Matthews, Kenneth L., II; Bujenovic, Steven

    2008-03-01

    Positron emission tomography (PET) and computed tomography (CT) together are a powerful diagnostic tool, but imperfect image quality allows false positive and false negative diagnoses to be made by any observer despite experience and training. This work investigates the effects of PET acquisition mode, reconstruction method, and a standard uptake value (SUV) correction scheme on the classification of lesions as benign or malignant in PET/CT images of an anthropomorphic phantom. The scheme accounts for the partial volume effect (PVE) and PET resolution. The observer draws a region of interest (ROI) around the lesion using the CT dataset. A simulated homogeneous PET lesion of the same shape as the drawn ROI is blurred with the point spread function (PSF) of the PET scanner to estimate the PVE, providing a scaling factor to produce a corrected SUV. Computer simulations showed that the accuracy of the corrected PET values depends on variations in the CT-drawn boundary and the position of the lesion with respect to the PET image matrix, especially for smaller lesions. Correction accuracy was affected slightly by mismatch of the simulation PSF and the actual scanner PSF. The receiver operating characteristic (ROC) study resulted in several observations. Using observer-drawn ROIs, scaled tumor-background ratios (TBRs) represented actual TBRs more accurately than unscaled TBRs. For the PET images, 3D OSEM outperformed 2D OSEM, 3D OSEM outperformed 3D FBP, and 2D OSEM outperformed 2D FBP. The correction scheme significantly increased sensitivity and slightly increased accuracy for all acquisition and reconstruction modes at the cost of a small decrease in specificity.
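
    The heart of the SUV correction can be sketched in a few lines: blur a homogeneous lesion of the CT-drawn shape with the scanner PSF and rescale the measured SUV by the resulting recovery fraction. The Gaussian PSF width and voxel size below are assumed values, not the scanner's calibration.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def corrected_suv(roi_mask, measured_suv, fwhm_mm=6.0, voxel_mm=2.0):
            # roi_mask: boolean array of the lesion shape drawn on CT.
            sigma = fwhm_mm / (2.355 * voxel_mm)        # PSF width in voxels
            blurred = gaussian_filter(roi_mask.astype(float), sigma)
            recovery = blurred[roi_mask].mean()          # fraction retained in ROI
            return measured_suv / recovery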

  18. Fiber-optic extrinsic Fabry-Perot interferometer sensors with three-wavelength digital phase demodulation.

    PubMed

    Schmidt, M; Fürstenau, N

    1999-05-01

    A three-wavelength-based passive quadrature digital phase-demodulation scheme has been developed for readout of fiber-optic extrinsic Fabry-Perot interferometer vibration, acoustic, and strain sensors. This scheme uses a superluminescent diode light source with interference filters in front of the photodiodes and real-time arctan calculation. Quasi-static strain and dynamic vibration sensing with up to an 80-kHz sampling rate is demonstrated. Periodic nonlinearities owing to dephasing with increasing fringe number are corrected for with a suitable algorithm, resulting in significant improvement of the linearity of the sensor characteristics.
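
    A generic arctan demodulation of this type looks as follows, under the illustrative assumption that the three outputs carry mutual phase offsets of 2*pi/3 (the paper derives its quadrature signals from three wavelengths rather than three phase steps):

        import numpy as np

        def demodulate(i1, i2, i3):
            # I_k = A + B*cos(phi + 2*pi*(k-1)/3): recover phi modulo 2*pi.
            return np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

        # phase = np.unwrap(demodulate(i1, i2, i3))  # track fringes over time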

  19. Characterization and optimization of an optical and electronic architecture for photon counting

    NASA Astrophysics Data System (ADS)

    Correa, M. del M.; Pérez, F. R.

    2018-04-01

    This work shows a time-domain method for the discrimination and digitization of pulses coming from optical detectors, considering the presence of electronic noise and afterpulsing. The developed signal processing scheme is based on a time-to-digital converter (TDC) and a voltage discriminator. After setting appropriate parameters for acquiring spectra, the acquired data were corrected for wavelength, the intensity response function, and noise. The performance of this scheme is discussed through its characterization as well as a comparison of its spectra with those obtained by an Ocean Optics HR4000 commercial reference.

  20. Molecular implementation of molecular shift register memories

    NASA Technical Reports Server (NTRS)

    Beratan, David N. (Inventor); Onuchic, Jose N. (Inventor)

    1991-01-01

    An electronic shift register memory (20) at the molecular level is described. The memory elements are based on a chain of electron transfer molecules (22) and the information is shifted by photoinduced (26) electron transfer reactions. Thus, multi-step sequences of charge transfer reactions are used to move charge with high efficiency down a molecular chain. The device integrates compositions of the invention onto a VLSI substrate (36), providing an example of a molecular electronic device which may be fabricated. Three energy level schemes, molecular implementation of these schemes, optical excitation strategies, charge amplification strategies, and error correction strategies are described.

  1. Extracting Baseline Electricity Usage Using Gradient Tree Boosting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Taehoon; Lee, Dongeun; Choi, Jaesik

    To understand how specific interventions affect a process observed over time, we need to control for the other factors that influence outcomes. Such a model that captures all factors other than the one of interest is generally known as a baseline. In our study of how different pricing schemes affect residential electricity consumption, the baseline would need to capture the impact of outdoor temperature along with many other factors. In this work, we examine a number of different data mining techniques and demonstrate Gradient Tree Boosting (GTB) to be an effective method to build the baseline. We train GTB on data prior to the introduction of new pricing schemes, and apply the known temperature following the introduction of new pricing schemes to predict electricity usage with the expected temperature correction. Our experiments and analyses show that the baseline models generated by GTB capture the core characteristics over the two years with the new pricing schemes. In contrast to the majority of regression based techniques which fail to capture the lag between the peak of daily temperature and the peak of electricity usage, the GTB generated baselines are able to correctly capture the delay between the temperature peak and the electricity peak. Furthermore, subtracting this temperature-adjusted baseline from the observed electricity usage, we find that the resulting values are more amenable to interpretation, which demonstrates that the temperature-adjusted baseline is indeed effective.
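
    A hedged sketch of such a baseline model, with illustrative features (temperature, hour, weekday) rather than the study's exact inputs:

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        def fit_baseline(temp, hour, weekday, usage):
            # Train on pre-intervention data only.
            X = np.column_stack([temp, hour, weekday])
            model = GradientBoostingRegressor(n_estimators=300, max_depth=4,
                                              learning_rate=0.05)
            return model.fit(X, usage)

        # baseline = fit_baseline(T_pre, h_pre, wd_pre, kwh_pre)
        # effect = kwh_post - baseline.predict(np.column_stack([T_post, h_post, wd_post]))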

  2. CEPC booster design study

    DOE PAGES

    Bian, Tianjian; Gao, Jie; Zhang, Chuang; ...

    2017-12-10

    In September 2012, Chinese scientists proposed a Circular Electron Positron Collider (CEPC) in China at 240 GeV center-of-mass energy for Higgs studies. The booster provides 120 GeV electron and positron beams to the CEPC collider for top-up injection at 0.1 Hz. The design of the full energy booster ring of the CEPC is a challenge: the ejected beam energy is 120 GeV while the injected beam energy is 6 GeV. In this paper we describe two alternative schemes, the wiggler bend scheme and the normal bend scheme. For the wiggler bend scheme, we propose to operate the booster ring as a large wiggler at low energy and as a normal ring at high energy, to avoid the problem of very low dipole magnet fields. For the normal bend scheme, we implement an orbit correction to compensate for the Earth's magnetic field.

  3. CEPC booster design study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bian, Tianjian; Gao, Jie; Zhang, Chuang

    In September 2012, Chinese scientists proposed a Circular Electron Positron Collider (CEPC) in China at 240 GeV center-of-mass energy for Higgs studies. The booster provides 120 GeV electron and positron beams to the CEPC collider for top-up injection at 0.1 Hz. The design of the full energy booster ring of the CEPC is a challenge: the ejected beam energy is 120 GeV while the injected beam energy is 6 GeV. In this paper we describe two alternative schemes, the wiggler bend scheme and the normal bend scheme. For the wiggler bend scheme, we propose to operate the booster ring as a large wiggler at low energy and as a normal ring at high energy, to avoid the problem of very low dipole magnet fields. For the normal bend scheme, we implement an orbit correction to compensate for the Earth's magnetic field.

  4. Improving efficacy of metastatic tumor segmentation to facilitate early prediction of ovarian cancer patients' response to chemotherapy

    NASA Astrophysics Data System (ADS)

    Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille C.; Moxley, Katherine M.; Moore, Kathleen; Mannel, Robert S.; Cheng, Samuel; Liu, Hong; Zheng, Bin; Qiu, Yuchen

    2017-02-01

    Accurate tumor segmentation is a critical step in the development of computer-aided detection (CAD) based quantitative image analysis schemes for early-stage prognostic evaluation of ovarian cancer patients. The purpose of this investigation is to assess the efficacy of several different methods for segmenting the metastatic tumors occurring in different organs of ovarian cancer patients. In this study, we developed a segmentation scheme consisting of eight different algorithms, which can be divided into three groups: 1) region growth based methods; 2) Canny operator based methods; and 3) partial differential equation (PDE) based methods. A total of 138 tumors acquired from 30 ovarian cancer patients were used to test the performance of these eight segmentation algorithms. The results demonstrate that each of the tested tumors can be successfully segmented by at least one of the eight algorithms without manual boundary correction. Furthermore, the modified region growth, classical Canny detector, fast marching, and threshold level set algorithms are suggested for the future development of ovarian cancer related CAD schemes. This study may provide a meaningful reference for developing novel quantitative image feature analysis schemes to more accurately predict the response of ovarian cancer patients to chemotherapy at an early stage.
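
    As a concrete example of the first algorithm family, here is a minimal 4-connected region-growth sketch; the seed point and intensity tolerance are assumed inputs:

        from collections import deque
        import numpy as np

        def region_grow(image, seed, tol):
            # Grow a region around `seed` within +/- tol of the seed intensity.
            mask = np.zeros(image.shape, dtype=bool)
            ref = float(image[seed])
            queue = deque([seed])
            while queue:
                r, c = queue.popleft()
                if mask[r, c] or abs(float(image[r, c]) - ref) > tol:
                    continue
                mask[r, c] = True
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]:
                        queue.append((rr, cc))
            return mask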

  5. On basis set superposition error corrected stabilization energies for large n-body clusters.

    PubMed

    Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael

    2011-10-07

    In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. © 2011 American Institute of Physics
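
    For orientation, the textbook dimer counterpoise correction that these schemes generalize to n-body clusters evaluates every fragment in the full cluster basis:

        \Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}(\chi_{AB}) - E_{A}(\chi_{AB}) - E_{B}(\chi_{AB})

    where \chi_{AB} denotes the union of both monomer basis sets; the site-site and Valiron-Mayer function counterpoise schemes extend this construction to clusters with many subunits, which is what the approximations proposed above aim to make affordable.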

  6. Comments on baseline correction of digital strong-motion data: Examples from the 1999 Hector Mine, California, earthquake

    USGS Publications Warehouse

    Boore, D.M.; Stephens, C.D.; Joyner, W.B.

    2002-01-01

    Residual displacements for large earthquakes can sometimes be determined from recordings on modern digital instruments, but baseline offsets of unknown origin make it difficult in many cases to do so. To recover the residual displacement, we suggest tailoring a correction scheme by studying the character of the velocity obtained by integration of the zeroth-order-corrected acceleration and then checking whether the residual displacements are stable when the various parameters of the particular correction scheme are varied. For many seismological and engineering purposes, however, the residual displacements are of lesser importance than ground motions at periods less than about 20 sec. These ground motions are often recoverable with simple baseline correction and low-cut filtering. In this largely empirical study, we illustrate the consequences of various correction schemes, drawing primarily from digital recordings of the 1999 Hector Mine, California, earthquake. We show that with simple processing the displacement waveforms for this event are very similar for stations separated by as much as 20 km. We also show that a strong pulse on the transverse component was radiated from the Hector Mine earthquake and propagated with little distortion to distances exceeding 170 km; this pulse leads to large response spectral amplitudes around 10 sec.
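
    In that spirit, a bare-bones version of such a correction scheme (the window choices are illustrative parameters, exactly the kind whose variation should be tested for stability):

        import numpy as np

        def baseline_correct(acc, dt, pre_event_s=10.0, fit_from_s=60.0):
            # Zeroth-order correction: remove the pre-event mean.
            acc = acc - acc[: int(pre_event_s / dt)].mean()
            vel = np.cumsum(acc) * dt                 # integrate to velocity
            t = np.arange(acc.size) * dt
            # A linear trend in late velocity implies a constant offset in
            # acceleration after the shaking; remove it from that point on.
            slope, _ = np.polyfit(t[t >= fit_from_s], vel[t >= fit_from_s], 1)
            out = acc.copy()
            out[t >= fit_from_s] -= slope
            return out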

  7. Neural network decoder for quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Krastanov, Stefan; Jiang, Liang

    Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.

  8. Dissipative quantum error correction and application to quantum sensing with trapped ions.

    PubMed

    Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A

    2017-11-28

    Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  9. Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation

    NASA Astrophysics Data System (ADS)

    Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.

    2010-01-01

    To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer, which is exponentially proportional to the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose the non-binary-LDPC-coded modulation scheme because, compared to a bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, it lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.

  10. Multidimensional FEM-FCT schemes for arbitrary time stepping

    NASA Astrophysics Data System (ADS)

    Kuzmin, D.; Möller, M.; Turek, S.

    2003-05-01

    The flux-corrected-transport paradigm is generalized to finite-element schemes based on arbitrary time stepping. A conservative flux decomposition procedure is proposed for both convective and diffusive terms. Mathematical properties of positivity-preserving schemes are reviewed. A nonoscillatory low-order method is constructed by elimination of negative off-diagonal entries of the discrete transport operator. The linearization of source terms and extension to hyperbolic systems are discussed. Zalesak's multidimensional limiter is employed to switch between linear discretizations of high and low order. A rigorous proof of positivity is provided. The treatment of non-linearities and iterative solution of linear systems are addressed. The performance of the new algorithm is illustrated by numerical examples for the shock tube problem in one dimension and scalar transport equations in two dimensions.
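
    The paradigm is easiest to see in its classic finite-difference form; the sketch below performs one explicit FCT step for 1-D linear advection on a periodic grid (upwind low order, Lax-Wendroff high order, Zalesak limiter), a deliberately simpler setting than the paper's finite-element generalization:

        import numpy as np

        def fct_step(u, c):                       # c = v*dt/dx in (0, 1), v > 0
            up = np.roll(u, -1)                   # u[i+1]
            f_lo = c * u                          # upwind flux at face i+1/2
            f_hi = f_lo + 0.5 * c * (1.0 - c) * (up - u)   # Lax-Wendroff flux
            a = f_hi - f_lo                       # antidiffusive face fluxes
            u_td = u - (f_lo - np.roll(f_lo, 1))  # low-order transported solution
            # Zalesak limiter: keep u within local bounds of the low-order field.
            u_max = np.maximum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
            u_min = np.minimum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
            p_in = np.maximum(0.0, np.roll(a, 1)) - np.minimum(0.0, a)
            p_out = np.maximum(0.0, a) - np.minimum(0.0, np.roll(a, 1))
            r_in = np.where(p_in > 0, np.minimum(1.0, (u_max - u_td) / (p_in + 1e-30)), 0.0)
            r_out = np.where(p_out > 0, np.minimum(1.0, (u_td - u_min) / (p_out + 1e-30)), 0.0)
            cf = np.where(a >= 0, np.minimum(np.roll(r_in, -1), r_out),
                          np.minimum(r_in, np.roll(r_out, -1)))
            a = cf * a
            return u_td - (a - np.roll(a, 1))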

  11. Effects of upstream-biased third-order space correction terms on multidimensional Crowley advection schemes

    NASA Technical Reports Server (NTRS)

    Schlesinger, R. E.

    1985-01-01

    The impact of upstream-biased corrections for third-order spatial truncation error on the stability and phase error of the two-dimensional Crowley combined advective scheme with the cross-space term included is analyzed, putting primary emphasis on phase error reduction. The various versions of the Crowley scheme are formally defined, and their stability and phase error characteristics are intercompared using a linear Fourier component analysis patterned after Fromm (1968, 1969). The performances of the schemes under prototype simulation conditions are tested using time-dependent numerical experiments which advect an initially cone-shaped passive scalar distribution in each of three steady nondivergent flows. One such flow is solid rotation, while the other two are diagonal uniform flow and a strongly deformational vortex.

  12. Analysis and design of algorithm-based fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. Sukumaran

    1990-01-01

    An important consideration in the design of high performance multiprocessor systems is to ensure the correctness of the results computed in the presence of transient and intermittent failures. Concurrent error detection and correction have been applied to such systems in order to achieve reliability. Algorithm Based Fault Tolerance (ABFT) was suggested as a cost-effective concurrent error detection scheme. The research was motivated by the complexity involved in the analysis and design of ABFT systems. To that end, a matrix-based model was developed and, based on that, algorithms for both the design and analysis of ABFT systems are formulated. These algorithms are less complex than the existing ones. In order to reduce the complexity further, a hierarchical approach is developed for the analysis of large systems.
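
    The canonical ABFT example is checksum-encoded matrix multiplication, where a single corrupted entry of the product can be located and corrected from the row/column checksum residuals; a minimal sketch:

        import numpy as np

        def abft_matmul(A, B):
            # Append a column-checksum row to A and a row-checksum column to B.
            Ac = np.vstack([A, A.sum(axis=0)])
            Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
            return Ac @ Br                    # full checksum-encoded product

        def check_and_correct(Cf, tol=1e-8):
            # Residuals of data rows/columns against the stored checksums.
            row_err = Cf[:-1, :-1].sum(axis=1) - Cf[:-1, -1]
            col_err = Cf[:-1, :-1].sum(axis=0) - Cf[-1, :-1]
            rows = np.where(np.abs(row_err) > tol)[0]
            cols = np.where(np.abs(col_err) > tol)[0]
            if rows.size == 1 and cols.size == 1:     # single-error correction
                Cf[rows[0], cols[0]] -= row_err[rows[0]]
            return Cf[:-1, :-1]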

  13. Forward and correctional OFDM-based visible light positioning

    NASA Astrophysics Data System (ADS)

    Li, Wei; Huang, Zhitong; Zhao, Runmei; He, Peixuan; Ji, Yuefeng

    2017-09-01

    Visible light positioning (VLP) has attracted much attention in both academic and industrial areas due to the extensive deployment of light-emitting diodes (LEDs) as next-generation green lighting. Generally, the coverage of a single LED lamp is limited, so LED arrays are often utilized to achieve uniform illumination within large-scale indoor environments. However, in such dense LED deployment scenarios, the superposition of the light signals becomes an important challenge for accurate VLP. To solve this problem, we propose a forward and correctional orthogonal frequency division multiplexing (OFDM)-based VLP (FCO-VLP) scheme with low complexity in the generation and processing of signals. In the first, forward procedure of FCO-VLP, an initial position is obtained by the trilateration method based on OFDM subcarriers. The positioning accuracy is further improved in the second, correctional procedure based on a database of reference points. As demonstrated in our experiments, our approach yields an improved average positioning error of 4.65 cm, enhancing positioning accuracy by 24.2% compared with the trilateration method.
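
    The forward step rests on ordinary trilateration; linearizing the range equations by subtracting the first one gives a small least-squares problem (anchor coordinates and ranges are the assumed inputs):

        import numpy as np

        def trilaterate(anchors, d):
            # anchors: (n, 2) LED positions; d: (n,) estimated ranges, n >= 3.
            x0, y0 = anchors[0]
            A = 2.0 * (anchors[1:] - anchors[0])
            b = (d[0] ** 2 - d[1:] ** 2
                 + np.sum(anchors[1:] ** 2, axis=1) - x0 ** 2 - y0 ** 2)
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos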

  14. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    PubMed Central

    Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Aarsvold, John N.; Raghunath, Nivedita; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.; Votaw, John R.

    2012-01-01

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study develops quantification tools, including MR-based AC, for combined MR/PET brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR- and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET. PMID:23039679
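
    The AC step itself reduces to assigning 511 keV attenuation coefficients to the classified tissues and exponentiating the line integrals; in this sketch the mu values are typical literature numbers (not the paper's calibration) and the line of response is taken along one image axis for simplicity:

        import numpy as np

        MU_511 = {"air": 0.0, "csf": 0.096, "gray": 0.099,
                  "white": 0.097, "bone": 0.151}          # cm^-1 at 511 keV

        def acf_map(label_img, labels, voxel_cm):
            # label_img: integer tissue classes; labels: {code: tissue name}.
            mu = np.zeros(label_img.shape)
            for code, name in labels.items():
                mu[label_img == code] = MU_511[name]
            path = mu.sum(axis=0) * voxel_cm              # line integral of mu
            return np.exp(path)                           # ACF per LOR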

  15. Numerical experiments on the accuracy of ENO and modified ENO schemes

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1990-01-01

    Further numerical experiments are made to assess an accuracy degeneracy phenomenon. A modified essentially non-oscillatory (ENO) scheme is proposed, which recovers the correct order of accuracy for all the test problems with smooth initial conditions and gives results comparable to the original ENO schemes for discontinuous problems.

  16. Efficient quantum pseudorandomness with simple graph states

    NASA Astrophysics Data System (ADS)

    Mezher, Rawad; Ghalbouni, Joe; Dgheim, Joseph; Markham, Damian

    2018-02-01

    Measurement-based (MB) quantum computation allows for universal quantum computing by measuring individual qubits prepared in entangled multipartite states, known as graph states. Unless corrected for, the randomness of the measurements leads to the generation of ensembles of random unitaries, where each random unitary is identified with a string of possible measurement results. We show that repeating an MB scheme an efficient number of times, on a simple graph state, with measurements at fixed angles and no feedforward corrections, produces a random unitary ensemble that is an ε-approximate t-design on n qubits. Unlike previous constructions, the graph is regular and is also a universal resource for measurement-based quantum computing, closely related to the brickwork state.

  17. GIFTS SM EDU Data Processing and Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Johnson, David G.; Reisse, Robert A.; Gazarik, Michael J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration stage. The calibration procedures can be subdivided into three stages. In the pre-calibration stage, a phase correction algorithm is applied to the decimated and filtered complex interferogram. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected blackbody reference spectra. In the radiometric calibration stage, we first compute the spectral responsivity based on the previous results, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. During the post-processing stage, we estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. We then implement a correction scheme that compensates for the effect of fore-optics offsets. Finally, for off-axis pixels, the FPA off-axis effects correction is performed. To estimate the performance of the entire FPA, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation.
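
    The radiometric stage follows the standard two-point blackbody calibration; a generic sketch of that step (symbols are generic placeholders, not the GIFTS pipeline's variable names):

        import numpy as np

        def planck(nu, T):
            # Planck radiance: nu in cm^-1, T in K, mW/(m^2 sr cm^-1).
            c1, c2 = 1.191042e-5, 1.4387752
            return c1 * nu ** 3 / (np.exp(c2 * nu / T) - 1.0)

        def calibrate(S_scene, S_abb, S_hbb, nu, T_abb, T_hbb):
            B_a, B_h = planck(nu, T_abb), planck(nu, T_hbb)
            resp = (S_hbb - S_abb) / (B_h - B_a)     # spectral responsivity
            return (S_scene - S_abb) / resp + B_a    # calibrated scene radiance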

  18. An improved method to detect correct protein folds using partial clustering.

    PubMed

    Zhou, Jianjun; Wishart, David S

    2013-01-16

    Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance.
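
    The partial-clustering loop itself is short: pick a pivot decoy, collect its neighbours within a cutoff, emit a representative, discard the group, and repeat. The sketch below assumes a user-supplied distance function (e.g. RMSD) and is a simplification of HS-Forest, not the published algorithm:

        import random

        def partial_cluster(decoys, distance, cutoff):
            decoys = list(decoys)
            reps = []
            while decoys:
                pivot = random.choice(decoys)
                near = [d for d in decoys if distance(pivot, d) <= cutoff]
                reps.append((pivot, len(near)))      # neighbour count ~ density
                decoys = [d for d in decoys if distance(pivot, d) > cutoff]
            return sorted(reps, key=lambda t: -t[1])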

  19. An improved method to detect correct protein folds using partial clustering

    PubMed Central

    2013-01-01

    Background Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient “partial“ clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. Results We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. Conclusions The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance. PMID:23323835

  20. Mass-corrections for the conservative coupling of flow and transport on collocated meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waluga, Christian, E-mail: waluga@ma.tum.de; Wohlmuth, Barbara; Rüde, Ulrich

    2016-01-15

    Buoyancy-driven flow models demand a careful treatment of the mass-balance equation to avoid spurious source and sink terms in the non-linear coupling between flow and transport. In the context of finite-elements, it is therefore commonly proposed to employ sufficiently rich pressure spaces, containing piecewise constant shape functions to obtain local or even strong mass-conservation. In three-dimensional computations, this usually requires nonconforming approaches, special meshes or higher order velocities, which make these schemes prohibitively expensive for some applications and complicate the implementation into legacy code. In this paper, we therefore propose a lean and conservatively coupled scheme based on standard stabilized linear equal-order finite elements for the Stokes part and vertex-centered finite volumes for the energy equation. We show that in a weak mass-balance it is possible to recover exact conservation properties by a local flux-correction which can be computed efficiently on the control volume boundaries of the transport mesh. We discuss implementation aspects and demonstrate the effectiveness of the flux-correction by different two- and three-dimensional examples which are motivated by geophysical applications.

  1. High-order conservative finite difference GLM-MHD schemes for cell-centered MHD

    NASA Astrophysics Data System (ADS)

    Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi

    2010-08-01

    We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes as well as slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to existing approaches, it completely avoids the CPU intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
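
    For readers unfamiliar with the GLM correction of Dedner et al. cited above: in operator-split form, the parabolic part of the mixed hyperbolic/parabolic cleaning reduces to an exact exponential damping of the Lagrange multiplier ψ. A minimal sketch (the hyperbolic transport of ψ is handled by the Riemann solver and is not shown; `ch` and `cp` are the cleaning speeds):

    ```python
    import numpy as np

    def glm_damping_step(psi, dt, ch, cp):
        """Operator-split parabolic part of GLM divergence cleaning:
        d(psi)/dt = -(ch**2 / cp**2) * psi, integrated exactly over dt.
        The hyperbolic part (transport of psi with div(B)) is treated
        elsewhere, inside the Riemann solver."""
        return psi * np.exp(-dt * ch**2 / cp**2)
    ```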

  2. Computer-Aided Diagnostic (CAD) Scheme by Use of Contralateral Subtraction Technique

    NASA Astrophysics Data System (ADS)

    Nagashima, Hiroyuki; Harakawa, Tetsumi

    We developed a computer-aided diagnostic (CAD) scheme for the detection of subtle image findings of acute cerebral infarction in brain computed tomography (CT) using a contralateral subtraction technique. In our computerized scheme, the lateral inclination of the image was first corrected automatically by rotating and shifting. The contralateral subtraction image was then derived by subtracting the reversed image from the original image. Initial candidates for acute cerebral infarction were identified using multiple-thresholding and image-filtering techniques. In the first step of false-positive removal, fourteen image features were extracted for each initial candidate, and halfway candidates were detected by applying a rule-based test to these features. In the second step, five image features were extracted using the overlapping scale between halfway candidates in the slice of interest and the upper/lower slice images. Finally, acute cerebral infarction candidates were detected by applying a rule-based test to these five features. The sensitivity of detection for 74 training cases was 97.4% with 3.7 false positives per image, and the performance of the CAD scheme on 44 testing cases was comparable to that on the training cases. Our CAD scheme using the contralateral subtraction technique can reveal suspected image findings of acute cerebral infarction in CT images.
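
    As a toy illustration of the core contralateral-subtraction step described above (not the authors' code), the following numpy sketch mirrors a midline-aligned CT slice left-right, subtracts it, and applies a simple intensity band as the multiple-thresholding stage; the band limits are placeholders:

    ```python
    import numpy as np

    def contralateral_subtraction(ct_slice):
        """Subtract the left-right mirrored slice from the original
        (assumes the midline was already aligned by the rotation/shift
        step). Asymmetric hypodense regions appear as signed differences."""
        mirrored = ct_slice[:, ::-1]
        return ct_slice.astype(np.int32) - mirrored.astype(np.int32)

    def initial_candidates(diff, low=5, high=40):
        """Toy multiple-thresholding stage: keep pixels whose
        contralateral difference falls in an infarction-like band."""
        return (diff >= low) & (diff <= high)
    ```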

  3. Precision calculations for h → WW/ZZ → 4 fermions in the Two-Higgs-Doublet Model with Prophecy4f

    NASA Astrophysics Data System (ADS)

    Altenkamp, Lukas; Dittmaier, Stefan; Rzehak, Heidi

    2018-03-01

    We have calculated the next-to-leading-order electroweak and QCD corrections to the decay processes h → WW/ZZ → 4 fermions of the light CP-even Higgs boson h of various types of Two-Higgs-Doublet Models (Types I and II, "lepton-specific" and "flipped" models). The input parameters are defined in four different renormalization schemes, where parameters that are not directly accessible by experiments are defined in the MS-bar scheme. Numerical results are presented for the corrections to partial decay widths for various benchmark scenarios previously motivated in the literature, where we investigate the dependence on the MS-bar renormalization scale and on the choice of the renormalization scheme in detail. We find that it is crucial to be precise with these issues in parameter analyses, since parameter conversions between different schemes can involve sizeable or large corrections, especially in scenarios that are close to experimental exclusion limits or theoretical bounds. It even turns out that some renormalization schemes are not applicable in specific regions of parameter space. Our investigation of differential distributions shows that corrections beyond the Standard Model are mostly constant offsets induced by the mixing between the light and heavy CP-even Higgs bosons, so that differential analyses of h → 4f decay observables do not help to identify Two-Higgs-Doublet Models. Moreover, the decay widths do not significantly depend on the specific type of those models. The calculations are implemented in the public Monte Carlo generator Prophecy4f and ready for application.

  4. Atmospheric correction of short-wave hyperspectral imagery using a fast, full-scattering 1DVar retrieval scheme

    NASA Astrophysics Data System (ADS)

    Thelen, J.-C.; Havemann, S.; Taylor, J. P.

    2012-06-01

    Here, we present a new prototype algorithm for the simultaneous retrieval of atmospheric profiles (temperature, humidity, ozone and aerosol) and surface reflectance from hyperspectral radiance measurements obtained from air- or space-borne hyperspectral imagers such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) or Hyperion on board Earth Observing-1. The new scheme consists of a fast radiative transfer code, based on empirical orthogonal functions (EOFs), in conjunction with a 1D-Var retrieval scheme. The inclusion of an 'exact' scattering code based on spherical harmonics allows for an accurate treatment of Rayleigh scattering and scattering by aerosols, water droplets and ice crystals, thus making it possible to also retrieve cloud and aerosol optical properties, although here we concentrate on non-cloudy scenes. We successfully tested this new approach using two hyperspectral images taken by AVIRIS, a whiskbroom imaging spectrometer operated by the NASA Jet Propulsion Laboratory.
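
    The abstract does not give the 1D-Var formulation explicitly, but the standard cost function minimized by such schemes is J(x) = ½(x − x_b)ᵀB⁻¹(x − x_b) + ½(y − H(x))ᵀR⁻¹(y − H(x)), with background state x_b, error covariances B and R, and forward operator H (here, the fast EOF-based radiative transfer code). A generic sketch:

    ```python
    import numpy as np

    def onedvar_cost(x, xb, B_inv, y, H, R_inv):
        """Standard 1D-Var cost: background term plus observation term.
        H is any callable mapping atmospheric state -> simulated
        radiances; B_inv and R_inv are inverse error covariances."""
        dxb = x - xb
        dy = y - H(x)
        return 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy
    ```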

  5. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

    This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with multilevel preconditioner, for finding several eigenvalues for generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. Presented results of numerical experiments confirm the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance, as compared to methods which rely on exact linear solves, indicating tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.

  6. Function-Space-Based Solution Scheme for the Size-Modified Poisson-Boltzmann Equation in Full-Potential DFT.

    PubMed

    Ringe, Stefan; Oberhofer, Harald; Hille, Christoph; Matera, Sebastian; Reuter, Karsten

    2016-08-09

    The size-modified Poisson-Boltzmann (MPB) equation is an efficient implicit solvation model which also captures electrolytic solvent effects. It combines an account of the dielectric solvent response with a mean-field description of solvated finite-sized ions. We present a general solution scheme for the MPB equation based on a fast function-space-oriented Newton method and a Green's function preconditioned iterative linear solver. In contrast to popular multigrid solvers, this approach allows us to fully exploit specialized integration grids and optimized integration schemes. We describe a corresponding numerically efficient implementation for the full-potential density-functional theory (DFT) code FHI-aims. We show that together with an additional Stern layer correction the DFT+MPB approach can describe the mean activity coefficient of a KCl aqueous solution over a wide range of concentrations. The high sensitivity of the calculated activity coefficient to the employed ionic parameters suggests using extensively tabulated experimental activity coefficients of salt solutions for a systematic parametrization protocol.

  7. Positive-negative corresponding normalized ghost imaging based on an adaptive threshold

    NASA Astrophysics Data System (ADS)

    Li, G. L.; Zhao, Y.; Yang, Z. H.; Liu, X.

    2016-11-01

    Ghost imaging (GI) has attracted increasing attention as a new imaging technique in recent years. However, the signal-to-noise ratio (SNR) of GI with pseudo-thermal light needs to be improved before it meets engineering application demands. We therefore propose a new scheme, positive-negative correspondence normalized GI based on an adaptive threshold (PCNGI-AT), to achieve good performance with a smaller amount of data. The scheme exploits the advantages of both normalized GI (NGI) and positive-negative correspondence GI (P-NCGI). After proving the correctness and feasibility of the scheme in theory, we designed an adaptive threshold-selection method in which the object-signal selection condition is parameterized by the normalizing value. The simulation and experimental results reveal that the SNR of the proposed scheme is better than that of time-correspondence differential GI (TCDGI), while avoiding the calculation of the correlation matrix and reducing the amount of data used. The proposed method will make GI far more practical in engineering applications.

  8. Mutual-information-based image to patient re-registration using intraoperative ultrasound in image-guided neurosurgery

    PubMed Central

    Ji, Songbai; Wu, Ziji; Hartov, Alex; Roberts, David W.; Paulsen, Keith D.

    2008-01-01

    An image-based re-registration scheme has been developed and evaluated that uses fiducial registration as a starting point to maximize the normalized mutual information (nMI) between intraoperative ultrasound (iUS) and preoperative magnetic resonance images (pMR). We show that this scheme significantly (p ≪ 0.001) reduces tumor boundary misalignment between iUS pre-durotomy and pMR from an average of 2.5 mm to 1.0 mm in six resection surgeries. The corrected tumor alignment before dural opening provides a more accurate reference for assessing subsequent intraoperative tumor displacement, which is important for brain shift compensation as surgery progresses. In addition, we report the translational and rotational capture ranges necessary for successful convergence of the nMI registration technique (5.9 mm and 5.2 deg, respectively). The proposed scheme is automatic, sufficiently robust, and computationally efficient (<2 min), and holds promise for routine clinical use in the operating room during image-guided neurosurgical procedures. PMID:18975707
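
    For reference, the normalized mutual information maximized by such registration schemes is commonly defined as nMI(A, B) = (H(A) + H(B)) / H(A, B), estimated from a joint intensity histogram. A minimal numpy sketch (the paper's exact estimator and bin choices may differ):

    ```python
    import numpy as np

    def normalized_mutual_information(a, b, bins=64):
        """nMI(A, B) = (H(A) + H(B)) / H(A, B) from a joint intensity
        histogram of two already-resampled, co-located images."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1)
        py = pxy.sum(axis=0)

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
    ```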

  9. A meta-GGA level screened range-separated hybrid functional by employing short range Hartree-Fock with a long range semilocal functional.

    PubMed

    Jana, Subrata; Samal, Prasanjit

    2018-03-28

    The range-separated hybrid density functionals are very successful in describing a wide range of molecular and solid-state properties accurately. In principle, such functionals are designed from spherically averaged or system-averaged as well as reverse-engineered exchange holes. In the present work, the screened range-separated hybrid functional scheme is applied at the meta-GGA rung by using the density-matrix-expansion-based semilocal exchange hole (or functional). The hybrid functional proposed here utilizes the spherically averaged density-matrix-expansion-based exchange hole in the range-separation scheme. For the slowly varying density correction, the range-separation scheme is employed only through the local density approximation based exchange hole coupled with the corresponding fourth-order gradient-approximate Tao-Mo enhancement factor. The comprehensive testing and performance of the newly constructed functional indicates its applicability in describing several molecular properties. The most appealing feature of the present screened hybrid functional is that it will be practically very useful in describing solid-state properties at the meta-GGA level.

  10. An Adaptive Monitoring Scheme for Automatic Control of Anaesthesia in dynamic surgical environments based on Bispectral Index and Blood Pressure.

    PubMed

    Yu, Yu-Ning; Doctor, Faiyaz; Fan, Shou-Zen; Shieh, Jiann-Shing

    2018-04-13

    During surgical procedures, the bispectral index (BIS) is a well-known measure used to determine the patient's depth of anesthesia (DOA). However, BIS readings can be subject to interference from many factors during surgery, and other parameters such as blood pressure (BP) and heart rate (HR) can provide more stable indicators. Nevertheless, anesthesiologists still consider BIS the primary measure for determining whether the patient is correctly anaesthetized, while relying on the other physiological parameters to monitor and ensure that the patient's status is maintained. The automatic control of anesthesia administration using intelligent control systems has been the subject of recent research, aiming to alleviate the burden on the anesthetist of manually adjusting drug dosage in response to physiological changes so as to sustain DOA. A system proposed for the automatic control of anesthesia based on type-2 Self-Organizing Fuzzy Logic Controllers (T2-SOFLCs) has been shown to be effective in the control of DOA under simulated scenarios while contending with uncertainties due to signal noise and dynamic changes in the pharmacodynamic (PD) and pharmacokinetic (PK) effects of the drug on the body. This study considers both BIS and BP as part of an adaptive automatic control scheme, which can adjust to the monitoring of either parameter in response to changes in the availability and reliability of BIS signals during surgery. Simulations of different control schemes, using BIS data obtained during real surgical procedures to emulate noise and interference factors, have been conducted. The use of either or both combined parameters for controlling the delivery of propofol to maintain safe target set points for DOA was evaluated. The results show that combining BIS and BP in the proposed adaptive control scheme can ensure that the target set points and the correct amount of drug in the body are maintained even with the intermittent loss of the BIS signal, which could otherwise disrupt an automated control system.

  11. Investigation of Convection and Pressure Treatment with Splitting Techniques

    NASA Technical Reports Server (NTRS)

    Thakur, Siddharth; Shyy, Wei; Liou, Meng-Sing

    1995-01-01

    Treatment of convective and pressure fluxes in the Euler and Navier-Stokes equations using splitting formulas for convective velocity and pressure is investigated. Two schemes - controlled variation scheme (CVS) and advection upstream splitting method (AUSM) - are explored for their accuracy in resolving sharp gradients in flows involving moving or reflecting shock waves as well as a one-dimensional combusting flow with a strong heat release source term. For two-dimensional compressible flow computations, these two schemes are implemented in one of the pressure-based algorithms, whose very basis is the separate treatment of convective and pressure fluxes. For the convective fluxes in the momentum equations as well as the estimation of mass fluxes in the pressure correction equation (which is derived from the momentum and continuity equations) of the present algorithm, both first- and second-order (with minmod limiter) flux estimations are employed. Some issues resulting from the conventional use of a staggered grid in pressure-based methods for the location of velocity components and pressure are also addressed. Using the second-order fluxes, both CVS and AUSM type schemes exhibit sharp resolution. Overall, the combination of upwinding and splitting for the convective and pressure fluxes separately exhibits robust performance for a variety of flows and is particularly amenable for adoption in pressure-based methods.

  12. Multi-photon self-error-correction hyperentanglement distribution over arbitrary collective-noise channels

    NASA Astrophysics Data System (ADS)

    Gao, Cheng-Yan; Wang, Guan-Yu; Zhang, Hao; Deng, Fu-Guo

    2017-01-01

    We present a self-error-correction spatial-polarization hyperentanglement distribution scheme for N-photon systems in a hyperentangled Greenberger-Horne-Zeilinger state over arbitrary collective-noise channels. In our scheme, the errors of spatial entanglement can be first averted by encoding the spatial-polarization hyperentanglement into the time-bin entanglement with identical polarization and defined spatial modes before it is transmitted over the fiber channels. After transmission over the noisy channels, the polarization errors introduced by the depolarizing noise can be corrected resorting to the time-bin entanglement. Finally, the parties in quantum communication can in principle share maximally hyperentangled states with a success probability of 100%.

  13. Numerical solution of modified differential equations based on symmetry preservation.

    PubMed

    Ozbenli, Ersin; Vedula, Prakash

    2017-12-01

    In this paper, we propose a method to construct invariant finite-difference schemes for solution of partial differential equations (PDEs) via consideration of modified forms of the underlying PDEs. The invariant schemes, which preserve Lie symmetries, are obtained based on the method of equivariant moving frames. While it is often difficult to construct invariant numerical schemes for PDEs due to complicated symmetry groups associated with cumbersome discrete variable transformations, we note that symmetries associated with more convenient transformations can often be obtained by appropriately modifying the original PDEs. In some cases, modifications to the original PDEs are also found to be useful in order to avoid trivial solutions that might arise from particular selections of moving frames. In our proposed method, modified forms of PDEs can be obtained either by addition of perturbation terms to the original PDEs or through defect correction procedures. These additional terms, whose primary purpose is to enable symmetries with more convenient transformations, are then removed from the system by considering moving frames for which these specific terms go to zero. Further, we explore selection of appropriate moving frames that result in improvement in accuracy of invariant numerical schemes based on modified PDEs. The proposed method is tested using the linear advection equation (in one and two dimensions) and the inviscid Burgers' equation. Results obtained for these test cases indicate that numerical schemes derived from the proposed method perform significantly better than existing schemes not only by virtue of improvement in numerical accuracy but also due to preservation of qualitative properties or symmetries of the underlying differential equations.

  14. Global Precipitation Estimates from Cross-Track Passive Microwave Observations Using a Physically-Based Retrieval Scheme

    NASA Technical Reports Server (NTRS)

    Kidd, Chris; Matsui, Toshi; Chern, Jiundar; Mohr, Karen; Kummerow, Christian; Randel, Dave

    2015-01-01

    The estimation of precipitation across the globe from satellite sensors provides a key resource in the observation and understanding of our climate system. Estimates from all pertinent satellite observations are critical in providing the necessary temporal sampling. However, consistency in these estimates from instruments with different frequencies and resolutions is critical. This paper details the physically based retrieval scheme to estimate precipitation from cross-track (XT) passive microwave (PM) sensors on board the constellation satellites of the Global Precipitation Measurement (GPM) mission. Here the Goddard profiling algorithm (GPROF), a physically based Bayesian scheme developed for conically scanning (CS) sensors, is adapted for use with XT PM sensors. The present XT GPROF scheme utilizes a model-generated database to overcome issues encountered with an observational database as used by the CS scheme. The model database ensures greater consistency across meteorological regimes and surface types by providing a more comprehensive set of precipitation profiles. The database is corrected for bias against the CS database to ensure consistency in the final product. Statistical comparisons over western Europe and the United States show that the XT GPROF estimates are comparable with those from the CS scheme. Indeed, the XT estimates have higher correlations against surface radar data, while maintaining similar root-mean-square errors. Latitudinal profiles of precipitation show the XT estimates are generally comparable with the CS estimates, although in the southern midlatitudes the peak precipitation is shifted equatorward while over the Arctic large differences are seen between the XT and the CS retrievals.
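
    Schematically, a GPROF-like Bayesian retrieval weights every database profile by how well its simulated brightness temperatures match the observation and returns the weighted-mean precipitation. The sketch below assumes a diagonal observation-error covariance with a single sigma, which is a simplification of the operational scheme:

    ```python
    import numpy as np

    def bayesian_retrieval(tb_obs, tb_db, precip_db, sigma):
        """Weight each database profile by the Gaussian fit between the
        observed brightness temperatures tb_obs (n_channels,) and the
        database ones tb_db (n_profiles, n_channels); return the
        weighted mean of precip_db (n_profiles,)."""
        d2 = np.sum((tb_db - tb_obs) ** 2, axis=1)
        w = np.exp(-0.5 * (d2 - d2.min()) / sigma**2)  # shifted for stability
        return np.sum(w * precip_db) / np.sum(w)
    ```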

  15. A new scheme for perturbative triples correction to (0,1) sector of Fock space multi-reference coupled cluster method: theory, implementation, and examples.

    PubMed

    Dutta, Achintya Kumar; Vaval, Nayana; Pal, Sourav

    2015-01-28

    We propose a new elegant strategy to implement the third-order triples correction, in the light of many-body perturbation theory, to the Fock space multi-reference coupled cluster method for the ionization problem. The computational scaling as well as the storage requirement is of key concern in any many-body calculation. Our proposed approach scales as N^6, does not require the storage of triples amplitudes, and gives superior agreement over all previous attempts. This approach is capable of calculating multiple roots in a single calculation, in contrast to the inclusion of perturbative triples in the equation-of-motion variant of coupled cluster theory, where each root needs to be computed in a state-specific way and requires both the left and right state vectors together. The performance of the newly implemented scheme is tested by applying it to methylene, the boron nitride (B2N) anion, nitrogen, water, carbon monoxide, acetylene, formaldehyde, and thymine monomer, a DNA base.

  16. All linear optical quantum memory based on quantum error correction.

    PubMed

    Gingrich, Robert M; Kok, Pieter; Lee, Hwang; Vatan, Farrokh; Dowling, Jonathan P

    2003-11-21

    When photons are sent through a fiber as part of a quantum communication protocol, the error that is most difficult to correct is photon loss. Here we propose and analyze a two-to-four qubit encoding scheme, which can recover the loss of one qubit in the transmission. This device acts as a repeater, when it is placed in series to cover a distance larger than the attenuation length of the fiber, and it acts as an optical quantum memory, when it is inserted in a fiber loop. We call this dual-purpose device a "quantum transponder."

  17. Topological Qubits from Valence Bond Solids

    NASA Astrophysics Data System (ADS)

    Wang, Dong-Sheng; Affleck, Ian; Raussendorf, Robert

    2018-05-01

    Topological qubits based on SU(N)-symmetric valence-bond solid models are constructed. A logical topological qubit is the ground subspace with twofold degeneracy, which is due to the spontaneous breaking of a global parity symmetry. A logical Z rotation by an angle 2π/N, for any integer N > 2, is provided by a global twist operation, which is of a topological nature and protected by the energy gap. A general concatenation scheme with standard quantum error-correction codes is also proposed, which can lead to better codes. Generic error-correction properties of symmetry-protected topological order are also demonstrated.

  18. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.
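
    The Lagrangian rate-distortion optimization mentioned above amounts to choosing, for each source packet, the operating point that minimizes D + λR. A minimal sketch (operating points and λ are illustrative; the paper optimizes jointly over source and channel coding rates):

    ```python
    def allocate(packet_options, lam):
        """For each packet, pick the (distortion, rate) operating point
        minimizing the Lagrangian cost D + lam * R. packet_options is a
        list (one entry per packet) of lists of (D, R, label) tuples,
        e.g. label = (source_rate, channel_code_rate)."""
        return [min(points, key=lambda p: p[0] + lam * p[1])
                for points in packet_options]
    ```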

  19. Crystal structure prediction supported by incomplete experimental data

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji

    2018-05-01

    We propose an efficient theoretical scheme for structure prediction based on the idea of optimizing against theoretical calculations and experimental data simultaneously. In this scheme, we formulate a cost function based on a weighted sum of interatomic potential energies and a penalty function defined with partial experimental data that would be totally insufficient for conventional structure analysis. In particular, we define the cost function using a "crystallinity" formulated with only the peak positions within a small range of the x-ray-diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited information on the diffraction peaks. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
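
    In schematic form, the cost function described above combines an interatomic potential energy with a crystallinity-based penalty. A minimal sketch with placeholder component functions and weights (the paper's exact weighting and crystallinity definition are more involved):

    ```python
    def cost(structure, w_energy, w_crys, energy_fn, crystallinity_fn):
        """Weighted-sum cost: interatomic potential energy plus a
        penalty that shrinks as the simulated diffraction peaks of
        `structure` approach the few observed peak positions.
        energy_fn and crystallinity_fn (in [0, 1], with 1 meaning a
        perfect peak match) are placeholder callables."""
        return (w_energy * energy_fn(structure)
                + w_crys * (1.0 - crystallinity_fn(structure)))
    ```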

  20. TH-EF-207A-03: Photon Counting Implementation Challenges Using An Electron Multiplying Charged-Coupled Device Based Micro-CT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podgorsak, A; Bednarek, D; Rudin, S

    2016-06-15

    Purpose: To successfully implement and operate a photon counting scheme on an electron multiplying charged-coupled device (EMCCD) based micro-CT system. Methods: We built an EMCCD based micro-CT system and implemented a photon counting scheme. EMCCD detectors use avalanche transfer registries to multiply the input signal far above the readout noise floor. Due to intrinsic differences in the pixel array, using a global threshold for photon counting is not optimal. To address this shortcoming, we generated a threshold array based on sixty dark fields (no x-ray exposure). We calculated an average matrix and a variance matrix of the dark field sequence. The average matrix was used for the offset correction while the variance matrix was used to set individual pixel thresholds for the photon counting scheme. Three hundred photon counting frames were added for each projection and 360 projections were acquired for each object. The system was used to scan various objects followed by reconstruction using an FDK algorithm. Results: Examination of the projection images and reconstructed slices of the objects indicated clear interior detail free of beam hardening artifacts. This suggests successful implementation of the photon counting scheme on our EMCCD based micro-CT system. Conclusion: This work indicates that it is possible to implement and operate a photon counting scheme on an EMCCD based micro-CT system, suggesting that these devices might be able to operate at very low x-ray exposures in a photon counting mode. Such devices could have future implications in clinical CT protocols. NIH Grant R01EB002873; Toshiba Medical Systems Corp.
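
    A simple way to realize the per-pixel thresholds described above is to derive an offset from the dark-field mean and a threshold from the dark-field spread; the k-sigma rule below is an assumed form standing in for the variance-matrix criterion of the abstract:

    ```python
    import numpy as np

    def pixel_thresholds(dark_frames, k=3.0):
        """Per-pixel offset and counting threshold from a stack of dark
        fields (shape: n_frames x H x W). Offset = pixel mean;
        threshold = mean + k * std (assumed k-sigma rule)."""
        mean = dark_frames.mean(axis=0)
        std = dark_frames.std(axis=0)
        return mean, mean + k * std

    def count_frame(frame, threshold):
        """One photon-counting frame: a pixel registers a count when it
        exceeds its own threshold. Many such frames (300 per projection
        in the abstract) are summed to form one projection image."""
        return (frame > threshold).astype(np.uint16)
    ```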

  1. Study on numerical simulation of asymmetric structure aluminum profile extrusion based on ALE method

    NASA Astrophysics Data System (ADS)

    Chen, Kun; Qu, Yuan; Ding, Siyi; Liu, Changhui; Yang, Fuyong

    2018-05-01

    Using the HyperXtrude module based on the Arbitrary Lagrangian-Eulerian (ALE) finite element method, this paper successfully simulates the steady extrusion process of an asymmetric-structure aluminum die. A verification experiment was carried out to confirm the simulation results. Analysis of the stress-strain field, temperature field and extrusion velocity of the metal confirms that the simulation predictions and the experimental results are consistent. Schemes for die correction and optimization are discussed last. By adjusting the bearing length and core thickness, and by adopting a feeder-plate protection structure, a short shunt bridge in the upper die and a three-level bonding container in the lower die to control the metal flow, a qualified aluminum profile can be obtained.

  2. Nonlinear BCJR equalizer for suppression of intrachannel nonlinearities in 40 Gb/s optical communications systems.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane

    2006-05-29

    Maximum a posteriori probability (MAP) symbol decoding supplemented with iterative decoding is proposed as an effective means for suppression of intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for 40 Gb/s upgrade over existing 10 Gb/s infrastructure.

  3. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-03-01

    A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse image recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.

  4. Theoretical oscillation frequencies for solar-type dwarfs from stellar models with 〈3D〉-atmospheres

    NASA Astrophysics Data System (ADS)

    Jørgensen, Andreas Christ Sølvsten; Weiss, Achim; Mosumgaard, Jakob Rørsted; Silva Aguirre, Victor; Sahlholdt, Christian Lundsgaard

    2017-12-01

    We present a new method for replacing the outermost layers of stellar models with interpolated atmospheres based on results from 3D simulations, in order to correct for structural inadequacies of these layers. This replacement is known as patching. Tests, based on 3D atmospheres from three different codes and interior models with different input physics, are performed. Using solar models, we investigate how different patching criteria affect the eigenfrequencies. These criteria include the depth, at which the replacement is performed, the quantity, on which the replacement is based, and the mismatch in Teff and log g between the un-patched model and patched 3D atmosphere. We find the eigenfrequencies to be unaltered by the patching depth deep within the adiabatic region, while changing the patching quantity or the employed atmosphere grid leads to frequency shifts that may exceed 1 μHz. Likewise, the eigenfrequencies are sensitive to mismatches in Teff or log g. A thorough investigation of the accuracy of a new scheme, for interpolating mean 3D stratifications within the atmosphere grids, is furthermore performed. Throughout large parts of the atmosphere grids, our interpolation scheme yields sufficiently accurate results for the purpose of asteroseismology. We apply our procedure in asteroseismic analyses of four Kepler stars and draw the same conclusions as in the solar case: Correcting for structural deficiencies lowers the eigenfrequencies, this correction is slightly sensitive to the patching criteria, and the remaining frequency discrepancy between models and observations is less frequency dependent. Our work shows the applicability and relevance of patching in asteroseismology.

  5. Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping

    NASA Astrophysics Data System (ADS)

    Piedrafita, Álvaro; Renes, Joseph M.

    2017-12-01

    We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.

  6. Data pre-processing: Stratospheric aerosol perturbing effect on the remote sensing of vegetation: Correction method for the composite NDVI after the Pinatubo eruption

    NASA Technical Reports Server (NTRS)

    Vermote, E.; Elsaleous, N.; Kaufman, Y. J.; Dutton, E.

    1994-01-01

    An operational stratospheric correction scheme used after the Mount Pinatubo (Philippines) eruption (June 1991) is presented. The stratospheric aerosol distribution is assumed to vary only with latitude. Every 9 days, the latitudinal distribution of the optical thickness is computed by inverting radiances observed in NOAA AVHRR channel 1 (0.63 micrometers) and channel 2 (0.83 micrometers) over the Pacific Ocean. This radiance data set is also used to check the validity of the model used for inversion, by checking the consistency of the optical thickness deduced from each channel as well as from different scattering angles. Using the optical thickness profile previously computed and a radiative transfer code assuming a Lambertian boundary condition, each pixel of channels 1 and 2 is corrected prior to computation of the NDVI (Normalized Difference Vegetation Index). Comparison of corrected and uncorrected NDVI composites with those from years prior to the Pinatubo eruption (1989 to 1990) shows the necessity and the accuracy of the operational correction scheme.
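
    Once channels 1 and 2 have been corrected pixel-wise for the stratospheric aerosol (the radiative-transfer inversion itself is not reproduced here), the composite index follows the usual definition; a trivial sketch:

    ```python
    def ndvi(ch1, ch2):
        """NDVI from aerosol-corrected AVHRR channel 1 (red, 0.63 um)
        and channel 2 (near-IR, 0.83 um) reflectances; the small
        constant guards against division by zero."""
        return (ch2 - ch1) / (ch2 + ch1 + 1e-12)
    ```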

  7. 0–0 Energies Using Hybrid Schemes: Benchmarks of TD-DFT, CIS(D), ADC(2), CC2, and BSE/GW formalisms for 80 Real-Life Compounds

    PubMed Central

    2015-01-01

    The 0–0 energies of 80 medium and large molecules have been computed with a large panel of theoretical formalisms. We have used an approach computationally tractable for large molecules, that is, the structural and vibrational parameters are obtained with TD-DFT, the solvent effects are accounted for with the PCM model, whereas the total and transition energies have been determined with TD-DFT and with five wave function approaches accounting for contributions from double excitations, namely, CIS(D), ADC(2), CC2, SCS-CC2, and SOS-CC2, as well as Green’s function based BSE/GW approach. Atomic basis sets including diffuse functions have been systematically applied, and several variations of the PCM have been evaluated. Using solvent corrections obtained with corrected linear-response approach, we found that three schemes, namely, ADC(2), CC2, and BSE/GW allow one to reach a mean absolute deviation smaller than 0.15 eV compared to the measurements, the two former yielding slightly better correlation with experiments than the latter. CIS(D), SCS-CC2, and SOS-CC2 provide significantly larger deviations, though the latter approach delivers highly consistent transition energies. In addition, we show that (i) ADC(2) and CC2 values are extremely close to each other but for systems absorbing at low energies; (ii) the linear-response PCM scheme tends to overestimate solvation effects; and that (iii) the average impact of nonequilibrium correction on 0–0 energies is negligible. PMID:26574326

  8. Design and evaluation of sparse quantization index modulation watermarking schemes

    NASA Astrophysics Data System (ADS)

    Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

    2008-08-01

    In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
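
    For concreteness, binary dither-modulation QIM embeds a bit by quantizing a (wavelet) coefficient onto one of two interleaved lattices and decodes by nearest-lattice distance; the step size below is illustrative:

    ```python
    import numpy as np

    def qim_embed(x, bit, delta=8.0):
        """Quantize coefficient x onto the lattice delta*Z shifted by
        bit * delta/2 (binary dither modulation)."""
        d = bit * delta / 2.0
        return delta * np.round((x - d) / delta) + d

    def qim_decode(y, delta=8.0):
        """Recover the bit whose shifted lattice lies closest to y."""
        dists = [abs(y - qim_embed(y, b, delta)) for b in (0, 1)]
        return int(np.argmin(dists))
    ```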

  9. A subtraction scheme for computing QCD jet cross sections at NNLO: integrating the subtraction terms I

    NASA Astrophysics Data System (ADS)

    Somogyi, Gábor; Trócsányi, Zoltán

    2008-08-01

    In previous articles we outlined a subtraction scheme for regularizing doubly-real emission and real-virtual emission in next-to-next-to-leading order (NNLO) calculations of jet cross sections in electron-positron annihilation. In order to find the NNLO correction these subtraction terms have to be integrated over the factorized unresolved phase space and combined with the two-loop corrections. In this paper we perform the integration of all one-parton unresolved subtraction terms.

  10. Use of corrected centrifugal sudden approximations for the calculation of effective cross sections. II. The N sub 2 --He system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thachuk, M.; McCourt, F.R.W.

    1991-09-15

    A series of centrifugal sudden (CS) and infinite-order sudden (IOS) approximations together with their corrected versions, respectively the corrected centrifugal sudden (CCS) and corrected infinite-order sudden (CIOS) approximations, originally introduced by McLenithan and Secrest (J. Chem. Phys. 80, 2480 (1987)), have been compared with the close-coupled (CC) method for the N2-He interaction. This extends previous work using the H2-He system (J. Chem. Phys. 93, 3931 (1990)) to an interaction which is more anisotropic and more classical in nature. A set of eleven energy-dependent cross sections, including both relaxation and production types, has been calculated using the LF- and LA-labeling schemes for the CS approximation, as well as the KI-, KF-, KA-, and KM-labeling schemes for the IOS approximation. The latter scheme is defined as KM = K = max(k_j, k_j_I). Further, a number of temperature-dependent cross sections formed from thermal averages of the above set have also been compared at 100 and 200 K. These comparisons have shown that the CS approximation produced accurate results for relaxation-type cross sections regardless of the L-labeling scheme chosen, but inaccurate results for production-type cross sections. Further, except for one particular cross section, the CCS approximation did not generally improve the accuracy of the CS results using either the LF- or LA-labeling schemes. The accuracy of the IOS results varies greatly between the cross sections, with the most accurate values given by the KM-labeling scheme. The CIOS approximation generally increases the accuracy of the corresponding IOS results but does not completely eliminate the errors associated with them.

  11. Random access to mobile networks with advanced error correction

    NASA Technical Reports Server (NTRS)

    Dippold, Michael

    1990-01-01

    A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Convolutional (RCPC) codes for Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit in the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft decision, utilization and mean delay are calculated. A utilization of 40 percent may be achieved under high traffic load for a frame with the number of slots equal to half the station number. The effects of feedback channel errors and some countermeasures are discussed.

  12. Asian dust aerosol: Optical effect on satellite ocean color signal and a scheme of its correction

    NASA Astrophysics Data System (ADS)

    Fukushima, H.; Toratani, M.

    1997-07-01

    The paper first exhibits the influence of the Asian dust aerosol (KOSA) on a coastal zone color scanner (CZCS) image which records erroneously low or negative satellite-derived water-leaving radiance, especially in the shorter wavelength region. This suggests the presence of spectrally dependent absorption which was disregarded in past atmospheric correction algorithms. On the basis of the analysis of the scene, a semiempirical optical model of the Asian dust aerosol is developed that relates the aerosol single scattering albedo (ωA) to the spectral ratio of aerosol optical thickness between 550 nm and 670 nm. Then, as a modification to a standard CZCS atmospheric correction algorithm (NASA standard algorithm), a scheme is proposed which estimates pixel-wise aerosol optical thickness and, in turn, ωA. The assumption of constant normalized water-leaving radiance at 550 nm is adopted together with a model of the aerosol scattering phase function. The scheme is combined with the standard algorithm, performing atmospheric correction just the same as the standard version with a fixed Angstrom coefficient except where the presence of Asian dust aerosol is detected by a lowered satellite-derived Angstrom exponent. Some of the model parameter values are determined so that the scheme does not produce any spatial discontinuity with the standard scheme. The algorithm was tested against the Japanese Asian dust CZCS scene with parameter values for the spectral dependency of ωA first determined statistically and then optimized for selected pixels. Analysis suggests that the parameter values depend on the Angstrom coefficient assumed for the standard algorithm, which at the same time defines the spatial extent of the area to which the Asian dust scheme is applied. The algorithm was also tested for a Saharan dust scene, showing the relevance of the scheme but with a different parameter setting. Finally, the algorithm was applied to a data set of 25 CZCS scenes to produce a monthly composite of pigment concentration for April 1981. Through these analyses, the modified algorithm is considered robust in the sense that it operates most compatibly with the standard algorithm yet performs adaptively in response to the magnitude of the dust effect.

  13. Atmospheric correction using near-infrared bands for satellite ocean color data processing in the turbid western Pacific region.

    PubMed

    Wang, Menghua; Shi, Wei; Jiang, Lide

    2012-01-16

    A regional near-infrared (NIR) ocean normalized water-leaving radiance (nLw(λ)) model is proposed for atmospheric correction for ocean color data processing in the western Pacific region, including the Bohai Sea, Yellow Sea, and East China Sea. Our motivation for this work is to derive ocean color products in the highly turbid western Pacific region using the Geostationary Ocean Color Imager (GOCI) onboard the South Korean Communication, Ocean, and Meteorological Satellite (COMS). GOCI has eight spectral bands from 412 to 865 nm but does not have shortwave infrared (SWIR) bands that are needed for satellite ocean color remote sensing in the turbid ocean region. Based on a regional empirical relationship between the NIR nLw(λ) and the diffuse attenuation coefficient at 490 nm (Kd(490)), which is derived from the long-term measurements with the Moderate-resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, an iterative scheme with the NIR-based atmospheric correction algorithm has been developed. Results from MODIS-Aqua measurements show that ocean color products in the region derived from the new proposed NIR-corrected atmospheric correction algorithm match well with those from the SWIR atmospheric correction algorithm. Thus, the proposed new atmospheric correction method provides an alternative for ocean color data processing for GOCI (and other ocean color satellite sensors without SWIR bands) in the turbid ocean regions of the Bohai Sea, Yellow Sea, and East China Sea, although the SWIR-based atmospheric correction approach is still much preferred. The proposed atmospheric correction methodology can also be applied to other turbid coastal regions.

  14. Accurate bond energies of hydrocarbons from complete basis set extrapolated multi-reference singles and doubles configuration interaction.

    PubMed

    Oyeyemi, Victor B; Pavone, Michele; Carter, Emily A

    2011-12-09

    Quantum chemistry has become one of the most reliable tools for characterizing the thermochemical underpinnings of reactions, such as bond dissociation energies (BDEs). The accurate prediction of these particular properties is challenging for ab initio methods based on perturbative corrections or coupled cluster expansions of the single-determinant Hartree-Fock wave function: the processes of bond breaking and forming are inherently multi-configurational and require an accurate description of non-dynamical electron correlation. To this end, we present a systematic ab initio approach for computing BDEs that is based on three components: 1) multi-reference single and double excitation configuration interaction (MRSDCI) for the electronic energies; 2) a two-parameter scheme for extrapolating MRSDCI energies to the complete basis set limit; and 3) DFT-B3LYP calculations of minimum-energy structures and vibrational frequencies to account for zero point energy and thermal corrections. We validated our methodology against a set of reliable experimental BDE values of CC and CH bonds of hydrocarbons. The goal of chemical accuracy is achieved, on average, without applying any empirical corrections to the MRSDCI electronic energies. We then use this composite scheme to make predictions of BDEs in a large number of hydrocarbon molecules for which there are no experimental data, so as to provide needed thermochemical estimates for fuel molecules. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
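
    A common two-parameter complete-basis-set extrapolation assumes E(X) = E_CBS + A·X^-3 in the basis-set cardinal number X; the paper's exact functional form may differ, so the sketch below is generic:

    ```python
    def cbs_extrapolate(e_x, x, e_y, y):
        """Two-point extrapolation assuming E(X) = E_CBS + A * X**-3:
        eliminate A between the two equations and solve for E_CBS.
        x and y are basis-set cardinal numbers, e.g. 3 (triple-zeta)
        and 4 (quadruple-zeta)."""
        return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

    # Illustrative numbers only (hartree):
    e_cbs = cbs_extrapolate(-0.3712, 3, -0.3801, 4)
    ```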

  15. Stable, non-dissipative, and conservative flux-reconstruction schemes in split forms

    NASA Astrophysics Data System (ADS)

    Abe, Yoshiaki; Morinaka, Issei; Haga, Takanori; Nonomura, Taku; Shibata, Hisaichi; Miyaji, Koji

    2018-01-01

    A stable, non-dissipative, and conservative flux-reconstruction (FR) scheme is constructed and demonstrated for the compressible Euler and Navier-Stokes equations. A proposed FR framework adopts a split form (also known as the skew-symmetric form) for convective terms. Sufficient conditions to satisfy both the primary conservation (PC) and kinetic energy preservation (KEP) properties are rigorously derived by polynomial-based analysis for a general FR framework. It is found that the split form needs to be expressed in the PC split form or KEP split form to satisfy each property in a discrete sense. The PC split form is retrieved from existing general forms (Kennedy and Gruber [33]); in contrast, we have newly introduced the KEP split form as a comprehensive form constituting a KEP scheme in the FR framework. Furthermore, Gauss-Lobatto (GL) solution points and the g2 correction function are required to satisfy the KEP property, while any correction function is available for the PC property. The split-form FR framework satisfying the KEP property is, eventually, similar to the split-form DGSEM-GL method proposed by Gassner [23], but here it is derived solely by polynomial-based analysis without explicitly using the diagonal-norm SBP property. Based on a series of numerical tests (e.g., Sod shock tube), both the PC and KEP properties have been verified. We have also demonstrated that, using a non-dissipative KEP flux, a sixteenth-order (p15) simulation of the viscous Taylor-Green vortex (Re = 1,600) is stable and its results are free of unphysical oscillations on a relatively coarse mesh (total number of degrees of freedom (DoFs) is 128^3).

  16. Uniformly Processed Strong Motion Database for Himalaya and Northeast Region of India

    NASA Astrophysics Data System (ADS)

    Gupta, I. D.

    2018-03-01

    This paper presents the first uniformly processed comprehensive database of strong-motion acceleration records for the extensive regions of the western Himalaya, northeast India, and the alluvial plains juxtaposing the Himalaya. It includes 146 three-component old analog records corrected for the instrument response and baseline distortions and 471 three-component recent digital records corrected for baseline errors. The paper first provides a background on the evolution of strong-motion data in India and the seismotectonics of the areas of recording, then describes the details of the recording stations and the contributing earthquakes, followed finally by the methodology used to obtain baseline-corrected data in a uniform and consistent manner. Two schemes in common use for baseline correction are based on the application of the Ormsby filter without zero pads (Trifunac 1971) and on the Butterworth filter with zero pads at the start as well as at the end (Converse and Brady 1992). To integrate the advantages of both schemes, an Ormsby filter with zero pads at the start only is used in the present study. A large number of typical example results are presented to illustrate that the adopted methodology provides realistic velocity and displacement records with a much smaller number of zero pads. The present strong-motion database of corrected acceleration records will be useful for analyzing the ground-motion characteristics of engineering importance, developing prediction equations for various strong-motion parameters, and calibrating the seismological source-model approach for ground-motion simulation in seismically active and risk-prone areas of India.
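
    As an illustration of the pad-then-filter baseline correction idea (scipy has no Ormsby filter, so a Butterworth high-pass stands in here; cutoff, order and pad length are illustrative, not the calibrated values of the study):

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def baseline_correct(acc, fs, fc=0.1, order=4, pad_seconds=30.0):
        """Zero-pad the accelerogram at the start only, high-pass
        filter to remove long-period baseline drift, then drop the
        pad. acc: acceleration samples; fs: sampling rate in Hz."""
        pad = np.zeros(int(pad_seconds * fs))
        padded = np.concatenate([pad, acc])
        b, a = butter(order, fc / (0.5 * fs), btype="highpass")
        corrected = filtfilt(b, a, padded)
        return corrected[pad.size:]
    ```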

  17. Temperature Data Assimilation with Salinity Corrections: Validation for the NSIPP Ocean Data Assimilation System in the Tropical Pacific Ocean, 1993-1998

    NASA Technical Reports Server (NTRS)

    Troccoli, Alberto; Rienecker, Michele M.; Keppenne, Christian L.; Johnson, Gregory C.

    2003-01-01

    The NASA Seasonal-to-Interannual Prediction Project (NSIPP) has developed an ocean data assimilation system to initialize the quasi-isopycnal ocean model used in our experimental coupled-model forecast system. Initial tests of the system focused on the assimilation of temperature profiles in an optimal interpolation framework. It is now recognized that correcting temperature alone often introduces spurious water masses; the resulting density distribution can be statically unstable and can also have a detrimental impact on the velocity distribution. Several simple schemes have been developed to correct these deficiencies. Here the salinity field is corrected using a scheme that assumes the temperature-salinity relationship of the model background is preserved during the assimilation. The scheme was first introduced for a z-level model by Troccoli and Haines (1999). A large set of subsurface observations of salinity and temperature is used to cross-validate two data assimilation experiments run for the 6-year period 1993-1998. In both experiments only subsurface temperature observations are assimilated, but in one case the salinity field is also updated whenever temperature observations are available.
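
    The essence of the salinity update is that the analysed salinity is read off the background temperature-salinity curve at the analysed temperature. Below is a minimal sketch of that idea, not the operational implementation, and it assumes a profile over which temperature varies monotonically enough for interpolation.

```python
import numpy as np

def ts_preserving_salinity(T_bg, S_bg, T_analysed):
    """Update salinity so the analysed (T, S) pairs stay on the
    background T-S curve, in the spirit of Troccoli and Haines (1999).

    T_bg, S_bg: background profiles; T_analysed: temperature after the
    assimilation increment. Interpolation needs ascending abscissae, so
    the background is sorted by temperature first.
    """
    order = np.argsort(T_bg)
    return np.interp(T_analysed, np.asarray(T_bg)[order],
                     np.asarray(S_bg)[order])
```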

  18. Revised Chapman-Enskog analysis for a class of forcing schemes in the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Li, Q.; Zhou, P.; Yan, H. J.

    2016-10-01

    In the lattice Boltzmann (LB) method, the forcing scheme, which is used to incorporate an external or internal force into the LB equation, plays an important role. It determines whether the force of the system is correctly implemented in an LB model and affects the numerical accuracy. In this paper we aim to clarify a critical issue about the Chapman-Enskog analysis for a class of forcing schemes in the LB method in which the velocity in the equilibrium density distribution function is given by u = Σ_α e_α f_α / ρ, while the actual fluid velocity is defined as û = u + δt F/(2ρ). It is shown that the usual Chapman-Enskog analysis for this class of forcing schemes should be revised so as to derive the actual macroscopic equations recovered from these forcing schemes. Three forcing schemes belonging to the above class are analyzed, among which Wagner's forcing scheme [A. J. Wagner, Phys. Rev. E 74, 056703 (2006), 10.1103/PhysRevE.74.056703] is shown to be capable of reproducing the correct macroscopic equations. The theoretical analyses are examined and demonstrated with two numerical tests, including the simulation of Womersley flow and the modeling of flat and circular interfaces by the pseudopotential multiphase LB model.
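
    The two velocity definitions under discussion translate directly into code; the sketch below (our own, for a generic D-dimensional lattice with Q velocities) computes the equilibrium velocity u and the actual fluid velocity û at a single node.

```python
import numpy as np

def actual_fluid_velocity(f, e, F, dt):
    """Velocities for the analysed class of LB forcing schemes.

    f: (Q,) distribution functions at one node; e: (Q, D) lattice
    velocity vectors; F: (D,) body force; dt: time step.
    Returns u = sum_a e_a f_a / rho and u_hat = u + dt * F / (2 * rho).
    """
    rho = f.sum()
    u = (f[:, None] * e).sum(axis=0) / rho   # equilibrium velocity
    u_hat = u + dt * F / (2.0 * rho)         # actual fluid velocity
    return u, u_hat
```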

  19. A Note on Multigrid Theory for Non-nested Grids and/or Quadrature

    NASA Technical Reports Server (NTRS)

    Douglas, C. C.; Douglas, J., Jr.; Fyfe, D. E.

    1996-01-01

    We provide a unified theory for multilevel and multigrid methods when the usual assumptions are not present. For example, we do not assume that the solution spaces or the grids are nested. Further, we do not assume that there is an algebraic relationship between the linear algebra problems on different levels. What we provide is a computationally useful theory for adaptively changing levels. Theory is provided for multilevel correction schemes, nested iteration schemes, and one way (i.e., coarse to fine grid with no correction iterations) schemes. We include examples showing the applicability of this theory: finite element examples using quadrature in the matrix assembly and finite volume examples with non-nested grids. Our theory applies directly to other discretizations as well.

  20. Deformation of angle profiles in forward kinematics for nullifying end-point offset while preserving movement properties.

    PubMed

    Zhang, Xudong

    2002-10-01

    This work describes a new approach that allows an angle-domain human movement model to generate, via forward kinematics, a Cartesian-space representation of human movement in which the otherwise inevitable end-point offset is nullified while much of the kinematic authenticity is retained. The approach incorporates a rectification procedure, which determines the minimum postural angle change at the final frame needed to correct the end-point offset, and a deformation procedure, which deforms the angle profiles accordingly to preserve as much of the original kinematics as possible. Two alternative deformation schemes, named amplitude-proportional (AP) and time-proportional (TP), are proposed and formulated. As an illustration and empirical evaluation, the proposed approach, along with the two deformation schemes, was applied to a set of target-directed right-hand reaching movements that had been previously measured and modeled. The evaluation showed that both deformation schemes nullified the final-frame end-point offset and significantly reduced time-averaged position errors for the end-point as well as the most distal intermediate joint, while causing essentially no change in the remaining joints. A comparison between the two schemes based on time-averaged joint and end-point position errors indicated that overall the TP scheme outperformed the AP scheme. In addition, no statistically significant difference in time-averaged angle error was identified between the raw prediction and either of the deformation schemes, nor between the two schemes themselves, suggesting minimal angle-domain distortion incurred by the deformation.
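
    The two deformation schemes can be sketched as weighting functions that distribute the final-frame angle correction over the trajectory; the weights below are our reading of "time-proportional" and "amplitude-proportional" and are illustrative rather than the paper's exact formulation.

```python
import numpy as np

def deform_angle_profile(theta, delta_final, scheme="TP"):
    """Shift the final value of a joint-angle trajectory by delta_final
    while perturbing earlier frames progressively less.

    TP: correction weight grows linearly with frame index (time).
    AP: weight grows with the fraction of total angular excursion
    already traversed, so fast-moving phases absorb more correction.
    """
    theta = np.asarray(theta, dtype=float)
    n = theta.size
    if scheme == "TP":
        w = np.linspace(0.0, 1.0, n)
    else:  # "AP"
        excursion = np.abs(np.diff(theta, prepend=theta[0])).cumsum()
        w = excursion / excursion[-1] if excursion[-1] > 0 else \
            np.linspace(0.0, 1.0, n)
    return theta + w * delta_final
```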

  1. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    PubMed

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral acquisitions. In this work, a general and rapid method is proposed for off-resonance artifact correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. The off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance-correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated by data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral, and echo-planar imaging datasets. For radial acquisitions, the proposed method allows self-calibration of the field map from the imaging data when an alternating view-angle ordering scheme is used. An additional advantage of off-resonance artifact correction based on data convolution in k-space is the reusability of convolution kernels for images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
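
    Correcting each temporal segment by the phase accrued at its acquisition time is, in image space, a simple demodulation (the Fourier dual of the paper's k-space convolution). A minimal sketch under that equivalence, with each segment stored as a full k-space grid that is zero outside its own samples:

```python
import numpy as np

def conjugate_phase_correct(kspace_segments, seg_times, field_map_hz):
    """Off-resonance correction of segmented k-space data.

    kspace_segments: list of 2D arrays, one per acquisition-time
    segment; seg_times: acquisition time (s) of each segment;
    field_map_hz: 2D off-resonance map. Each segment image is
    demodulated by exp(-2*pi*i*df*t) before the segments are summed.
    """
    img = np.zeros(field_map_hz.shape, dtype=complex)
    for kseg, t in zip(kspace_segments, seg_times):
        seg_img = np.fft.ifft2(np.fft.ifftshift(kseg))
        img += seg_img * np.exp(-2j * np.pi * field_map_hz * t)
    return img
```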

  2. An analysis of USSPACECOM's space surveillance network sensor tasking methodology

    NASA Astrophysics Data System (ADS)

    Berger, Jeff M.; Moles, Joseph B.; Wilsey, David G.

    1992-12-01

    This study provides the basis for the development of a cost/benefit assessment model to determine the effects of alterations to the Space Surveillance Network (SSN) on orbital element (OE) set accuracy. It provides a review of the current methods used by NORAD and the SSN to gather and process observations, an alternative to the current Gabbard classification method, and the development of a model to determine the effects of observation rate and correction interval on OE set accuracy. The proposed classification scheme is based on satellite J2 perturbations. Specifically, classes were established based on mean motion, eccentricity, and inclination, since J2 perturbation effects are functions of only these elements. Model development began by creating representative sensor observations using a highly accurate orbital propagation model. These observations were compared to predicted observations generated with the NORAD Simplified General Perturbations (SGP4) model and differentially corrected using a Bayes sequential estimation algorithm. A 10-run Monte Carlo analysis was performed using this model on 12 satellites with 16 different observation rate/correction interval combinations. An ANOVA and confidence-interval analysis of the results shows that this model demonstrates the differences in steady-state position error arising from varying the observation rate and correction interval.

  3. GIFTS SM EDU Level 1B Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Gazarik, Michael J.; Reisse, Robert A.; Johnson, David G.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high-resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the GIFTS SM EDU Level 1B algorithms involved in the calibration. The GIFTS Level 1B calibration procedures can be subdivided into four blocks. In the first block, the measured raw interferograms are corrected for detector nonlinearity distortion, followed by a complex filtering and decimation procedure. In the second block, a phase correction algorithm is applied to the filtered and decimated complex interferograms. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected spectrum. The phase correction and spectral smoothing operations are performed on a set of interferogram scans for both ambient and hot blackbody references. To continue the calibration, we compute the spectral responsivity based on the previous results, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. We can then estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. Correction schemes that compensate for the fore-optics offsets and off-axis effects are also implemented. In the third block, we develop an efficient method of generating pixel performance assessments, and a random pixel selection scheme is designed based on the pixel performance evaluation. Finally, in the fourth block, the single-pixel algorithms are applied to the entire FPA.
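
    The responsivity step of the calibration follows the standard two-point (ambient/hot blackbody) FTS calibration; the sketch below shows that generic formula, not GIFTS-specific code.

```python
import numpy as np

def two_point_calibrate(S_scene, S_abb, S_hbb, B_abb, B_hbb):
    """Generic two-point radiometric calibration of an FTS spectrum.

    S_*: measured (complex) spectra of the scene and of the ambient and
    hot blackbody references; B_*: Planck radiances of the references
    at their known temperatures. Ratioing removes the instrument
    responsivity and offset; the real part keeps the calibrated
    radiance while the imaginary part carries only noise.
    """
    responsivity = (S_hbb - S_abb) / (B_hbb - B_abb)
    return B_abb + np.real((S_scene - S_abb) / responsivity)
```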

  4. APFEL: A PDF evolution library with QED corrections

    NASA Astrophysics Data System (ADS)

    Bertone, Valerio; Carrazza, Stefano; Rojo, Juan

    2014-06-01

    Quantum electrodynamics (QED) and electroweak corrections are important ingredients for many theoretical predictions at the LHC. This paper documents APFEL, a new PDF evolution package that for the first time allows DGLAP evolution to be performed up to NNLO in QCD and to LO in QED, in the variable-flavor-number scheme and with either pole or MSbar heavy-quark masses. APFEL consistently accounts for the QED corrections to the evolution of quark and gluon PDFs and for the contribution from the photon PDF in the proton. The coupled QCD ⊗ QED equations are solved in x-space by means of higher-order interpolation, followed by a Runge-Kutta solution of the resulting discretized evolution equations. APFEL is based on an innovative and flexible methodology for the sequential solution of the QCD and QED evolution equations and their combination. In addition to PDF evolution, APFEL provides a module that computes deep-inelastic scattering structure functions in the FONLL general-mass variable-flavor-number scheme up to O(α_s²). All the functionalities of APFEL can be accessed via a graphical user interface, supplemented with a variety of plotting tools for PDFs, parton luminosities, and structure functions. Written in FORTRAN 77, APFEL can also be used via the C/C++ and Python interfaces, and is publicly available from the HepForge repository.

  5. A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei

    2016-03-01

    Two-photon fluorescence microscopy (TPFM) is a well-suited optical imaging technique for monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained by this technique are too noisy to resolve this fast infection process without image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round-identification method, tree-structured nonlinear filters, Kalman filters, and a cell-tracking method. After these procedures, most of the noise was eliminated, and host images were recovered with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.

  6. Event-triggered H∞ state estimation for semi-Markov jumping discrete-time neural networks with quantization.

    PubMed

    Rakkiyappan, R; Maheswari, K; Velmurugan, G; Park, Ju H

    2018-05-17

    This paper investigates the H∞ state estimation problem for a class of semi-Markovian jumping discrete-time neural network models with an event-triggered scheme and quantization. First, a new event-triggered communication scheme is introduced to determine whether the current sampled sensor data should be broadcast and transmitted to the quantizer, which conserves limited communication resources. Second, a novel communication framework is employed in which a logarithmic quantizer quantizes and reduces the data transmission rate in the network, which appreciably improves the communication efficiency of the network. Third, a stabilization criterion is derived, based on a sufficient condition that guarantees a prescribed H∞ performance level for the estimation error system, in terms of linear matrix inequalities. Finally, numerical simulations are given to illustrate the correctness of the proposed scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.
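
    Event-triggered schemes of this kind typically transmit a sample only when it deviates sufficiently from the last transmitted one. The sketch below shows a common relative-threshold trigger of that form; Omega and sigma are generic weighting-matrix and threshold parameters, illustrating the mechanism rather than the paper's exact condition.

```python
import numpy as np

def should_transmit(x_k, x_last_sent, Omega, sigma):
    """Relative event-triggering rule: send x_k to the quantizer only
    if the weighted deviation from the last transmitted sample exceeds
    a fraction sigma of the current sample's weighted norm.
    """
    e = x_k - x_last_sent
    return float(e @ Omega @ e) > sigma * float(x_k @ Omega @ x_k)
```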

  7. Well balancing of the SWE schemes for moving-water steady flows

    NASA Astrophysics Data System (ADS)

    Caleffi, Valerio; Valiani, Alessandro

    2017-08-01

    In this work, the exact reproduction of a moving-water steady flow via the numerical solution of the one-dimensional shallow water equations is studied. A new scheme based on a modified version of the HLLEM approximate Riemann solver (Dumbser and Balsara (2016) [18]) that exactly preserves the total head and the discharge in the simulation of smooth steady flows and that correctly dissipates mechanical energy in the presence of hydraulic jumps is presented. This model is compared with a selected set of schemes from the literature, including models that exactly preserve quiescent flows and models that exactly preserve moving-water steady flows. The comparison highlights the strengths and weaknesses of the different approaches. In particular, the results show that the increase in accuracy in the steady state reproduction is counterbalanced by a reduced robustness and numerical efficiency of the models. Some solutions to reduce these drawbacks, at the cost of increased algorithm complexity, are presented.

  8. SU-C-206-07: A Practical Sparse View Ultra-Low Dose CT Acquisition Scheme for PET Attenuation Correction in the Extended Scan Field-Of-View

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miao, J; Fan, J; Gopinatha Pillai, A

    Purpose: To further reduce CT dose, a practical sparse-view acquisition scheme is proposed to provide the same attenuation estimation as higher-dose scans for PET imaging in the extended scan field-of-view. Methods: CT scans are often used for PET attenuation correction and can be acquired at very low CT radiation dose. Low-dose techniques often employ low tube voltage/current accompanied by a smoothing filter before backprojection to reduce CT image noise. These techniques can introduce bias in the conversion from HU to attenuation values, especially in the extended CT scan field-of-view (FOV). In this work, we propose an ultra-low dose CT technique for PET attenuation correction based on sparse-view acquisition: instead of acquiring the full number of views, only a fraction of the views are acquired. We tested this technique on a 64-slice GE CT scanner using multiple phantoms. CT scan FOV truncation completion was performed based on the published water-cylinder extrapolation algorithm. Numbers of continuous views per rotation of 984 (full), 246, 123, 82, and 62 were tested, corresponding to CT dose reductions of none, 4x, 8x, 12x, and 16x. We also simulated sparse-view acquisition by skipping views from the fully acquired view data. Results: FBP reconstruction with the Q.AC filter on reduced views in the fully extended scan field-of-view possesses image quality similar to reconstruction on the full view data. The results showed further potential for dose reduction compared to full acquisition, without sacrificing any significant attenuation support for the PET. Conclusion: With the proposed sparse-view method, one can potentially achieve at least 2x more CT dose reduction compared to the current Ultra-Low Dose (ULD) PET/CT protocol. A pre-scan based dose modulation scheme can be combined with the above sparse-view approaches to further reduce the CT scan dose during a PET/CT exam.

  9. Optimization-based mesh correction with volume and convexity constraints

    DOE PAGES

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; ...

    2016-02-24

    In this study, we consider the problem of finding a mesh such that (1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and (2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines the optimization objective, while the prescribed cell volumes, mesh validity, and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for the optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness, and scalability of our approach.
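
    For intuition, the volume-correction problem can be prototyped with an off-the-shelf SQP-type solver; the sketch below uses SciPy's SLSQP in place of the paper's multigrid-preconditioned SQP method, with `cell_volumes` standing in for a user-supplied mesh-geometry routine and the validity/convexity constraints omitted.

```python
import numpy as np
from scipy.optimize import minimize

def correct_mesh(x_src, cell_volumes, target_vols):
    """Find node coordinates closest (in the l2 sense) to the source
    mesh subject to prescribed cell volumes.

    x_src: flattened source-node coordinates; cell_volumes(x): maps
    flattened coordinates to the vector of cell volumes; target_vols:
    prescribed volumes.
    """
    objective = lambda x: 0.5 * np.sum((x - x_src) ** 2)
    cons = {"type": "eq",
            "fun": lambda x: cell_volumes(x) - target_vols}
    res = minimize(objective, x_src, method="SLSQP", constraints=[cons])
    return res.x
```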

  10. Non-linear eigensolver-based alternative to traditional SCF methods

    NASA Astrophysics Data System (ADS)

    Gavin, B.; Polizzi, E.

    2013-05-01

    The self-consistent procedure in electronic structure calculations is revisited using a highly efficient and robust algorithm for solving the nonlinear eigenvector problem, i.e., H({ψ})ψ = Eψ. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm to account for the nonlinearity of the Hamiltonian with respect to the occupied eigenvectors. Using a series of numerical examples and the density functional theory Kohn-Sham model, it is shown that our approach can outperform traditional SCF mixing schemes by providing a higher convergence rate, convergence to the correct solution regardless of the choice of initial guess, and a significant reduction of the eigenvalue solve time in simulations.
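
    For contrast with the nonlinear eigensolver, the traditional SCF loop that such an approach is compared against can be sketched in a few lines; `H_of_density` is a hypothetical user-supplied Hamiltonian builder, and the simple linear mixing shown is exactly the kind of mixing the FEAST-based scheme avoids.

```python
import numpy as np

def scf_loop(H_of_density, density0, n_occ, tol=1e-8, max_iter=200):
    """Traditional SCF fixed point for H(psi) psi = E psi: build H from
    the current density, diagonalize, refill the lowest n_occ states,
    mix, and repeat until self-consistency.
    """
    density = density0
    for _ in range(max_iter):
        _, vecs = np.linalg.eigh(H_of_density(density))
        new_density = (vecs[:, :n_occ] ** 2).sum(axis=1)
        if np.linalg.norm(new_density - density) < tol:
            return new_density
        density = 0.5 * density + 0.5 * new_density  # linear mixing
    return density
```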

  11. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
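
    The 'basic' two-stage scheme can be sketched as follows; the early-stop margin is an illustrative placeholder, since the actual decision boundaries in the study come from the diagnostic-testing framework.

```python
import random

def basic_sequential_scheme(is_lame, full_n, threshold, margin):
    """Two-stage sequential lameness assessment.

    is_lame: per-cow 0/1 lameness indicators for the herd; full_n: the
    Welfare Quality sample size; threshold: pass/fail prevalence
    cut-off; margin: early-stopping band (illustrative). Returns the
    decision and the number of cows actually scored.
    """
    sample = random.sample(is_lame, min(full_n, len(is_lame)))
    half = len(sample) // 2
    prev1 = sum(sample[:half]) / half
    if prev1 <= threshold - margin:
        return "pass", half               # clearly good: stop early
    if prev1 >= threshold + margin:
        return "fail", half               # clearly bad: stop early
    prev_all = sum(sample) / len(sample)  # borderline: score the rest
    return ("fail" if prev_all > threshold else "pass"), len(sample)
```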

  12. Location verification algorithm of wearable sensors for wireless body area networks.

    PubMed

    Wang, Hua; Wen, Yingyou; Zhao, Dazhe

    2018-01-01

    Knowledge of the location of sensor devices is crucial for many medical applications of wireless body area networks, as wearable sensors are designed to monitor the vital signs of a patient while the wearer retains freedom of movement. However, clinicians or patients can misplace the wearable sensors, causing a mismatch between their physical locations and their correct target positions, and an error of more than a few centimeters raises the risk of mistreating patients. The present study aims to develop a scheme to calculate and verify the position of wearable sensors without beacon nodes. A new scheme is proposed to verify the location of wearable sensors mounted on the patient's body by comparing atmospheric air pressure and received signal strength indication measurements from the wearable sensors. Extensive two-sample t tests were performed to validate the proposed scheme. The proposed scheme could easily recognize a 30-cm horizontal body range and a 65-cm vertical body range to correctly perform sensor localization and limb identification. All experiments indicate that the scheme is suitable for identifying wearable sensor positions in an indoor environment.
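
    The pressure-based part of the verification reduces to a two-sample test on barometric streams (a higher-mounted sensor reads systematically lower pressure). A minimal sketch, with the significance level as an assumed parameter:

```python
from scipy import stats

def different_heights(pressure_a, pressure_b, alpha=0.01):
    """Welch two-sample t test on the pressure streams of two wearable
    sensors; a significant difference suggests the sensors sit at
    different heights on the body. alpha is illustrative.
    """
    t_stat, p_value = stats.ttest_ind(pressure_a, pressure_b,
                                      equal_var=False)
    return p_value < alpha, t_stat
```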

  13. A Scenario-Based Protocol Checker for Public-Key Authentication Scheme

    NASA Astrophysics Data System (ADS)

    Saito, Takamichi

    Security protocols provide communication security for the Internet. One of their important features is authentication with key exchange, whose correctness is a prerequisite for the security of the whole communication. In this paper, we introduce three attack models, realized as attack scenarios, and provide an authentication-protocol checker that applies the three attack scenarios based on these models. We also use it to check two popular security protocols: Secure Shell (SSH) and Secure Socket Layer/Transport Layer Security (SSL/TLS).

  14. Investigation of supersonic jet plumes using an improved two-equation turbulence model

    NASA Technical Reports Server (NTRS)

    Lakshmanan, B.; Abdol-Hamid, Khaled S.

    1994-01-01

    Supersonic jet plumes were studied using a two-equation turbulence model employing corrections for compressible dissipation and pressure-dilatation. A space-marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that two-equation models employing corrections for compressible dissipation and pressure-dilatation yield improved agreement with the experimental data. In addition, the numerical study demonstrates that the computed results are sensitive to the effect of grid refinement and insensitive to the type of velocity profiles used at the inflow boundary for the cases considered in the present study.

  15. A uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care.

    PubMed

    Chang, Ya-Fen; Yu, Shih-Hui; Shiao, Ding-Rui

    2013-04-01

    Connected health care provides new opportunities for improving financial and clinical performance. Many connected health care applications, such as telecare medicine information systems, personally controlled health record systems, and patient monitoring, have been proposed. Correct and high-quality care is the goal of connected health care, and user authentication can ensure the legality of patients. After reviewing authentication schemes for connected health care applications, we find that many of them cannot protect patient privacy, so that others can trace users/patients through the transmitted data. Moreover, the verification tokens used by these authentication schemes are limited to passwords, smart cards, and RFID tags, which are not unique and are easy to copy. Biometric characteristics, such as the iris, face, voiceprint, and fingerprint, by contrast, are unique, easy to verify, and hard to copy. In this paper, a biometrics-based user authentication scheme is proposed to ensure uniqueness and anonymity at the same time. With the proposed scheme, only the legal user/patient can access the remote server, and no one can trace him/her from the transmitted data.

  16. Hierarchical scheme for detecting the rotating MIMO transmission of the in-door RGB-LED visible light wireless communications using mobile-phone camera

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Hao; Chow, Chi-Wai

    2015-01-01

    Multiple-input multiple-output (MIMO) schemes can extend the transmission capacity of light-emitting-diode (LED) based visible light communication (VLC) systems. A MIMO VLC system that uses a mobile-phone camera as the optical receiver (Rx) to receive the MIMO signal from an n×n red-green-blue (RGB) LED array is desirable. The key step in decoding this signal is detecting the signal direction: if the LED transmitter (Tx) is rotated, the Rx may not detect the rotation, and transmission errors can occur. In this work, we propose and demonstrate a novel hierarchical transmission scheme that reduces the computational complexity of rotation detection in LED-array VLC systems. We use an n×n RGB LED array as the MIMO Tx and propose a novel two-dimensional Hadamard coding scheme. By using the different LED color layers to indicate the rotation, a low-complexity rotation detection method can be used to improve the quality of the received signal. The correct detection rate is above 95% within typical indoor usage distances. Experimental results confirm the feasibility of the proposed scheme.

  17. A Study of Convergence of the PMARC Matrices Applicable to WICS Calculations

    NASA Technical Reports Server (NTRS)

    Ghosh, Amitabha

    1997-01-01

    This report discusses some analytical procedures to enhance the real-time solution of the PMARC matrices applicable to the Wall Interference Correction Scheme (WICS) currently being implemented at the 12-Foot Pressure Tunnel. WICS calculations involve solving large linear systems in a reasonably speedy manner, necessitating further improvement in solution time. This paper therefore presents some of the associated theory of the solution of linear systems, then discusses a geometrical interpretation of the residual correction schemes. Finally, some results of the current investigation are presented.

  19. A wireless sensor network based personnel positioning scheme in coal mines with blind areas.

    PubMed

    Liu, Zhigao; Li, Chunwen; Wu, Danchen; Dai, Wenhan; Geng, Shaobo; Ding, Qingqing

    2010-01-01

    This paper proposes a novel personnel positioning scheme for a tunnel network with blind areas which, compared with most existing schemes, offers both low cost and high precision. Based on the data models of tunnel networks, measurement networks, and mobile miners, the global positioning method is divided into four steps: (1) calculate the real-time personnel location in local areas using a location engine and send it to the upper computer through the gateway; (2) correct any localization errors resulting from underground tunnel environmental interference; (3) determine the global three-dimensional position by coordinate transformation; (4) estimate the personnel locations in the blind areas. A prototype system constructed to verify the positioning performance shows that the proposed positioning system has good reliability, scalability, and positioning performance. In particular, the static localization error of the positioning system is less than 2.4 m in the underground tunnel environment, and the moving estimation error is below 4.5 m in the corridor environment. The system was operated continuously over three months without any failures.

  20. Location-Aware Dynamic Session-Key Management for Grid-Based Wireless Sensor Networks

    PubMed Central

    Chen, Chin-Ling; Lin, I-Hsien

    2010-01-01

    Security is a critical issue for sensor networks used in hostile environments. When the wireless sensor nodes of a wireless sensor network are distributed in an insecure, hostile environment, the nodes must be protected: a secret key must be used to protect the nodes transmitting messages. If the nodes are not protected and become compromised, many types of attacks against the network may result. Such is the case with existing schemes, which are vulnerable to attacks because they mostly provide a hop-by-hop paradigm, which is insufficient to defend against known attacks. We propose a location-aware dynamic session-key management protocol for grid-based wireless sensor networks. The proposed protocol improves the security of the secret key and includes a key that is dynamically updated; this dynamic update lowers the probability of the key being guessed correctly, so that currently known attacks can be defended against. By utilizing local information, the proposed scheme can also limit the flooding region in order to reduce the energy consumed in discovering routing paths. PMID:22163606

  3. Short-range second order screened exchange correction to RPA correlation energies

    NASA Astrophysics Data System (ADS)

    Beuerle, Matthias; Ochsenfeld, Christian

    2017-11-01

    Direct random phase approximation (RPA) correlation energies have become increasingly popular as a post-Kohn-Sham correction, due to significant improvements over DFT calculations for properties such as long-range dispersion effects, which are problematic in conventional density functional theory. On the other hand, RPA still has various weaknesses, such as unsatisfactory results for non-isogyric processes. This can in part be attributed to the self-correlation present in RPA correlation energies, which leads to significant self-interaction errors. Therefore, a variety of schemes have been devised to include exchange in the calculation of RPA correlation energies in order to correct this shortcoming. One of the most popular RPA-plus-exchange schemes is the second-order screened exchange (SOSEX) correction. RPA + SOSEX delivers more accurate absolute correlation energies and also improves upon RPA for non-isogyric processes. On the other hand, RPA + SOSEX barrier heights are worse than those obtained from plain RPA calculations. To combine the benefits of RPA correlation energies and the SOSEX correction, we introduce a short-range RPA + SOSEX correction. Proof-of-concept calculations and benchmarks showing the advantages of our method are presented.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, John Nicolas; Fish, Jacob; Waisman, Haim

    Two heuristic strategies intended to enhance the performance of the generalized global basis (GGB) method [H. Waisman, J. Fish, R. S. Tuminaro, J. Shadid, The Generalized Global Basis (GGB) method, International Journal for Numerical Methods in Engineering 61(8), 1243-1269] applied to nonlinear systems are presented. The standard GGB method accelerates a multigrid scheme by an additional coarse-grid correction that filters out slowly converging modes; this correction requires a potentially costly eigen-calculation. This paper considers reusing previously computed eigenspace information. The first scheme enriches the prolongation operator with new eigenvectors, while the modified method (MGGB) selectively reuses the same prolongation. Both methods use the criterion of principal angles between the subspaces spanned by the previous and current prolongation operators. Numerical examples clearly indicate significant time savings, in particular for the MGGB scheme.

  6. Patient specific anatomy: the new area of anatomy based on computer science illustrated on liver.

    PubMed

    Soler, Luc; Mutter, Didier; Pessaux, Patrick; Marescaux, Jacques

    2015-01-01

    Over the past century, medical imaging has brought a new revolution: the internal anatomy of a patient can be seen without any invasive technique. This revolution has highlighted the two main limits of current anatomy: the anatomical description is physician dependent, and the average anatomy is more and more frequently insufficient to describe anatomical variations. These drawbacks can sometimes be so important that they create mistakes, but they can be overcome through the use of 3D patient-specific surgical anatomy. In this article, we illustrate such an improvement of standard anatomy for the liver. We first propose a general scheme that allows easy comparison of the four main liver anatomical descriptions, by Takasaki, Goldsmith and Woodburne, Bismuth, and Couinaud. From this general scheme we derive four rules to apply in order to correct these initial anatomical definitions. Applying these rules corrects the usual vascular topological mistakes of standard anatomy. We finally validate this correction on a database of 20 clinical cases, compared with the 111 clinical cases of a Couinaud article. Out of the 20 images in the database, we note a revealing difference in 14 cases (70%) on at least one important branch of the portal network; only six cases (30%) show no revealing difference between the two labellings. We also show that the right portal fissure location in our 20 cases, defined between segments V and VI of our anatomical definition, correlates well with the real position described by Couinaud on 111 cases, knowing that the theoretical position was found in only 46 of the 111 cases (41.44%) with the uncorrected Couinaud definition. We have thus proposed a new anatomical segmentation of the liver based on four main rules that correct topological errors of the four main standard segmentations. Our validation clearly illustrates that this new definition corrects the large number of mistakes created by the current standard definitions, compounded by physician interpretation that can vary from one case to another.

  8. Intercomparison of Martian Lower Atmosphere Simulated Using Different Planetary Boundary Layer Parameterization Schemes

    NASA Technical Reports Server (NTRS)

    Natarajan, Murali; Fairlie, T. Duncan; Dwyer Cianciolo, Alicia; Smith, Michael D.

    2015-01-01

    We use the mesoscale modeling capability of the Mars Weather Research and Forecasting (MarsWRF) model to study the sensitivity of the simulated Martian lower atmosphere to differences in the parameterization of the planetary boundary layer (PBL). Characterization of the Martian atmosphere and realistic representation of processes such as the mixing of tracers like dust depend on how well the model reproduces the evolution of the PBL structure. MarsWRF is based on the NCAR WRF model and retains some of the PBL schemes available in the Earth version. Published studies have examined the performance of different PBL schemes in NCAR WRF with the help of observations; such assessments are not currently feasible for Martian atmospheric models due to a lack of observations, but it is nonetheless of interest to study the sensitivity of the model to the PBL parameterization. Typically, for standard Martian atmospheric simulations, we have used the Medium Range Forecast (MRF) PBL scheme, which adds a correction term to the vertical gradients to incorporate nonlocal effects. For this study, we have also used two other parameterizations: a nonlocal closure scheme called the Yonsei University (YSU) PBL scheme and a turbulent kinetic energy closure scheme called the Mellor-Yamada-Janjic (MYJ) PBL scheme. We present intercomparisons of the near-surface temperature profiles, boundary layer heights, and winds obtained from the different simulations. We plan to use available temperature observations from the Mini-TES instrument onboard the rovers Spirit and Opportunity in evaluating the model results.

  9. Seismic reflection imaging, accounting for primary and multiple reflections

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Thorbecke, Jan; Broggini, Filippo; Slob, Evert; Snieder, Roel

    2015-04-01

    Imaging of seismic reflection data is usually based on the assumption that the seismic response consists of primary reflections only. Multiple reflections, i.e. waves that have reflected more than once, are treated as primaries and are imaged at wrong positions. There are two classes of multiple reflections, which we will call surface-related multiples and internal multiples. Surface-related multiples are those multiples that contain at least one reflection at the earth's surface, whereas internal multiples consist of waves that have reflected only at subsurface interfaces. Surface-related multiples are the strongest, but also relatively easy to deal with because the reflecting boundary (the earth's surface) is known. Internal multiples constitute a much more difficult problem for seismic imaging, because the positions and properties of the reflecting interfaces are not known. We are developing reflection imaging methodology which deals with internal multiples. Starting with the Marchenko equation for 1D inverse scattering problems, we derived 3D Marchenko-type equations, which relate reflection data at the surface to Green's functions between virtual sources anywhere in the subsurface and receivers at the surface. Based on these equations, we derived an iterative scheme by which these Green's functions can be retrieved from the reflection data at the surface. This iterative scheme requires an estimate of the direct wave of the Green's functions in a background medium. Note that this is precisely the same information that is also required by standard reflection imaging schemes. However, unlike in standard imaging, our iterative Marchenko scheme retrieves the multiple reflections of the Green's functions from the reflection data at the surface. For this, no knowledge of the positions and properties of the reflecting interfaces is required. Once the full Green's functions are retrieved, reflection imaging can be carried out by which the primaries and multiples are mapped to their correct positions, with correct reflection amplitudes. In the presentation we will illustrate this new methodology with numerical examples and discuss its potential and limitations.

  10. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    PubMed Central

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining traction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high-resolution National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km/30 min resolution are aggregated to daily to match in-situ observations for the period 2003-2010. The study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for the application of bias correction, and to test the effectiveness of the bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SWs) of 3, 5, 7, 9, …, 31 days with the aim of assessing the error distribution between the in-situ observations and the CMORPH estimates. We tested forward, central, and backward window (FW, CW, and BW) schemes to assess the effect of time integration on accumulated rainfall. The accuracy of cumulative rainfall depth is assessed by the root mean squared error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map; the reliability of the interpolation is assessed by cross-validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to produce bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r), and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall, and the 7-day SW approach was found to perform best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach. PMID:27314363
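
    The sequential-window bias factors reduce to ratios of accumulated gauge to accumulated satellite rainfall over non-overlapping blocks; below is a minimal sketch for one station (the window length and the guard against empty windows are illustrative).

```python
import numpy as np

def sw_bias_factors(gauge_daily, cmorph_daily, window=7):
    """Sequential-window (SW) bias factors for one station: the ratio
    of accumulated gauge to accumulated satellite rainfall in each
    block, repeated into a daily multiplier series for correcting the
    satellite estimates.
    """
    n = len(gauge_daily) // window * window
    g = np.asarray(gauge_daily[:n], float).reshape(-1, window).sum(axis=1)
    s = np.asarray(cmorph_daily[:n], float).reshape(-1, window).sum(axis=1)
    factors = g / np.maximum(s, 1e-9)        # avoid division by zero
    return np.repeat(factors, window)
```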

  12. LEAP: An Innovative Direction Dependent Ionospheric Calibration Scheme for Low Frequency Arrays

    NASA Astrophysics Data System (ADS)

    Rioja, María J.; Dodson, Richard; Franzen, Thomas M. O.

    2018-05-01

    The ambitious scientific goals of the SKA require a matching capability for the calibration of atmospheric propagation errors, which contaminate the observed signals. We demonstrate a scheme for correcting the direction-dependent ionospheric and instrumental phase effects at the low frequencies and wide fields of view planned for SKA-Low. It leverages bandwidth smearing to filter out signals from off-axis directions, allowing the measurement of direction-dependent antenna-based gains in the visibility domain; by doing this towards multiple directions it is possible to calibrate across wide fields of view. This strategy removes the need for a global sky model, so all directions are independent. We use MWA results at 88 and 154 MHz under various weather conditions to characterize the performance and applicability of the technique. We conclude that this method is suitable for measuring and correcting temporal fluctuations and direction-dependent spatial ionospheric phase distortions on a wide range of scales, both larger and smaller than the array size; the latter are the most intractable and pose a major challenge for future instruments. Moreover, this scheme is embarrassingly parallel, as multiple directions can be processed independently and simultaneously. This is an important consideration for the SKA, where the currently planned architecture is one of compute islands with limited interconnects. The current implementation of the algorithm and ongoing developments are discussed.

  13. Heralded creation of photonic qudits from parametric down-conversion using linear optics

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Jun-ichi; Bergmann, Marcel; van Loock, Peter; Fuwa, Maria; Okada, Masanori; Takase, Kan; Toyama, Takeshi; Makino, Kenzo; Takeda, Shuntaro; Furusawa, Akira

    2018-05-01

    We propose an experimental scheme to generate, in a heralded fashion, arbitrary quantum superpositions of two-mode optical states with a fixed total photon number n based on weakly squeezed two-mode squeezed state resources (obtained via weak parametric down-conversion), linear optics, and photon detection. Arbitrary d-level (qudit) states can be created this way, where d = n + 1. Furthermore, we experimentally demonstrate our scheme for n = 2. The resulting qutrit states are characterized via optical homodyne tomography. We also discuss possible extensions to more than two modes, concluding that, in general, our approach ceases to work in this case. For illustration, and with regard to possible applications, we explicitly calculate a few examples such as NOON states and logical qubit states for quantum error correction. In particular, our approach enables one to construct bosonic qubit error-correction codes against amplitude damping (photon loss) with a typical suppression of √n − 1 losses and spanned by two logical codewords that each correspond to an n-photon superposition of two bosonic modes.

  14. Highly Parallel Alternating Directions Algorithm for Time Dependent Problems

    NASA Astrophysics Data System (ADS)

    Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.

    2011-11-01

    In our work, we consider the time-dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction-splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time-integration scheme for two- and three-dimensional parabolic problems, in which, at each time sub-step, the second-order derivative with respect to one space variable is treated implicitly while the others are treated explicitly. In order to achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by the solution of a sequence of one-dimensional second-order elliptic boundary value problems, one in each spatial direction. The parallel code is implemented using standard MPI functions and tested on two modern parallel computer systems. The numerical tests performed demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
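
    Each directional sub-step then amounts to a cheap tridiagonal solve. Below is a minimal 1D sketch of one implicit Crank-Nicolson sweep (homogeneous Dirichlet ends); this is our own simplification of the velocity-prediction step, not the paper's code.

```python
import numpy as np
from scipy.linalg import solve_banded

def implicit_sweep(u, nu, dt, dx):
    """One direction-split Crank-Nicolson sub-step:
    (I - 0.5*dt*nu*Dxx) u_new = (I + 0.5*dt*nu*Dxx) u,
    with Dxx the 1D second-difference operator. Solved with a banded
    (tridiagonal) solver, which is what makes each sweep cheap.
    """
    n = u.size
    r = 0.5 * nu * dt / dx ** 2
    rhs = u + r * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1))
    rhs[0], rhs[-1] = u[0], u[-1]            # keep boundary values fixed
    ab = np.zeros((3, n))
    ab[0, 1:] = -r                           # super-diagonal
    ab[1, :] = 1.0 + 2.0 * r                 # main diagonal
    ab[2, :-1] = -r                          # sub-diagonal
    ab[1, 0] = ab[1, -1] = 1.0               # identity rows at the ends
    ab[0, 1] = ab[2, -2] = 0.0
    return solve_banded((1, 1), ab, rhs)
```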

  15. A simplified approach to the band gap correction of defect formation energies: Al, Ga, and In-doped ZnO

    NASA Astrophysics Data System (ADS)

    Saniz, R.; Xu, Y.; Matsubara, M.; Amini, M. N.; Dixit, H.; Lamoen, D.; Partoens, B.

    2013-01-01

    The calculation of defect levels in semiconductors within a density functional theory approach suffers greatly from the band gap problem. We propose a band gap correction scheme that is based on the separation of energy differences in electron addition and relaxation energies. We show that it can predict defect levels with reasonable accuracy, particularly in the case of defects with conduction band character, and yet is simple and computationally economical. We apply this method to ZnO doped with group III elements (Al, Ga, In). As expected from experiment, the results indicate that Zn substitutional doping is preferred over interstitial doping in Al-, Ga-, and In-doped ZnO, under both zinc-rich and oxygen-rich conditions. Further, all three dopants act as shallow donors, with the +1 charge state having the most advantageous formation energy. Also, doping effects on the electronic structure of ZnO are sufficiently mild that the fundamental band gap and the dispersion of the lowest conduction bands are little affected, which secures their n-type transparent conducting behavior. A comparison with the extrapolation method based on LDA+U calculations and with the Heyd-Scuseria-Ernzerhof (HSE) hybrid functional shows the reliability of the proposed scheme in predicting the thermodynamic transition levels in shallow donor systems.

  16. Quantum annealing correction with minor embedding

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Albash, Tameem; Paz-Silva, Gerardo; Hen, Itay; Lidar, Daniel A.

    2015-10-01

    Quantum annealing provides a promising route for the development of quantum optimization devices, but the usefulness of such devices will be limited in part by the range of implementable problems as dictated by hardware constraints. To overcome constraints imposed by restricted connectivity between qubits, a larger set of interactions can be approximated using minor embedding techniques whereby several physical qubits are used to represent a single logical qubit. However, minor embedding introduces new types of errors due to its approximate nature. We introduce and study quantum annealing correction schemes designed to improve the performance of quantum annealers in conjunction with minor embedding, thus leading to a hybrid scheme defined over an encoded graph. We argue that this scheme can be efficiently decoded using an energy minimization technique provided the density of errors does not exceed the per-site percolation threshold of the encoded graph. We test the hybrid scheme using a D-Wave Two processor on problems for which the encoded graph is a two-level grid and the Ising model is known to be NP-hard. The problems we consider are frustrated Ising model problem instances with "planted" (a priori known) solutions. Applied in conjunction with optimized energy penalties and decoding techniques, we find that this approach enables the quantum annealer to solve minor embedded instances with significantly higher success probability than it would without error correction. Our work demonstrates that quantum annealing correction can and should be used to improve the robustness of quantum annealing not only for natively embeddable problems but also when minor embedding is used to extend the connectivity of physical devices.
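
    In the simplest setting, the energy-minimization decoding described above reduces to a majority vote over the physical qubits of each minor-embedding chain. A minimal Python sketch under that simplification; the chain layout, spin encoding and tie-breaking are illustrative assumptions, not the authors' optimized decoder.

      def decode_logical(spins, chains):
          # spins: dict physical-qubit index -> +/-1 after annealing
          # chains: chains[k] lists the physical qubits encoding logical k
          logical = []
          for chain in chains:
              s = sum(spins[q] for q in chain)
              logical.append(1 if s >= 0 else -1)  # ties broken toward +1
          return logical

      # three logical spins, each embedded as a 3-qubit chain
      spins = {0: 1, 1: 1, 2: -1, 3: -1, 4: -1, 5: -1, 6: 1, 7: -1, 8: 1}
      print(decode_logical(spins, [[0, 1, 2], [3, 4, 5], [6, 7, 8]]))
      # -> [1, -1, 1]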

  17. Understanding the many-body expansion for large systems. III. Critical role of four-body terms, counterpoise corrections, and cutoffs.

    PubMed

    Liu, Kuan-Yu; Herbert, John M

    2017-10-28

    Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.
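
    The structure of the expansion is compact enough to sketch. Below is a minimal Python illustration of a two-body MBE with a distance-based cutoff (the paper carries the expansion to four-body terms with counterpoise corrections); energy() stands in for whatever subsystem electronic-structure call is used, and the centroid screening is an illustrative stand-in for the thresholds discussed above.

      import itertools
      import numpy as np

      def centroid(frag):
          return np.asarray(frag).mean(axis=0)

      def mbe2(fragments, energy, cutoff=8.0):
          # fragments: list of (n_atoms, 3) coordinate arrays
          # energy(tuple_of_fragments) -> float, e.g. a QM subsystem call
          e1 = [energy((f,)) for f in fragments]
          total = sum(e1)
          for (i, fi), (j, fj) in itertools.combinations(enumerate(fragments), 2):
              if np.linalg.norm(centroid(fi) - centroid(fj)) > cutoff:
                  continue  # distant pair: 2-body correction neglected
              total += energy((fi, fj)) - e1[i] - e1[j]
          return total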

  18. Understanding the many-body expansion for large systems. III. Critical role of four-body terms, counterpoise corrections, and cutoffs

    NASA Astrophysics Data System (ADS)

    Liu, Kuan-Yu; Herbert, John M.

    2017-10-01

    Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.

  19. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol-1.
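
    Both families of estimates are short formulas. A minimal Python sketch, assuming the DZ/TZ CCSD and TZ/QZ MP2 correlation energies are already computed; the two-point function inverts E(L) = E_CBS + B/L^alpha for E_CBS, and the additivity function implements CCSD/CBS ~ CCSD/TZ + [MP2/CBS - MP2/TZ]. Function names and the default exponent are illustrative.

      def two_point_cbs(e_lo, e_hi, l_lo, l_hi, alpha=3.0):
          # Solve E(L) = E_cbs + B / L**alpha from two basis-set levels.
          w_lo, w_hi = l_lo ** alpha, l_hi ** alpha
          return (e_hi * w_hi - e_lo * w_lo) / (w_hi - w_lo)

      def additivity_cbs(e_ccsd_tz, e_mp2_tz, e_mp2_qz, alpha=3.0):
          # MP2-based additivity: CCSD/TZ plus an MP2 basis-set correction
          # extrapolated from TZ (L=3) and QZ (L=4).
          e_mp2_cbs = two_point_cbs(e_mp2_tz, e_mp2_qz, 3, 4, alpha)
          return e_ccsd_tz + (e_mp2_cbs - e_mp2_tz)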

  20. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    NASA Astrophysics Data System (ADS)

    Spackman, Peter R.; Karton, Amir

    2015-05-01

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol-1.

  1. Identification of Unexpressed Premises and Argumentation Schemes by Students in Secondary School.

    ERIC Educational Resources Information Center

    van Eemeren, Frans H.; And Others

    1995-01-01

    Reports on exploratory empirical investigations on the performances of Dutch secondary education students in identifying unexpressed premises and argumentation schemes. Finds that, in the absence of any disambiguating contextual information, unexpressed major premises and non-syllogistic premises are more often correctly identified than…

  2. Design and experiment of FBG-based icing monitoring on overhead transmission lines with an improvement trial for windy weather.

    PubMed

    Zhang, Min; Xing, Yimeng; Zhang, Zhiguo; Chen, Qiguan

    2014-12-12

    A scheme for monitoring icing on overhead transmission lines with fiber Bragg grating (FBG) strain sensors is designed and evaluated both theoretically and experimentally. The influences of temperature and wind are considered. The results of field experiments using simulated ice loading on windless days indicate that the scheme is capable of monitoring icing thickness within 0-30 mm with an accuracy of ±1 mm, a load cell error of 0.0308v, a repeatability error of 0.3328v, and a hysteresis error of 0.026%. To improve the measurement during windy weather, a correction factor is added to the effective gravity acceleration, and the absolute FBG strain is replaced by its statistical average.

  3. AFFINE-CORRECTED PARADISE: FREE-BREATHING PATIENT-ADAPTIVE CARDIAC MRI WITH SENSITIVITY ENCODING

    PubMed Central

    Sharif, Behzad; Bresler, Yoram

    2013-01-01

    We propose a real-time cardiac imaging method with parallel MRI that allows for free breathing during imaging and does not require cardiac or respiratory gating. The method is based on the recently proposed PARADISE (Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding) scheme. The new acquisition method adapts the PARADISE k-t space sampling pattern according to an affine model of the respiratory motion. The reconstruction scheme involves multi-channel time-sequential imaging with time-varying channels. All model parameters are adapted to the imaged patient as part of the experiment and drive both data acquisition and cine reconstruction. Simulated cardiac MRI experiments using the realistic NCAT phantom show high quality cine reconstructions and robustness to modeling inaccuracies. PMID:24390159

  4. Information-reduced Carrier Synchronization of Iterative Decoded BPSK and QPSK using Soft Decision (Extrinsic) Feedback

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Valles, Esteban; Jones, Christopher

    2008-01-01

    This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.

  5. Design of a global soil moisture initialization procedure for the simple biosphere model

    NASA Technical Reports Server (NTRS)

    Liston, G. E.; Sud, Y. C.; Walker, G. K.

    1993-01-01

    Global soil moisture and land-surface evapotranspiration fields are computed using an analysis scheme based on the Simple Biosphere (SiB) soil-vegetation-atmosphere interaction model. The scheme is driven with observed precipitation and potential evapotranspiration, where the potential evapotranspiration is computed following the surface air temperature-potential evapotranspiration regression of Thornthwaite (1948). The observed surface air temperature is corrected to reflect potential (zero soil moisture stress) conditions by letting the ratio of actual transpiration to potential transpiration be a function of the normalized difference vegetation index (NDVI). Soil moisture, evapotranspiration, and runoff data are generated on a daily basis for a 10-year period, January 1979 through December 1988, using observed precipitation gridded at a 4 deg by 5 deg resolution.
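
    A minimal Python sketch of the kind of daily bucket update such an analysis scheme performs: precipitation fills a single soil store, evapotranspiration is the potential rate scaled by an NDVI-derived vegetation factor and a linear moisture stress term, and the excess runs off. The store size, the linear stress function and the variable names are illustrative assumptions, not the SiB formulation.

      def daily_water_balance(w, p, pet, beta_ndvi, w_max=150.0):
          # w: soil water (mm); p: precipitation (mm/day);
          # pet: potential evapotranspiration (mm/day);
          # beta_ndvi: 0..1 vegetation scaling derived from NDVI.
          et = pet * beta_ndvi * min(w / w_max, 1.0)  # moisture-stressed ET
          w = w + p - et
          runoff = max(w - w_max, 0.0)                # saturation excess
          w = min(max(w, 0.0), w_max)
          return w, et, runoff

      w = 100.0
      for day in range(10):                           # a rain day, then drying
          w, et, q = daily_water_balance(w, 20.0 if day == 0 else 0.0,
                                         pet=4.0, beta_ndvi=0.7)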

  6. OptoRadio: a method of wireless communication using orthogonal M-ary PSK (OMPSK) modulation

    NASA Astrophysics Data System (ADS)

    Gaire, Sunil Kumar; Faruque, Saleh; Ahamed, Md. Maruf

    2016-09-01

    A laser-based radio communication system, OptoRadio, using an orthogonal M-ary PSK modulation scheme is presented in this paper. In this scheme, when a block of data needs to be transmitted, the corresponding block of biorthogonal code is transmitted by means of multi-phase shift keying. At the receiver, two photodiodes are cross-coupled, so that the net output power due to ambient light is close to zero. The laser signal is transmitted into only one of the receivers; with all other signals cancelled out, the laser signal is overwhelmingly dominant. The detailed design, bit error correction capabilities, and bandwidth efficiency are presented to illustrate the concept.

  7. SeaWiFS Technical Report Series. Volume 41; Case Studies for SeaWiFS Calibration and Validation

    NASA Technical Reports Server (NTRS)

    Yeh, Eueng-nan; Barnes, Robert A.; Darzi, Michael; Kumar, Lakshmi; Early, Edward A.; Johnson, B. Carol; Mueller, James L.; Trees, Charles C.

    1997-01-01

    This document provides brief reports, or case studies, on a number of investigations sponsored by the Calibration and Validation Team (CVT) within the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project. Chapter 1 describes the calibration and characterization of the GSFC sphere, which was used in the recent recalibration of the SeaWiFS instrument. Chapter 2 presents a revision of the diffuse attenuation coefficient, K(490), algorithm based on the SeaWiFS wavelengths. Chapter 3 provides an implementation scheme for an algorithm to remove out-of-band radiance when using a sensor calibration based on a finite-width (truncated) spectral response function, e.g., between the 1% transmission points. Chapter 4 describes the implementation schemes for the stray light quality flag (local area coverage [LAC] and global area coverage [GAC]) and the LAC stray light correction.

  8. Communication: Charge-population based dispersion interactions for molecules and materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stöhr, Martin; Department Chemie, Technische Universität München, Lichtenbergstr. 4, D-85748 Garching; Michelitsch, Georg S.

    2016-04-21

    We introduce a system-independent method to derive effective atomic C6 coefficients and polarizabilities in molecules and materials purely from charge population analysis. This enables the use of dispersion-correction schemes in electronic structure calculations without recourse to electron-density partitioning schemes and expands their applicability to semi-empirical methods and tight-binding Hamiltonians. We show that the accuracy of our method is on par with established electron-density partitioning based approaches in describing intermolecular C6 coefficients as well as dispersion energies of weakly bound molecular dimers, organic crystals, and supramolecular complexes. We showcase the utility of our approach by incorporation of the recently developed many-body dispersion method [Tkatchenko et al., Phys. Rev. Lett. 108, 236402 (2012)] into the semi-empirical density functional tight-binding method and propose the latter as a viable technique to study hybrid organic-inorganic interfaces.
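
    Once per-atom C6 coefficients and polarizabilities are in hand (from charge populations here, or from density partitioning elsewhere), a pairwise dispersion energy follows from a combination rule and a damped -C6/R^6 sum. A minimal Python sketch using the Tkatchenko-Scheffler-style combination rule and Fermi damping; the damping parameters are illustrative and all quantities are assumed to be in atomic units.

      import numpy as np

      def c6_pair(c6_i, c6_j, a_i, a_j):
          # Combine homonuclear C6's using polarizability ratios.
          return 2.0 * c6_i * c6_j / (a_j / a_i * c6_i + a_i / a_j * c6_j)

      def e_disp(coords, c6, alpha, r_vdw, s_r=0.94, d=20.0):
          e, n = 0.0, len(coords)
          for i in range(n):
              for j in range(i + 1, n):
                  r = np.linalg.norm(coords[i] - coords[j])
                  r0 = s_r * (r_vdw[i] + r_vdw[j])
                  f = 1.0 / (1.0 + np.exp(-d * (r / r0 - 1.0)))  # Fermi damping
                  e -= f * c6_pair(c6[i], c6[j], alpha[i], alpha[j]) / r ** 6
          return e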

  9. Analytical minimization of synchronicity errors in stochastic identification

    NASA Astrophysics Data System (ADS)

    Bernal, D.

    2018-01-01

    An approach to minimize error due to synchronicity faults in stochastic system identification is presented. The scheme is based on shifting the time domain signals so the phases of the fundamental eigenvector estimated from the spectral density are zero. A threshold on the mean of the amplitude-weighted absolute value of these phases, above which signal shifting is deemed justified, is derived and found to be proportional to the first mode damping ratio. It is shown that synchronicity faults do not map precisely to phasor multiplications in subspace identification and that the accuracy of spectral-density-estimated eigenvectors, for inputs with arbitrary spectral density, decreases with increasing mode number. Selection of a corrective strategy based on signal alignment, instead of eigenvector adjustment using phasors, is shown to be the product of the foregoing observations. Simulations that include noise and non-classical damping suggest that the scheme can provide sufficient accuracy to be of practical value.

  10. Coding for reliable satellite communications

    NASA Technical Reports Server (NTRS)

    Gaarder, N. T.; Lin, S.

    1986-01-01

    This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.

  11. Leading-Color Fully Differential Two-Loop Soft Corrections to QCD Dipole Showers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dulat, Falko; Höche, Stefan; Prestel, Stefan

    We compute the next-to-leading order corrections to soft-gluon radiation differentially in the one-emission phase space. We show that their contribution to the evolution of color dipoles can be obtained in a modified subtraction scheme, such that both one- and two-emission terms are amenable to Monte-Carlo integration. The two-loop cusp anomalous dimension is recovered naturally upon integration over the full phase space. We present two independent implementations of the new algorithm in the two event generators Pythia and Sherpa, and we compare the resulting fully differential simulation to the CMW scheme.

  12. Recursive algorithms for bias and gain nonuniformity correction in infrared videos.

    PubMed

    Pipa, Daniel R; da Silva, Eduardo A B; Pagliari, Carla L; Diniz, Paulo S R

    2012-12-01

    Infrared focal-plane array (IRFPA) detectors suffer from fixed-pattern noise (FPN) that degrades image quality, which is also known as spatial nonuniformity. FPN is still a serious problem, despite recent advances in IRFPA technology. This paper proposes new scene-based correction algorithms for continuous compensation of bias and gain nonuniformity in FPA sensors. The proposed schemes use recursive least-squares and affine projection techniques that jointly compensate for both the bias and gain of each image pixel, presenting rapid convergence and robustness to noise. Experiments with synthetic and real IRFPA videos show that the proposed solutions are competitive with the state of the art in FPN reduction, presenting recovered images with higher fidelity.
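
    A minimal Python sketch of the scene-based idea in its simplest (LMS) form; the paper's recursive least-squares and affine-projection updates converge faster but share the same per-pixel gain/bias structure. Using a local spatial mean of the corrected frame as the desired signal is a common choice in scene-based nonuniformity correction and is an assumption here, not the authors' exact target.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def nuc_lms_step(frame, gain, bias, mu=1e-3, k=5):
          # One LMS update of per-pixel gain and bias from one raw frame.
          corrected = gain * frame + bias
          desired = uniform_filter(corrected, size=k)  # local-mean target
          err = desired - corrected
          gain += mu * err * frame                     # gradient step on gain
          bias += mu * err                             # gradient step on bias
          return corrected, gain, bias

      h, w = 240, 320                                  # run over a video,
      gain, bias = np.ones((h, w)), np.zeros((h, w))   # starting from identity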

  13. The scheme of a blindless positioning structure with parallel adjusting tables and swing rods for 4000 optical fibres of LAMOST.

    NASA Astrophysics Data System (ADS)

    Yunguo, Gao

    1996-12-01

    This scheme is a structure for positioning the 4000 optical fibres of the LAMOST telescope. It adopts swing rods adjusted in parallel and simultaneously by many small tables. Problems such as the positioning accuracy of the optical fibres, the time needed to readjust all 4000 optical fibres, and error correction have been considered in the scheme. The structure has no blind area.

  14. Performance analysis of a concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Lin, S.; Kasami, T.

    1983-01-01

    A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.

  15. Probability of undetected error after decoding for a concatenated coding scheme

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Lin, S.

    1984-01-01

    A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.

  16. Optimization of the linear-scaling local natural orbital CCSD(T) method: Redundancy-free triples correction using Laplace transform.

    PubMed

    Nagy, Péter R; Kállay, Mihály

    2017-06-07

    An improved algorithm is presented for the evaluation of the (T) correction as a part of our local natural orbital (LNO) coupled-cluster singles and doubles with perturbative triples [LNO-CCSD(T)] scheme [Z. Rolik et al., J. Chem. Phys. 139, 094105 (2013)]. The new algorithm is an order of magnitude faster than our previous one and removes the bottleneck related to the calculation of the (T) contribution. First, a numerical Laplace transformed expression for the (T) fragment energy is introduced, which requires on average 3 to 4 times fewer floating point operations with negligible compromise in accuracy, eliminating the redundancy among the evaluated triples amplitudes. Second, an additional speedup factor of 3 is achieved by the optimization of our canonical (T) algorithm, which is also executed in the local case. These developments can also be integrated into canonical as well as alternative fragmentation-based local CCSD(T) approaches with minor modifications. As demonstrated by our benchmark calculations, the evaluation of the new Laplace transformed (T) correction can always be performed if the preceding CCSD iterations are feasible, and the new scheme enables the computation of LNO-CCSD(T) correlation energies with at least triple-zeta quality basis sets for realistic three-dimensional molecules with more than 600 atoms and 12 000 basis functions in a matter of days on a single processor.
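
    The enabling identity is the numerical Laplace transform of an energy denominator, 1/x = int_0^inf exp(-x t) dt, replaced by a short exponential sum. A minimal Python demonstration using Gauss-Laguerre quadrature; production implementations instead optimize the points and weights for the actual denominator range, so this is an illustration of the idea only.

      import numpy as np

      t, w = np.polynomial.laguerre.laggauss(8)  # nodes/weights for e^-t weight

      def inv_by_laplace(x):
          # 1/x = int_0^inf e^{-x t} dt = int_0^inf e^{-t} e^{-(x-1) t} dt
          #     ~ sum_k w_k * exp(-(x - 1) * t_k)
          return float(np.sum(w * np.exp(-(x - 1.0) * t)))

      for x in (0.5, 1.0, 4.0, 10.0):
          print(x, inv_by_laplace(x), 1.0 / x)   # quadrature vs exact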

  17. High performance interconnection between high data rate networks

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.

    1992-01-01

    The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types including synchronous and asynchronous traffic, large, bursty messages, short, self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous, and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway. Identification enables resequencing and changes in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first is for a virtual parallel circuit which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes 1/3 less time than equivalent end-to-end error correction under similar conditions.

  18. Optimization of the linear-scaling local natural orbital CCSD(T) method: Redundancy-free triples correction using Laplace transform

    PubMed Central

    2017-01-01

    An improved algorithm is presented for the evaluation of the (T) correction as a part of our local natural orbital (LNO) coupled-cluster singles and doubles with perturbative triples [LNO-CCSD(T)] scheme [Z. Rolik et al., J. Chem. Phys. 139, 094105 (2013)]. The new algorithm is an order of magnitude faster than our previous one and removes the bottleneck related to the calculation of the (T) contribution. First, a numerical Laplace transformed expression for the (T) fragment energy is introduced, which requires on average 3 to 4 times fewer floating point operations with negligible compromise in accuracy, eliminating the redundancy among the evaluated triples amplitudes. Second, an additional speedup factor of 3 is achieved by the optimization of our canonical (T) algorithm, which is also executed in the local case. These developments can also be integrated into canonical as well as alternative fragmentation-based local CCSD(T) approaches with minor modifications. As demonstrated by our benchmark calculations, the evaluation of the new Laplace transformed (T) correction can always be performed if the preceding CCSD iterations are feasible, and the new scheme enables the computation of LNO-CCSD(T) correlation energies with at least triple-zeta quality basis sets for realistic three-dimensional molecules with more than 600 atoms and 12 000 basis functions in a matter of days on a single processor. PMID:28576082

  19. An Indoor Positioning Method for Smartphones Using Landmarks and PDR.

    PubMed

    Wang, Xi; Jiang, Mingxing; Guo, Zhongwen; Hu, Naijun; Sun, Zhongwei; Liu, Jing

    2016-12-15

    Recently, location-based services (LBS) have become increasingly popular in indoor environments. Among the indoor positioning techniques providing LBS, a fusion approach combining WiFi-based and pedestrian dead reckoning (PDR) techniques is drawing more and more attention from researchers. Although this fusion method performs well in some cases, it still has some limitations, such as heavy computation and inconvenience for real-time use. In this work, we study the map information of a given indoor environment, analyze variations of WiFi received signal strength (RSS), define several kinds of indoor landmarks, and then utilize these landmarks to correct the accumulated errors derived from PDR. This fusion scheme, called Landmark-aided PDR (LaP), proves to be lightweight and suitable for real-time implementation by running an Android application designed for the experiment. We compared LaP with other PDR-based fusion approaches. Experimental results show that the proposed scheme can achieve a significant improvement with an average accuracy of 2.17 m.
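
    A minimal Python sketch of the fusion logic: positions are dead-reckoned step by step from heading and stride length, and whenever a landmark with known coordinates is recognized (from its WiFi RSS signature, say) the position snaps to it, discarding the accumulated drift. The step length, the landmark trigger and the simulated walk are illustrative assumptions.

      import math

      def pdr_step(pos, heading_rad, step_len=0.7):
          x, y = pos          # advance one detected step along the heading
          return (x + step_len * math.cos(heading_rad),
                  y + step_len * math.sin(heading_rad))

      def fuse(pos, landmark):
          # Reset accumulated drift when a known landmark is recognized.
          return landmark if landmark is not None else pos

      # simulated walk: (heading in radians, landmark fix or None)
      walk = [(0.0, None), (0.0, None), (0.0, (1.5, 0.0)), (math.pi / 2, None)]
      pos = (0.0, 0.0)
      for heading, landmark in walk:
          pos = fuse(pdr_step(pos, heading), landmark)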

  20. An Indoor Positioning Method for Smartphones Using Landmarks and PDR †

    PubMed Central

    Wang, Xi; Jiang, Mingxing; Guo, Zhongwen; Hu, Naijun; Sun, Zhongwei; Liu, Jing

    2016-01-01

    Recently, location-based services (LBS) have become increasingly popular in indoor environments. Among the indoor positioning techniques providing LBS, a fusion approach combining WiFi-based and pedestrian dead reckoning (PDR) techniques is drawing more and more attention from researchers. Although this fusion method performs well in some cases, it still has some limitations, such as heavy computation and inconvenience for real-time use. In this work, we study the map information of a given indoor environment, analyze variations of WiFi received signal strength (RSS), define several kinds of indoor landmarks, and then utilize these landmarks to correct the accumulated errors derived from PDR. This fusion scheme, called Landmark-aided PDR (LaP), proves to be lightweight and suitable for real-time implementation by running an Android application designed for the experiment. We compared LaP with other PDR-based fusion approaches. Experimental results show that the proposed scheme can achieve a significant improvement with an average accuracy of 2.17 m. PMID:27983670

  1. Fault Mitigation Schemes for Future Spaceflight Multicore Processors

    NASA Technical Reports Server (NTRS)

    Alexander, James W.; Clement, Bradley J.; Gostelow, Kim P.; Lai, John Y.

    2012-01-01

    Future planetary exploration missions demand significant advances in on-board computing capabilities over current avionics architectures based on a single-core processing element. The state-of-the-art multi-core processor provides much promise in meeting such challenges while introducing new fault tolerance problems when applied to space missions. Software-based schemes are presented in this paper that can achieve system-level fault mitigation beyond that provided by radiation-hard-by-design (RHBD). For mission- and time-critical applications such as Terrain Relative Navigation (TRN) for planetary or small body navigation and landing, a range of fault tolerance methods can be adapted by the application. The software methods being investigated include Error Correction Code (ECC) for data packet routing between cores, virtual network routing, Triple Modular Redundancy (TMR), and Algorithm-Based Fault Tolerance (ABFT). A robust fault tolerance framework that provides fail-operational behavior under hard real-time constraints and graceful degradation will be demonstrated using TRN executing on a commercial Tilera(R) processor with simulated fault injections.

  2. Optimal port-based teleportation

    NASA Astrophysics Data System (ADS)

    Mozrzymas, Marek; Studziński, Michał; Strelchuk, Sergii; Horodecki, Michał

    2018-05-01

    The deterministic port-based teleportation (dPBT) protocol is a scheme where a quantum state is guaranteed to be transferred to another system without unitary correction. We characterise the best achievable performance of the dPBT when both the resource state and the measurement are optimised. Surprisingly, the best possible fidelity for an arbitrary number of ports and dimension of the teleported state is given by the largest eigenvalue of a particular matrix, the Teleportation Matrix. It encodes the relationship between a certain set of Young diagrams and emerges as the optimal solution to the relevant semidefinite programme.

  3. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improve mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operator and planner's experience and is used for generalized optimization packages. The KB optimization software package is capable of improving the overall planning process which includes correction of given violations. The method was demonstrated on a large scale power system discussed in the paper.

  4. On the quantum-channel capacity for orbital angular momentum-based free-space optical communications.

    PubMed

    Zhang, Yequn; Djordjevic, Ivan B; Gao, Xin

    2012-08-01

    Inspired by recent demonstrations of orbital angular momentum (OAM)-based single-photon communications, we propose two quantum-channel models: (i) the multidimensional quantum-key distribution model and (ii) the quantum teleportation model. Both models employ operator-sum representation for Kraus operators derived from OAM eigenket transition probabilities. These models are highly important for the future development of quantum error correction schemes to extend the transmission distance and improve data rates of OAM quantum communications. By using these models, we calculate the corresponding quantum-channel capacities in the presence of atmospheric turbulence.

  5. Long distance quantum communication using quantum error correction

    NASA Technical Reports Server (NTRS)

    Gingrich, R. M.; Lee, H.; Dowling, J. P.

    2004-01-01

    We describe a quantum error correction scheme that can increase the effective absorption length of the communication channel. This device can play the role of a quantum transponder when placed in series, or a cyclic quantum memory when inserted in an optical loop.

  6. On the impact of topography and building mask on time varying gravity due to local hydrology

    NASA Astrophysics Data System (ADS)

    Deville, S.; Jacob, T.; Chéry, J.; Champollion, C.

    2013-01-01

    We use 3 yr of surface absolute gravity measurements at three sites on the Larzac plateau (France) to quantify the changes induced by topography and buildings on gravity time-series, with respect to an idealized infinite slab approximation. Indeed, local topography and the buildings housing ground-based gravity measurements have an effect on the distribution of water storage changes, therefore affecting the associated gravity signal. We first calculate the effects of surrounding topography and building dimensions on the gravity attraction for a uniform layer of water. We show that a gravimetric interpretation of water storage change using an infinite slab, the so-called Bouguer approximation, is generally not suitable. We propose to split the time varying gravity signal into two parts: (1) a surface component including topographic and building effects, and (2) a deep component associated with underground water transfer. A reservoir modelling scheme is herein presented to remove the local site effects and to invert for the effective hydrological properties of the unsaturated zone. We show that the effective time constants associated with water transfer vary greatly from site to site. We propose that our modelling scheme can be used to correct for the local site effects on gravity at any site presenting a departure from a flat topography. Depending on the site, the corrected signal can exceed measured values by 5-15 μGal, corresponding to 120-380 mm of water using the Bouguer slab formula. Our approach only requires the knowledge of daily precipitation corrected for evapotranspiration. Therefore, it can be a useful tool to correct any kind of gravimetric time-series data.
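
    The reference conversion used above is the Bouguer slab, delta_g = 2*pi*G*rho*h. A minimal Python sketch confirming that 5-15 microGal corresponds to roughly 120-360 mm of fresh water under that approximation, consistent with the range quoted in the abstract:

      import math

      G, RHO_W = 6.674e-11, 1000.0      # SI units; fresh water density

      def ugal_per_mm_water():
          dg = 2.0 * math.pi * G * RHO_W * 1e-3   # m/s^2 per mm of water
          return dg * 1e8                          # 1 m/s^2 = 1e8 microGal

      k = ugal_per_mm_water()                      # ~0.042 microGal per mm
      for dg in (5.0, 15.0):
          print(dg, "uGal ->", round(dg / k), "mm of water")  # ~119, ~358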

  7. Design and implementation of the one-step MSD adder of optical computer.

    PubMed

    Song, Kai; Yan, Liping

    2012-03-01

    On the basis of the symmetric encoding algorithm for the modified signed-digit (MSD) representation, a 7×7 truth table that can be realized with optical methods was developed. Based on this truth table, the optical path structures and circuit implementations of the one-step MSD adder of the ternary optical computer (TOC) were designed. Experiments show that the scheme is correct, feasible, and efficient. © 2012 Optical Society of America

  8. Conductivity Cell Thermal Inertia Correction Revisited

    NASA Astrophysics Data System (ADS)

    Eriksen, C. C.

    2012-12-01

    Salinity measurements made with a CTD (conductivity-temperature-depth instrument) rely on accurate estimation of water temperature within their conductivity cell. Lueck (1990) developed a theoretical framework for heat transfer between the cell body and water passing through it. Based on this model, Lueck and Picklo (1990) introduced the practice of correcting for cell thermal inertia by filtering a temperature time series using two parameters, an amplitude α and a decay time constant τ, a practice now widely used. Typically these two parameters are chosen for a given cell configuration and internal flushing speed by a statistical method applied to a particular data set. Here, thermal inertia correction theory has been extended to apply to flow speeds spanning well over an order of magnitude, both within and outside a conductivity cell, to provide predictions of α and τ from cell geometry and composition. The extended model enables thermal inertia correction for the variable flows encountered by conductivity cells on autonomous gliders and floats, as well as tethered platforms. The length scale formed as the product of the cell encounter speed of isotherms, α, and τ can be used to gauge the size of the temperature correction for a given thermal stratification. For cells flushed by dynamic pressure variation induced by platform motion, this length varies by less than a factor of 2 over more than a decade of speed variation. The magnitude of correction for free-flow flushed sensors is comparable to that of pumped cells, but at an order of magnitude in energy savings. Flow conditions around a cell's exterior are found to be of comparable importance to flushing speed for the thermal inertia response. Simplification of cell thermal response to a single normal mode is most valid at slow speed. Error in thermal inertia estimation arises from both neglect of higher modes and numerical discretization of the correction scheme, both of which can be easily quantified. Consideration of thermal inertia correction enables assessment of various CTD sampling schemes. Spot sampling by pumping a cell intermittently provides particular challenges, and may lead to biases in inferred salinity that are comparable to climate signals reported from profiling float arrays.
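
    A minimal Python sketch of the discrete recursive filter form of the Lueck-Picklo correction, as commonly coded in CTD processing toolboxes; the amplitude alpha, time constant tau and the sign convention for applying the correction are cell-specific, and the values here are illustrative only.

      import numpy as np

      def cell_thermal_mass(temp, dt, alpha=0.04, tau=8.0):
          # temp: temperature series (deg C) sampled every dt seconds.
          # Returns the recursive correction term associated with the
          # conductivity cell's thermal inertia.
          t = np.asarray(temp, dtype=float)
          a = 2.0 * alpha / (dt / tau + 2.0)
          b = 1.0 - 2.0 * a / alpha
          ctm = np.zeros_like(t)
          for n in range(1, len(t)):
              ctm[n] = -b * ctm[n - 1] + a * (t[n] - t[n - 1])
          return ctm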

  9. A downscaling method for the assessment of local climate change

    NASA Astrophysics Data System (ADS)

    Bruno, E.; Portoghese, I.; Vurro, M.

    2009-04-01

    The use of complementary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their space resolution (hundreds of kilometres) is too coarse and not adequate to describe the variability of extreme events at basin scale (Burlando and Rosso, 2002). Bridging the space-time gap between the climate scenarios and the usual scale of the inputs for hydrological prediction models is a fundamental requisite for the evaluation of climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit with climate observations. Identifying local climate scenarios for impact analysis implies the definition of a more detailed local scenario by downscaling GCM or RCM results. Among the output correction methods we consider the statistical approach by Déqué (2007), reported as a 'variable correction method', in which the correction of model outputs is obtained by a function built with the observation dataset and operating a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields the Q-Q transform is not able to correct the temporal properties of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed based on a stochastic description of the arrival-duration-intensity processes in coherence with the Poissonian Rectangular Pulse (PRP) scheme (Eagleson, 1972). In this proposed approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. Consequently, the corrected PRP parameters are used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistency of daily observations for the reference period. Then the PRP parameters are forced with the GCM scenarios to generate local-scale rainfall records for the 21st century. The statistical parameters characterizing daily storm occurrence, storm intensity, and duration needed to apply the PRP scheme are considered among the STARDEX collection of extreme indices.
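
    A minimal Python sketch of the empirical quantile-quantile transform underlying the variable correction method: each model value is mapped through the model's empirical CDF and back through the observed inverse CDF, using reference-period samples. Array names are illustrative.

      import numpy as np

      def qq_correct(model_values, model_ref, obs_ref):
          # F_obs^{-1}(F_model(x)), built from reference-period samples.
          model_sorted = np.sort(model_ref)
          obs_sorted = np.sort(obs_ref)
          p = np.searchsorted(model_sorted, model_values) / len(model_sorted)
          return np.quantile(obs_sorted, np.clip(p, 0.0, 1.0))

      # usage: corrected = qq_correct(model_future, model_hist, obs_hist)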

  10. High-Resolution NU-WRF Simulations of a Deep Convective-Precipitation System During MC3E. Part 1; Comparisons Between Goddard Microphysics Schemes and Observations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Wu, Di; Lang, Stephen; Chern, Jiundar; Peters-Lidard, Christa; Fridlind, Ann; Matsui, Toshihisa

    2015-01-01

    The Goddard microphysics scheme was recently improved by adding a 4th ice class (frozen drops/hail). This new 4ICE scheme was implemented and tested in the Goddard Cumulus Ensemble model (GCE) for an intense continental squall line and a moderate, less-organized continental case. Simulated peak radar reflectivity profiles were improved both in intensity and shape for both cases, as were the overall reflectivity probability distributions versus observations. In this study, the new Goddard 4ICE scheme is implemented into the regional-scale NASA Unified Weather Research and Forecasting model (NU-WRF) and tested on an intense mesoscale convective system that occurred during the Midlatitude Continental Convective Clouds Experiment (MC3E). The NU-WRF simulated radar reflectivities, rainfall intensities, and vertical and horizontal structure using the new 4ICE scheme agree with observations as well as or significantly better than when using previous versions of the Goddard 3ICE (graupel or hail) schemes. In the 4ICE scheme, the bin microphysics-based rain evaporation correction produces more erect convective cores, while modification of the unrealistic collection of ice by dry hail produces narrow and intense cores, allowing more slow-falling snow to be transported rearward. Together with a revised snow size mapping, the 4ICE scheme produces a more horizontally stratified trailing stratiform region with a broad, more coherent light rain area. In addition, the NU-WRF 4ICE simulated radar reflectivity distributions are consistent with and generally superior to those using the GCE, due to the less restrictive open lateral boundaries.

  11. A scheme based on ICD-10 diagnoses and drug prescriptions to stage chronic kidney disease severity in healthcare administrative records.

    PubMed

    Friberg, Leif; Gasparini, Alessandro; Carrero, Juan Jesus

    2018-04-01

    Information about renal function is important for drug safety studies using administrative health databases. However, serum creatinine values are seldom available in these registries. Our aim was to develop and test a simple scheme for stratification of renal function without access to laboratory test results. Our scheme uses registry data about diagnoses, contacts, dialysis and drug use. We validated the scheme in the Stockholm CREAtinine Measurements (SCREAM) project using information on approximately 1.1 million individuals residing in the Stockholm County who underwent calibrated creatinine testing during 2006-11, linked with data about health care contacts and filled drug prescriptions. Estimated glomerular filtration rate (eGFR) was calculated with the CKD-EPI formula and used as the gold standard for validation of the scheme. When the scheme classified patients as having eGFR <30 mL/min/1.73 m2, it was correct in 93.5% of cases. The specificity of the scheme was close to 100% in all age groups. The sensitivity was poor, ranging from 68.2% in the youngest age quartile, down to 10.7% in the oldest age quartile. Age-related decline in renal function makes a large proportion of elderly patients fall into the chronic kidney disease (CKD) range without receiving CKD diagnoses, as this often is seen as part of normal ageing. In the absence of renal function tests, our scheme may be of value for identifying patients with moderate and severe CKD on the basis of diagnostic and prescription data for use in studies of large healthcare databases.
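
    For reference, the gold standard against which the scheme is validated is itself a short formula. A minimal Python sketch of the 2009 CKD-EPI creatinine equation (the race coefficient is omitted here) together with the usual KDIGO eGFR stages; the registry-based scheme above substitutes for this when creatinine is unavailable.

      def ckd_epi_egfr(scr_mg_dl, age, female):
          # 2009 CKD-EPI creatinine equation, mL/min/1.73 m2.
          kappa = 0.7 if female else 0.9
          alpha = -0.329 if female else -0.411
          egfr = (141.0
                  * min(scr_mg_dl / kappa, 1.0) ** alpha
                  * max(scr_mg_dl / kappa, 1.0) ** -1.209
                  * 0.993 ** age)
          return egfr * (1.018 if female else 1.0)

      def kdigo_stage(egfr):
          # KDIGO GFR categories by eGFR alone.
          for cut, stage in ((90, "G1"), (60, "G2"), (45, "G3a"),
                             (30, "G3b"), (15, "G4")):
              if egfr >= cut:
                  return stage
          return "G5"

      print(kdigo_stage(ckd_epi_egfr(1.4, age=72, female=True)))  # G3b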

  12. Projecting future precipitation and temperature at sites with diverse climate through multiple statistical downscaling schemes

    NASA Astrophysics Data System (ADS)

    Vallam, P.; Qin, X. S.

    2017-10-01

    Anthropogenically driven climate change would affect the global ecosystem and is becoming a worldwide concern. Numerous studies have been undertaken to determine the future trends of meteorological variables at different scales. Despite these studies, there remains significant uncertainty in the prediction of future climates. To examine the uncertainty arising from using different schemes to downscale the meteorological variables for future horizons, projections from different statistical downscaling schemes were examined. These schemes included the statistical downscaling method (SDSM), change factors incorporated with LARS-WG, and the bias corrected disaggregation (BCD) method. Global circulation models (GCMs) based on CMIP3 (HadCM3) and CMIP5 (CanESM2) were utilized to perturb the changes in the future climate. Five study sites (i.e., Alice Springs, Edmonton, Frankfurt, Miami, and Singapore) with diverse climatic conditions were chosen for examining the spatial variability of applying various statistical downscaling schemes. The study results indicated that regions experiencing heavy precipitation intensities were most likely to demonstrate divergence between the predictions from various statistical downscaling methods. Also, the variance computed in projecting the weather extremes indicated the uncertainty derived from the selection of downscaling tools and climate models. This study could help gain an improved understanding of the features of different downscaling approaches and the overall downscaling uncertainty.

  13. Coded throughput performance simulations for the time-varying satellite channel. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Han, LI

    1995-01-01

    The design of a reliable satellite communication link involving data transfer from a small, low-orbit satellite to a ground station, through a geostationary satellite, was examined. In such a scenario, the received signal power to noise density ratio increases as the transmitting low-orbit satellite comes into view, and then decreases as it departs, resulting in a short-duration, time-varying communication link. The optimal values of the small satellite antenna beamwidth, signaling rate, modulation scheme, and the theoretical link throughput (in bits per day) have been determined. The goal of this thesis is to choose a practical coding scheme which maximizes the daily link throughput while satisfying a prescribed probability of error requirement. We examine the throughput of both fixed-rate and variable-rate concatenated forward error correction (FEC) coding schemes for the additive white Gaussian noise (AWGN) channel, and then examine the effect of radio frequency interference (RFI) on the best coding scheme among them. Interleaving is used to mitigate degradation due to RFI. It was found that the variable-rate concatenated coding scheme could achieve 74 percent of the theoretical throughput, equivalent to 1.11 Gbits/day based on the cutoff rate R0. For comparison, 87 percent is achievable for the AWGN-only case.

  14. An Iterative Information-Reduced Quadriphase-Shift-Keyed Carrier Synchronization Scheme Using Decision Feedback for Low Signal-to-Noise Ratio Applications

    NASA Technical Reports Server (NTRS)

    Simon, M.; Tkacenko, A.

    2006-01-01

    In a previous publication [1], an iterative closed-loop carrier synchronization scheme for binary phase-shift keyed (BPSK) modulation was proposed that was based on feeding back data decisions to the input of the loop, the purpose being to remove the modulation prior to carrier synchronization as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. The idea there was that, with sufficient independence between the received data and the decisions on it that are fed back (as would occur in an error-correction coding environment with sufficient decoding delay), a pure tone in the presence of noise would ultimately be produced (after sufficient iteration and low enough error probability) and thus could be tracked without any squaring loss. This article demonstrates that, with some modification, the same idea of iterative information reduction through decision feedback can be applied to quadrature phase-shift keyed (QPSK) modulation, something that was mentioned in the previous publication but never pursued.

  15. The large discretization step method for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  16. Evaluating the performance of Sentinel-3 SRAL SAR Altimetry in the Coastal and Open Ocean, and developing improved retrieval methods - The ESA SCOOP Project.

    NASA Astrophysics Data System (ADS)

    Benveniste, J.; Cotton, D.; Moreau, T.; Varona, E.; Roca, M.; Cipollini, P.; Cancet, M.; Martin, F.; Fenoglio-Marc, L.; Naeije, M.; Fernandes, J.; Restano, M.; Ambrozio, A.

    2016-12-01

    The ESA Sentinel-3 satellite, launched in February 2016 as a part of the Copernicus programme, is the second satellite to operate a SAR mode altimeter. The Sentinel-3 Synthetic Aperture Radar Altimeter (SRAL) is based on the heritage from CryoSat-2, but this time complemented by a Microwave Radiometer (MWR) to provide a wet troposphere correction, and operating at Ku and C bands to provide an accurate along-track ionospheric correction. Together this instrument package, including both GPS and DORIS instruments for accurate positioning, allows accurate measurements of sea surface height over the ocean, as well as measurements of significant wave height and surface wind speed. SCOOP (SAR Altimetry Coastal & Open Ocean Performance) is a project funded under the ESA SEOM (Scientific Exploitation of Operational Missions) Programme Element, started in September 2015, to characterise the expected performance of Sentinel-3 SRAL SAR mode altimeter products in the coastal zone and open ocean, and then to develop and evaluate enhancements to the baseline processing scheme in terms of improvements to ocean measurements. There is also a work package to develop and evaluate an improved wet troposphere correction for Sentinel-3, based on the measurements from the on-board MWR, further enhanced mostly in the coastal and polar regions using third party data, and to provide recommendations for use. At the end of the project, recommendations for further developments and implementations will be provided through a scientific roadmap. In this presentation we provide an overview of the SCOOP project, highlighting the key deliverables and discussing the potential impact of the results in terms of the application of delay-Doppler (SAR) altimeter measurements over the open ocean and coastal zone. We also present the initial results from the project, including: key findings from a review of the current state-of-the-art for SAR altimetry; specification of the initial "reference" delay-Doppler and echo modelling/retracking processing schemes; evaluation of the initial Test Data Set in the open ocean and coastal zone; and an overview of modifications planned to the reference delay-Doppler and echo modelling/retracking processing schemes.

  17. Erratum: 2-Bromo-1-(4-methyl-phen-yl)-3-phenyl-prop-2-en-1-one. Corrigendum.

    PubMed

    Fun, Hoong-Kun; Jebas, Samuel Robinson; Patil, P S; Karthikeyan, M S; Dharmaprakash, S M

    2008-11-13

    The chemical name in the title and the scheme of the paper by Fun, Jebas, Patil, Karthikeyan & Dharmaprakash [Acta Cryst. (2008), E64, o1559] are corrected. [This corrects the article DOI: 10.1107/S1600536808022289.]

  18. Advanced digital signal processing for short-haul and access network

    NASA Astrophysics Data System (ADS)

    Zhang, Junwen; Yu, Jianjun; Chi, Nan

    2016-02-01

    Digital signal processing (DSP) has recently proved to be a successful technology for high-speed, high-spectral-efficiency optical short-haul and access networks, enabling high performance through digital equalization and compensation. In this paper, we investigate advanced DSP at the transmitter and receiver sides for signal pre-equalization and post-equalization in an optical access network. A novel DSP-based digital and optical pre-equalization scheme is proposed for bandwidth-limited high-speed short-distance communication systems, based on the feedback of receiver-side adaptive equalizers such as the least-mean-squares (LMS) algorithm and the constant- and multi-modulus algorithms (CMA, MMA). Based on this scheme, we experimentally demonstrate 400GE on a single optical carrier using the highest ETDM 120-GBaud PDM-PAM-4 signal, one external modulator and coherent detection. A line rate of 480 Gb/s is achieved, which accommodates 20% forward-error-correction (FEC) overhead while keeping the 400-Gb/s net information rate. The performance after fiber transmission shows a large margin for both short-range and metro/regional networks. We also extend the advanced DSP to short-haul optical access networks by using high-order QAM. We propose and demonstrate a high-speed multi-band CAP-WDM-PON system based on intensity modulation, direct detection and digital equalization. A hybrid modified cascaded MMA post-equalization scheme is used to equalize the multi-band CAP-mQAM signals. Using this scheme, we successfully demonstrate a 550-Gb/s high-capacity WDM-PON system with 11 WDM channels, 55 sub-bands, and 10 Gb/s per user in the downstream over 40 km of SMF.
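
    The receiver-side adaptive equalization that the pre-equalizer feeds back from can be illustrated with a standard LMS FIR equalizer, shown below with a training phase followed by decision-directed operation on PAM-4 levels. Tap count, step size and initialisation are illustrative; this is generic textbook LMS, not the authors' exact implementation.

    ```python
    import numpy as np

    def lms_equalizer(rx, train, n_taps=11, mu=1e-3):
        """Adapt a linear FIR equalizer with the LMS rule w <- w + mu*e*conj(x)."""
        w = np.zeros(n_taps, dtype=complex)
        w[n_taps // 2] = 1.0                    # centre-spike initialisation
        out = np.zeros(len(rx), dtype=complex)
        levels = np.array([-3.0, -1.0, 1.0, 3.0])  # PAM-4 decision levels
        for n in range(n_taps, len(rx)):
            x = rx[n - n_taps:n][::-1]          # tap-delay-line vector
            y = np.dot(w, x)
            out[n] = y
            if n < len(train):                  # training phase
                e = train[n] - y
            else:                               # decision-directed phase
                e = levels[np.argmin(np.abs(levels - y.real))] - y
            w += mu * e * np.conj(x)
        return out, w

    # usage sketch: y_eq, taps = lms_equalizer(received_samples, training_symbols)
    ```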

  19. Devil's vortex Fresnel lens phase masks on an asymmetric cryptosystem based on phase-truncation in gyrator wavelet transform domain

    NASA Astrophysics Data System (ADS)

    Singh, Hukum

    2016-06-01

    An asymmetric scheme has been proposed for optical double-image encryption in the gyrator wavelet transform (GWT) domain. Grayscale and binary images are encrypted separately using double random phase encoding (DRPE) in the GWT domain. Phase masks based on devil's vortex Fresnel lenses (DVFLs) and random phase masks (RPMs) are jointly used in the spatial as well as the Fourier plane. The images to be encrypted are first gyrator transformed and then single-level discrete wavelet transformed (DWT) to decompose them into the LL, HL, LH and HH matrices of approximation, horizontal, vertical and diagonal coefficients. The resulting DWT coefficients are multiplied by other RPMs and the results are applied to the inverse discrete wavelet transform (IDWT) to obtain the encrypted images. The images are recovered from their corresponding encrypted images by using the correct parameters of the GWT and DVFL; the digital implementation has been performed using MATLAB 7.6.0 (R2008a). The mother wavelet family, DVFL and gyrator transform orders associated with the GWT are extra keys that cause difficulty to an attacker. Thus, the scheme is more secure than conventional techniques. The efficacy of the proposed scheme is verified by computing the mean-squared error (MSE) between the recovered and original images. The sensitivity of the proposed scheme to encryption parameters and noise attacks is also verified.
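
    The single-level 2-D DWT step named above can be reproduced with PyWavelets (assuming the `pywt` package is available; pywt labels the sub-bands cA, cH, cV, cD rather than LL, LH, HL, HH). The random-phase multiplication of the actual scheme is only indicated by a comment:

    ```python
    import numpy as np
    import pywt

    image = np.random.rand(256, 256)     # stand-in for a gyrator-transformed image
    # Single-level 2-D DWT: approximation (LL) plus detail (LH, HL, HH) sub-bands.
    LL, (LH, HL, HH) = pywt.dwt2(image, 'haar')
    # In the encryption scheme, each sub-band would be multiplied by a random
    # phase mask here before the inverse transform.
    recovered = pywt.idwt2((LL, (LH, HL, HH)), 'haar')
    assert np.allclose(recovered, image)  # lossless round trip without the masks
    ```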

  20. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
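
    A minimal sketch of the voxel-specific Gaussian idea: each lateral voxel carries its own Gaussian width rather than one width shared by the whole lateral profile. The widths below are invented for illustration; in the paper they come from re-initializing the fluence deviation on the effective surface.

    ```python
    import numpy as np

    def lateral_fluence(x, sigma):
        """Normalised Gaussian lateral fluence with a per-voxel width sigma
        (same shape as x), instead of a single shared width."""
        x, sigma = np.asarray(x, float), np.asarray(sigma, float)
        return np.exp(-0.5 * (x / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

    x = np.linspace(-20.0, 20.0, 9)                  # lateral voxel positions [mm]
    sigma = np.where(np.abs(x) < 10.0, 4.0, 6.5)     # hypothetical widths [mm]
    print(lateral_fluence(x, sigma))
    ```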

  1. Validation of satellite-based rainfall in Kalahari

    NASA Astrophysics Data System (ADS)

    Lekula, Moiteela; Lubczynski, Maciek W.; Shemang, Elisha M.; Verhoef, Wouter

    2018-06-01

    Water resources management in arid and semi-arid areas is hampered by insufficient rainfall data, typically obtained from sparsely distributed rain gauges. Satellite-based rainfall estimates (SREs) are alternative sources of such data in these areas. In this study, daily rainfall estimates from FEWS-RFE∼11 km, TRMM-3B42∼27 km, CMORPH∼27 km and CMORPH∼8 km were evaluated against nine daily rain-gauge records in the Central Kalahari Basin (CKB), over a five-year period, 01/01/2001-31/12/2005. The aims were to evaluate the daily rainfall detection capabilities of the four SRE algorithms, analyze the spatio-temporal variability of rainfall in the CKB and perform bias correction of the four SREs. Evaluation methods included scatter plot analysis, descriptive statistics, categorical statistics and bias decomposition. The spatio-temporal variability of rainfall was assessed using the SREs' mean annual rainfall, standard deviation, coefficient of variation and spatial correlation functions. Bias correction of the four SREs was conducted using a Time-Varying Space-Fixed (TVSF) bias-correction scheme. The results underlined the importance of validating daily SREs, as they had different rainfall detection capabilities in the CKB. The FEWS-RFE∼11 km performed best, providing better results of descriptive and categorical statistics than the other three SREs, although bias decomposition showed that all SREs underestimated rainfall. The analysis showed that the most reliable indicators of SRE performance were the frequency of "miss" rainfall events and the "miss bias", as they directly indicated the SREs' sensitivity and bias of rainfall detection, respectively. The TVSF bias-correction scheme improved some error measures but reduced the spatial correlation distance, thus increasing the already high spatial rainfall variability of all four SREs. This study highlighted SREs as a valuable source of daily rainfall data providing good spatio-temporal coverage, especially suitable for areas with few rain gauges, such as the CKB, but also emphasized the SREs' drawbacks, creating an avenue for follow-up research.
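
    The categorical statistics and the "miss"-type diagnostics that the analysis leans on can be computed from a daily rain/no-rain contingency table, as sketched below; the 1 mm/day threshold is an assumption for illustration, not a value from the study.

    ```python
    import numpy as np

    def categorical_stats(sre, gauge, threshold=1.0):
        """POD, FAR, CSI and a 'miss bias' for an SRE against gauge data;
        threshold is the assumed rain/no-rain cut-off in mm/day."""
        sre, gauge = np.asarray(sre, float), np.asarray(gauge, float)
        est, obs = sre >= threshold, gauge >= threshold
        hits = np.sum(est & obs)
        misses = np.sum(~est & obs)                  # the 'miss' events noted above
        false_alarms = np.sum(est & ~obs)
        pod = hits / (hits + misses)                 # probability of detection
        far = false_alarms / (hits + false_alarms)   # false-alarm ratio
        csi = hits / (hits + misses + false_alarms)  # critical success index
        miss_bias = np.sum(gauge[~est & obs])        # rainfall missed entirely
        return pod, far, csi, miss_bias
    ```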

  2. Deterministic error correction for nonlocal spatial-polarization hyperentanglement

    PubMed Central

    Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu

    2016-01-01

    Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its ability to teleport a quantum particle fully. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication. PMID:26861681

  3. Deterministic error correction for nonlocal spatial-polarization hyperentanglement.

    PubMed

    Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu

    2016-02-10

    Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its ability to teleport a quantum particle fully. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication.

  4. An Application of UAV Attitude Estimation Using a Low-Cost Inertial Navigation System

    NASA Technical Reports Server (NTRS)

    Eure, Kenneth W.; Quach, Cuong Chi; Vazquez, Sixto L.; Hogge, Edward F.; Hill, Boyd L.

    2013-01-01

    Unmanned Aerial Vehicles (UAV) are playing an increasing role in aviation. Various methods exist for the computation of UAV attitude based on low cost microelectromechanical systems (MEMS) and Global Positioning System (GPS) receivers. There has been a recent increase in UAV autonomy as sensors are becoming more compact and onboard processing power has increased significantly. Correct UAV attitude estimation will play a critical role in navigation and separation assurance as UAVs share airspace with civil air traffic. This paper describes attitude estimation derived by post-processing data from a small low cost Inertial Navigation System (INS) recorded during the flight of a subscale commercial off the shelf (COTS) UAV. Two discrete time attitude estimation schemes are presented here in detail. The first is an adaptation of the Kalman Filter to accommodate nonlinear systems, the Extended Kalman Filter (EKF). The EKF returns quaternion estimates of the UAV attitude based on MEMS gyro, magnetometer, accelerometer, and pitot tube inputs. The second scheme is the complementary filter which is a simpler algorithm that splits the sensor frequency spectrum based on noise characteristics. The necessity to correct both filters for gravity measurement errors during turning maneuvers is demonstrated. It is shown that the proposed algorithms may be used to estimate UAV attitude. The effects of vibration on sensor measurements are discussed. Heuristic tuning comments pertaining to sensor filtering and gain selection to achieve acceptable performance during flight are given. Comparisons of attitude estimation performance are made between the EKF and the complementary filter.
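
    The complementary filter's frequency split can be written in a few lines for a single attitude angle: the integrated gyro rate passes at high frequency, the accelerometer-derived angle at low frequency. The time constant is illustrative, and, as noted above, the accelerometer angle must first be corrected for gravity measurement errors during turns before being fused.

    ```python
    import numpy as np

    def complementary_filter(gyro_rate, accel_angle, dt, tau=0.5):
        """Blend gyro integration (high-pass) with accelerometer angles
        (low-pass); tau sets the cross-over between the two paths."""
        alpha = tau / (tau + dt)          # blending factor from the time constant
        angle = accel_angle[0]
        out = np.empty(len(gyro_rate))
        for k in range(len(gyro_rate)):
            # gyro propagates the angle; accelerometer bounds the drift
            angle = alpha * (angle + gyro_rate[k] * dt) + (1 - alpha) * accel_angle[k]
            out[k] = angle
        return out
    ```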

  5. Investigation of television transmission using adaptive delta modulation principles

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1976-01-01

    The results of a study on the use of the delta modulator as a digital encoder of television signals are presented. Computer simulations of different delta modulators were studied in order to find a satisfactory design. After a suitable delta modulator algorithm was found via computer simulation, the results were analyzed and the algorithm was implemented in hardware to study its ability to encode real-time motion pictures from an NTSC-format television camera. The effects of channel errors on the delta-modulated video signal were tested, along with several error-correction algorithms, via computer simulation. A very high-speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. Delta modulators were also investigated which could achieve significant bandwidth reduction without regard to complexity or speed. The first scheme investigated was a real-time frame-to-frame encoding scheme which required the assembly of fourteen 131,000-bit shift registers as well as a high-speed delta modulator. The other schemes involved the computer simulation of two-dimensional delta modulator algorithms.
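
    A toy adaptive delta modulator conveys the principle: one bit per sample, with a step size that grows on runs of identical bits (fighting slope overload) and shrinks on alternations (reducing granular noise). The adaptation rule and constants are generic textbook choices, not the study's hardware algorithm.

    ```python
    import numpy as np

    def adm_encode(x, d0=0.01, k=1.5):
        """1-bit adaptive delta modulation of a sampled signal x."""
        bits, est, step, prev = [], 0.0, d0, 1
        for s in x:
            b = 1 if s >= est else -1
            step = step * k if b == prev else step / k  # adapt the step size
            est += b * step                             # track the signal
            bits.append(b)
            prev = b
        return np.array(bits)

    def adm_decode(bits, d0=0.01, k=1.5):
        """Mirror of the encoder: rebuild the staircase approximation."""
        est, step, prev, out = 0.0, d0, 1, []
        for b in bits:
            step = step * k if b == prev else step / k
            est += b * step
            out.append(est)
            prev = b
        return np.array(out)
    ```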

  6. Blind multirigid retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2015-04-01

    Physiological nonrigid motion is inevitable when imaging, e.g., abdominal viscera, and can lead to serious deterioration of the image quality. Prospective techniques for motion correction can handle only special types of nonrigid motion, as they only allow global correction. Retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that only needs raw data as an input. Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that we can optimize with respect to both the unknown motion parameters per patch and the underlying sharp image. We evaluate our method on both synthetic and real data in 2D and 3D. In vivo data were acquired using standard imaging sequences. The correction algorithm significantly improves the image quality. Our compute unified device architecture (CUDA)-enabled graphics processing unit implementation ensures feasible computation times. The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences and allows correction of nonrigid motion without guidance from external motion sensors. © 2014 Wiley Periodicals, Inc.

  7. Characterization and correction of eddy-current artifacts in unipolar and bipolar diffusion sequences using magnetic field monitoring.

    PubMed

    Chan, Rachel W; von Deuster, Constantin; Giese, Daniel; Stoeck, Christian T; Harmer, Jack; Aitken, Andrew P; Atkinson, David; Kozerke, Sebastian

    2014-07-01

    Diffusion tensor imaging (DTI) of moving organs is gaining increasing attention but robust performance requires sequence modifications and dedicated correction methods to account for system imperfections. In this study, eddy currents in the "unipolar" Stejskal-Tanner and the velocity-compensated "bipolar" spin-echo diffusion sequences were investigated and corrected for using a magnetic field monitoring approach in combination with higher-order image reconstruction. From the field-camera measurements, increased levels of second-order eddy currents were quantified in the unipolar sequence relative to the bipolar diffusion sequence while zeroth and linear orders were found to be similar between both sequences. Second-order image reconstruction based on field-monitoring data resulted in reduced spatial misalignment artifacts and residual displacements of less than 0.43 mm and 0.29 mm (in the unipolar and bipolar sequences, respectively) after second-order eddy-current correction. Results demonstrate the need for second-order correction in unipolar encoding schemes but also show that bipolar sequences benefit from second-order reconstruction to correct for incomplete intrinsic cancellation of eddy currents. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Lan, E-mail: chenglanster@gmail.com; Stopkowicz, Stella, E-mail: stella.stopkowicz@kjemi.uio.no; Gauss, Jürgen, E-mail: gauss@uni-mainz.de

    A perturbative approach to compute second-order spin-orbit (SO) corrections to a spin-free Dirac-Coulomb Hartree-Fock (SFDC-HF) calculation is suggested. The proposed scheme treats the difference between the DC and SFDC Hamiltonians as the perturbation and exploits analytic second-derivative techniques. In addition, a cost-effective scheme for incorporating relativistic effects in high-accuracy calculations is suggested, consisting of a SFDC coupled-cluster treatment augmented by perturbative SO corrections obtained at the HF level. Benchmark calculations for the hydrogen halides HX, X = F-At, as well as the coinage-metal fluorides CuF, AgF, and AuF demonstrate the accuracy of the proposed perturbative treatment of SO effects on energies and electrical properties in comparison with the more rigorous full DC treatment. Furthermore, we present, as an application of our scheme, results for the electrical properties of AuF and XeAuF.

  9. Participation and performance in INSTAND multi-analyte molecular genetics external quality assessment schemes from 2006 to 2012.

    PubMed

    Maly, Friedrich E; Fried, Roman; Spannagl, Michael

    2014-01-01

    INSTAND e.V. has provided Molecular Genetics Multi-Analyte EQA schemes since 2006. EQA participation and performance were assessed from 2006 to 2012. From 2006 to 2012, the number of analytes in the Multi-Analyte EQA schemes rose from 17 to 53. The total number of results returned rose from 168 in January 2006 to 824 in August 2012. The overall error rate was 1.40 +/- 0.84% (mean +/- SD, N = 24 EQA dates). From 2006 to 2012, no analyte was reported 100% correctly. Individual participant performance was analysed for one common analyte, Lactase (LCT) T-13910C. From 2006 to 2012, 114 laboratories participated in this EQA. Of these, 10 laboratories (8.8%) reported at least one wrong result during the whole observation period. All laboratories reported correct results after their failure incident. In spite of the low overall error rate, EQA will continue to be important for Molecular Genetics.

  10. Automatic cortical segmentation in the developing brain.

    PubMed

    Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V

    2007-01-01

    The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete. This causes mislabeled partial volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm, detecting these mislabeled voxels using a knowledge-based approach and correcting errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with the gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison to the manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 +/- 0.037 for GM and 0.794 +/- 0.078 for WM).
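
    The Dice similarity coefficient used for the quantitative comparison is 2|A∩B|/(|A|+|B|) over binary masks; a minimal implementation:

    ```python
    import numpy as np

    def dice(seg_a, seg_b):
        """Dice similarity coefficient between two binary segmentations."""
        a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
        return 2.0 * np.sum(a & b) / (np.sum(a) + np.sum(b))
    ```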

  11. Suppression of Speckles at High Adaptive Correction Using Speckle Symmetry

    NASA Technical Reports Server (NTRS)

    Bloemhof, Eric E.

    2006-01-01

    Focal-plane speckles set important sensitivity limits on ground- or space-based imagers and coronagraphs that may be used to search for faint companions, perhaps ultimately including exoplanets, around stars. As speckles vary with atmospheric fluctuations or with drifting beamtrain aberrations, they contribute speckle noise proportional to their full amplitude. Schemes to suppress speckles are thus of great interest. At high adaptive correction, speckles organize into species, represented by algebraic terms in the expansion of the phase exponential, that have distinct spatial symmetry, even or odd, under spatial inversion. Filtering speckle patterns by symmetry may eliminate a disproportionate fraction of the speckle noise while blocking (only) half of the image signal from the off-axis companion being sought. The fraction of speckle power and hence of speckle noise in each term will vary with degree of correction, and so also will the net symmetry in the speckle pattern.
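
    The even/odd split under spatial inversion can be computed directly from a focal-plane image; the sketch assumes an odd-sized array whose centre pixel is the inversion point (an even-sized image would first be recentred).

    ```python
    import numpy as np

    def split_by_symmetry(img):
        """Decompose an image into parts even and odd under x -> -x inversion
        through the array centre."""
        flipped = img[::-1, ::-1]       # spatial inversion about the centre
        even = 0.5 * (img + flipped)
        odd = 0.5 * (img - flipped)
        return even, odd                # img == even + odd
    ```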

  12. A Higher-Order Bending Theory for Laminated Composite and Sandwich Beams

    NASA Technical Reports Server (NTRS)

    Cook, Geoffrey M.

    1997-01-01

    A higher-order bending theory is derived for laminated composite and sandwich beams. This is accomplished by assuming a special form for the axial and transverse displacement expansions. An independent expansion is also assumed for the transverse normal stress. Appropriate shear correction factors based on energy considerations are used to adjust the shear stiffness. A set of transverse normal correction factors is introduced, leading to significant improvements in the transverse normal strain and stress for laminated composite and sandwich beams. A closed-form solution is obtained and compared with cylindrical-bending elasticity solutions for a wide range of beam aspect ratios and commonly used material systems. Accurate shear stresses for a wide range of laminates, including the challenging unsymmetric composite and sandwich laminates, are obtained using an original corrected integration scheme. For application of the theory to a wider range of problems, guidelines for finite element approximations are presented.

  13. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if either the inner-code decoder fails to make a successful decoding or the outer-code decoder detects the presence of errors after the inner-code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error-control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
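
    A toy model of the scheme's logic, with stand-in component codes since the abstract does not specify them: a 3x repetition inner code corrects single bit errors, an outer parity bit detects residual errors, and a failed outer check triggers retransmission. A frame accepted despite a surviving error is exactly the "undetected error" event whose probability the report derives.

    ```python
    import numpy as np
    rng = np.random.default_rng(1)

    def inner_encode(bits):                 # toy inner code: 3x repetition
        return np.repeat(bits, 3)

    def inner_decode(chips):                # majority vote corrects 1 error per bit
        return (chips.reshape(-1, 3).sum(axis=1) >= 2).astype(int)

    def outer_encode(bits):                 # toy outer code: one parity bit
        return np.append(bits, bits.sum() % 2)

    def channel(frame, p=0.02):             # binary symmetric channel
        return frame ^ (rng.random(frame.size) < p).astype(int)

    def send_with_arq(data, max_tries=10):
        coded = inner_encode(outer_encode(data))
        for attempt in range(1, max_tries + 1):
            decoded = inner_decode(channel(coded))
            if decoded[:-1].sum() % 2 == decoded[-1]:  # outer check passes
                return decoded[:-1], attempt           # accept the frame
        raise RuntimeError("retransmission limit reached")

    payload, tries = send_with_arq(rng.integers(0, 2, 32))
    ```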

  14. Scatter correction in cone-beam CT via a half beam blocker technique allowing simultaneous acquisition of scatter and image information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Ho; Xing Lei; Lee, Rena

    2012-05-15

    Purpose: X-ray scatter incurred to detectors degrades the quality of cone-beam computed tomography (CBCT) and represents a problem in volumetric image guided and adaptive radiation therapy. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, due to missing information resulting from the obstruction of the blocker, such methods require dual scanning or a dynamically moving blocker to obtain a complete volumetric image. Here, we propose a half beam blocker-based approach, in conjunction with a total variation (TV) regularized Feldkamp-Davis-Kress (FDK) algorithm, to correct scatter-induced artifacts by simultaneously acquiring image and scatter information from a single-rotation CBCT scan. Methods: A half beam blocker, comprising lead strips, is used to simultaneously acquire image data on one side of the projection data and scatter data on the other half side. One-dimensional cubic B-Spline interpolation/extrapolation is applied to derive patient specific scatter information by using the scatter distributions on strips. The estimated scatter is subtracted from the projection image acquired at the opposite view. With scatter-corrected projections where this subtraction is completed, the FDK algorithm based on a cosine weighting function is performed to reconstruct CBCT volume. To suppress the noise in the reconstructed CBCT images produced by geometric errors between two opposed projections and interpolated scatter information, total variation regularization is applied by a minimization using a steepest gradient descent optimization method. The experimental studies using Catphan504 and anthropomorphic phantoms were carried out to evaluate the performance of the proposed scheme. Results: The scatter-induced shading artifacts were markedly suppressed in CBCT using the proposed scheme. Compared with CBCT without a blocker, the nonuniformity value was reduced from 39.3% to 3.1%. The root mean square error relative to values inside the regions of interest selected from a benchmark scatter-free image was reduced from 50 to 11.3. The TV regularization also led to a better contrast-to-noise ratio. Conclusions: An asymmetric half beam blocker-based FDK acquisition and reconstruction technique has been established. The proposed scheme enables simultaneous detection of patient specific scatter and complete volumetric CBCT reconstruction without additional requirements such as prior images, dual scans, or moving strips.
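
    The strip-based scatter estimation can be sketched with SciPy's cubic B-spline interpolator; the strip positions and scatter samples below are invented for illustration, and extrapolation covers the unblocked half of the projection.

    ```python
    import numpy as np
    from scipy.interpolate import make_interp_spline

    def estimate_scatter(strip_rows, strip_scatter, n_rows):
        """Cubic B-spline interpolation/extrapolation of scatter sampled
        behind the lead strips, extended over the full detector height."""
        spline = make_interp_spline(strip_rows, strip_scatter, k=3)
        return spline(np.arange(n_rows))

    rows = np.array([10.0, 60.0, 110.0, 160.0, 210.0])  # hypothetical strip rows
    scatter = np.array([42.0, 55.0, 61.0, 58.0, 40.0])  # mean signal behind strips
    profile = estimate_scatter(rows, scatter, 256)
    # The estimate is then subtracted from the projection acquired at the
    # opposite view:  corrected = opposite_projection - profile[:, None]
    ```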

  15. Access Control Mechanism for IoT Environments Based on Modelling Communication Procedures as Resources.

    PubMed

    Cruz-Piris, Luis; Rivera, Diego; Marsa-Maestre, Ivan; de la Hoz, Enrique; Velasco, Juan R

    2018-03-20

    Internet growth has generated new types of services where the use of sensors and actuators is especially remarkable. These services compose what is known as the Internet of Things (IoT). One of the biggest current challenges is obtaining a safe and easy access control scheme for the data managed in these services. We propose integrating IoT devices in an access control system designed for Web-based services by modelling certain IoT communication elements as resources. This would allow us to obtain a unified access control scheme between heterogeneous devices (IoT devices, Internet-based services, etc.). To achieve this, we have analysed the most relevant communication protocols for these kinds of environments and then we have proposed a methodology which allows the modelling of communication actions as resources. Then, we can protect these resources using access control mechanisms. The validation of our proposal has been carried out by selecting a communication protocol based on message exchange, specifically Message Queuing Telemetry Transport (MQTT). As an access control scheme, we have selected User-Managed Access (UMA), an existing Open Authorization (OAuth) 2.0 profile originally developed for the protection of Internet services. We have performed tests focused on validating the proposed solution in terms of the correctness of the access control system. Finally, we have evaluated the energy consumption overhead when using our proposal.
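
    A minimal illustration of "communication actions as resources": MQTT PUBLISH/SUBSCRIBE actions become scopes attached to topic resources and are checked against grants issued by an authorisation server. All names and the in-memory tables are hypothetical; a real deployment would validate UMA/OAuth tokens rather than query a dictionary.

    ```python
    RESOURCES = {  # topic resource -> registered scopes (hypothetical)
        "mqtt/home/temperature": {"publish", "subscribe"},
        "mqtt/home/door-lock": {"publish"},
    }

    GRANTS = {  # (client, resource) -> scopes granted by the auth server
        ("sensor-01", "mqtt/home/temperature"): {"publish"},
        ("app-42", "mqtt/home/temperature"): {"subscribe"},
    }

    def authorise(client: str, resource: str, action: str) -> bool:
        """Allow an MQTT action only if it is a registered scope of the
        resource and has been granted to this client."""
        scopes = RESOURCES.get(resource, set())
        return action in scopes and action in GRANTS.get((client, resource), set())

    assert authorise("sensor-01", "mqtt/home/temperature", "publish")
    assert not authorise("app-42", "mqtt/home/door-lock", "subscribe")
    ```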

  16. Access Control Mechanism for IoT Environments Based on Modelling Communication Procedures as Resources

    PubMed Central

    2018-01-01

    Internet growth has generated new types of services where the use of sensors and actuators is especially remarkable. These services compose what is known as the Internet of Things (IoT). One of the biggest current challenges is obtaining a safe and easy access control scheme for the data managed in these services. We propose integrating IoT devices in an access control system designed for Web-based services by modelling certain IoT communication elements as resources. This would allow us to obtain a unified access control scheme between heterogeneous devices (IoT devices, Internet-based services, etc.). To achieve this, we have analysed the most relevant communication protocols for these kinds of environments and then we have proposed a methodology which allows the modelling of communication actions as resources. Then, we can protect these resources using access control mechanisms. The validation of our proposal has been carried out by selecting a communication protocol based on message exchange, specifically Message Queuing Telemetry Transport (MQTT). As an access control scheme, we have selected User-Managed Access (UMA), an existing Open Authorization (OAuth) 2.0 profile originally developed for the protection of Internet services. We have performed tests focused on validating the proposed solution in terms of the correctness of the access control system. Finally, we have evaluated the energy consumption overhead when using our proposal. PMID:29558406

  17. Design and Experiment of FBG-Based Icing Monitoring on Overhead Transmission Lines with an Improvement Trial for Windy Weather

    PubMed Central

    Zhang, Min; Xing, Yimeng; Zhang, Zhiguo; Chen, Qiguan

    2014-01-01

    A scheme for monitoring icing on overhead transmission lines with fiber Bragg grating (FBG) strain sensors is designed and evaluated both theoretically and experimentally. The influences of temperature and wind are considered. The results of field experiments using simulated ice loading on windless days indicate that the scheme is capable of monitoring icing thickness within 0–30 mm with an accuracy of ±1 mm, a load-cell error of 0.0308v, a repeatability error of 0.3328v and a hysteresis error of 0.026%. To improve the measurement during windy weather, a correction factor is added to the effective gravity acceleration, and the absolute FBG strain is replaced by its statistical average. PMID:25615733

  18. A Spatiotemporal-Chaos-Based Cryptosystem Taking Advantage of Both Synchronous and Self-Synchronizing Schemes

    NASA Astrophysics Data System (ADS)

    Lü, Hua-Ping; Wang, Shi-Hong; Li, Xiao-Wen; Tang, Guo-Ning; Kuang, Jin-Yu; Ye, Wei-Ping; Hu, Gang

    2004-06-01

    Two-dimensional one-way coupled map lattices are used for cryptography where multiple space units produce chaotic outputs in parallel. One of the outputs plays the role of driving for synchronization of the decryption system while the others perform the function of information encoding. With this separation of functions the receiver can establish a self-checking and self-correction mechanism, and enjoys the advantages of both synchronous and self-synchronizing schemes. A comparison between the present system with the system of advanced encryption standard (AES) is presented in the aspect of channel noise influence. Numerical investigations show that our system is much stronger than AES against channel noise perturbations, and thus can be better used for secure communications with large channel noise.
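
    A one-dimensional simplification of the construction (the paper uses a two-dimensional one-way coupled lattice): each unit is a logistic map driven by its upstream neighbour; one unit's output drives synchronisation while the others carry the keystream. Parameters are illustrative, with a periodic wrap for brevity.

    ```python
    import numpy as np

    def ocml_step(x, eps=0.9, a=3.99):
        """One step of a 1-D one-way coupled logistic map lattice:
        x_i <- (1 - eps)*f(x_i) + eps*f(x_{i-1})."""
        f = lambda u: a * u * (1.0 - u)        # fully chaotic logistic map
        fx = f(x)
        return (1.0 - eps) * fx + eps * np.roll(fx, 1)

    x = np.random.rand(8)                      # 8 space units running in parallel
    for _ in range(1000):                      # iterate into the chaotic regime
        x = ocml_step(x)
    drive, keystream = x[0], x[1:]             # one output drives sync; rest encode
    ```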

  19. New Radiosonde Temperature Bias Adjustments for Potential NWP Applications Based on GPS RO Data

    NASA Astrophysics Data System (ADS)

    Sun, B.; Reale, A.; Ballish, B.; Seidel, D. J.

    2014-12-01

    Conventional radiosonde observations (RAOBs), along with satellite and other in situ data, are assimilated in numerical weather prediction (NWP) models to generate a forecast. Radiosonde temperature observations, however, have solar- and thermal-radiation-induced biases (typically a warm daytime bias from sunlight heating the sensor and a cold bias at night as the sensor emits longwave radiation). Radiation corrections made at stations based on algorithms provided by radiosonde manufacturers or national meteorological agencies may not be adequate, so biases remain. To adjust these biases, NWP centers may make additional adjustments to radiosonde data. However, the radiation correction (RADCOR) scheme used in the NOAA NCEP data assimilation and forecasting system is outdated and does not cover several widely used contemporary radiosonde types. This study focuses on work whose objective is to improve these corrections and test their impacts on NWP forecasting and analysis. GPS Radio Occultation (RO) dry temperature (Tdry) is considered to be highly accurate in the upper troposphere and lower stratosphere, where atmospheric water vapor is negligible. This study uses GPS RO Tdry from the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) as the reference to quantify the radiation-induced RAOB temperature errors by analyzing ~3 years of collocated RAOB and COSMIC GPS RO data compiled by the NOAA Products Validation System (NPROVS). The new radiation adjustments are developed for different solar-angle categories and for all common sonde types flown in the WMO global operational upper-air network. Results for global and several commonly used sondes are presented in the context of NCEP Global Forecast System observation-minus-background analysis, indicating projected impacts in reducing forecast error. Dedicated NWP impact studies to quantify the impact of the new RADCOR schemes on the NCEP analyses and forecasts are under consideration.

  20. Proceedings of the IFIP WG 11.3 Working Conference on Database Security (6th) Held in Vancouver, British Columbia on 19-22 August 1992.

    DTIC Science & Technology

    1992-01-01

    multiversioning scheme for this purpose was presented in [9]. The scheme guarantees that high level methods would read down object states at lower levels that...order given by fork-stamp, and terminated writing versions with timestamp WStamp. Such a history is needed to implement the multiversioning scheme...recovery protocol for multiversion schedulers and show that this protocol is both correct and secure. The behavior of the recovery protocol depends

  1. A subtraction scheme for computing QCD jet cross sections at NNLO: integrating the doubly unresolved subtraction terms

    NASA Astrophysics Data System (ADS)

    Somogyi, Gábor

    2013-04-01

    We finish the definition of a subtraction scheme for computing NNLO corrections to QCD jet cross sections. In particular, we perform the integration of the soft-type contributions to the doubly unresolved counterterms via the method of Mellin-Barnes representations. With these final ingredients in place, the definition of the scheme is complete and the computation of fully differential rates for electron-positron annihilation into two and three jets at NNLO accuracy becomes feasible.

  2. [Modeling continuous scaling of NDVI based on fractal theory].

    PubMed

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the most important scientific problems in remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe how retrievals change across an entire series of scales; meanwhile, they face serious parameter-correction issues (geometric correction, spectral correction, etc.) because imaging parameters vary between sensors. Using single-sensor imagery, fractal methodology was employed to address these problems. Taking NDVI (computed from land-surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists and can be described by a fractal model of continuous scaling; and (2) the fractal method is suitable for the validation of NDVI. These results show that fractal analysis is an effective methodology for studying scaling in quantitative remote sensing.
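
    NDVI itself is a one-line formula, and the scale effect is visible immediately: the mean of fine-scale NDVI generally differs from NDVI of the aggregated bands. A quick numerical check with random stand-in radiances:

    ```python
    import numpy as np

    def ndvi(nir, red):
        """NDVI = (NIR - Red) / (NIR + Red)."""
        nir, red = np.asarray(nir, float), np.asarray(red, float)
        return (nir - red) / (nir + red)

    nir, red = np.random.rand(4, 4), np.random.rand(4, 4)
    mean_of_fine = ndvi(nir, red).mean()         # average fine-scale NDVI
    coarse_pixel = ndvi(nir.mean(), red.mean())  # NDVI of the aggregated pixel
    print(mean_of_fine, coarse_pixel)            # the two differ: the scale effect
    ```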

  3. Dynamic Computation Offloading for Low-Power Wearable Health Monitoring Systems.

    PubMed

    Kalantarian, Haik; Sideris, Costas; Mortazavi, Bobak; Alshurafa, Nabil; Sarrafzadeh, Majid

    2017-03-01

    The objective of this paper is to describe and evaluate an algorithm to reduce power usage and increase battery lifetime for wearable health-monitoring devices. We describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data processing between the wearable device and mobile application as a function of desired classification accuracy. By making the correct offloading decision based on current system parameters, we show that we are able to reduce system power by as much as 20%. We demonstrate that computation offloading can be applied to real-time monitoring systems, and yields significant power savings. Making correct offloading decisions for health monitoring devices can extend battery life and improve adherence.
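
    The offloading decision reduces to choosing the lower-energy option among those that meet the desired classification accuracy. The energy and accuracy numbers below are placeholders, not measurements from the paper.

    ```python
    def offload_decision(e_local, e_radio, acc_local, acc_offload, acc_required):
        """Return the cheaper feasible option; feasibility is the accuracy target."""
        candidates = []
        if acc_local >= acc_required:
            candidates.append(("on-device", e_local))
        if acc_offload >= acc_required:
            candidates.append(("offload", e_radio))
        if not candidates:
            raise ValueError("no option meets the accuracy target")
        return min(candidates, key=lambda c: c[1])

    # Offloading wins here: the radio cost is below the local processing cost
    # and only the offloaded classifier meets the accuracy target.
    print(offload_decision(e_local=8.0, e_radio=4.0,
                           acc_local=0.90, acc_offload=0.95, acc_required=0.92))
    ```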

  4. A fast iterative scheme for the linearized Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Wu, Lei; Zhang, Jun; Liu, Haihu; Zhang, Yonghao; Reese, Jason M.

    2017-06-01

    Iterative schemes to find steady-state solutions to the Boltzmann equation are efficient for highly rarefied gas flows, but can be very slow to converge in the near-continuum flow regime. In this paper, a synthetic iterative scheme is developed to speed up the solution of the linearized Boltzmann equation by penalizing the collision operator L into the form L = (L + Nδh) - Nδh, where δ is the gas rarefaction parameter, h is the velocity distribution function, and N is a tuning parameter controlling the convergence rate. The velocity distribution function is first solved by the conventional iterative scheme, then it is corrected such that the macroscopic flow velocity is governed by a diffusion-type equation that is asymptotic-preserving into the Navier-Stokes limit. The efficiency of this new scheme is assessed by calculating the eigenvalue of the iteration, as well as solving for Poiseuille and thermal transpiration flows. We find that the fastest convergence of our synthetic scheme for the linearized Boltzmann equation is achieved when Nδ is close to the average collision frequency. The synthetic iterative scheme is significantly faster than the conventional iterative scheme in both the transition and the near-continuum gas flow regimes. Moreover, due to its asymptotic-preserving properties, the synthetic iterative scheme does not need high spatial resolution in the near-continuum flow regime, which makes it even faster than the conventional iterative scheme. Using this synthetic scheme, with the fast spectral approximation of the linearized Boltzmann collision operator, Poiseuille and thermal transpiration flows between two parallel plates, through channels of circular/rectangular cross sections and various porous media are calculated over the whole range of gas rarefaction. Finally, the flow of a Ne-Ar gas mixture is solved based on the linearized Boltzmann equation with the Lennard-Jones intermolecular potential for the first time, and the difference between these results and those using the hard-sphere potential is discussed.

  5. The effect of metal artefact reduction on CT-based attenuation correction for PET imaging in the vicinity of metallic hip implants: a phantom study.

    PubMed

    Harnish, Roy; Prevrhal, Sven; Alavi, Abass; Zaidi, Habib; Lang, Thomas F

    2014-07-01

    To determine if metal artefact reduction (MAR) combined with a priori knowledge of prosthesis material composition can be applied to obtain CT-based attenuation maps with sufficient accuracy for quantitative assessment of (18)F-fluorodeoxyglucose uptake in lesions near metallic prostheses. A custom hip prosthesis phantom with a lesion-sized cavity filled with 0.2 ml (18)F-FDG solution having an activity of 3.367 MBq adjacent to a prosthesis bore was imaged twice with a chrome-cobalt steel hip prosthesis and a plastic replica, respectively. Scanning was performed on a clinical hybrid PET/CT system equipped with an additional external (137)Cs transmission source. PET emission images were reconstructed from both phantom configurations with CT-based attenuation correction (CTAC) and with CT-based attenuation correction using MAR (MARCTAC). To compare results with the attenuation-correction method extant prior to the advent of PET/CT, we also carried out attenuation correction with (137)Cs transmission-based attenuation correction (TXAC). CTAC and MARCTAC images were scaled to attenuation coefficients at 511 keV using a trilinear function that mapped the highest CT values to the prosthesis alloy attenuation coefficient. Accuracy and spatial distribution of the lesion activity was compared between the three reconstruction schemes. Compared to the reference activity of 3.37 MBq, the estimated activity quantified from the PET image corrected by TXAC was 3.41 MBq. The activity estimated from PET images corrected by MARCTAC was similar in accuracy at 3.32 MBq. CTAC corrected PET images resulted in nearly 40 % overestimation of lesion activity at 4.70 MBq. Comparison of PET images obtained with the plastic and metal prostheses in place showed that CTAC resulted in a marked distortion of the (18)F-FDG distribution within the lesion, whereas application of MARCTAC and TXAC resulted in lesion distributions similar to those observed with the plastic replica. MAR combined with a trilinear CT number mapping for PET attenuation correction resulted in estimates of lesion activity comparable in accuracy to that obtained with (137)Cs transmission-based attenuation correction, and far superior to estimates made without attenuation correction or with a standard CT attenuation map. The ability to use CT images for attenuation correction is a potentially important development because it obviates the need for a (137)Cs transmission source, which entails extra scan time, logistical complexity and expense.
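
    The "trilinear" CT-number-to-attenuation mapping can be written as a piecewise-linear interpolation whose last knot pins the highest CT values to the prosthesis-alloy coefficient. The knot values below are rough illustrative numbers (water is about 0.096 cm^-1 at 511 keV); the study's actual calibration may differ.

    ```python
    import numpy as np

    def ct_to_mu_511kev(hu, mu_alloy=0.5, hu_alloy=20000.0):
        """Map CT numbers (HU) to 511-keV linear attenuation coefficients
        with a piecewise-linear (trilinear) curve; knots are illustrative."""
        hu_knots = np.array([-1000.0, 0.0, 1500.0, hu_alloy])  # air, water, bone, alloy
        mu_knots = np.array([0.0, 0.096, 0.15, mu_alloy])      # cm^-1 (approximate)
        return np.interp(hu, hu_knots, mu_knots)
    ```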

  6. a Cell Vertex Algorithm for the Incompressible Navier-Stokes Equations on Non-Orthogonal Grids

    NASA Astrophysics Data System (ADS)

    Jessee, J. P.; Fiveland, W. A.

    1996-08-01

    The steady, incompressible Navier-Stokes (N-S) equations are discretized using a cell-vertex, finite volume method. Quadrilateral and hexahedral meshes are used to represent two- and three-dimensional geometries, respectively. The dependent variables include the Cartesian components of velocity and pressure. Advective fluxes are calculated using bounded, high-resolution schemes with a deferred correction procedure to maintain a compact stencil. This treatment ensures bounded, non-oscillatory solutions while maintaining low numerical diffusion. The mass and momentum equations are solved with the projection method on a non-staggered grid. The coupling of the pressure and velocity fields is achieved using the Rhie and Chow interpolation scheme, modified to provide solutions independent of time steps or relaxation factors. An algebraic multigrid solver is used for the solution of the implicit, linearized equations. A number of test cases are analysed and presented. The standard benchmark cases include a lid-driven cavity, flow through a gradual expansion and laminar flow in a three-dimensional curved duct. Predictions are compared with data, results of other workers and with predictions from a structured, cell-centred, control volume algorithm whenever applicable. Sensitivity of results to the advection differencing scheme is investigated by applying a number of higher-order flux limiters: the MINMOD, MUSCL, OSHER, CLAM and SMART schemes. As expected, studies indicate that higher-order schemes largely mitigate the diffusion effects of first-order schemes, but they also show no clear preference among the higher-order schemes themselves with respect to accuracy. The effect of the deferred correction procedure on global convergence is discussed.
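
    The deferred-correction treatment keeps a compact first-order upwind stencil in the implicit matrix and adds its difference to the high-resolution face value explicitly, evaluated from the previous outer iteration. A 1-D sketch, with a central scheme standing in for the bounded high-resolution schemes named above:

    ```python
    import numpy as np

    def advective_face_terms(phi, u_face, high_order):
        """Return the upwind face values (kept implicit) and the explicit
        deferred correction u*(high-order - upwind) per interior face."""
        upw = np.where(u_face > 0, phi[:-1], phi[1:])   # 1st-order upwind value
        high = high_order(phi)                          # high-resolution value
        return upw, u_face * (high - upw)               # implicit part, correction

    central = lambda phi: 0.5 * (phi[:-1] + phi[1:])    # stand-in high-order scheme
    phi = np.linspace(0.0, 1.0, 11)                     # cell values
    u_face = np.full(10, 1.0)                           # face velocities
    upwind_part, deferred_corr = advective_face_terms(phi, u_face, central)
    ```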

  7. Physical oceanography from satellites: Currents and the slope of the sea surface

    NASA Technical Reports Server (NTRS)

    Sturges, W.

    1974-01-01

    A global scheme using satellite altimetry in conjunction with thermometry techniques provides for more accurate determinations of first order leveling networks by overcoming discrepancies between ocean leveling and land leveling methods. The high noise content in altimetry signals requires filtering or correction for tides, etc., as well as carefully planned sampling schemes.

  8. Displacement data assimilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, W. Steven; Venkataramani, Shankar; Mariano, Arthur J.

    We show that modifying a Bayesian data assimilation scheme by incorporating kinematically-consistent displacement corrections produces a scheme that is demonstrably better at estimating partially observed state vectors in a setting where feature information is important. While the displacement transformation is generic, here we implement it within an ensemble Kalman Filter framework and demonstrate its effectiveness in tracking stochastically perturbed vortices.

  9. NNLO QCD corrections to associated W H production and H →b b ¯ decay

    NASA Astrophysics Data System (ADS)

    Caola, Fabrizio; Luisoni, Gionata; Melnikov, Kirill; Röntsch, Raoul

    2018-04-01

    We present a computation of the next-to-next-to-leading-order (NNLO) QCD corrections to the production of a Higgs boson in association with a W boson at the LHC and the subsequent decay of the Higgs boson into a b b ¯ pair, treating the b quarks as massless. We consider various kinematic distributions and find significant corrections to observables that resolve the Higgs decay products. We also find that a cut on the transverse momentum of the W boson, important for experimental analyses, may have a significant impact on kinematic distributions and radiative corrections. We show that some of these effects can be adequately described by simulating QCD radiation in Higgs boson decays to b quarks using parton showers. We also describe contributions to Higgs decay to a b b ¯ pair that first appear at NNLO and that were not considered in previous fully differential computations. The calculation of NNLO QCD corrections to production and decay sub-processes is carried out within the nested soft-collinear subtraction scheme presented by some of us earlier this year. We demonstrate that this subtraction scheme performs very well, allowing a computation of the coefficient of the second-order QCD corrections at the level of a few per mill.

  10. Weighted divergence correction scheme and its fast implementation

    NASA Astrophysics Data System (ADS)

    Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun

    2017-05-01

    Forcing experimental volumetric velocity fields to satisfy mass conservation principles has proved beneficial for improving the quality of measured data. A number of correction methods, including the divergence correction scheme (DCS), have been proposed to remove divergence errors from measured velocity fields. For tomographic particle image velocimetry (TPIV) data, the measurement uncertainty for the velocity component along the light-thickness direction is typically much larger than for the other two components. Such biased measurement errors weaken the performance of traditional correction methods. This paper proposes a variant of the existing DCS that adds weighting coefficients to the three velocity components, named the weighted DCS (WDCS). The generalized cross-validation (GCV) method is employed to choose suitable weighting coefficients. A fast algorithm for DCS and WDCS is developed, making the correction process significantly cheaper to implement. WDCS has strong advantages when correcting velocity components with biased noise levels. Numerical tests validate the accuracy and efficiency of the fast algorithm, the effectiveness of the GCV method, and the advantages of WDCS. Lastly, DCS and WDCS are employed to process experimental velocity fields from a TPIV measurement of a turbulent boundary layer, showing that WDCS achieves better performance than DCS in improving some flow statistics.
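
    For intuition, a divergence-correction step on a periodic grid can be written as a spectral projection onto divergence-free fields. This is an unweighted simplification in the spirit of DCS; WDCS would instead minimise a weighted norm of the velocity change, with weights chosen by GCV.

    ```python
    import numpy as np

    def divergence_correct(u, v, dx=1.0):
        """Remove the divergent part of a 2-D velocity field (periodic grid)
        by subtracting the gradient of a potential solved spectrally."""
        ny, nx = u.shape
        kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)[None, :]
        ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)[:, None]
        uh, vh = np.fft.fft2(u), np.fft.fft2(v)
        k2 = kx * kx + ky * ky
        k2[0, 0] = 1.0                     # avoid dividing the mean mode by zero
        div_h = kx * uh + ky * vh          # divergence in spectral space
        uh -= kx * div_h / k2              # subtract grad(phi), laplacian(phi)=div
        vh -= ky * div_h / k2
        return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real
    ```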

  11. Performance of the Goddard Multiscale Modeling Framework with Goddard Ice Microphysical Schemes

    NASA Technical Reports Server (NTRS)

    Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Matsui, Toshihisa; Li, J.-L.; Mohr, Karen I.; Skofronick-Jackson, Gail M.; Peters-Lidard, Christa D.

    2016-01-01

    The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has become a new approach for climate modeling. The embedded CRMs make it possible to apply CRM-based cloud microphysics directly within a GCM. However, most such schemes have never been tested in a global environment for long-term climate simulation. The benefits of using an MMF to evaluate rigorously and improve microphysics schemes are here demonstrated. Four one-moment microphysical schemes are implemented into the Goddard MMF and their results validated against three CloudSat/CALIPSO cloud ice products and other satellite data. The new four-class (cloud ice, snow, graupel, and frozen drops/hail) ice scheme produces a better overall spatial distribution of cloud ice amount, total cloud fractions, net radiation, and total cloud radiative forcing than earlier three-class ice schemes, with biases within the observational uncertainties. Sensitivity experiments are conducted to examine the impact of recently upgraded microphysical processes on global hydrometeor distributions. Five processes dominate the global distributions of cloud ice and snow amount in long-term simulations: (1) allowing for ice supersaturation in the saturation adjustment, (2) three additional correction terms in the depositional growth of cloud ice to snow, (3) accounting for cloud ice fall speeds, (4) limiting cloud ice particle size, and (5) new size-mapping schemes for snow and graupel. Despite the cloud microphysics improvements, systematic errors associated with subgrid processes, cyclic lateral boundaries in the embedded CRMs, and momentum transport remain and will require future improvement.

  12. A Bluetooth/PDR Integration Algorithm for an Indoor Positioning System.

    PubMed

    Li, Xin; Wang, Jian; Liu, Chunyan

    2015-09-25

    This paper proposes two schemes for indoor positioning that fuse Bluetooth beacons with a pedestrian dead reckoning (PDR) technique to provide meter-level positioning without additional infrastructure. For the PDR approach, a more effective multi-threshold step detection algorithm is used to improve the positioning accuracy. According to pedestrians' different walking patterns, such as walking or running, this paper makes a comparative analysis of multiple step-length calculation models to determine a linear computation model and the relevant parameters. In consideration of the deviation between the real heading and the value of the orientation sensor, a heading estimation method with real-time compensation is proposed, based on a Kalman filter with map geometry information. The corrected heading can inhibit the accumulation of positioning error and improve the positioning accuracy of PDR. Moreover, this paper implements two positioning approaches integrating Bluetooth and PDR. One is a PDR-based positioning method built on map matching and position correction through Bluetooth; this method requires little computation and has low maintenance costs. The other is a fusion calculation method, based on the pedestrian's moving status (direct movement or making a turn), that adaptively determines the noise parameters in an Extended Kalman Filter (EKF) system. This method works very well in eliminating various phenomena, including the "go and back" phenomenon caused by the instability of the Bluetooth-based positioning system and the "cross-wall" phenomenon due to the accumulative errors of the PDR algorithm. Experiments performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus showed that the proposed scheme can reliably achieve 2-meter precision.
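
    A minimal multi-threshold step detector on the accelerometer magnitude illustrates the PDR front end: a step needs a peak above one threshold, a confirming valley below a second, and a minimum time gap to the previous step. All thresholds below are illustrative, not the paper's tuned values.

    ```python
    import numpy as np

    def detect_steps(acc_mag, fs, peak_thr=11.0, valley_thr=9.0, min_dt=0.3):
        """Return step times [s] from accelerometer magnitude [m/s^2]."""
        steps, last_t, armed = [], -np.inf, False
        for i, a in enumerate(acc_mag):
            t = i / fs
            if a > peak_thr and t - last_t >= min_dt:
                armed = True               # candidate peak seen
            elif armed and a < valley_thr:
                steps.append(t)            # confirmed by the valley crossing
                last_t, armed = t, False
        return steps
    ```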

  13. A Bluetooth/PDR Integration Algorithm for an Indoor Positioning System

    PubMed Central

    Li, Xin; Wang, Jian; Liu, Chunyan

    2015-01-01

    This paper proposes two schemes for indoor positioning that fuse Bluetooth beacons with a pedestrian dead reckoning (PDR) technique to provide meter-level positioning without additional infrastructure. For the PDR approach, a more effective multi-threshold step detection algorithm is used to improve the positioning accuracy. According to pedestrians’ different walking patterns, such as walking or running, this paper makes a comparative analysis of multiple step-length calculation models to determine a linear computation model and the relevant parameters. In consideration of the deviation between the real heading and the value of the orientation sensor, a heading estimation method with real-time compensation is proposed, based on a Kalman filter with map geometry information. The corrected heading can inhibit the accumulation of positioning error and improve the positioning accuracy of PDR. Moreover, this paper implements two positioning approaches integrating Bluetooth and PDR. One is a PDR-based positioning method built on map matching and position correction through Bluetooth; this method requires little computation and has low maintenance costs. The other is a fusion calculation method, based on the pedestrian’s moving status (direct movement or making a turn), that adaptively determines the noise parameters in an Extended Kalman Filter (EKF) system. This method works very well in eliminating various phenomena, including the “go and back” phenomenon caused by the instability of the Bluetooth-based positioning system and the “cross-wall” phenomenon due to the accumulative errors of the PDR algorithm. Experiments performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus showed that the proposed scheme can reliably achieve 2-meter precision. PMID:26404277

  14. The refractive index in electron microscopy and the errors of its approximations.

    PubMed

    Lentzen, M

    2017-05-01

    In numerical calculations for electron diffraction, a simplified form of the electron-optical refractive index, linear in the electric potential, is often used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation and use a second simplified form, now for the square of the refractive index, again linear in the electric potential. The second- and higher-order corrections thus determined have, however, a large error compared with those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross sections for a wide range of scattering angles, kinetic energies, and atomic numbers. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Signal processing of aircraft flyover noise

    NASA Technical Reports Server (NTRS)

    Kelly, Jeffrey J.

    1991-01-01

    A detailed analysis of signal-processing concerns for measuring aircraft flyover noise is presented. The development of a de-Dopplerization scheme for both corrected time-history and spectral data is discussed, along with an analysis of motion effects on measured spectra. A computer code was written to implement the de-Dopplerization scheme. Input to the code is the aircraft position data and the pressure time histories. To facilitate ensemble averaging, a uniform level flyover is considered, but the code can accept more general flight profiles. The effect of spectral smearing and its removal is discussed. Using data acquired from an XV-15 tilt-rotor flyover test, comparisons are made between the measured and corrected spectra. Frequency shifts are accurately accounted for by the method. It is shown that correcting for spherical spreading, Doppler amplitude, and frequency can give some idea of source directivity. The analysis indicated that smearing increases with frequency and is more severe on approach than on recession.
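
    The core of de-Dopplerization is retarded-time resampling: each received sample is mapped back to its emission time t - r(t)/c and the signal is interpolated onto a uniform emission-time grid, which removes the Doppler frequency shift; spherical spreading can be undone at the same time. A compact sketch, with linear interpolation standing in for whatever resampler the actual code uses:

    ```python
    import numpy as np

    def dedopplerize(p, t, r, c=340.0):
        """p: measured pressure, t: reception times [s], r: source-microphone
        range [m] at each sample; assumes dr/dt < c so emission time increases."""
        t_emit = t - r / c                          # retarded (emission) times
        t_uni = np.linspace(t_emit[0], t_emit[-1], len(t))
        p_corr = np.interp(t_uni, t_emit, p)        # uniform emission-time grid
        r_uni = np.interp(t_uni, t_emit, r)
        return t_uni, p_corr * (r_uni / r_uni[0])   # undo spherical spreading
    ```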

  16. The Stochastic Multicloud Model as part of an operational convection parameterisation in a comprehensive GCM

    NASA Astrophysics Data System (ADS)

    Peters, Karsten; Jakob, Christian; Möbis, Benjamin

    2015-04-01

An adequate representation of convective processes in numerical models of the atmospheric circulation (general circulation models, GCMs) remains one of the grand challenges in atmospheric science. In particular, the models struggle to correctly represent the spatial distribution and high variability of tropical convection. It is thought that this model deficiency partly results from formulating current convection parameterisation schemes in a purely deterministic manner. Here, we use observations of tropical convection to inform the design of a novel convection parameterisation with stochastic elements. The novel scheme is built around the Stochastic MultiCloud Model (SMCM, Khouider et al. 2010). We present the progress made in utilising SMCM-based estimates of updraft area fractions at cloud base as part of the deep convection scheme of a GCM. The updraft area fractions are used to yield one part of the cloud-base mass flux used in the closure assumption of convective mass-flux schemes. The closure thus receives a stochastic component, potentially improving modelled convective variability and coherence. For initial investigations, we apply the above methodology to the operational convective parameterisation of the ECHAM6 GCM. We perform 5-year AMIP simulations, i.e. with prescribed observed SSTs. We find that with the SMCM, convection is weaker and more coherent and continuous from timestep to timestep compared to the standard model. Total global precipitation is reduced in the SMCM run, but this reduces i) the overall error compared to observed global precipitation (GPCP) and ii) tropical mid-tropospheric temperature biases compared to ERA-Interim. Hovmoeller diagrams indicate a slightly higher degree of convective organisation compared to the base case, and Wheeler-Kiladis frequency-wavenumber diagrams indicate slightly more spectral power in the MJO range.

  17. Interface- and discontinuity-aware numerical schemes for plasma 3-T radiation diffusion in two and three dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, William W., E-mail: dai@lanl.gov; Scannapieco, Anthony J.

    2015-11-01

A set of numerical schemes is developed for two- and three-dimensional time-dependent 3-T radiation diffusion equations in systems involving multiple materials. To resolve sub-cell structure, interface reconstruction is implemented within any cell that has more than one material. Therefore, the system of 3-T radiation diffusion equations is solved on two- and three-dimensional polyhedral meshes. The focus of the development is on the full coupling between radiation and material, the treatment of nonlinearity in the equations, i.e., in the diffusion terms and source terms, the treatment of the discontinuity across cell interfaces in material properties, the formulations for both transient and steady states, the behavior for large time steps, and second-order accuracy in both space and time. The discontinuity of material properties between different materials is correctly treated based on the governing physics principle for general polyhedral meshes and full nonlinearity. The treatment is exact for arbitrarily strong discontinuity. The scheme is fully nonlinear for the full nonlinearity in the 3-T diffusion equations. Three temperatures are fully coupled and are updated simultaneously. The scheme is general in two and three dimensions on general polyhedral meshes. The features of the scheme are demonstrated through numerical examples for transient problems and steady states. The effects of some simplifications of numerical schemes are also shown through numerical examples, such as linearization, simple averaging of the diffusion coefficient, and approximate treatment of the coupling between radiation and material.

  18. Hot spot variability and lithography process window investigation by CDU improvement using CDC technique

    NASA Astrophysics Data System (ADS)

    Thamm, Thomas; Geh, Bernd; Djordjevic Kaufmann, Marija; Seltmann, Rolf; Bitensky, Alla; Sczyrba, Martin; Samy, Aravind Narayana

    2018-03-01

In the current paper, we concentrate on the well-known CDC technique from Carl Zeiss to improve the CD distribution on the wafer by improving the reticle CDU, and on its impact on hotspots and the litho process window. The CDC technique uses an ultra-short-pulse laser technology which generates micro-level Shade-In-Elements (also known as "Pixels") in the mask quartz bulk material. These scatter centers selectively attenuate certain areas of the reticle at higher resolution than other methods and thus improve the CD uniformity. In a first section, we compare the CDC technique with scanner dose correction schemes. It becomes obvious that the CDC technique has unique advantages with respect to spatial resolution and intra-field flexibility over scanner correction schemes; however, due to the scanner flexibility across the wafer, the two methods are complementary rather than competing. In a second section we show that a reference-feature-based correction scheme can be used to improve the CDU of a full chip with multiple different features that have different MEEF and dose sensitivities. In detail, we discuss the impact of forward-scattered light originating from the CDC pixels on the illumination source and the related proximity signature. We show that the impact on proximity is small compared to the CDU benefit of the CDC technique. We then show to what extent the reduced variability across the reticle results in a better common electrical process window of a whole chip design across the whole reticle field on the wafer. Finally, we discuss electrical verification results comparing masks with purposely degraded CDU that were repaired by the CDC technique against inherently good "golden" masks on a complex logic device. No yield difference is observed between the repaired masks and the masks with good CDU.

  19. Multi-model data fusion to improve an early warning system for hypo-/hyperglycemic events.

    PubMed

    Botwey, Ransford Henry; Daskalaki, Elena; Diem, Peter; Mougiakakou, Stavroula G

    2014-01-01

Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed which combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction - cARX, and a recurrent neural network - RNN). Data fusion techniques based on i) Dempster-Shafer Evidential Theory (DST), ii) Genetic Algorithms (GA), and iii) Genetic Programming (GP) were used to merge the complementary performances of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before occurrence of events were 13.0 and 12.1 min for hypo- and hyperglycemic events, respectively. Compared to the cARX and RNN models, and to a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
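    As a minimal sketch of the fusion-plus-alarm idea, the snippet below uses a simple convex weighting as a stand-in for the DST/GA/GP merging described above; the weight, threshold, and prediction values are illustrative assumptions, not the study's data.

```python
import numpy as np

# Two model predictions of glucose 30 min ahead (mg/dL), e.g. cARX and RNN.
pred_carx = np.array([110.0, 95.0, 82.0, 70.0, 64.0])
pred_rnn = np.array([105.0, 92.0, 78.0, 69.0, 61.0])

# Stand-in for the DST/GA/GP merging: a convex weight learned offline.
w = 0.6
fused = w * pred_carx + (1.0 - w) * pred_rnn

# Warning algorithm: raise an alarm when the fused forecast crosses a
# hypoglycemia threshold (value illustrative).
HYPO_THRESHOLD = 70.0  # mg/dL
alarms = fused < HYPO_THRESHOLD
print(fused, alarms)
```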

  20. The Mars Analysis Correction Data Assimilation (MACDA): A reference atmospheric reanalysis

    NASA Astrophysics Data System (ADS)

    Montabone, Luca; Read, Peter; Lewis, Stephen; Steele, Liam; Holmes, James; Valeanu, Alexandru

    2016-07-01

The Mars Analysis Correction Data Assimilation (MACDA) dataset version 1.0 contains the reanalysis of fundamental atmospheric and surface variables for the planet Mars covering a period of about three Martian years (late MY 24 to early MY 27). This has been produced by data assimilation of retrieved thermal profiles and column dust optical depths from NASA's Mars Global Surveyor/Thermal Emission Spectrometer (MGS/TES), which have been assimilated into a Mars global climate model (MGCM) using the Analysis Correction scheme developed at the UK Meteorological Office. The MACDA v1.0 reanalysis is publicly available, and the NetCDF files can be downloaded from the archive at the Centre for Environmental Data Analysis/British Atmospheric Data Centre (CEDA/BADC). The variables included in the dataset can be visualised using an ad-hoc graphical user interface (the "MACDA Plotter") at the following URL: http://macdap.physics.ox.ac.uk/ MACDA is an ongoing collaborative project, and work is currently undertaken to produce version 2.0 of the Mars atmospheric reanalysis. One of the key improvements is the extension of the reanalysis period to nine Martian years (MY 24 through MY 32), with the assimilation of NASA's Mars Reconnaissance Orbiter/Mars Climate Sounder (MRO/MCS) retrievals of thermal and dust opacity profiles. MACDA 2.0 is also going to be based on an improved version of the underlying MGCM and an updated scheme to fully assimilate (radiatively active) tracers, such as dust and water ice.

  1. Al7CX (X=Li-Cs) clusters: Stability and the prospect for cluster materials

    NASA Astrophysics Data System (ADS)

    Ashman, C.; Khanna, S. N.; Pederson, M. R.; Kortus, J.

    2000-12-01

Al7C clusters, recently found to have a high electron affinity and exceptional stability, are shown to form ionic molecules when combined with alkali-metal atoms. Our studies, based on an ab initio gradient-corrected density-functional scheme, show that Al7CX (X=Li-Cs) clusters have a very low electron affinity and a high ionization potential. In composite clusters built from two and four Al7CLi units, the Al7C clusters remain almost intact. Preliminary studies indicate that Al7CLi may be suitable to form cluster-based materials.

  2. Sequential limiting in continuous and discontinuous Galerkin methods for the Euler equations

    NASA Astrophysics Data System (ADS)

    Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Rieben, R.; Tomov, V.

    2018-03-01

    We present a new predictor-corrector approach to enforcing local maximum principles in piecewise-linear finite element schemes for the compressible Euler equations. The new element-based limiting strategy is suitable for continuous and discontinuous Galerkin methods alike. In contrast to synchronized limiting techniques for systems of conservation laws, we constrain the density, momentum, and total energy in a sequential manner which guarantees positivity preservation for the pressure and internal energy. After the density limiting step, the total energy and momentum gradients are adjusted to incorporate the irreversible effect of density changes. Antidiffusive corrections to bounds-compatible low-order approximations are limited to satisfy inequality constraints for the specific total and kinetic energy. An accuracy-preserving smoothness indicator is introduced to gradually adjust lower bounds for the element-based correction factors. The employed smoothness criterion is based on a Hessian determinant test for the density. A numerical study is performed for test problems with smooth and discontinuous solutions.

  3. Quantum gambling using two nonorthogonal states

    NASA Astrophysics Data System (ADS)

    Hwang, Won Young; Ahn, Doyeol; Hwang, Sung Woo

    2001-12-01

    We give a (remote) quantum-gambling scheme that makes use of the fact that quantum nonorthogonal states cannot be distinguished with certainty. In the proposed scheme, two participants Alice and Bob can be regarded as playing a game of making guesses on identities of quantum states that are in one of two given nonorthogonal states: if Bob makes a correct (an incorrect) guess on the identity of a quantum state that Alice has sent, he wins (loses). It is shown that the proposed scheme is secure against the nonentanglement attack. It can also be shown heuristically that the scheme is secure in the case of the entanglement attack.

  4. First order comparison of numerical calculation and two different turtle input schemes to represent a SLC defocusing magnet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaeger, J.

    1983-07-14

While correcting the dispersion function in the SLC north arc, it turned out that backleg windings (BLW) acting horizontally as well as BLW acting vertically have to be used. In the latter case the question arose of what the best representation is, for the computer code TURTLE, of a defocusing magnet with excited BLW acting in the vertical plane. Two different schemes, the 14.-scheme and the 20.-scheme, were studied, and the TURTLE output for one ray through such a magnet was compared with the numerical solution of the equation of motion; only terms of first order have been taken into account.

  5. Gauge-independent renormalization of the N2HDM

    NASA Astrophysics Data System (ADS)

    Krause, Marcel; López-Val, David; Mühlleitner, Margarete; Santos, Rui

    2017-12-01

The Next-to-Minimal 2-Higgs-Doublet Model (N2HDM) is an interesting benchmark model for a Higgs sector consisting of two complex doublet and one real singlet fields. Like the Next-to-Minimal Supersymmetric extension (NMSSM) it features light Higgs bosons that could have escaped discovery due to their singlet admixture. Thereby, the model allows for various different Higgs-to-Higgs decay modes. Contrary to the NMSSM, however, the model is not subject to supersymmetric relations restraining its allowed parameter space and its phenomenology. For the correct determination of the allowed parameter space, the correct interpretation of the LHC Higgs data, and the possible distinction of beyond-the-Standard-Model Higgs sectors, higher-order corrections to the Higgs boson observables are crucial. This requires not only their computation but also the development of a suitable renormalization scheme. In this paper we have worked out the renormalization of the complete N2HDM and provide a scheme for the gauge-independent renormalization of the mixing angles. We discuss the renormalization of the Z_2 soft breaking parameter m_{12}^2 and the singlet vacuum expectation value v_S. Both enter the Higgs self-couplings relevant for Higgs-to-Higgs decays. We apply our renormalization scheme to different sample processes, such as Higgs decays into Z bosons and decays into a lighter Higgs pair. Our results show that the corrections may be sizable and have to be taken into account for reliable predictions.

  6. Perspectives of shaped pulses for EPR spectroscopy

    NASA Astrophysics Data System (ADS)

    Spindler, Philipp E.; Schöps, Philipp; Kallies, Wolfgang; Glaser, Steffen J.; Prisner, Thomas F.

    2017-07-01

    This article describes current uses of shaped pulses, generated by an arbitrary waveform generator, in the field of EPR spectroscopy. We show applications of sech/tanh and WURST pulses to dipolar spectroscopy, including new pulse schemes and procedures, and discuss the more general concept of optimum-control-based pulses for applications in EPR spectroscopy. The article also describes a procedure to correct for experimental imperfections, mostly introduced by the microwave resonator, and discusses further potential applications and limitations of such pulses.
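    To make the pulse shapes concrete, here is a minimal sketch of a sech/tanh adiabatic pulse as commonly defined in the magnetic-resonance literature: the amplitude follows a hyperbolic-secant envelope while the frequency sweeps as tanh. All parameter values are illustrative, not taken from the article.

```python
import numpy as np

def sech(x):
    return 1.0 / np.cosh(x)

# Textbook sech/tanh adiabatic pulse (parameters are illustrative).
T = 200e-9       # pulse length: 200 ns
beta = 5.0       # truncation factor
bw = 100e6       # total frequency sweep width: 100 MHz
a_max = 1.0      # peak amplitude (arbitrary units)

t = np.linspace(0.0, T, 1001)
tau = 2.0 * t / T - 1.0                        # normalized time in [-1, 1]
amplitude = a_max * sech(beta * tau)           # sech envelope
freq_offset = 0.5 * bw * np.tanh(beta * tau)   # tanh frequency sweep

# Complex waveform for an arbitrary waveform generator: integrate the
# instantaneous frequency offset to obtain the phase.
phase = 2.0 * np.pi * np.cumsum(freq_offset) * (t[1] - t[0])
waveform = amplitude * np.exp(1j * phase)
```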

  7. Performance of concatenated Reed-Solomon trellis-coded modulation over Rician fading channels

    NASA Technical Reports Server (NTRS)

    Moher, Michael L.; Lodge, John H.

    1990-01-01

A concatenated coding scheme for providing very reliable data over mobile-satellite channels at power levels similar to those used for vocoded speech is described. The outer code is a shortened Reed-Solomon code which provides error detection as well as error correction capabilities. The inner code is a 1-D 8-state trellis code applied independently to both the inphase and quadrature channels. To achieve the full error correction potential of this inner code, the code symbols are multiplexed with a pilot sequence which is used to provide dynamic channel estimation and coherent detection. The implementation structure of this scheme is discussed and its performance is estimated.

  8. Long-range analysis of density fitting in extended systems

    NASA Astrophysics Data System (ADS)

Varga, Štefan

The density fitting scheme is analyzed for the Coulomb problem in extended systems from the point of view of correct long-range behavior. We show that for the correct cancellation of divergent long-range Coulomb terms it is crucial for the density fitting scheme to reproduce the overlap matrix exactly. It is demonstrated that, of all possible fitting metric choices, the Coulomb metric is the only one which inherently preserves the overlap matrix for infinite systems with translational periodicity. Moreover, we show that with a small additional effort any non-Coulomb metric fit can be made overlap-preserving as well. The problem is analyzed for both ordinary and Poisson basis set choices.

  9. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error-correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes, with inner codes ranging from high rates to very low rates and with Reed-Solomon outer codes, are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft down-link error control.
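    To make the reliability claim concrete: for an inner code of length n that corrects up to t errors on a binary symmetric channel with crossover probability ε, the probability that a block defeats the inner decoder is the tail of a binomial distribution. The sketch below evaluates this textbook bound; the parameters are illustrative, not the report's actual codes.

```python
from math import comb

def block_error_prob(n, t, eps):
    """P(more than t bit errors in an n-bit block) on a BSC(eps)."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
               for i in range(t + 1, n + 1))

# Example: a length-127 inner code correcting t = 15 errors still fails
# fairly often at eps = 0.1, which is why the outer stage is needed to
# mop up inner-decoder failures.
for eps in (0.1, 0.05, 0.01):
    print(eps, block_error_prob(127, 15, eps))
```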

  10. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes, with inner codes ranging from high rates to very low rates and with Reed-Solomon outer codes, are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft down-link error control.

  11. Analysis of unsteady reacting flows and impact of chemistry description in Large Eddy Simulations of side-dump ramjet combustors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roux, A.; Gicquel, L.Y.M.; Staffelbach, G.

    2010-01-15

Among all the undesired phenomena observed in ramjet combustors, combustion instabilities are of foremost importance, and predicting them using Large Eddy Simulation (LES) is an active research field. While acoustics are naturally captured by compressible LES provided that the proper boundary conditions are applied, combustion/chemistry modelling remains a critical issue and its impact on numerical predictions must still be assessed for complex applications. To do so, two different ramjet LESs are compared here. The first simulation is based on a standard one-step chemistry known to over-estimate the laminar flame speed in fuel-rich conditions. The second simulation uses the same scheme but introduces a correction of reaction rates for rich flames to match a detailed mechanism provided by Peters (1993). Even though the two chemical schemes are very similar and very few points burn in rich regimes, distinct limit cycles are obtained with LES depending on which scheme is used. Results obtained with the standard one-step chemistry exhibit high-frequency self-sustained oscillations. Multiple flame fronts are stabilized in the vicinity of the shear layer developing at the exit of the air inlets. When compared to the experiment, the fitted one-step scheme yields better predictions than the standard scheme. With the fitted scheme, the flame is detached from the air inlets and stabilizes in the regions identified in the experiment (Ristori et al. (2005), Heid and Ristori (2003), Heid and Ristori (2005), Ristori et al. (1999)). LES and experiments exhibit all main low-frequency modes, including the first longitudinal acoustic mode. The high frequencies excited with the standard scheme are damped with the fitted scheme. The chemical scheme is found, for this ramjet burner, to have a strong impact on the predicted stability: approximate chemical schemes, even in a limited range of equivalence ratio, can lead to the occurrence of non-physical combustion oscillations.

  12. Receiver bandwidth effects on complex modulation and detection using directly modulated lasers.

    PubMed

    Yuan, Feng; Che, Di; Shieh, William

    2016-05-01

Directly modulated lasers (DMLs) have long been employed for short- and medium-reach optical communications due to their low cost. Recently, a new modulation scheme called complex modulated DMLs has been demonstrated, showing a significant optical signal-to-noise ratio sensitivity enhancement compared with the traditional intensity-only detection scheme. However, chirp-induced optical spectrum broadening is inevitable in complex modulated systems, which may imply a need for high-bandwidth receivers. In this Letter, we study the impact of receiver bandwidth on the performance of complex modulation and coherent detection systems based on DMLs. We experimentally demonstrate that such systems exhibit a reasonable tolerance for reduced receiver bandwidth. For 10 Gbaud 4-level pulse amplitude modulation signals, the required electrical bandwidth is as low as 8.5 and 7.5 GHz for 7% and 20% forward error correction, respectively. Therefore, it is feasible to realize DML-based complex modulated systems using cost-effective receivers with narrow bandwidth.

  13. Research Topics on Cluttered Environments Interrogation and Propagation

    DTIC Science & Technology

    2014-11-04

propagation in random and complex media and looked at specific applications associated with imaging and communication through a cluttered medium ... imaging and communication schemes. We have used the results on the fourth moment to analyze wavefront correction schemes and obtained novel ...

  14. Study on advanced information processing system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Liu, Jyh-Charn

    1992-01-01

Issues related to the reliability of a redundant system with large main memory are addressed. In particular, the Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as a basis for our presentation. When the system is free of latent faults, the probability of system crash due to nearly-coincident channel faults is shown to be insignificant even when the outputs of computing channels are infrequently voted on. In particular, using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs--with a low hardware overhead--can be used to reduce not only the need for memory realignment, but also the time required to realign channel memories in the rare case that such a need arises. Using CEMs, we have developed two schemes, called Scheme 1 and Scheme 2, to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.

  15. Performance of MIMO-OFDM using convolution codes with QAM modulation

    NASA Astrophysics Data System (ADS)

    Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa

    2014-04-01

The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission; one option is the convolutional code. This paper presents the performance of OFDM using the Space-Time Block Code (STBC) diversity technique and QAM modulation with code rate 1/2. The evaluation is done by analyzing the Bit Error Rate (BER) versus the Energy per Bit to Noise Power Spectral Density Ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel. To achieve a BER of 10^-3, the SISO-OFDM scheme requires an SNR of 10 dB; the 2×2 MIMO-OFDM scheme likewise requires 10 dB. The 4×4 MIMO-OFDM scheme requires 5 dB, while adding convolutional coding to the 4×4 MIMO-OFDM improves performance down to 0 dB for the same BER. This demonstrates a power saving of 3 dB over the 4×4 MIMO-OFDM system without coding, a power saving of 7 dB over 2×2 MIMO-OFDM, and significant power savings over the SISO-OFDM system.
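    As background for the 2×2 scheme above, here is a minimal sketch of Alamouti space-time block decoding over a flat-fading channel, the classic rate-1 STBC for two transmit antennas. It is a generic textbook illustration, not the authors' simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK symbols to send over two time slots from two antennas.
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)

# Flat Rayleigh channel gains from the two transmit antennas (one rx antenna).
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)

noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Alamouti transmission: slot 1 sends (s1, s2), slot 2 sends (-s2*, s1*).
r1 = h1 * s1 + h2 * s2 + noise[0]
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]

# Linear combining recovers both symbols with full transmit diversity.
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(s1_hat, s2_hat)   # close to s1, s2 up to the noise
```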

  16. On regularizing the MCTDH equations of motion

    NASA Astrophysics Data System (ADS)

    Meyer, Hans-Dieter; Wang, Haobin

    2018-03-01

    The Multiconfiguration Time-Dependent Hartree (MCTDH) approach leads to equations of motion (EOM) which become singular when there are unoccupied so-called single-particle functions (SPFs). Starting from a Hartree product, all SPFs, except the first one, are unoccupied initially. To solve the MCTDH-EOMs numerically, one therefore has to remove the singularity by a regularization procedure. Usually the inverse of a density matrix is regularized. Here we argue and show that regularizing the coefficient tensor, which in turn regularizes the density matrix as well, leads to an improved performance of the EOMs. The initially unoccupied SPFs are rotated faster into their "correct direction" in Hilbert space and the final results are less sensitive to the choice of the value of the regularization parameter. For a particular example (a spin-boson system studied with a transformed Hamiltonian), we could even show that only with the new regularization scheme could one obtain correct results. Finally, in Appendix A, a new integration scheme for the MCTDH-EOMs developed by Lubich and co-workers is discussed. It is argued that this scheme does not solve the problem of the unoccupied natural orbitals because this scheme ignores the latter and does not propagate them at all.
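    For context, the density-matrix regularization conventionally used in MCTDH, as commonly described in the MCTDH literature (quoted here as the standard scheme, not from this article), replaces the singular density matrix by a smoothly invertible one:

```latex
% Conventional MCTDH regularization of the single-particle density matrix;
% \varepsilon is a small parameter (typically of order 10^{-8}):
\rho_{\mathrm{reg}} = \rho + \varepsilon \, \exp(-\rho/\varepsilon)
```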

  17. The effect of interference on delta modulation encoded video signals

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.

    1979-01-01

The results of a study on the use of the delta modulator as a digital encoder of television signals are presented. Different delta modulators were studied via computer simulation in order to find a satisfactory one. After a suitable delta modulator algorithm was found via simulation, the results were analyzed and the algorithm was implemented in hardware to study its ability to encode real-time motion pictures from an NTSC-format television camera. The effects of channel errors on the delta-modulated video signal were investigated, and several error correction algorithms were tested via computer simulation. A very high-speed delta modulator was built (out of ECL logic), incorporating the most promising of the correction schemes, so that it could be tested on real-time motion pictures. The final area of investigation concerned finding delta modulators which could achieve significant bandwidth reduction without regard to complexity or speed. The first such scheme to be investigated was a real-time frame-to-frame encoding scheme which required the assembly of fourteen 131,000-bit-long shift registers as well as a high-speed delta modulator. The other schemes involved two-dimensional delta modulator algorithms.

  18. Variational Continuous Assimilation of TMI and SSM/I Rain Rates: Impact on GEOS-3 Hurricane Analyses and Forecasts

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.; Reale, Oreste

    2003-01-01

We describe a variational continuous assimilation (VCA) algorithm for assimilating tropical rainfall data using moisture and temperature tendency corrections as the control variable to offset model deficiencies. For rainfall assimilation, model errors are of special concern since model-predicted precipitation is based on parameterized moist physics, which can have substantial systematic errors. This study examines whether a VCA scheme using the forecast model as a weak constraint offers an effective pathway to precipitation assimilation. The particular scheme we examine employs a '1+1' dimension precipitation observation operator based on a 6-h integration of a column model of moist physics from the Goddard Earth Observing System (GEOS) global Data Assimilation System (DAS). In earlier studies, we tested a simplified version of this scheme and obtained improved monthly-mean analyses and better short-range forecast skills. This paper describes the full implementation of the 1+1D VCA scheme using background and observation error statistics, and examines how it may improve GEOS analyses and forecasts of prominent tropical weather systems such as hurricanes. Parallel assimilation experiments with and without rainfall data for Hurricanes Bonnie and Floyd show that assimilating 6-h TMI and SSM/I surface rain rates leads to more realistic storm features in the analysis, which, in turn, provide better initial conditions for 5-day storm track prediction and precipitation forecast. These results provide evidence that addressing model deficiencies in moisture tendency may be crucial to making effective use of precipitation information in data assimilation.

  19. How important is self-consistency for the dDsC density dependent dispersion correction?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brémond, Éric; Corminboeuf, Clémence, E-mail: clemence.corminboeuf@epfl.ch; Golubev, Nikolay

    2014-05-14

The treatment of dispersion interactions is ubiquitous but computationally demanding for seamless ab initio approaches. A highly popular and simple remedy consists in correcting for the missing interactions a posteriori by adding an attractive energy term summed over all atom pairs to standard density functional approximations. These corrections were originally based on atom-pairwise parameters and, hence, had a strong touch of empiricism. To overcome such limitations, we recently proposed a robust system-dependent dispersion correction, dDsC, that is computed from the electron density and that provides a balanced description of both weak inter- and intramolecular interactions. From the theoretical point of view and for the sake of increasing reliability, we here verify whether the self-consistent implementation of dDsC impacts ground-state properties such as interaction energies, electron density, dipole moments, geometries, and harmonic frequencies. In addition, we investigate the suitability of the a posteriori scheme for molecular dynamics simulations, for which the analysis of energy conservation constitutes a challenging test. Our study demonstrates that the post-SCF approach is an excellent approximation.

  20. Higher-order differencing method with a multigrid approach for the solution of the incompressible flow equations at high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Tzanos, Constantine P.

    1992-10-01

A higher-order differencing scheme (Tzanos, 1990) is used in conjunction with a multigrid approach to obtain accurate solutions of the Navier-Stokes convection-diffusion equations at high Re numbers. Flow in a square cavity with a moving lid is used as a test problem. A multigrid approach based on the additive correction method (Settari and Aziz) and an iterative incomplete lower-upper solver demonstrated good performance for the whole range of Re numbers under consideration (from 1000 to 10,000) and for both uniform and nonuniform grids. It is concluded that the combination of the higher-order differencing scheme with a multigrid approach is an effective technique for obtaining accurate solutions of the Navier-Stokes equations at high Re numbers.

  1. A Simplified Guidance for Target Missiles Used in Ballistic Missile Defence Evaluation

    NASA Astrophysics Data System (ADS)

    Prabhakar, N.; Kumar, I. D.; Tata, S. K.; Vaithiyanathan, V.

    2013-01-01

A simplified guidance scheme for the target missiles used in Ballistic Missile Defence is presented in this paper. The proposed method has two major components, a Ground Guidance Computation (GGC) and an In-Flight Guidance Computation. The GGC, which runs on the ground, uses a missile model to generate the attitude history in the pitch plane and computes the launch azimuth of the missile to compensate for the effect of earth rotation. The vehicle follows the pre-launch-computed attitude (theta) history in the pitch plane and also applies a course correction in the azimuth plane based on its deviation from the pre-launch-computed azimuth plane. This scheme requires fewer computations and counters in-flight disturbances such as wind and gusts quite efficiently. The simulation results show that the proposed method provides satisfactory performance and robustness.

  2. Propagation of spectral characterization errors of imaging spectrometers at level-1 and its correction within a level-2 recalibration scheme

    NASA Astrophysics Data System (ADS)

    Vicent, Jorge; Alonso, Luis; Sabater, Neus; Miesch, Christophe; Kraft, Stefan; Moreno, Jose

    2015-09-01

The uncertainties in the knowledge of the Instrument Spectral Response Function (ISRF), the barycenter of the spectral channels, and the bandwidth / spectral sampling (spectral resolution) are important error sources in the processing of satellite imaging spectrometers within narrow atmospheric absorption bands. Exhaustive laboratory spectral characterization is a costly engineering process, and the in-flight instrument configuration can differ from the laboratory one given the harsh space environment and the stresses of the launch phase. The retrieval schemes at Level-2 commonly assume a Gaussian ISRF, leading to uncorrected spectral stray-light effects and wrong characterization and correction of the spectral shift and smile. These effects produce inaccurate atmospherically corrected data and are propagated to the final Level-2 mission products. Within ESA's FLEX satellite mission activities, the impact of the ISRF knowledge error and spectral calibration at Level-1 products and its propagation to Level-2 retrieved chlorophyll fluorescence has been analyzed. A spectral recalibration scheme has been implemented at Level-2, reducing the impact of Level-1 errors on retrieved fluorescence within the oxygen absorption bands to below 10% and enhancing the quality of the retrieved products. The work presented here shows how the minimization of spectral calibration errors requires an effort both in the laboratory characterization and in the implementation of specific algorithms at Level-2.
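    To illustrate the role of the ISRF assumption mentioned above, the sketch below convolves a high-resolution spectrum with a Gaussian ISRF of a given FWHM and samples it at instrument channels. The band shapes, sampling, and FWHM are generic assumptions for illustration, not the FLEX instrument's actual characterization.

```python
import numpy as np

def gaussian_isrf(wl, center, fwhm):
    """Gaussian Instrument Spectral Response Function, normalized to unit area."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    g = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return g / np.trapz(g, wl)

# Toy high-resolution spectrum with one absorption feature near the O2-A band.
wl_hi = np.linspace(755.0, 775.0, 4000)   # nm
spectrum_hi = 1.0 - 0.8 * np.exp(-0.5 * ((wl_hi - 760.6) / 0.3) ** 2)

# Instrument channels: 0.1 nm sampling, 0.3 nm FWHM (illustrative numbers).
channels = np.arange(757.0, 773.0, 0.1)
measured = np.array(
    [np.trapz(gaussian_isrf(wl_hi, c, 0.3) * spectrum_hi, wl_hi)
     for c in channels])

# A barycenter shift or a wrong FWHM in this step is exactly the kind of
# Level-1 error the Level-2 recalibration scheme aims to absorb.
```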

  3. Colour correct: the interactive effects of food label nutrition colouring schemes and food category healthiness on health perceptions.

    PubMed

    Nyilasy, Gergely; Lei, Jing; Nagpal, Anish; Tan, Joseph

    2016-08-01

The purpose of the present study was to examine the effects of food label nutrition colouring schemes in interaction with food category healthiness on consumers' perceptions of food healthiness. Three streams of colour theory (colour attention, colour association and colour approach-avoidance) in interaction with heuristic processing theory provide consonant predictions and explanations for the underlying psychological processes. A 2 (food category healthiness: healthy v. unhealthy) × 3 (food label nutrient colouring schemes: healthy=green, unhealthy=red (HGUR) v. healthy=red, unhealthy=green (HRUG) v. no colour (control)) between-subjects design was used. The research setting was a randomised-controlled experiment using varying formats of food packages and nutritional information colouring. Respondents (n = 196) were sourced from a national consumer panel, USA. The findings suggest that, for healthy foods, the nutritional colouring schemes reduced perceived healthiness, irrespective of which nutrients were coloured red or green (healthiness_control = 4.86; healthiness_HGUR = 4.10; healthiness_HRUG = 3.70). In contrast, for unhealthy foods, there was no significant difference in perceptions of food healthiness when comparing different colouring schemes against the control. The results make an important qualification to the common belief that colour coding can enhance the correct interpretation of nutrition information and suggest that this incentive may not necessarily support healthier food choices in all situations.

  4. Time-domain analysis of planar microstrip devices using a generalized Yee-algorithm based on unstructured grids

    NASA Technical Reports Server (NTRS)

    Gedney, Stephen D.; Lansing, Faiza

    1993-01-01

The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm that it is based on unstructured and irregular grids. The strength of the generalized Yee-algorithm is that structures containing curved conductors or complex three-dimensional geometries can be modeled more accurately and much more conveniently using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high-performance computers in a highly efficient manner.

  5. Cooperative MIMO communication at wireless sensor network: an error correcting code approach.

    PubMed

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probabilities of bit error p_b. It is observed that C-MIMO performs more efficiently when the targeted p_b is smaller. Also the lower encoding rate for LDPC code offers better error characteristics.

  6. Inversion Schemes to Retrieve Atmospheric and Oceanic Parameters from SeaWiFS Data

    NASA Technical Reports Server (NTRS)

    Deschamps, P.-Y.; Frouin, R.

    1997-01-01

    The investigation focuses on two key issues in satellite ocean color remote sensing, namely the presence of whitecaps on the sea surface and the validity of the aerosol models selected for the atmospheric correction of SeaWiFS data. Experiments were designed and conducted at the Scripps Institution of Oceanography to measure the optical properties of whitecaps and to study the aerosol optical properties in a typical mid-latitude coastal environment. CIMEL Electronique sunphotometers, now integrated in the AERONET network, were also deployed permanently in Bermuda and in Lanai, calibration/validation sites for SeaWiFS and MODIS. Original results were obtained on the spectral reflectance of whitecaps and on the choice of aerosol models for atmospheric correction schemes and the type of measurements that should be made to verify those schemes. Bio-optical algorithms to remotely sense primary productivity from space were also evaluated, as well as current algorithms to estimate PAR at the earth's surface.

  7. Cooperative MIMO Communication at Wireless Sensor Network: An Error Correcting Code Approach

    PubMed Central

    Islam, Mohammad Rakibul; Han, Young Shin

    2011-01-01

Cooperative communication in wireless sensor network (WSN) explores the energy efficient wireless communication schemes between multiple sensors and data gathering node (DGN) by exploiting multiple input multiple output (MIMO) and multiple input single output (MISO) configurations. In this paper, an energy efficient cooperative MIMO (C-MIMO) technique is proposed where low density parity check (LDPC) code is used as an error correcting code. The rate of LDPC code is varied by varying the length of message and parity bits. Simulation results show that the cooperative communication scheme outperforms SISO scheme in the presence of LDPC code. LDPC codes with different code rates are compared using bit error rate (BER) analysis. BER is also analyzed under different Nakagami fading scenario. Energy efficiencies are compared for different targeted probabilities of bit error p_b. It is observed that C-MIMO performs more efficiently when the targeted p_b is smaller. Also the lower encoding rate for LDPC code offers better error characteristics. PMID:22163732

  8. Viscous compressible flow direct and inverse computation and illustrations

    NASA Technical Reports Server (NTRS)

    Yang, T. T.; Ntone, F.

    1986-01-01

An algorithm for laminar and turbulent viscous compressible two-dimensional flows is presented. For the application of precise boundary conditions over an arbitrary body surface, a body-fitted coordinate system is used in the physical plane. A thin-layer approximation of the Navier-Stokes equations is introduced to keep the viscous terms relatively simple. The flow field computation is performed in the transformed plane. A factorized, implicit scheme is used to facilitate the computation. Sample calculations for Couette flow, developing pipe flow, an isolated airfoil, two-dimensional compressor cascade flow, and segmental compressor blade design are presented. To a certain extent, the effective use of the direct solver depends on the user's skill in setting up the gridwork, the time step size, and the choice of the artificial viscosity. The design feature of the algorithm, an iterative scheme to correct the geometry for a specified surface pressure distribution, works well for subsonic flows. A more elaborate correction scheme is required in treating transonic flows where local shock waves may be involved.

  9. Cryptosystem for Securing Image Encryption Using Structured Phase Masks in Fresnel Wavelet Transform Domain

    NASA Astrophysics Data System (ADS)

    Singh, Hukum

    2016-12-01

A cryptosystem for securing image encryption is considered by using double random phase encoding in the Fresnel wavelet transform (FWT) domain. Random phase masks (RPMs) and structured phase masks (SPMs) based on a devil's vortex toroidal lens (DVTL) are used in the spatial as well as in the Fourier planes. The images to be encrypted are first Fresnel transformed, and then a single-level discrete wavelet transform (DWT) is applied to decompose them into LL, HL, LH and HH matrices. The resulting matrices from the DWT are multiplied by additional RPMs, and the resultants are subjected to an inverse DWT to yield the encrypted images. The scheme is more secure because of the many parameters used in the construction of the SPM. The original images are recovered by using the correct parameters of the FWT and SPM. The SPM based on a DVTL increases security by enlarging the key space for encryption and decryption. The proposed encryption scheme is a lens-less optical system, and its digital implementation has been performed using MATLAB 7.6.0 (R2008a). The computed value of the mean squared error between the retrieved and the input images shows the efficacy of the scheme. The sensitivity to encryption parameters, the robustness against occlusion and multiplicative Gaussian noise attacks, and the entropy have been analysed.
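    A minimal numpy sketch of the double-random-phase idea at the core of the scheme follows; for brevity it shows classical Fourier-domain DRPE only, whereas the paper's version additionally uses Fresnel propagation, DWT decomposition, and structured phase masks.

```python
import numpy as np

rng = np.random.default_rng(3)

img = rng.random((64, 64))                     # stand-in for the input image

# Two random phase masks: one in the spatial plane, one in the Fourier plane.
phi1 = np.exp(2j * np.pi * rng.random(img.shape))
phi2 = np.exp(2j * np.pi * rng.random(img.shape))

# Encryption: mask, transform, mask again, transform back.
encrypted = np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)

# Decryption with the correct keys inverts each step (unit-modulus masks).
decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phi2)) * np.conj(phi1)
print(np.allclose(decrypted.real, img))        # -> True
```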

  10. A novel two-stage evaluation system based on a Group-G1 approach to identify appropriate emergency treatment technology schemes in sudden water source pollution accidents.

    PubMed

    Qu, Jianhua; Meng, Xianlin; Hu, Qi; You, Hong

    2016-02-01

Sudden water source pollution resulting from hazardous materials has gradually become a major threat to the safety of the urban water supply. Over the past years, various treatment techniques have been proposed for the removal of the pollutants to minimize the threat of such pollution. Given the diversity of techniques available, the current challenge is how to scientifically select the most desirable alternative for different threat degrees. Therefore, a novel two-stage evaluation system was developed based on a circulation-correction improved Group-G1 method to determine the optimal emergency treatment technology scheme, considering the areas of contaminant elimination in both drinking water sources and water treatment plants. In stage 1, the threat degree caused by the pollution was predicted using a threat evaluation index system and subdivided into four levels. Then, a technique evaluation index system containing four sets of criteria weights was constructed in stage 2 to obtain the optimum treatment schemes corresponding to the different threat levels. The applicability of the established evaluation system was tested on a real cadmium-contamination accident that occurred in 2012. The results show that this system is capable of facilitating scientific analysis in the evaluation and selection of emergency treatment technologies for drinking water source security.

  11. Alignment and bit extraction for secure fingerprint biometrics

    NASA Astrophysics Data System (ADS)

    Nagar, A.; Rane, S.; Vetro, A.

    2010-01-01

    Security of biometric templates stored in a system is important because a stolen template can compromise system security as well as user privacy. Therefore, a number of secure biometrics schemes have been proposed that facilitate matching of feature templates without the need for a stored biometric sample. However, most of these schemes suffer from poor matching performance owing to the difficulty of designing biometric features that remain robust over repeated biometric measurements. This paper describes a scheme to extract binary features from fingerprints using minutia points and fingerprint ridges. The features are amenable to direct matching based on binary Hamming distance, but are especially suitable for use in secure biometric cryptosystems that use standard error correcting codes. Given all binary features, a method for retaining only the most discriminable features is presented which improves the Genuine Accept Rate (GAR) from 82% to 90% at a False Accept Rate (FAR) of 0.1% on a well-known public database. Additionally, incorporating singular points such as a core or delta feature is shown to improve the matching tradeoff.
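    A minimal sketch of the operation the scheme above builds on, binary-template matching by Hamming distance, follows; the feature extraction itself is omitted, and the template size, bit-flip count, and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def hamming_distance(a, b):
    """Number of differing bits between two binary feature vectors."""
    return int(np.count_nonzero(a != b))

# Enrolled template and a fresh measurement of the same finger: identical
# except for a few noisy bits, since repeated biometric readings never
# match exactly (hence the appeal of error-correcting-code cryptosystems).
enrolled = rng.integers(0, 2, size=256)
probe = enrolled.copy()
flip = rng.choice(256, size=20, replace=False)
probe[flip] ^= 1

THRESHOLD = 40  # accept if fewer than ~15% of bits disagree (illustrative)
d = hamming_distance(enrolled, probe)
print(d, "accept" if d < THRESHOLD else "reject")
```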

  12. A direct force model for Galilean invariant lattice Boltzmann simulation of fluid-particle flows

    NASA Astrophysics Data System (ADS)

    Tao, Shi; He, Qing; Chen, Baiman; Yang, Xiaoping; Huang, Simin

The lattice Boltzmann method (LBM) has been widely used in the simulation of particulate flows involving complex moving boundaries. Due to the kinetic background of LBM, the bounce-back (BB) rule and the momentum exchange (ME) method can be easily applied to the solid boundary treatment and the evaluation of the fluid-solid interaction force, respectively. However, it has recently been found that both the BB and ME schemes may violate the principle of Galilean invariance (GI). Some modified BB and ME methods have been proposed to reduce the GI error, but these remedies have subsequently been recognized to be inconsistent with Newton's Third Law. Therefore, contrary to those corrections based on the BB and ME methods, a unified iterative approach is adopted to handle the solid boundary in the present study. Furthermore, a direct force (DF) scheme is proposed to evaluate the fluid-particle interaction force. The methods preserve the efficiency of the BB and ME schemes, and their performance in terms of accuracy and GI is verified and validated in test cases of particulate flows with freely moving particles.

  13. Structure-based CoMFA as a predictive model - CYP2C9 inhibitors as a test case.

    PubMed

    Yasuo, Kazuya; Yamaotsu, Noriyuki; Gouda, Hiroaki; Tsujishita, Hideki; Hirono, Shuichi

    2009-04-01

In this study, we tried to establish a general scheme to create a model that could predict the affinity of small compounds for their target proteins. This scheme consists of a search for ligand-binding sites on a protein, generation of bound conformations (poses) of ligands in each of the sites by docking, identification of the correct poses of each ligand by consensus scoring and MM-PBSA analysis, and construction of a CoMFA model with the obtained poses to predict the affinity of the ligands. Using a crystal structure of CYP 2C9 and twenty known CYP inhibitors as a test case, we obtained a CoMFA model with good statistics, which suggested that the classification of the binding sites as well as the predicted bound poses of the ligands were reasonable. The scheme described here provides a method to predict the affinity of small compounds with reasonable accuracy, which is expected to heighten the value of computational chemistry in the drug design process.

  14. An improved PCA method with application to boiler leak detection.

    PubMed

    Sun, Xi; Marquez, Horacio J; Chen, Tongwen; Riaz, Muhammad

    2005-07-01

Principal component analysis (PCA) is a popular fault detection technique. It has been widely used in process industries, especially in the chemical industry. In industrial applications, achieving a sensitive system capable of detecting incipient faults while keeping the false alarm rate to a minimum is a crucial issue. Although a lot of research has been focused on these issues for PCA-based fault detection and diagnosis methods, the sensitivity of the fault detection scheme versus the false alarm rate continues to be an important issue. In this paper, an improved PCA method is proposed to address this problem. In this method, a new data preprocessing scheme and a new fault detection scheme designed for Hotelling's T² as well as the squared prediction error are developed. A dynamic PCA model is also developed for boiler leak detection. This new method is applied to boiler water/steam leak detection with real data from Syncrude Canada's utility plant in Fort McMurray, Canada. Our results demonstrate that the proposed method can effectively reduce the false alarm rate, provide effective and correct leak alarms, and give early warning to operators.
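    For orientation, a minimal sketch of the two monitoring statistics named above, computed from a PCA model of normal operating data, follows. This is generic PCA fault detection; the improved preprocessing and dynamic model of the paper are not reproduced, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data: normal operation, samples x variables.
X = rng.normal(size=(500, 8))
mu, sd = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sd                        # autoscale

# PCA via SVD; keep k principal components.
U, S, Vt = np.linalg.svd(Xn, full_matrices=False)
k = 3
P = Vt[:k].T                              # loadings (variables x k)
lam = (S[:k] ** 2) / (Xn.shape[0] - 1)    # variances of the retained scores

def t2_spe(x):
    """Hotelling's T² and squared prediction error (SPE/Q) for one sample."""
    xn = (x - mu) / sd
    t = P.T @ xn                          # scores in the model subspace
    t2 = float(np.sum(t ** 2 / lam))
    residual = xn - P @ t                 # part not explained by the model
    spe = float(residual @ residual)
    return t2, spe

print(t2_spe(rng.normal(size=8)))         # normal-looking sample
print(t2_spe(rng.normal(size=8) + 4.0))   # shifted sample: larger statistics
```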

  15. Unsupervised feature relevance analysis applied to improve ECG heartbeat clustering.

    PubMed

    Rodríguez-Sotelo, J L; Peluffo-Ordoñez, D; Cuesta-Frau, D; Castellanos-Domínguez, G

    2012-10-01

The computer-assisted analysis of biomedical records has become an essential tool in clinical settings. However, current devices provide a growing amount of data that often exceeds the processing capacity of normal computers. As this amount of information rises, new demands for more efficient data extraction methods appear. This paper addresses the task of data mining in physiological records using a feature selection scheme. An unsupervised method based on relevance analysis is described. This scheme uses a least-squares optimization of the input feature matrix in a single iteration. The output of the algorithm is a feature weighting vector. The performance of the method was assessed using a heartbeat clustering test on real ECG records. The quantitative cluster validity measures yielded a correctly classified heartbeat rate of 98.69% (specificity), 85.88% (sensitivity) and 95.04% (general clustering performance), which is even higher than the performance achieved by other similar ECG clustering studies. The number of features was reduced on average from 100 to 18, and the temporal cost was 43% lower than in previous ECG clustering schemes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  16. Notice of Violation of IEEE Publication PrinciplesJoint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath

    NASA Astrophysics Data System (ADS)

    Li, Lei; Hu, Jianhao

    2010-12-01

Notice of Violation of IEEE Publication Principles: "Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath" by Lei Li and Jianhao Hu, in the IEEE Transactions on Nuclear Science, vol. 57, no. 6, Dec. 2010, pp. 3779-3786. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains substantial duplication of original text from the papers cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission. Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles: "Multiple Error Detection and Correction Based on Redundant Residue Number Systems" by Vik Tor Goh and M.U. Siddiqi, in the IEEE Transactions on Communications, vol. 56, no. 3, March 2008, pp. 325-330; "A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction" by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 1, Jan 1992, pp. 8-17. In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance for datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallelism and error correction, are exploited to establish the radiation hardening architecture for the datapath in radiation environments. In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in the datapath can be corrected using the redundant relationship of the residues at the correction module, which is allocated at the end of the datapath. In the back-end implementation, the module isolation technique is used to improve the soft error rate performance of RRNS by physically separating the operation channels of different moduli. The case studies show at least an order of magnitude decrease in the soft error rate (SER) as compared to the non-RHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10^-12 to 10^-17 when the number of processing steps of the datapath is 10^6. The proposed scheme can even achieve smaller area and latency overheads than the design without radiation hardening, since RRNS can reduce the operational complexity in the datapath.
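    For orientation, here is a minimal sketch of the redundant-residue idea referenced above: an integer is represented by its residues modulo pairwise-coprime moduli, extra (redundant) moduli are added, and a single corrupted residue can be located by leave-one-out CRT reconstructions with a simplified legality test on the dynamic range. The moduli and values are illustrative, not the paper's hardware design.

```python
from math import prod

MODULI = [7, 11, 13, 17, 19]   # last two are redundant; any three suffice
K = 3                          # number of information moduli

def crt(residues, moduli):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse via 3-argument pow
    return x % M

def encode(value):
    return [value % m for m in MODULI]

def decode_correct(residues):
    """Correct up to one bad residue: leave each residue out in turn and
    keep the reconstruction that falls in the legal information range."""
    for skip in range(len(MODULI)):
        keep = [i for i in range(len(MODULI)) if i != skip]
        x = crt([residues[i] for i in keep], [MODULI[i] for i in keep])
        if x < prod(MODULI[:K]):       # value fits the information range
            return x
    raise ValueError("more errors than the redundancy can correct")

word = encode(737)
word[2] ^= 5                    # single-residue upset (e.g., an SEMBU)
print(decode_correct(word))     # -> 737
```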

  17. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, T.; Fujiwara, T.; Lin, S.

    1986-01-01

    In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error (or decoding error) of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented, one of which is proposed for error control in the NASA Telecommand System.
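
    The accept/retransmit decision logic of such a concatenated scheme fits in a few lines. In the sketch below the inner decoder is supplied by the caller and the outer detection code is stood in for by CRC-32; the actual codes analyzed in the paper are block codes, so this is illustrative only.

    ```python
    import zlib

    def receive(frame, inner_decode):
        """Concatenated-scheme receiver: the inner code corrects and detects,
        the outer code only detects; any failure requests a selective-repeat
        ARQ retransmission."""
        ok, payload = inner_decode(frame)          # may correct channel errors
        if not ok:
            return "RETRANSMIT", None              # inner decoding failure
        data, tag = payload[:-4], payload[-4:]
        if zlib.crc32(data) != int.from_bytes(tag, "big"):
            return "RETRANSMIT", None              # outer code detected errors
        return "ACCEPT", data
    ```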

  18. First-principles supercell calculations of small polarons with proper account for long-range polarization effects

    NASA Astrophysics Data System (ADS)

    Kokott, Sebastian; Levchenko, Sergey V.; Rinke, Patrick; Scheffler, Matthias

    2018-03-01

    We present a density functional theory (DFT) based supercell approach for modeling small polarons with proper account for the long-range elastic response of the material. Our analysis of the supercell dependence of the polaron properties (e.g., atomic structure, binding energy, and the polaron level) reveals long-range electrostatic effects and the electron–phonon (el–ph) interaction as the two main contributors. We develop a correction scheme for DFT polaron calculations that significantly reduces the dependence of polaron properties on the DFT exchange-correlation functional and the size of the supercell in the limit of strong el–ph coupling. Using our correction approach, we present accurate all-electron full-potential DFT results for small polarons in rocksalt MgO and rutile TiO2.

  19. Galilean invariant resummation schemes of cosmological perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peloso, Marco; Pietroni, Massimo, E-mail: peloso@physics.umn.edu, E-mail: massimo.pietroni@unipr.it

    2017-01-01

    Many of the methods proposed so far to go beyond Standard Perturbation Theory break invariance under time-dependent boosts (denoted here as extended Galilean Invariance, or GI). This gives rise to spurious large scale effects which spoil the small scale predictions of these approximation schemes. By using consistency relations we derive fully non-perturbative constraints that GI imposes on correlation functions. We then introduce a method to quantify the amount of GI breaking of a given scheme, and to correct it by properly tailored counterterms. Finally, we formulate resummation schemes which are manifestly GI, discuss their general features, and implement them in the so-called Time-Flow, or TRG, equations.

  20. A new flux splitting scheme

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1993-01-01

    A new flux splitting scheme is proposed. The scheme is remarkably simple and yet its accuracy rivals, and in some cases surpasses, that of Roe's solver in the Euler and Navier-Stokes solutions performed in this study. The scheme is robust and converges as fast as the Roe splitting. An approximately defined cell-face advection Mach number is proposed using values from the two straddling cells via associated characteristic speeds. This interface Mach number is then used to determine the upwind extrapolation for the convective quantities. Accordingly, the scheme has been named the Advection Upstream Splitting Method (AUSM). A new pressure splitting is introduced which is shown to behave successfully, yielding much smoother results than other existing pressure splittings. Of particular interest is the supersonic blunt body problem, in which the Roe scheme gives anomalous solutions. The AUSM produces correct solutions without difficulty for a wide range of flow conditions as well as grids.
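
    For orientation, the AUSM interface flux for the 1D Euler equations can be sketched as below, using the split Mach number and split pressure polynomials of the published scheme (transcribed from the standard formulation; verify details against the paper before reuse).

    ```python
    import numpy as np

    GAMMA = 1.4

    def ausm_flux(rhoL, uL, pL, rhoR, uR, pR):
        """AUSM flux at one cell interface of the 1D Euler equations."""
        aL, aR = np.sqrt(GAMMA * pL / rhoL), np.sqrt(GAMMA * pR / rhoR)
        ML, MR = uL / aL, uR / aR
        HL = aL**2 / (GAMMA - 1) + 0.5 * uL**2     # total enthalpy, left
        HR = aR**2 / (GAMMA - 1) + 0.5 * uR**2     # total enthalpy, right

        # Split Mach numbers: quadratic polynomials when subsonic, upwind otherwise.
        Mp = 0.25 * (ML + 1.0)**2 if abs(ML) <= 1.0 else max(ML, 0.0)
        Mm = -0.25 * (MR - 1.0)**2 if abs(MR) <= 1.0 else min(MR, 0.0)
        m_half = Mp + Mm                           # interface advection Mach number

        # Split pressures.
        pp = 0.25 * pL * (ML + 1.0)**2 * (2.0 - ML) if abs(ML) <= 1.0 else pL * (ML > 0)
        pm = 0.25 * pR * (MR - 1.0)**2 * (2.0 + MR) if abs(MR) <= 1.0 else pR * (MR < 0)

        # Upwind the convected quantities by the sign of the interface Mach number.
        if m_half >= 0.0:
            phi = np.array([rhoL * aL, rhoL * aL * uL, rhoL * aL * HL])
        else:
            phi = np.array([rhoR * aR, rhoR * aR * uR, rhoR * aR * HR])
        return m_half * phi + np.array([0.0, pp + pm, 0.0])
    ```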

  1. A new flux splitting scheme

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Christopher J., Jr.

    1991-01-01

    A new flux splitting scheme is proposed. The scheme is remarkably simple and yet its accuracy rivals, and in some cases surpasses, that of Roe's solver in the Euler and Navier-Stokes solutions performed in this study. The scheme is robust and converges as fast as the Roe splitting. An approximately defined cell-face advection Mach number is proposed using values from the two straddling cells via associated characteristic speeds. This interface Mach number is then used to determine the upwind extrapolation for the convective quantities. Accordingly, the scheme has been named the Advection Upstream Splitting Method (AUSM). A new pressure splitting is introduced which is shown to behave successfully, yielding much smoother results than other existing pressure splittings. Of particular interest is the supersonic blunt body problem, in which the Roe scheme gives anomalous solutions. The AUSM produces correct solutions without difficulty for a wide range of flow conditions as well as grids.

  2. Novel MDM-PON scheme utilizing self-homodyne detection for high-speed/capacity access networks.

    PubMed

    Chen, Yuanxiang; Li, Juhao; Zhu, Paikun; Wu, Zhongying; Zhou, Peng; Tian, Yu; Ren, Fang; Yu, Jinyi; Ge, Dawei; Chen, Jingbiao; He, Yongqi; Chen, Zhangyuan

    2015-12-14

    In this paper, we propose a cost-effective, energy-saving mode-division-multiplexing passive optical network (MDM-PON) scheme utilizing self-homodyne detection for high-speed/capacity access networks, based on low modal-crosstalk few-mode fiber (FMF) and an all-fiber mode multiplexer/demultiplexer (MUX/DEMUX). In the proposed scheme, one of the spatial modes is used to transmit a portion of the signal carrier (namely, a pilot tone) as the local oscillator (LO), while the others are used for signal-bearing channels. At the receiver, the pilot tone and the signal can be separated without strong crosstalk and sent to the receiver for coherent detection. The spectral efficiency (SE) is significantly enhanced when multiple spatial channels are used. Meanwhile, the self-homodyne detection scheme can effectively suppress laser phase noise, which relaxes the requirement on the laser line-width at the optical line terminal and optical network units (OLT/ONUs). The digital signal processing (DSP) at the receiver is also simplified, since it removes the need for frequency offset compensation and complex phase correction, which reduces the computational complexity and energy consumption. Polarization division multiplexing (PDM), which offers doubled SE, is also supported by the scheme. The proposed scheme is scalable to multi-wavelength application when a wavelength MUX/DEMUX is utilized. Utilizing the proposed scheme, we demonstrate a proof-of-concept 4 × 40-Gb/s orthogonal frequency division multiplexing (OFDM) transmission over 55-km FMF using low modal-crosstalk two-mode FMF and MUX/DEMUX with error-free operation. Compared with the back-to-back case, less than 1-dB Q-factor penalty is observed for the four channels after 55-km FMF transmission. Signal power and pilot-tone power are also optimized to achieve the optimal transmission performance.

  3. An accurate front capturing scheme for tumor growth models with a free boundary limit

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Tang, Min; Wang, Li; Zhou, Zhennan

    2018-07-01

    We consider a class of tumor growth models under the combined effects of density-dependent pressure and cell multiplication, with a free boundary model as its singular limit when the pressure-density relationship becomes highly nonlinear. In particular, the constitutive law connecting pressure p and density ρ is p(ρ) = (m/(m-1)) ρ^{m-1}, and when m ≫ 1, the cell density ρ may evolve its support according to a pressure-driven geometric motion with a sharp interface along its boundary. The nonlinearity and degeneracy in the diffusion bring great challenges in numerical simulations. Prior to the present paper, there was a lack of a standard mechanism to numerically capture the front propagation speed as m ≫ 1. In this paper, we develop a numerical scheme based on a novel prediction-correction reformulation that can accurately approximate the front propagation even when the nonlinearity is extremely strong. We show that the semi-discrete scheme naturally connects to the free boundary limit equation as m → ∞. With proper spatial discretization, the fully discrete scheme has improved stability, preserves positivity, and can be implemented without nonlinear solvers. Finally, extensive numerical examples in both one and two dimensions are provided to verify the claimed properties in various applications.

  4. Correcting the extended-source calibration for the Herschel-SPIRE Fourier-transform spectrometer

    NASA Astrophysics Data System (ADS)

    Valtchanov, I.; Hopwood, R.; Bendo, G.; Benson, C.; Conversi, L.; Fulton, T.; Griffin, M. J.; Joubaud, T.; Lim, T.; Lu, N.; Marchili, N.; Makiwa, G.; Meyer, R. A.; Naylor, D. A.; North, C.; Papageorgiou, A.; Pearson, C.; Polehampton, E. T.; Scott, J.; Schulz, B.; Spencer, L. D.; van der Wiel, M. H. D.; Wu, R.

    2018-03-01

    We describe an update to the Herschel-Spectral and Photometric Imaging Receiver (SPIRE) Fourier-transform spectrometer (FTS) calibration for extended sources, which incorporates a correction for the frequency-dependent far-field feedhorn efficiency, ηff. This significant correction affects all FTS extended-source calibrated spectra in sparse or mapping mode, regardless of the spectral resolution. Line fluxes and continuum levels are underestimated by factors of 1.3-2 in the spectrometer long wavelength band (447-1018 GHz; 671-294 μm) and 1.4-1.5 in the spectrometer short wavelength band (944-1568 GHz; 318-191 μm). The correction was implemented in the FTS pipeline version 14.1 and has also been described in the SPIRE Handbook since 2017 February. Studies based on extended-source calibrated spectra produced prior to this pipeline version should be critically reconsidered using the current products available in the Herschel Science Archive. Once the extended-source calibrated spectra are corrected for ηff, the synthetic photometry and the broad-band intensities from SPIRE photometer maps agree within 2-4 per cent, similar to the levels seen when comparing point-source calibrated spectra with photometry from point-source calibrated maps. The two calibration schemes for the FTS are now self-consistent: the conversion between the corrected extended-source and point-source calibrated spectra can be achieved with the beam solid angle and a gain correction that accounts for the diffraction loss.
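
    Applying such a frequency-dependent efficiency correction is an element-wise division of the spectrum by the interpolated efficiency curve. The sketch below uses placeholder eta_ff values purely for illustration; the real curves are distributed with the SPIRE calibration products.

    ```python
    import numpy as np

    # Placeholder far-field feedhorn efficiency samples (frequency in GHz).
    ETA_FREQ = np.array([450.0, 700.0, 1000.0, 1300.0, 1568.0])
    ETA_VALS = np.array([0.55, 0.65, 0.70, 0.68, 0.66])   # illustrative only

    def correct_extended_spectrum(freq_ghz, intensity):
        """Divide an extended-source calibrated spectrum by eta_ff(nu)."""
        eta = np.interp(freq_ghz, ETA_FREQ, ETA_VALS)
        return intensity / eta
    ```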

  5. Stereotactic body radiotherapy for lung cancer: how much does it really cost?

    PubMed

    Lievens, Yolande; Obyn, Caroline; Mertens, Anne-Sophie; Van Halewyck, Dries; Hulstaert, Frank

    2015-03-01

    Despite the lack of randomized evidence, stereotactic body radiotherapy (SBRT) is being accepted as superior to conventional radiotherapy for patients with T1-2N0 non-small-cell lung cancer in the periphery of the lung who are unfit for or unwilling to undergo surgery. To introduce SBRT in a system of coverage with evidence development, correct financing had to be determined. A time-driven activity-based costing model for radiotherapy was developed. Resource cost calculation of all radiotherapy treatments, standard and innovative, was conducted in 10 Belgian radiotherapy centers in the second half of 2012. The average cost of lung SBRT across the 10 centers (€6221) is in the range of the average costs of standard fractionated 3D-conformal radiotherapy (€5919) and intensity-modulated radiotherapy (€7379) for lung cancer. Hypofractionated 3D-conformal radiotherapy and intensity-modulated radiotherapy schemes are less costly (€3993 and €4730, respectively). The SBRT cost increases with the number of fractions and is highly dependent on personnel and equipment use. SBRT cost varies more by centre than conventional radiotherapy cost, reflecting different technologies, stages in the learning curve and a lack of clear guidance in this field. Time-driven activity-based costing of radiotherapy is feasible in a multicentre setup, resulting in real-life resource costs that can form the basis for correct reimbursement schemes, supporting an early yet controlled introduction of innovative radiotherapy techniques in clinical practice.

  6. Analysis of temporal gene expression profiles: clustering by simulated annealing and determining the optimal number of clusters.

    PubMed

    Lukashin, A V; Fuchs, R

    2001-05-01

    Cluster analysis of genome-wide expression data from DNA microarray hybridization studies has proved to be a useful tool for identifying biologically relevant groupings of genes and samples. In the present paper, we focus on several important issues related to clustering algorithms that have not yet been fully studied. We describe a simple and robust algorithm for the clustering of temporal gene expression profiles that is based on the simulated annealing procedure. In general, this algorithm is guaranteed to eventually find the globally optimal distribution of genes over clusters. We introduce an iterative scheme that serves to evaluate quantitatively the optimal number of clusters for each specific data set. The scheme is based on standard approaches used in regular statistical tests. The basic idea is to organize the search for the optimal number of clusters simultaneously with the optimization of the distribution of genes over clusters. The efficiency of the proposed algorithm has been evaluated by means of a reverse engineering experiment, that is, a situation in which the correct distribution of genes over clusters is known a priori. The employment of this statistically rigorous test has shown that our algorithm places more than 90% of genes into the correct clusters. Finally, the algorithm has been tested on real gene expression data (expression changes during the yeast cell cycle) for which the fundamental patterns of gene expression and the assignment of genes to clusters are well understood from numerous previous studies.
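
    The core of such an annealer is compact. The sketch below uses single-gene reassignment moves with geometric cooling on the within-cluster sum of squares; the authors' exact move set, cooling schedule, and cluster-number test are not reproduced.

    ```python
    import numpy as np

    def sa_cluster(X, k, n_steps=20000, t0=1.0, cooling=0.9995, seed=0):
        """Simulated-annealing clustering of the rows of X into k clusters,
        minimizing the within-cluster sum of squared distances."""
        rng = np.random.default_rng(seed)
        labels = rng.integers(0, k, size=len(X))

        def cost(lab):
            c = 0.0
            for j in range(k):
                members = X[lab == j]
                if len(members):
                    c += ((members - members.mean(axis=0)) ** 2).sum()
            return c

        e, t = cost(labels), t0
        for _ in range(n_steps):
            i, new = rng.integers(len(X)), rng.integers(k)
            if new == labels[i]:
                continue
            trial = labels.copy()
            trial[i] = new
            e_new = cost(trial)
            # Metropolis rule: accept improvements, sometimes accept worse moves.
            if e_new < e or rng.random() < np.exp((e - e_new) / t):
                labels, e = trial, e_new
            t *= cooling                      # geometric cooling schedule
        return labels
    ```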

  7. Recent assimilation developments of FOAM the Met Office ocean forecast system

    NASA Astrophysics Data System (ADS)

    Lea, Daniel; Martin, Matthew; Waters, Jennifer; Mirouze, Isabelle; While, James; King, Robert

    2015-04-01

    FOAM is the Met Office's operational ocean forecasting system. The system comprises a range of models, from a 1/4 degree resolution global model to 1/12 degree resolution regional models and shelf seas models at 7 km resolution. It is made up of the ocean model NEMO (Nucleus for European Modelling of the Ocean), the Los Alamos sea ice model CICE and the NEMOVAR assimilation scheme run in 3D-VAR FGAT mode. Work is ongoing to transition both to a higher resolution global ocean model at 1/12 degrees and to run FOAM in coupled models. The FOAM system generally performs well. One area of concern, however, is the performance in the tropics, where spurious oscillations and excessive vertical velocity gradients are found after assimilation. NEMOVAR includes a balance operator which, in the extra-tropics, uses geostrophic balance to produce velocity increments which balance the density increments applied. In the tropics, however, the main balance is between the pressure gradients produced by the density gradient and the applied wind stress. A scheme is presented which aims to maintain this balance when increments are applied. Another issue in FOAM is that there are sometimes persistent temperature and salinity errors which are not effectively corrected by the assimilation. The standard NEMOVAR has a single correlation length scale based on the local Rossby radius. This means that observations in the extra-tropics influence the model only on short length-scales. In order to maximise the information extracted from the observations and to correct large scale model biases, a multiple correlation length-scale scheme has been developed. This includes a larger length scale which spreads observation information further. Various refinements of the scheme are also explored, including reducing the longer length-scale component at the edge of the sea ice and in areas with high potential vorticity gradients. A related scheme which varies the correlation length scale in the shelf seas is also described.

  8. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) the header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction in these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER, since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) into headers and improves the compression schemes, which provides better tolerance in conditions with a high BER.

  9. Automated recognition of helium speech. Phase I: Investigation of microprocessor based analysis/synthesis system

    NASA Astrophysics Data System (ADS)

    Jelinek, H. J.

    1986-01-01

    This is the Final Report of Electronic Design Associates on its Phase I SBIR project. The purpose of this project is to develop a method for correcting helium speech, as experienced in diver-surface communication. The goal of the Phase I study was to design, prototype, and evaluate a real time helium speech corrector system based upon digital signal processing techniques. The general approach was to develop hardware (an IBM PC board) to digitize helium speech and software (a LAMBDA computer based simulation) to translate the speech. As planned in the study proposal, this initial prototype may now be used to assess the expected performance of a self contained real time system which uses an identical algorithm. The Final Report details the work carried out to produce the prototype system. There were four major project tasks: a signal processing scheme for converting helium speech to normal sounding speech was generated; the signal processing scheme was simulated on a general purpose (LAMBDA) computer, with actual helium speech supplied to the simulation and the converted speech generated; an IBM-PC based 14 bit data input/output board was designed and built; and a bibliography of references on speech processing was generated.

  10. On edge-aware path-based color spatial sampling for Retinex: from Termite Retinex to Light Energy-driven Termite Retinex

    NASA Astrophysics Data System (ADS)

    Simone, Gabriele; Cordone, Roberto; Serapioni, Raul Paolo; Lecca, Michela

    2017-05-01

    Retinex theory estimates the human color sensation at any observed point by correcting its color based on the spatial arrangement of the colors in proximate regions. We revise two recent path-based, edge-aware Retinex implementations: Termite Retinex (TR) and Energy-driven Termite Retinex (ETR). Like the original Retinex implementation, TR and ETR scan the neighborhood of any image pixel by paths and rescale their chromatic intensities by intensity levels computed by reworking the colors of the pixels on the paths. Our interest in TR and ETR is due to their unique, content-based scanning scheme, which uses the image edges to define the paths and exploits a swarm intelligence model for guiding the spatial exploration of the image. The exploration scheme of ETR has been shown to be particularly effective: its paths are local minima of an energy functional, designed to favor the sampling of image pixels highly relevant to color sensation. Nevertheless, since its computational complexity makes ETR scarcely practicable, here we present a light version of it, named Light Energy-driven TR, obtained from ETR by implementing a modified, optimized minimization procedure and by exploiting parallel computing.

  11. CLASSICAL AREAS OF PHENOMENOLOGY: Correcting dynamic residual aberrations of conformal optical systems using AO technology

    NASA Astrophysics Data System (ADS)

    Li, Yan; Li, Lin; Huang, Yi-Fan; Du, Bao-Lin

    2009-07-01

    This paper analyses the dynamic residual aberrations of a conformal optical system and introduces adaptive optics (AO) correction technology to this system. The image-sharpening AO system is chosen as the correction scheme. Communication between MATLAB and Code V is established via the ActiveX technique in computer simulation. The SPGD algorithm is operated at seven zoom positions to calculate the optimized surface shape of the deformable mirror. After comparison of the performance of the corrected system with the baseline system, AO technology is shown to be an effective way of correcting the dynamic residual aberrations in conformal optical design.
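
    The SPGD update itself takes only a few lines; the sketch below assumes a caller-supplied image-sharpness metric J(u) over the deformable-mirror actuator commands, e.g. evaluated through the MATLAB/Code V link described above.

    ```python
    import numpy as np

    def spgd(metric, n_act, gain=0.5, delta=0.05, iters=500, seed=0):
        """Stochastic parallel gradient descent: maximize a sharpness metric
        J(u) by dithering all actuators in parallel with random +/- delta
        perturbations and stepping along the estimated gradient."""
        rng = np.random.default_rng(seed)
        u = np.zeros(n_act)                                  # actuator commands
        for _ in range(iters):
            d = delta * rng.choice([-1.0, 1.0], size=n_act)  # random dither
            dJ = metric(u + d) - metric(u - d)               # two-sided probe
            u += gain * dJ * d                               # parallel update
        return u
    ```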

  12. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    NASA Astrophysics Data System (ADS)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  13. Intensity non-uniformity correction in MRI: existing methods and their validation.

    PubMed

    Belaroussi, Boubakeur; Milles, Julien; Carme, Sabin; Zhu, Yue Min; Benoit-Cattin, Hugues

    2006-04-01

    Magnetic resonance imaging is a popular and powerful non-invasive imaging technique. Automated analysis has become mandatory to efficiently cope with the large amount of data generated using this modality. However, several artifacts, such as intensity non-uniformity, can degrade the quality of acquired data. Intensity non-uniformity consists of anatomically irrelevant intensity variation throughout the data. It can be induced by the choice of the radio-frequency coil, the acquisition pulse sequence and by the nature and geometry of the sample itself. Numerous methods have been proposed to correct this artifact. In this paper, we propose an overview of existing methods. We first sort them according to their location in the acquisition/processing pipeline. Sorting is then refined based on the assumptions those methods rely on. Next, we present the validation protocols used to evaluate these different correction schemes, both from a qualitative and a quantitative point of view. Finally, the availability and usability of the presented methods are discussed.

  14. Calculations of separated 3-D flows with a pressure-staggered Navier-Stokes equations solver

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.

    1991-01-01

    A Navier-Stokes equations solver based on a pressure correction method with a pressure-staggered mesh and calculations of separated three-dimensional flows are presented. It is shown that the velocity-pressure decoupling, which occurs when various pressure correction algorithms are used for pressure-staggered meshes, is caused by the ill-conditioned discrete pressure correction equation. The use of a partial differential equation for the incremental pressure eliminates the velocity-pressure decoupling mechanism by itself and yields accurate numerical results. Example flows considered are a three-dimensional lid-driven cavity flow and a laminar flow through a square duct with a 90 degree bend. For the lid-driven cavity flow, the present numerical results compare more favorably with the measured data than those obtained using a formally third-order accurate quadratic upwind interpolation scheme. For the curved duct flow, the present numerical method yields a grid-independent solution with a very small number of grid points. The calculated velocity profiles are in good agreement with the measured data.

  15. SU-E-T-472: Improvement of IMRT QA Passing Rate by Correcting Angular Dependence of MatriXX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Q; Watkins, W; Kim, T

    2015-06-15

    Purpose: Multi-channel planar detector arrays utilized for IMRT-QA, such as the MatriXX, exhibit an incident-beam angular dependent response which can result in false-positive gamma-based QA results, especially for helical tomotherapy plans, which encompass the full range of beam angles. Although the MatriXX can be used with a gantry angle sensor that applies the angular correction automatically, this sensor does not work with tomotherapy. The purpose of the study is to reduce IMRT-QA false-positives by correcting for the MatriXX angular dependence. Methods: The MatriXX angular dependence was characterized by comparing multiple fixed-angle irradiation measurements with corresponding TPS computed doses. For 81 Tomo-helical IMRT-QA measurements, two different correction schemes were tested: (1) a Monte-Carlo dose engine was used to compute the MatriXX signal based on the angular-response curve, and the computed signal was then compared with measurement; (2) the uncorrected computed signal was compared with measurements uniformly scaled to account for the average angular dependence. Three scaling factors (+2%, +2.5%, +3%) were tested. Results: The MatriXX response is 8% less than predicted for a PA beam even when the couch is fully accounted for. Without angular correction, only 67% of the cases pass the criterion of >90% of points with γ<1 (3%, 3 mm). After full angular correction, 96% of the cases pass the criterion. Of the three scaling factors, +2% gave the highest passing rate (89%), which is still less than the full angular correction method. With a stricter γ (2%, 3 mm) criterion, the full angular correction method was still able to achieve a 90% passing rate while the scaling method only gave a 53% passing rate. Conclusion: Correction for the MatriXX angular dependence reduced the false-positive rate of our IMRT-QA process. It is necessary to correct for the angular dependence to achieve the IMRT passing criteria specified in TG-129.
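
    Scheme (1) amounts to folding each beam's dose contribution with the detector's angular response before comparing with measurement. The sketch below illustrates that folding with a hypothetical response curve; the ~8% PA under-response echoes the figure quoted above, but the values are not the measured curve.

    ```python
    import numpy as np

    # Hypothetical angular response R(theta): detector signal relative to the
    # TPS prediction as a function of gantry angle (degrees); illustrative only.
    ANGLES = np.array([0, 45, 90, 135, 180, 225, 270, 315, 360], dtype=float)
    RESP   = np.array([1.00, 0.99, 0.97, 0.95, 0.92, 0.95, 0.97, 0.99, 1.00])

    def predicted_signal(dose_per_angle):
        """Fold per-angle dose contributions with the angular response
        before comparison with the measured MatriXX signal."""
        return sum(dose * np.interp(theta % 360.0, ANGLES, RESP)
                   for theta, dose in dose_per_angle)
    ```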

  16. Warm layer and cool skin corrections for bulk water temperature measurements for air-sea interaction studies

    NASA Astrophysics Data System (ADS)

    Alappattu, Denny P.; Wang, Qing; Yamaguchi, Ryan; Lind, Richard J.; Reynolds, Mike; Christman, Adam J.

    2017-08-01

    The sea surface temperature (SST) relevant to air-sea interaction studies is the temperature immediately adjacent to the air, referred to as skin SST. Generally, SST measurements from ships and buoys are taken at depths varying from several centimeters to 5 m below the surface. These measurements, known as bulk SST, can differ from skin SST by up to O(1 °C). Shipboard bulk and skin SST measurements were made during the Coupled Air-Sea Processes and Electromagnetic ducting Research east coast field campaign (CASPER-East). An Infrared SST Autonomous Radiometer (ISAR) recorded skin SST, while R/V Sharp's Surface Mapping System (SMS) provided bulk SST from 1 m water depth. Since the ISAR is sensitive to sea spray and rain, skin SST data are missing in these conditions. However, the SMS measurement is less affected by adverse weather and provided continuous bulk SST measurements. It is therefore desirable to correct the bulk SST to obtain a good representation of the skin SST, which is the objective of this research. The bulk-skin SST difference has been examined with respect to meteorological factors associated with the cool skin and diurnal warm layers. Strong influences of wind speed, diurnal effects, and net longwave radiation flux on the temperature difference are observed. A three-step scheme is established that corrects first for the wind effect, then for diurnal variability, and finally for the dependency on net longwave radiation flux. The scheme is tested and compared with existing correction schemes. This method is able to effectively compensate for multiple factors acting to modify bulk SST measurements over the range of conditions experienced during CASPER-East.
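
    In outline, the three-step construction fits the bulk-skin residual sequentially against each factor. The sketch below uses plain linear fits as placeholders; the published scheme derives its functional forms from the CASPER-East data themselves.

    ```python
    import numpy as np

    def fit_three_step(bulk, skin, wind, hour, lw_net):
        """Fit the bulk-to-skin SST residual in three sequential steps:
        (1) wind speed, (2) diurnal cycle, (3) net longwave flux."""
        resid = np.asarray(skin, float) - np.asarray(bulk, float)
        c_wind = np.polyfit(wind, resid, 1)                 # step 1: wind
        resid = resid - np.polyval(c_wind, wind)
        diurnal = np.cos(2.0 * np.pi * np.asarray(hour) / 24.0)
        c_diur = np.polyfit(diurnal, resid, 1)              # step 2: diurnal
        resid = resid - np.polyval(c_diur, diurnal)
        c_lw = np.polyfit(lw_net, resid, 1)                 # step 3: longwave
        return c_wind, c_diur, c_lw
    ```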

  17. Full self-consistency versus quasiparticle self-consistency in diagrammatic approaches: Exactly solvable two-site Hubbard model

    DOE PAGES

    Kutepov, A. L.

    2015-07-22

    Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ₁ from first-order perturbation theory, and the exact vertex Γ_E). Comparison is made between the cases when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solving of HE and when the QP approximation is not applied. Results obtained with the exact vertex are directly related to the present open question of which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT. It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows problems similar to the case when the exact vertex is applied combined with QP self-consistency. An analysis of Ward Identity violation is performed for all approximations studied in this work, and its relation to the general accuracy of the schemes used is provided.

  18. Full self-consistency versus quasiparticle self-consistency in diagrammatic approaches: exactly solvable two-site Hubbard model.

    PubMed

    Kutepov, A L

    2015-08-12

    Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ1 from first-order perturbation theory, and the exact vertex Γ(E)). Comparison is made between the cases when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solving of HE and when the QP approximation is not applied. The results obtained with the exact vertex are directly related to the present open question of which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT. It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows problems similar to the case when the exact vertex is applied combined with QP self-consistency. An analysis of Ward Identity violation is performed for all approximations studied in this work, and its relation to the general accuracy of the schemes used is provided.

  19. An empirical method to correct for temperature-dependent variations in the overlap function of CHM15k ceilometers

    NASA Astrophysics Data System (ADS)

    Hervo, Maxime; Poltera, Yann; Haefele, Alexander

    2016-07-01

    Imperfections in a lidar's overlap function lead to artefacts in the background, range and overlap-corrected lidar signals. These artefacts can erroneously be interpreted as an aerosol gradient or, in extreme cases, as a cloud base leading to false cloud detection. A correct specification of the overlap function is hence crucial in the use of automatic elastic lidars (ceilometers) for the detection of the planetary boundary layer or of low cloud. In this study, an algorithm is presented to correct such artefacts. It is based on the assumption of a homogeneous boundary layer and a correct specification of the overlap function down to a minimum range, which must be situated within the boundary layer. The strength of the algorithm lies in a sophisticated quality-check scheme which allows the reliable identification of favourable atmospheric conditions. The algorithm was applied to 2 years of data from a CHM15k ceilometer from the company Lufft. Backscatter signals corrected for background, range and overlap were compared using the overlap function provided by the manufacturer and the one corrected with the presented algorithm. Differences between corrected and uncorrected signals reached up to 45 % in the first 300 m above ground. The amplitude of the correction turned out to be temperature dependent and was larger for higher temperatures. A linear model of the correction as a function of the instrument's internal temperature was derived from the experimental data. Case studies and a statistical analysis of the strongest gradient derived from corrected signals reveal that the temperature model is capable of a high-quality correction of overlap artefacts, in particular those due to diurnal variations. The presented correction method has the potential to significantly improve the detection of the boundary layer with gradient-based methods because it removes false candidates and hence simplifies the attribution of the detected gradients to the planetary boundary layer. A particularly significant benefit can be expected for the detection of shallow stable layers typical of night-time situations. The algorithm is completely automatic and does not require any on-site intervention but requires the definition of an adequate instrument-specific configuration. It is therefore suited for use in large ceilometer networks.
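
    The resulting temperature model reduces to a multiplicative, range-dependent correction of the manufacturer overlap function that is linear in the instrument's internal temperature. A sketch follows, with the fitted coefficient profiles a(r), b(r) and the reference temperature left as assumptions.

    ```python
    def corrected_overlap(overlap_ref, a, b, r, temp, temp_ref=25.0):
        """Temperature-dependent overlap correction (sketch): the relative
        correction at range r is modeled as c(r, T) = a(r)*(T - T_ref) + b(r)
        and applied multiplicatively to the reference overlap function."""
        c = a(r) * (temp - temp_ref) + b(r)
        return overlap_ref(r) * (1.0 + c)
    ```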

  20. Application of wavelet multi-resolution analysis for correction of seismic acceleration records

    NASA Astrophysics Data System (ADS)

    Ansari, Anooshiravan; Noorzad, Assadollah; Zare, Mehdi

    2007-12-01

    During an earthquake, many stations record the ground motion, but only a few of the records can be corrected using conventional high-pass and low-pass filtering methods; the others are identified as highly contaminated by noise and, as a result, useless. There are two major problems associated with these noisy records. First, since the signal-to-noise ratio (S/N) is low, it is not possible to discriminate between the original signal and the noise either in the frequency domain or in the time domain. Consequently, it is not possible to cancel out the noise using conventional filtering methods. The second problem is the non-stationary character of the noise. In other words, in many cases the characteristics of the noise vary over time, and in these situations it is not possible to apply frequency domain correction schemes. When correcting acceleration signals contaminated with high-level non-stationary noise, an important question is whether it is possible to estimate the state of the noise in different bands of time and frequency. Wavelet multi-resolution analysis decomposes a signal into different time-frequency components and, besides introducing a suitable criterion for identification of the noise among the components, also provides the required mathematical tool for correction of highly noisy acceleration records. In this paper, the characteristics of the wavelet de-noising procedures are examined through the correction of selected real and synthetic acceleration time histories. It is concluded that this method provides a very flexible and efficient tool for the correction of very noisy and non-stationary records of ground acceleration. In addition, a two-step correction scheme is proposed for long period correction of the acceleration records. This method has the advantage of stable results in the displacement time history and response spectrum.
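
    A minimal wavelet multi-resolution de-noising pass is sketched below, assuming the PyWavelets package and a robust universal threshold applied per detail band; the paper's specific noise criterion and its two-step long-period correction are not reproduced.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def wavelet_denoise(accel, wavelet="db4", level=6):
        """Threshold each detail band of an acceleration record separately,
        so noise can be suppressed where it occurs in time and frequency."""
        coeffs = pywt.wavedec(accel, wavelet, level=level)
        out = [coeffs[0]]                                # keep approximation
        for d in coeffs[1:]:
            sigma = np.median(np.abs(d)) / 0.6745        # robust noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(len(accel)))
            out.append(pywt.threshold(d, thr, mode="soft"))
        return pywt.waverec(out, wavelet)
    ```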

  1. An analysis of four error detection and correction schemes for the proposed Federal standard 1024 (land mobile radio)

    NASA Astrophysics Data System (ADS)

    Lohrmann, Carol A.

    1990-03-01

    Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs; namely, the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard decision Golay and soft decision Golay), tested on three FORTRAN programmed channel simulations (INMARSAT, Gaussian and constant burst width), compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that a best code can be recommended. Out of the four codes under study, the soft decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.

  2. High-quality 3D correction of ring and radiant artifacts in flat panel detector-based cone beam volume CT imaging

    NASA Astrophysics Data System (ADS)

    Abu Anas, Emran Mohammad; Kim, Jae Gon; Lee, Soo Yeol; Kamrul Hasan, Md

    2011-10-01

    The use of an x-ray flat panel detector is increasingly becoming popular in 3D cone beam volume CT machines. Due to deficiencies in the semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in a cone beam reconstruction image, and as a result, the diagnostic image quality is degraded. In this paper, a novel technique is presented for the correction of errors in the 2D cone beam projections due to abnormalities often observed in 2D x-ray flat panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties, and then an effective non-causal derivative-based detection algorithm in 2D space is presented for the detection of defective and mis-calibrated detector elements separately. An image inpainting-based 3D correction scheme is proposed for the estimation of the responses of defective detector elements, and the responses of the mis-calibrated detector elements are corrected using the normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images, and the experimental results demonstrate that the proposed methods can effectively remove ring and radiant artifacts from cone beam volume CT images compared to other reported techniques in the literature.
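
    The detect-then-fill idea can be caricatured in a few lines: compare each detector element with a local median template, flag strong outliers, and replace them from the template. This is a deliberately simplified stand-in for the paper's non-causal derivative-based detection and inpainting-based 3D correction.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def detect_and_correct(frame, size=5, nsigma=4.0):
        """Flag detector elements deviating strongly from a local median
        template and fill them with the template value."""
        template = median_filter(frame, size=size)
        resid = frame - template
        sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # MAD
        bad = np.abs(resid) > nsigma * sigma       # defective/mis-calibrated
        corrected = np.where(bad, template, frame)
        return corrected, bad
    ```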

  3. Can Regional Climate Models be used in the assessment of vulnerability and risk caused by extreme events?

    NASA Astrophysics Data System (ADS)

    Nunes, Ana

    2015-04-01

    Extreme meteorological events played an important role in catastrophic occurrences observed in the past over densely populated areas in Brazil. This motivated the proposal of an integrated system for analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. That requires a multi-scale approach, which is centered on a regional modeling system consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, letting the precipitation assimilation procedure correct moisture convergence, in order to reconstruct South American current climate within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually produce improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions, and providing accurate hydrometeorological variables to higher resolution geomorphological models. Better representation of deep convection at intermediate scales is relevant when the resolution of the regional modeling system is refined by any method to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and the estimation of terrain stability over complex topography. The reconstruction of past extreme events also helps the development of a system for decision-making regarding natural and social disasters, reducing their impacts. Numerical experiments using this regional modeling system successfully modeled severe weather events in Brazil. Comparisons with the NCEP Climate Forecast System Reanalysis outputs were made at resolutions of about 40 and 25 km of the regional climate model.

  4. CENTERA: A Centralized Trust-Based Efficient Routing Protocol with Authentication for Wireless Sensor Networks †

    PubMed Central

    Tajeddine, Ayman; Kayssi, Ayman; Chehab, Ali; Elhajj, Imad; Itani, Wassim

    2015-01-01

    In this paper, we present CENTERA, a CENtralized Trust-based Efficient Routing protocol with an appropriate authentication scheme for wireless sensor networks (WSN). CENTERA utilizes the more powerful base station (BS) to gather minimal neighbor trust information from nodes and calculate the best routes after isolating different types of “bad” nodes. By periodically accumulating these simple local observations and approximating the nodes' battery lives, the BS draws a global view of the network, calculates three quality metrics (maliciousness, cooperation, and compatibility) and evaluates the Data Trust and Forwarding Trust values of each node. Based on these metrics, the BS isolates “bad”, “misbehaving” or malicious nodes for a certain period, and puts some nodes on probation. CENTERA increases the node's bad/probation level with repeated “bad” behavior, and decreases it otherwise. Then it uses a very efficient method to distribute the routing information to “good” nodes. Based on its target environment, and if required, CENTERA uses an authentication scheme suitable for severely constrained nodes, ranging from the symmetric RC5 for safe environments under close administration, to pairing-based cryptography (PBC) for hostile environments with a strong attacker model. We simulate CENTERA using TOSSIM and verify its correctness and show some energy calculations. PMID:25648712

  5. CENTERA: a centralized trust-based efficient routing protocol with authentication for wireless sensor networks.

    PubMed

    Tajeddine, Ayman; Kayssi, Ayman; Chehab, Ali; Elhajj, Imad; Itani, Wassim

    2015-02-02

    In this paper, we present CENTERA, a CENtralized Trust-based Efficient Routing protocol with an appropriate authentication scheme for wireless sensor networks (WSN). CENTERA utilizes the more powerful base station (BS) to gather minimal neighbor trust information from nodes and calculate the best routes after isolating different types of "bad" nodes. By periodically accumulating these simple local observations and approximating the nodes' battery lives, the BS draws a global view of the network, calculates three quality metrics (maliciousness, cooperation, and compatibility) and evaluates the Data Trust and Forwarding Trust values of each node. Based on these metrics, the BS isolates "bad", "misbehaving" or malicious nodes for a certain period, and puts some nodes on probation. CENTERA increases the node's bad/probation level with repeated "bad" behavior, and decreases it otherwise. Then it uses a very efficient method to distribute the routing information to "good" nodes. Based on its target environment, and if required, CENTERA uses an authentication scheme suitable for severely constrained nodes, ranging from the symmetric RC5 for safe environments under close administration, to pairing-based cryptography (PBC) for hostile environments with a strong attacker model. We simulate CENTERA using TOSSIM and verify its correctness and show some energy calculations.

  6. Evaluation of respiratory and cardiac motion correction schemes in dual gated PET/CT cardiac imaging.

    PubMed

    Lamare, F; Le Maitre, A; Dawood, M; Schäfers, K P; Fernandez, P; Rimoldi, O E; Visvikis, D

    2014-07-01

    Cardiac imaging suffers from both respiratory and cardiac motion. One of the proposed solutions involves double gated acquisitions. Although such an approach may lead to both respiratory and cardiac motion compensation, there are issues associated with (a) the combination of data from cardiac and respiratory motion bins, and (b) poor statistical quality images as a result of using only part of the acquired data. The main objective of this work was to evaluate different schemes of combining binned data in order to identify the best strategy to reconstruct motion free cardiac images from dual gated positron emission tomography (PET) acquisitions. A digital phantom study as well as seven human studies were used in this evaluation. PET data were acquired in list mode (LM). A real-time position management system and an electrocardiogram device were used to provide the respiratory and cardiac motion triggers registered within the LM file. Acquired data were subsequently binned considering four and six cardiac gates, or the diastole only, in combination with eight respiratory amplitude gates. PET images were corrected for attenuation, but neither randoms nor scatter corrections were included. Reconstructed images from each of the bins considered above were subsequently used in combination with an affine or an elastic registration algorithm to derive transformation parameters allowing the combination of all acquired data in a particular position in the cardiac and respiratory cycles. Images were assessed in terms of signal-to-noise ratio (SNR), contrast, image profile, coefficient-of-variation (COV), and relative difference of the recovered activity concentration. Regardless of the considered motion compensation strategy, the nonrigid motion model performed better than the affine model, leading to higher SNR and contrast combined with a lower COV. Nevertheless, when compensating for respiration only, no statistically significant differences were observed in the performance of the two motion models considered. Superior image SNR and contrast were seen using the affine respiratory motion model in combination with the diastole cardiac bin in comparison to the use of the whole cardiac cycle. In contrast, when simultaneously correcting for cardiac beating and respiration, the elastic respiratory motion model outperformed the affine model. In this context, four cardiac bins associated with eight respiratory amplitude bins seemed adequate. Considering the compensation of respiratory motion effects only, both affine and elastic based approaches led to an accurate resizing and positioning of the myocardium. The use of the diastolic phase combined with an affine model based respiratory motion correction may therefore be a simple approach leading to significant quality improvements in cardiac PET imaging. However, the best performance was obtained with the combined correction for both cardiac and respiratory movements considering all the dual-gated bins independently through the use of an elastic model based motion compensation.

  7. Intercomparison of methods for coincidence summing corrections in gamma-ray spectrometry--part II (volume sources).

    PubMed

    Lépy, M-C; Altzitzoglou, T; Anagnostakis, M J; Capogni, M; Ceccatelli, A; De Felice, P; Djurasevic, M; Dryak, P; Fazio, A; Ferreux, L; Giampaoli, A; Han, J B; Hurtado, S; Kandic, A; Kanisch, G; Karfopoulos, K L; Klemola, S; Kovar, P; Laubenstein, M; Lee, J H; Lee, J M; Lee, K B; Pierre, S; Carvalhal, G; Sima, O; Tao, Chau Van; Thanh, Tran Thien; Vidmar, T; Vukanac, I; Yang, M J

    2012-09-01

    The second part of an intercomparison of the coincidence summing correction methods is presented. This exercise concerned three volume sources, filled with liquid radioactive solution. The same experimental spectra, decay scheme and photon emission intensities were used by all the participants. The results were expressed as coincidence summing corrective factors for several energies of (152)Eu and (134)Cs, and different source-to-detector distances. They are presented and discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Virtex-5QV Self Scrubber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojahn, Christopher K.

    2015-10-20

    This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.

  9. High resolution schemes and the entropy condition

    NASA Technical Reports Server (NTRS)

    Osher, S.; Chakravarthy, S.

    1983-01-01

    A systematic procedure for constructing semidiscrete, second order accurate, variation diminishing, five point band width, approximations to scalar conservation laws, is presented. These schemes are constructed to also satisfy a single discrete entropy inequality. Thus, in the convex flux case, convergence is proven to be the unique physically correct solution. For hyperbolic systems of conservation laws, this construction is used formally to extend the first author's first order accurate scheme, and show (under some minor technical hypotheses) that limit solutions satisfy an entropy inequality. Results concerning discrete shocks, a maximum principle, and maximal order of accuracy are obtained. Numerical applications are also presented.

  10. A predictor-corrector scheme for vortex identification

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.; Banks, David C.

    1994-01-01

    A new algorithm for identifying and characterizing vortices in complex flows is presented. The scheme uses both the vorticity and pressure fields. A skeleton line along the center of a vortex is produced by a two-step predictor-corrector scheme. The technique uses the vector field to move in the direction of the skeleton line and the scalar field to correct the location in the plane perpendicular to the skeleton line. A general vortex cross section can be concisely defined with five parameters at each point along the skeleton line. The details of the method and examples of its use are discussed.
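
    The two-step structure can be sketched as follows, with the vorticity field and the in-plane pressure-minimum search (the corrector) supplied as callables; the paper's specific corrector is not reproduced here.

    ```python
    import numpy as np

    def trace_skeleton(x0, vorticity, correct_in_plane, step=0.1, n=200):
        """Predictor-corrector vortex-core tracing: predict along the
        normalized vorticity vector, then correct the point within the
        plane perpendicular to it using the scalar (pressure) field."""
        pts = [np.asarray(x0, dtype=float)]
        for _ in range(n):
            w = vorticity(pts[-1])
            w = w / np.linalg.norm(w)               # local core direction
            pred = pts[-1] + step * w               # predictor step
            pts.append(correct_in_plane(pred, w))   # corrector step
        return np.array(pts)
    ```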

  11. Efficiency of coherent-state quantum cryptography in the presence of loss: Influence of realistic error correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heid, Matthias; Luetkenhaus, Norbert

    2006-05-15

    We investigate the performance of a continuous-variable quantum key distribution scheme in a practical setting. More specifically, we take a nonideal error reconciliation procedure into account. The quantum channel connecting the two honest parties is assumed to be lossy but noiseless. Secret key rates are given for the case that the measurement outcomes are postselected or a reverse reconciliation scheme is applied. The reverse reconciliation scheme loses its initial advantage in the practical setting. If one combines postselection with reverse reconciliation, however, much of this advantage can be recovered.

  12. Density matrix renormalization group for a highly degenerate quantum system: Sliding environment block approach

    NASA Astrophysics Data System (ADS)

    Schmitteckert, Peter

    2018-04-01

    We present an infinite lattice density matrix renormalization group sweeping procedure which can be used as a replacement for the standard infinite lattice blocking schemes. Although the scheme is generally applicable to any system, its main advantages are the correct representation of commensurability issues and the treatment of degenerate systems. As an example we apply the method to a spin chain featuring a highly degenerate ground-state space where the new sweeping scheme provides an increase in performance as well as accuracy by many orders of magnitude compared to a recently published work.

  13. Secure Wake-Up Scheme for WBANs

    NASA Astrophysics Data System (ADS)

    Liu, Jing-Wei; Ameen, Moshaddique Al; Kwak, Kyung-Sup

    Network lifetime, and hence device lifetime, is one of the fundamental metrics in wireless body area networks (WBAN). To prolong it, especially for implanted sensors, each node must conserve its energy as much as possible. While a variety of wake-up/sleep mechanisms have been proposed, the wake-up radio potentially serves as a vehicle to introduce vulnerabilities and attacks into the WBAN, eventually resulting in its malfunction. In this paper, we propose a novel secure wake-up scheme, in which a wake-up authentication code (WAC) is employed to ensure that a BAN Node (BN) is woken up by the correct BAN Network Controller (BNC) rather than by unintended users or malicious attackers. The scheme is implemented using a two-radio architecture. We show that our scheme provides higher security while consuming less energy than the existing schemes.
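
    As an illustration of the WAC idea (not the paper's concrete construction), a wake-up frame can carry a nonce plus a truncated MAC under a key pre-shared between the BNC and the BN, so the wake-up radio triggers the main radio only for authentic frames.

    ```python
    import hmac, hashlib, os

    KEY = os.urandom(16)   # pre-shared BNC/BN key (placeholder provisioning)

    def make_wakeup_frame(bn_id: bytes) -> bytes:
        """BNC side: frame = BN identifier || nonce || truncated HMAC tag."""
        nonce = os.urandom(4)
        tag = hmac.new(KEY, bn_id + nonce, hashlib.sha256).digest()[:4]
        return bn_id + nonce + tag

    def verify_wakeup_frame(frame: bytes, my_id: bytes) -> bool:
        """BN side: wake the main radio only if the frame authenticates."""
        bn_id, nonce, tag = frame[:-8], frame[-8:-4], frame[-4:]
        expected = hmac.new(KEY, bn_id + nonce, hashlib.sha256).digest()[:4]
        return bn_id == my_id and hmac.compare_digest(tag, expected)
    ```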

  14. MPDATA: Third-order accuracy for variable flows

    NASA Astrophysics Data System (ADS)

    Waruszewski, Maciej; Kühnlein, Christian; Pawlowska, Hanna; Smolarkiewicz, Piotr K.

    2018-04-01

    This paper extends the multidimensional positive definite advection transport algorithm (MPDATA) to third-order accuracy for temporally and spatially varying flows. This is accomplished by identifying the leading truncation error of the standard second-order MPDATA, performing the Cauchy-Kowalevski procedure to express it in a spatial form, and compensating its discrete representation, much in the same way as the standard MPDATA corrects the first-order accurate upwind scheme. The procedure of deriving the spatial form of the truncation error was automated using a computer algebra system. This enables various options in MPDATA to be included straightforwardly in the third-order scheme, thereby minimising the implementation effort in existing code bases. Following the spirit of MPDATA, the error is compensated using the upwind scheme, resulting in a sign-preserving algorithm, and the entire scheme can be formulated using only two upwind passes. Established MPDATA enhancements, such as the formulation in generalised curvilinear coordinates, the nonoscillatory option or the infinite-gauge variant, carry over to the fully third-order accurate scheme. A manufactured 3D analytic solution is used to verify the theoretical development and its numerical implementation, whereas global tracer-transport benchmarks demonstrate benefits for chemistry-transport models fundamental to air quality monitoring, forecasting and control. A series of explicitly inviscid implicit large-eddy simulations of a convective boundary layer and explicitly viscid simulations of a double shear layer illustrate the advantages of the fully third-order-accurate MPDATA for fluid dynamics applications.
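
    For orientation, the sketch below shows the core MPDATA idea that the paper builds on (a first-order upwind pass followed by a corrective upwind pass with an antidiffusive pseudo-velocity), in its simplest 1-D, periodic, constant-Courant-number form for non-negative fields; the third-order correction derived in the paper adds further terms not shown here.

      import numpy as np

      def upwind_flux(psi, c):
          # donor-cell flux at i+1/2 for Courant number(s) c located at i+1/2
          return np.maximum(c, 0.0) * psi + np.minimum(c, 0.0) * np.roll(psi, -1)

      def mpdata_step(psi, c, eps=1e-15):
          # pass 1: first-order upwind
          f = upwind_flux(psi, c)
          psi1 = psi - (f - np.roll(f, 1))
          # antidiffusive pseudo-velocity compensating the upwind truncation error
          psiR = np.roll(psi1, -1)
          cd = (np.abs(c) - c**2) * (psiR - psi1) / (psiR + psi1 + eps)
          # pass 2: corrective upwind step driven by the pseudo-velocity
          f2 = upwind_flux(psi1, cd)
          return psi1 - (f2 - np.roll(f2, 1))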

  15. Secure Obfuscation for Encrypted Group Signatures

    PubMed Central

    Fan, Hongfei; Liu, Qin

    2015-01-01

    In recent years, group signature techniques have been widely used in constructing privacy-preserving security schemes for various information systems. However, conventional techniques keep the schemes secure only in normal black-box attack contexts. In other words, these schemes suppose that (the implementation of) the group signature generation algorithm is running on a platform that is perfectly protected from various intrusions and attacks. As a complement to existing studies, how to generate group signatures securely in a more austere security context, such as a white-box attack context, is studied in this paper. We use obfuscation as an approach to acquire a higher level of security. Concretely, we introduce a special group signature functionality, an encrypted group signature, and then provide an obfuscator for the proposed functionality. A series of new security notions for both the functionality and its obfuscator is introduced. The most important one is the average-case secure virtual black-box property w.r.t. dependent oracles and restricted dependent oracles, which captures the requirement of protecting the output of the proposed obfuscator against collusion attacks from group members. The security notions also fit many other specialized obfuscators, such as obfuscators for identity-based signatures, threshold signatures and key-insulated signatures. Finally, the correctness and security of the proposed obfuscator are proven. Thereby, the obfuscated encrypted group signature functionality can be applied to variants of privacy-preserving security schemes and enhance the security level of these schemes. PMID:26167686

  16. ON THE USE OF SHOT NOISE FOR PHOTON COUNTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zmuidzinas, Jonas, E-mail: jonas@caltech.edu

    Lieu et al. have recently claimed that it is possible to substantially improve the sensitivity of radio-astronomical observations. In essence, their proposal is to make use of the intensity of the photon shot noise as a measure of the photon arrival rate. Lieu et al. provide a detailed quantum-mechanical calculation of a proposed measurement scheme that uses two detectors and conclude that this scheme avoids the sensitivity degradation that is associated with photon bunching. If correct, this result could have a profound impact on radio astronomy. Here I present a detailed analysis of the sensitivity attainable using shot-noise measurement schemes that use either one or two detectors, and demonstrate that neither scheme can avoid the photon bunching penalty. I perform both semiclassical and fully quantum calculations of the sensitivity, obtaining consistent results, and provide a formal proof of the equivalence of these two approaches. These direct calculations are furthermore shown to be consistent with an indirect argument based on a correlation method that establishes an independent limit to the sensitivity of shot-noise measurement schemes. Furthermore, these calculations are directly applicable to the regime of interest identified by Lieu et al. Collectively, these results conclusively demonstrate that the photon-bunching sensitivity penalty applies to shot-noise measurement schemes just as it does to ordinary photon counting, in contradiction to the fundamental claim made by Lieu et al. The source of this contradiction is traced to a logical fallacy in their argument.

  17. Analyzing the Effectiveness of the Self-organized Public-Key Management System on MANETs under the Lack of Cooperation and the Impersonation Attacks

    NASA Astrophysics Data System (ADS)

    da Silva, Eduardo; Dos Santos, Aldri Luiz; Lima, Michele N.; Albini, Luiz Carlos Pessoa

    Among the key management schemes for MANETs, the Self-Organized Public-Key Management System (PGP-Like) is the main chaining-based key management scheme. It is fully self-organized and does not require any certificate authority. Two kinds of misbehavior attacks are considered to be great threats to PGP-Like: lack of cooperation and impersonation attacks. This work quantifies the impact of such attacks on PGP-Like. Simulation results show that PGP-Like was able to maintain its effectiveness when subjected to the lack of cooperation attack, contradicting previous theoretical results. It works correctly even in the presence of more than 60% misbehaving nodes, although the convergence time is affected with only 20% misbehaving nodes. On the other hand, PGP-Like is completely vulnerable to the impersonation attack. Its functionality is affected with just 5% misbehaving nodes, confirming previous theoretical results.

  18. Pros and Cons of the Acceleration Scheme (NF-IDS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogacz, Alex; Bogacz, Slawomir

    The overall goal of the acceleration systems, large-acceptance acceleration to 25 GeV and beam shaping, can be accomplished by various fixed-field accelerators at different stages. They involve three superconducting linacs: a single-pass linear Pre-accelerator followed by a pair of multi-pass Recirculating Linear Accelerators (RLA) and finally a non-scaling FFAG ring. The present baseline acceleration scenario has been optimized to take maximum advantage of the appropriate acceleration scheme at a given stage. Pros and cons of the various stages are discussed here in detail. The solenoid-based Pre-accelerator offers very large acceptance and facilitates correction of energy gain across the bunch and significant longitudinal compression through induced synchrotron motion. However, far off-crest acceleration reduces the effective acceleration gradient and adds complexity through the requirement of individual RF phase control for each cavity. Close proximity of strong solenoids and superconducting …

  19. Feed-forward frequency offset estimation for 32-QAM optical coherent detection.

    PubMed

    Xiao, Fei; Lu, Jianing; Fu, Songnian; Xie, Chenhui; Tang, Ming; Tian, Jinwen; Liu, Deming

    2017-04-17

    Due to the non-rectangular distribution of the constellation points, traditional fast Fourier transform based frequency offset estimation (FFT-FOE) is no longer suitable for 32-QAM signals. Here, we report a modified FFT-FOE technique that selects and digitally amplifies the inner QPSK ring of 32-QAM after adaptive equalization, which we define as QPSK-selection assisted FFT-FOE. Simulation results show that no FOE error occurs with an FFT size of only 512 symbols when the signal-to-noise ratio (SNR) is above 17.5 dB using our proposed FOE technique, whereas the error probability of the traditional FFT-FOE scheme for 32-QAM is always intolerably high. Finally, our proposed FOE scheme functions well for a 10 Gbaud dual-polarization (DP)-32-QAM signal reaching the 20% forward error correction (FEC) threshold of BER = 2×10⁻², under a back-to-back (B2B) transmission scenario.
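
    A rough sketch of the ring-selection idea, assuming one equalized complex symbol per FFT sample; ring_radius, gain, and tol are illustrative parameters of ours, and the classic fourth-power trick recovers the offset from the spectral peak.

      import numpy as np

      def qpsk_ring_fft_foe(symbols, baud_rate, ring_radius, gain=2.0, tol=0.2, nfft=512):
          # weight symbols: digitally amplify those near the inner QPSK ring
          r = np.abs(symbols)
          w = np.where(np.abs(r - ring_radius) < tol * ring_radius, gain, 1.0)
          s = (w * symbols)[:nfft]
          # the 4th power strips the quadrantal (QPSK-like) phase modulation,
          # leaving a spectral tone at four times the carrier frequency offset
          spec = np.fft.fft(s**4, nfft)
          k = np.argmax(np.abs(spec))
          return np.fft.fftfreq(nfft, d=1.0 / baud_rate)[k] / 4.0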

  20. Routing architecture and security for airborne networks

    NASA Astrophysics Data System (ADS)

    Deng, Hongmei; Xie, Peng; Li, Jason; Xu, Roger; Levy, Renato

    2009-05-01

    Airborne networks are envisioned to provide interconnectivity for terrestrial and space networks by interconnecting highly mobile airborne platforms. A number of military applications are expected to be used by the operator, and all of these applications require proper routing security support to establish correct routes between communicating platforms in a timely manner. As airborne networks differ somewhat from traditional wired and wireless networks (e.g., Internet, LAN, WLAN, MANET, etc.), security approaches valid in those networks are not fully applicable to airborne networks. Designing an efficient security scheme to protect airborne networks is confronted with new requirements. In this paper, we first identify a candidate routing architecture, which serves as an underlying structure for our proposed security scheme. We then investigate the vulnerabilities and attack models against routing protocols in airborne networks. Based on these studies, we propose an integrated security solution to address routing security issues in airborne networks.

  1. Data Management Systems (DMS): Complex data types study. Volume 1: Appendices A-B. Volume 2: Appendices C1-C5. Volume 3: Appendices D1-D3 and E

    NASA Technical Reports Server (NTRS)

    Leibfried, T. F., Jr.; Davari, Sadegh; Natarajan, Swami; Zhao, Wei

    1992-01-01

    Two categories were chosen for study. The first is the use of a preprocessor on the Ada code of application programs that interface with the Run-Time Object Data Base Standard Services (RODB STSV); the intent was to catch and correct any mis-registration errors by the program coder between the user-declared objects, their types, their addresses, and the corresponding RODB definitions. The second covers RODB STSV performance issues and the identification of problems with the planned methods for accessing primitive object attributes; this included the study of an alternative to the 'store objects by attribute' storage scheme in the current design of the RODB. The study resulted in essentially three separate documents: an interpretation of the system requirements, an assessment of the preliminary design, and a detailing of the components of a detailed design.

  2. Ordering policy for stock-dependent demand rate under progressive payment scheme: a comment

    NASA Astrophysics Data System (ADS)

    Glock, Christoph H.; Ries, Jörg M.; Schwindl, Kurt

    2015-04-01

    In a recent paper, Soni and Shah developed a model for finding the optimal ordering policy for a retailer facing stock-dependent demand and a supplier offering a progressive payment scheme. In this comment, we correct several errors in the formulation of the models of Soni and Shah and modify some assumptions to increase the model's applicability. Numerical examples illustrate the benefits of our modifications.

  3. An RFID solution for enhancing inpatient medication safety with real-time verifiable grouping-proof.

    PubMed

    Chen, Yu-Yi; Tsai, Meng-Lin

    2014-01-01

    The occurrence of a medication error can threaten patient safety. The medication administration process is complex and cumbersome, and nursing staff are prone to error when tired. Proper information technology (IT) can assist nurses in correct medication administration. We review a recent proposal for a leading-edge solution to enhance inpatient medication safety using RFID technology. The proof mechanism is the core concept in their design and is worth studying when developing a well-designed grouping-proof scheme. Other RFID grouping-proof protocols could be similarly applied in administering physician orders. We improve on the weaknesses of previous works and develop a reading-order-independent RFID grouping-proof scheme in this paper. In our scheme, tags are queried and verified under the direct control of the authorized reader without connecting to the back-end database server. Immediate verification in our design makes this application more portable and efficient, and critical security issues have been analyzed using a threat model. Our scheme is suitable for the safe drug administration scenario and the drug package scenario in a hospital environment to enhance inpatient medication safety. It automatically checks for correct drug unit-doses and appropriate inpatient treatments. Copyright © 2013. Published by Elsevier Ireland Ltd.

  4. On Formulations of Discontinuous Galerkin and Related Methods for Conservation Laws

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2014-01-01

    A formulation for the discontinuous Galerkin (DG) method that leads to solutions using the differential form of the equation (as opposed to the standard integral form) is presented. The formulation includes (a) a derivative calculation that involves only data within each cell with no data interaction among cells, and (b) for each cell, corrections to this derivative that deal with the jumps in fluxes at the cell boundaries and allow data across cells to interact. The derivative with no interaction is obtained by a projection, but for nodal-type methods, evaluating this derivative by interpolation at the nodal points is more economical. The corrections are derived using the approximate (Dirac) delta functions. The formulation results in a family of schemes: different approximate delta functions give rise to different methods. It is shown that the current formulation is essentially equivalent to the flux reconstruction (FR) formulation. Due to the use of approximate delta functions, an energy stability proof simpler than that of Vincent, Castonguay, and Jameson (2011) for a family of schemes is derived. Accuracy and stability of resulting schemes are discussed via Fourier analyses. Similar to FR, the current formulation provides a unifying framework for high-order methods by recovering the DG, spectral difference (SD), and spectral volume (SV) schemes. It also yields stable, accurate, and economical methods.

  5. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large as or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as for the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass-dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We show results of systematic error correction applied to the NOAA-15 Advanced TOVS as well as its predecessors, and discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.

  6. A comprehensive numerical analysis of background phase correction with V-SHARP.

    PubMed

    Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand

    2017-04-01

    Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (R_m) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. The RMSE of susceptibility maps calculated with a spatial-domain algorithm was smallest for R_m between 6 and 10 mm and f between 0 and 0.01 mm⁻¹, and for maps calculated with a Fourier-domain algorithm for R_m between 10 and 15 mm and f between 0 and 0.0091 mm⁻¹. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.

  7. A Multigrid NLS-4DVar Data Assimilation Scheme with Advanced Research WRF (ARW)

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Tian, X.

    2017-12-01

    The motions of the atmosphere have multiscale properties in space and/or time, and the background error covariance matrix (B) should thus contain error information at different correlation scales. To obtain an optimal analysis, the multigrid three-dimensional variational data assimilation scheme is widely used to sequentially correct errors from large to small scales. However, introducing the multigrid technique into four-dimensional variational data assimilation is not easy, due to its strong dependence on the adjoint model, which has extremely high computational costs in coding, maintenance, and updating. In this study, the multigrid technique was introduced into the nonlinear least-squares four-dimensional variational assimilation (NLS-4DVar) method, an advanced four-dimensional ensemble-variational method that can be applied without invoking adjoint models. The multigrid NLS-4DVar (MG-NLS-4DVar) scheme uses the number of grid points to control the scale, doubling this number when moving from a coarse to a finer grid. Furthermore, the MG-NLS-4DVar scheme not only retains the advantages of NLS-4DVar but also sufficiently corrects multiscale errors to achieve a highly accurate analysis. The effectiveness and efficiency of the proposed MG-NLS-4DVar scheme were evaluated by several groups of observing system simulation experiments using the Advanced Research Weather Research and Forecasting Model. MG-NLS-4DVar outperformed NLS-4DVar, with a lower computational cost.

  8. Reliable and fast quantitative analysis of active ingredient in pharmaceutical suspension using Raman spectroscopy.

    PubMed

    Park, Seok Chan; Kim, Minjung; Noh, Jaegeun; Chung, Hoeil; Woo, Youngah; Lee, Jonghwa; Kemper, Mark S

    2007-06-12

    The concentration of acetaminophen in a turbid pharmaceutical suspension has been measured successfully using Raman spectroscopy. The spectrometer was equipped with a large-spot probe which enabled coverage of a representative area during sampling. This wide area illumination (WAI) scheme (coverage area 28.3 mm²) for Raman data collection proved to be more reliable for the compositional determination of these pharmaceutical suspensions, especially when the samples were turbid. The reproducibility of measurement using the WAI scheme was compared to that of a conventional small-spot scheme which employed a much smaller illumination area (about 100 μm spot size). A layer of isobutyric anhydride was placed in front of the sample vials to correct for variation in the Raman intensity due to fluctuation of the laser power. Corrections were accomplished using the isolated carbonyl band of isobutyric anhydride. The acetaminophen concentrations of prediction samples were accurately estimated using a partial least squares (PLS) calibration model. The prediction accuracy was maintained even with changes in laser power. It was noted that the prediction performance was somewhat degraded for turbid suspensions with high acetaminophen contents. Comparing the reproducibility obtained with the WAI scheme to that of the conventional scheme, it was concluded that the quantitative determination of the active pharmaceutical ingredient (API) in turbid suspensions is much improved when employing a larger laser coverage area, presumably due to the improvement in representative sampling.

  9. Mathematical model of the loan portfolio dynamics in the form of Markov chain considering the process of new customers attraction

    NASA Astrophysics Data System (ADS)

    Bozhalkina, Yana

    2017-12-01

    A mathematical model of loan portfolio structure change, in the form of a Markov chain, is explored. The model captures, in a single scheme, the attraction of new customers, their selection based on credit score, and loan repayment. It describes the dynamics of the structure and volume of the loan portfolio, which allows medium-term forecasts of profitability and risk to be made. Corrective actions by bank management to increase lending volumes or to reduce risk are formalized within the model.
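
    A toy version of such a model: portfolio volumes migrate among credit states according to a Markov transition matrix, while accepted new applicants flow into the performing state each period. All states and numbers below are hypothetical.

      import numpy as np

      # hypothetical states: current, 30 days past due, 90+ days past due,
      # repaid (absorbing), default (absorbing); rows sum to 1
      P = np.array([[0.90, 0.07, 0.00, 0.03, 0.00],
                    [0.40, 0.35, 0.20, 0.05, 0.00],
                    [0.05, 0.10, 0.50, 0.05, 0.30],
                    [0.00, 0.00, 0.00, 1.00, 0.00],
                    [0.00, 0.00, 0.00, 0.00, 1.00]])

      def step(x, new_volume, acceptance_rate):
          # one period: existing loans migrate by P, and applicants passing the
          # credit-score selection enter the 'current' state
          x = x @ P
          x[0] += new_volume * acceptance_rate
          return x

      x = np.array([100.0, 5.0, 2.0, 0.0, 0.0])   # initial portfolio volumes
      for _ in range(12):                          # medium-term (12-period) forecast
          x = step(x, new_volume=10.0, acceptance_rate=0.6)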

  10. Objective analysis of observational data from the FGGE observing systems

    NASA Technical Reports Server (NTRS)

    Baker, W.; Edelmann, D.; Iredell, M.; Han, D.; Jakkempudi, S.

    1981-01-01

    An objective analysis procedure for updating the GLAS second- and fourth-order general atmospheric circulation models using observational data from the First GARP Global Experiment is described. The objective analysis procedure is based on a successive corrections method, and the model is updated in a data assimilation cycle. Preparation of the observational data for analysis and the objective analysis scheme are described. The organization of the program and a description of the required data sets are presented. The program logic and detailed descriptions of each subroutine are given.
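
    A successive-corrections analysis of this type can be sketched as follows, using Cressman-style weights with decreasing influence radii; interpolate(field, pts) is an assumed helper that evaluates the gridded field at the observation locations.

      import numpy as np

      def successive_corrections(grid_xy, background, obs_xy, obs_val, radii,
                                 interpolate):
          # grid_xy: (G, 2) grid coordinates; background: (G,) first-guess field
          analysis = background.copy()
          for R in radii:                                       # shrinking radii
              resid = obs_val - interpolate(analysis, obs_xy)   # innovations
              d2 = ((grid_xy[:, None, :] - obs_xy[None, :, :])**2).sum(-1)
              w = np.maximum((R**2 - d2) / (R**2 + d2), 0.0)    # Cressman weights
              corr = (w * resid).sum(1) / np.maximum(w.sum(1), 1e-12)
              analysis = analysis + corr                        # apply correction
          return analysis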

  11. Experimental validation of thermo-chemical algorithm for a simulation of pultrusion processes

    NASA Astrophysics Data System (ADS)

    Barkanov, E.; Akishin, P.; Miazza, N. L.; Galvez, S.; Pantelelis, N.

    2018-04-01

    To provide better understanding of pultrusion processes with or without temperature control, and to support pultrusion tooling design, an algorithm based on a mixed time integration scheme and the nodal control volumes method has been developed. In the present study, its experimental validation is carried out with newly developed cure sensors that measure electrical resistivity and temperature on the profile surface. Through this verification process, the set of initial data used to simulate pultrusion of a rod profile has been successfully corrected and finally defined.

  12. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Tian, Z

    Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. In cone beam CT (CBCT) projection simulation, for example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept or reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a GPU package, gMMC, with this new scheme implemented. The performance of gMMC was tested on a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by the new path-by-path simulation scheme, in which all of the computation is spent on photons contributing to the detector signal. Conclusion: We propose a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
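
    In schematic form, the path-by-path idea reduces to an independence-proposal Metropolis-Hastings chain over whole source-to-detector paths; sample_path, path_prob, and proposal_prob below are stand-ins of ours for the paper's path construction and transport physics.

      import numpy as np

      def mh_path_sampler(sample_path, path_prob, proposal_prob, n_paths,
                          rng=np.random):
          # sample_path(): proposes a complete source-to-detector path
          # path_prob(p): physical transport probability of path p
          # proposal_prob(p): density under which sample_path draws p
          paths = []
          cur = sample_path()
          for _ in range(n_paths):
              cand = sample_path()
              # acceptance ratio keeps relative path probabilities consistent
              # with photon transport physics
              a = (path_prob(cand) * proposal_prob(cur)) / \
                  (path_prob(cur) * proposal_prob(cand))
              if rng.random() < min(1.0, a):
                  cur = cand
              paths.append(cur)   # every retained path contributes detector signal
          return paths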

  13. Parametrization and Benchmark of Long-Range Corrected DFTB2 for Organic Molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vuong, Van Quan; Akkarapattiakal Kuriappan, Jissy; Kubillus, Maximilian

    In this paper, we present the parametrization and benchmark of long-range corrected second-order density functional tight binding (DFTB), LC-DFTB2, for organic and biological molecules. The LC-DFTB2 model not only improves fundamental orbital energy gaps but also ameliorates the DFT self-interaction error and overpolarization problem, and further improves charge-transfer excited states significantly. Electronic parameters for the construction of the DFTB2 Hamiltonian as well as repulsive potentials were optimized for molecules containing C, H, N, and O chemical elements. We use a semiautomatic parametrization scheme based on a genetic algorithm. With the new parameters, LC-DFTB2 describes geometries and vibrational frequencies of organic molecules similarly well as third-order DFTB3/3OB, the de facto standard parametrization based on a GGA functional. Finally, LC-DFTB2 performs well also for atomization and reaction energies, however, slightly less satisfactorily than DFTB3/3OB.

  14. Design and simulation of the circuit of SWIR hyper-spectral imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Ren, Bin; Li, Zi-tian; Meng, Nan

    2009-07-01

    To meet the requirements of the SWIR hyper-spectral imaging spectrometer, this article describes an SWIR imaging circuit based on an IRFPA detector. First, the structure of the SWIR hyper-spectral imaging spectrometer is introduced, and then the infrared imaging circuit design is proposed, which is based on an MCT SWIR FPA with 500×256 pixels, the NEPTURN detector from Sofradir. Following this scheme, several key technologies have been studied in particular, such as the driving circuit, the timing control circuit, the high-speed A/D converter, and the LVDS (Low Voltage Differential Signaling) transmission circuit. Finally, an improved two-point correction method was chosen to correct the non-uniformity of the image. The simulation results demonstrate that the proposed method can effectively suppress noise and work with low power consumption. The electronic system not only has the advantages of simplicity and compactness but also works stably, providing 500×256 images at a frame rate of 200 Hz in good quality.

  15. Parametrization and Benchmark of Long-Range Corrected DFTB2 for Organic Molecules

    DOE PAGES

    Vuong, Van Quan; Akkarapattiakal Kuriappan, Jissy; Kubillus, Maximilian; ...

    2017-12-12

    In this paper, we present the parametrization and benchmark of long-range corrected second-order density functional tight binding (DFTB), LC-DFTB2, for organic and biological molecules. The LC-DFTB2 model not only improves fundamental orbital energy gaps but also ameliorates the DFT self-interaction error and overpolarization problem, and further improves charge-transfer excited states significantly. Electronic parameters for the construction of the DFTB2 Hamiltonian as well as repulsive potentials were optimized for molecules containing C, H, N, and O chemical elements. We use a semiautomatic parametrization scheme based on a genetic algorithm. With the new parameters, LC-DFTB2 describes geometries and vibrational frequencies of organic molecules similarly well as third-order DFTB3/3OB, the de facto standard parametrization based on a GGA functional. Finally, LC-DFTB2 performs well also for atomization and reaction energies, however, slightly less satisfactorily than DFTB3/3OB.

  16. A piece of cake: the ground-state energies in γ_i-deformed N = 4 SYM theory at leading wrapping order

    NASA Astrophysics Data System (ADS)

    Fokken, Jan; Sieg, Christoph; Wilhelm, Matthias

    2014-09-01

    In the non-supersymmetric γ_i-deformed N = 4 SYM theory, the scaling dimensions of the operators tr[Z^L] composed of L scalar fields Z receive finite-size wrapping and prewrapping corrections in the 't Hooft limit. In this paper, we calculate these scaling dimensions to leading wrapping order directly from Feynman diagrams. For L ≥ 3, the result is proportional to the maximally transcendental 'cake' integral. It matches an earlier result obtained from the integrability-based Lüscher corrections, TBA and Y-system equations. At L = 2, where the integrability-based equations yield infinity, we find a finite rational result. This result is renormalization-scheme dependent due to the non-vanishing β-function of an induced quartic scalar double-trace coupling, on which we have reported earlier. This explicitly shows that conformal invariance is broken, even in the 't Hooft limit.

  17. Comparison of two schemes for automatic keyword extraction from MEDLINE for functional gene clustering.

    PubMed

    Liu, Ying; Ciliax, Brian J; Borges, Karin; Dasigi, Venu; Ram, Ashwin; Navathe, Shamkant B; Dingledine, Ray

    2004-01-01

    One of the key challenges of microarray studies is to derive biological insights from the unprecedented quantities of data on gene-expression patterns. Clustering genes by functional keyword association can provide direct information about the nature of the functional links among genes within the derived clusters. However, the quality of the keyword lists extracted from the biomedical literature for each gene significantly affects the clustering results. We extracted keywords from MEDLINE that describe the most prominent functions of the genes, and used the resulting weights of the keywords as feature vectors for gene clustering. By analyzing the resulting cluster quality, we compared two keyword weighting schemes: normalized z-score and term frequency-inverse document frequency (TFIDF). The best combination of background comparison set, stop list and stemming algorithm was selected based on precision and recall metrics. In a test set of four known gene groups, a hierarchical algorithm correctly assigned 25 of 26 genes to the appropriate clusters based on keywords extracted by the TFIDF weighting scheme, but only 23 of 26 with the z-score method. To evaluate the effectiveness of the weighting schemes for keyword extraction for gene clusters from microarray profiles, 44 yeast genes that are differentially expressed during the cell cycle were used as a second test set. Using established measures of cluster quality, the results produced from TFIDF-weighted keywords had higher purity, lower entropy, and higher mutual information than those produced from normalized z-score weighted keywords. The optimized algorithms should be useful for sorting genes from microarray lists into functionally discrete clusters.
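
    For reference, the TFIDF weighting compared here can be computed from a gene-by-keyword count matrix as in the sketch below (a standard TFIDF variant; the study's exact normalization may differ).

      import numpy as np

      def tfidf(term_counts):
          # term_counts: (genes, keywords) matrix of keyword occurrence counts
          # term frequency, normalized per gene document set
          tf = term_counts / np.maximum(term_counts.sum(axis=1, keepdims=True), 1)
          # inverse document frequency over genes
          df = (term_counts > 0).sum(axis=0)
          idf = np.log(term_counts.shape[0] / np.maximum(df, 1))
          return tf * idf   # weighted keyword feature vectors for clustering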

  18. External quality assessment for KRAS testing is needed: setup of a European program and report of the first joined regional quality assessment rounds.

    PubMed

    Bellon, Ellen; Ligtenberg, Marjolijn J L; Tejpar, Sabine; Cox, Karen; de Hertogh, Gert; de Stricker, Karin; Edsjö, Anders; Gorgoulis, Vassilis; Höfler, Gerald; Jung, Andreas; Kotsinas, Athanassios; Laurent-Puig, Pierre; López-Ríos, Fernando; Hansen, Tine Plato; Rouleau, Etienne; Vandenberghe, Peter; van Krieken, Johan J M; Dequeker, Elisabeth

    2011-01-01

    The use of epidermal growth factor receptor-targeting antibodies in metastatic colorectal cancer has been restricted to patients with wild-type KRAS tumors by the European Medicines Agency since 2008, based on data showing a lack of efficacy and potential harm in patients with mutant KRAS tumors. In an effort to ensure optimal, uniform, and reliable community-based KRAS testing throughout Europe, a KRAS external quality assessment (EQA) scheme was set up. The first large assessment round included 59 laboratories from eight different European countries. For each country, one regional scheme organizer prepared and distributed the samples to the participants in their own country. The samples included unstained sections of 10 invasive colorectal carcinomas with known KRAS mutation status, centrally validated by one of two reference laboratories. The laboratories were allowed to use their own preferred methods for histological evaluation, DNA isolation, and mutation analysis. In this study, we analyze the setup of the KRAS scheme and the advantages and disadvantages of the regional scheme organization, based on the genotyping results, the analysis of tumor percentage, and the written reports. We conclude that only 70% of laboratories correctly identified the KRAS mutational status in all samples. Both the false-positive and false-negative results observed negatively affect patient care. Reports of the KRAS test results often lacked essential information. We aim to further expand this program to more laboratories to provide a robust estimate of the quality of KRAS testing in Europe, and to provide the basis for remedial measures and harmonization.

  19. On the calculation of charge transfer transitions with standard density functionals using constrained variational density functional theory.

    PubMed

    Ziegler, Tom; Krykunov, Mykhaylo

    2010-08-21

    It is well known that time-dependent density functional theory (TD-DFT) based on standard gradient-corrected functionals affords a quantitatively and qualitatively incorrect picture of charge transfer transitions between two spatially separated regions. It is shown here that this well-known failure can be traced back to the use of linear response theory. Further, it is demonstrated that the inclusion of higher-order terms readily affords a qualitatively correct picture even for simple functionals based on the local density approximation. The inclusion of these terms is done within the framework of a newly developed variational approach to excitation energies called constrained variational density functional theory (CV-DFT). To second order [CV(2)-DFT] this theory is identical to adiabatic TD-DFT within the Tamm-Dancoff approximation. With the inclusion of fourth-order corrections [CV(4)-DFT] it affords a qualitatively correct description of charge transfer transitions. It is finally demonstrated that the relaxation of the ground-state Kohn-Sham orbitals to first order, in response to the change in density on excitation, together with CV(4)-DFT affords charge transfer excitations in good agreement with experiment. The new relaxed theory is termed R-CV(4)-DFT. The relaxed scheme represents an effective way to introduce double replacements into the description of single electron excitations, something that would otherwise require a frequency-dependent kernel.

  20. Study on High Resolution Membrane-Based Diffractive Optical Imaging on Geostationary Orbit

    NASA Astrophysics Data System (ADS)

    Jiao, J.; Wang, B.; Wang, C.; Zhang, Y.; Jin, J.; Liu, Z.; Su, Y.; Ruan, N.

    2017-05-01

    Diffractive optical imaging technology provides a new way to realize high-resolution earth observation from geostationary orbit. Membrane-based diffractive optical elements offer many benefits in an ultra-large-aperture optical imaging system, including loose tolerances, light weight, and easy folding and unfolding, which make it practical to realize high-resolution earth observation from geostationary orbit. The implementation of this technology also faces some challenges, including the configuration of the diffractive primary lens, the development of high-diffraction-efficiency membrane-based diffractive optical elements, and the correction of the chromatic aberration of the diffractive optical elements. For the configuration of the diffractive primary lens, a "6+1" petal-type unfolding scheme is proposed, which considers the compression ratio, the blocking rate and the development complexity. For high-diffraction-efficiency membrane-based diffractive optical elements, a self-collimating method is proposed; the diffraction efficiency is more than 90 % of the theoretical value. For the chromatic aberration correction problem, an optimization method based on the Schupmann configuration is proposed, which brings the imaging spectral bandwidth in the visible band to 100 nm. The above conclusions provide a reference for the development of ultra-large-aperture diffractive optical imaging systems.

  1. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) and Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art $O(h)$ error bound of correction-only methods. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and show that they improve upon existing techniques by several orders of magnitude.
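
    A minimal sketch of one prediction-correction iteration in the GTT flavor, assuming callables for the gradient, the Hessian, and the mixed time derivative of the gradient; the step size gamma and the number of correction steps are illustrative choices of ours.

      import numpy as np

      def gtt_step(x, t, h, grad, hess, grad_tx, gamma=0.1, n_corr=3):
          # grad(x, t): gradient of f; hess(x, t): Hessian of f;
          # grad_tx(x, t): partial time derivative of the gradient
          # prediction: hold the residual (gradient) fixed as t advances by h
          x = x - h * np.linalg.solve(hess(x, t), grad_tx(x, t))
          # correction: gradient descent on the new objective f(., t + h)
          for _ in range(n_corr):
              x = x - gamma * grad(x, t + h)
          return x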

  2. A comparative study of two codes with an improved two-equation turbulence model for predicting jet plumes

    NASA Technical Reports Server (NTRS)

    Balakrishnan, L.; Abdol-Hamid, Khaled S.

    1992-01-01

    Compressible jet plumes were studied using a two-equation turbulence model. A space marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that extending the space marching procedure for solving supersonic/subsonic mixing problems can be stable, efficient and accurate. Moreover, a newly developed correction for compressible dissipation has been verified in fully expanded and underexpanded jet plumes. For a sonic jet plume, no improvement in results over the standard two-equation model was seen. However for a supersonic jet plume, the correction due to compressible dissipation successfully predicted the reduced spreading rate of the jet compared to the sonic case. The computed results were generally in good agreement with the experimental data.

  3. Testing ice microphysics parameterizations in the NCAR Community Atmospheric Model Version 3 using Tropical Warm Pool-International Cloud Experiment data

    DOE PAGES

    Wang, Weiguo; Liu, Xiaohong; Xie, Shaocheng; ...

    2009-07-23

    Here, cloud properties have been simulated with a new double-moment microphysics scheme under the framework of the single-column version of NCAR Community Atmospheric Model version 3 (CAM3). For comparison, the same simulation was made with the standard single-moment microphysics scheme of CAM3. Results from both simulations compared favorably with observations during the Tropical Warm Pool–International Cloud Experiment by the U.S. Department of Energy Atmospheric Radiation Measurement Program in terms of the temporal variation and vertical distribution of cloud fraction and cloud condensate. Major differences between the two simulations are in the magnitude and distribution of ice water content within the mixed-phase cloud during the monsoon period, though the total frozen water (snow plus ice) contents are similar. The ice mass content in the mixed-phase cloud from the new scheme is larger than that from the standard scheme, and ice water content extends 2 km further downward, which is in better agreement with observations. The dependence of the frozen water mass fraction on temperature from the new scheme is also in better agreement with available observations. Outgoing longwave radiation (OLR) at the top of the atmosphere (TOA) from the simulation with the new scheme is, in general, larger than that with the standard scheme, while the surface downward longwave radiation is similar. Sensitivity tests suggest that different treatments of the ice crystal effective radius contribute significantly to the difference in the calculations of TOA OLR, in addition to cloud water path. Numerical experiments show that cloud properties in the new scheme can respond reasonably to changes in the concentration of aerosols and emphasize the importance of correctly simulating aerosol effects in climate models for aerosol-cloud interactions. Further evaluation, especially for ice cloud properties based on in-situ data, is needed.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamaguchi, Kizashi; Nishihara, Satomichi; Saito, Toru

    First-principles calculations of effective exchange integrals (J) in the Heisenberg model for diradical species were performed by both symmetry-adapted (SA) multi-reference (MR) and broken-symmetry (BS) single-reference (SR) methods. Mukherjee-type (Mk) state-specific (SS) MR coupled-cluster (CC) calculations using natural orbital (NO) references of ROHF, UHF, UDFT and CASSCF solutions were carried out to elucidate J values for di- and poly-radical species. Spin-unrestricted Hartree-Fock (UHF) based coupled-cluster (CC) computations were also performed on these species. Comparison between UHF-NO(UNO)-MkMRCC and BS UHF-CC computational results indicated that spin contamination of the UHF-CC solutions still remains at the SD level. In order to eliminate the spin contamination, an approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed corrected the error to yield good agreement with MkMRCC in energy. CC doubles with spin-unrestricted Brueckner orbitals (UBD) was furthermore employed for these species, showing that the spin contamination involved in UHF solutions is largely suppressed, so that the AP scheme for UBCCD easily removed the rest of the spin contamination. We also performed spin-unrestricted pure- and hybrid-density functional theory (UDFT) calculations of diradical and polyradical species. Three different computational schemes for the total spin angular momenta were examined for the AP correction of hybrid (H) UDFT. HUDFT calculations followed by AP, HUDFT(AP), yielded S-T gaps that were qualitatively in good agreement with those of MkMRCCSD, UHF-CC(AP) and UB-CC(AP). Thus a systematic comparison among MkMRCCSD, UCC(AP), UBD(AP) and UDFT(AP) was performed concerning first-principles calculations of J values in di- and poly-radical species. It was found that the BS (AP) methods reproduce the MkMRCCSD results, indicating their applicability to large exchange-coupled systems.
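
    For reference, the AP correction referred to here is commonly based on the Yamaguchi expression for the effective exchange integral (written for the Heisenberg convention H = -2J S_A·S_B; notation below is ours):

      J_{\mathrm{AP}} = \frac{E_{\mathrm{BS}} - E_{\mathrm{HS}}}
                             {\langle \hat{S}^2 \rangle_{\mathrm{HS}} - \langle \hat{S}^2 \rangle_{\mathrm{BS}}}

    where E and ⟨Ŝ²⟩ denote the energies and total-spin expectation values of the broken-symmetry (low-spin) and high-spin solutions, respectively.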

  5. Low-resolution simulations of vesicle suspensions in 2D

    NASA Astrophysics Data System (ADS)

    Kabacaoğlu, Gökberk; Quaife, Bryan; Biros, George

    2018-03-01

    Vesicle suspensions appear in many biological and industrial applications. These suspensions are characterized by rich and complex dynamics of vesicles due to their interaction with the bulk fluid, and their large deformations and nonlinear elastic properties. Many existing state-of-the-art numerical schemes can resolve such complex vesicle flows. However, even when using provably optimal algorithms, these simulations can be computationally expensive, especially for suspensions with a large number of vesicles. These high computational costs can limit the use of simulations for parameter exploration, optimization, or uncertainty quantification. One way to reduce the cost is to use low-resolution discretizations in space and time. However, it is well-known that simply reducing the resolution results in vesicle collisions, numerical instabilities, and often in erroneous results. In this paper, we investigate the effect of a number of algorithmic empirical fixes (which are commonly used by many groups) in an attempt to make low-resolution simulations more stable and more predictive. Based on our empirical studies for a number of flow configurations, we propose a scheme that attempts to integrate these fixes in a systematic way. This low-resolution scheme is an extension of our previous work [51,53]. Our low-resolution correction algorithms (LRCA) include anti-aliasing and membrane reparametrization for avoiding spurious oscillations in vesicles' membranes, adaptive time stepping and a repulsion force for handling vesicle collisions and, correction of vesicles' area and arc-length for maintaining physical vesicle shapes. We perform a systematic error analysis by comparing the low-resolution simulations of dilute and dense suspensions with their high-fidelity, fully resolved, counterparts. We observe that the LRCA enables both efficient and statistically accurate low-resolution simulations of vesicle suspensions, while it can be 10× to 100× faster.

  6. Numerical methods for the weakly compressible Generalized Langevin Model in Eulerian reference frame

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azarnykh, Dmitrii, E-mail: d.azarnykh@tum.de; Litvinov, Sergey; Adams, Nikolaus A.

    2016-06-01

    A well established approach for the computation of turbulent flow without resolving all turbulent flow scales is to solve a filtered or averaged set of equations, and to model non-resolved scales by closures derived from transported probability density functions (PDF) for velocity fluctuations. Effective numerical methods for PDF transport employ the equivalence between the Fokker–Planck equation for the PDF and a Generalized Langevin Model (GLM), and compute the PDF by transporting a set of sampling particles by GLM (Pope (1985) [1]). The natural representation of GLM is a system of stochastic differential equations in a Lagrangian reference frame, typically solved by particle methods. A representation in a Eulerian reference frame, however, has the potential to significantly reduce computational effort and to allow for the seamless integration into a Eulerian-frame numerical flow solver. GLM in a Eulerian frame (GLMEF) formally corresponds to the nonlinear fluctuating hydrodynamic equations derived by Nakamura and Yoshimori (2009) [12]. Unlike the more common Landau–Lifshitz Navier–Stokes (LLNS) equations these equations are derived from the underdamped Langevin equation and are not based on a local equilibrium assumption. Similarly to LLNS equations the numerical solution of GLMEF requires special considerations. In this paper we investigate different numerical approaches to solving GLMEF with respect to the correct representation of stochastic properties of the solution. We find that a discretely conservative staggered finite-difference scheme, adapted from a scheme originally proposed for turbulent incompressible flow, in conjunction with a strongly stable (for non-stochastic PDE) Runge–Kutta method performs better for GLMEF than schemes adopted from those proposed previously for the LLNS. We show that equilibrium stochastic fluctuations are correctly reproduced.

  7. Sensitivity of a Cloud-Resolving Model to Bulk and Explicit Bin Microphysical Schemes. Part 2; Cloud Microphysics and Storm Dynamics Interactions

    NASA Technical Reports Server (NTRS)

    Li, Xiaowen; Tao, Wei-Kuo; Khain, Alexander P.; Simpson, Joanne; Johnson, Daniel E.

    2009-01-01

    Part I of this paper compares two simulations, one using a bulk and the other a detailed bin microphysical scheme, of a long-lasting, continental mesoscale convective system with leading convection and trailing stratiform region. Diagnostic studies and sensitivity tests are carried out in Part II to explain the simulated contrasts in the spatial and temporal variations by the two microphysical schemes and to understand the interactions between cloud microphysics and storm dynamics. It is found that the fixed raindrop size distribution in the bulk scheme artificially enhances the rain evaporation rate and produces a stronger near-surface cool pool compared with the bin simulation. In the bulk simulation, the cool pool circulation dominates the near-surface environmental wind shear, in contrast to the near-balance between cool pool and wind shear in the bin simulation. This is the main reason for the contrasting quasi-steady states simulated in Part I. Sensitivity tests also show that the large amounts of fast-falling hail produced in the original bulk scheme not only result in a narrow trailing stratiform region but also act to further exacerbate the strong cool pool simulated in the bulk parameterization. An empirical formula for a correction factor, r(q_r) = 0.11 q_r^(-1.27) + 0.98, is developed to correct the overestimation of rain evaporation in the bulk model, where r is the ratio of the rain evaporation rate between the bulk and bin simulations and q_r (g kg⁻¹) is the rain mixing ratio. This formula offers a practical fix for the rain evaporation parameterization in the simple bulk scheme.
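
    The correction factor is trivial to encode; dividing the bulk-scheme evaporation rate by r(q_r) is one plausible way to apply it, given the ratio's definition above (our reading, not an implementation detail stated in the abstract).

      def rain_evaporation_ratio(q_r):
          # r(q_r) = 0.11 * q_r**(-1.27) + 0.98, with q_r in g/kg (from the abstract)
          return 0.11 * q_r**(-1.27) + 0.98

      def corrected_bulk_evaporation(bulk_rate, q_r):
          # dividing by the bulk/bin ratio nudges the bulk rate toward the bin value
          return bulk_rate / rain_evaporation_ratio(q_r)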

  8. A Constrained Scheme for High Precision Downward Continuation of Potential Field Data

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Meng, Xiaohong; Zhou, Zhiwen

    2018-04-01

    To further improve the accuracy of the downward continuation of potential field data, we present a novel constrained scheme combining the ideas of truncated Taylor series expansion, principal component analysis, iterative continuation and prior constraints. In the scheme, the initial downward-continued field on the target plane is obtained from the original measured field using the truncated Taylor series expansion method. If the original field has a particularly low signal-to-noise ratio, principal component analysis is utilized to suppress the influence of noise. Then, the downward-continued field is upward continued to the plane of the prior information. If the prior information is on the target plane, it should be upward continued over a short distance to obtain the updated prior information. Next, the difference between the calculated field and the updated prior information is computed. A cosine attenuation function is adopted to define the scope of the constraint and the corresponding modification term. Afterward, a correction is performed on the downward-continued field on the target plane by adding the modification term. The correction process is repeated iteratively until the difference meets the convergence condition. The accuracy of the proposed constrained scheme is tested on synthetic data with and without noise. Numerous model tests demonstrate that downward continuation using the constrained strategy yields more precise results than downward continuation methods without constraints and is relatively insensitive to noise, even for downward continuation over a large distance. Finally, the proposed scheme is applied to real magnetic data collected within the Dapai polymetallic deposit in Fujian province, South China. This practical application also indicates the superiority of the presented scheme.
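
    The iteration can be sketched as follows, with taylor_dc an assumed truncated-Taylor downward continuation to the target plane, upward an assumed continuation operator to the plane of the prior information, and w the cosine-attenuation weight (1 inside the constrained region, decaying to 0 outside); all names are ours and the geometry is simplified.

      import numpy as np

      def constrained_downward_continuation(field, prior, w, taylor_dc, upward,
                                            n_iter=100, tol=1e-3):
          est = taylor_dc(field)                 # initial downward-continued field
          for _ in range(n_iter):
              diff = prior - upward(est)         # misfit at the prior-information plane
              est = est + w * diff               # localized modification term
              if np.max(np.abs(diff)) < tol:     # convergence condition
                  break
          return est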

  9. Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)

    2000-01-01

    This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.

  10. Ensuring correct rollback recovery in distributed shared memory systems

    NASA Technical Reports Server (NTRS)

    Janssens, Bob; Fuchs, W. Kent

    1995-01-01

    Distributed shared memory (DSM) implemented on a cluster of workstations is an increasingly attractive platform for executing parallel scientific applications. Checkpointing and rollback techniques can be used in such a system to allow the computation to progress in spite of the temporary failure of one or more processing nodes. This paper presents the design of an independent checkpointing method for DSM that takes advantage of DSM's specific properties to reduce error-free and rollback overhead. The scheme reduces the dependencies that need to be considered for correct rollback to those resulting from transfers of pages. Furthermore, in-transit messages can be recovered without the use of logging. We extend the scheme to a DSM implementation using lazy release consistency, where the frequency of dependencies is further reduced.

  11. Continuous light absorption photometer for long-term studies

    NASA Astrophysics Data System (ADS)

    Ogren, John A.; Wendell, Jim; Andrews, Elisabeth; Sheridan, Patrick J.

    2017-12-01

    A new photometer is described for continuous determination of the aerosol light absorption coefficient, optimized for long-term studies of the climate-forcing properties of aerosols. Measurements of the light attenuation coefficient are made at blue, green, and red wavelengths, with a detection limit of 0.02 Mm-1 and a precision of 4 % for hourly averages. The uncertainty of the light absorption coefficient is primarily determined by the uncertainty of the correction scheme commonly used to convert the measured light attenuation to light absorption coefficient and ranges from about 20 % at sites with high loadings of strongly absorbing aerosols up to 100 % or more at sites with low loadings of weakly absorbing aerosols. Much lower uncertainties (ca. 40 %) for the latter case can be achieved with an advanced correction scheme.
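
    The conversion from attenuation to absorption that dominates this uncertainty typically takes a simple algebraic form. The sketch below is a generic two-parameter correction in the spirit of widely used filter-photometer schemes; the constants s and c are illustrative placeholders, not this instrument's actual calibration.

    ```python
    import numpy as np

    def attenuation_to_absorption(atn, scat, s=0.02, c=1.22):
        # Generic filter-photometer correction: remove a scattering artifact
        # (fraction s of the scattering coefficient) and divide by a filter
        # multiple-scattering enhancement factor c. Constants are placeholders.
        return (atn - s * scat) / c

    atn = np.array([12.0, 8.5, 3.1])     # measured attenuation coeff. (Mm-1)
    scat = np.array([60.0, 40.0, 15.0])  # nephelometer scattering coeff. (Mm-1)
    print(attenuation_to_absorption(atn, scat))
    ```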

  12. Second derivatives for approximate spin projection methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Lee M.; Hratchian, Hrant P., E-mail: hhratchian@ucmerced.edu

    2015-02-07

    The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.
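
    For context, the energy expression underlying the Yamaguchi approximate projection scheme is short enough to state in code. The sketch below uses the standard textbook form of the projection; the numerical inputs are invented for illustration.

    ```python
    def yamaguchi_projected_energy(e_bs, e_hs, s2_bs, s2_hs, s_target=0.0):
        # Approximate (Yamaguchi) spin projection: remove the high-spin (HS)
        # contaminant from a broken-symmetry (BS) energy using the computed
        # <S^2> expectation values of the two states.
        s2_exact = s_target * (s_target + 1.0)   # <S^2> of the pure target state
        alpha = (s2_bs - s2_exact) / (s2_hs - s2_bs)
        return e_bs - alpha * (e_hs - e_bs)

    # Illustrative inputs (hartree; <S^2> values made up for the example):
    e_ap = yamaguchi_projected_energy(e_bs=-150.2401, e_hs=-150.2315,
                                      s2_bs=1.05, s2_hs=2.01)
    print(f"projected singlet energy: {e_ap:.4f} Eh")
    ```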

  13. Viscous Corrections of the Time Incremental Minimization Scheme and Visco-Energetic Solutions to Rate-Independent Evolution Problems

    NASA Astrophysics Data System (ADS)

    Minotti, Luca; Savaré, Giuseppe

    2018-02-01

    We propose the new notion of Visco-Energetic solutions to rate-independent systems (X, E, d) driven by a time-dependent energy E and a dissipation quasi-distance d in a general metric-topological space X. As in the classic Energetic approach, solutions can be obtained by solving a modified time Incremental Minimization Scheme, where at each step the dissipation quasi-distance d is incremented by a viscous correction δ (for example, proportional to the square of the distance d), which penalizes far-distance jumps by inducing a localized version of the stability condition. We prove a general convergence result and a typical characterization by Stability and Energy Balance in a setting comparable to the standard energetic one, thus capable of covering a wide range of applications. The new refined Energy Balance condition compensates for the localized stability and provides a careful description of the jump behavior: at every jump the solution follows an optimal transition, which resembles in a suitable variational sense the discrete scheme that has been implemented for the whole construction.
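
    To make the modified scheme concrete, here is a minimal 1-D sketch of the time Incremental Minimization Scheme with a quadratic viscous correction δ(d) = μ d². The double-well energy, the grid search and the parameter values are assumptions for illustration only.

    ```python
    import numpy as np

    def incremental_minimization(energy, x0, times, mu=5.0):
        # x_{n+1} minimizes E(t_{n+1}, x) + d(x_n, x) + delta(d(x_n, x)),
        # here with X = R, d(x, y) = |x - y| and delta(d) = mu * d**2,
        # solved by brute-force grid search (adequate for a sketch).
        grid = np.linspace(-3.0, 3.0, 2001)
        xs = [x0]
        for t in times[1:]:
            d = np.abs(grid - xs[-1])
            xs.append(grid[np.argmin(energy(t, grid) + d + mu * d**2)])
        return np.array(xs)

    # Double-well energy tilted by a time-dependent load (illustrative):
    energy = lambda t, x: (x**2 - 1.0)**2 - t * x
    path = incremental_minimization(energy, x0=-1.0, times=np.linspace(0, 2, 41))
    print(path[[0, 20, 40]])   # the viscous term penalizes far-distance jumps
    ```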

  14. Molecular genetics external quality assessment pilot scheme for KRAS analysis in metastatic colorectal cancer.

    PubMed

    Deans, Zandra C; Tull, Justyna; Beighton, Gemma; Abbs, Stephen; Robinson, David O; Butler, Rachel

    2011-11-01

    Laboratories are increasingly required to perform molecular tests for the detection of mutations in the KRAS gene in metastatic colorectal cancers to allow better clinical management and more effective treatment for these patients. KRAS mutation status predicts a patient's likely response to the monoclonal antibody cetuximab. To provide a high standard of service, these laboratories require external quality assessment (EQA) to monitor the level of laboratory output and measure the performance of the laboratory against other service providers. National External Quality Assurance Services for Molecular Genetics provided a pilot EQA scheme for KRAS molecular analysis in metastatic colorectal cancers during 2009. Very few genotyping errors were reported by participating laboratories; however, the reporting nomenclature of the genotyping results varied considerably between laboratories. The pilot EQA scheme highlighted the need for continuing EQA in this field which will assess the laboratories' ability not only to obtain accurate, reliable results but also to interpret them safely and correctly ensuring that the referring clinician has the correct information to make the best clinical therapeutic decision for their patient.

  15. An O(Nm²) Plane Solver for the Compressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Thomas, J. L.; Bonhaus, D. L.; Anderson, W. K.; Rumsey, C. L.; Biedron, R. T.

    1999-01-01

    A hierarchical multigrid algorithm for efficient steady solutions to the two-dimensional compressible Navier-Stokes equations is developed and demonstrated. The algorithm applies multigrid in two ways: a Full Approximation Scheme (FAS) for a nonlinear residual equation and a Correction Scheme (CS) for a linearized defect-correction implicit equation. Multigrid analyses which include the effect of boundary conditions in one direction are used to estimate the convergence rate of the algorithm for a model convection equation. Three alternating-line-implicit algorithms are compared in terms of efficiency. The analyses indicate that full multigrid efficiency is not attained in the general case; the number of cycles needed to attain convergence depends on the mesh density for high-frequency cross-stream variations. However, the dependence is reasonably small and fast convergence is eventually attained for any given frequency with either the FAS or the CS scheme alone. The paper summarizes numerical computations for which convergence has been attained to within truncation error in a few multigrid cycles for both inviscid and viscous flow simulations on highly stretched meshes.
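
    The linear Correction Scheme named above has a compact two-grid core: smooth, restrict the residual, solve the coarse error equation, prolongate and correct. The sketch below applies it to a 1-D Poisson model problem; the smoother, grid sizes and cycle counts are illustrative choices, not the paper's setup.

    ```python
    import numpy as np

    def smooth(u, f, h, sweeps=3, w=2/3):
        # Weighted-Jacobi smoothing for -u'' = f on a uniform grid.
        for _ in range(sweeps):
            u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
        return u

    def two_grid(u, f, h):
        # One Correction Scheme (CS) cycle: smooth, restrict the residual,
        # solve the coarse error equation, prolongate, correct, smooth again.
        u = smooth(u, f, h)
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / (h*h)   # residual
        rc = r[::2].copy()                                          # restriction
        nc = rc.size
        # Direct coarse solve of -e'' = r_c (dense solve is fine for a sketch):
        A = (np.diag(2*np.ones(nc-2)) - np.diag(np.ones(nc-3), 1)
             - np.diag(np.ones(nc-3), -1)) / (2*h)**2
        ec = np.zeros(nc)
        ec[1:-1] = np.linalg.solve(A, rc[1:-1])
        e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolongation
        return smooth(u + e, f, h)

    n = 129
    x = np.linspace(0, 1, n); h = x[1] - x[0]
    f = np.pi**2 * np.sin(np.pi * x)                 # exact solution sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = two_grid(u, f, h)
    print(np.max(np.abs(u - np.sin(np.pi * x))))     # down to O(h^2) truncation error
    ```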

  16. Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.

    PubMed

    Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya

    2018-05-05

    This paper proposes a novel filtering design that approaches orbit state estimation for a space target from an identification viewpoint rather than through conventional nonlinear estimation schemes (NESs). First, the nonlinear perturbation is modeled as an unknown input (UI) coupled with the orbit state, which avoids the intractable nonlinear perturbation integral (INPI) required by NESs. Then, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to fit the first two moments (FTM) of the perturbation (viewed as UI) simply and analytically, instead of computing the INPI directly as NESs do. Orbit estimation performance is greatly improved by using the fitted UI-FTM to correct both the state estimate and its covariance simultaneously. Third, SMCCF should outperform existing NESs and standard identification algorithms (which treat the UI as a constant independent of the state and use only the identified UI mean to correct the state estimate, disregarding its covariance), since it mines more information: it incorporates the covariance of the UI in addition to its mean. Finally, simulations of an orbit estimation example demonstrate the superior performance of SMCCF.

  17. A fast and robust computational method for the ionization cross sections of the driven Schrödinger equation using an O(N) multigrid-based scheme

    NASA Astrophysics Data System (ADS)

    Cools, S.; Vanroose, W.

    2016-03-01

    This paper improves the convergence and robustness of a multigrid-based solver for the cross sections of the driven Schrödinger equation. Adding a Coupled Channel Correction Step (CCCS) after each multigrid (MG) V-cycle efficiently removes the errors that remain after the V-cycle sweep. The combined iterative solution scheme (MG-CCCS) is shown to feature significantly improved convergence rates over the classical MG method at energies where bound states dominate the solution, resulting in a fast and scalable solution method for the complex-valued Schrödinger break-up problem for any energy regime. The proposed solver displays optimal scaling; a solution is found in a time that is linear in the number of unknowns. The method is validated on a 2D Temkin-Poet model problem, and convergence results both as a solver and preconditioner are provided to support the O(N) scalability of the method. This paper extends the applicability of the complex contour approach for far field map computation (Cools et al. (2014) [10]).

  18. A computer code for multiphase all-speed transient flows in complex geometries. MAST version 1.0

    NASA Technical Reports Server (NTRS)

    Chen, C. P.; Jiang, Y.; Kim, Y. M.; Shang, H. M.

    1991-01-01

    The operation of the MAST code, which computes transient solutions to the multiphase flow equations applicable to all-speed flows, is described. Two-phase flows are formulated based on the Eulerian-Lagrangian scheme, in which the continuous phase is described by the Navier-Stokes equations (or Reynolds equations for turbulent flows). The dispersed phase is formulated by a Lagrangian tracking scheme. The numerical solution algorithm utilized for the fluid flow is a newly developed pressure-implicit algorithm based on the operator-splitting technique in generalized nonorthogonal coordinates. This operator split allows separate operation on each of the variable fields to handle the pressure-velocity coupling. The resulting pressure correction equation is hyperbolic in nature and is effective for Mach numbers ranging from the incompressible limit to supersonic flow regimes. The present code adopts a nonstaggered grid arrangement; thus, the velocity components and other dependent variables are collocated at the same grid points. A sequence of benchmark-quality problems, including incompressible, subsonic, transonic, supersonic, and gas-droplet two-phase flows, as well as spray-combustion problems, was computed to demonstrate the robustness and accuracy of the present code.

  19. Early prediction of extreme stratospheric polar vortex states based on causal precursors

    NASA Astrophysics Data System (ADS)

    Kretschmer, Marlene; Runge, Jakob; Coumou, Dim

    2017-08-01

    Variability in the stratospheric polar vortex (SPV) can influence the tropospheric circulation and thereby winter weather. Early predictions of extreme SPV states are thus important for improving forecasts of winter weather, including cold spells. However, dynamical models are usually restricted in lead time because they poorly capture low-frequency processes. Empirical models often suffer from overfitting problems, as the relevant physical processes and time lags are often not well understood. Here we introduce a novel empirical prediction method by uniting a response-guided community detection scheme with a causal discovery algorithm. This way, we objectively identify causal precursors of the SPV at subseasonal lead times and find them to be in good agreement with known physical drivers. A linear regression prediction model based on the causal precursors can explain most SPV variability (r² = 0.58), and our scheme correctly predicts 58% (46%) of extremely weak SPV states for lead times of 1-15 (16-30) days with false-alarm rates of only approximately 5%. Our method can be applied to any variable relevant for (sub)seasonal weather forecasts and could thus help improve long-lead predictions.
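
    The final prediction step is ordinary linear regression on the identified precursors. The sketch below reproduces that step on synthetic data; the precursor series, weights and the 10% "extremely weak vortex" threshold are invented stand-ins for the study's causal-discovery output.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    X = rng.standard_normal((n, 3))                 # causal precursors at day t
    true_w = np.array([0.8, -0.5, 0.3])             # synthetic dependence
    y = X @ true_w + 0.6 * rng.standard_normal(n)   # SPV index at day t + lead

    Xd = np.column_stack([X, np.ones(n)])           # design matrix with intercept
    w, *_ = np.linalg.lstsq(Xd, y, rcond=None)      # least-squares fit
    r2 = 1.0 - np.var(y - Xd @ w) / np.var(y)
    print(f"explained variance r^2 = {r2:.2f}")

    weak = y < np.quantile(y, 0.1)                  # "extremely weak SPV" days
    pred_weak = Xd @ w < np.quantile(Xd @ w, 0.1)   # forecast the same fraction
    print("hit rate:", (weak & pred_weak).sum() / weak.sum())
    ```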

  20. An Efficient and Adaptive Mutual Authentication Framework for Heterogeneous Wireless Sensor Network-Based Applications

    PubMed Central

    Kumar, Pardeep; Ylianttila, Mika; Gurtov, Andrei; Lee, Sang-Gon; Lee, Hoon-Jae

    2014-01-01

    Robust security is highly coveted in real wireless sensor network (WSN) applications since wireless sensors sense critical data from the application environment. This article presents an efficient and adaptive mutual authentication framework that suits real heterogeneous WSN-based applications (such as smart homes, industrial environments, smart grids, and healthcare monitoring). The proposed framework offers: (i) key initialization; (ii) secure network (cluster) formation (i.e., mutual authentication and dynamic key establishment); (iii) key revocation; and (iv) new node addition into the network. The correctness of the proposed scheme is formally verified. An extensive analysis shows that the proposed scheme provides message confidentiality, mutual authentication, dynamic session key establishment, node privacy, and message freshness. Moreover, a preliminary study also reveals that the proposed framework is secure against popular types of attacks, such as impersonation attacks, man-in-the-middle attacks, replay attacks, and information-leakage attacks. As a result, we believe the proposed framework achieves efficiency at reasonable computation and communication costs and can be a safeguard to real heterogeneous WSN applications. PMID:24521942

  1. Sensitivity Limits of Rydberg Atom-Based Radio Frequency Electric Field Sensing

    NASA Astrophysics Data System (ADS)

    Jahangiri, Akbar J.; Kumar, Santosh; Kuebler, Harald; Fan, Haoquan; Shaffer, James P.

    2017-04-01

    We present progress on Rydberg atom-based RF electric field sensing using Rydberg-state electromagnetically induced transparency (EIT) in room-temperature atomic vapor cells. In recent experiments on homodyne detection with a Mach-Zehnder interferometer and on frequency modulation spectroscopy with active control of residual amplitude modulation, we determined that photon shot noise on the probe laser detector limits the sensitivity. Another factor limiting the accuracy is residual Doppler broadening due to the wave-vector mismatch between the coupling and probe lasers. A sensor limited only by projection noise could perform orders of magnitude better. A multi-photon scheme is presented that can eliminate the residual Doppler effect by matching the wave vectors of three lasers and reduce the photon shot-noise limit by correctly choosing the Rabi frequencies of the first two steps of the EIT scheme. Using density matrix calculations, we predict that the three-photon approach can improve the detection sensitivity to below 200 nV cm-1 Hz-1/2 and expand the Autler-Townes regime, which improves the accuracy. This work is supported by DARPA and the NRO.

  2. An efficient and adaptive mutual authentication framework for heterogeneous wireless sensor network-based applications.

    PubMed

    Kumar, Pardeep; Ylianttila, Mika; Gurtov, Andrei; Lee, Sang-Gon; Lee, Hoon-Jae

    2014-02-11

    Robust security is highly coveted in real wireless sensor network (WSN) applications since wireless sensors sense critical data from the application environment. This article presents an efficient and adaptive mutual authentication framework that suits real heterogeneous WSN-based applications (such as smart homes, industrial environments, smart grids, and healthcare monitoring). The proposed framework offers: (i) key initialization; (ii) secure network (cluster) formation (i.e., mutual authentication and dynamic key establishment); (iii) key revocation; and (iv) new node addition into the network. The correctness of the proposed scheme is formally verified. An extensive analysis shows that the proposed scheme provides message confidentiality, mutual authentication, dynamic session key establishment, node privacy, and message freshness. Moreover, a preliminary study also reveals that the proposed framework is secure against popular types of attacks, such as impersonation attacks, man-in-the-middle attacks, replay attacks, and information-leakage attacks. As a result, we believe the proposed framework achieves efficiency at reasonable computation and communication costs and can be a safeguard to real heterogeneous WSN applications.

  3. A novel framework for objective detection and tracking of TC center from noisy satellite imagery

    NASA Astrophysics Data System (ADS)

    Johnson, Bibin; Thomas, Sachin; Rani, J. Sheeba

    2018-07-01

    This paper proposes a novel framework for automatically determining and tracking the center of a tropical cyclone (TC) during its entire life cycle from the thermal infrared (TIR) channel data of geostationary satellites. The proposed method handles meteorological images with noise and missing or partial information due to seasonal variability and a lack of significant spatial or vortex features. To retrieve the cyclone center under these circumstances, a synergistic approach based on objective measures and a Numerical Weather Prediction (NWP) model is proposed. The method employs a spatial gradient scheme to process missing and noisy frames, or a spatio-temporal gradient scheme for image sequences that are continuous and contain less noise. The initial estimate of the TC center from imagery with missing data is corrected by exploiting an NWP-model-based post-processing scheme. The validity of the framework is tested on infrared images of different cyclones obtained from various geostationary satellites such as Meteosat-7, INSAT-3D and Kalpana-1. The computed track is compared with the actual track data obtained from the Joint Typhoon Warning Center (JTWC) and shows a reduction of mean track error by 11 % compared to other state-of-the-art methods in the presence of missing and noisy frames. The proposed method is also successfully tested for simultaneous retrieval of the TC center from images containing multiple non-overlapping cyclones.

  4. A Round-Efficient Authenticated Key Agreement Scheme Based on Extended Chaotic Maps for Group Cloud Meeting.

    PubMed

    Lin, Tsung-Hung; Tsung, Chen-Kun; Lee, Tian-Fu; Wang, Zeng-Bo

    2017-12-03

    Security is a critical issue for business purposes. For example, a cloud meeting must provide strong security to maintain communication privacy. Considering the cloud-meeting scenario, we apply extended chaotic maps to present a passwordless group authenticated key agreement scheme, termed Passwordless Group Authentication Key Agreement (PL-GAKA). PL-GAKA improves the computational efficiency of the simple group password-based authenticated key agreement (SGPAKE) proposed by Lee et al. in terms of computing the session key. Since the extended chaotic map has a security level equivalent to the Diffie-Hellman key exchange scheme applied by SGPAKE, the security of PL-GAKA is not sacrificed when improving the computational efficiency. Moreover, PL-GAKA is a passwordless scheme, so password maintenance is unnecessary. Short-term authentication is considered; hence, communication security is stronger than in other protocols because a session key is dynamically generated in each cloud meeting. In our analysis, we first prove that each meeting member obtains the correct information during the meeting. We then analyze common security issues for the proposed PL-GAKA in terms of session key security, mutual authentication, perfect forward secrecy, and data integrity. Moreover, we demonstrate that communication in PL-GAKA remains secure under replay attacks, impersonation attacks, privileged-insider attacks, and stolen-verifier attacks. Finally, an overall comparison is given to show the performance of PL-GAKA relative to SGPAKE and related solutions.
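
    The extended-chaotic-map primitive behind such schemes is the semigroup property of Chebyshev polynomials, which gives a Diffie-Hellman-style exchange. The sketch below demonstrates the commutativity numerically over the real interval [-1, 1]; practical schemes, PL-GAKA included, work with enhanced Chebyshev maps over finite fields, since the floating-point form shown here is insecure. All numbers are illustrative.

    ```python
    import numpy as np

    def chebyshev(n, x):
        # T_n(x) = cos(n * arccos x) on [-1, 1]; the semigroup property
        # T_a(T_b(x)) = T_ab(x) = T_b(T_a(x)) is what key agreement exploits.
        return np.cos(n * np.arccos(x))

    x = 0.314159                  # public seed value
    a, b = 123, 457               # Alice's and Bob's secret degrees
    A = chebyshev(a, x)           # Alice publishes T_a(x)
    B = chebyshev(b, x)           # Bob publishes T_b(x)
    k_alice = chebyshev(a, B)     # T_a(T_b(x))
    k_bob = chebyshev(b, A)       # T_b(T_a(x))
    print(np.isclose(k_alice, k_bob))   # True: both share the same session key
    ```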

  5. Fault-tolerant simple quantum-bit commitment unbreakable by individual attacks

    NASA Astrophysics Data System (ADS)

    Shimizu, Kaoru; Imoto, Nobuyuki

    2002-03-01

    This paper proposes a simple scheme for quantum-bit commitment that is secure against individual particle attacks, where a sender is unable to use quantum logical operations to manipulate multiparticle entanglement for performing quantum collective and coherent attacks. Our scheme employs a cryptographic quantum communication channel defined in a four-dimensional Hilbert space and can be implemented by using single-photon interference. For an ideal case of zero-loss and noiseless quantum channels, our basic scheme relies only on the physical features of quantum states. Moreover, as long as the bit-flip error rates are sufficiently small (less than a few percent), we can improve our scheme and make it fault tolerant by adopting simple error-correcting codes of short length. Compared with the well-known Brassard-Crepeau-Jozsa-Langlois 1993 (BCJL93) protocol, our scheme is mathematically far simpler, more efficient in terms of the number of transmitted photons, and more tolerant of bit-flip errors.

  6. Correction of image drift and distortion in scanning electron microscopy.

    PubMed

    Jin, P; Li, X

    2015-12-01

    Continuous research on small-scale mechanical structures and systems has created strong demand for ultrafine deformation and strain measurements. Conventional optical microscopes cannot meet such requirements owing to their lower spatial resolution. Therefore, the high-resolution scanning electron microscope has become the preferred system for high-spatial-resolution imaging and measurements. However, scanning electron microscope images are usually contaminated by distortion and drift aberrations, which cause serious errors in precise imaging and measurements of tiny structures. This paper develops a new method to correct the drift and distortion aberrations of scanning electron microscope images and evaluates the effect of the correction by comparing corrected images with a scanning electron microscope image of a standard sample. The drift correction is based on an interpolation scheme, in which a series of images is captured at one location of the sample and image correlation is performed between the first image and each subsequent image to interpolate the drift-time relationship of scanning electron microscope images. The distortion correction applies the axial symmetry model of charged-particle imaging theory to two images sharing the same location of one object under different imaging fields of view. The difference, apart from rigid displacement, between the two images yields the distortion parameters. Third-order precision is considered in the model, and experiments show that a maximum correction of one pixel is obtained for the employed high-resolution electron microscopic system. © 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society.
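
    The drift-interpolation step can be sketched with standard cross-correlation. Below, image shifts relative to the first frame are estimated from the FFT cross-correlation peak and a linear drift-time relationship is fitted; the synthetic image series, drift rates and integer-pixel precision are assumptions of the sketch, not the paper's procedure.

    ```python
    import numpy as np

    def estimate_shift(ref, img):
        # FFT cross-correlation; returns (dy, dx) with img ≈ np.roll(ref, (dy, dx)).
        xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
        dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
        if dy > ref.shape[0] // 2: dy -= ref.shape[0]   # unwrap FFT periodicity
        if dx > ref.shape[1] // 2: dx -= ref.shape[1]
        return -dy, -dx

    # Synthetic image series at one sample location: a pattern drifting
    # 1 px/frame in y and -2 px/frame in x (values are illustrative).
    rng = np.random.default_rng(1)
    base = rng.random((128, 128))
    stack = [np.roll(base, (t, -2 * t), axis=(0, 1)) for t in range(8)]

    shifts = np.array([estimate_shift(stack[0], im) for im in stack])
    rates = np.polyfit(np.arange(8), shifts, 1)[0]   # linear drift rate per frame
    print(rates)                                     # ≈ [1.0, -2.0]
    ```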

  7. Criterion for correct recalls in associative-memory neural networks

    NASA Astrophysics Data System (ADS)

    Ji, Han-Bing

    1992-12-01

    A novel weighted outer-product learning (WOPL) scheme for associative memory neural networks (AMNNs) is presented. In the scheme, each fundamental memory is allocated a learning weight to direct its correct recall. Both the Hopfield and multiple-training models are instances of the WOPL model with certain sets of learning weights. A necessary condition for choosing learning weights that preserve the convergence property of the WOPL model is obtained through neural dynamics. A criterion for choosing learning weights for correct associative recall of the fundamental memories is proposed. In this paper, an important parameter called the signal-to-noise ratio gain (SNRG) is devised, and it is found empirically that SNRGs have their own threshold values, which means that any fundamental memory can be correctly recalled when its corresponding SNRG is greater than or equal to its threshold value. Furthermore, a theorem is given, and theoretical results on the conditions on SNRGs and learning weights for good associative recall performance of the WOPL model are obtained accordingly. In principle, when all SNRGs or learning weights satisfy the theoretically obtained conditions, the asymptotic storage capacity of the WOPL model grows at the greatest rate in a certain known stochastic sense for AMNNs, and thus the WOPL model can achieve correct recall of all fundamental memories. Representative computer simulations confirm the criterion and the theoretical analysis.
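
    A minimal numerical illustration of weighted outer-product learning follows. Choosing all learning weights equal recovers the Hopfield rule; the pattern count, network size and weight values below are arbitrary choices for the demo.

    ```python
    import numpy as np

    def wopl_weights(patterns, weights):
        # Weighted outer-product learning: W = sum_k w_k * xi_k xi_k^T / N,
        # with the self-connections (diagonal) removed.
        n = patterns.shape[1]
        W = sum(w * np.outer(p, p) for w, p in zip(weights, patterns))
        np.fill_diagonal(W, 0.0)
        return W / n

    def recall(W, probe, steps=20):
        s = probe.copy()
        for _ in range(steps):          # synchronous sign-threshold dynamics
            s = np.sign(W @ s)
            s[s == 0] = 1
        return s

    rng = np.random.default_rng(2)
    xi = np.sign(rng.standard_normal((3, 200)))      # 3 fundamental memories
    W = wopl_weights(xi, weights=[1.0, 1.5, 1.2])    # learning weights (illustrative)
    noisy = xi[1].copy()
    noisy[:20] *= -1                                 # corrupt 10% of the bits
    print(np.array_equal(recall(W, noisy), xi[1]))   # True: correct recall
    ```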

  8. Optimal rotated staggered-grid finite-difference schemes for elastic wave modeling in TTI media

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Yan, Hongyong; Liu, Hong

    2015-11-01

    The rotated staggered-grid finite-difference (RSFD) method is an effective approach for numerical modeling to study the wavefield characteristics in tilted transversely isotropic (TTI) media. But it suffers from serious numerical dispersion, which directly affects the modeling accuracy. In this paper, we propose two different optimal RSFD schemes based on the sampling approximation (SA) method and the least-squares (LS) method, respectively, to overcome this problem. We first briefly introduce the RSFD theory, based on which we derive the SA-based RSFD scheme and the LS-based RSFD scheme. Then different forms of analysis are used to compare the SA-based and LS-based RSFD schemes with the conventional RSFD scheme, which is based on the Taylor-series expansion (TE) method. The numerical accuracy analysis verifies the greater accuracy of the two proposed optimal schemes, and indicates that they can effectively widen the wavenumber range with great accuracy compared with the TE-based RSFD scheme. Further comparisons between the two optimal schemes show that at small wavenumbers the SA-based RSFD scheme performs better, while at large wavenumbers the LS-based RSFD scheme leads to a smaller error. Finally, the modeling results demonstrate that for the same operator length, the SA-based and LS-based RSFD schemes can achieve greater accuracy than the TE-based RSFD scheme, while for the same accuracy, the optimal schemes can adopt shorter difference operators to save computing time.

  9. A rank-based approach for correcting systematic biases in spatial disaggregation of coarse-scale climate simulations

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2017-07-01

    Use of General Circulation Model (GCM) precipitation and evapotranspiration sequences for hydrologic modelling can result in unrealistic simulations due to the coarse scales at which GCMs operate and the systematic biases they contain. The Bias Correction Spatial Disaggregation (BCSD) method is a popular statistical downscaling and bias correction method developed to address this issue. The advantage of BCSD is its ability to reduce biases in the distribution of precipitation totals at the GCM scale and then introduce more realistic variability at finer scales than simpler spatial interpolation schemes. Although BCSD corrects biases at the GCM scale before disaggregation, at finer spatial scales biases are re-introduced by the assumptions made in the spatial disaggregation process. Our study focuses on this limitation of BCSD and proposes a rank-based approach that aims to reduce the spatial disaggregation bias, especially for low and high precipitation extremes. BCSD requires the specification of a multiplicative bias-correction anomaly field that represents the ratio of the fine-scale precipitation to the disaggregated precipitation. It is shown that there is significant temporal variation in the anomalies, which is masked when a mean anomaly field is used. This can be improved by modelling the anomalies in rank space. Results from the application of the rank-BCSD procedure improve the match between the distributions of observed and downscaled precipitation at the fine scale compared to the original BCSD approach. Further improvements in the distribution are identified when a scaling correction to preserve mass in the disaggregation process is implemented. An assessment of the approach using a single GCM over Australia shows clear advantages, especially in the simulation of particularly low and high downscaled precipitation amounts.
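
    The core idea, replacing a time-mean multiplicative anomaly with one indexed by rank, can be illustrated in a few lines. This is a loose sketch of the concept at a single grid cell, not the paper's full procedure; the gamma-distributed "precipitation" and the rank-dependent bias are fabricated for the demo.

    ```python
    import numpy as np

    def rank_anomaly_correction(coarse_hist, fine_hist, coarse_new):
        # Multiplicative anomalies between fine-scale observations and the
        # disaggregated coarse values, modelled as a function of rank rather
        # than collapsed to a single time-mean factor.
        anomalies = fine_hist / np.maximum(coarse_hist, 1e-9)
        sorted_anom = anomalies[np.argsort(coarse_hist)]   # anomaly per coarse rank
        # Rank of each new coarse value within the historical distribution:
        ranks = np.searchsorted(np.sort(coarse_hist), coarse_new)
        ranks = np.clip(ranks, 0, coarse_hist.size - 1)
        return coarse_new * sorted_anom[ranks]

    rng = np.random.default_rng(3)
    coarse = rng.gamma(2.0, 5.0, 1000)           # disaggregated precip (mm)
    fine = coarse * (0.5 + 0.002 * coarse)       # rank-dependent bias (synthetic)
    print(rank_anomaly_correction(coarse, fine, np.array([1.0, 10.0, 40.0])))
    ```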

  10. A positivity-preserving, implicit defect-correction multigrid method for turbulent combustion

    NASA Astrophysics Data System (ADS)

    Wasserman, M.; Mor-Yossef, Y.; Greenberg, J. B.

    2016-07-01

    A novel, robust multigrid method for the simulation of turbulent and chemically reacting flows is developed. A survey of previous attempts at implementing multigrid for the problems at hand indicated extensive use of artificial stabilization to overcome numerical instability arising from non-linearity of turbulence and chemistry model source-terms, small-scale physics of combustion, and loss of positivity. These issues are addressed in the current work. The highly stiff Reynolds-averaged Navier-Stokes (RANS) equations, coupled with turbulence and finite-rate chemical kinetics models, are integrated in time using the unconditionally positive-convergent (UPC) implicit method. The scheme is successfully extended in this work for use with chemical kinetics models, in a fully-coupled multigrid (FC-MG) framework. To tackle the degraded performance of multigrid methods for chemically reacting flows, two major modifications are introduced with respect to the basic, Full Approximation Storage (FAS) approach. First, a novel prolongation operator that is based on logarithmic variables is proposed to prevent loss of positivity due to coarse-grid corrections. Together with the extended UPC implicit scheme, the positivity-preserving prolongation operator guarantees unconditional positivity of turbulence quantities and species mass fractions throughout the multigrid cycle. Second, to improve the coarse-grid correction obtained in localized regions of high chemical activity, a modified defect correction procedure is devised, and successfully applied for the first time to simulate turbulent, combusting flows. The proposed modifications to the standard multigrid algorithm create a well-rounded and robust numerical method that provides accelerated convergence, while unconditionally preserving the positivity of model equation variables. Numerical simulations of various flows involving premixed combustion demonstrate that the proposed MG method increases the efficiency by a factor of up to eight relative to an equivalent single-grid method, and by a factor of two relative to an artificially-stabilized MG method.
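
    The positivity-preserving prolongation admits a one-line illustration: interpolate the logarithm of the coarse-grid values and exponentiate, so a coarse-grid correction can never drive turbulence quantities or mass fractions negative. The 1-D linear interpolation and the sample values below are illustrative assumptions, not the paper's operator.

    ```python
    import numpy as np

    def positive_prolongation(coarse):
        # Prolongation in logarithmic variables: linear interpolation of
        # log(coarse), then exponentiation, which keeps the fine-grid field
        # strictly positive (unlike plain linear prolongation of corrections).
        xc = np.arange(coarse.size)
        xf = np.linspace(0.0, coarse.size - 1.0, 2 * coarse.size - 1)
        return np.exp(np.interp(xf, xc, np.log(coarse)))

    k_coarse = np.array([1e-8, 3e-4, 2.0, 7.5])   # e.g., turbulence k, always > 0
    print(positive_prolongation(k_coarse))        # positive on the fine grid too
    ```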

  11. Radiation boundary condition and anisotropy correction for finite difference solutions of the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Webb, Jay C.

    1994-01-01

    In this paper finite-difference solutions of the Helmholtz equation in an open domain are considered. By using a second-order central difference scheme and the Bayliss-Turkel radiation boundary condition, reasonably accurate solutions can be obtained when the number of grid points per acoustic wavelength used is large. However, when a smaller number of grid points per wavelength is used, excessive reflections occur which tend to overwhelm the computed solutions. These excessive reflections are due to the incompatibility between the governing finite difference equation and the Bayliss-Turkel radiation boundary condition. The Bayliss-Turkel radiation boundary condition was developed from the asymptotic solution of the partial differential equation. To obtain compatibility, the radiation boundary condition should instead be constructed from the asymptotic solution of the finite difference equation. Examples are provided using the improved radiation boundary condition based on the asymptotic solution of the governing finite difference equation. The computed results are free of reflections even when only five grid points per wavelength are used. The improved radiation boundary condition has also been tested for problems with complex acoustic sources and sources embedded in a uniform mean flow. The present method of developing a radiation boundary condition is also applicable to higher-order finite difference schemes. In all these cases no reflected waves could be detected. The use of finite difference approximation inevitably introduces anisotropy into the governing field equation. The effect of anisotropy is to distort the directional distribution of the amplitude and phase of the computed solution. It can be quite large when the number of grid points per wavelength used in the computation is small. A way to correct this effect is proposed. The correction factor developed from the asymptotic solutions is source independent and, hence, can be determined once and for all. The effectiveness of the correction factor in providing improvements to the computed solution is demonstrated in this paper.

  12. Towards information-optimal simulation of partial differential equations.

    PubMed

    Leike, Reimar H; Enßlin, Torsten A

    2018-03-01

    Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach; the discretized field is interpreted as data providing information about a real physical field that is unknown. This information is sought to be conserved by the scheme as the field evolves in time. Such an information theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the appearing Gaussian integrals. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses finite-difference schemes on the same resolution. The IFD scheme, however, has to be correctly informed on the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes like spectral Fourier-Galerkin methods. We discuss implications of the approximations made.

  13. Preserving privacy of online digital physiological signals using blind and reversible steganography.

    PubMed

    Shiu, Hung-Jr; Lin, Bor-Sing; Huang, Chien-Hung; Chiang, Pei-Ying; Lei, Chin-Laung

    2017-11-01

    Physiological signals such as electrocardiograms (ECG) and electromyograms (EMG) are widely used to diagnose diseases. Presently, the Internet offers numerous cloud storage services which enable digital physiological signals to be uploaded for convenient access and use. Numerous online databases of medical signals have been built, and the data in them must be processed in a manner that preserves patients' confidentiality. A reversible error-correcting-coding strategy is adopted to transform digital physiological signals into a new bit-stream, using a matrix in which the Hamming code is embedded to carry secret messages or private information. The shared keys are the matrix and the version of the Hamming code. An online open database, the MIT-BIH arrhythmia database, was used to test the proposed algorithms. The time complexity, capacity and robustness are evaluated, and comparisons with related work on these criteria are also provided. This work proposes a reversible, low-payload steganographic scheme for preserving the privacy of physiological signals. An (n, m)-Hamming code is used to insert (n - m) secret bits into n bits of a cover signal. The number of embedded bits per modification is higher than in comparable methods, the computational cost is low, and the scheme is secure. Unlike other Hamming-code based schemes, the proposed scheme is both reversible and blind. Copyright © 2017 Elsevier B.V. All rights reserved.
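
    The Hamming-code embedding at the heart of such schemes is compact enough to show directly. The sketch below implements plain (7,4) matrix embedding, hiding 3 secret bits in 7 cover bits by flipping at most one bit; it illustrates only the core coding step, not the paper's reversible construction, and the cover bits are arbitrary.

    ```python
    import numpy as np

    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])       # columns are binary 1..7

    def embed(cover7, msg3):
        # Matrix embedding with the (7,4) Hamming code: force the syndrome of
        # the 7 cover bits to equal the 3 message bits, flipping at most 1 bit.
        x = cover7.copy()
        diff = ((H @ x) % 2) ^ msg3              # current syndrome XOR message
        col = diff[0] * 4 + diff[1] * 2 + diff[2]
        if col:                                  # column index of the bit to flip
            x[col - 1] ^= 1
        return x

    def extract(stego7):
        return (H @ stego7) % 2                  # the message is the syndrome

    cover = np.array([1, 0, 1, 1, 0, 0, 1])
    msg = np.array([1, 0, 1])
    stego = embed(cover, msg)
    print(extract(stego), "flipped bits:", np.sum(cover ^ stego))   # [1 0 1], 1
    ```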

  14. On Spurious Numerics in Solving Reactive Equations

    NASA Technical Reports Server (NTRS)

    Kotov, D. V; Yee, H. C.; Wang, W.; Shu, C.-W.

    2013-01-01

    The objective of this study is to gain a deeper understanding of the behavior of high-order shock-capturing schemes for problems with stiff source terms and discontinuities, and of the corresponding numerical prediction strategies. The studies by Yee et al. (2012) and Wang et al. (2012) focus only on solving the reactive system by the fractional step method using Strang splitting (Strang 1968). It is a common practice by developers in computational physics and engineering simulations to include a cutoff safeguard if densities are outside the permissible range. Here we compare the spurious behavior of the same schemes when solving the fully coupled reactive system without Strang splitting vs. using Strang splitting. The comparison of the two procedures and the effects of a cutoff safeguard are the focus of the present study. The comparison of the performance of these schemes is largely based on the degree to which each method captures the correct location of the reaction front on coarse grids. Here "coarse grids" means the standard mesh density required for accurate simulation of typical non-reacting flows with a similar problem setup. It is remarked that, in order to resolve the sharp reaction front, local refinement beyond the standard mesh density is still needed.
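
    The pathology under study is easy to reproduce. The sketch below advances a stiff reacting advection problem with Strang splitting (half source step, full convection step, half source step) plus a clipping safeguard; all parameters are illustrative. On this deliberately coarse grid the front propagates exactly one cell per step instead of at the physical speed, the spurious behavior the abstract refers to.

    ```python
    import numpy as np

    def react(u, dt, eps=1e-4):
        # Stiff source sub-step (sub-cycled explicit Euler); the source
        # drives u toward the stable states 0 and 1.
        for _ in range(20):
            u = u + (dt / 20) * u * (1 - u) * (u - 0.5) / eps
            u = np.clip(u, 0.0, 1.0)   # the "cutoff safeguard" discussed above
        return u

    def advect(u, c):
        # First-order upwind step for u_t + a u_x = 0 at CFL number c = a*dt/dx,
        # with a fixed inflow value on the left boundary.
        un = u - c * (u - np.roll(u, 1))
        un[0] = 1.0
        return un

    n, c = 400, 0.8
    dt = c / n                                      # a = 1, dx = 1/n
    u = np.where(np.arange(n) < n // 4, 1.0, 0.0)   # step profile, front at cell 100
    for _ in range(200):
        u = react(u, dt / 2)    # Strang splitting: half source,
        u = advect(u, c)        # full convection,
        u = react(u, dt / 2)    # half source
    # Physical front: 100 + 0.8*200 = 260; the scheme puts it at cell 300
    # (one cell per step), a spurious propagation speed on this coarse grid:
    print(np.argmax(u < 0.5))
    ```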

  15. Particle/Continuum Hybrid Simulation in a Parallel Computing Environment

    NASA Technical Reports Server (NTRS)

    Baganoff, Donald

    1996-01-01

    The objective of this study was to modify an existing parallel particle code based on the direct simulation Monte Carlo (DSMC) method to include a Navier-Stokes (NS) calculation so that a hybrid solution could be developed. In carrying out this work, it was determined that the following five issues had to be addressed before extensive program development of a three dimensional capability was pursued: (1) find a set of one-sided kinetic fluxes that are fully compatible with the DSMC method, (2) develop a finite volume scheme to make use of these one-sided kinetic fluxes, (3) make use of the one-sided kinetic fluxes together with DSMC type boundary conditions at a material surface so that velocity slip and temperature slip arise naturally for near-continuum conditions, (4) find a suitable sampling scheme so that the values of the one-sided fluxes predicted by the NS solution at an interface between the two domains can be converted into the correct distribution of particles to be introduced into the DSMC domain, (5) carry out a suitable number of tests to confirm that the developed concepts are valid, individually and in concert for a hybrid scheme.

  16. A structure adapted multipole method for electrostatic interactions in protein dynamics

    NASA Astrophysics Data System (ADS)

    Niedermeier, Christoph; Tavan, Paul

    1994-07-01

    We present an algorithm for rapid approximate evaluation of electrostatic interactions in molecular dynamics simulations of proteins. Traditional algorithms require computational work of order O(N²) for a system of N particles. Truncation methods which try to avoid that effort entail intolerably large errors in forces, energies and other observables. Hierarchical multipole expansion algorithms, which can account for the electrostatics to numerical accuracy, scale with O(N log N) or even with O(N) if they are augmented by a sophisticated scheme for summing up forces. To further reduce the computational effort we propose an algorithm that also uses a hierarchical multipole scheme but considers only the first two multipole moments (i.e., charges and dipoles). Our strategy is based on the consideration that numerical accuracy may not be necessary to reproduce protein dynamics with sufficient correctness. As opposed to previous methods, our scheme for hierarchical decomposition is adjusted to structural and dynamical features of the particular protein considered rather than chosen rigidly as a cubic grid. Compared to truncation methods we manage to reduce errors in the computation of electrostatic forces by a factor of 10 with only marginal additional effort.
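
    Keeping only the first two moments amounts to replacing a distant particle cluster by its total charge and dipole. The sketch below compares that truncated expansion with direct summation for a random cluster; the cluster size, charges and evaluation points are arbitrary test values, not the paper's decomposition.

    ```python
    import numpy as np

    def far_potential(charges, positions, center, targets):
        # First two multipole moments of a particle cluster: total charge
        # (monopole) and dipole; evaluate the potential at distant targets
        # as  phi(r) ≈ q/|r| + (p·r)/|r|^3  with r measured from the center.
        q = charges.sum()
        p = (charges[:, None] * (positions - center)).sum(axis=0)
        r = targets - center
        d = np.linalg.norm(r, axis=1)
        return q / d + (r @ p) / d**3

    rng = np.random.default_rng(4)
    pos = rng.random((50, 3)) * 0.1                 # compact cluster of particles
    qs = rng.standard_normal(50)                    # their (made-up) charges
    ctr = pos.mean(axis=0)
    far = np.array([[3.0, 0.0, 0.0], [0.0, 4.0, 1.0]])
    exact = np.array([(qs / np.linalg.norm(t - pos, axis=1)).sum() for t in far])
    print(exact, far_potential(qs, pos, ctr, far))  # agree to the quadrupole error
    ```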

  17. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme.

    PubMed

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-04-21

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot image both the target celestial body and stars with proper exposure because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of imaging both the target celestial body and stars well exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.

  18. Architectures for Quantum Simulation Showing a Quantum Speedup

    NASA Astrophysics Data System (ADS)

    Bermejo-Vega, Juan; Hangleiter, Dominik; Schwarz, Martin; Raussendorf, Robert; Eisert, Jens

    2018-04-01

    One of the main aims in the field of quantum simulation is to achieve a quantum speedup, often referred to as "quantum computational supremacy": the experimental realization of a quantum device that computationally outperforms classical computers. In this work, we show that one can devise versatile and feasible schemes of two-dimensional, dynamical, quantum simulators showing such a quantum speedup, building on intermediate problems involving nonadaptive, measurement-based, quantum computation. In each of the schemes, an initial product state is prepared, potentially involving an element of randomness as in disordered models, followed by a short-time evolution under a basic translationally invariant Hamiltonian with simple nearest-neighbor interactions and a mere sampling measurement in a fixed basis. The correctness of the final-state preparation in each scheme is fully efficiently certifiable. We discuss experimental necessities and possible physical architectures, inspired by platforms of cold atoms in optical lattices and a number of others, as well as specific assumptions that enter the complexity-theoretic arguments. This work shows that benchmark settings exhibiting a quantum speedup may require little control, in contrast to universal quantum computing. Thus, our proposal puts a convincing experimental demonstration of a quantum speedup within reach in the near term.

  19. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme

    PubMed Central

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-01-01

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot image both the target celestial body and stars with proper exposure because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of imaging both the target celestial body and stars well exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters. PMID:28430132

  20. A rotationally biased upwind difference scheme for the Euler equations

    NASA Technical Reports Server (NTRS)

    Davis, S. F.

    1983-01-01

    The upwind difference schemes of Godunov, Osher, Roe and van Leer are able to resolve one dimensional steady shocks for the Euler equations within one or two mesh intervals. Unfortunately, this resolution is lost in two dimensions when the shock crosses the computing grid at an oblique angle. To correct this problem, a numerical scheme was developed which automatically locates the angle at which a shock might be expected to cross the computing grid and then constructs separate finite difference formulas for the flux components normal and tangential to this direction. Numerical results which illustrate the ability of this method to resolve steady oblique shocks are presented.
