Sample records for channel estimation errors

  1. Performance analysis of adaptive equalization for coherent acoustic communications in the time-varying ocean environment.

    PubMed

    Preisig, James C

    2005-07-01

    Equations are derived for analyzing the performance of channel-estimate-based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ_s²) of each equalizer. This error is decomposed into two components: the minimum achievable error (σ_0²) and the excess error (σ_e²). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and the statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel-estimate-based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments and motivates the implementation of a DFE that is robust with respect to channel estimation errors.
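
    The decomposition above can be illustrated numerically. The sketch below (an illustration with assumed parameters, not the paper's derivation) pushes QPSK through a hypothetical one-tap channel, builds a scalar MMSE equalizer from the true channel and from a perturbed estimate, and reads off σ_0², σ_s², and the excess error σ_e² as their difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
h = 0.9 + 0.3j                           # one-tap channel (assumed)
snr_db, est_err_var = 15.0, 0.01         # SNR and channel-estimation error variance (assumed)
noise_var = abs(h)**2 / 10**(snr_db / 10)

s = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2), n)   # QPSK symbols
r = h * s + np.sqrt(noise_var / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def soft_error(h_hat):
    """Mean squared soft decision error of a scalar MMSE equalizer built from h_hat."""
    w = np.conj(h_hat) / (abs(h_hat)**2 + noise_var)
    return float(np.mean(np.abs(w * r - s)**2))

sigma2_0 = soft_error(h)                 # minimum achievable error: perfect channel knowledge
h_err = np.sqrt(est_err_var / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
sigma2_s = soft_error(h + h_err)         # total error with an imperfect channel estimate
sigma2_e = sigma2_s - sigma2_0           # excess error
```

    Because the coefficient built from the true channel minimizes the soft decision error, the excess term comes out positive whenever the estimate is imperfect.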

  2. LS Channel Estimation and Signal Separation for UHF RFID Tag Collision Recovery on the Physical Layer.

    PubMed

    Duan, Hanjun; Wu, Haifeng; Zeng, Yu; Chen, Yuebin

    2016-03-26

    In a passive ultra-high frequency (UHF) radio-frequency identification (RFID) system, tag collision is generally resolved on the medium access control (MAC) layer. However, some collided tag signals can be recovered on the physical (PHY) layer, which enhances the identification efficiency of the RFID system. For recovery on the PHY layer, channel estimation is a critical issue: good channel estimation helps to recover the collided signals. Existing channel estimates work well for two collided tags; when the number of collided tags exceeds two, however, they suffer larger estimation errors. In this paper, we propose a novel channel estimate for the UHF RFID system. It adopts an orthogonal matrix built from the preamble information known to the reader and applies a minimum-mean-square-error (MMSE) criterion to estimate the channels. From the estimated channels, we can accurately separate the collided signals and recover them. By means of numerical results, we show that the proposed estimate has lower estimation errors and higher separation efficiency than the existing estimates.
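
    The estimation step can be sketched under simplifying assumptions (a flat channel per tag, Hadamard-type ±1 preambles, and a unit-variance channel prior for the MMSE term); this illustrates LS versus MMSE estimation from orthogonal preambles, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
K, L = 4, 16                              # collided tags, preamble length (assumed)

# Orthogonal preambles: K columns of a 16x16 Hadamard matrix (+/-1 chips)
H2 = np.array([[1, 1], [1, -1]])
had = H2
for _ in range(3):                        # 2 -> 4 -> 8 -> 16
    had = np.kron(had, H2)
A = had[:, :K].astype(float)              # L x K, with A.T @ A = L * I

h = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)  # tag channels
noise_var = 0.05
y = A @ h + np.sqrt(noise_var / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

# LS and MMSE channel estimates (the MMSE term assumes a unit-variance channel prior)
h_ls = np.linalg.solve(A.T @ A, A.T @ y)
h_mmse = np.linalg.solve(A.T @ A + noise_var * np.eye(K), A.T @ y)
mse_ls = float(np.mean(np.abs(h_ls - h)**2))
```

    With orthogonal preambles the normal-equation matrix is diagonal, so each tag's channel decouples and the per-tag error variance shrinks with the preamble length.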

  3. Revised techniques for estimating peak discharges from channel width in Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.; Omang, R.J.

    1987-01-01

    This study was conducted to develop new estimating equations based on channel width and the updated flood frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-yr flood, 5-yr flood, and 10-yr flood, and the largest standard errors occurred in the prediction equations for the 100-yr flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) The equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) The measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement errors; (3) Reliability of results from the equations for channel widths beyond the range of definition is unknown. 
In spite of the limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and climatic variables. Because the two types of estimating equations are independent, results from each can be weighted inversely proportional to their variances, and averaged. The weighted average estimate has a variance less than either individual estimate. (Author's abstract)
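
The inverse-variance weighting described above can be shown in a few lines; the discharge and variance numbers below are invented for illustration.

```python
# Peak-discharge estimates from the two independent methods (numbers invented)
q_w, var_w = 120.0, 30.0**2     # channel-width equation estimate and its variance
q_b, var_b = 150.0, 45.0**2     # basin/climatic equation estimate and its variance

w_w = (1 / var_w) / (1 / var_w + 1 / var_b)   # inverse-variance weights
q_avg = w_w * q_w + (1 - w_w) * q_b           # weighted average estimate
var_avg = 1 / (1 / var_w + 1 / var_b)         # its variance: smaller than either input
```

The combined variance is the harmonic-sum of the two input variances, so it is always below the smaller of the two, which is exactly the property the abstract cites.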

  4. Channel estimation in few mode fiber mode division multiplexing transmission system

    NASA Astrophysics Data System (ADS)

    Hei, Yongqiang; Li, Li; Li, Wentao; Li, Xiaohui; Shi, Guangming

    2018-03-01

    Obtaining the channel state information (CSI) is of great importance for equalization and detection in coherent receivers. However, to the best of the authors' knowledge, in most of the existing literature CSI is assumed to be perfectly known at the receiver, and so far little work discusses the effect of imperfect CSI, caused by channel estimation, on MDM system performance. Motivated by this, channel estimation in a few-mode-fiber (FMF) mode-division-multiplexing (MDM) system is investigated in this paper, in which two classical channel estimation methods, the least squares (LS) method and the minimum mean square error (MMSE) method, are discussed under the assumption of spatially white noise lumped at the receiver side of the MDM system. Both the capacity and the BER performance of the MDM system affected by mode-dependent gain or loss (MDL) under different channel estimation errors have been studied. Simulation results show that the capacity and BER performance of the MDM system are further degraded by channel estimation errors, and that a 1e-3 channel-estimation-error variance is acceptable in an MDM system with 0-6 dB MDL values.
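
    One crude way to see the sensitivity discussed above is to perturb a random mode-coupling matrix by a channel-estimation error and recompute a MIMO-style capacity. This ignores MDL and the receiver's mismatched processing, so it is only a sketch with assumed parameters, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_modes = 3                                  # assumed number of guided modes
snr = 10 ** (15 / 10)                        # 15 dB link SNR (assumed)

# Random mode-coupling matrix standing in for the FMF transfer matrix
H = (rng.standard_normal((n_modes, n_modes))
     + 1j * rng.standard_normal((n_modes, n_modes))) / np.sqrt(2)

def capacity(Hm):
    """MIMO-style capacity in bit/s/Hz for transfer matrix Hm."""
    G = Hm @ Hm.conj().T
    return float(np.real(np.log2(np.linalg.det(np.eye(n_modes) + (snr / n_modes) * G))))

c_perfect = capacity(H)
c_with_err = []
for err_var in (1e-3, 1e-1):                 # small vs large estimation-error variance
    E = np.sqrt(err_var / 2) * (rng.standard_normal((n_modes, n_modes))
                                + 1j * rng.standard_normal((n_modes, n_modes)))
    c_with_err.append(capacity(H + E))       # capacity computed from the erroneous estimate
```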

  5. A robust pseudo-inverse spectral filter applied to the Earth Radiation Budget Experiment (ERBE) scanning channels

    NASA Technical Reports Server (NTRS)

    Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.

    1984-01-01

    Computer simulations of a least squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a "total" band, from which the pseudo-inverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field of view (IFOV) scene type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for given Sun-scene-spacecraft geometry. It is found that the pseudo-inverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum-variance and essentially unbiased radiance estimates.
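
    The least-squares core of such an estimator can be sketched with a made-up spectral-response matrix: the pseudo-inverse recovers the two band radiances from the three channel measurements. All numbers below are illustrative, not ERBE's actual channel responses.

```python
import numpy as np

# Hypothetical spectral-response matrix: rows = shortwave, longwave, total channels;
# columns = true shortwave and longwave radiance components (not ERBE's real responses).
S = np.array([[0.95, 0.05],     # shortwave channel leaks a little longwave
              [0.03, 0.97],     # longwave channel leaks a little shortwave
              [1.00, 1.00]])    # "total" channel responds to both bands
r_true = np.array([250.0, 180.0])                 # radiances, W/m^2 (invented)
m = S @ r_true + np.array([1.0, -0.5, 0.8])       # measurements with small errors

r_hat = np.linalg.pinv(S) @ m                     # least-squares (pseudo-inverse) estimate
```

    The overdetermined 3-channel/2-band system means the third "total" measurement helps average down the errors rather than being redundant.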

  6. A MIMO radar quadrature and multi-channel amplitude-phase error combined correction method based on cross-correlation

    NASA Astrophysics Data System (ADS)

    Yun, Lingtong; Zhao, Hongzhong; Du, Mengyuan

    2018-04-01

    Quadrature and multi-channel amplitude-phase errors must be compensated in I/Q quadrature sampling and in signals passing through multiple channels. A new method that requires neither a filter nor a standard reference signal is presented in this paper; it jointly estimates the quadrature and multi-channel amplitude-phase errors. The method uses the cross-correlation and the amplitude ratio between the signals to estimate the two amplitude-phase errors simply and effectively. The advantages of this method are verified by computer simulation, and its superiority is further confirmed with measured data from outfield experiments.
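
    The cross-correlation idea can be sketched for two channels observing a common signal with an assumed gain and phase mismatch; this is an illustration of the estimator's core, not the paper's full quadrature-plus-multi-channel procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
g_true, phi_true = 1.15, 0.30          # assumed inter-channel gain and phase errors

s = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # common signal
x1 = s + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
x2 = (g_true * np.exp(1j * phi_true) * s
      + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))

r12 = np.mean(x2 * np.conj(x1))        # cross-correlation between the channels
phi_hat = np.angle(r12)                # phase-error estimate
g_hat = np.sqrt(np.mean(np.abs(x2)**2) / np.mean(np.abs(x1)**2))  # amplitude ratio
```

    Because the noise terms are independent across channels, they average out of the cross-correlation, which is why no reference filter is needed in this scheme.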

  7. Mean annual runoff and peak flow estimates based on channel geometry of streams in northeastern and western Montana

    USGS Publications Warehouse

    Parrett, Charles; Omang, R.J.; Hull, J.A.

    1983-01-01

    Equations for estimating mean annual runoff and peak discharge from measurements of channel geometry were developed for western and northeastern Montana. The study area was divided into two regions for the mean annual runoff analysis, and separate multiple-regression equations were developed for each region. The active-channel width was determined to be the most important independent variable in each region. The standard error of estimate for the estimating equation using active-channel width was 61 percent in the Northeast Region and 38 percent in the West Region. The study area was divided into six regions for the peak discharge analysis, and multiple-regression equations relating channel geometry and basin characteristics to peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for each region. The standard errors of estimate for the regression equations using only channel width as an independent variable ranged from 35 to 105 percent. The standard errors improved in four regions as basin characteristics were added to the estimating equations. (USGS)
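
    Regressions of this kind are typically fit in log space. The sketch below fits log10(Q) = a + b·log10(W) to hypothetical width/discharge pairs and converts the log-unit standard error of estimate to percent using one common lognormal conversion; the data are invented, not Montana gage records.

```python
import math

# Hypothetical (active-channel width in m, peak discharge in m^3/s) pairs
sites = [(3.0, 4.5), (5.0, 9.8), (8.0, 21.0), (12.0, 38.0), (20.0, 90.0), (35.0, 210.0)]

# Fit log10(Q) = a + b * log10(W) by ordinary least squares
xs = [math.log10(w) for w, _ in sites]
ys = [math.log10(q) for _, q in sites]
n = len(sites)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar)**2 for x in xs)
a = ybar - b * xbar

# Standard error of estimate in log10 units, then expressed in percent
# (one common lognormal conversion)
sse = sum((y - (a + b * x))**2 for x, y in zip(xs, ys))
s_log = math.sqrt(sse / (n - 2))
se_percent = 100 * math.sqrt(math.exp((math.log(10) * s_log)**2) - 1)
```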

  8. Smooth Approximation l0-Norm Constrained Affine Projection Algorithm and Its Applications in Sparse Channel Estimation

    PubMed Central

    2014-01-01

    We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of convergence speed and steady-state error by incorporating a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
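
    The zero-attractor idea can be sketched with an NLMS-style update rather than the full affine projection, so this is a simplified stand-in for SL0-APA with assumed constants: the extra term pulls small taps toward zero via the gradient of the SL0 penalty Σ(1 − exp(−β|w_i|)).

```python
import numpy as np

rng = np.random.default_rng(4)
N, iters = 64, 4000
h = np.zeros(N)
h[[3, 17, 40]] = [1.0, -0.5, 0.3]        # sparse channel: 3 nonzero taps (invented)

mu, rho, beta = 0.5, 5e-4, 10.0          # step size, attractor strength, SL0 sharpness
w = np.zeros(N)
x_buf = np.zeros(N)
for _ in range(iters):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()     # white training input
    d = h @ x_buf + 0.01 * rng.standard_normal()              # desired output
    e = d - w @ x_buf
    w += mu * e * x_buf / (x_buf @ x_buf + 1e-8)              # NLMS correction
    w -= rho * beta * np.sign(w) * np.exp(-beta * np.abs(w))  # SL0 zero attractor
mse = float(np.mean((w - h)**2))
```

    The exponential factor makes the attraction strong only near zero, so large active taps are barely biased while near-zero taps are held down, which is the mechanism behind the faster convergence on sparse channels.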

  9. RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that achievable with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) scheme based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
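
    A one-tap RLS tracker of the kind described can be sketched as follows. The Gauss-Markov channel model and the simple error-driven adjustment of the forgetting factor are stand-ins for the paper's LMS adaptation rule, with all constants assumed.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 3000
# Slowly time-varying one-tap channel (first-order Gauss-Markov model, an assumption)
a = 0.999
h = np.empty(T, dtype=complex)
h[0] = 1.0
for t in range(1, T):
    h[t] = a * h[t - 1] + np.sqrt(1 - a**2) * (rng.standard_normal()
                                               + 1j * rng.standard_normal()) / np.sqrt(2)

p = np.exp(2j * np.pi * rng.random(T))       # unit-modulus pilot chips
noise_std = 0.05
y = h * p + noise_std * (rng.standard_normal(T) + 1j * rng.standard_normal(T))

lam, alpha = 0.98, 1e-3                      # initial forgetting factor and adaptation step
h_hat, P = 0.0 + 0.0j, 1.0
err2 = 0.0
for t in range(T):
    e = y[t] - h_hat * p[t]                  # a priori error
    g = P * np.conj(p[t]) / (lam + P * abs(p[t])**2)   # one-tap RLS gain
    h_hat += g * e
    P = (P - g * p[t] * P) / lam
    # Error-driven adjustment of lam: shrink it (track faster) when |e|^2
    # exceeds the noise floor -- a crude stand-in for the paper's LMS rule.
    lam = min(0.999, max(0.90, lam - alpha * (abs(e)**2 - 2 * noise_std**2)))
    err2 += abs(h[t] - h_hat)**2
mse = err2 / T
```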

  10. Fuzzy-Estimation Control for Improvement Microwave Connection for Iraq Electrical Grid

    NASA Astrophysics Data System (ADS)

    Hoomod, Haider K.; Radi, Mohammed

    2018-05-01

    The demand for broadband wireless services (Internet access, radio broadcast, TV, etc.) is increasing day by day, so it is necessary to exploit the available channel bandwidth well despite problems in the communication channels. In this paper, we propose an estimation technique that predicts channel availability at the current moment and the next one, so that the error in the channel bandwidth is known and data transfer through the channel can be controlled. The proposed estimation is based on a combination of the least mean square (LMS) algorithm, the standard Kalman filter, and a modified Kalman filter. The estimated channel error is used as a control parameter in fuzzy rules that adjust the rate and size of data sent through the network channel and rearrange the priorities of the buffered data (workstation control parameters, text, phone calls, images, and camera video) for the worst cases of channel error. The proposed system is designed to manage data communications over the channels connecting the Iraqi electrical grid stations. The results show that the modified Kalman filter gives the best performance in time and noise estimation (0.1109 for 5% noise estimation to 0.3211 for 90% noise estimation), and that the packet loss rate is reduced by a ratio of 35% to 385%.
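
    The Kalman filtering component can be illustrated with a minimal scalar filter tracking a constant "channel error" level from noisy measurements; all numbers are illustrative, and none of the fuzzy-rule logic is shown.

```python
import random

random.seed(6)
q_var, r_var = 1e-4, 0.04      # process and measurement noise variances (assumed)
x_hat, p = 0.0, 1.0            # state estimate and its error variance
true_x = 0.5                   # "true" channel-error level being tracked
for _ in range(500):
    z = true_x + random.gauss(0.0, r_var ** 0.5)   # noisy measurement
    p += q_var                                     # predict: variance grows
    k = p / (p + r_var)                            # Kalman gain
    x_hat += k * (z - x_hat)                       # correct toward the measurement
    p *= (1 - k)                                   # posterior variance shrinks
```

    A "modified" filter in this setting would typically alter the gain or the noise models; the predict/correct skeleton above stays the same.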

  11. A channel estimation scheme for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen

    2017-08-01

    To address the trade-off between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) is presented. This approach transforms the MIMO-OFDM channel estimation problem into a set of simple single-input single-output OFDM (SISO-OFDM) channel estimation problems, so there is no need for a large matrix pseudo-inverse, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the proposed method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and is nearly optimal.

  12. Bandwidth efficient channel estimation method for airborne hyperspectral data transmission in sparse doubly selective communication channels

    NASA Astrophysics Data System (ADS)

    Vahidi, Vahid; Saberinia, Ebrahim; Regentova, Emma E.

    2017-10-01

    A channel estimation (CE) method based on compressed sensing (CS) is proposed to estimate the sparse and doubly selective (DS) channel for hyperspectral image transmission from unmanned aircraft vehicles to ground stations. The proposed method contains three steps: (1) an a priori estimate of the channel by orthogonal matching pursuit (OMP), (2) calculation of the linear minimum mean square error (LMMSE) estimate of the received pilots given the estimated channel, and (3) estimation of the complex amplitudes and Doppler shifts of the channel from the enhanced received pilot data by applying a second round of a CS algorithm. The proposed method is named DS-LMMSE-OMP, and its performance is evaluated by simulating transmission of AVIRIS hyperspectral data over the communication channel and assessing their fidelity for automated analysis after demodulation. The performance of the DS-LMMSE-OMP approach is compared with that of two other state-of-the-art CE methods. The simulation results show up to an 8-dB gain in bit error rate performance and a 50% improvement in hyperspectral image classification accuracy.
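
    Step (1), the OMP stage, can be sketched in a few lines for a synthetic sparse channel (random Gaussian pilot matrix and invented tap positions; this is not the authors' DS-LMMSE-OMP pipeline).

```python
import numpy as np

rng = np.random.default_rng(7)
L, M, K = 40, 30, 3                       # channel length, pilot measurements, sparsity
A = rng.standard_normal((M, L)) / np.sqrt(M)   # pilot/measurement matrix (assumed Gaussian)
h = np.zeros(L)
h[[5, 12, 30]] = [1.2, -0.8, 0.5]         # sparse channel taps (invented)
y = A @ h + 0.01 * rng.standard_normal(M)

# Orthogonal matching pursuit: greedily add the column most correlated with
# the residual, then re-fit by least squares on the support chosen so far.
support, res = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.T @ res))))
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    res = y - A[:, support] @ sol
h_hat = np.zeros(L)
h_hat[support] = sol
```

    Because the channel is sparse, far fewer pilot measurements than channel coefficients suffice, which is the bandwidth saving the title refers to.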

  13. Pilot-Assisted Channel Estimation for Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, a frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion to a block of OFDM symbols, which can improve the bit error rate (BER) performance in a severe frequency-selective fading channel. FDE requires an accurate estimate of the channel gain. The channel gain can be estimated by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation scheme suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.
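
    The pilot-removal step can be sketched per subcarrier: with unit-modulus pilots, multiplying the received pilot subcarriers by the conjugate pilot yields a raw channel estimate, optionally denoised by keeping only the leading delay taps. The truncation assumes a short channel and is an illustration, not the proposed scheme.

```python
import numpy as np

rng = np.random.default_rng(8)
Nc, Lh = 64, 4                        # subcarriers and (assumed) channel length
g = (rng.standard_normal(Lh) + 1j * rng.standard_normal(Lh)) / np.sqrt(2 * Lh)
H = np.fft.fft(g, Nc)                 # true per-subcarrier channel gains

P = np.exp(2j * np.pi * rng.integers(0, 4, Nc) / 4)   # unit-modulus QPSK pilots
noise = 0.1 * (rng.standard_normal(Nc) + 1j * rng.standard_normal(Nc)) / np.sqrt(2)
R = H * P + noise                     # received pilot subcarriers

H_raw = R * np.conj(P)                # remove pilot modulation (|P| = 1)
# Optional denoising: keep only the first Lh delay taps (assumes a short channel)
gt = np.fft.ifft(H_raw)
gt[Lh:] = 0
H_hat = np.fft.fft(gt)

mse_raw = float(np.mean(np.abs(H_raw - H)**2))
mse_den = float(np.mean(np.abs(H_hat - H)**2))
```

    Truncating to the first Lh taps discards only noise when the true impulse response fits inside them, so the denoised estimate cannot be worse in this idealized setting.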

  14. Performance of concatenated Reed-Solomon trellis-coded modulation over Rician fading channels

    NASA Technical Reports Server (NTRS)

    Moher, Michael L.; Lodge, John H.

    1990-01-01

    A concatenated coding scheme for providing very reliable data over mobile-satellite channels at power levels similar to those used for vocoded speech is described. The outer code is a shortened Reed-Solomon code which provides error detection as well as error correction capabilities. The inner code is a 1-D 8-state trellis code applied independently to both the inphase and quadrature channels. To achieve the full error correction potential of this inner code, the code symbols are multiplexed with a pilot sequence which is used to provide dynamic channel estimation and coherent detection. The implementation structure of this scheme is discussed and its performance is estimated.

  15. A method for estimating mean and low flows of streams in national forests of Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.

    1985-01-01

    Equations were developed for estimating mean annual discharge, 80-percent exceedance discharge, and 95-percent exceedance discharge for streams on national forest lands in Montana. The equations for mean annual discharge used active-channel width, drainage area, and mean annual precipitation as independent variables, with active-channel width being most significant. The equations for 80-percent exceedance discharge and 95-percent exceedance discharge used only active-channel width as an independent variable. The standard error of estimate for the best equation for estimating mean annual discharge was 27 percent. The standard errors of estimate for the equations were 67 percent for estimating 80-percent exceedance discharge and 75 percent for estimating 95-percent exceedance discharge. (USGS)

  16. Finite-error metrological bounds on multiparameter Hamiltonian estimation

    NASA Astrophysics Data System (ADS)

    Kura, Naoto; Ueda, Masahito

    2018-01-01

    Estimation of multiple parameters in an unknown Hamiltonian is investigated. We present upper and lower bounds on the time required to complete the estimation within a prescribed error tolerance δ. The lower bound is given on the basis of the Cramér-Rao inequality, where the quantum Fisher information is bounded by the squared evolution time. The upper bound is obtained by an explicit construction of estimation procedures. By comparing the cases with different numbers of Hamiltonian channels, we also find that the few-channel procedure with adaptive feedback and the many-channel procedure with entanglement are equivalent in the sense that they require the same amount of time resource up to a constant factor.

  17. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.

  18. Semiblind channel estimation for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Sheng; Song, Jyu-Han

    2012-12-01

    This article proposes a semiblind channel estimation method for multiple-input multiple-output orthogonal frequency-division multiplexing systems based on circular precoding. Relying on the precoding scheme at the transmitters, the autocorrelation matrix of the received data induces a structure relating the outer product of the channel frequency response matrix and precoding coefficients. This structure makes it possible to extract information about channel product matrices, which can be used to form a Hermitian matrix whose positive eigenvalues and corresponding eigenvectors yield the channel impulse response matrix. This article also tests the resistance of the precoding design to finite-sample estimation errors, and explores the effects of the precoding scheme on channel equalization by performing pairwise error probability analysis. The proposed method is immune to channel zero locations, and is reasonably robust to channel order overestimation. The proposed method is applicable to the scenarios in which the number of transmitters exceeds that of the receivers. Simulation results demonstrate the performance of the proposed method and compare it with some existing methods.

  19. Mean annual runoff and peak flow estimates based on channel geometry of streams in southeastern Montana

    USGS Publications Warehouse

    Omang, R.J.; Parrett, Charles; Hull, J.A.

    1983-01-01

    Equations using channel-geometry measurements were developed for estimating mean annual runoff and peak flows of ungaged streams in southeastern Montana. Two separate sets of estimating equations were developed for determining mean annual runoff: one for perennial streams and one for ephemeral and intermittent streams. Data from 29 gaged sites on perennial streams and 21 gaged sites on ephemeral and intermittent streams were used in these analyses. Data from 78 gaged sites were used in the peak-flow analyses. Southeastern Montana was divided into three regions and separate multiple-regression equations for each region were developed that relate channel dimensions to peak discharge having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. Channel-geometry relations were developed using measurements of the active-channel width and bankfull width. Active-channel width and bankfull width were the most significant channel features for estimating mean annual runoff for all types of streams. Use of this method requires that onsite measurements be made of channel width. The standard error of estimate for predicting mean annual runoff ranged from about 38 to 79 percent. The standard error of estimate relating active-channel width or bankfull width to peak flow ranged from about 37 to 115 percent. (USGS)

  20. Monitoring inter-channel nonlinearity based on differential pilot

    NASA Astrophysics Data System (ADS)

    Wang, Wanli; Yang, Aiying; Guo, Peng; Lu, Yueming; Qiao, Yaojun

    2018-06-01

    We modify and simplify the inter-channel nonlinearity (NL) estimation method by using a differential pilot. Compared to previous works, the inter-channel NL estimation method we propose has much lower complexity and does not require modification of the transmitter. The performance of inter-channel NL monitoring at different launch powers is tested. For both QPSK and 16QAM systems with 9 channels, the estimation error of the inter-channel NL is lower than 1 dB when the total launch power exceeds 12 dBm after 1000 km of optical transmission. Finally, we compare our inter-channel NL estimation method with other methods.

  21. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set was collected during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. 
Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second, or less than 0.5% of a typical peak tidal discharge rate of 750 cubic meters per second.
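
The rating-and-filtering procedure can be sketched end to end with invented numbers: fit a linear index-velocity rating, convert to discharge with an assumed cross-section area, and low-pass with a crude 25-hour moving average standing in for a proper tidal filter.

```python
import math

# Hypothetical calibration pairs: (index velocity, ADCP mean channel velocity), m/s
cal = [(0.10, 0.13), (0.30, 0.36), (0.55, 0.62), (0.80, 0.91), (1.05, 1.18)]

# Least-squares linear rating: v_mean = a + b * v_index
n = len(cal)
sx = sum(x for x, _ in cal)
sy = sum(y for _, y in cal)
sxx = sum(x * x for x, _ in cal)
sxy = sum(x * y for x, y in cal)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

# Apply the rating to a synthetic tidal record and low-pass the discharge
area = 1200.0                                # cross-section area, m^2 (assumed constant)
v_index = [0.05 + 0.9 * math.sin(2 * math.pi * t / 12.42)   # hourly samples, M2 tide
           for t in range(240)]
q = [area * (a + b * v) for v in v_index]    # instantaneous discharge, m^3/s

# Crude 25-hour moving average standing in for a proper tidal filter
net = [sum(q[i:i + 25]) / 25 for i in range(len(q) - 25)]
```

The moving average nearly cancels the tidal oscillation and leaves the small net residual discharge, which is why calibration error in `a` and `b` dominates the final estimate.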

  22. Methods for estimating streamflow at mountain fronts in southern New Mexico

    USGS Publications Warehouse

    Waltemeyer, S.D.

    1994-01-01

    The infiltration of streamflow is potential recharge to alluvial-basin aquifers at or near mountain fronts in southern New Mexico. Data for 13 streamflow-gaging stations were used to determine a relation between mean annual streamflow and basin and climatic conditions. Regression analysis was used to develop an equation that can be used to estimate mean annual streamflow on the basis of drainage area and mean annual precipitation. The average standard error of estimate for this equation is 46 percent. Regression analysis also was used to develop an equation to estimate mean annual streamflow on the basis of active-channel width. Measurements of the width of active channels were determined for 6 of the 13 gaging stations. The average standard error of estimate for this relation is 29 percent. Streamflow estimates made using a regression equation based on channel geometry are considered more reliable than estimates made from an equation based on regional relations of basin and climatic conditions. The sample size used to develop these relations was small, however, and the reported standard error of estimate may not represent that of the entire population. Active-channel-width measurements were made at 23 ungaged sites along the Rio Grande upstream from Elephant Butte Reservoir. Data for additional sites would be needed for a more comprehensive assessment of mean annual streamflow in southern New Mexico.

  23. Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes

    NASA Astrophysics Data System (ADS)

    Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science and Technology Team

    We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECCs). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15,1,3]] shortened Reed-Muller code, which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis, which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.

  24. UWB channel estimation using new generating TR transceivers

    DOEpatents

    Nekoogar, Faranak [San Ramon, CA; Dowla, Farid U [Castro Valley, CA; Spiridon, Alex [Palo Alto, CA; Haugen, Peter C [Livermore, CA; Benzel, Dave M [Livermore, CA

    2011-06-28

    The present invention presents a simple and novel channel estimation scheme for UWB communication systems. As disclosed herein, the present invention maximizes the extraction of information by incorporating a new generation of transmitted-reference (TR) transceivers that utilize a single reference pulse(s) or a preamble of reference pulses to provide improved channel estimation while offering improved Bit Error Rate (BER) performance and higher data rates without diluting the transmitter power.

  25. Variable is better than invariable: sparse VSS-NLMS algorithms with application to adaptive MIMO channel estimation.

    PubMed

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced to the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are also derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, several selected simulation results are shown to prove that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods via mean square error (MSE) and bit error rate (BER) metrics.

  6. Variable Is Better Than Invariable: Sparse VSS-NLMS Algorithms with Application to Adaptive MIMO Channel Estimation

    PubMed Central

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step-size is a critical parameter controlling three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286

  7. Methods for estimating flood frequency in Montana based on data through water year 1998

    USGS Publications Warehouse

    Parrett, Charles; Johnson, Dave R.

    2004-01-01

    Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. 
The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
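
    The weighting step described above can be illustrated with a simplified inverse-variance scheme. The report's actual weights also use the cross correlation of residuals between methods, which is omitted here, and the discharge values and relative standard errors below are hypothetical:

```python
def inverse_variance_weight(estimates, std_errors):
    """Combine independent estimates by inverse-variance weighting.

    Simplified sketch: the USGS method also accounts for cross
    correlation of residuals between methods; that term is omitted.
    """
    weights = [1.0 / s**2 for s in std_errors]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# combine a hypothetical 100-year flood estimate (cfs) from three methods:
# basin/climatic regression, active-channel width, bankfull width
q100 = inverse_variance_weight([5200.0, 4600.0, 6100.0], [0.37, 0.57, 0.63])
```

    The combined value is pulled toward the basin-characteristics estimate because its standard error is smallest, mirroring why the weighted estimates in the report usually have smaller average standard errors of prediction than any single method.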

  8. Estimation of bathymetric depth and slope from data assimilation of swath altimetry into a hydrodynamic model

    NASA Astrophysics Data System (ADS)

    Durand, Michael; Andreadis, Konstantinos M.; Alsdorf, Douglas E.; Lettenmaier, Dennis P.; Moller, Delwyn; Wilson, Matthew

    2008-10-01

    The proposed Surface Water and Ocean Topography (SWOT) mission would provide measurements of water surface elevation (WSE) for characterization of storage change and discharge. River channel bathymetry, however, is a significant source of uncertainty in estimating discharge from WSE measurements. In this paper, we demonstrate an ensemble-based data assimilation (DA) methodology for estimating bathymetric depth and slope from WSE measurements and the LISFLOOD-FP hydrodynamic model. We performed two proof-of-concept experiments using synthetically generated SWOT measurements. The experiments demonstrated that bathymetric depth and slope can be estimated to within 50 cm and 3.0 microradians, respectively, using SWOT WSE measurements, within the context of our DA and modeling framework. We found that channel bathymetry estimation accuracy is relatively insensitive to SWOT measurement error, because uncertainty in LISFLOOD-FP inputs (such as channel roughness and upstream boundary conditions) is likely to be of greater magnitude than measurement error.
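
    The ensemble-based DA step can be sketched as a textbook ensemble Kalman filter analysis. This is a generic EnKF, not the paper's LISFLOOD-FP configuration; observation perturbation is omitted for brevity, and the toy numbers are assumptions:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_ensemble, obs_err_var):
    """One ensemble Kalman filter analysis step.

    ensemble:     (n_members, n_state)  state samples (e.g. bathymetric depths)
    obs:          (n_obs,)              observed water surface elevations
    obs_ensemble: (n_members, n_obs)    model-predicted WSE per member
    The gain is built from ensemble covariances; a sketch of the DA idea only.
    """
    X = ensemble - ensemble.mean(axis=0)
    Y = obs_ensemble - obs_ensemble.mean(axis=0)
    n = len(ensemble) - 1
    P_xy = X.T @ Y / n                               # state-obs covariance
    P_yy = Y.T @ Y / n + obs_err_var * np.eye(obs.size)
    K = P_xy @ np.linalg.inv(P_yy)                   # Kalman gain
    innov = obs - obs_ensemble                       # per-member innovation
    return ensemble + innov @ K.T

# toy check: the state IS the observed quantity, so the analysis mean moves
# most of the way from the prior mean toward the observation
rng = np.random.default_rng(1)
prior = rng.normal(10.0, 1.0, size=(500, 1))         # prior depth ensemble (m)
analysis = enkf_update(prior, np.array([12.0]), prior.copy(), obs_err_var=0.25)
```

    In the paper's setting the state would hold unknown bathymetry while the observation operator is the hydrodynamic model mapping bathymetry to WSE; the gain then spreads WSE innovations back onto depth and slope.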

  9. Capacity estimation and verification of quantum channels with arbitrarily correlated errors.

    PubMed

    Pfister, Corsin; Rol, M Adriaan; Mantri, Atul; Tomamichel, Marco; Wehner, Stephanie

    2018-01-02

    The central figure of merit for quantum memories and quantum communication devices is their capacity to store and transmit quantum information. Here, we present a protocol that estimates a lower bound on a channel's quantum capacity, even when there are arbitrarily correlated errors. One application of this protocol is to test the performance of quantum repeaters for transmitting quantum information. Our protocol is easy to implement and comes in two versions. The first estimates the one-shot quantum capacity by preparing and measuring in two different bases, where all involved qubits are used as test qubits. The second verifies on-the-fly that a channel's one-shot quantum capacity exceeds a minimal tolerated value while storing or communicating data. We discuss the performance using simple examples, such as the dephasing channel, for which our method is asymptotically optimal. Finally, we apply our method to a superconducting qubit in experiment.

  10. Channel estimation based on quantized MMP for FDD massive MIMO downlink

    NASA Astrophysics Data System (ADS)

    Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie

    2016-10-01

    In this paper, we consider channel estimation for massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods, including the LS, LMMSE, CoSaMP, and conventional MMP estimators.
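
    The greedy-recovery idea behind matching pursuit estimators can be sketched with plain orthogonal matching pursuit (OMP). The paper's quantized MMP keeps several candidate support sets per iteration instead of one, so this is only an illustrative baseline, and the dimensions below are assumptions:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a sparse x from y = A x.

    Plain OMP for illustration; MMP generalizes the greedy support
    selection to a tree of candidate supports.
    """
    residual, support = y.copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s           # project out chosen atoms
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# a sparse channel with 3 active taps, recovered from 60 random pilot
# measurements instead of the 100 a dense (LS) estimator would need
rng = np.random.default_rng(2)
h = np.zeros(100); h[[7, 30, 71]] = [1.2, -0.8, 0.5]
A = rng.standard_normal((60, 100)) / np.sqrt(60)     # measurement matrix
x_hat = omp(A, A @ h, sparsity=3)
```

    The pilot-overhead reduction in CS estimators comes exactly from this: far fewer measurements than channel taps suffice when the channel is sparse.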

  11. Atmospheric Compensation and Surface Temperature and Emissivity Retrieval with LWIR Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Pieper, Michael

    Accurate estimation or retrieval of surface emissivity spectra from long-wave infrared (LWIR) or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or space-borne sensors is necessary for many scientific and defense applications. The at-aperture radiance measured by the sensor is a function of the ground emissivity and temperature, modified by the atmosphere. Thus, the emissivity retrieval process consists of two interwoven steps: atmospheric compensation (AC) to retrieve the ground radiance from the measured at-aperture radiance, and temperature-emissivity separation (TES) to separate the temperature and emissivity from the ground radiance. In-scene AC (ISAC) algorithms use blackbody-like materials in the scene, which have a linear relationship between their ground radiances and at-aperture radiances determined by the atmospheric transmission and upwelling radiance. Using a clear reference channel to estimate the ground radiance, a linear fit of the at-aperture radiance to the estimated ground radiance is performed to estimate the atmospheric parameters. TES algorithms for hyperspectral imaging data assume that the emissivity spectra of solids are smooth compared to the sharp features added by the atmosphere. The ground temperature and emissivity are found by finding the temperature that provides the smoothest emissivity estimate. In this thesis we develop models to investigate the sensitivity of AC and TES to the basic assumptions enabling their performance. ISAC assumes that there are perfect blackbody pixels in a scene and that there is a clear channel, which is never the case. The developed ISAC model explains how the quality of blackbody-like pixels affects the shape of the atmospheric estimates and how the clear channel assumption affects their magnitude. Emissivity spectra of solids usually have some roughness. The TES model identifies four sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using an incorrect temperature, and the errors caused by sensor noise and wavelength calibration. The way these errors interact determines the overall TES performance. Since the AC and TES processes are interwoven, any errors in AC are transferred to TES and to the final temperature and emissivity estimates. Combining the two models, shape errors caused by the blackbody assumption are transferred to the emissivity estimates, while magnitude errors from the clear channel assumption are compensated by TES temperature-induced emissivity errors. The ability of the temperature-induced error to compensate for such atmospheric errors makes it difficult to determine the correct atmospheric parameters for a scene. With these models we are able to determine the expected quality of estimated emissivity spectra based on the quality of blackbody-like materials on the ground, the emissivity of the materials being searched for, and the properties of the sensor. The quality of material emissivity spectra is a key factor in determining detection performance for a material in a scene.

  12. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visual light communications

    NASA Astrophysics Data System (ADS)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and heavily deteriorate communication performance. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.

  13. Constrained motion estimation-based error resilient coding for HEVC

    NASA Astrophysics Data System (ADS)

    Guo, Weihan; Zhang, Yongfei; Li, Bo

    2018-04-01

    Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by the temporal motion vector. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB and on average 0.762 dB can be achieved, compared to the reference HEVC.

  14. Robust Transceiver Design for Multiuser MIMO Downlink with Channel Uncertainties

    NASA Astrophysics Data System (ADS)

    Miao, Wei; Li, Yunzhou; Chen, Xiang; Zhou, Shidong; Wang, Jing

    This letter addresses the problem of robust transceiver design for the multiuser multiple-input-multiple-output (MIMO) downlink where the channel state information at the base station (BS) is imperfect. A stochastic approach which minimizes the expectation of the total mean square error (MSE) of the downlink conditioned on the channel estimates under a total transmit power constraint is adopted. The iterative algorithm reported in [2] is improved to handle the proposed robust optimization problem. Simulation results show that our proposed robust scheme effectively reduces the performance loss due to channel uncertainties and outperforms existing methods, especially when the channel errors of the users are different.

  15. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems

    PubMed Central

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for ultra-wide band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals, and can mitigate the ranging error without recognition of the channel conditions. The entropy is used to measure the randomness of the received signals, and the FP is identified as the sample that is followed by a sharp entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by modeling the regression between characteristics of the received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726
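
    The entropy-based first-path idea can be illustrated with a sliding-window sketch. The histogram entropy, the fixed drop threshold, and the synthetic pulse train below are all assumptions for illustration; the paper's decision rule differs:

```python
import numpy as np

def window_entropy(x, n_bins=16):
    """Shannon entropy (bits) of the sample histogram in one window."""
    hist, _ = np.histogram(x, bins=n_bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def detect_first_path(signal, win=64, drop=0.6):
    """Flag the first window whose entropy falls well below the noise level.

    Noise-only windows have near-maximal entropy; a deterministic arriving
    signal concentrates the samples and makes the entropy drop sharply.
    """
    ents = [window_entropy(signal[i:i + win]) for i in range(0, len(signal) - win, win)]
    noise_level = ents[0]                    # assume the record starts with noise
    for k, h in enumerate(ents):
        if h < drop * noise_level:
            return k * win                   # sample index where the window starts
    return None

rng = np.random.default_rng(3)
x = rng.standard_normal(2048)                        # noise-only background
x[1024:1280] = 5.0 * np.sign(rng.standard_normal(256))  # strong two-level pulse train
toa = detect_first_path(x, win=64)
```

    The two-level pulse train occupies only two histogram bins, so its window entropy (about 1 bit) sits far below the Gaussian-noise level, which is what makes the entropy decrease a usable arrival marker.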

  16. On the Probability of Error and Stochastic Resonance in Discrete Memoryless Channels

    DTIC Science & Technology

    2013-12-01

    Information-Driven Doppler Shift Estimation and Compensation Methods for Underwater Wireless Sensor Networks", which is to analyze and develop... underwater wireless sensor networks. We formulated an analytic relationship that relates the average probability of error to the system parameters, the... thesis, we studied the performance of Discrete Memoryless Channels (DMC), arising in the context of cooperative underwater wireless sensor networks

  17. MIMO channel estimation and evaluation for airborne traffic surveillance in cellular networks

    NASA Astrophysics Data System (ADS)

    Vahidi, Vahid; Saberinia, Ebrahim

    2018-01-01

    A channel estimation (CE) procedure based on compressed sensing is proposed to estimate the multiple-input multiple-output sparse channel for traffic data transmission from drones to ground stations. The proposed procedure consists of an offline phase and a real-time phase. In the offline phase, a pilot arrangement method, which considers the interblock and block mutual coherence simultaneously, is proposed. The real-time phase contains three steps. In the first step, an a priori estimate of the channel is obtained by block orthogonal matching pursuit; that estimated channel is then used to calculate the linear minimum mean square error of the received pilots. Finally, block compressive sampling matching pursuit uses the enhanced received pilots to estimate the channel more accurately. The performance of the CE procedure is evaluated by simulating the transmission of traffic data through the communication channel and evaluating its fidelity for car detection after demodulation. Simulation results indicate that the proposed CE technique considerably enhances the performance of car detection in a traffic image.

  18. Reduced-rank technique for joint channel estimation in TD-SCDMA systems

    NASA Astrophysics Data System (ADS)

    Kamil Marzook, Ali; Ismail, Alyani; Mohd Ali, Borhanuddin; Sali, Adawati; Khatun, Sabira

    2013-02-01

    In time division-synchronous code division multiple access (TD-SCDMA) systems, increasing the system capacity by inserting the largest possible number of users in one time slot (TS) requires additional estimation processes to estimate the joint channel matrix for the whole system. The increase in the number of channel parameters due to the increase in the number of users in one TS directly affects the precision of the estimator's performance. This article presents a novel low-complexity channel estimation method, which relies on reducing the rank order of the total channel matrix H. The proposed method exploits the rank deficiency of H to reduce the number of parameters that characterise this matrix. The adopted reduced-rank technique is based on the truncated singular value decomposition algorithm. The algorithms for reduced-rank joint channel estimation (JCE) are derived and compared against traditional full-rank JCEs: least squares (LS, or Steiner) and enhanced (LS or MMSE) algorithms. Simulation results for the normalised mean square error showed the superiority of the reduced-rank estimators. In addition, the channel impulse responses found by the reduced-rank estimator for all active users offer considerable performance improvement over the conventional estimator along the channel window length.
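
    The truncated-SVD idea behind reduced-rank estimation can be sketched as a generic low-rank approximation (not the article's full JCE pipeline; the matrix sizes and noise level below are assumptions):

```python
import numpy as np

def reduced_rank(H, r):
    """Truncated-SVD rank-r approximation of a channel matrix H.

    Keeping only the r dominant singular triplets cuts the number of
    parameters from m*n to r*(m + n + 1), the idea behind reduced-rank JCE.
    """
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# a noisy observation of a rank-2 channel matrix is denoised by truncation:
# the noise living in the discarded singular directions is thrown away
rng = np.random.default_rng(4)
H_true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 12))
H_obs = H_true + 0.1 * rng.standard_normal((20, 12))
H_hat = reduced_rank(H_obs, 2)
```

    Because estimation noise is spread over all singular directions while the channel energy is concentrated in a few, the truncated estimate is closer to the true matrix than the raw observation.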

  19. Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics

    NASA Astrophysics Data System (ADS)

    Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane

    2014-10-01

    This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
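
    The polynomial-expansion idea can be illustrated with a plain Neumann-series approximation of A⁻¹b. PEACH estimators additionally optimize the polynomial coefficients for minimum MSE; the uniform coefficients and the toy matrix below are simplifications:

```python
import numpy as np

def poly_inverse_apply(A, b, degree, alpha=None):
    """Approximate A^{-1} b with a degree-L matrix polynomial (Neumann series).

    A must be Hermitian positive definite. Each term costs one matrix-vector
    product, avoiding the cubic cost of an explicit inverse; this is the
    complexity trade-off exploited by polynomial-expansion estimators.
    """
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(A, 2)   # series converges for alpha < 2/lambda_max
    x, term = alpha * b, alpha * b
    for _ in range(degree):
        term = term - alpha * (A @ term)     # apply (I - alpha*A) to the last term
        x = x + term                         # accumulate alpha * (I - alpha*A)^k b
    return x

rng = np.random.default_rng(5)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)                # well-conditioned covariance-like matrix
b = rng.standard_normal(50)
x_approx = poly_inverse_apply(A, b, degree=30)
```

    The residual after L terms shrinks like the spectral radius of (I - alpha*A) raised to the power L+1, which is why a modest fixed degree suffices for well-conditioned covariance matrices regardless of system size.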

  20. Some conservative estimates in quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2006-08-15

    A relationship is established between the security of the BB84 quantum key distribution protocol and the forward and converse coding theorems for quantum communication channels. The upper bound Q_c ≈ 11% on the bit error rate compatible with secure key distribution is determined by solving the transcendental equation H(Q_c) = C̄(ρ)/2, where ρ is the density matrix of the input ensemble, C̄(ρ) is the classical capacity of a noiseless quantum channel, and H(Q) is the capacity of a classical binary symmetric channel with error rate Q.
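
    Assuming a qubit input ensemble so that C̄(ρ) = 1 bit (an illustrative assumption), the transcendental equation reduces to h(Q_c) = 1/2 with h the binary entropy, and can be solved by bisection:

```python
from math import log2

def h2(q):
    """Binary entropy in bits."""
    return -q * log2(q) - (1 - q) * log2(1 - q)

def solve_qc(target=0.5, lo=1e-9, hi=0.5 - 1e-9):
    """Bisection solve of h2(Q_c) = target on (0, 1/2), where h2 is increasing."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if h2(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

qc = solve_qc()   # close to 0.11, matching the ~11% bound quoted above
```

    Bisection works here because h2 is strictly increasing on (0, 1/2), so the equation has a unique root in that interval.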

  1. An adaptive threshold detector and channel parameter estimator for deep space optical communications

    NASA Technical Reports Server (NTRS)

    Arabshahi, P.; Mukai, R.; Yan, T. -Y.

    2001-01-01

    This paper presents a method for optimal adaptive setting of pulse-position-modulation pulse detection thresholds, which minimizes the total probability of error for the dynamically fading optical free-space channel.

  2. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.

  3. Adaptive Quadrature Detection for Multicarrier Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Gyongyosi, Laszlo; Imre, Sandor

    2015-03-01

    We propose adaptive quadrature detection for multicarrier continuous-variable quantum key distribution (CVQKD). A multicarrier CVQKD scheme uses Gaussian subcarrier continuous variables to convey information and Gaussian sub-channels for transmission. The proposed multicarrier detection scheme dynamically adapts to the sub-channel conditions using statistics provided by our sub-channel estimation procedure. The sub-channel estimation phase determines the transmittance coefficients of the sub-channels; this information is then used in the adaptive quadrature decoding process. We define a technique called subcarrier spreading to estimate the transmittance conditions of the sub-channels with a theoretical error minimum in the presence of Gaussian noise. We introduce the terms of single and collective adaptive quadrature detection. We also extend the results to a multiuser multicarrier CVQKD scenario. We prove the achievable error probabilities and signal-to-noise ratios, and quantify the attributes of the framework. The adaptive detection scheme makes it possible to utilize the extra resources of multicarrier CVQKD and to maximize the amount of transmittable information. This work was partially supported by the GOP-1.1.1-11-2012-0092 (Secure quantum key distribution between two units on optical fiber network) project sponsored by the EU and European Structural Fund, and by the COST Action MP1006.

  4. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    NASA Astrophysics Data System (ADS)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to computing the bit error rate expression for multiuser chaos-based DS-CDMA systems is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, along with a simple RAKE receiver structure. Compared to other computation methods in the literature, this approach, based on the bit energy distribution, gives accurate results at low computational cost. Perfect estimation of the channel coefficients with the associated delays, and chaos synchronization, are assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.

  5. A simple algorithm for distance estimation without radar and stereo vision based on the bionic principle of bee eyes

    NASA Astrophysics Data System (ADS)

    Khamukhin, A. A.

    2017-02-01

    Simple navigation algorithms are needed for small autonomous unmanned aerial vehicles (UAVs). These algorithms can be implemented in a small microprocessor with low power consumption, which helps to reduce the weight of the UAV's computing equipment and to increase the flight range. The proposed algorithm uses only the number of opaque channels (ommatidia in bees) through which a target can be seen as an observer moves from location 1 to location 2 toward the target. The distance estimate is given relative to the distance between locations 1 and 2. A simple scheme of an appositional compound eye is proposed to develop the calculation formula. The distance estimation error analysis shows that the error decreases with an increase in the total number of opaque channels, up to a certain limit. An acceptable error of about 2% is achieved with an angle of view from 3 to 10° when the total number of opaque channels is 21600.
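
    The abstract does not give the formula, but a small-angle triangulation consistent with the described idea can be sketched as follows. The formula and the channel counts below are assumptions for illustration, not necessarily the paper's exact derivation:

```python
def distance_after_move(n1, n2, baseline=1.0):
    """Distance to the target after moving `baseline` toward it.

    Small-angle triangulation sketch: the target spans n1 channels
    (ommatidia-like) at location 1 and n2 > n1 at location 2; since the
    subtended angle scales inversely with distance, D2 = baseline * n1 / (n2 - n1),
    expressed in units of the distance between the two locations.
    """
    if n2 <= n1:
        raise ValueError("target must span more channels after approaching")
    return baseline * n1 / (n2 - n1)

# target spans 20 channels at location 1, then 24 after moving one
# baseline closer; the remaining distance is 5 baseline units
d2 = distance_after_move(20, 24)
```

    Consistency check: with d2 = 5 the distances before and after the move are 6 and 5 baseline units, and the channel counts 20 and 24 are indeed in the inverse ratio 5:6.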

  6. Prioritized packet video transmission over time-varying wireless channel using proactive FEC

    NASA Astrophysics Data System (ADS)

    Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay

    2000-12-01

    Quality of video transmitted over time-varying wireless channels relies heavily on the coordinated effort to cope with both channel and source variations dynamically. Given the priority of each source packet and the estimated channel condition, an adaptive protection scheme based on joint source-channel criteria is investigated via proactive forward error correction (FEC). With proactive FEC in Reed Solomon (RS)/Rate-compatible punctured convolutional (RCPC) codes, we study a practical algorithm to match the relative priority of source packets and instantaneous channel conditions. The channel condition is estimated to capture the long-term fading effect in terms of the averaged SNR over a preset window. Proactive protection is performed for each packet based on the joint source-channel criteria with special attention to the accuracy, time-scale match, and feedback delay of channel status estimation. The overall gain of the proposed protection mechanism is demonstrated in terms of the end-to-end wireless video performance.

  7. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    NASA Astrophysics Data System (ADS)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated; hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, the theoretical received sequence is composed from the training symbols. Next, the least-squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. An iterative approach is then applied to solve the least-squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e., the Cramer-Rao lower bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium- to high-SNR cases.

  8. Estimates of monthly streamflow characteristics at selected sites in the upper Missouri River basin, Montana, base period water years 1937-86

    USGS Publications Warehouse

    Parrett, Charles; Johnson, D.R.; Hull, J.A.

    1989-01-01

    Estimates of streamflow characteristics (monthly mean flow that is exceeded 90, 80, 50, and 20 percent of the time for all years of record and mean monthly flow) were made and are presented in tabular form for 312 sites in the Missouri River basin in Montana. Short-term gaged records were extended to the base period of water years 1937-86, and were used to estimate monthly streamflow characteristics at 100 sites. Data from 47 gaged sites were used in regression analysis relating the streamflow characteristics to basin characteristics and to active-channel width. The basin-characteristics equations, with standard errors of 35% to 97%, were used to estimate streamflow characteristics at 179 ungaged sites. The channel-width equations, with standard errors of 36% to 103%, were used to estimate characteristics at 138 ungaged sites. Streamflow measurements were correlated with concurrent streamflows at nearby gaged sites to estimate streamflow characteristics at 139 ungaged sites. In a test using 20 pairs of gages, the standard errors ranged from 31% to 111%. At 139 ungaged sites, the estimates from two or more of the methods were weighted and combined in accordance with the variance of individual methods. When estimates from three methods were combined, the standard errors ranged from 24% to 63%. A drainage-area-ratio adjustment method was used to estimate monthly streamflow characteristics at seven ungaged sites. The reliability of the drainage-area-ratio adjustment method was estimated to be about equal to that of the basin-characteristics method. The estimates were checked for reliability. Estimates of monthly streamflow characteristics from gaged records were considered to be most reliable, and estimates at sites with actual flow record from 1937-86 were considered to be completely reliable (zero error). Weighted-average estimates were considered to be the most reliable estimates made at ungaged sites. (USGS)
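    Weighting estimates "in accordance with the variance of individual methods," as above, is standard inverse-variance combination. A minimal sketch (the function name and numbers are hypothetical, not from the report):

```python
import numpy as np

def combine_estimates(estimates, variances):
    """Inverse-variance-weighted combination of independent estimates.
    Weights are proportional to 1/variance, so more reliable methods
    dominate; the combined variance is smaller than any single method's."""
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    w = w / w.sum()
    combined = float(np.dot(w, np.asarray(estimates, dtype=float)))
    combined_var = 1.0 / np.sum(1.0 / v)
    return combined, combined_var

# Hypothetical example: two methods estimate a monthly mean flow
val, var = combine_estimates([100.0, 120.0], [25.0, 100.0])
```

    The more precise estimate (variance 25) gets four times the weight of the less precise one, and the combined variance (20) is below both inputs, mirroring how the report's three-method combinations achieved lower standard errors than any single method.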

  9. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    NASA Astrophysics Data System (ADS)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.

  10. A method for estimating spatially variable seepage and hydraulic conductivity in channels with very mild slopes

    USGS Publications Warehouse

    Shanafield, Margaret; Niswonger, Richard G.; Prudic, David E.; Pohll, Greg; Susfalk, Richard; Panday, Sorab

    2014-01-01

    Infiltration along ephemeral channels plays an important role in groundwater recharge in arid regions. A model is presented for estimating spatial variability of seepage due to streambed heterogeneity along channels based on measurements of streamflow-front velocities in initially dry channels. The diffusion-wave approximation to the Saint-Venant equations, coupled with Philip's equation for infiltration, is connected to the groundwater model MODFLOW and is calibrated by adjusting the saturated hydraulic conductivity of the channel bed. The model is applied to portions of two large water delivery canals, which serve as proxies for natural ephemeral streams. Estimated seepage rates compare well with previously published values. Possible sources of error stem from uncertainty in Manning's roughness coefficients, soil hydraulic properties and channel geometry. Model performance would be most improved through more frequent longitudinal estimates of channel geometry and thalweg elevation, and with measurements of stream stage over time to constrain wave timing and shape. This model is a potentially valuable tool for estimating spatial variability in longitudinal seepage along intermittent and ephemeral channels over a wide range of bed slopes and the influence of seepage rates on groundwater levels.
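    The infiltration term coupled to the routing model above is Philip's equation. A minimal sketch of its two-term form, f(t) = S/(2√t) + K, is given below; the coupling to the diffusion-wave routing and to MODFLOW is not shown, and the parameter names are hypothetical.

```python
import numpy as np

def philip_infiltration_rate(t, sorptivity, k_sat):
    """Philip's two-term infiltration model: rate f(t) = S/(2*sqrt(t)) + K,
    where S is sorptivity and K the saturated hydraulic conductivity."""
    t = np.asarray(t, dtype=float)
    return sorptivity / (2.0 * np.sqrt(t)) + k_sat

def philip_cumulative(t, sorptivity, k_sat):
    """Cumulative infiltration I(t) = S*sqrt(t) + K*t (integral of the rate)."""
    t = np.asarray(t, dtype=float)
    return sorptivity * np.sqrt(t) + k_sat * t

# Hypothetical units: t in hours, S in cm/h^0.5, K in cm/h
rate = philip_infiltration_rate(4.0, 1.0, 0.5)   # infiltration rate at t = 4 h
cum = philip_cumulative(4.0, 1.0, 0.5)           # cumulative depth at t = 4 h
```

    At late times the rate tends to K, which is why calibrating the saturated hydraulic conductivity of the channel bed, as the paper does, controls the long-term seepage behaviour.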

  11. Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel

    PubMed Central

    Akbari, Mohsen; Manesh, Mohsen Riahi

    2014-01-01

    In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using maximal ratio combining (MRC), provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which degrades system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary algorithms, namely particle swarm optimization (PSO) and the genetic algorithm (GA), for diversity combining of signals travelling across imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way as to maximize the SNR and minimize the bit error rate (BER). The results indicate that the proposed method eliminates the need for channel estimation and can outperform conventional diversity combining methods. PMID:25045725

  12. Reliable video transmission over fading channels via channel state estimation

    NASA Astrophysics Data System (ADS)

    Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay

    2000-04-01

    Transmission of continuous media such as video over time-varying wireless communication channels can benefit from the use of adaptation techniques in both source and channel coding. An adaptive feedback-based wireless video transmission scheme is investigated in this research with special emphasis on feedback-based adaptation. To be more specific, an interactive adaptive transmission scheme is developed by letting the receiver estimate the channel state information and send it back to the transmitter. By utilizing the feedback information, the transmitter is capable of adapting the level of protection by changing the flexible RCPC (rate-compatible punctured convolutional) code ratio depending on the instantaneous channel condition. The wireless channel is modeled as a fading channel, where the long-term and short-term fading effects are modeled as the log-normal fading and the Rayleigh flat fading, respectively. Then, its state (mainly the long-term fading portion) is tracked and predicted by using an adaptive LMS (least mean squares) algorithm. By utilizing the delayed feedback on the channel condition, the adaptation performance of the proposed scheme is first evaluated in terms of the error probability and the throughput. It is then extended to incorporate variable size packets of ITU-T H.263+ video with the error resilience option. Finally, the end-to-end performance of wireless video transmission is compared against several non-adaptive protection schemes.
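    The LMS-based tracking of the channel state mentioned above can be sketched as a scalar adaptive filter. This is a minimal illustration with hypothetical names, not the paper's implementation: a single channel gain is adapted so that the predicted pilot observation follows the received one.

```python
import numpy as np

def lms_track(observations, pilots, mu=0.05):
    """Scalar LMS channel tracker: adapts a gain estimate h so that h*pilot
    follows each observation; returns the trajectory of estimates."""
    h = 0.0 + 0.0j
    history = []
    for y, p in zip(observations, pilots):
        e = y - h * p                 # prediction error
        h = h + mu * e * np.conj(p)   # LMS update step
        history.append(h)
    return np.array(history)

# Synthetic check: constant channel gain 0.7, unit pilots
pilots = np.ones(300, dtype=complex)
obs = 0.7 * pilots
hist = lms_track(obs, pilots)
```

    For a static channel the estimate converges geometrically to the true gain; for a slowly fading channel (the long-term log-normal component in the paper), the same update tracks the drift, and the delayed estimate is what gets fed back to the transmitter.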

  13. Equations for estimating Clark Unit-hydrograph parameters for small rural watersheds in Illinois

    USGS Publications Warehouse

    Straub, Timothy D.; Melching, Charles S.; Kocher, Kyle E.

    2000-01-01

    Simulation of the measured discharge hydrographs for the verification storms utilizing TC and R obtained from the estimation equations yielded good results. The error in peak discharge for 21 of the 29 verification storms was less than 25 percent, and the error in time-to-peak discharge for 18 of the 29 verification storms also was less than 25 percent. Therefore, applying the estimation equations to determine TC and R for design-storm simulation may result in reliable design hydrographs, as long as the physical characteristics of the watersheds under consideration are within the range of those characteristics for the watersheds in this study [area: 0.02-2.3 mi2, main-channel length: 0.17-3.4 miles, main-channel slope: 10.5-229 feet per mile, and insignificant percentage of impervious cover].

  14. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill.
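    The Taylor-series analysis above is first-order error propagation: the variance of the computed discharge is the quadratic form g^T C g, where g holds the sensitivities of discharge to each input and C is the covariance matrix of the observational errors. A minimal sketch with hypothetical values:

```python
import numpy as np

def propagated_variance(grad, cov):
    """First-order (Taylor-series) error propagation:
    Var(Q) ~= g^T C g, where g holds the sensitivities dQ/dx_i at the
    observed values and C is the covariance of the observational errors."""
    g = np.asarray(grad, dtype=float)
    C = np.asarray(cov, dtype=float)
    return float(g @ C @ g)

# Two correlated error sources with unit sensitivities (hypothetical numbers)
var_q = propagated_variance([1.0, 1.0], [[1.0, 0.5], [0.5, 1.0]])
```

    Note that the positive correlation term inflates the result (3.0) beyond the sum of the individual variances (2.0), which is why the paper's error variance is a weighted sum of covariances rather than of variances alone.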

  15. An enhanced multi-channel bacterial foraging optimization algorithm for MIMO communication system

    NASA Astrophysics Data System (ADS)

    Palanimuthu, Senthilkumar Jayalakshmi; Muthial, Chandrasekaran

    2017-04-01

    Channel estimation and optimisation are the main challenging tasks in Multiple-Input Multiple-Output (MIMO) wireless communication systems. In this work, a Multi-Channel Bacterial Foraging Optimization Algorithm approach is proposed for the selection of an antenna in a transmission area. The main advantage of this method is that it effectively reduces the loss of bandwidth during data transmission. Here, we consider channel estimation and optimisation for improving the transmission speed and reducing the unused bandwidth. Initially, the message is given to the input of the communication system. Then, the symbol mapping process is performed to convert the message into signals, which are encoded based on the space-time encoding technique. Here, the single signal is divided into multiple signals, which are given to the input of the space-time precoder, and multiplexing is applied for transmission channel estimation. In this paper, the Rayleigh channel, a Gaussian-distribution-type channel, is selected based on the bandwidth range. Then demultiplexing, the reverse of multiplexing, is applied to the obtained signal, splitting the combined signal arriving from the medium back into the original information signals. Furthermore, the long-term evolution technique is used for scheduling the time to channels during transmission, and the hidden Markov model technique is employed to predict the status information of the channel. Finally, the signals are decoded and the reconstructed signal is obtained after performing the scheduling process. The experimental results evaluate the performance of the proposed MIMO communication system in terms of bit error rate, mean squared error, average throughput, outage capacity and signal-to-interference-plus-noise ratio.

  16. Robust Rate Maximization for Heterogeneous Wireless Networks under Channel Uncertainties

    PubMed Central

    Xu, Yongjun; Hu, Yuan; Li, Guoquan

    2018-01-01

    Heterogeneous wireless networks are a promising technology for next-generation wireless communication networks, and have been shown to efficiently reduce the blind areas of mobile communication and improve network coverage compared with traditional wireless communication networks. In this paper, a robust power allocation problem for a two-tier heterogeneous wireless network is formulated based on orthogonal frequency-division multiplexing technology. Under the consideration of imperfect channel state information (CSI), the robust sum-rate maximization problem is built while avoiding severe cross-tier interference to the macrocell user and maintaining the minimum rate requirement of each femtocell user. To be practical, both the channel estimation errors from the femtocells to the macrocell and the link uncertainties of each femtocell user are simultaneously considered in terms of users' outage probabilities. The optimization problem is analyzed under no CSI feedback, using a cumulative distribution function, and under partial CSI, with a Gaussian distribution of the channel estimation error. The robust optimization problem is converted into a convex optimization problem which is solved by using Lagrange dual theory and a subgradient algorithm. Simulation results demonstrate the effectiveness of the proposed algorithm and show the impact of channel uncertainties on the system performance. PMID:29466315

  17. A burst-mode photon counting receiver with automatic channel estimation and bit rate detection

    NASA Astrophysics Data System (ADS)

    Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.

    2016-04-01

    We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.

  18. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, J.W.

    1988-01-01

    Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10-bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through comma-free code word assignments based on conditional probabilities of character occurrence.
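    The average-bits-per-character figure used above to compare codes can be computed directly from a Huffman code built over the file's character frequencies. This sketch illustrates that compression metric only, not the comma-free code construction or the error-propagation analysis; the function names are hypothetical.

```python
import heapq
from collections import Counter

def huffman_code_lengths(text):
    """Return {char: Huffman code length in bits} for the characters of `text`."""
    freq = Counter(text)
    if len(freq) == 1:                       # degenerate single-symbol alphabet
        return {next(iter(freq)): 1}
    # Heap entries: (subtree weight, tie-break id, {char: depth so far}).
    heap = [(w, i, {c: 0}) for i, (c, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)      # merge the two lightest subtrees
        w2, _, d2 = heapq.heappop(heap)
        merged = {c: depth + 1 for c, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def avg_bits_per_char(text):
    """Average code length in bits per character under the text's Huffman code."""
    lengths = huffman_code_lengths(text)
    freq = Counter(text)
    return sum(freq[c] * lengths[c] for c in freq) / len(text)
```

    Running the same routine over a narrative file with a 58-character alphabet yields the kind of bits-per-character figures the study compares across codes.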

  19. Estimating peak discharges, flood volumes, and hydrograph shapes of small ungaged urban streams in Ohio

    USGS Publications Warehouse

    Sherwood, J.M.

    1986-01-01

    Methods are presented for estimating peak discharges, flood volumes and hydrograph shapes of small (less than 5 sq mi) urban streams in Ohio. Examples of how to use the various regression equations and estimating techniques also are presented. Multiple-regression equations were developed for estimating peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The significant independent variables affecting peak discharge are drainage area, main-channel slope, average basin-elevation index, and basin-development factor. Standard errors of regression and prediction for the peak discharge equations range from +/-37% to +/-41%. An equation also was developed to estimate the flood volume of a given peak discharge. Peak discharge, drainage area, main-channel slope, and basin-development factor were found to be the significant independent variables affecting flood volumes for given peak discharges. The standard error of regression for the volume equation is +/-52%. A technique is described for estimating the shape of a runoff hydrograph by applying a specific peak discharge and the estimated lagtime to a dimensionless hydrograph. An equation for estimating the lagtime of a basin was developed. Two variables--main-channel length divided by the square root of the main-channel slope and basin-development factor--have a significant effect on basin lagtime. The standard error of regression for the lagtime equation is +/-48%. The data base for the study was established by collecting rainfall-runoff data at 30 basins distributed throughout several metropolitan areas of Ohio. Five to eight years of data were collected at a 5-min record interval. The USGS rainfall-runoff model A634 was calibrated for each site. The calibrated models were used in conjunction with long-term rainfall records to generate a long-term streamflow record for each site. Each annual peak-discharge record was fitted to a Log-Pearson Type III frequency curve. Multiple-regression techniques were then used to analyze the peak discharge data as a function of the basin characteristics of the 30 sites. (Author's abstract)

  20. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    PubMed

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
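    The generalised cross-correlation step underlying the method above can be sketched as follows. A PHAT-weighted prefilter is shown here as a common baseline weighting, standing in for the paper's modified ML prefilter with its regularisation factor; the function and variable names are hypothetical.

```python
import numpy as np

def gcc_delay(x, y, fs=1.0, weighting="phat"):
    """Generalised cross-correlation time-delay estimate between x and y.
    A positive result means y lags x. The PHAT weighting (phase transform)
    whitens the cross-spectrum; the paper's modified ML prefilter would
    replace this weighting term."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    cross = np.conj(X) * Y                        # cross-spectrum
    if weighting == "phat":
        cross = cross / (np.abs(cross) + 1e-12)   # keep phase only
    cc = np.fft.irfft(cross, n)                   # correlation vs. lag
    lag = int(np.argmax(cc))
    if lag > n // 2:                              # unwrap negative lags
        lag -= n
    return lag / fs

# Synthetic check: y is x delayed by exactly 5 samples
rng = np.random.default_rng(1)
x = rng.standard_normal(256)
y = np.concatenate([np.zeros(5), x])
delay = gcc_delay(x, y, fs=1.0)
```

    Given the time-difference estimate and the propagation speeds in each pipe segment, the leak position follows from the geometry, which is what the paper's closing formula for mixed pipe types handles.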

  1. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter

    PubMed Central

    Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-01-01

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154

  2. On Several Fundamental Problems of Optimization, Estimation, and Scheduling in Wireless Communications

    NASA Astrophysics Data System (ADS)

    Gao, Qian

    For both conventional radio-frequency and the comparatively recent optical wireless communication systems, extensive effort from academia has been made to improve network spectrum efficiency and/or reduce the error rate. To achieve these goals, many fundamental challenges such as power-efficient constellation design, nonlinear distortion mitigation, channel training design, and network scheduling need to be properly addressed. In this dissertation, novel schemes are proposed to deal with specific problems falling into the category of these challenges. Rigorous proofs and analyses are provided for each of our works, with fair comparisons against the corresponding peer works to clearly demonstrate the advantages. The first part of this dissertation considers a multi-carrier optical wireless system employing intensity modulation (IM) and direct detection (DD). A block-wise constellation design is presented, which treats the DC bias conventionally used solely for biasing purposes as an information basis. Our scheme, which we term MSM-JDCM, takes advantage of the compactness of sphere packing in a higher-dimensional space, and in turn power-efficient constellations are obtained by solving an advanced convex optimization problem. Besides the significant power gains, the MSM-JDCM has many other merits: it can mitigate nonlinear distortion by including a peak-to-average-power ratio (PAPR) constraint, minimize inter-symbol interference (ISI) caused by frequency-selective fading with a novel precoder designed and embedded, and further reduce the bit error rate (BER) when combined with an optimized labeling scheme. The second part addresses several optimization problems in a multi-color visible light communication system, including power-efficient constellation design, joint pre-equalizer and constellation design, and modeling of different structured channels with cross-talk.
    Our novel constellation design scheme, termed CSK-Advanced, is compared with the conventional decoupled system at the same spectrum efficiency to demonstrate its power efficiency. Crucial lighting requirements are included as optimization constraints. To control nonlinear distortion, the optical peak-to-average-power ratio (PAPR) of the LEDs can be individually constrained. With an SVD-based pre-equalizer designed and employed, our scheme can achieve lower BER than counterparts applying zero-forcing (ZF) or linear minimum-mean-squared-error (LMMSE) based post-equalizers. Besides, a binary switching algorithm (BSA) is applied to improve BER performance. The third part looks into a problem of two-phase channel estimation in a relayed wireless network. The channel estimates in every phase are obtained by the linear minimum mean squared error (LMMSE) method. An inaccurate estimate of the relay-to-destination (RtD) channel in phase 1 degrades the estimate of the source-to-relay (StR) channel in phase 2. We first derive a closed-form expression for the averaged Bayesian mean-square estimation error (ABMSE) for both phase estimates in terms of the lengths of the source and relay training slots, based on which an iterative searching algorithm is then proposed that optimally allocates training slots to the two phases such that estimation errors are balanced. The analysis shows how the ABMSE of the StD channel estimation varies with the lengths of the relay and source training slots, the relay amplification gain, and the channel prior information, respectively. The last part deals with a transmission scheduling problem in an uplink multiple-input multiple-output (MIMO) wireless network. Code division multiple access (CDMA) is assumed as the multiple access scheme, and pseudo-random codes are employed for different users. We consider a heavy-traffic scenario, in which each user always has packets to transmit in the scheduled time slots.
If the relay is scheduled for transmission together with users, then it operates in a full-duplex mode, where the packets previously collected from users are transmitted to the destination while new packets are being collected from users. A novel expression of throughput is first derived and then used to develop a scheduling algorithm to maximize the throughput. Our full-duplex scheduling is compared with a half-duplex scheduling, random access, and time division multiple access (TDMA), and simulation results illustrate its superiority. Throughput gains due to employment of both MIMO and CDMA are observed.

  3. Estimating surface reflectance from Himawari-8/AHI reflectance channels using 6SV

    NASA Astrophysics Data System (ADS)

    Lee, Kyeong-sang; Choi, Sungwon; Seo, Minji; Seong, Noh-hun; Han, Kyung-soo

    2017-04-01

    TOA (Top-Of-Atmosphere) reflectance observed by satellite is modified by atmospheric effects such as absorption and scattering by molecules and gases, so removing this atmospheric attenuation from the TOA reflectance is essential. Surface reflectance with atmospheric effects compensated is used as important input data for land products such as the Normalized Difference Vegetation Index (NDVI) and Land Surface Albedo (LSA). In this study, we use the Second Simulation of a Satellite Signal in the Solar Spectrum Vector (6SV) Radiative Transfer Model (RTM) for atmospheric correction and for estimating surface reflectance from the Himawari-8/Advanced Himawari Imager (AHI) reflectance channels. 6SV achieves high accuracy by performing the atmospheric correction over the satellite channel bandwidth in 2.5-nm steps, but it is too slow for operational use. We therefore use a look-up-table (LUT) approach to reduce the computation time and avoid the intensive calculation required for retrieving surface reflectance. The estimated surface reflectance data were compared with PROBA-V S1 data to evaluate the accuracy; the resulting Root Mean Square Error (RMSE) and bias were about 0.05 and -0.02, respectively. This error is considered to be due to differences in the angular components and in the Spectral Response Function (SRF) of each channel.

  4. Improving Pulse Rate Measurements during Random Motion Using a Wearable Multichannel Reflectance Photoplethysmograph.

    PubMed

    Warren, Kristen M; Harvey, Joshua R; Chon, Ki H; Mendelson, Yitzhak

    2016-03-07

    Photoplethysmographic (PPG) waveforms are used to acquire pulse rate (PR) measurements from pulsatile arterial blood volume. PPG waveforms are highly susceptible to motion artifacts (MA), limiting the implementation of PR measurements in mobile physiological monitoring devices. Previous studies have shown that multichannel photoplethysmograms can successfully acquire diverse signal information during simple, repetitive motion, leading to differences in motion tolerance across channels. In this paper, we investigate the performance of a custom-built multichannel forehead-mounted photoplethysmographic sensor under a variety of intense motion artifacts. We introduce an advanced multichannel template-matching algorithm that chooses the channel with the least motion artifact to calculate PR for each time instant. We show that for a wide variety of random motion, channels respond differently to motion artifacts, and the multichannel estimate outperforms single-channel estimates in terms of motion tolerance, signal quality, and PR errors. We have acquired 31 data sets consisting of PPG waveforms corrupted by random motion and show that the accuracy of PR measurements achieved was increased by up to 2.7 bpm when the multichannel-switching algorithm was compared to individual channels. The percentage of PR measurements with error ≤ 5 bpm during motion increased by 18.9% when the multichannel switching algorithm was compared to the mean PR from all channels. Moreover, our algorithm enables automatic selection of the best signal fidelity channel at each time point among the multichannel PPG data.
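    The per-window channel-switching idea above can be sketched as follows, with generic per-channel signal-quality scores standing in for the paper's template-matching metric; the function name and values are hypothetical.

```python
import numpy as np

def switch_channels(pr_estimates, quality):
    """Per-window channel switching: for each time window, keep the pulse-rate
    estimate from the channel with the highest signal-quality score."""
    pr = np.asarray(pr_estimates, dtype=float)   # shape: (channels, windows)
    q = np.asarray(quality, dtype=float)         # same shape as pr
    best = np.argmax(q, axis=0)                  # best channel per window
    return pr[best, np.arange(pr.shape[1])]

# Two channels over two windows; motion corrupts a different channel each time
pr_out = switch_channels([[70.0, 80.0], [72.0, 95.0]],
                         [[0.9, 0.2], [0.3, 0.8]])
```

    The output takes 70 bpm from channel 0 in the first window and 95 bpm from channel 1 in the second, which is the mechanism by which the multichannel estimate tolerates motion artifacts that hit channels unevenly.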

  5. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  6. Multi-Dielectric Brownian Dynamics and Design-Space-Exploration Studies of Permeation in Ion Channels.

    PubMed

    Siksik, May; Krishnamurthy, Vikram

    2017-09-01

    This paper proposes a multi-dielectric Brownian dynamics simulation framework for design-space-exploration (DSE) studies of ion-channel permeation. The goal of such DSE studies is to estimate the channel modeling parameters that minimize the mean-squared error between the simulated and expected "permeation characteristics." To address this computational challenge, we use a methodology based on statistical inference that exploits knowledge of the channel structure to prune the design space. We demonstrate the proposed framework and DSE methodology using a case study based on the KcsA ion channel, in which the design space is successfully reduced from a 6-D space to a 2-D space. Our results show that the channel dielectric map computed using the framework matches that computed directly using molecular dynamics to within an error of 7%. Finally, the scalability and resolution of the model are explored, and it is shown that the memory requirements for DSE remain constant as the number of parameters (degree of heterogeneity) increases.

  7. A hybrid frame concealment algorithm for H.264/AVC.

    PubMed

    Yan, Bo; Gharavi, Hamid

    2010-01-01

    In packet-based video transmission, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed to combat channel errors; however, most existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. To resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame; it provides more accurate estimates of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms existing frame recovery methods.

  8. Estimating Discharge in Low-Order Rivers With High-Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    King, Tyler V.; Neilson, Bethany T.; Rasmussen, Mitchell T.

    2018-02-01

    Remote sensing of river discharge promises to augment in situ gauging stations, but the majority of research in this field focuses on large rivers (>50 m wide). We present a method for estimating volumetric river discharge in low-order (<50 m wide) rivers from remotely sensed data by coupling high-resolution imagery with one-dimensional hydraulic modeling at so-called virtual gauging stations. These stations were placed at locations where the river contracts under low flows, exposing a substantial portion of the river bed. Topography of the exposed river bed was photogrammetrically extracted from high-resolution aerial imagery, while the geometry of the remaining inundated portion of the channel was approximated from adjacent bank topography and maximum-depth assumptions. The full channel bathymetry was used to create hydraulic models encompassing the virtual gauging stations. Discharge for each aerial survey was estimated with the hydraulic model by matching modeled and remotely sensed wetted widths. From these results, synthetic width-discharge rating curves were produced for each virtual gauging station. In situ observations were used to determine the accuracy of wetted widths extracted from imagery (mean error 0.36 m), extracted bathymetry (mean vertical RMSE 0.23 m), and discharge (mean percent error 7% with a standard deviation of 6%). Sensitivity analyses were conducted to determine the influence of inundated channel bathymetry and roughness parameters on estimated discharge. Comparison of the synthetic rating curves produced through these sensitivity analyses shows that reasonable ranges of parameter values result in mean percent errors in predicted discharge of 12%-27%.
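The width-discharge rating-curve step can be sketched numerically. The power-law form w = a·Q^b and all coefficients below are hypothetical stand-ins for the hydraulic-model output; the sketch only shows fitting a synthetic rating curve and inverting it to estimate discharge from a remotely sensed width:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "virtual gauging station": hydraulic geometry often follows a
# power law w = a * Q**b (coefficients here are hypothetical).
a_true, b_true = 12.0, 0.4
Q = np.linspace(2.0, 60.0, 25)                          # discharge, m^3/s
w = a_true * Q**b_true * rng.normal(1.0, 0.02, Q.size)  # widths with 2% noise

# Fit the rating curve in log-log space by ordinary least squares,
# then invert it to estimate discharge from an observed wetted width.
b_fit, log_a_fit = np.polyfit(np.log(Q), np.log(w), 1)
a_fit = np.exp(log_a_fit)

w_obs = 30.0                                            # width from imagery
Q_est = (w_obs / a_fit) ** (1.0 / b_fit)
print(f"fitted w = {a_fit:.2f} * Q^{b_fit:.3f}; Q({w_obs} m) ~ {Q_est:.1f} m^3/s")
```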

  9. Controls of channel morphology and sediment concentration on flow resistance in a large sand-bed river: A case study of the lower Yellow River

    NASA Astrophysics Data System (ADS)

    Ma, Yuanxu; Huang, He Qing

    2016-07-01

    Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed over the past century; however, a universal flow resistance model valid across river types has remained elusive. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. Several statistical measures are used to evaluate the goodness of fit of these models, including root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), the percentage of data with a relative error ≤ 50% and 25% (P50, P25), and the percentage of data with overestimated error (POE). Three model selection criteria are also employed to assess model predictability: the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can explain only a small proportion of the variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are included, the new model (7) performs better than the previous ones. The MRE of model (7) is generally < 20%, which is clearly better than that reported by previous studies. The model is validated using data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than in calibration. This probably results from a temporal shift in the dominant controls, caused by channel change under a varying flow regime.
With the advancements of earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can be used as an effective surrogate in estimation of flow resistance in the large sand-bed rivers like the lower Yellow River.

  10. Estimating discharge in rivers using remotely sensed hydraulic information

    USGS Publications Warehouse

    Bjerklie, D.M.; Moller, D.; Smith, L.C.; Dingman, S.L.

    2005-01-01

    A methodology to estimate in-bank river discharge exclusively from remotely sensed hydraulic data is developed. Water-surface width and maximum channel width measured from 26 aerial and digital orthophotos of 17 single-channel rivers and 41 SAR images of three braided rivers were coupled with channel slope data obtained from topographic maps to estimate the discharge. The standard error of the discharge estimates was within a factor of 1.5-2 (50-100%) of the observed, with the mean estimate accuracy within 10%. This level of accuracy was achieved using calibration functions developed from observed discharge. The calibration functions use reach-specific geomorphic variables, the maximum channel width and the channel slope, to predict a correction factor, and are related to channel type. Surface velocity and width information obtained from a single C-band image acquired by the Jet Propulsion Laboratory's (JPL's) AirSAR was also used to estimate discharge for a reach of the Missouri River. Without a calibration function, the estimate accuracy was +72% of the observed discharge, which is within the expected range of uncertainty for the method. However, using the observed velocity to calibrate the initial estimate improved the accuracy to within +10% of the observed. Remotely sensed discharge estimates with the accuracies reported in this paper could be useful for regional- or continental-scale hydrologic studies, or in regions where ground-based data are lacking. © 2004 Elsevier B.V. All rights reserved.

  11. Perceptually tuned low-bit-rate video codec for ATM networks

    NASA Astrophysics Data System (ADS)

    Chou, Chun-Hsien

    1996-02-01

    In order to maintain high visual quality in transmitting low bit-rate video signals over asynchronous transfer mode (ATM) networks, a layered coding scheme that incorporates the human visual system (HVS), motion compensation (MC), and conditional replenishment (CR) is presented in this paper. An empirical perceptual model is proposed to estimate the spatio-temporal just-noticeable distortion (STJND) profile for each frame, by which perceptually important (PI) prediction-error signals can be located. Because of the limited channel capacity of the base layer, only coded data of motion vectors, the PI signals within a small strip of the prediction-error image and, if there are remaining bits, the PI signals outside the strip are transmitted by the cells of the base-layer channel. The rest of the coded data are transmitted by the second-layer cells, which may be lost due to channel error or network congestion. Simulation results show that visual quality of the reconstructed CIF sequence is acceptable when the capacity of the base-layer channel is allocated 2 × 64 kbps and the cells of the second layer are all lost.

  12. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
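The syndrome-based least-squares correction step can be illustrated with a generic random frame standing in for an OFB (an assumption for brevity); the dimensions, error positions, and parity-check construction below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

K, N = 8, 12                      # message length, frame (codeword) length
F = rng.standard_normal((N, K))   # random tall frame (stand-in for an OFB)

# Parity-check matrix: rows span the left null space of F, so H @ F == 0.
U, _, _ = np.linalg.svd(F)
H = U[:, K:].T                    # (N-K) x N

x = rng.standard_normal(K)
y = F @ x

# Impulse-noise channel: two corrupted sample positions (hypothesized known
# here; in the paper they are found by hypothesis testing on the syndrome).
pos = [3, 9]
e = np.zeros(N)
e[pos] = [2.5, -1.7]
r = y + e

s = H @ r                                    # syndrome depends only on e
amp, *_ = np.linalg.lstsq(H[:, pos], s, rcond=None)  # least-squares amplitudes
r_corr = r.copy()
r_corr[pos] -= amp

x_hat = np.linalg.pinv(F) @ r_corr           # pseudoinverse receiver
print("max reconstruction error:", np.max(np.abs(x_hat - x)))
```

With the impulse positions correctly hypothesized, the syndrome equations are solved exactly and the pseudoinverse receiver recovers the message.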

  13. Accuracy of sea ice temperature derived from the advanced very high resolution radiometer

    NASA Technical Reports Server (NTRS)

    Yu, Y.; Rothrock, D. A.; Lindsay, R. W.

    1995-01-01

    The accuracy of Arctic sea ice surface temperatures T(sub s) derived from advanced very high resolution radiometer (AVHRR) thermal channels is evaluated in the cold seasons by comparing them with surface air temperatures T(sub air) from drifting buoys and ice stations. We use three different estimates of satellite surface temperature: a direct estimate from AVHRR channel 4 with a correction only for the snow surface emissivity but not for the atmosphere, a single-channel regression of T(sub s) with T(sub air), and Key and Haefliger's (1992) polar multichannel algorithm. We find no measurable bias in any of these estimates and few differences in their statistics. The similar performance of all three methods indicates that an atmospheric water vapor correction is not important for the dry winter atmosphere in the central Arctic, given the other sources of error that remain in both the satellite and the comparison data. A record of drifting station data shows winter air temperature to be 1.4 C warmer than the snow surface temperature. 'Correcting' air temperatures to skin temperature by subtracting this amount implies that satellite T(sub s) estimates are biased warm with respect to skin temperature by about this amount. A case study with low-flying aircraft data suggests that ice crystal precipitation can cause satellite estimates of T(sub s) to be several degrees warmer than radiometric measurements taken close to the surface, presumably below the ice crystal precipitation layer. An analysis in which errors are assumed to exist in all measurements, not just the satellite measurements, gives a standard deviation in the satellite estimates of 0.9 C, about half the standard deviation of 1.7 C estimated by assigning all of the variation between T(sub s) and T(sub air) to errors in T(sub s).

  14. The effects of non-stationary noise on electromagnetic response estimates

    NASA Astrophysics Data System (ADS)

    Banks, R. J.

    1998-11-01

    The noise in natural electromagnetic time series is typically non-stationary. Sections of data with high magnetic noise levels bias impedances and generate unreliable error estimates. Sections containing noise that is coherent between electric and magnetic channels also produce inappropriate impedances and errors. The answer is to compute response values for data sections which are as short as is feasible, i.e. which are compatible both with the chosen bandwidth and with the need to over-determine the least-squares estimation of the impedance and coherence. Only those values that are reliable are selected, and the best single measure of the reliability of Earth impedance estimates is their temporal invariance, which is tested by the coherence between the measured and predicted electric fields. Complex demodulation is the method used here to explore the temporal structure of electromagnetic fields in the period range 20-6000 s. For periods above 300 s, noisy sections are readily identified in time series of impedance values. The corresponding estimates deviate strongly from the normal value, are biased towards low impedance values, and are associated with low coherences. Plots of the impedance against coherence are particularly valuable diagnostic aids. For periods below 300 s, impedance bias increases systematically as the coherence falls, identifying input channel noise as the cause. By selecting sections with high coherence (equivalent to the impedance being invariant over the section) unbiased impedances and realistic errors can be determined. The scatter in impedance values among high-coherence sections is due to noise that is coherent between input and output channels, implying the presence of two or more systems for which a consistent response can be defined. 
Where the Earth and noise responses are significantly different, it may be possible to improve estimates of the former by rejecting sections that do not generate satisfactory values for all the response elements.

  15. Adaptive Pre-FFT Equalizer with High-Precision Channel Estimator for ISI Channels

    NASA Astrophysics Data System (ADS)

    Yoshida, Makoto

    We present an attractive approach for OFDM transmission using an adaptive pre-FFT equalizer, which can select the ICI reduction mode according to channel conditions, and a degenerated-inverse-matrix-based channel estimator (DIME), which uses a cyclic sinc-function matrix uniquely determined by the transmitted subcarriers. In addition to simulation results, the proposed system with an adaptive pre-FFT equalizer and DIME has been laboratory tested using a software defined radio (SDR)-based test bed. The simulation and experimental results demonstrate that the system, at a rate of more than 100 Mbps, can provide a bit error rate of less than 10^-3 over a fast multi-path fading channel with a moving velocity of more than 200 km/h and a delay spread of 1.9 µs (a maximum delay path of 7.3 µs) in the 5-GHz band.

  16. Compact disk error measurements

    NASA Technical Reports Server (NTRS)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.

  17. Modifying Bagnold's Sediment Transport Equation for Use in Watershed-Scale Channel Incision Models

    NASA Astrophysics Data System (ADS)

    Lammers, R. W.; Bledsoe, B. P.

    2016-12-01

    Destabilized stream channels may evolve through a sequence of stages, initiated by bed incision and followed by bank erosion and widening. Channel incision can be modeled using Exner-type mass balance equations, but model accuracy is limited by the accuracy and applicability of the selected sediment transport equation. Additionally, many sediment transport relationships require significant data inputs, limiting their usefulness in data-poor environments. Bagnold's empirical relationship for bedload transport is attractive because it is based on stream power, a relatively straightforward parameter to estimate from remote sensing data. However, the equation also depends on flow depth, which is more difficult to measure or estimate for entire drainage networks. We recast Bagnold's original sediment transport equation using specific discharge in place of flow depth. Using a large dataset of sediment transport rates from the literature, we show that this approach yields predictive accuracy similar to other stream power based relationships. We also explore the applicability of various critical stream power equations, including Bagnold's original, and support previous conclusions that these critical values can be predicted well based solely on sediment grain size. In addition, we propagate the error in these sediment transport equations through channel incision modeling to compare the errors associated with our equation to alternative formulations. This new version of Bagnold's bedload transport equation has utility for channel incision modeling at larger spatial scales using widely available remote sensing data.
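A minimal sketch of a recast stream-power transport relation, with specific stream power computed from specific discharge and slope rather than flow depth. The functional form and all coefficients are hypothetical placeholders, not the paper's calibrated equation:

```python
def bedload_flux(q, S, omega_c, a=0.01, b=1.5, rho=1000.0, g=9.81):
    """Bagnold-style bedload transport recast with specific discharge.

    omega = rho*g*q*S is specific stream power (W/m^2) computed from
    specific discharge q (m^2/s) and slope S, avoiding flow depth.
    Coefficients a, b and the critical power omega_c are hypothetical
    placeholders; the paper calibrates its own forms.
    """
    omega = rho * g * q * S
    excess = max(omega - omega_c, 0.0)
    return a * excess**b    # transport rate, arbitrary calibrated units

# Below the critical stream power, no transport is predicted.
print(bedload_flux(q=0.5, S=0.001, omega_c=10.0))   # omega = 4.905 -> 0.0
print(bedload_flux(q=5.0, S=0.002, omega_c=10.0))   # omega = 98.1  -> positive
```

The excess-power threshold reproduces the key behavior of critical stream power equations: transport switches on only once omega exceeds omega_c.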

  18. Sea Turtles Geolocalization in the Indian Ocean: An Over Sea Radio Channel framework integrating a trilateration technique

    NASA Astrophysics Data System (ADS)

    Guegan, Loic; Murad, Nour Mohammad; Bonhommeau, Sylvain

    2018-03-01

    This paper deals with modeling of the over-sea radio channel and aims at sea turtle localization off the coast of Reunion Island, and also on Europa Island in the Mozambique Channel. To model this radio channel, a framework measurement protocol is proposed. The measured over-sea channel is integrated into the localization algorithm to estimate the turtle trajectory based on a Power of Arrival (PoA) technique, compared against GPS localization. Moreover, a cross-correlation tool is used to characterize the over-sea propagation channel. First measurements of the radio channel on the Reunion Island coast, combined with the PoA algorithm, show an error of 18 m for 45% of the approximated points.
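The PoA-based localization step can be sketched as path-loss ranging followed by linearized trilateration; the path-loss parameters and anchor geometry below are assumptions, not the measured over-sea channel:

```python
import numpy as np

# Anchor (receiver) positions in metres and a true tag position.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([40.0, 65.0])

# Log-distance path-loss model (hypothetical parameters for the sea channel):
P0, n_exp = -40.0, 2.2          # power at 1 m (dBm) and path-loss exponent
d_true = np.linalg.norm(anchors - true_pos, axis=1)
rssi = P0 - 10.0 * n_exp * np.log10(d_true)        # noiseless PoA measurements

# Invert the path loss to ranges, then trilaterate by linearizing
# ||p - a_i||^2 = d_i^2 against the first anchor.
d = 10.0 ** ((P0 - rssi) / (10.0 * n_exp))
A = 2.0 * (anchors[1:] - anchors[0])
b = (d[0]**2 - d[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", est)   # = [40, 65] with noise-free measurements
```

With noisy measured powers, the same least-squares step yields position errors on the scale reported in the record.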

  19. A novel time of arrival estimation algorithm using an energy detector receiver in MMW systems

    NASA Astrophysics Data System (ADS)

    Liang, Xiaolin; Zhang, Hao; Lyu, Tingting; Xiao, Han; Gulliver, T. Aaron

    2017-12-01

    This paper presents a new time of arrival (TOA) estimation technique using an improved energy detection (ED) receiver based on empirical mode decomposition (EMD) in an impulse radio (IR) 60 GHz millimeter wave (MMW) system. A threshold is set by analyzing the characteristics of the received energy values with an extreme learning machine (ELM). The effects of the channel and the integration period on TOA estimation are evaluated. Several well-known ED-based TOA algorithms are compared with the proposed technique. It is shown that the ELM-based technique has lower TOA estimation error than the other approaches and provides robust performance with the IEEE 802.15.3c channel models.
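A toy version of the energy-detector TOA front end, with a fixed noise-floor multiple standing in for the ELM-learned threshold (an assumption for brevity); all signal parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 4000
toa_idx = 1500             # true first-path arrival (sample index)

# Received signal: noise everywhere, extra pulse energy after the TOA.
sig = rng.normal(0.0, 1.0, n)
sig[toa_idx:toa_idx + 200] += rng.normal(0.0, 4.0, 200)

# Energy detector: integrate |r|^2 over non-overlapping windows, then pick
# the first window whose energy crosses a threshold set from the noise floor.
W = 50                                         # integration window (samples)
energy = np.add.reduceat(sig**2, np.arange(0, n, W))
noise_floor = np.median(energy)
first = np.argmax(energy > 3.0 * noise_floor)  # index of first crossing
toa_est_idx = first * W
print("TOA error (samples):", toa_est_idx - toa_idx)
```

The integration window W controls the resolution/robustness trade-off the abstract evaluates: a longer window averages out noise but coarsens the TOA estimate.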

  20. Improved Atmospheric Soundings and Error Estimates from Analysis of AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2007-01-01

    The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007, generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control; and 3) the development of an accurate AIRS-only cloud clearing and retrieval system. Taken together, these theoretical improvements enabled a new methodology which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as was done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm^-1 to 750 cm^-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm^-1 to 2395 cm^-1 are used for temperature sounding purposes. The new methodology for improved error estimates and their use in quality control is described briefly, and results indicative of their accuracy are shown. Results are also shown from forecast impact experiments assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS-5 Data Assimilation System using different quality control thresholds.

  1. A Burst-Mode Photon-Counting Receiver with Automatic Channel Estimation and Bit Rate Detection

    DTIC Science & Technology

    2016-02-24

    communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode...obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver...receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB

  2. Using computational modeling of river flow with remotely sensed data to infer channel bathymetry

    USGS Publications Warehouse

    Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.

    2012-01-01

    As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum to develop equations that can be solved for depth given known values of vertically averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation, and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets that are developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: first, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even very small errors in water-surface elevation and velocity degrade the depth estimates and cannot be recovered.
Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial, but can be reduced with simple corrections.
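The basic inversion h = q/u from conservation of mass, and the surface-velocity bias just mentioned, can be sketched directly (the 0.85 surface-to-depth-average velocity ratio is a commonly assumed value, not taken from the paper):

```python
import numpy as np

# Synthetic reach: known discharge per unit width q (m^2/s) and a "measured"
# vertically averaged velocity field; mass conservation gives h = q / u.
q = 2.0
u_depth_avg = np.array([0.8, 1.0, 1.25, 1.6])       # m/s
h_true = q / u_depth_avg

# Remote sensing sees the water-SURFACE velocity, which is faster than the
# depth average; a common (assumed) correction is u_avg ~ 0.85 * u_surf.
u_surf = u_depth_avg / 0.85
h_naive = q / u_surf                 # treats surface velocity as depth-averaged
h_corrected = q / (0.85 * u_surf)    # the simple correction recovers the truth

print("naive bias (%):", 100 * (h_naive / h_true - 1))   # -15% everywhere
print("corrected max error:", np.max(np.abs(h_corrected - h_true)))
```

This reproduces the qualitative finding above: the surface-velocity substitution biases every depth by a fixed factor, but a simple multiplicative correction removes most of the error.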

  3. Performance Analysis of HF Band FB-MC-SS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hussein Moradi; Stephen Andrew Laraway; Behrouz Farhang-Boroujeny

    In a recent paper [1] the filter bank multicarrier spread spectrum (FB-MC-SS) waveform was proposed for wideband spread spectrum HF communications. A significant benefit of this waveform is robustness against narrowband and partial-band interference. Simulation results in [1] demonstrated good performance in a wideband HF channel over a wide range of conditions. In this paper we present a theoretical analysis of the bit error probability for this system. Our analysis tailors the results from [2], where BER performance was analyzed for maximum ratio combining systems accounting for correlation between subcarriers and channel estimation error. Equations are given for BER that closely match the simulated performance in most situations.

  4. Characterizing the SWOT discharge error budget on the Sacramento River, CA

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.

    2013-12-01

    The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (planned for 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty is due to two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficient data necessary for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope, and width, and mass and momentum constraints. The algorithm is evaluated using simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with instrument error. This experiment examines how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. In the experiment, the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the error variance is explained by uncertainties in bathymetry and roughness. Second, we show how errors in water-surface, slope, and width observations influence the accuracy of discharge estimates. Indeed, there is significant sensitivity to water-surface, slope, and width errors because of the sensitivity of the estimated bathymetry and roughness to measurement errors. Increasing the water-surface error above 10 cm leads to a correspondingly sharp increase in the errors in bathymetry and roughness.
Increasing the slope error above 1.5 cm/km leads to significant degradation due to direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The above two experiments are based on AirSWOT scenarios; in addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
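The Bayesian parameter-estimation idea can be sketched for a single unknown roughness coefficient using a wide-rectangular Manning relation and random-walk Metropolis sampling; the channel geometry, noise level, and prior are all illustrative assumptions, far simpler than the multi-parameter setup in the record:

```python
import numpy as np

rng = np.random.default_rng(4)

# Wide-rectangular normal-depth relation (Manning): h = (n*Q/(w*sqrt(S)))^(3/5).
def depth(n, Q, w=80.0, S=2e-4):
    return (n * Q / (w * np.sqrt(S))) ** 0.6

Q_obs = np.array([150.0, 300.0, 600.0])          # discharges (m^3/s)
n_true = 0.035                                   # unknown roughness
h_obs = depth(n_true, Q_obs) + rng.normal(0, 0.05, 3)   # noisy height obs.

def log_post(n):
    if not (0.01 < n < 0.15):                    # flat prior on plausible range
        return -np.inf
    resid = h_obs - depth(n, Q_obs)
    return -0.5 * np.sum((resid / 0.05) ** 2)

# Random-walk Metropolis over the single unknown parameter.
n_cur, lp_cur, chain = 0.05, log_post(0.05), []
for _ in range(20000):
    n_prop = n_cur + rng.normal(0, 0.003)
    lp_prop = log_post(n_prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:
        n_cur, lp_cur = n_prop, lp_prop
    chain.append(n_cur)

print("posterior mean n:", np.mean(chain[5000:]))
```

The spread of the post-burn-in chain is the roughness uncertainty; propagating it through the flow relation gives the bathymetry/roughness-dominated discharge error budget the study describes.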

  5. Delay-distribution-dependent H∞ state estimation for delayed neural networks with (x,v)-dependent noises and fading channels.

    PubMed

    Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E

    2016-12-01

    This paper deals with the H∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Uncertainty analysis technique for OMEGA Dante measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    May, M. J.; Widmann, K.; Sorce, C.

    2010-10-15

    The Dante is an 18-channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters, and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one-sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
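The Monte Carlo parameter variation technique can be sketched with a toy linear unfold (the real Dante unfold is nonlinear; the response factors and 5% per-channel error below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "unfold": flux as a fixed linear combination of the 18 channel voltages.
n_ch = 18
response = rng.uniform(0.5, 2.0, n_ch)          # per-channel calibration factors
v_meas = rng.uniform(1.0, 5.0, n_ch)            # measured voltages (arbitrary)

def unfold(v):
    return np.sum(response * v)

# Monte Carlo parameter variation: fold the calibration and unfold
# uncertainties into one 1-sigma Gaussian per channel, generate test
# voltage sets, and unfold each one.
sigma = 0.05 * v_meas                           # assumed 5% combined error
trials = np.array([unfold(v_meas + rng.normal(0.0, sigma))
                   for _ in range(1000)])

flux, flux_err = trials.mean(), trials.std()
print(f"flux = {flux:.2f} +/- {flux_err:.2f}")
```

The standard deviation of the unfolded trials is the error bar; because each trial passes through the full unfold, correlated effects of the channel errors are captured automatically.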

  8. Receiver IQ mismatch estimation in PDM CO-OFDM system using training symbol

    NASA Astrophysics Data System (ADS)

    Peng, Dandan; Ma, Xiurong; Yao, Xin; Zhang, Haoyuan

    2017-07-01

    Receiver in-phase/quadrature (IQ) mismatch is hard to mitigate at the receiver with conventional methods in polarization division multiplexed (PDM) coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. In this paper, a novel training symbol structure is proposed to estimate IQ mismatch and channel distortion. Combining this structure with the Gram-Schmidt orthogonalization procedure (GSOP) algorithm yields a lower bit error rate (BER). In addition, based on this structure, an estimation method is derived in the frequency domain that estimates IQ mismatch and channel distortion independently and markedly improves system performance. Numerical simulation shows that the two proposed methods outperform the reference method at 100 Gb/s after 480 km fiber transmission. The computational complexity is also analyzed.
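    The GSOP step referenced above is a standard orthogonalization of the received I and Q rails. A minimal sketch, assuming a simple gain/phase imbalance model and synthetic QPSK-like samples (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def gsop(i, q):
    """Gram-Schmidt orthogonalization procedure for receiver IQ correction."""
    i_o = i / np.sqrt(np.mean(i**2))                 # normalize the I rail
    q_p = q - (np.mean(i * q) / np.mean(i**2)) * i   # remove I leakage from Q
    q_o = q_p / np.sqrt(np.mean(q_p**2))             # normalize the cleaned Q rail
    return i_o, q_o

# Ideal symbols, then impose a hypothetical gain/phase mismatch at the receiver
n = 4096
i_tx = rng.choice([-1.0, 1.0], n)
q_tx = rng.choice([-1.0, 1.0], n)
eps, phi = 0.2, np.deg2rad(10)   # amplitude and phase imbalance (assumed)
i_rx = (1 + eps) * i_tx
q_rx = q_tx * np.cos(phi) - i_tx * np.sin(phi)

i_c, q_c = gsop(i_rx, q_rx)
print("residual I/Q correlation:", np.mean(i_c * q_c))
```

    By construction the corrected rails are orthogonal in the sample-mean sense and unit power, which is exactly what GSOP guarantees.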

  9. Demodulation Algorithms for the Ofdm Signals in the Time- and Frequency-Scattering Channels

    NASA Astrophysics Data System (ADS)

    Bochkov, G. N.; Gorokhov, K. V.; Kolobkov, A. V.

    2016-06-01

    We consider a method based on the generalized maximum-likelihood rule for solving the problem of reception of the signals with orthogonal frequency division multiplexing of their harmonic components (OFDM signals) in the time- and frequency-scattering channels. The coherent and incoherent demodulators effectively using the time scattering due to the fast fading of the signal are developed. Using computer simulation, we performed comparative analysis of the proposed algorithms and well-known signal-reception algorithms with equalizers. The proposed symbol-by-symbol detector with decision feedback and restriction of the number of searched variants is shown to have the best bit-error-rate performance. It is shown that under conditions of the limited accuracy of estimating the communication-channel parameters, the incoherent OFDM-signal detectors with differential phase-shift keying can ensure a better bit-error-rate performance compared with the coherent OFDM-signal detectors with absolute phase-shift keying.

  10. Improved Soundings and Error Estimates using AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2006-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in the minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case by case error estimates for retrieved geophysical parameters and for the channel by channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

  11. Bankfull discharge and channel characteristics of streams in New York State

    USGS Publications Warehouse

    Mulvihill, Christiane I.; Baldigo, Barry P.; Miller, Sarah J.; DeKoskie, Douglas; DuBois, Joel

    2009-01-01

    Equations that relate drainage area to bankfull discharge and channel characteristics (such as width, depth, and cross-sectional area) at gaged sites are needed to help define bankfull discharge and channel characteristics at ungaged sites and can be used in stream-restoration and protection projects, stream-channel classification, and channel assessments. These equations are intended to serve as a guide for streams in areas of similar hydrologic, climatic, and physiographic conditions. New York State contains eight hydrologic regions that were previously delineated on the basis of high-flow (flood) characteristics. This report seeks to increase understanding of the factors affecting the relations of bankfull discharge and channel characteristics to drainage-area size in New York State by providing an in-depth analysis of seven previously published regional bankfull-discharge and channel-characteristics curves. Stream-survey data and discharge records from 281 cross sections at 82 streamflow-gaging stations were used in regression analyses to relate drainage area to bankfull discharge and bankfull-channel width, depth, and cross-sectional area. The R2 and standard errors of estimate of each regional equation were compared to those of the statewide (pooled) model to determine whether regionalizing the data reduced model variability.
It was found that regional models typically yield less variable results than those obtained using pooled statewide equations, which indicates statistically significant regional differences in bankfull-discharge and channel-characteristics relations. Statistical analysis of bankfull-discharge relations found that the curves for regions 4 and 7 fell outside the 95-percent confidence interval bands of the statewide model and had intercepts that were significantly different (p≤0.10) from those of the other five hydrologic regions. Analysis of channel-characteristics relations found that the bankfull width, depth, and cross-sectional-area curves for region 3 were significantly different (p≤0.05) from those of the other six regions. It was hypothesized that some regional variability could be reduced by creating models for streams with similar physiographic and climatic characteristics. Available data on streamflow patterns and previous regional-curve research suggested that mean annual runoff, Rosgen stream type, and water-surface slope were the variables most likely to influence regional relations of bankfull discharge and channel characteristics to drainage-area size. Results showed that although all of these factors influenced regional relations, most stratified models had lower R2 values and higher standard errors of estimate than the regional models. The New York statewide (pooled) bankfull-discharge equation and the equations for regions 4 and 7 were compared with equations for four other regions in the Northeast to evaluate region-to-region differences and to assess whether individual curves produce more accurate results than a single model of the northeastern United States would. Results indicated that model slopes did not differ significantly, though intercepts did.
Comparison of bankfull-discharge estimates using different models showed that results could vary by as much as 100 percent depending on which model was used, indicating that regionalization improved model accuracy.
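    Regional curves of this kind are power laws fit as straight lines in log space. A minimal sketch with synthetic gaged sites (the coefficients, scatter, and site count are hypothetical, not the New York values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic gaged sites: drainage area in square miles (illustrative)
area = rng.uniform(5, 500, 60)
a_true, b_true = 40.0, 0.85   # assumed regional coefficients for Q = a * A^b
discharge = a_true * area**b_true * np.exp(rng.normal(0, 0.1, area.size))

# Fit log(Q) = log(a) + b * log(A) by ordinary least squares
x, y = np.log(area), np.log(discharge)
b_hat, log_a_hat = np.polyfit(x, y, 1)
a_hat = np.exp(log_a_hat)

# Standard error of estimate in log units (residual RMSE, 2 fitted parameters)
resid = y - (log_a_hat + b_hat * x)
see = np.sqrt(np.sum(resid**2) / (resid.size - 2))
print(f"Q ~ {a_hat:.1f} * A^{b_hat:.2f}, SEE = {see:.3f}")
```

    Comparing the SEE of such a fit for each region against the pooled statewide fit is the variability comparison the report describes.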

  12. Research Supporting Satellite Communications Technology

    NASA Technical Reports Server (NTRS)

    Horan, Stephen; Lyman, Raphael

    2005-01-01

    This report describes the second year of research effort under the grant Research Supporting Satellite Communications Technology. The research program consists of two major projects: Fault Tolerant Link Establishment and the design of an Auto-Configurable Receiver. The Fault Tolerant Link Establishment protocol is being developed to assist the designers of satellite clusters in managing inter-satellite communications. During this second year, the basic protocol design was validated with an extensive testing program. After this testing was completed, a channel error model was added to the protocol to permit the effects of channel errors to be measured. This error generation was used to test the effects of channel errors on Heartbeat and Token message passing. The C-language source code for the protocol modules was delivered to Goddard Space Flight Center for integration with the GSFC testbed. The need for a receiver autoconfiguration capability arises when a satellite-to-ground transmission is interrupted by an unexpected event: the satellite transponder may reset to an unknown state and begin transmitting in a new mode. During Year 2, we completed testing of these algorithms when noise-induced bit errors were introduced. We also developed and tested an algorithm for estimating the data rate, assuming an NRZ-formatted signal corrupted with additive white Gaussian noise, and we took initial steps in integrating both algorithms into the SDR test bed at GSFC.
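    A channel error model of the kind added to the protocol can be as simple as a binary symmetric channel that flips each bit independently with a given probability. A sketch (the message content and BER are illustrative, not from the report):

```python
import random

random.seed(3)

def apply_channel_errors(payload: bytes, ber: float) -> bytes:
    """Flip each bit independently with probability ber (binary symmetric channel)."""
    out = bytearray(payload)
    for i in range(len(out)):
        for bit in range(8):
            if random.random() < ber:
                out[i] ^= 1 << bit
    return bytes(out)

# Hypothetical protocol message passed through the error model
msg = b"HEARTBEAT seq=42"
corrupted = apply_channel_errors(msg, ber=0.02)
n_bit_errors = sum(bin(a ^ b).count("1") for a, b in zip(msg, corrupted))
print("bit errors introduced:", n_bit_errors)
```

    Injecting such a model between the Heartbeat/Token sender and receiver lets protocol-level effects of channel errors be measured without real hardware.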

  13. Post-processing procedure for industrial quantum key distribution systems

    NASA Astrophysics Data System (ADS)

    Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey

    2016-08-01

    We present algorithmic solutions aimed at the post-processing procedure for industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
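    The parameter-estimation step typically discloses a random sample of the sifted key to estimate the quantum bit error rate (QBER); the disclosed positions are then discarded. A minimal sketch, with the sampling fraction and channel error rate chosen purely for illustration:

```python
import random

random.seed(4)

def estimate_qber(alice, bob, sample_frac=0.1):
    """Disclose a random subset of sifted-key positions to estimate the QBER;
    the disclosed bits are removed from the key afterwards."""
    n = len(alice)
    sample = set(random.sample(range(n), max(1, int(n * sample_frac))))
    errors = sum(alice[i] != bob[i] for i in sample)
    qber = errors / len(sample)
    keep = [i for i in range(n) if i not in sample]
    return qber, [alice[i] for i in keep], [bob[i] for i in keep]

# Simulated sifted keys with a 3% channel error rate (illustrative)
n = 20000
alice_key = [random.randint(0, 1) for _ in range(n)]
bob_key = [b ^ (random.random() < 0.03) for b in alice_key]

qber, alice_rest, bob_rest = estimate_qber(alice_key, bob_key)
print(f"estimated QBER: {qber:.3f}")
```

    The estimated QBER then parameterizes both the error-correction code rate and the amount of privacy amplification required.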

  14. A new method of hybrid frequency hopping signals selection and blind parameter estimation

    NASA Astrophysics Data System (ADS)

    Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian

    2018-04-01

    Frequency hopping communication is widely used in military communications at home and abroad. In the case of single-channel reception, few methods can process multiple frequency-hopping signals both effectively and simultaneously. A method of hybrid FH signal sorting and blind parameter estimation is proposed. The method uses spectral transformation, spectral-entropy calculation, and basic PRI-transformation theory to sort and estimate the parameters of the components in the hybrid frequency-hopping signal. The simulation results show that this method can correctly sort the frequency-hopping component signals; at an SNR of 10 dB, the estimation error of the hop period is about 5% and the estimation error of the hop frequency is less than 1%. However, the performance of this method deteriorates seriously at low SNR.
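    Spectral entropy is one of the discriminants used above: white noise has a nearly flat power spectrum (normalized entropy near 1), while a hop dwell concentrates power in one bin and drives the entropy down. A sketch with synthetic signals (the sampling rate, tone frequency, and amplitudes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def spectral_entropy(x):
    """Normalized Shannon entropy of the power spectrum: near 1 for white
    noise, much lower for a narrowband (e.g., frequency-hopping) dwell."""
    psd = np.abs(np.fft.rfft(x))**2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(psd)))

n, fs = 4096, 8000.0
t = np.arange(n) / fs
noise = rng.normal(0, 1, n)
tone = np.sin(2 * np.pi * 1000 * t)   # one hop dwell: a single strong tone

en = spectral_entropy(noise)
et = spectral_entropy(tone * 5 + noise)
print("noise entropy:", en, " tone+noise entropy:", et)
```

    Thresholding this statistic over short analysis windows is one simple way to flag which windows contain a hop dwell before PRI-based sorting.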

  15. Activation of zero-error classical capacity in low-dimensional quantum systems

    NASA Astrophysics Data System (ADS)

    Park, Jeonghoon; Heo, Jun

    2018-06-01

    Channel capacities of quantum channels can be nonadditive even if one of two quantum channels has no channel capacity. We call this phenomenon activation of the channel capacity. In this paper, we show that when we use a quantum channel on a qubit system, only a noiseless qubit channel can generate the activation of the zero-error classical capacity. In particular, we show that the zero-error classical capacity of two quantum channels on qubit systems cannot be activated. Furthermore, we present a class of examples showing the activation of the zero-error classical capacity in low-dimensional systems.

  16. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.

    1986-01-01

    High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error-correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.

  17. Estimating suspended sediment concentrations in turbid coastal waters of the Santa Barbara Channel with SeaWiFS

    USGS Publications Warehouse

    Warrick, J.A.; Mertes, L.A.K.; Siegel, D.A.; Mackenzie, C.

    2004-01-01

    A technique is presented for estimating suspended sediment concentrations of turbid coastal waters with remotely sensed multi-spectral data. The method improves upon many standard techniques, since it incorporates analyses of multiple wavelength bands (four for Sea-viewing Wide Field of view Sensor (SeaWiFS)) and a nonlinear calibration, which produce highly accurate results (expected errors are approximately ±10%). Further, potential errors produced by erroneous atmospheric calibration in excessively turbid waters and influences of dissolved organic materials, chlorophyll pigments and atmospheric aerosols are limited by a dark pixel subtraction and removal of the violet to blue wavelength bands. Results are presented for the Santa Barbara Channel, California where suspended sediment concentrations ranged from 0–200+ mg l−1 (±20 mg l−1) immediately after large river runoff events. The largest plumes were observed 10–30 km off the coast and occurred immediately following large El Niño winter floods.

  18. Amazon floodplain channels regulate channel-floodplain water exchange

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Baugh, C.; Trigg, M.

    2017-12-01

    We examine the role of floodplain channels in regulating the exchange of water between the Amazon main stem and its extensive floodplains using a combination of field survey, remote sensing and numerical modelling for a 30,000 km2 area around the confluence of the Solimões and Purus rivers. From Landsat imagery we identified 1762 individual floodplain channel reaches with total length of nearly 9300 line km that range in width from 900 m to 20 m. Using a boat survey we measured width and depth along 509 line km of floodplain channels in 45 separate reaches and used these data to develop geomorphic relationships between width and depth. This enabled reconstruction of the depth of all other channels in the Landsat survey to an RMSE of 2.5 m. We then constructed a 2D hydraulic model of this site which included all 9300 km of floodplain channels as sub-grid scale features using a recently developed version of the LISFLOOD-FP code. The DEM for the model was derived from a version of the SRTM Digital Elevation Model that was processed to remove vegetation artefacts. The model was run at 270 m resolution over the entire 30,000 km2 domain for the period from 2002-2009. Simulations were run with and without floodplain channels to examine the impact of these features on floodplain flow dynamics and storage. Simulated floodplain channel hydraulics were validated against a combination of in-situ and remotely sensed data. Our results show that approximately 100 km3 of water is exchanged between the channel and the floodplain during a typical annual cycle, and 8.5±2.1% of mainstem flows is routed through the floodplain. The overall effect of floodplain channels was to increase the duration of connections between the Amazon River and the floodplain.
Inclusion of floodplain channels in the model increased inundation volume by 7.3% - 11.3% at high water, and decreased it at low water by 4.0% - 16.6%, with the range in these estimates due to potential errors in floodplain channel geometry. Inundation extent in the model did not increase at high water, but low water flood extents declined by 8.8% - 29.7% due to increased connectivity between the floodplain and the mainstem. The wide range of flow decrease estimates demonstrates that the results are sensitive to errors in the estimation of floodplain channel geometries, particularly bed elevations.

  19. Sensitivity Analysis for Atmospheric Infrared Sounder (AIRS) CO2 Retrieval

    NASA Technical Reports Server (NTRS)

    Gat, Ilana

    2012-01-01

    The Atmospheric Infrared Sounder (AIRS) is a thermal infrared sensor able to retrieve the daily atmospheric state globally for clear as well as partially cloudy field-of-views. The AIRS spectrometer has 2378 channels sensing from 15.4 micrometers to 3.7 micrometers, of which a small subset in the 15 micrometers region has been selected, to date, for CO2 retrieval. To improve upon the current retrieval method, we extended the retrieval calculations to include a prior estimate component and developed a channel ranking system to optimize the channels and number of channels used. The channel ranking system uses a mathematical formalism to rapidly process and assess the retrieval potential of large numbers of channels. Implementing this system, we identified a larger optimized subset of AIRS channels that can decrease retrieval errors and minimize the overall sensitivity to other interfering contributors, such as water vapor, ozone, and atmospheric temperature. This methodology selects channels globally by accounting for the latitudinal, longitudinal, and seasonal dependencies of the subset. The new methodology increases accuracy in AIRS CO2 as well as other retrievals and enables the extension of retrieved CO2 vertical profiles to altitudes ranging from the lower troposphere to the upper stratosphere. The extended retrieval method estimates CO2 vertical profiles using a maximum-likelihood estimation method. We use model data to demonstrate the beneficial impact of the extended retrieval method using the new channel ranking system on CO2 retrieval.

  20. Adaptive UEP and Packet Size Assignment for Scalable Video Transmission over Burst-Error Channels

    NASA Astrophysics Data System (ADS)

    Lee, Chen-Wei; Yang, Chu-Sing; Su, Yih-Ching

    2006-12-01

    This work proposes an adaptive unequal error protection (UEP) and packet size assignment scheme for scalable video transmission over a burst-error channel. An analytic model is developed to evaluate the impact of channel bit error rate on the quality of streaming scalable video. A video transmission scheme, which combines the adaptive assignment of packet size with unequal error protection to increase the end-to-end video quality, is proposed. Several distinct scalable video transmission schemes over burst-error channel have been compared, and the simulation results reveal that the proposed transmission schemes can react to varying channel conditions with less and smoother quality degradation.
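    Burst-error channels of the kind targeted above are commonly modelled with a two-state Gilbert-Elliott Markov chain: a "good" state with a low BER and a "bad" state with a high BER. A sketch with illustrative transition probabilities and per-state error rates (not taken from the paper):

```python
import random

random.seed(7)

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.2, ber_good=1e-4, ber_bad=0.1):
    """Two-state Gilbert-Elliott burst-error channel: returns a 0/1 error
    pattern, with errors clustered while the chain sits in the bad state."""
    errors, state = [], "G"
    for _ in range(n_bits):
        ber = ber_good if state == "G" else ber_bad
        errors.append(1 if random.random() < ber else 0)
        if state == "G" and random.random() < p_gb:
            state = "B"
        elif state == "B" and random.random() < p_bg:
            state = "G"
    return errors

pattern = gilbert_elliott(100_000)
avg_ber = sum(pattern) / len(pattern)
print("average BER:", avg_ber)
```

    Applying such a pattern to packetized video of different packet sizes is one way to reproduce the packet-size versus error-protection tradeoff the paper analyzes.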

  1. Distributed Compressive CSIT Estimation and Feedback for FDD Multi-User Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Rao, Xiongbin; Lau, Vincent K. N.

    2014-06-01

    To fully utilize the spatial multiplexing gains or array gains of massive MIMO, the channel state information must be obtained at the transmitter side (CSIT). However, conventional CSIT estimation approaches are not suitable for FDD massive MIMO systems because of the overwhelming training and feedback overhead. In this paper, we consider multi-user massive MIMO systems and deploy the compressive sensing (CS) technique to reduce the training as well as the feedback overhead in the CSIT estimation. Multi-user massive MIMO systems exhibit a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. As such, instead of naively applying the conventional CS to the CSIT estimation, we propose a distributed compressive CSIT estimation scheme so that the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly. A joint orthogonal matching pursuit recovery algorithm is proposed to perform the CSIT recovery, with the capability of exploiting the hidden joint sparsity in the user channel matrices. We analyze the obtained CSIT quality in terms of the normalized mean absolute error, and through the closed-form expressions, we obtain simple insights into how the joint channel sparsity can be exploited to improve the CSIT recovery performance.
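    Orthogonal matching pursuit, the core of the joint recovery algorithm above, can be sketched in its plain single-user form (the dimensions and sparsity level are illustrative; the paper's joint variant additionally shares the recovered support across users):

```python
import numpy as np

rng = np.random.default_rng(8)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

m, n, k = 40, 120, 4   # compressed measurements, ambient dimension, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(2, 0.5, k)
y = A @ x_true          # noiseless compressed observation

x_hat = omp(A, y, k)
nmse = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print("NMSE:", nmse)
```

    With noiseless measurements and well-conditioned random A, the greedy selection recovers the true support and the least-squares step nails the coefficients.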

  2. A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers

    NASA Astrophysics Data System (ADS)

    Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair

    We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with the multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the direction of arrivals, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by using path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.

  3. Application of a soft computing technique in predicting the percentage of shear force carried by walls in a rectangular channel with non-homogeneous roughness.

    PubMed

    Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein

    2016-01-01

    Two new soft computing models, namely genetic programming (GP) and genetic artificial algorithm (GAA) neural network (a combination of modified genetic algorithm and artificial neural network methods), were developed in order to predict the percentage of shear force in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the independent parameters' effectiveness in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program, determined to be the best model, and five equations obtained in prior research. The GP model, with the lowest error values (root-mean-square error (RMSE) of 0.0515), performed best compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.

  4. The Effects of Channel Curvature and Protrusion Height on Nucleate Boiling and the Critical Heat Flux of a Simulated Electronic Chip

    DTIC Science & Technology

    1994-05-01

    This report studies nucleate boiling heat transfer and the critical heat flux from simulated electronic chip heat sources that are flush with the flow channel wall, tabulating correlation constants and laminar-sublayer and buffer-layer thicknesses for the geometry of Mudawar and Maddox, who studied enhanced surfaces. The bias error was not estimated; however, the percentage of heat loss measured is comparable with that previously reported by Mudawar and Maddox.

  5. An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    PubMed Central

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
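    A toy version of the adaptive parity policy, giving more protection to subblocks near the RoI and on noisier channels, might look like the following; the weighting constants and saturation limits are invented for illustration and are not from the paper:

```python
def parity_bytes(dist_to_roi, channel_ber, base=2, max_parity=32):
    """Toy adaptive-UEP policy: parity length grows as the image subblock gets
    closer to the RoI and as the estimated channel BER worsens (all constants
    hypothetical)."""
    proximity_weight = 1.0 / (1 + dist_to_roi)    # 1.0 at the RoI itself
    noise_weight = min(channel_ber / 1e-3, 10.0)  # saturate for very bad channels
    parity = base + round(24 * proximity_weight * noise_weight)
    return min(parity, max_parity)

# Parity assignment for subblocks at increasing distance from the RoI
for d in (0, 1, 4):
    print(d, parity_bytes(d, channel_ber=2e-3))
```

    The receiver's iterative noise estimate would feed back into `channel_ber` on each round, which is what makes the protection adaptive rather than fixed.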

  6. Bounding the error on bottom estimation for multi-angle swath bathymetry sonar

    NASA Astrophysics Data System (ADS)

    Mullins, Geoff K.; Bird, John S.

    2005-04-01

    With the recent introduction of multi-angle swath bathymetry (MASB) sonar to the commercial marketplace (e.g., Benthos Inc., C3D sonar, 2004), additions must be made to the current sonar lexicon. The correct interpretation of measurements made with MASB sonar, which uses filled transducer arrays to compute angle-of-arrival (AOA) information from the backscattered signal, is essential not only for mapping, but for applications such as statistical bottom classification. In this paper it is shown that aside from uncorrelated channel-to-channel noise, there exists a tradeoff between effects that govern the error bounds on bottom estimation for surfaces having shallow grazing angle and surfaces distributed along a radial arc centered at the transducer. In the first case, as the bottom aligns with the radial direction to the receiver, footprint shift and shallow grazing angle effects dominate the uncertainty in physical bottom position (the surface aligns along a single AOA). Alternatively, if signal from a radial arc arrives, a single AOA is usually estimated (not necessarily at the average location of the surface). Through theoretical treatment, simulation, and field measurements, the aforementioned factors affecting MASB bottom mapping are examined. [Work supported by NSERC.]

  7. EPIC/DSCOVR's Oxygen Absorption Channels: A Cloud Profiling Information Content Analysis

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; Merlin, G.; Labonnote, L. C.; Cornet, C.; Dubuisson, P.; Ferlay, N.; Parol, F.; Riedi, J.; Yang, Y.

    2016-12-01

    EPIC/DSCOVR has several spectral channels dedicated to cloud characterization, most notably the O2 A- and B-bands. Differential optical absorption spectroscopy (DOAS) ratios of in-band and reference channels are less prone to calibration error than the 4 individual signals. Using these ratios, we have replicated for mono-directional (quasi-backscattering) EPIC observations the recent cloud information content analysis by Merlin et al. (AMT-D,8:12709-12758,2015) that was focused on A-band-only but multi-angle observations by POLDER in the past, by AirMSPI in the present, and by 3MI and MAIA in the future. The methodology is based on extensive forward 1D radiative transfer (RT) computations using the ARTDECO model that implements a k-distribution technique for the absorbing (in-band) channels. These synthetic signals are combined into a Bayesian Rodgers-type framework for estimating posterior uncertainty on retrieved quantities. Recall that this formalism calls explicitly for: (1) estimates of instrument error, and (2) prior uncertainty on the retrieved quantities, to which we add (3) reasonable estimates of uncertainty in the non- or otherwise-retrieved properties. Wide ranges of cloud top heights (CTHs) and cloud geometrical thicknesses (CGTs) are examined for a representative selection of cloud optical thicknesses (COTs), solar angles, and surface reflectances. We found that CTH should be reliably retrieved from EPIC data under most circumstances as long as COT can be inferred from non-absorbing channels, and the bias from in-cloud absorption is removed. However, CGT will be hard to determine unless CTH is constrained by independent means. EPIC has several UV channels that could be brought to bear. These findings conflict with those of Yang et al. (JQSRT,122:141-149,2013), so we also revisit that more preliminary study, which did not account for a realistic level of residual instrument noise in the DOAS ratios.
In conclusion, we believe that the present information content analysis will inform the EPIC/DSCOVR Level 2 algorithm development team about what cloud properties to target using the A/B-band channels, depending on the availability of other cloud information.
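    The Rodgers-type posterior uncertainty referenced above follows a closed form: S_hat = (K^T Se^-1 K + Sa^-1)^-1, where K is the Jacobian of the measurements with respect to the state, Se the instrument-noise covariance, and Sa the prior covariance. A numerical sketch with a hypothetical 2x2 Jacobian and covariances (not the EPIC values):

```python
import numpy as np

# Hypothetical sensitivity of 2 DOAS ratios to the state (CTH, CGT)
K = np.array([[1.0, 0.3],
              [0.8, 0.05]])
Se = np.diag([0.02**2, 0.02**2])  # assumed instrument-noise covariance
Sa = np.diag([3.0**2, 2.0**2])    # assumed prior covariance on (CTH, CGT), km^2

# Posterior covariance of the linearized Bayesian retrieval
S_hat = np.linalg.inv(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa))
post_sigma = np.sqrt(np.diag(S_hat))
print("posterior 1-sigma on (CTH, CGT):", post_sigma)
```

    Comparing each posterior sigma to its prior sigma is the information-content measure: a parameter the measurements barely constrain (like CGT without independent CTH) keeps a posterior uncertainty close to its prior.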

  8. Real time estimation of generation, extinction and flow of muscle fibre action potentials in high density surface EMG.

    PubMed

    Mesin, Luca

    2015-02-01

    Developing a real time method to estimate generation, extinction and propagation of muscle fibre action potentials from two-dimensional, high-density surface electromyogram (EMG). A multi-frame generalization of an optical flow technique including a source term is considered. A model describing generation, extinction and propagation of action potentials is fit to epochs of surface EMG. The algorithm is tested on simulations of high density surface EMG (inter-electrode distance equal to 5 mm) from finite length fibres generated using a multi-layer volume conductor model. The flow and source term estimated from interference EMG reflect the anatomy of the muscle, i.e. the direction of the fibres (average estimation error of 2°) and the positions of innervation zone and tendons under the electrode grid (mean errors of about 1 and 2 mm, respectively). The global conduction velocity of the action potentials from motor units under the detection system is also obtained from the estimated flow. The processing time is about 1 ms per channel for an epoch of EMG of duration 150 ms. A new real time image processing algorithm is proposed to investigate muscle anatomy and activity. Potential applications are proposed in prosthesis control, automatic detection of optimal channels for EMG index extraction and biofeedback. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Effective channel estimation and efficient symbol detection for multi-input multi-output underwater acoustic communications

    NASA Astrophysics Data System (ADS)

    Ling, Jun

    Achieving reliable underwater acoustic communications (UAC) has long been recognized as a challenging problem owing to the scarce bandwidth available and the reverberant spread in both time and frequency domains. To pursue high data rates, we consider a multi-input multi-output (MIMO) UAC system, and our focus is placed on two main issues regarding a MIMO UAC system: (1) channel estimation, which involves the design of the training sequences and the development of a reliable channel estimation algorithm, and (2) symbol detection, which requires interference cancelation schemes due to simultaneous transmission from multiple transducers. To enhance channel estimation performance, we present a cyclic approach for designing training sequences with good auto- and cross-correlation properties, and a channel estimation algorithm called the iterative adaptive approach (IAA). Sparse channel estimates can be obtained by combining IAA with the Bayesian information criterion (BIC). Moreover, we present sparse learning via iterative minimization (SLIM) and demonstrate that SLIM gives similar performance to IAA but at a much lower computational cost. Furthermore, an extension of the SLIM algorithm is introduced to estimate the sparse and frequency modulated acoustic channels. The extended algorithm is referred to as generalization of SLIM (GoSLIM). Regarding symbol detection, a linear minimum mean-squared error based detection scheme, called RELAX-BLAST, which is a combination of vertical Bell Labs layered space-time (V-BLAST) algorithm and the cyclic principle of the RELAX algorithm, is presented and it is shown that RELAX-BLAST outperforms V-BLAST. We show that RELAX-BLAST can be implemented efficiently by making use of the conjugate gradient method and diagonalization properties of circulant matrices. This fast implementation approach requires only simple fast Fourier transform operations and facilitates parallel implementations. 
The effectiveness of the proposed MIMO schemes is verified by both computer simulations and experimental results obtained by analyzing the measurements acquired in multiple in-water experiments.
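The fast implementation claimed in this record rests on a standard fact: a circulant matrix is diagonalized by the DFT, so a matrix-vector product reduces to FFTs and an elementwise multiply. A minimal sketch of that identity, with random illustrative data (not the authors' RELAX-BLAST code):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is c by x via FFTs.

    Because the DFT diagonalizes circulant matrices, the O(n^2) product
    becomes O(n log n): transform, multiply pointwise, transform back.
    """
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x))

# Verify against an explicitly built circulant matrix (illustrative data)
rng = np.random.default_rng(0)
c = rng.standard_normal(8)
x = rng.standard_normal(8)
C = np.array([np.roll(c, k) for k in range(8)]).T  # column k = c cyclically shifted by k
fast = circulant_matvec(c, x).real
slow = C @ x
```

This is the building block that lets such detectors use only simple FFT operations, which is also what makes them easy to parallelize.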

  10. Optimized retrievals of precipitable water from the VAS 'split window'

    NASA Technical Reports Server (NTRS)

    Chesters, Dennis; Robinson, Wayne D.; Uccellini, Louis W.

    1987-01-01

    Precipitable water fields have been retrieved from the VISSR Atmospheric Sounder (VAS) using a radiation transfer model for the differential water vapor absorption between the 11- and 12-micron 'split window' channels. Previous moisture retrievals using only the split window channels provided very good space-time continuity but poor absolute accuracy. This note describes how retrieval errors can be significantly reduced from ±0.9 to ±0.6 g/cm² by empirically optimizing the effective air temperature and absorption coefficients used in the two-channel model. The differential absorption between the VAS 11- and 12-micron channels, empirically estimated from 135 colocated VAS-RAOB observations, is found to be approximately 50 percent smaller than the theoretical estimates. Similar discrepancies have been noted previously between theoretical and empirical absorption coefficients applied to the retrieval of sea surface temperatures using radiances observed by VAS and polar-orbiting satellites. These discrepancies indicate that radiation transfer models for the 11-micron window appear to be less accurate than the satellite observations.
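The empirical optimization this record describes amounts to regressing precipitable water against the 11-/12-micron brightness-temperature difference over colocated radiosonde match-ups. A hedged sketch with synthetic data (the coefficients, noise level, and match-up values are invented, not VAS/RAOB numbers; only the 135-sample count mirrors the record):

```python
import numpy as np

# Synthetic colocated satellite/radiosonde match-ups (all values illustrative)
rng = np.random.default_rng(1)
true_a0, true_a1 = 0.5, 1.2                           # made-up split-window coefficients
dT = rng.uniform(0.5, 4.0, 135)                       # T11 - T12 brightness difference (K)
W = true_a0 + true_a1 * dT + rng.normal(0, 0.3, 135)  # "observed" precipitable water (g/cm^2)

# Empirically optimize the two-channel coefficients by least squares,
# analogous to tuning the effective air temperature / absorption coefficients
A = np.column_stack([np.ones_like(dT), dT])
coef, *_ = np.linalg.lstsq(A, W, rcond=None)
rms = np.sqrt(np.mean((A @ coef - W) ** 2))           # empirical retrieval error
```

Comparing the fitted slope against the theoretically computed differential absorption is how a discrepancy like the ~50 percent one reported above would surface.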

  11. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.
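The two-factor idea above (a between-image component, such as cover-dependent bias, plus a within-image component from the embedding and estimation noise) can be illustrated with a one-way random-effects variance decomposition. Everything below is a toy model with invented variances, not the paper's detectors or data:

```python
import numpy as np

# Toy two-factor error model: each "image" contributes its own bias
# (between-image error) plus independent per-attack noise (within-image error).
rng = np.random.default_rng(4)
n_images, n_repeats = 100, 50
between = rng.normal(0, 0.03, n_images)                  # image-specific bias (illustrative)
errors = between[:, None] + rng.normal(0, 0.01, (n_images, n_repeats))

# Estimate the two variance components (one-way random-effects ANOVA):
within_var = errors.var(axis=1, ddof=1).mean()           # should recover ~0.01^2
between_var = errors.mean(axis=1).var(ddof=1) - within_var / n_repeats  # ~0.03^2
```

The relative weight of the two recovered components is the kind of quantity the record says must inform hypothesis tests in benchmarking: averaging repeated attacks on one image shrinks only the within-image term.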

  12. The effect of domain length and parameter estimation on observation impact in data assimilation for flood inundation forecasting.

    NASA Astrophysics Data System (ADS)

    Cooper, Elizabeth; Dance, Sarah; Garcia-Pintado, Javier; Nichols, Nancy; Smith, Polly

    2017-04-01

    Timely and accurate inundation forecasting provides vital information about the behaviour of fluvial flood water, enabling mitigating actions to be taken by residents and emergency services. Data assimilation is a powerful mathematical technique for combining forecasts from hydrodynamic models with observations to produce a more accurate forecast. We discuss the effect of both domain size and channel friction parameter estimation on observation impact in data assimilation for inundation forecasting. Numerical shallow water simulations are carried out in a simple, idealized river channel topography. Data assimilation is performed using an Ensemble Transform Kalman Filter (ETKF) and synthetic observations of water depth in identical twin experiments. We show that reinitialising the numerical inundation model with corrected water levels after an assimilation can cause an initialisation shock if a hydrostatic assumption is made, leading to significant degradation of the forecast for several hours immediately following an assimilation. We demonstrate an effective and novel method for dealing with this. We find that using data assimilation to combine observations of water depth with forecasts from a hydrodynamic model corrects the forecast very effectively at the time of the observations. In agreement with other authors we find that the corrected forecast then moves quickly back to the open loop forecast which does not take the observations into account. Our investigations show that the time taken for the forecast to decay back to the open loop case depends on the length of the domain of interest when only water levels are corrected. This is because the assimilation corrects water depths in all parts of the domain, even when observations are only available in one area. Error growth in the forecast step then starts at the upstream part of the domain and propagates downstream. The impact of the observations is therefore longer-lived in a longer domain. 
We have found that the upstream-downstream pattern of error growth can be due to incorrect friction parameter specification, rather than errors in inflow as shown elsewhere. Our results show that joint state-parameter estimation can recover accurate values for the parameter controlling channel friction processes in the model, even when observations of water level are only available on part of the flood plain. Correcting water levels and the channel friction parameter together leads to a large improvement in the forecast water levels at all simulation times. The impact of the observations is therefore much greater when the channel friction parameter is corrected along with water levels. We find that domain length effects disappear for joint state-parameter estimation.
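The analysis step this record builds on can be sketched with a generic ensemble Kalman update on a toy "river" of water depths. For simplicity this uses a perturbed-observation EnKF rather than the ETKF variant the study uses, and all numbers (state size, ensemble size, error variances) are illustrative:

```python
import numpy as np

# Toy state: water depths at n points along a channel; observe three of them.
rng = np.random.default_rng(2)
n, n_ens = 20, 50
obs_idx = [5, 10, 15]
truth = 2.0 + 0.1 * np.arange(n)                        # illustrative depths (m)
ens = truth[:, None] + rng.normal(0, 0.5, (n, n_ens))   # prior (forecast) ensemble
R = 0.05 * np.eye(len(obs_idx))                         # observation-error covariance
y = truth[obs_idx]                                      # synthetic depth observations

H = np.zeros((len(obs_idx), n))                         # observation operator
H[np.arange(len(obs_idx)), obs_idx] = 1.0
X = ens - ens.mean(axis=1, keepdims=True)               # ensemble anomalies
P = X @ X.T / (n_ens - 1)                               # ensemble forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)            # Kalman gain

# Perturbed-observation analysis update (simpler than the ETKF in the study)
y_pert = y[:, None] + rng.normal(0, np.sqrt(0.05), (len(obs_idx), n_ens))
ens_a = ens + K @ (y_pert - H @ ens)
```

Because K spreads increments across the whole state through P, depths are corrected even where no gauge exists, which is exactly why the record finds observation impact persisting longer in longer domains. Joint state-parameter estimation would simply append the friction parameter to the state vector before this update.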

  13. A model of the 0.4-GHz scatterometer. [used for agriculture soil moisture program

    NASA Technical Reports Server (NTRS)

    Wu, S. T.

    1978-01-01

    The 0.4 GHz aircraft scatterometer system used for the agricultural soil moisture estimation program is analyzed for the antenna pattern, the signal flow in the receiver data channels, and the errors in the signal outputs. The operational principle, system sensitivity, data handling, and resolution cell length requirements are also described. The backscattering characteristics of the agricultural scenes are contained in the form of the functional dependence of the backscattering coefficient on the incidence angle. The substantial gains of the cross-polarization term of the horizontal and vertical antennas have profound effects on the cross-polarized backscattered signals. If these signals are not corrected properly, large errors could result in the estimate of the cross-polarized backscattering coefficient. It is also necessary to correct the variations of the aircraft parameters during data processing to minimize the error in the σ0 estimation. Recommendations are made to improve the overall performance of the scatterometer system.

  14. TOPEX/POSEIDON microwave radiometer performance and in-flight calibration

    NASA Technical Reports Server (NTRS)

    Ruf, C. S.; Keihm, Stephen J.; Subramanya, B.; Janssen, Michael A.

    1994-01-01

    Results of the in-flight calibration and performance evaluation campaign for the TOPEX/POSEIDON microwave radiometer (TMR) are presented. Intercomparisons are made between TMR and various sources of ground truth, including ground-based microwave water vapor radiometers, radiosondes, global climatological models, special sensor microwave imager data over the Amazon rain forest, and models of clear, calm, subpolar ocean regions. After correction for preflight errors in the processing of thermal/vacuum data, relative channel offsets in the open ocean TMR brightness temperatures were noted at the approximately 1 K level for the three TMR frequencies. Larger absolute offsets of 6-9 K over the rain forest indicated an approximately 5% gain error in the three channel calibrations. This was corrected by adjusting the antenna pattern correction (APC) algorithm. A 10% scale error in the TMR path delay estimates, relative to coincident radiosondes, was corrected in part by the APC adjustment and in part by a 5% modification to the value assumed for the 22.235 GHz water vapor line strength in the path delay retrieval algorithm. After all in-flight corrections to the calibration, TMR global retrieval accuracy for the wet tropospheric range correction is estimated at 1.1 cm root mean square (RMS) with consistent performance under clear, cloudy, and windy conditions.

  15. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    NASA Astrophysics Data System (ADS)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for the Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6% and 19.1% average overall error in discharge, respectively. 
We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross-sectional averaging and the use of shorter reach lengths) and higher water-surface slopes (reducing the proportional impact of slope errors on discharge calculation).
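The Nash-Sutcliffe efficiency used to score the discharge estimates above is a one-line statistic; here is its standard definition applied to an invented discharge series (not the Amazon data):

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe model efficiency: 1 - SSE / (variance of obs about its mean).

    1.0 is a perfect match; 0.0 means the simulation is no better than
    predicting the observed mean; negative values are worse than that.
    """
    sim = np.asarray(sim, float)
    obs = np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative discharge series (m^3/s), values made up for the example
obs = np.array([100., 120., 150., 130., 110.])
sim = np.array([102., 118., 149., 133., 108.])
nse = nash_sutcliffe(sim, obs)
```

Values of 0.99 and 0.88, as reported for the Solimões and Purus, correspond to simulations capturing nearly all of the observed variance.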

  16. Data Assimilation Experiments using Quality Controlled AIRS Version 5 Temperature Soundings

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2008-01-01

    The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. The AIRS Science Team Version 5 retrieval algorithm contains two significant improvements over Version 4: 1) Improved physics allows for use of AIRS observations in the entire 4.3 μm CO2 absorption band in the retrieval of temperature profile T(p) during both day and night. Tropospheric sounding 15 μm CO2 observations are now used primarily in the generation of cloud cleared radiances Ri. This approach allows for the generation of accurate values of Ri and T(p) under most cloud conditions. 2) Another very significant improvement in Version 5 is the ability to generate accurate case-by-case, level-by-level error estimates for the atmospheric temperature profile, as well as channel-by-channel error estimates for Ri. These error estimates are used for quality control of the retrieved products. We have conducted forecast impact experiments assimilating AIRS temperature profiles with different levels of quality control using the NASA GEOS-5 data assimilation system. Assimilation of quality controlled T(p) resulted in significantly improved forecast skill compared to that obtained when all data used operationally by NCEP, except for AIRS data, is assimilated. We also conducted an experiment assimilating AIRS radiances uncontaminated by clouds, as done operationally by ECMWF and NCEP. Forecasts resulting from assimilated AIRS radiances were of poorer quality than those obtained assimilating AIRS temperatures.

  17. Real-time correction of beamforming time delay errors in abdominal ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Rigby, K. W.

    2000-04-01

    The speed of sound varies with tissue type, yet commercial ultrasound imagers assume a constant sound speed. Sound speed variation in abdominal fat and muscle layers is widely believed to be largely responsible for poor contrast and resolution in some patients. The simplest model of the abdominal wall assumes that it adds a spatially varying time delay to the ultrasound wavefront. The adequacy of this model is controversial. We describe an adaptive imaging system consisting of a GE LOGIQ 700 imager connected to a multiprocessor computer. Arrival time errors for each beamforming channel, estimated by correlating each channel signal with the beamsummed signal, are used to correct the imager's beamforming time delays at the acoustic frame rate. A multi-row transducer provides two-dimensional sampling of arrival time errors. We observe significant improvement in abdominal images of healthy male volunteers: increased contrast of blood vessels, increased visibility of the renal capsule, and increased brightness of the liver.
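The per-channel arrival-time estimate described above, correlating one channel against the beamsummed reference and taking the peak lag, can be sketched as follows. The pulse shape and delay are invented for the example; this is not the LOGIQ 700 processing chain:

```python
import numpy as np

def delay_estimate(channel, beamsum, max_lag):
    """Estimate a channel's arrival-time error (in samples) as the lag that
    maximizes its circular cross-correlation with the beamsummed signal."""
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.sum(channel * np.roll(beamsum, lag)) for lag in lags]
    return int(lags[np.argmax(corr)])

# Illustrative pulse: a Gaussian envelope, with one channel arriving 3 samples late
t = np.arange(256)
beamsum = np.exp(-0.5 * ((t - 128) / 10.0) ** 2)   # beamsummed reference
channel = np.roll(beamsum, 3)                       # this channel is 3 samples late
lag = delay_estimate(channel, beamsum, max_lag=8)
```

In the adaptive system, the recovered lag for each channel is subtracted from that channel's beamforming delay on the next frame.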

  18. Experimental research of UWB over fiber system employing 128-QAM and ISFA-optimized scheme

    NASA Astrophysics Data System (ADS)

    He, Jing; Xiang, Changqing; Long, Fengting; Chen, Zuo

    2018-05-01

    In this paper, an optimized intra-symbol frequency-domain averaging (ISFA) scheme is proposed and experimentally demonstrated in intensity-modulation and direct-detection (IMDD) multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system. According to the channel responses of three MB-OFDM UWB sub-bands, the optimal ISFA window size for each sub-band is investigated. After 60-km standard single mode fiber (SSMF) transmission, the experimental results show that, at the bit error rate (BER) of 3.8 × 10⁻³, the receiver sensitivity of 128-quadrature amplitude modulation (QAM) can be improved by 1.9 dB using the proposed enhanced ISFA combined with training sequence (TS)-based channel estimation scheme, compared with the conventional TS-based channel estimation. Moreover, the spectral efficiency (SE) is up to 5.39 bit/s/Hz.

  19. Uncertainties in Cloud Phase and Optical Thickness Retrievals from the Earth Polychromatic Imaging Camera (EPIC)

    NASA Technical Reports Server (NTRS)

    Meyer, Kerry; Yang, Yuekui; Platnick, Steven

    2016-01-01

    This paper presents an investigation of the expected uncertainties of a single channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud-temperature-threshold-based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODIS daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single channel COT retrieval is feasible for EPIC. For ice clouds, single channel retrieval errors are minimal (less than 2 percent) due to the particle-size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10 percent, although for thin clouds (COT less than 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.
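Single-channel COT retrievals of the kind described above typically invert a precomputed, monotonic reflectance-versus-COT lookup table for the assumed CER and geometry. A hedged sketch with entirely invented table values (real tables come from radiative-transfer calculations, not these numbers):

```python
import numpy as np

# Hypothetical reflectance-vs-COT lookup table for one fixed geometry and CER.
# Both grids are made up for illustration; reflectance must increase with COT
# for the inversion below to be well defined.
cot_grid = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
refl_grid = np.array([0.08, 0.13, 0.21, 0.33, 0.48, 0.62, 0.73, 0.80])

def retrieve_cot(reflectance):
    """Invert a single visible-channel reflectance to COT by interpolating
    the monotonic lookup table -- the single-channel idea described above."""
    return float(np.interp(reflectance, refl_grid, cot_grid))

cot = retrieve_cot(0.48)
```

The flattening of the curve at large COT is why errors grow there, and the sensitivity of refl_grid to the assumed CER for liquid clouds is the source of the ~10 percent errors the study reports.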

  20. Uncertainties in cloud phase and optical thickness retrievals from the Earth Polychromatic Imaging Camera (EPIC)

    PubMed Central

    Meyer, Kerry; Yang, Yuekui; Platnick, Steven

    2018-01-01

    This paper presents an investigation of the expected uncertainties of a single channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud temperature threshold based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODIS daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single channel COT retrieval is feasible for EPIC. For ice clouds, single channel retrieval errors are minimal (< 2%) due to the particle size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10%, although for thin clouds (COT < 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study. PMID:29619116

  1. Uncertainties in cloud phase and optical thickness retrievals from the Earth Polychromatic Imaging Camera (EPIC).

    PubMed

    Meyer, Kerry; Yang, Yuekui; Platnick, Steven

    2016-01-01

    This paper presents an investigation of the expected uncertainties of a single channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud temperature threshold based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODIS daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single channel COT retrieval is feasible for EPIC. For ice clouds, single channel retrieval errors are minimal (< 2%) due to the particle size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10%, although for thin clouds (COT < 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.

  2. Uncertainties in cloud phase and optical thickness retrievals from the Earth Polychromatic Imaging Camera (EPIC)

    NASA Astrophysics Data System (ADS)

    Meyer, Kerry; Yang, Yuekui; Platnick, Steven

    2016-04-01

    This paper presents an investigation of the expected uncertainties of a single-channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud-temperature-threshold-based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODerate-resolution Imaging Spectroradiometer (MODIS) daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC Sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single-channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single-channel COT retrieval is feasible for EPIC. For ice clouds, single-channel retrieval errors are minimal (< 2 %) due to the particle size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10 %, although for thin clouds (COT < 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.

  3. On codes with multi-level error-correction capabilities

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1987-01-01

    In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than the others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.

  4. Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold

    NASA Astrophysics Data System (ADS)

    Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph

    2018-05-01

    In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately, by a factor of 2/π (-1.96 dB), compared to an ideal infinite-resolution converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with an unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
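The 2/π figure quoted above is easy to verify numerically, and the flavor of 1-bit estimation can be illustrated by recovering a constant signal level from sign measurements when the threshold is known. The estimator below is a textbook probit-style inversion under Gaussian noise, invented for illustration, not the paper's asymptotically optimal algorithm (which must additionally treat the threshold as a nuisance parameter):

```python
import numpy as np
from statistics import NormalDist

# Low-SNR performance penalty of an ideal symmetric 1-bit quantizer: 2/pi
loss_db = 10 * np.log10(2 / np.pi)   # about -1.96 dB

# Sketch: estimate a constant level theta from y = sign(theta + noise - tau)
# with a KNOWN threshold tau (illustrative values throughout)
rng = np.random.default_rng(3)
theta, tau, sigma, n = 0.4, 0.1, 1.0, 200_000
y = np.sign(theta + sigma * rng.standard_normal(n) - tau)

# P(y = +1) = Phi((theta - tau)/sigma), so invert the Gaussian CDF:
p_plus = np.mean(y > 0)
theta_hat = tau + sigma * NormalDist().inv_cdf(p_plus)
```

With an unknown tau, a single constant level is no longer identifiable from p_plus alone, which is why the paper needs richer receive signals (e.g. known training sequences) to separate offset from channel.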

  5. Hierarchical Boltzmann simulations and model error estimation

    NASA Astrophysics Data System (ADS)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while subsequent refinements allow the result to be successively improved toward the full Boltzmann solution. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof-of-concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be obtained by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  6. An approach enabling adaptive FEC for OFDM in fiber-VLLC system

    NASA Astrophysics Data System (ADS)

    Wei, Yiran; He, Jing; Deng, Rui; Shi, Jin; Chen, Shenghai; Chen, Lin

    2017-12-01

    In this paper, we propose an orthogonal circulant matrix transform (OCT)-based adaptive frame-level forward error correction (FEC) scheme for fiber-visible laser light communication (VLLC) systems and experimentally demonstrate it with Reed-Solomon (RS) codes. In this method, no extra bits are spent on adaptation signaling beyond the training sequence (TS), which is simultaneously used for synchronization and channel estimation. Therefore, RS coding can be performed adaptively frame by frame via feedback of the last received codeword error rate (CER), estimated from the TSs of the previous few OFDM frames. In addition, the experimental results show that over 20 km standard single-mode fiber (SSMF) and 8 m visible light transmission, the costs of RS codewords are at most 14.12% lower than those of conventional adaptive subcarrier-RS-code based 16-QAM OFDM at a bit error rate (BER) of 10⁻⁵.

  7. On the more accurate channel model and positioning based on time-of-arrival for visible light localization

    NASA Astrophysics Data System (ADS)

    Amini, Changeez; Taherpour, Abbas; Khattab, Tamer; Gazor, Saeed

    2017-01-01

    This paper presents an improved propagation channel model for visible light in indoor environments. We employ this model to derive an enhanced positioning algorithm using the relation between the time-of-arrivals (TOAs) and the distances, for two cases: known and unknown vertical distance between transmitter and receiver. We propose two estimators, namely the maximum likelihood estimator and a method-of-moments estimator. To provide a basis for evaluating these methods, we calculate the Cramer-Rao lower bound (CRLB) on the performance of the estimates. We show that the proposed model and estimators yield superior positioning performance when the transmitter and receiver are perfectly synchronized, in comparison to the existing state-of-the-art counterparts. Moreover, the corresponding CRLB of the proposed model represents about a 20 dB reduction in the localization error bound compared with the previous model for some practical scenarios.
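    As a baseline for how TOAs map to a position fix, here is a minimal linearized least-squares solver for the synchronized 2-D case with known anchor positions. This is the textbook approach, not the paper's ML or method-of-moments estimator, and the demo geometry is invented:

```python
def toa_position(anchors, toas, c=3e8):
    """Linearized least-squares 2-D fix from times of arrival.
    anchors: [(x, y), ...] of >= 3 known transmitter positions;
    toas: one-way propagation times (synchronized clocks assumed)."""
    d = [c * t for t in toas]
    (x1, y1), d1 = anchors[0], d[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], d[1:]):
        # subtracting the first range equation linearizes the problem
        rows.append((2 * (xi - x1), 2 * (yi - y1)))
        rhs.append(d1**2 - di**2 + xi**2 + yi**2 - x1**2 - y1**2)
    # normal equations for the 2x2 least-squares problem
    a11 = sum(r[0] * r[0] for r in rows); a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# noiseless demo with c = 1: anchors at the corners, true position (3, 4)
fix = toa_position([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)],
                   [5.0, 65 ** 0.5, 45 ** 0.5], c=1.0)
```

    With noisy TOAs this linearized fix is a common initializer for the iterative ML estimator the paper analyzes against the CRLB.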

  8. RD Optimized, Adaptive, Error-Resilient Transmission of MJPEG2000-Coded Video over Multiple Time-Varying Channels

    NASA Astrophysics Data System (ADS)

    Bezan, Scott; Shirani, Shahram

    2006-12-01

    To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.
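    The flavor of the RD-optimized assignment can be illustrated with a greedy bit-allocation sketch: each coding unit (tile or packet) offers successively stronger RCPC protection steps, and we repeatedly buy the step with the best distortion reduction per bit. A greedy ratio rule is a common heuristic, not the paper's exact optimization, and is not guaranteed globally optimal; the demo numbers are invented:

```python
import heapq

def allocate_protection(units, budget):
    """units: {name: [(extra_bits, distortion_drop), ...]} listing successively
    stronger protection steps per coding unit.  Greedily take the step with the
    best distortion drop per bit until the channel-bit budget is exhausted.
    Returns the protection level chosen per unit and the bits spent."""
    heap, level = [], {u: 0 for u in units}
    for u, steps in units.items():
        bits, drop = steps[0]
        heapq.heappush(heap, (-drop / bits, u))  # max-heap on efficiency
    spent, chosen = 0, {u: 0 for u in units}
    while heap:
        _, u = heapq.heappop(heap)
        bits, drop = units[u][level[u]]
        if spent + bits > budget:
            continue  # this step no longer fits; try the next candidate
        spent += bits
        level[u] += 1
        chosen[u] = level[u]
        if level[u] < len(units[u]):
            nb, nd = units[u][level[u]]
            heapq.heappush(heap, (-nd / nb, u))
    return chosen, spent

demo_units = {"tile0": [(100, 50.0), (100, 10.0)], "tile1": [(100, 30.0)]}
chosen, spent = allocate_protection(demo_units, budget=200)
```

    Re-running the allocation as the channel estimates change is what makes the protection adaptive across time-varying channels.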

  9. Multiple estimation channel decoupling and optimization method based on inverse system

    NASA Astrophysics Data System (ADS)

    Wu, Peng; Mu, Rongjun; Zhang, Xin; Deng, Yanpeng

    2018-03-01

    This paper addresses the autonomous navigation requirements of an intelligent deformation missile. Building on dynamics and kinematics models of the missile, navigation-subsystem solution methods, and error models, it focuses on the corresponding data-fusion and decision-fusion techniques: the sensitive channels of the filter input are decoupled through an inverse system designed from the dynamics, reducing the influence of sudden changes in the measurement information on the filter input. A series of simulation experiments verifies the feasibility and effectiveness of the inverse-system decoupling algorithm.

  10. Long-range multi-carrier acoustic communications in shallow water based on iterative sparse channel estimation.

    PubMed

    Kang, Taehyuk; Song, H C; Hodgkiss, W S; Soo Kim, Jea

    2010-12-01

    Long-range orthogonal frequency division multiplexing (OFDM) acoustic communications is demonstrated using data from the Kauai Acomms MURI 2008 (KAM08) experiment carried out in about 106 m deep shallow water west of Kauai, HI, in June 2008. The source bandwidth was 8 kHz (12-20 kHz), and the data were received by a 16-element vertical array at a distance of 8 km. Iterative sparse channel estimation is applied in conjunction with low-density parity-check decoding. In addition, the impact of diversity combining in a highly inhomogeneous underwater environment is investigated. Error-free transmission using 16-quadrature amplitude modulation is achieved at a data rate of 10 kb/s.
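    Iterative sparse channel estimation is often realized with matching-pursuit-style algorithms. A minimal orthogonal matching pursuit (OMP) sketch over a delay-tap dictionary is shown below; the DFT dictionary and tap values in the demo are invented for illustration, and the paper's specific iterative estimator is not reproduced here:

```python
import numpy as np

def omp_channel_estimate(A, y, sparsity):
    """Orthogonal matching pursuit: A is the (pilots x taps) dictionary built
    from known training symbols, y the received samples, sparsity the assumed
    number of significant channel taps."""
    residual = y.astype(complex)
    support = []
    h = np.zeros(A.shape[1], dtype=complex)
    for _ in range(sparsity):
        corr = np.abs(A.conj().T @ residual)   # match residual against taps
        corr[support] = 0                      # do not re-select a chosen tap
        support.append(int(np.argmax(corr)))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # re-fit on support
        residual = y - sub @ coef
    h[support] = coef
    return h

# demo: 16-point DFT dictionary, a 2-tap channel at delays 3 and 11
n = 16
A = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
h_true = np.zeros(n, dtype=complex)
h_true[3], h_true[11] = 1.5, -0.8
h_est = omp_channel_estimate(A, A @ h_true, sparsity=2)
```

    Exploiting the sparsity of the multipath arrivals is what makes the estimate usable at long range, where only a few delay taps carry significant energy.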

  11. The use of fractional orders in the determination of birefringence of highly dispersive materials by the channelled spectrum method

    NASA Astrophysics Data System (ADS)

    Nagarajan, K.; Shashidharan Nair, C. K.

    2007-07-01

    The channelled spectrum employing polarized light interference is a very convenient method for the study of dispersion of birefringence. However, while using this method, the absolute order of the polarized light interference fringes cannot be determined easily. Approximate methods are therefore used to estimate the order. One of the approximations is that the dispersion of birefringence across neighbouring integer-order fringes is negligible. In this paper, we show how this approximation can cause errors. A modification is reported whereby the error in the determination of absolute fringe order can be reduced by using fractional orders instead of integer orders. The theoretical background for this method, supported by computer simulation, is presented. An experimental arrangement implementing these modifications is described. This method uses a Constant Deviation Spectrometer (CDS) and a Soleil Babinet Compensator (SBC).
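    The integer-order approximation the paper scrutinizes can be stated compactly: two adjacent dark fringes at wavelengths lam1 > lam2 are assigned orders m and m + 1, and the birefringence dispersion between them is ignored. A sketch of that baseline calculation follows; the sample wavelengths and thickness are made up:

```python
def order_and_birefringence(lam1, lam2, thickness):
    """Integer-order approximation for the channelled spectrum: adjacent dark
    fringes at lam1 > lam2 are assigned orders m and m + 1, neglecting the
    dispersion of birefringence between them (the approximation whose error
    the paper quantifies).  All lengths in metres."""
    m = lam2 / (lam1 - lam2)   # generally non-integer; round to assign an order
    order = round(m)
    return order, order * lam1 / thickness  # retardation / thickness

# e.g. fringes at 600 nm and 590 nm through a 5 mm thick sample (made-up values)
order, biref = order_and_birefringence(600e-9, 590e-9, 5e-3)
```

    The paper's modification replaces the rounded integer order with a fractional order measured with the SBC, which is where the reduction of the order-assignment error comes from.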

  12. Quantum steganography and quantum error-correction

    NASA Astrophysics Data System (ADS)

    Shaw, Bilal A.

    Quantum error-correcting codes have been the cornerstone of research in quantum information science (QIS) for more than a decade. Without their conception, quantum computers would be a footnote in the history of science. When researchers embraced the idea that we live in a world where the effects of a noisy environment cannot completely be stripped away from the operations of a quantum computer, the natural way forward was to think about importing classical coding theory into the quantum arena to give birth to quantum error-correcting codes which could help in mitigating the debilitating effects of decoherence on quantum data. We first talk about the six-qubit quantum error-correcting code and show its connections to entanglement-assisted error-correcting coding theory and then to subsystem codes. This code bridges the gap between the five-qubit (perfect) and Steane codes. We discuss two methods to encode one qubit into six physical qubits. Each of the two examples corrects an arbitrary single-qubit error. The first example is a degenerate six-qubit quantum error-correcting code. We explicitly provide the stabilizer generators, encoding circuits, codewords, logical Pauli operators, and logical CNOT operator for this code. We also show how to convert this code into a non-trivial subsystem code that saturates the subsystem Singleton bound. We then prove that a six-qubit code without entanglement assistance cannot simultaneously possess a Calderbank-Shor-Steane (CSS) stabilizer and correct an arbitrary single-qubit error. A corollary of this result is that the Steane seven-qubit code is the smallest single-error correcting CSS code. Our second example is the construction of a non-degenerate six-qubit CSS entanglement-assisted code. This code uses one bit of entanglement (an ebit) shared between the sender (Alice) and the receiver (Bob) and corrects an arbitrary single-qubit error. 
The code we obtain is globally equivalent to the Steane seven-qubit code and thus corrects an arbitrary error on the receiver's half of the ebit as well. We prove that this code is the smallest code with a CSS structure that uses only one ebit and corrects an arbitrary single-qubit error on the sender's side. We discuss the advantages and disadvantages for each of the two codes. In the second half of this thesis we explore the yet uncharted and relatively undiscovered area of quantum steganography. Steganography is the process of hiding secret information by embedding it in an "innocent" message. We present protocols for hiding quantum information in a codeword of a quantum error-correcting code passing through a channel. Using either a shared classical secret key or shared entanglement Alice disguises her information as errors in the channel. Bob can retrieve the hidden information, but an eavesdropper (Eve) with the power to monitor the channel, but without the secret key, cannot distinguish the message from channel noise. We analyze how difficult it is for Eve to detect the presence of secret messages, and estimate rates of steganographic communication and secret key consumption for certain protocols. We also provide an example of how Alice hides quantum information in the perfect code when the underlying channel between Bob and her is the depolarizing channel. Using this scheme Alice can hide up to four stego-qubits.

  13. Quantum error correction for continuously detected errors with any number of error channels per qubit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Charlene; Wiseman, Howard; Jacobs, Kurt

    2004-08-01

    It was shown by Ahn, Wiseman, and Milburn [Phys. Rev. A 67, 052310 (2003)] that feedback control could be used as a quantum error correction process for errors induced by weak continuous measurement, given one perfectly measured error channel per qubit. Here we point out that this method can be easily extended to an arbitrary number of error channels per qubit. We show that the feedback protocols generated by our method encode n-2 logical qubits in n physical qubits, thus requiring just one more physical qubit than in the previous case.

  14. Mitigating Photon Jitter in Optical PPM Communication

    NASA Technical Reports Server (NTRS)

    Moision, Bruce

    2008-01-01

    A theoretical analysis of photon-arrival jitter in an optical pulse-position-modulation (PPM) communication channel has been performed, and now constitutes the basis of a methodology for designing receivers to compensate so that errors attributable to photon-arrival jitter are minimized or nearly minimized. Photon-arrival jitter is an uncertainty in the estimated time of arrival of a photon relative to the boundaries of a PPM time slot. Photon-arrival jitter is attributable to two main causes: (1) receiver synchronization error [error in the receiver operation of partitioning time into PPM slots] and (2) random delay between the time of arrival of a photon at a detector and the generation, by the detector circuitry, of a pulse in response to the photon. For channels with sufficiently long time slots, photon-arrival jitter is negligible. However, as durations of PPM time slots are reduced in efforts to increase throughputs of optical PPM communication channels, photon-arrival jitter becomes a significant source of error, leading to significant degradation of performance if not taken into account in design. For the purpose of the analysis, a receiver was assumed to operate in a photon-starved regime, in which photon counts follow a Poisson distribution. The analysis included derivation of exact equations for symbol likelihoods in the presence of photon-arrival jitter. These equations describe what is well known in the art as a matched filter for a channel containing Gaussian noise. These equations would yield an optimum receiver if they could be implemented in practice. Because the exact equations may be too complex to implement in practice, approximations that would yield suboptimal receivers were also derived.
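    For the photon-starved Poisson regime, the symbol-likelihood computation can be sketched as below. The fixed three-slot jitter spread is an illustrative assumption standing in for the paper's exact jitter statistics, and the rates in the test are invented:

```python
import math

def ppm_ml_symbol(counts, signal_rate, background_rate, spread=(0.1, 0.8, 0.1)):
    """Maximum-likelihood PPM symbol decision from Poisson slot counts.
    Photon-arrival jitter is modelled, as an illustrative assumption, by a
    fixed fraction of pulse energy leaking into the adjacent slots."""
    best_m, best_ll = 0, -math.inf
    for m in range(len(counts)):            # candidate pulsed slot
        ll = 0.0
        for i, k in enumerate(counts):
            frac = spread[i - m + 1] if abs(i - m) <= 1 else 0.0
            lam = background_rate + signal_rate * frac
            ll += k * math.log(lam) - lam   # Poisson log-likelihood (log k! dropped)
        if ll > best_ll:
            best_m, best_ll = m, ll
    return best_m
```

    With no jitter model (spread concentrated in one slot) this reduces to picking the slot with the largest count; the spread terms are what let counts leaked into neighbouring slots still vote for the correct symbol.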

  15. Search for gamma-ray events in the BATSE data base

    NASA Technical Reports Server (NTRS)

    Lewin, Walter

    1994-01-01

    We find large location errors and error radii in the locations of channel 1 Cygnus X-1 events. These errors and their associated uncertainties are a result of low signal-to-noise ratios (a few sigma) in the two brightest detectors for each event. The untriggered events suffer from similarly low signal-to-noise ratios, and their location errors are expected to be at least as large as those found for Cygnus X-1 with a given signal-to-noise ratio. The statistical error radii are consistent with those found for Cygnus X-1 and with the published estimates. We therefore expect approximately 20 - 30 deg location errors for the untriggered events. Hence, many of the untriggered events occurring within a few months of the triggered activity from SGR 1900 plus 14 are indeed consistent with the SGR source location, although Cygnus X-1 is also a good candidate.

  16. Evaluation of process errors in bed load sampling using a Dune Model

    USGS Publications Warehouse

    Gomez, Basil; Troutman, Brent M.

    1997-01-01

    Reliable estimates of the streamwide bed load discharge obtained using sampling devices are dependent upon good at-a-point knowledge across the full width of the channel. Using field data and information derived from a model that describes the geometric features of a dune train in terms of a spatial process observed at a fixed point in time, we show that sampling errors decrease as the number of samples collected increases, and the number of traverses of the channel over which the samples are collected increases. It also is preferable that bed load sampling be conducted at a pace which allows a number of bed forms to pass through the sampling cross section. The situations we analyze and simulate pertain to moderate transport conditions in small rivers. In such circumstances, bed load sampling schemes typically should involve four or five traverses of a river, and the collection of 20–40 samples at a rate of five or six samples per hour. By ensuring that spatial and temporal variability in the transport process is accounted for, such a sampling design reduces both random and systematic errors and hence minimizes the total error involved in the sampling process.
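    The more-samples-less-error finding can be reproduced with a toy Monte Carlo. The sinusoidal dune fluctuation and its amplitude are invented stand-ins for the paper's dune model, and the units are arbitrary:

```python
import math, random

def sampling_error(n_samples, n_trials=2000, seed=1):
    """RMS error of the n-sample mean bed load rate for a toy dune train:
    the at-a-point transport rate varies sinusoidally with dune phase
    (amplitude and form invented), and each sample hits a random phase."""
    rng = random.Random(seed)
    true_mean, sq_errs = 1.0, []
    for _ in range(n_trials):
        est = sum(true_mean * (1.0 + 0.8 * math.sin(2 * math.pi * rng.random()))
                  for _ in range(n_samples)) / n_samples
        sq_errs.append((est - true_mean) ** 2)
    return (sum(sq_errs) / n_trials) ** 0.5
```

    The error shrinks roughly like 1/sqrt(n), echoing the finding that collecting 20-40 samples over several traverses, paced so that several bed forms pass, controls the random component of the sampling error.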

  17. Five-wave-packet quantum error correction based on continuous-variable cluster entanglement

    PubMed Central

    Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi

    2015-01-01

    Quantum error correction protects quantum states against noise and decoherence in quantum communication and quantum computation, enabling fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous-variable cluster entangled state of light are used as five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, so any error appearing in the remaining two channels never affects the output state, i.e., the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395

  18. A study of digital holographic filters generation. Phase 2: Digital data communication system, volume 1

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Mo, C. D.

    1978-01-01

    An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels having various error statistics, ranging from purely random-error to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case, except the 1% random-error channel, where the Viterbi decoder had one fewer bit of decoding error.

  19. Five-equation and robust three-equation methods for solution verification of large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dutta, Rabijit; Xing, Tao

    2018-02-01

    This study evaluates the recently developed general framework of solution verification methods for large eddy simulation (LES) using implicitly filtered LES of periodic channel flow at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows monotonic convergence of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy of either the numerical or the modeling errors. Based on the results of the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. The new three-equation method was found to be robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes; however, it predicts S_C reasonably when the grids resolve at least 80% of the total turbulent kinetic energy.
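    The numerical-error part of such verification can be illustrated with the classical single-term generalized Richardson extrapolation over three systematically refined grids, a simpler cousin of the paper's multi-equation methods (which additionally separate the modeling error). The demo values are synthetic, manufactured to have observed order 2:

```python
import math

def grid_error_estimate(f1, f2, f3, r):
    """Single-term generalized Richardson extrapolation: f1, f2, f3 are
    solutions on fine, medium, and coarse grids with constant refinement
    ratio r.  Returns the observed order p, the extrapolated benchmark S_C,
    and the estimated numerical error remaining in the fine-grid solution."""
    p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)
    err = (f2 - f1) / (r ** p - 1.0)   # estimated error in f1
    s_c = f1 - err                     # extrapolated (grid-free) benchmark
    return p, s_c, err

# synthetic monotonically converging triplet: exact value 2.0, order 2, r = 2
p, s_c, err = grid_error_estimate(2.1, 2.4, 3.6, r=2.0)
```

    This estimate presumes monotonic convergence; handling oscillatory or divergent triplets without fixing the orders of accuracy is precisely what the paper's three- and five-equation methods add.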

  20. Investigation of Point Doppler Velocimetry (PDV) for Transition Detection in Boundary Layers

    NASA Technical Reports Server (NTRS)

    Kuhlman, John M.

    1999-01-01

    A two-component Point Doppler Velocimetry (PDV) system has been developed and tested. Improvements were made to an earlier PDV system in terms of experimental techniques as well as the data acquisition and reduction software. Measurements of the streamwise and spanwise mean and fluctuating velocities for flows from a rectangular channel and over an NACA 0012 airfoil were made, and the data were compared against hot-wire data. The closest to the airfoil surface that PDV measurements could be made was on the order of 0.005 m (0.2 in., z/c = 0.0169). When the PDV and hot-wire data were compared, the time traces for each appeared similar. The mean velocities agreed to within plus or minus 2 m/sec, while the RMS velocities agreed to plus or minus 0.4 m/sec. While the PDV time autocorrelations agreed with those of the hot-wire data, the PDV power spectral densities were noisier above 750 Hz. A major source of error in these experiments was determined to be the drifting of the iodine cell stem temperatures. While the stem temperatures were controlled to within plus or minus 0.1 C, this could lead to a frequency shift of as much as 6 MHz, which translates into an error of 1.6 m/sec for the back-scatter channel, and up to 6.9 m/sec for the forward-scatter channel. These error estimates are consistent with the observed error magnitudes.

  1. Progressive transmission of images over fading channels using rate-compatible LDPC codes.

    PubMed

    Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul

    2006-12-01

    In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.

  2. The Effects of Spatial Diversity and Imperfect Channel Estimation on Wideband MC-DS-CDMA and MC-CDMA

    DTIC Science & Technology

    2009-10-01

    In our previous work, we compared the theoretical bit error rates of multi-carrier direct sequence code division multiple access (MC-DS-CDMA) and...consider only those cases where MC-CDMA has higher frequency diversity than MC-DS-CDMA. Since increases in diversity yield diminishing gains, we conclude

  3. Improving Forecast Skill by Assimilation of Quality Controlled AIRS Version 5 Temperature Soundings

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Reale, Oreste

    2009-01-01

    The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. The AIRS Science Team Version 5 retrieval algorithm contains two significant improvements over Version 4: 1) Improved physics allows for use of AIRS observations in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profile T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of cloud-cleared radiances R(sub i). This approach allows for the generation of accurate values of R(sub i) and T(p) under most cloud conditions. 2) Another very significant improvement in Version 5 is the ability to generate accurate case-by-case, level-by-level error estimates for the atmospheric temperature profile, as well as channel-by-channel error estimates for R(sub i). These error estimates are used for Quality Control of the retrieved products. We have conducted forecast impact experiments assimilating AIRS temperature profiles with different levels of Quality Control using the NASA GEOS-5 data assimilation system. Assimilation of Quality Controlled T(p) resulted in significantly improved forecast skill compared to that obtained from analyses in which all data used operationally by NCEP, except AIRS data, are assimilated. We also conducted an experiment assimilating AIRS radiances uncontaminated by clouds, as done operationally by ECMWF and NCEP. Forecasts resulting from assimilating AIRS radiances were of poorer quality than those obtained by assimilating AIRS temperatures.

  4. The effect of flow data resolution on sediment yield estimation and channel design

    NASA Astrophysics Data System (ADS)

    Rosburg, Tyler T.; Nelson, Peter A.; Sholtes, Joel S.; Bledsoe, Brian P.

    2016-07-01

    The decision to use either daily-averaged or sub-daily streamflow records has the potential to impact the calculation of sediment transport metrics and stream channel design. Using bedload and suspended load sediment transport measurements collected at 138 sites across the United States, we calculated the effective discharge, sediment yield, and half-load discharge using sediment rating curves over long time periods (median record length = 24 years) with both daily-averaged and sub-daily streamflow records. A comparison of sediment transport metrics calculated with both daily-averaged and sub-daily streamflow data at each site showed that daily-averaged flow data do not adequately represent the magnitude of high stream flows at hydrologically flashy sites. Daily-averaged streamflow data cause an underestimation of sediment transport and sediment yield (including the half-load discharge) at flashy sites. The degree of underestimation was correlated with the level of flashiness and the exponent of the sediment rating curve. No consistent relationship between the use of either daily-averaged or sub-daily streamflow data and the resultant effective discharge was found. When used in channel design, computed sediment transport metrics may carry errors due to flow-data resolution; these errors can propagate into design slope calculations and, if implemented, lead to unwanted aggradation or degradation in the design channel. This analysis illustrates the importance of using sub-daily flow data in the calculation of sediment yield in urbanizing or otherwise flashy watersheds. Furthermore, this analysis provides practical charts for estimating and correcting these types of underestimation errors commonly incurred in sediment yield calculations.
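    The underestimation mechanism is Jensen's inequality applied to a concave-up rating curve Qs = a*Q**b with b > 1: averaging the flow before applying the curve misses the disproportionate transport during short high-flow pulses. A minimal illustration, with invented coefficients and hydrograph values:

```python
def sediment_load(flows, a=1e-4, b=2.0):
    """Rating-curve load over equal-length intervals: sum of a * Q**b.
    Coefficients are invented for illustration."""
    return sum(a * q ** b for q in flows)

# one flashy day: four sub-daily flows versus their daily average
sub_daily = [1.0, 9.0, 3.0, 1.0]
daily_avg = [sum(sub_daily) / len(sub_daily)] * len(sub_daily)
load_sub, load_avg = sediment_load(sub_daily), sediment_load(daily_avg)
```

    The gap between the two loads grows with both the flashiness of the hydrograph and the rating-curve exponent b, matching the correlations reported above.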

  5. Channel Acquisition for Massive MIMO-OFDM With Adjustable Phase Shift Pilots

    NASA Astrophysics Data System (ADS)

    You, Li; Gao, Xiqi; Swindlehurst, A. Lee; Zhong, Wen

    2016-03-01

    We propose adjustable phase shift pilots (APSPs) for channel acquisition in wideband massive multiple-input multiple-output (MIMO) systems employing orthogonal frequency division multiplexing (OFDM) to reduce the pilot overhead. Based on a physically motivated channel model, we first establish a relationship between channel space-frequency correlations and the channel power angle-delay spectrum in the massive antenna array regime, which reveals the channel sparsity in massive MIMO-OFDM. With this channel model, we then investigate channel acquisition, including channel estimation and channel prediction, for massive MIMO-OFDM with APSPs. We show that channel acquisition performance in terms of sum mean square error can be minimized if the user terminals' channel power distributions in the angle-delay domain can be made non-overlapping with proper phase shift scheduling. A simplified pilot phase shift scheduling algorithm is developed based on this optimal channel acquisition condition. The performance of APSPs is investigated for both one symbol and multiple symbol data models. Simulations demonstrate that the proposed APSP approach can provide substantial performance gains in terms of achievable spectral efficiency over the conventional phase shift orthogonal pilot approach in typical mobility scenarios.

  6. Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT

    NASA Astrophysics Data System (ADS)

    Ubaidulla, P.; Chockalingam, A.

    2009-12-01

    We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.

  7. Non-contact cardiac pulse rate estimation based on web-camera

    NASA Astrophysics Data System (ADS)

    Wang, Yingzhi; Han, Tailin

    2015-12-01

    In this paper, we introduce a new methodology for non-contact cardiac pulse rate estimation based on imaging photoplethysmography (iPPG) and blind source separation. This novel approach can be applied to color video recordings of the human face and is based on automatic face tracking along with blind source separation of the RGB color channels into three source components. First, the data extracted from the color video are pre-processed by normalization and sphering. The cardiac pulse rate is then estimated by spectrum analysis of the components obtained with Independent Component Analysis (ICA) using the JADE algorithm. With Bland-Altman and correlation analysis, we compared the cardiac pulse rate extracted from videos recorded by a basic webcam to a commercial pulse oximetry sensor and achieved high accuracy and correlation. The root mean square error of the estimated results is 2.06 bpm, which indicates that the algorithm can realize non-contact measurement of cardiac pulse rate.
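    The final spectrum-analysis step can be sketched by scanning the physiologically plausible band for the strongest spectral component of one separated source. A plain DFT scan stands in for the paper's spectrum analysis, and the demo signal (a pure 1.2 Hz tone at 30 fps) is synthetic:

```python
import math

def pulse_rate_bpm(signal, fs, lo=0.75, hi=4.0):
    """Estimate pulse rate by scanning the 45-240 bpm band (lo..hi Hz) for
    the frequency with the largest DFT power in one separated source."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]          # remove DC before the scan
    best_f, best_p, f = lo, 0.0, lo
    while f <= hi:
        re = sum(v * math.cos(2 * math.pi * f * i / fs) for i, v in enumerate(x))
        im = sum(v * math.sin(2 * math.pi * f * i / fs) for i, v in enumerate(x))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
        f += 0.01                           # 0.6 bpm scan resolution
    return best_f * 60.0

# demo: 10 s of a pure 1.2 Hz (72 bpm) source sampled at 30 fps
demo = [math.sin(2 * math.pi * 1.2 * i / 30.0) for i in range(300)]
bpm = pulse_rate_bpm(demo, fs=30.0)
```

    In the full pipeline this scan would be applied to the ICA/JADE component with the most periodic structure rather than to a raw color channel.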

  8. Techniques for estimating streamflow characteristics in the Eastern and Interior coal provinces of the United States

    USGS Publications Warehouse

    Wetzel, Kim L.; Bettandorff, J.M.

    1986-01-01

    Techniques are presented for estimating various streamflow characteristics, such as peak flows, mean monthly and annual flows, flow durations, and flow volumes, at ungaged sites on unregulated streams in the Eastern Coal region. Streamflow data and basin characteristics for 629 gaging stations were used to develop multiple-linear-regression equations. Separate equations were developed for the Eastern and Interior Coal Provinces. Drainage area is an independent variable common to all equations. Other variables needed, depending on the streamflow characteristic, are mean annual precipitation, mean basin elevation, main channel length, basin storage, main channel slope, and forest cover. A ratio of the observed 50- to 90-percent flow durations was used in the development of relations to estimate low-flow frequencies in the Eastern Coal Province. Relations to estimate low flows in the Interior Coal Province are not presented because the standard errors were greater than 0.7500 log units and were considered to be of poor reliability.
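    The report's equations are of the multiple-linear-regression-in-log-space type. The one-predictor version, using drainage area (the variable common to all of the report's equations), can be sketched as follows with synthetic data:

```python
import math

def fit_power_law(areas, flows):
    """Ordinary least squares for log10(Q) = b0 + b1 * log10(A), the
    one-variable analogue of the report's regional regression equations."""
    xs = [math.log10(a) for a in areas]
    ys = [math.log10(q) for q in flows]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

def predict(b0, b1, area):
    """Estimated streamflow characteristic at an ungaged site of given area."""
    return 10 ** (b0 + b1 * math.log10(area))

# synthetic gaged sites that follow Q = 5 * A**0.8 exactly
areas = [1.0, 10.0, 100.0, 1000.0]
flows = [5.0 * a ** 0.8 for a in areas]
b0, b1 = fit_power_law(areas, flows)
```

    The report's standard errors in log units correspond to the residual scatter of this fit; adding predictors such as precipitation, elevation, and channel slope extends the same least-squares machinery to multiple regression.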

  9. Uncertainty of Passive Imager Cloud Optical Property Retrievals to Instrument Radiometry and Model Assumptions: Examples from MODIS

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Meyer, Kerry; Amarasinghe, Nandana; Arnold, G. Thomas; Zhang, Zhibo; King, Michael D.

    2013-01-01

    The optical and microphysical structure of clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS on the NASA EOS Terra and Aqua platforms, simultaneous global-daily 1 km retrievals of cloud optical thickness (COT) and effective particle radius (CER) are provided, as well as the derived water path (WP). The cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate retrieval datasets for various two-channel retrievals, typically a VISNIR channel paired with a 1.6, 2.1, and 3.7 m spectral channel. The MOD06 forward model is derived from on a homogeneous plane-parallel cloud. In Collection 5 processing (completed in 2007 with a modified Collection 5.1 completed in 2010), pixel-level retrieval uncertainties were calculated for the following non-3-D error sources: radiometry, surface spectral albedo, and atmospheric corrections associated with model analysis uncertainties (water vapor only). The latter error source includes error correlation across the retrieval spectral channels. Estimates of uncertainty in 1 aggregated (Level-3) means were also provided assuming unity correlation between error sources for all pixels in a grid for a single day, and zero correlation of error sources from one day to the next. 
In Collection 6 (expected to begin in late summer 2013) we expanded the uncertainty analysis to include: (a) scene-dependent calibration uncertainty that depends on new band and detector-specific Level 1B uncertainties, (b) new model error sources derived from the look-up tables which includes sensitivities associated with wind direction over the ocean and uncertainties in liquid water and ice effective variance, (c) thermal emission uncertainties in the 3.7 μm band associated with cloud and surface temperatures that are needed to extract reflected solar radiation from the total radiance signal, (d) uncertainty in the solar spectral irradiance at 3.7 μm, and (e) addition of stratospheric ozone uncertainty in visible atmospheric corrections. A summary of the approach and example Collection 6 results will be shown.

  10. Uncertainty of passive imager cloud retrievals to instrument radiometry and model assumptions: Examples from MODIS Collection 6

    NASA Astrophysics Data System (ADS)

    Platnick, S.; Wind, G.; Amarasinghe, N.; Arnold, G. T.; Zhang, Z.; Meyer, K.; King, M. D.

    2013-12-01

    The optical and microphysical structure of clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness (COT) and effective particle radius (CER) are provided, as well as the derived water path (WP). The cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate retrieval datasets for various two-channel retrievals, typically a VIS/NIR channel paired with a 1.6, 2.1, and 3.7 μm spectral channel. The MOD06 forward model is derived from a homogeneous plane-parallel cloud. In Collection 5 processing (completed in 2007 with a modified Collection 5.1 completed in 2010), pixel-level retrieval uncertainties were calculated for the following non-3-D error sources: radiometry, surface spectral albedo, and atmospheric corrections associated with model analysis uncertainties (water vapor only). The latter error source includes error correlation across the retrieval spectral channels. Estimates of uncertainty in 1° aggregated (Level-3) means were also provided assuming unity correlation between error sources for all pixels in a grid for a single day, and zero correlation of error sources from one day to the next. 
In Collection 6 (expected to begin in late summer 2013) we expanded the uncertainty analysis to include: (a) scene-dependent calibration uncertainty that depends on new band and detector-specific Level 1B uncertainties, (b) new model error sources derived from the look-up tables which includes sensitivities associated with wind direction over the ocean and uncertainties in liquid water and ice effective variance, (c) thermal emission uncertainties in the 3.7 μm band associated with cloud and surface temperatures that are needed to extract reflected solar radiation from the total radiance signal, (d) uncertainty in the solar spectral irradiance at 3.7 μm, and (e) addition of stratospheric ozone uncertainty in visible atmospheric corrections. A summary of the approach and example Collection 6 results will be shown.

  11. On-Board Event-Based State Estimation for Trajectory Approaching and Tracking of a Vehicle

    PubMed Central

    Martínez-Rey, Miguel; Espinosa, Felipe; Gardel, Alfredo; Santos, Carlos

    2015-01-01

    For the problem of pose estimation of an autonomous vehicle using networked external sensors, the processing capacity and battery consumption of these sensors, as well as the communication channel load should be optimized. Here, we report an event-based state estimator (EBSE) consisting of an unscented Kalman filter that uses a triggering mechanism based on the estimation error covariance matrix to request measurements from the external sensors. This EBSE generates the events of the estimator module on-board the vehicle and, thus, allows the sensors to remain in stand-by mode until an event is generated. The proposed algorithm requests a measurement every time the estimation distance root mean squared error (DRMS) value, obtained from the estimator's covariance matrix, exceeds a threshold value. This triggering threshold can be adapted to the vehicle's working conditions rendering the estimator even more efficient. An example of the use of the proposed EBSE is given, where the autonomous vehicle must approach and follow a reference trajectory. By making the threshold a function of the distance to the reference location, the estimator can halve the use of the sensors with a negligible deterioration in the performance of the approaching maneuver. PMID:26102489
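The DRMS-based triggering rule described above can be sketched in a few lines (a minimal illustration, not the authors' implementation; the threshold values and the linear dependence on distance to the reference are hypothetical assumptions):

```python
import numpy as np

def drms(P_xy):
    """Distance root mean squared error from the 2-D position block
    of the estimation error covariance matrix."""
    return float(np.sqrt(np.trace(P_xy)))

def request_measurement(P_xy, dist_to_ref, base_threshold=0.3, gain=0.1):
    """Event trigger: request a measurement from the external sensors only
    when the DRMS exceeds a threshold that grows with the distance to the
    reference location (far from it, coarser estimates are tolerated)."""
    threshold = base_threshold + gain * dist_to_ref
    return drms(P_xy) > threshold

# Example: 0.09 m^2 position variance per axis, so DRMS = sqrt(0.18) ~ 0.42 m
P = np.diag([0.09, 0.09])
print(request_measurement(P, dist_to_ref=0.0))   # True: near the reference
print(request_measurement(P, dist_to_ref=10.0))  # False: far away
```

Making the threshold distance-dependent is what lets the estimator relax sensor usage during the approach phase.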

  12. Precision of channel catfish catch estimates using hoop nets in larger Oklahoma reservoirs

    USGS Publications Warehouse

    Stewart, David R.; Long, James M.

    2012-01-01

    Hoop nets are rapidly becoming the preferred gear type used to sample channel catfish Ictalurus punctatus, and many managers have reported that hoop nets effectively sample channel catfish in small impoundments (<200 ha). However, the utility and precision of this approach in larger impoundments have not been tested. We sought to determine how the number of tandem hoop net series affected the catch of channel catfish and the time involved in using 16 tandem hoop net series in larger impoundments (>200 ha). Hoop net series were fished once, set for 3 d; then we used Monte Carlo bootstrapping techniques that allowed us to estimate the number of net series required to achieve two levels of precision (relative standard errors [RSEs] of 15 and 25) at two levels of confidence (80% and 95%). Sixteen hoop net series were effective at obtaining an RSE of 25 with 80% and 95% confidence in all but one reservoir. Achieving an RSE of 15 was often less effective and required 18-96 hoop net series given the desired level of confidence. We estimated that an hour was needed, on average, to deploy and retrieve three hoop net series, which meant that 16 hoop net series per reservoir could be set within one day and retrieved within another. The estimated number of net series to achieve an RSE of 25 or 15 was positively associated with the coefficient of variation (CV) of the sample but not with reservoir surface area or relative abundance. Our results suggest that hoop nets are capable of providing reasonably precise estimates of channel catfish relative abundance and that the relationship with the CV of the sample reported herein can be used to determine the sampling effort for a desired level of precision.
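The bootstrapping step can be illustrated as follows (a sketch under stated assumptions: the catch counts, resampling scheme, and search over series counts are hypothetical stand-ins for the authors' Monte Carlo procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_rse(catches, n_series, n_boot=1000):
    """Bootstrap the relative standard error (RSE, percent) of the mean
    catch per net series when `n_series` series are fished."""
    means = np.asarray([rng.choice(catches, size=n_series, replace=True).mean()
                        for _ in range(n_boot)])
    return 100.0 * means.std(ddof=1) / means.mean()

def series_needed(catches, target_rse, max_series=100):
    """Smallest number of net series whose bootstrapped RSE meets the target."""
    for n in range(3, max_series + 1):
        if bootstrap_rse(catches, n) <= target_rse:
            return n
    return None

# Hypothetical catch counts from 16 tandem hoop-net series in one reservoir
catches = np.array([12, 5, 30, 8, 22, 3, 17, 9, 25, 6, 14, 11, 19, 4, 28, 10])
n_for_rse25 = series_needed(catches, target_rse=25)
n_for_rse15 = series_needed(catches, target_rse=15)
```

As in the study, the tighter precision target (RSE of 15) demands substantially more net series than the looser one, and the required effort scales with the sample CV.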

  13. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    PubMed

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state; open channel noise and fast flickers to other states were present, as were a substantial number of subconductance states.
"Standard" half-amplitude threshold analysis of these data produce open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data shows the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed.

  14. Monitoring sleepiness with on-board electrophysiological recordings for preventing sleep-deprived traffic accidents.

    PubMed

    Papadelis, Christos; Chen, Zhe; Kourtidou-Papadeli, Chrysoula; Bamidis, Panagiotis D; Chouvarda, Ioanna; Bekiaris, Evangelos; Maglaveras, Nikos

    2007-09-01

    The objective of this study is the development and evaluation of efficient neurophysiological signal statistics, which may assess the driver's alertness level and serve as potential indicators of sleepiness in the design of an on-board countermeasure system. Multichannel EEG, EOG, EMG, and ECG were recorded from sleep-deprived subjects exposed to real field driving conditions. A number of severe driving errors occurred during the experiments. The analysis was performed in two main dimensions: the macroscopic analysis that estimates the on-going temporal evolution of physiological measurements during the driving task, and the microscopic event analysis that focuses on alterations in the physiological measurements just before, during, and after the driving errors. Two independent neurophysiologists visually interpreted the measurements. The EEG data were analyzed by using both linear and non-linear analysis tools. We observed the occurrence of brief paroxysmal bursts of alpha activity and an increased synchrony among EEG channels before the driving errors. The alpha relative band ratio (RBR) significantly increased, and the Cross Approximate Entropy that quantifies the synchrony among channels also significantly decreased before the driving errors. Quantitative EEG analysis revealed significant variations of RBR by driving time in the frequency bands of delta, alpha, beta, and gamma. Most of the estimated EEG statistics, such as the Shannon Entropy, Kullback-Leibler Entropy, Coherence, and Cross-Approximate Entropy, were significantly affected by driving time. We also observed changes in eye-blink duration with increased driving time and a significant increase in the number and duration of eye blinks before driving errors. EEG and EOG are promising neurophysiological indicators of driver sleepiness and have the potential to monitor sleepiness in occupational settings when incorporated into a sleepiness countermeasure device.
The occurrence of brief paroxysmal bursts of alpha activity before severe driving errors is described in detail for the first time. Clear evidence is presented that eye-blinking statistics are sensitive to the driver's sleepiness and should be considered in the design of an efficient and driver-friendly sleepiness detection countermeasure device.

  15. Nimbus-7 Earth radiation budget calibration history. Part 1: The solar channels

    NASA Technical Reports Server (NTRS)

    Kyle, H. Lee; Hoyt, Douglas V.; Hickey, John R.; Maschhoff, Robert H.; Vallette, Brenda J.

    1993-01-01

    The Earth Radiation Budget (ERB) experiment on the Nimbus-7 satellite measured the total solar irradiance plus broadband spectral components on a nearly daily basis from 16 Nov. 1978, until 16 June 1992. Months of additional observations were taken in late 1992 and in 1993. The emphasis is on the electrically self-calibrating cavity radiometer, channel 10c, which recorded accurate total solar irradiance measurements over the whole period. The spectral channels did not have inflight calibration adjustment capabilities. These channels can, with some additional corrections, be used for short-term studies (one or two solar rotations - 27 to 60 days), but not for long-term trend analysis. For channel 10c, changing radiometer pointing, the zero offsets, the stability of the gain, the temperature sensitivity, and the influences of other platform instruments are all examined and their effects on the measurements considered. Only the question of relative accuracy (not absolute) is examined. The final channel 10c product is also compared with solar measurements made by independent experiments on other satellites. The Nimbus experiment showed that the mean solar energy was about 0.1 percent (1.4 W/m^2) higher in the excited Sun years of 1979 and 1991 than in the quiet Sun years of 1985 and 1986. The error analysis indicated that the measured long-term trends may be as accurate as +/- 0.005 percent. The worst-case error estimate is +/- 0.03 percent.

  16. Quantum biological channel modeling and capacity calculation.

    PubMed

    Djordjevic, Ivan B

    2012-12-10

    Quantum mechanics has an important role in photosynthesis, magnetoreception, and evolution. There were many attempts in an effort to explain the structure of genetic code and transfer of information from DNA to protein by using the concepts of quantum mechanics. The existing biological quantum channel models are not sufficiently general to incorporate all relevant contributions responsible for imperfect protein synthesis. Moreover, the problem of determination of quantum biological channel capacity is still an open problem. To solve these problems, we construct the operator-sum representation of biological channel based on codon basekets (basis vectors), and determine the quantum channel model suitable for study of the quantum biological channel capacity and beyond. The transcription process, DNA point mutations, insertions, deletions, and translation are interpreted as the quantum noise processes. The various types of quantum errors are classified into several broad categories: (i) storage errors that occur in DNA itself as it represents an imperfect storage of genetic information, (ii) replication errors introduced during DNA replication process, (iii) transcription errors introduced during DNA to mRNA transcription, and (iv) translation errors introduced during the translation process. By using this model, we determine the biological quantum channel capacity and compare it against corresponding classical biological channel capacity. We demonstrate that the quantum biological channel capacity is higher than the classical one, for a coherent quantum channel model, suggesting that quantum effects have an important role in biological systems. The proposed model is of crucial importance towards future study of quantum DNA error correction, developing quantum mechanical model of aging, developing the quantum mechanical models for tumors/cancer, and study of intracellular dynamics in general.
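The operator-sum (Kraus) form referred to above can be illustrated generically. The sketch below models a symbolic "point mutation" as a bit flip on a two-state basis with probability p; this toy channel is an assumption for illustration, not the paper's codon-basis channel model:

```python
import numpy as np

# Operator-sum (Kraus) representation: rho -> sum_k K_k rho K_k^dagger.
p = 0.1
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # flip operator
K0 = np.sqrt(1 - p) * I                 # no error
K1 = np.sqrt(p) * X                     # "mutation" (flip)

# Completeness relation sum_k K_k^dagger K_k = I guarantees trace preservation
completeness = K0.conj().T @ K0 + K1.conj().T @ K1
assert np.allclose(completeness, I)

def apply_channel(rho, kraus):
    """Apply the operator-sum map to a density matrix."""
    return sum(K @ rho @ K.conj().T for K in kraus)

rho = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure state |0><0|
rho_out = apply_channel(rho, [K0, K1])
print(np.diag(rho_out).real)               # [0.9 0.1]: population leaks 0 -> 1
```

The paper's storage, replication, transcription, and translation errors would each contribute their own Kraus operators in the same algebraic form.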

  17. Improving Forecast Skill by Assimilation of AIRS Cloud Cleared Radiances RiCC

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Rosenberg, Robert I.; Iredell, Lena

    2015-01-01

    ECMWF, NCEP, and GMAO routinely assimilate radiosonde and other in-situ observations along with satellite IR and MW sounder radiance observations. NCEP and GMAO use the NCEP GSI Data Assimilation System (DAS). The GSI DAS assimilates AIRS, CrIS, and IASI channel radiances Ri on a channel-by-channel, case-by-case basis, only for those channels i thought to be unaffected by cloud cover. This test excludes Ri for most tropospheric sounding channels under partial cloud cover conditions. AIRS Version-6 RiCC is a derived quantity representative of what AIRS channel i would have seen if the AIRS FOR were cloud free. All values of RiCC have case-by-case error estimates associated with them. Our experiments present quality-controlled (QCd) values of AIRS RiCC to the GSI in place of AIRS Ri observations. The GSI DAS assimilates only those values of RiCC it considers cloud free. This potentially allows for better coverage of assimilated QCd values of RiCC as compared to Ri.

  18. Data Assimilation Experiments Using Quality Controlled AIRS Version 5 Temperature Soundings

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2009-01-01

    The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. The AIRS Science Team Version 5 retrieval algorithm contains a number of significant improvements over Version 4. Two very significant improvements are described briefly below. 1) The AIRS Science Team Radiative Transfer Algorithm (RTA) has now been upgraded to accurately account for effects of non-local thermodynamic equilibrium on the AIRS observations. This allows for use of AIRS observations in the entire 4.3 micron CO2 absorption band in the retrieval algorithm during both day and night. Following theoretical considerations, tropospheric temperature profile information is obtained almost exclusively from clear column radiances in the 4.3 micron CO2 band in the AIRS Version 5 temperature profile retrieval step. These clear column radiances are a derived product indicative of the radiances AIRS channels would have seen if the field of view were completely clear. Clear column radiances for all channels are determined using tropospheric sounding 15 micron CO2 observations. This approach allows for the generation of accurate values of clear column radiances and T(p) under most cloud conditions. 2) Another very significant improvement in Version 5 is the ability to generate accurate case-by-case, level-by-level error estimates for the atmospheric temperature profile, as well as for channel-by-channel clear column radiances. These error estimates are used for quality control of the retrieved products. Based on error estimate thresholds, each temperature profile is assigned a characteristic pressure, pg, down to which the profile is characterized as good for use for data assimilation purposes.
We have conducted forecast impact experiments assimilating AIRS quality controlled temperature profiles using the NASA GEOS-5 data assimilation system, consisting of the NCEP GSI analysis coupled with the NASA FVGCM, at a spatial resolution of 0.5 deg by 0.5 deg. Assimilation of quality controlled AIRS temperature profiles down to pg resulted in significantly improved forecast skill compared to that obtained from experiments in which all data used operationally by NCEP, except for AIRS data, are assimilated. These forecasts were also significantly better than those obtained when AIRS radiances (rather than temperature profiles) are assimilated, which is the way AIRS data are used operationally by NCEP and ECMWF.

  19. Quantifying peak discharges for historical floods

    USGS Publications Warehouse

    Cook, J.L.

    1987-01-01

    It is usually advantageous to use information regarding historical floods, if available, to define the flood-frequency relation for a stream. Peak stages can sometimes be determined for outstanding floods that occurred many years ago before systematic gaging of streams began. In the United States, this information is usually not available for more than 100-200 years, but in countries with long cultural histories, such as China, historical flood data are available at some sites as far back as 2,000 years or more. It is important in flood studies to be able to assign a maximum discharge rate and an associated error range to the historical flood. This paper describes the significant characteristics and uncertainties of four commonly used methods for estimating the peak discharge of a flood. These methods are: (1) rating curve (stage-discharge relation) extension; (2) slope conveyance; (3) slope area; and (4) step backwater. Logarithmic extensions of rating curves are based on theoretical plotting techniques that result in straight-line extensions provided that channel shape and roughness do not change significantly. The slope-conveyance and slope-area methods are based on the Manning equation, which requires specific data on channel size, shape and roughness, as well as the water-surface slope for one or more cross-sections in a relatively straight reach of channel. The slope-conveyance method is used primarily for shaping and extending rating curves, whereas the slope-area method is used for specific floods. The step-backwater method, also based on the Manning equation, requires more cross-section data than the slope-area method, but has a water-surface profile convergence characteristic that negates the need for known or estimated water-surface slope. Uncertainties in calculating peak discharge for historical floods may be quite large.
Various investigations have shown that errors in calculating peak discharges by the slope-area method under ideal conditions for recent floods (i.e., when flood elevations, slope and channel characteristics are reasonably certain), may be on the order of 10-25%. Under less than ideal conditions, where streams are hydraulically steep and rough, errors may be much larger. The additional uncertainties for historical floods created by the passage of time may result in even larger errors of peak discharge. ?? 1987.
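The Manning equation underlying the slope-conveyance and slope-area methods can be applied as in the sketch below (SI units; the channel geometry, roughness coefficient, and water-surface slope are hypothetical values for illustration):

```python
import math

def manning_discharge(n, area, wetted_perimeter, slope):
    """Manning equation (SI units): Q = (1/n) * A * R^(2/3) * S^(1/2),
    the basis of the slope-conveyance and slope-area methods."""
    R = area / wetted_perimeter          # hydraulic radius, m
    return (1.0 / n) * area * R ** (2.0 / 3.0) * math.sqrt(slope)

# Hypothetical reach: rectangular channel 20 m wide flowing 3 m deep,
# roughness n = 0.035, water-surface slope 0.001
A = 20.0 * 3.0                # cross-sectional area, 60 m^2
P = 20.0 + 2 * 3.0            # wetted perimeter, 26 m
Q = manning_discharge(0.035, A, P, 0.001)
print(round(Q, 1))            # 94.7 m^3/s
```

The 10-25% errors quoted above for the slope-area method largely reflect uncertainty in n and in the surveyed slope, both of which enter this formula directly.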

  20. Multi-photon self-error-correction hyperentanglement distribution over arbitrary collective-noise channels

    NASA Astrophysics Data System (ADS)

    Gao, Cheng-Yan; Wang, Guan-Yu; Zhang, Hao; Deng, Fu-Guo

    2017-01-01

    We present a self-error-correction spatial-polarization hyperentanglement distribution scheme for N-photon systems in a hyperentangled Greenberger-Horne-Zeilinger state over arbitrary collective-noise channels. In our scheme, the errors of spatial entanglement can be first averted by encoding the spatial-polarization hyperentanglement into the time-bin entanglement with identical polarization and defined spatial modes before it is transmitted over the fiber channels. After transmission over the noisy channels, the polarization errors introduced by the depolarizing noise can be corrected resorting to the time-bin entanglement. Finally, the parties in quantum communication can in principle share maximally hyperentangled states with a success probability of 100%.

  1. Channel modeling, signal processing and coding for perpendicular magnetic recording

    NASA Astrophysics Data System (ADS)

    Wu, Zheng

    With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. 
Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.

  2. Bankfull characteristics of Ohio streams and their relation to peak streamflows

    USGS Publications Warehouse

    Sherwood, James M.; Huitger, Carrie A.

    2005-01-01

    Regional curves, simple-regression equations, and multiple-regression equations were developed to estimate bankfull width, bankfull mean depth, bankfull cross-sectional area, and bankfull discharge of rural, unregulated streams in Ohio. The methods are based on geomorphic, basin, and flood-frequency data collected at 50 study sites on unregulated natural alluvial streams in Ohio, of which 40 sites are near streamflow-gaging stations. The regional curves and simple-regression equations relate the bankfull characteristics to drainage area. The multiple-regression equations relate the bankfull characteristics to drainage area, main-channel slope, main-channel elevation index, median bed-material particle size, bankfull cross-sectional area, and local-channel slope. Average standard errors of prediction for bankfull width equations range from 20.6 to 24.8 percent; for bankfull mean depth, 18.8 to 20.6 percent; for bankfull cross-sectional area, 25.4 to 30.6 percent; and for bankfull discharge, 27.0 to 78.7 percent. The simple-regression (drainage-area only) equations have the highest average standard errors of prediction. The multiple-regression equations in which the explanatory variables included drainage area, main-channel slope, main-channel elevation index, median bed-material particle size, bankfull cross-sectional area, and local-channel slope have the lowest average standard errors of prediction. Field surveys were done at each of the 50 study sites to collect the geomorphic data. Bankfull indicators were identified and evaluated, cross-section and longitudinal profiles were surveyed, and bed- and bank-material were sampled. Field data were analyzed to determine various geomorphic characteristics such as bankfull width, bankfull mean depth, bankfull cross-sectional area, bankfull discharge, streambed slope, and bed- and bank-material particle-size distribution. 
The various geomorphic characteristics were analyzed by means of a combination of graphical and statistical techniques. The logarithms of the annual peak discharges for the 40 gaged study sites were fit by a Pearson Type III frequency distribution to develop flood-peak discharges associated with recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The peak-frequency data were related to geomorphic, basin, and climatic variables by multiple-regression analysis. Simple-regression equations were developed to estimate 2-, 5-, 10-, 25-, 50-, and 100-year flood-peak discharges of rural, unregulated streams in Ohio from bankfull channel cross-sectional area. The average standard errors of prediction are 31.6, 32.6, 35.9, 41.5, 46.2, and 51.2 percent, respectively. The study and methods developed are intended to improve understanding of the relations between geomorphic, basin, and flood characteristics of streams in Ohio and to aid in the design of hydraulic structures, such as culverts and bridges, where stability of the stream and structure is an important element of the design criteria. The study was done in cooperation with the Ohio Department of Transportation and the U.S. Department of Transportation, Federal Highway Administration.
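A drainage-area-only regression of the kind described can be fit in log space as sketched below (the data points, and the rough percent conversion of the log-space standard error, are hypothetical illustrations, not values from the study):

```python
import numpy as np

# Fit a power-law regional curve W = a * DA^b (bankfull width vs. drainage
# area) by ordinary least squares on log10-transformed values.
# The data below are hypothetical.
da = np.array([5.0, 12.0, 40.0, 90.0, 210.0, 500.0])      # drainage area, mi^2
width = np.array([8.0, 13.0, 24.0, 35.0, 55.0, 85.0])     # bankfull width, ft

b, log_a = np.polyfit(np.log10(da), np.log10(width), 1)
a = 10 ** log_a
pred = a * da ** b

# Standard error of prediction, converted roughly from log units to percent
resid = np.log10(width) - np.log10(pred)
se_log = resid.std(ddof=2)
se_pct = 100.0 * (10 ** se_log - 1)
print(f"W = {a:.2f} * DA^{b:.2f}, SE ~ {se_pct:.1f}%")
```

The multiple-regression equations in the study add further explanatory variables (slope, elevation index, particle size) to the same log-linear framework to reduce the standard error.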

  3. Sparsity-driven coupled imaging and autofocusing for interferometric SAR

    NASA Astrophysics Data System (ADS)

    Zengin, Oğuzcan; Khwaja, Ahmed Shaharyar; Çetin, Müjdat

    2018-04-01

    We propose a sparsity-driven method for coupled image formation and autofocusing based on multi-channel data collected in interferometric synthetic aperture radar (IfSAR). Relative phase between SAR images contains valuable information. For example, it can be used to estimate the height of the scene in SAR interferometry. However, this relative phase could be degraded when independent enhancement methods are used over SAR image pairs. Previously, Ramakrishnan et al. proposed a coupled multi-channel image enhancement technique, based on a dual descent method, which exhibits better performance in phase preservation compared to independent enhancement methods. Their work involves a coupled optimization formulation that uses a sparsity enforcing penalty term as well as a constraint tying the multichannel images together to preserve the cross-channel information. In addition to independent enhancement, the relative phase between the acquisitions can be degraded due to other factors as well, such as platform location uncertainties, leading to phase errors in the data and defocusing in the formed imagery. The performance of airborne SAR systems can be affected severely by such errors. We propose an optimization formulation that combines Ramakrishnan et al.'s coupled IfSAR enhancement method with the sparsity-driven autofocus (SDA) approach of Önhon and Çetin to alleviate the effects of phase errors due to motion errors in the context of IfSAR imaging. Our method solves the joint optimization problem with a Lagrangian optimization method iteratively. In our preliminary experimental analysis, we have obtained results of our method on synthetic SAR images and compared its performance to existing methods.

  4. Blind ICA detection based on second-order cone programming for MC-CDMA systems

    NASA Astrophysics Data System (ADS)

    Jen, Chih-Wei; Jou, Shyh-Jye

    2014-12-01

    The multicarrier code division multiple access (MC-CDMA) technique has received considerable interest for its potential application to future wireless communication systems due to its high data rate. A common problem regarding the blind multiuser detectors used in MC-CDMA systems is that they are extremely sensitive to the complex channel environment. Besides, the perturbation of colored noise may negatively affect the performance of the system. In this paper, a new coherent detection method will be proposed, which utilizes the modified fast independent component analysis (FastICA) algorithm, based on approximate negentropy maximization that is subject to the second-order cone programming (SOCP) constraint. The aim of the proposed coherent detection is to provide robustness against small-to-medium channel estimation mismatch (CEM) that may arise from channel frequency response estimation error in the MC-CDMA system, which is modulated by downlink binary phase-shift keying (BPSK) under colored noise. Noncoherent demodulation schemes are preferable to coherent demodulation schemes, as the latter are difficult to implement over time-varying fading channels. Differential phase-shift keying (DPSK) is therefore the natural choice for an alternative modulation scheme. Furthermore, the new blind differential SOCP-based ICA (SOCP-ICA) detection without channel estimation and compensation will be proposed to combat Doppler spread caused by time-varying fading channels in the DPSK-modulated MC-CDMA system under colored noise. In this paper, numerical simulations are used to illustrate the robustness of the proposed blind coherent SOCP-ICA detector against small-to-medium CEM and to emphasize the advantage of the blind differential SOCP-ICA detector in overcoming Doppler spread.

  5. Electrode channel selection based on backtracking search optimization in motor imagery brain-computer interfaces.

    PubMed

    Dai, Shengfa; Wei, Qingguo

    2017-01-01

    The common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, using a large number of channels makes common spatial pattern prone to over-fitting and makes the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of the available channels to save computational time and improve classification accuracy. In this paper, a novel method based on the backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional binary vector, with each component representing one channel. A population of binary codes is generated randomly at the start, and channels are then selected according to the evolution of these codes: the number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with far fewer channels than standard common spatial pattern using all channels.
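    The binary channel coding and the combined objective function described above can be sketched as follows (a minimal illustration with assumed channel count, population size, and weighting; the classifier error rate is stubbed with random values rather than an actual CSP-plus-classifier pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 22      # assumed electrode count
POP_SIZE = 10        # assumed population size
LAM = 0.1            # assumed weight on the relative channel count

def fitness(code, error_rate):
    """Objective: classification error plus penalty on relative channel count."""
    n_selected = code.sum()
    if n_selected == 0:
        return np.inf                      # empty channel subsets are invalid
    return error_rate + LAM * n_selected / N_CHANNELS

# random initial population of binary codes; 1 = channel selected
population = (rng.random((POP_SIZE, N_CHANNELS)) > 0.5).astype(int)

# stand-in for evaluating CSP + classifier on each channel subset
error_rates = rng.uniform(0.1, 0.4, POP_SIZE)

scores = [fitness(ind, err) for ind, err in zip(population, error_rates)]
best = population[int(np.argmin(scores))]
print("best subset uses", best.sum(), "channels")
```

    In the full algorithm, the population would then evolve through the backtracking search operators (historical population, mutation, crossover) with this fitness guiding selection.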

  6. Estimating Extracellular Spike Waveforms from CA1 Pyramidal Cells with Multichannel Electrodes

    PubMed Central

    Molden, Sturla; Moldestad, Olve; Storm, Johan F.

    2013-01-01

    Extracellular (EC) recordings of action potentials from the intact brain are embedded in background voltage fluctuations known as the “local field potential” (LFP). In order to use EC spike recordings for studying biophysical properties of neurons, the spike waveforms must be separated from the LFP. Linear low-pass and high-pass filters are usually insufficient to separate spike waveforms from the LFP because their frequency bands overlap. Broad-band recordings of LFP and spikes were obtained with a 16-channel laminar electrode array (silicon probe). We developed an algorithm whereby the local LFP signal on a spike-containing channel is modeled using locally weighted polynomial regression on adjoining channels without spikes. The modeled LFP signal was subtracted from the recording to estimate the embedded spike waveforms. We tested the method both on defined spike waveforms added to LFP recordings and on in vivo-recorded extracellular spikes from hippocampal CA1 pyramidal cells in anaesthetized mice. We show that the algorithm can correctly extract the spike waveforms embedded in the LFP. In contrast, traditional high-pass filters failed to recover correct spike shapes, albeit producing smaller standard errors. We found that high-pass RC or 2-pole Butterworth filters with cut-off frequencies below 12.5 Hz are required to retrieve waveforms comparable to our method. The method was also compared to spike-triggered averages of the broad-band signal, and yielded waveforms with smaller standard errors and less distortion before and after the spike. PMID:24391714
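    The channel-wise LFP modeling step can be sketched as follows (a simplified illustration on synthetic data: an ordinary quadratic polynomial fit across channel depth stands in for the authors' locally weighted regression, and all sizes and the excluded-neighbor rule are assumptions):

```python
import numpy as np

n_ch, n_t = 16, 200
depths = np.arange(n_ch, dtype=float)          # probe site positions (assumed unit spacing)

# synthetic LFP that varies smoothly across depth; a brief "spike" added on channel 8
lfp = np.sin(depths[:, None] / 5.0) * np.cos(np.linspace(0, 4 * np.pi, n_t))[None, :]
spike = np.zeros(n_t)
spike[100:105] = -1.0                          # brief negative deflection
data = lfp.copy()
data[8] += spike

spike_ch = 8
# model the LFP from channels away from the spike channel (neighbors excluded)
others = np.array([c for c in range(n_ch) if abs(c - spike_ch) > 1])

est_spike = np.empty(n_t)
for t in range(n_t):
    # polynomial fit of instantaneous LFP vs depth on spike-free channels
    coef = np.polyfit(depths[others], data[others, t], deg=2)
    lfp_model = np.polyval(coef, depths[spike_ch])
    # subtracting the modeled LFP leaves an estimate of the spike waveform
    est_spike[t] = data[spike_ch, t] - lfp_model

print(est_spike[102])   # ≈ -1.0, where the spike was added
```

    Away from the spike the residual is close to zero, so the subtraction recovers the embedded waveform without the frequency-overlap distortion of a high-pass filter.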

  7. Theoretical and experimental studies of turbo product code with time diversity in free space optical communication.

    PubMed

    Han, Yaoqiang; Dang, Anhong; Ren, Yongxiong; Tang, Junxiong; Guo, Hong

    2010-12-20

    In free space optical communication (FSOC) systems, channel fading caused by atmospheric turbulence seriously degrades system performance. However, channel coding combined with diversity techniques can be exploited to mitigate channel fading. In this paper, based on an experimental study of channel fading effects, we propose to use turbo product code (TPC) as the channel coding scheme, which features good resistance to burst errors and no error floor. Since channel coding alone cannot cope with the burst errors caused by channel fading, interleaving is also used. We investigate the efficiency of interleaving for different interleaving depths and determine the optimum interleaving depth for TPC. Finally, an experimental study of TPC with interleaving is demonstrated, and we show that TPC with interleaving can significantly mitigate channel fading in FSOC systems.

  8. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantages that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.

  10. Enabling vendor independent photoacoustic imaging systems with asynchronous laser source

    NASA Astrophysics Data System (ADS)

    Wu, Yixuan; Zhang, Haichong K.; Boctor, Emad M.

    2018-02-01

    Channel data acquisition and synchronization between laser excitation and PA signal acquisition are two fundamental hardware requirements for photoacoustic (PA) imaging. Unfortunately, neither is supported by most clinical ultrasound scanners. Therefore, less economical specialized research platforms are generally used, which hinders a smooth clinical transition of PA imaging. In previous studies, we proposed an algorithm to achieve PA imaging using ultrasound post-beamformed (USPB) RF data instead of channel data. This work focuses on enabling clinical ultrasound scanners to perform PA imaging without requiring synchronization between the laser excitation and PA signal acquisition. Laser synchronization inherently consists of two aspects: frequency and phase information. We synchronize without any communication between the laser and the ultrasound scanner by investigating USPB images of a point-target phantom in two steps. First, the frequency information is estimated by solving a nonlinear optimization problem, under the assumption that the segmented wave-front can only be beamformed into a single spot when synchronization is achieved. Second, after making the frequencies of the two systems identical, the phase delay is estimated by optimizing image quality while varying the phase value. The proposed method is validated through simulation by manually adding both frequency and phase errors, then applying the proposed algorithm to correct the errors and reconstruct PA images. Compared with the ground truth, simulation results indicate that the remaining errors in frequency correction and phase correction are 0.28% and 2.34%, respectively, which affirms the potential of overcoming hardware barriers to PA imaging through a software solution.

  11. Reliability of fish size estimates obtained from multibeam imaging sonar

    USGS Publications Warehouse

    Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.

    2013-01-01

    Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error, calculated as (sonar estimate − total length)/total length × 100, varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-squares mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄ = −8.34, SE = 2.39) and white perch (x̄ = 14.48, SE = 3.99) but not striped bass (x̄ = 3.71, SE = 2.58) or channel catfish (x̄ = 3.97, SE = 5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). 
These sources of bias are apparent when files are processed manually and can be filtered out when producing automated software estimates. Multibeam sonar estimates of fish size should be useful for research and management if these potential sources of bias and imprecision are addressed.

  12. Exchanges of sediment between the flood plain and channel of the Amazon River in Brazil

    USGS Publications Warehouse

    Dunne, T.; Mertes, L.A.K.; Meade, R.H.; Richey, J.E.; Forsberg, B.R.

    1998-01-01

    Sediment transport through the Brazilian sector of the Amazon River valley, a distance of 2010 km, involves exchanges between the channel and the flood plain that in each direction exceed the annual flux of sediment out of the river at Óbidos (≈1200 Mt yr-1). The exchanges occur through bank erosion, bar deposition, settling from diffuse overbank flow, and sedimentation in flood-plain channels. We estimated the magnitude of these exchanges for each of 10 reaches of the valley, and combined them with calculations of sediment transport into and out of the reaches based on sediment sampling and flow records to define a sediment budget for each reach. Residuals in the sediment budget of a reach include errors of estimation and erosion or deposition within the channel. The annual supply of sediment entering the channel from bank erosion was estimated to average 1570 Mt yr-1 (1.3 × the Óbidos flux), and the amount transferred from channel transport to the bars (380 Mt yr-1) and the flood plain (460 Mt yr-1 in channelized flow; 1230 Mt yr-1 in diffuse overbank flow) totaled 2070 Mt yr-1 (1.7 × the Óbidos flux). Thus, deposition on the bars and flood plain exceeded bank erosion by 500 Mt yr-1 over a 10–16 yr period. Sampling and calculation of sediment loads in the channel indicate a net accumulation in the valley floor of approximately 200 Mt yr-1 over 16 yr, crudely validating the process-based calculations of the sediment budget, which in turn illuminate the physical controls on each exchange process. Another 300–400 Mt yr-1 are deposited in a delta plain downstream of Óbidos. The components of the sediment budget reflect hydrologic characteristics of the valley floor and geomorphic characteristics of the channel and flood plain, which in turn are influenced by tectonic features of the Amazon structural trough.
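    The fluxes quoted above can be cross-checked with simple arithmetic (the values are the abstract's rounded figures):

```python
# Annual sediment fluxes from the abstract, in Mt/yr
obidos_flux = 1200          # flux out of the river at Óbidos (approximate)
bank_erosion = 1570         # supply to the channel from bank erosion
to_bars = 380
to_floodplain_channelized = 460
to_floodplain_diffuse = 1230

deposition = to_bars + to_floodplain_channelized + to_floodplain_diffuse
print(deposition)                       # 2070 Mt/yr total transferred out of the channel
print(deposition - bank_erosion)        # 500 Mt/yr excess of deposition over bank erosion
print(bank_erosion / obidos_flux,
      deposition / obidos_flux)         # ≈1.3 and ≈1.7, the multiples quoted above
```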

  13. Covariance Matrix Estimation for Massive MIMO

    NASA Astrophysics Data System (ADS)

    Upadhya, Karthik; Vorobyov, Sergiy A.

    2018-04-01

    We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
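    The core idea, that the sample cross-correlation of two channel estimates with independent estimation noise removes the noise bias that a sample auto-correlation would carry, can be sketched as follows (a simplified illustration with an assumed exponential-correlation covariance model; the random phase-shift and staggered-pilot aspects of the proposed scheme are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 8, 5000                 # antennas and coherence blocks (assumed)
sigma2 = 0.5                   # channel-estimation noise variance (assumed)

# ground-truth covariance: exponential correlation model (assumed)
R = 0.7 ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
L = np.linalg.cholesky(R)

def cn(shape):
    """Unit-variance circularly symmetric complex Gaussian samples."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h = L @ cn((M, K))                         # true channels, covariance R
h1 = h + np.sqrt(sigma2) * cn((M, K))      # estimate from pilot 1
h2 = h + np.sqrt(sigma2) * cn((M, K))      # estimate from pilot 2 (independent noise)

R_cross = (h1 @ h2.conj().T) / K           # cross-correlation: noise terms average out
R_auto = (h1 @ h1.conj().T) / K            # auto-correlation: biased by sigma2 * I

print(np.abs(np.diag(R_cross)).mean())     # ~1.0 (matches diag(R))
print(np.abs(np.diag(R_auto)).mean())      # ~1.5 (inflated by the noise variance)
```

    Because the two pilot estimates see independent noise realizations, E[h1 h2^H] equals the true covariance R, whereas E[h1 h1^H] = R + sigma2 * I.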

  14. A polarization-division multiplexing SSB-OFDM system with beat interference cancellation receivers

    NASA Astrophysics Data System (ADS)

    Yang, Peiling; Ma, Jianxin; Zhang, Junyi

    2018-06-01

    In this paper, we propose a polarization-division multiplexing (PDM) single-sideband optical orthogonal frequency division multiplexing (SSB-OOFDM) scheme with signal-signal beat interference cancellation receivers with balanced detection (ICRBD). This system can double channel capacity and improve spectrum efficiency (SE) with a reduced guard band (GB) thanks to the PDM. A multiple-input multiple-output (MIMO) technique is used to resolve polarization mode dispersion (PMD) in conjunction with channel estimation and equalization. By simulation, we demonstrate the efficacy of the proposed technique for a 2 × 40 Gbit/s 16-QAM SSB-PDM-OOFDM system in terms of the error vector magnitude (EVM) and constellation diagrams.

  15. Estimation of liquid water cloud height and fraction using simulated AMSU-A and MHS data. [Advanced Microwave Sounding Unit and Microwave Humidity Sounder

    NASA Technical Reports Server (NTRS)

    Huang, Hung-Lung; Diak, George R.

    1992-01-01

    The rms retrieval errors in cloud top pressure for fully overcast conditions over both land and water surfaces are shown for AMSU-A oxygen channel pair 3 and 5 and MHS water vapor channel pair 4 and 5. For both pairs, the decrease in retrieval skill from high to low cloud is evident for almost all liquid water contents. For high and medium cloud, the water vapor pair outperforms the oxygen pair. Retrieval accuracy is best for high and middle clouds and degrades as the cloud top is lower in the atmosphere.

  16. Lessons Learned from AIRS: Improved Determination of Surface and Atmospheric Temperatures Using Only Shortwave AIRS Channels

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2011-01-01

    This slide presentation reviews the use of the shortwave channels available to the Atmospheric Infrared Sounder (AIRS) to improve the determination of surface and atmospheric temperatures. The AIRS instrument is compared with the Infrared Atmospheric Sounding Interferometer (IASI) on board the MetOp-A satellite. The objectives of AIRS/AMSU were to (1) provide real-time observations to improve numerical weather prediction via data assimilation, (2) provide observations to measure and explain interannual variability and trends, and (3) use AIRS product error estimates to allow quality control optimized for each application. Successive versions of the AIRS retrieval methodology have shown significant improvement.

  17. Information content of OCO-2 oxygen A-band channels for retrieving marine liquid cloud properties

    NASA Astrophysics Data System (ADS)

    Richardson, Mark; Stephens, Graeme L.

    2018-03-01

    Information content analysis is used to select channels for a marine liquid cloud retrieval using the high-spectral-resolution oxygen A-band instrument on NASA's Orbiting Carbon Observatory-2 (OCO-2). Desired retrieval properties are cloud optical depth, cloud-top pressure and cloud pressure thickness, which is the geometric thickness expressed in hectopascals. Based on information content criteria we select a micro-window of 75 of the 853 functioning OCO-2 channels spanning 763.5–764.6 nm and perform a series of synthetic retrievals with perturbed initial conditions. We estimate posterior errors from the sample standard deviations and obtain ±0.75 in optical depth and ±12.9 hPa in both cloud-top pressure and cloud pressure thickness, although removing the 10 % of samples with the highest χ2 reduces the posterior error in cloud-top pressure to ±2.9 hPa and in cloud pressure thickness to ±2.5 hPa. The application of this retrieval to real OCO-2 measurements is briefly discussed, along with its limitations; the greatest caution is urged regarding the assumption of a single homogeneous cloud layer, which is often, but not always, a reasonable approximation for marine boundary layer clouds.

  18. Expeditious reconciliation for practical quantum key distribution

    NASA Astrophysics Data System (ADS)

    Nakassis, Anastase; Bienfang, Joshua C.; Williams, Carl J.

    2004-08-01

    The paper proposes algorithmic and environmental modifications to the extant reconciliation algorithms within the BB84 protocol so as to speed up reconciliation and privacy amplification. These algorithms have been known to be a performance bottleneck [1] and can process data at rates six times slower than the quantum channel they serve [2]. As improvements in single-photon sources and detectors are expected to improve the quantum channel throughput by two or three orders of magnitude, it becomes imperative to improve the performance of the classical software. We developed a Cascade-like algorithm that relies on a symmetric formulation of the problem, error estimation through the segmentation process, outright elimination of segments with many errors, Forward Error Correction, recognition of the distinct data subpopulations that emerge as the algorithm runs, the ability to operate on massive amounts of data (of the order of 1 Mbit), and a few other minor improvements. The data from the experimental algorithm we developed show that by operating on massive arrays of data we can improve software performance by better than three orders of magnitude while retaining nearly as many bits (typically more than 90%) as the algorithms that were designed for optimal bit retention.
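    The bisection step at the heart of Cascade-like reconciliation can be sketched as follows (a generic illustration, not the authors' implementation; each loop iteration corresponds to one exchanged parity over the classical channel):

```python
def parity(bits, lo, hi):
    """Parity of bits[lo:hi]."""
    return sum(bits[lo:hi]) % 2

def binary_locate(alice, bob, lo, hi):
    """Locate one error in bob[lo:hi], assuming the block parities differ.
    Bisects by comparing the parity of the left half at each step."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice, lo, mid) != parity(bob, lo, mid):
            hi = mid                      # odd number of errors in the left half
        else:
            lo = mid                      # left halves match: error is on the right
    return lo

alice = [1, 0, 1, 1, 0, 0, 1, 0]
bob = alice.copy()
bob[5] ^= 1                               # one transmission error

pos = binary_locate(alice, bob, 0, len(bob))
bob[pos] ^= 1                             # correct the located bit
print(pos, bob == alice)                  # 5 True
```

    Cascade repeats this over shuffled blocks of increasing size; each parity disclosed leaks one bit of information, which is later removed during privacy amplification.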

  19. Analysis of synchronous digital-modulation schemes for satellite communication

    NASA Technical Reports Server (NTRS)

    Takhar, G. S.; Gupta, S. C.

    1975-01-01

    The multipath communication channel for space communications is modeled as a multiplicative channel. This paper discusses the effects of multiplicative channel processes on the symbol error rate for quadrature modulation (QM) digital modulation schemes. An expression for the upper bound on the probability of error is derived and numerically evaluated. The results are compared with those obtained for additive channels.

  20. Design of a digital voice data compression technique for orbiter voice channels

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps. Good voice quality, speaker recognition, and robustness in the presence of error bursts were considered. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or only minimally degraded at 0.001 and 0.01 Viterbi decoder bit error rates. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in space shuttle orbiters.

  1. Transfer Error and Correction Approach in Mobile Network

    NASA Astrophysics Data System (ADS)

    Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou

    With the development of information technology and social progress, human demand for information has become increasingly diverse: wherever and whenever, people want to be able to communicate easily, quickly and flexibly via voice, data, images, video and other means. Visual information is direct and vivid, so image and video transmission has also received widespread attention. Although third-generation mobile communication systems and IP networks have emerged and developed rapidly, making video communication a major wireless communication service, real wireless and IP channels introduce errors, such as those caused by multipath fading in wireless channels and packet loss in IP networks. Because of channel bandwidth limitations, video data must be heavily compressed, and the compressed data are very sensitive to errors, so channel errors cause a serious decline in image quality.

  2. Multichannel myopic deconvolution in underwater acoustic channels via low-rank recovery

    PubMed Central

    Tian, Ning; Byun, Sung-Hoon; Sabra, Karim; Romberg, Justin

    2017-01-01

    This paper presents a technique for solving the multichannel blind deconvolution problem. The authors observe the convolution of a single (unknown) source with K different (unknown) channel responses; from these channel outputs, the authors want to estimate both the source and the channel responses. The authors show how this classical signal processing problem can be viewed as solving a system of bilinear equations, and in turn can be recast as recovering a rank-1 matrix from a set of linear observations. Results of prior studies in the area of low-rank matrix recovery have identified effective convex relaxations for problems of this type and efficient, scalable heuristic solvers that enable these techniques to work with thousands of unknown variables. The authors show how a priori information about the channels can be used to build a linear model for the channels, which in turn makes solving these systems of equations well-posed. This study demonstrates the robustness of this methodology to measurement noises and parametrization errors of the channel impulse responses with several stylized and shallow water acoustic channel simulations. The performance of this methodology is also verified experimentally using shipping noise recorded on short bottom-mounted vertical line arrays. PMID:28599565
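    For two channels, the bilinear structure described above can be illustrated with the classical cross-relation identity y1 * h2 = y2 * h1, which turns blind deconvolution into a linear null-space problem (a noiseless sketch with assumed signal and channel lengths; the paper's rank-1 lifting and scalable low-rank recovery machinery is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 100, 4                      # source length and channel length (assumed)

s = rng.standard_normal(N)         # unknown source
h1 = rng.standard_normal(L)        # unknown channel responses
h2 = rng.standard_normal(L)
y1 = np.convolve(s, h1)            # observed channel outputs
y2 = np.convolve(s, h2)

def conv_matrix(y, L):
    """T such that T @ h == np.convolve(y, h) for len(h) == L."""
    n = len(y)
    T = np.zeros((n + L - 1, L))
    for j in range(L):
        T[j:j + n, j] = y
    return T

# cross-relation: y1*h2 - y2*h1 = 0  ->  [T(y1) | -T(y2)] @ [h2; h1] = 0
A = np.hstack([conv_matrix(y1, L), -conv_matrix(y2, L)])
_, _, Vt = np.linalg.svd(A)
v = Vt[-1]                         # null vector, proportional to [h2; h1]
h2_est, h1_est = v[:L], v[L:]

# blind estimates are recovered up to a common scale factor
scale = h1[0] / h1_est[0]
print(np.allclose(h1_est * scale, h1, atol=1e-6),
      np.allclose(h2_est * scale, h2, atol=1e-6))   # True True
```

    With noise, the null vector becomes a least-squares minimizer, and the a priori channel parametrization discussed in the abstract is what keeps the problem well-posed.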

  3. Spectrally based bathymetric mapping of a dynamic, sand‐bedded channel: Niobrara River, Nebraska, USA

    USGS Publications Warehouse

    Dilbone, Elizabeth; Legleiter, Carl; Alexander, Jason S.; McElroy, Brandon

    2018-01-01

    Methods for spectrally based mapping of river bathymetry have been developed and tested in clear‐flowing, gravel‐bed channels, with limited application to turbid, sand‐bed rivers. This study used hyperspectral images and field surveys from the dynamic, sandy Niobrara River to evaluate three depth retrieval methods. The first regression‐based approach, optimal band ratio analysis (OBRA), paired in situ depth measurements with image pixel values to estimate depth. The second approach used ground‐based field spectra to calibrate an OBRA relationship. The third technique, image‐to‐depth quantile transformation (IDQT), estimated depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image‐derived variable. OBRA yielded the lowest depth retrieval mean error (0.005 m) and highest observed versus predicted R2 (0.817). Although misalignment between field and image data did not compromise the performance of OBRA in this study, poor georeferencing could limit regression‐based approaches such as OBRA in dynamic, sand‐bedded rivers. Field spectroscopy‐based depth maps exhibited a mean error with a slight shallow bias (0.068 m) but provided reliable estimates for most of the study reach. IDQT had a strong deep bias but provided informative relative depth maps. Overprediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the depth CDF. Although each of the techniques we tested demonstrated potential to provide accurate depth estimates in sand‐bed rivers, each method also was subject to certain constraints and limitations.
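    The OBRA step, regressing depth against the log-ratio of every band pair and keeping the pair with the highest R², can be sketched as follows (synthetic data with an assumed per-band exponential attenuation model; real OBRA pairs image pixel values with field depth measurements):

```python
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_bands = 200, 5            # field observations and image bands (assumed)

depth = rng.uniform(0.1, 2.0, n_obs)

# synthetic radiance: each band attenuates with depth at a different rate
bands = np.empty((n_bands, n_obs))
for b in range(n_bands):
    atten = 0.2 + 0.4 * b          # assumed per-band attenuation coefficient
    bands[b] = np.exp(-atten * depth) * (1 + 0.01 * rng.standard_normal(n_obs))

def r_squared(x, y):
    """R^2 of a linear regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

# optimal band ratio: the pair whose log-ratio best predicts depth
best = max(((i, j) for i in range(n_bands) for j in range(i + 1, n_bands)),
           key=lambda p: r_squared(np.log(bands[p[0]] / bands[p[1]]), depth))
i, j = best
x = np.log(bands[i] / bands[j])
slope, intercept = np.polyfit(x, depth, 1)
print(best, round(r_squared(x, depth), 3))
```

    The fitted slope and intercept then map the image-derived log-ratio to depth across the reach, which is why georeferencing errors between field and image data can degrade this regression-based approach.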

  4. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    PubMed Central

    Patlak, J B

    1993-01-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. 
"Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data, shows the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed. PMID:7690261
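    The windowed mean-variance computation at the core of the method can be sketched as follows (synthetic two-level data with assumed current levels, noise, window width, and variance threshold):

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic single-channel record: closed (0 pA) and open (-2 pA) dwells plus noise
levels = np.r_[np.zeros(300), -2.0 * np.ones(200), np.zeros(300)]
trace = levels + 0.25 * rng.standard_normal(levels.size)

N = 20                                      # window width in samples (assumed)
windows = np.lib.stride_tricks.sliding_window_view(trace, N)
means = windows.mean(axis=1)
variances = windows.var(axis=1)

# low-variance windows lie entirely within one current level;
# windows straddling a transition have high variance and are excluded
low_var = variances < 2.0 * 0.25 ** 2       # assumed threshold
hist, xedges, yedges = np.histogram2d(means[low_var], variances[low_var], bins=30)

# the defined levels appear as separate low-variance clusters along the mean axis
closed_mean = means[low_var][means[low_var] > -1].mean()
open_mean = means[low_var][means[low_var] < -1].mean()
print(round(closed_mean, 2), round(open_mean, 2))
```

    Repeating this for increasing window widths N and counting the events remaining in each low-variance region yields the dwell-time information described in the abstract.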

  5. Errors in Martian paleodischarges skew interpretations of hydrologic history: Case study of the Aeolis Dorsa, Mars, with insights from the Quinn River, NV

    NASA Astrophysics Data System (ADS)

    Jacobsen, Robert E.; Burr, Devon M.

    2018-03-01

    Changes in Martian fluvial geomorphology with time-stratigraphic age, including decreases in paleochannel widths, suggest waning paleodischarges through time. Where fluvial landforms do not preserve paleochannel widths (e.g., meander deposits), other landform dimensions (i.e., radius of curvature) may be used to estimate paleodischarges. In the Aeolis Dorsa region, topographically inverted and stacked fluvial deposits - wide meander point bars overlain by thin channel fills - preserve ostensible evidence of decreasing paleodischarges through time. However, a robust paleohydraulic analysis of these distinct deposits requires knowledge of the accuracy of a terrestrial-based empirical relationship that estimates channel width from point-bar radius of curvature. We assess the accuracy of this radius-width relationship by applying it to a well-studied terrestrial analog, the Quinn River, Nevada. We find that radii of curvature from the Quinn River exceed the values predicted from the empirical relationship. These anomalously high radii are associated with greater resistance in the channel cut banks, indicating that bank strength is a confounding factor in the radius-width relationship. Some deposits in the Aeolis Dorsa include irregular meander morphologies, suggesting variably resistant channel banks and overestimates of both paleochannel widths and paleodischarges. Furthermore, the morphometry of the overlying thin channel fills suggests their widths have been eroded, such that their paleodischarges are underestimates. These overestimates and underestimates, when considered together, suggest little change in paleodischarge during the stratigraphic transition from meander deposits to channel fills. This work demonstrates the importance of terrestrial analog studies for revealing confounding factors in Martian fluvial systems and cautions against simplistic interpretations of Martian fluvial history. 
The discovered inaccuracies of paleodischarge estimates expose sources of uncertainty in the extant paleodischarge data that bias inferences toward waning hydrologic activity through time.

  6. Novel SVM-based technique to improve rainfall estimation over the Mediterranean region (north of Algeria) using the multispectral MSG SEVIRI imagery

    NASA Astrophysics Data System (ADS)

    Sehad, Mounir; Lazri, Mourad; Ameur, Soltane

    2017-03-01

    In this work, a new rainfall estimation technique based on the high spatial and temporal resolution of the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard the Meteosat Second Generation (MSG) satellite is presented. The proposed scheme relies on two multiclass support vector machine (SVM) algorithms: SVM_D for daytime and SVM_N for night-time rainfall estimation. Both SVM models are trained using relevant rainfall parameters based on optical, microphysical and textural cloud properties. The cloud parameters are derived from the spectral channels of the SEVIRI MSG radiometer. The 3-hourly and daily accumulated rainfall are derived from the 15-min rainfall estimates given by the SVM classifiers for each MSG image pixel. The SVMs were trained with ground meteorological radar precipitation scenes recorded from November 2006 to March 2007 over the north of Algeria, located in the Mediterranean region. The SVM_D and SVM_N models were then used to estimate 3-hourly and daily rainfall from a data set gathered from November 2010 to March 2011 over northern Algeria. The results were validated against collocated rainfall observed by a rain gauge network. The statistical scores, given by the correlation coefficient, bias, root mean square error and mean absolute error, showed good accuracy of the rainfall estimates produced by the present technique. Moreover, the rainfall estimates were compared with two high-accuracy rainfall estimation methods based on MSG SEVIRI imagery, namely a random forests (RF) based approach and an artificial neural network (ANN) based technique. The present technique yields a higher correlation coefficient (3-hourly: 0.78; daily: 0.94) and lower mean absolute error and root mean square error values, showing that it assigns 3-hourly and daily rainfall with better accuracy than the ANN technique and the RF model.

  7. Beamforming Based Full-Duplex for Millimeter-Wave Communication

    PubMed Central

    Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen

    2016-01-01

    In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256
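    As a rough illustration of the MRT and ZF criteria named above (a toy single-stream sketch with made-up channel vectors, not the paper's closed-form full-duplex solutions): MRT steers all power along the desired channel, while ZF additionally projects the weights into the null space of the self-interference channel.

```python
import numpy as np

def mrt_weights(h):
    """Maximum-ratio transmission: unit-norm weights aligned with channel h."""
    return h.conj() / np.linalg.norm(h)

def zf_weights(h, h_si):
    """Zero-forcing toward self-interference: project the MRT direction into
    the null space of the self-interference channel h_si, then renormalize."""
    p = np.eye(len(h)) - np.outer(h_si, h_si.conj()) / np.vdot(h_si, h_si)
    w = p @ h.conj()
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)      # desired channel
h_si = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # self-interference channel
w = zf_weights(h, h_si)
print(abs(np.vdot(h_si, w)))  # residual self-interference, numerically ~0
```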

  8. Long-term analysis of aerosol optical depth over Northeast Asia using a satellite-based measurement: MI Yonsei Aerosol Retrieval Algorithm (YAER)

    NASA Astrophysics Data System (ADS)

    Kim, Mijin; Kim, Jhoon; Yoon, Jongmin; Chung, Chu-Yong; Chung, Sung-Rae

    2017-04-01

    In 2010, the Korean geostationary earth orbit (GEO) satellite, the Communication, Ocean, and Meteorological Satellite (COMS), was launched, carrying the Meteorological Imager (MI). The MI measures atmospheric conditions over Northeast Asia (NEA) using a single visible channel centered at 0.675 μm and four IR channels at 3.75, 6.75, 10.8, and 12.0 μm. The visible measurements can also be utilized for the retrieval of aerosol optical properties (AOPs). Since GEO satellite measurements have an advantage for continuous monitoring of AOPs, we can analyze the spatiotemporal variation of aerosol using the MI observations over NEA. We therefore developed an algorithm to retrieve aerosol optical depth (AOD) from the MI visible observations, named the MI Yonsei Aerosol Retrieval Algorithm (YAER). In this study, we investigated the accuracy of MI YAER AOD by comparing the values with long-term AERONET sun-photometer products. The results showed that the MI AODs were significantly overestimated relative to the AERONET values over bright surfaces in low-AOD cases. Because the MI visible channel is centered in the red spectral range, the contribution of the aerosol signal to the measured reflectance is relatively low compared with the surface contribution, so the AOD error in low-AOD cases over bright surfaces is a fundamental limitation of the algorithm. The assumption of a background aerosol optical depth (BAOD) could also contribute to the retrieval uncertainty. To estimate the surface reflectance while accounting for the polluted air conditions over NEA, we estimated the BAOD pixel by pixel from the MODIS dark target (DT) aerosol products. Satellite-based AOD retrieval, however, depends strongly on the accuracy of the surface reflectance estimation, especially in low-AOD cases, and thus the BAOD could inherit the uncertainty in the surface reflectance estimation of the satellite-based retrieval. We therefore re-estimated the BAOD using ground-based sun-photometer measurements and investigated the effects of the BAOD assumption. The satellite-based BAOD was significantly higher than the ground-based value over urban areas, resulting in an underestimation of surface reflectance and an overestimation of AOD. The error analysis of the MI AOD also showed clear sensitivity to cloud contamination. Improvements to the cloud masking process in the developed single-channel MI algorithm, as well as modification of the surface reflectance estimation, will be required in future work.

  9. High Throughput via Cross-Layer Interference Alignment for Mobile Ad Hoc Networks

    DTIC Science & Technology

    2013-08-26

    MIMO zero-forcing receiver in the presence of channel estimation error,” IEEE Transactions on Wireless Communications , vol. 6 , no. 3, pp. 805–810, Mar...Robert W. Heath, Nachiappan Valliappan. Antenna Subset Modulation for Secure Millimeter-Wave Wireless Communication , IEEE Transactions on...in MIMO Interference Alignment Networks, IEEE Transactions on Wireless Communications , (02 2012): 0. doi: 10.1109/TWC.2011.120511.111088 TOTAL: 2

  10. Experimental Demonstration of Long-Range Underwater Acoustic Communication Using a Vertical Sensor Array

    PubMed Central

    Zhao, Anbang; Zeng, Caigao; Hui, Juan; Ma, Lin; Bi, Xuejie

    2017-01-01

    This paper proposes a composite channel virtual time reversal mirror (CCVTRM) for vertical sensor array (VSA) processing and applies it to long-range underwater acoustic (UWA) communication in shallow water. Because of the weak signal-to-noise ratio (SNR), the channel impulse response of each sensor of the VSA cannot be accurately estimated, so the traditional passive time reversal mirror (PTRM) does not perform well in long-range UWA communication in shallow water. CCVTRM, however, only needs to estimate the composite channel of the VSA to accomplish time reversal mirror (TRM) processing, which can effectively mitigate inter-symbol interference (ISI) and reduce the bit error rate (BER). In addition, the calculation of CCVTRM is simpler than that of traditional PTRM. A UWA communication experiment using a VSA of 12 sensors was conducted in the South China Sea. The experiment achieved very low BER at a communication rate of 66.7 bit/s over an 80 km range. The results of the sea trial demonstrate that CCVTRM is feasible and can be applied to long-range UWA communication in shallow water. PMID:28653976
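    The core passive time-reversal operation that CCVTRM builds on can be sketched in a few lines (a toy single-channel illustration, not the composite-channel algorithm of the paper): convolving the received signal with the time-reversed conjugate of the channel estimate compresses multipath energy into a single peak.

```python
import numpy as np

def time_reversal_output(received, h_est):
    """Passive time-reversal processing: convolve the received signal with
    the time-reversed conjugate of the estimated channel. Multipath energy
    is compressed toward one peak, mitigating inter-symbol interference."""
    return np.convolve(received, h_est[::-1].conj())

# toy two-path channel: direct arrival plus a weaker delayed arrival
h = np.array([1.0, 0.0, 0.5])
received = np.convolve(np.array([1.0]), h)   # one transmitted symbol through h
out = time_reversal_output(received, h)      # channel autocorrelation
print(np.argmax(np.abs(out)))                # peak at the center tap -> 2
```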

  11. Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico

    USGS Publications Warehouse

    Knutilla, R.L.; Veenhuis, J.E.

    1994-01-01

    Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
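    The calibration scores quoted above can be reproduced with a simple percent standard error of estimate (a sketch; the exact USGS definition, e.g. whether residuals are taken in log space, is not stated in the abstract, so the plain form below is an assumption):

```python
import numpy as np

def standard_error_percent(measured, simulated):
    """Root-mean-square difference between simulated and measured values,
    expressed as a percentage of the mean measured value."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    see = np.sqrt(np.mean((simulated - measured) ** 2))
    return 100.0 * see / measured.mean()

# simulated peaks of 12 and 8 against measured peaks of 10 and 10 -> 20%
print(standard_error_percent([10.0, 10.0], [12.0, 8.0]))
```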

  12. Quantifying Uncertainty in Instantaneous Orbital Data Products of TRMM over Indian Subcontinent

    NASA Astrophysics Data System (ADS)

    Jayaluxmi, I.; Nagesh, D.

    2013-12-01

    In the last 20 years, microwave radiometers have taken satellite images of Earth's weather, proving to be a valuable tool for quantitative estimation of precipitation from space. However, along with the widespread acceptance of microwave-based precipitation products, it has also been recognized that they contain large uncertainties. While most uncertainty evaluation studies focus on the accuracy of rainfall accumulated over time (e.g., a season or year), evaluations of instantaneous rainfall intensities from satellite orbital data products are relatively rare. These instantaneous products can introduce large uncertainties during real-time flood forecasting at the watershed scale, especially over land, where the highly varying land surface emissivity offers a myriad of complications hindering accurate rainfall estimation. The error components of orbital data products also tend to interact nonlinearly with hydrologic modeling uncertainty. Keeping these in mind, the present study develops an uncertainty analysis of instantaneous satellite orbital data products (version 7 of 1B11, 2A25, 2A23) derived from the passive and active sensors onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, namely the TRMM Microwave Imager (TMI) and Precipitation Radar (PR). The study utilizes 11 years of orbital data from 2002 to 2012 over the Indian subcontinent and examines the influence of various error sources on the convective and stratiform precipitation types. The analysis over the land regions of India investigates three sources of uncertainty in detail: 1) errors due to improper delineation of the rainfall signature within the microwave footprint (rain/no-rain classification), 2) uncertainty introduced by the transfer function linking rainfall with the TMI low-frequency channels, and 3) sampling errors owing to the narrow swath and infrequent visits of the TRMM sensors. Case study results obtained during the Indian summer monsoon months of June-September are presented using contingency table statistics, performance diagrams, scatter plots and probability density functions. Our study demonstrates that copula theory can be used efficiently to represent the highly nonlinear dependency structure of rainfall with respect to the TMI low-frequency channels at 19, 21 and 37 GHz, which calls into question the exclusive use of the high-frequency 85 GHz channel in TMI overland rainfall retrieval algorithms. Further, the PR sampling errors, revealed using a statistical bootstrap technique, were found to be below 30% in relative terms (for 2-degree grids) over India, with magnitudes that varied with rainfall type (biased towards stratiform rainfall) and the sampling technique employed. These findings clearly document that proper characterization of the error structure of TMI and PR has wide implications for decision making prior to incorporating the resulting orbital products into basin-scale hydrologic modeling.
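    The bootstrap idea used above for PR sampling errors can be sketched as follows (a generic illustration of relative sampling error of a grid-box mean, not the paper's exact procedure):

```python
import numpy as np

def bootstrap_relative_sampling_error(rain_samples, n_boot=2000, seed=0):
    """Relative sampling error (%) of the grid-mean rainfall via bootstrap:
    resample the overpass samples with replacement, and compare the spread
    of the resampled means to the full-sample mean."""
    rng = np.random.default_rng(seed)
    x = np.asarray(rain_samples, dtype=float)
    means = np.array([rng.choice(x, size=len(x), replace=True).mean()
                      for _ in range(n_boot)])
    return 100.0 * means.std() / x.mean()

# sparse, skewed overpass samples -> nonzero relative sampling error
print(bootstrap_relative_sampling_error([0.0, 0.0, 1.2, 0.3, 7.5, 0.0, 2.1, 0.4]))
```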

  13. An Off-Grid Turbo Channel Estimation Algorithm for Millimeter Wave Communications.

    PubMed

    Han, Lingyi; Peng, Yuexing; Wang, Peng; Li, Yonghui

    2016-09-22

    The bandwidth shortage has motivated the exploration of the millimeter wave (mmWave) frequency spectrum for future communication networks. To compensate for the severe propagation attenuation in the mmWave band, massive antenna arrays can be adopted at both the transmitter and receiver to provide large array gains via directional beamforming. To achieve such array gains, channel estimation (CE) with high resolution and low latency is of great importance for mmWave communications. However, classic super-resolution subspace CE methods such as multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) cannot be applied here due to RF chain constraints. In this paper, an enhanced CE algorithm is developed for the off-grid problem that arises when quantizing the angles of the mmWave channel in the spatial domain: with high probability, the true angles do not lie on the quantization grid, which results in power leakage and severe degradation of CE performance. A new model is first proposed to formulate the off-grid problem. The model divides each continuously-distributed angle into a quantized discrete grid part, referred to as the integral grid angle, and an offset part, termed the fractional off-grid angle. Accordingly, an iterative off-grid turbo CE (IOTCE) algorithm is proposed that iteratively refines the CE by exchanging information between the integral grid part and the fractional off-grid part under the turbo principle. By fully exploiting the sparse structure of mmWave channels, the integral grid part is estimated by a soft-decoding based compressed sensing (CS) method called improved turbo compressed channel sensing (ITCCS), which iteratively updates the soft information between the linear minimum mean square error (LMMSE) estimator and the sparsity combiner. Monte Carlo simulations are presented to evaluate the performance of the proposed method, and the results show that it greatly enhances the angle detection resolution.
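    The integral/fractional decomposition described above can be sketched as follows (a minimal illustration of splitting an angle into its nearest grid point and a residual offset; the grid layout is an assumption, not the paper's dictionary):

```python
import numpy as np

def split_angle(theta, n_grid):
    """Decompose an angle into the nearest quantization grid point
    (integral grid part) and the remaining offset (fractional off-grid
    part). The offset is bounded by half a grid cell."""
    grid = np.linspace(-np.pi / 2, np.pi / 2, n_grid, endpoint=False)
    k = int(np.argmin(np.abs(grid - theta)))
    return k, theta - grid[k]

k, off = split_angle(0.30, 64)
grid_step = np.pi / 64
print(k, abs(off) <= grid_step / 2 + 1e-12)  # offset bounded by half a cell
```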

  14. Reconstructing for joint angles on the shoulder and elbow from non-invasive electroencephalographic signals through electromyography

    PubMed Central

    Choi, Kyuwan

    2013-01-01

    In this study, cortical activities over 2240 vertices on the brain were first estimated from 64-channel electroencephalography (EEG) signals using hierarchical Bayesian estimation while 5 subjects performed continuous arm reaching movements. From the estimated cortical activities, a sparse linear regression method selected only the features useful for reconstructing the electromyography (EMG) signals and estimated the EMG signals of 9 arm muscles. A modular artificial neural network, with one module for movement control and the other for posture control, was then used to estimate four joint angles from the estimated EMG signals of the 9 muscles. The joint angles estimated using this method have a correlation coefficient (CC) of 0.807 (±0.10) and a normalized root-mean-square error (nRMSE) of 0.176 (±0.29) with respect to the actual joint angles. PMID:24167469
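    The two scores quoted above are straightforward to compute (a sketch; normalizing the RMSE by the range of the actual signal is an assumption, since the paper's normalization convention is not stated in the abstract):

```python
import numpy as np

def cc_and_nrmse(actual, estimated):
    """Correlation coefficient and range-normalized RMSE between an
    actual trajectory and its reconstruction."""
    a = np.asarray(actual, dtype=float)
    e = np.asarray(estimated, dtype=float)
    cc = np.corrcoef(a, e)[0, 1]
    nrmse = np.sqrt(np.mean((a - e) ** 2)) / (a.max() - a.min())
    return cc, nrmse

cc, nr = cc_and_nrmse([0.0, 1.0, 2.0, 3.0], [0.1, 1.1, 1.9, 3.0])
print(cc, nr)  # near-perfect reconstruction: cc close to 1, nRMSE small
```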

  15. Evaluating flow cytometer performance with weighted quadratic least squares analysis of LED and multi-level bead data

    PubMed Central

    Parks, David R.; Khettabi, Faysal El; Chase, Eric; Hoffman, Robert A.; Perfetto, Stephen P.; Spidlen, Josef; Wood, James C.S.; Moore, Wayne A.; Brinkman, Ryan R.

    2017-01-01

    We developed a fully automated procedure for analyzing data from LED pulses and multi-level bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all of the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than for multi-level bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fits the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. PMID:28160404
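    The full quadratic model with weighting can be sketched as a weighted least-squares fit of variance against mean signal (an illustration only; the interpretation of the coefficients as background variance and Spe scale is a simplified photoelectron-statistics reading, and flowQB's actual fitting procedure is richer):

```python
import numpy as np

def fit_quadratic_weighted(means, variances, weights):
    """Weighted least-squares fit of var = c0 + c1*mean + c2*mean^2.
    Roughly, c0 reflects background variance and c1 the channel units per
    statistical photoelectron (Spe) under a Poisson-statistics reading."""
    X = np.vander(np.asarray(means, dtype=float), 3, increasing=True)
    w = np.sqrt(np.asarray(weights, dtype=float))
    y = np.asarray(variances, dtype=float)
    coef, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return coef  # (c0, c1, c2)

# synthetic multi-level bead data following the model exactly
means = np.arange(1.0, 11.0)
variances = 2.0 + 0.5 * means + 0.01 * means ** 2
print(fit_quadratic_weighted(means, variances, np.ones_like(means)))
```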

  16. Estimating net surface shortwave radiation from Chinese geostationary meteorological satellite FengYun-2D (FY-2D) data under clear sky.

    PubMed

    Zhang, Xiaoyu; Li, Lingling

    2016-03-21

    Net surface shortwave radiation (NSSR) significantly affects regional and global climate change and is an important aspect of research on the surface radiation budget. Many previous studies have proposed methods for estimating NSSR. This study proposes a method to calculate NSSR using FY-2D shortwave channel data. Firstly, a linear regression model is established between the top-of-atmosphere (TOA) broadband albedo (r) and the narrowband reflectivity (ρ1), based on data simulated with MODTRAN 4.2. Secondly, the relationship between the surface absorption coefficient (as) and the broadband albedo (r) is determined by dividing the surface type into land, sea, or snow/ice, from which NSSR can then be calculated. Thirdly, sensitivity analysis is performed for errors associated with sensor noise, vertically integrated atmospheric water content, view zenith angle and solar zenith angle. Finally, validation using ground measurements is performed. Results show that the root mean square error (RMSE) between the estimated and actual r is less than 0.011 for all conditions, and the RMSEs between estimated and real NSSR are 26.60 W/m2, 9.99 W/m2, and 23.40 W/m2, using simulated data for land, sea, and snow/ice surfaces, respectively. This indicates that the proposed method can adequately estimate NSSR. Additionally, we compare field measurements from the TaiYuan and ChangWu ecological stations with estimates from corresponding FY-2D data acquired from January to April 2012 on cloud-free days. Results show that the RMSE between the estimated and actual NSSR is 48.56 W/m2, with a mean error of -2.23 W/m2. Error sources also include measurement accuracy and the estimation of vertically integrated atmospheric water content. This method is only suitable for cloudless conditions.
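    The first step, regressing TOA broadband albedo r on narrowband reflectivity ρ1 and reporting the fit RMSE, can be sketched as follows (illustrative coefficients and data only, not the MODTRAN-derived regression):

```python
import numpy as np

def fit_broadband_albedo(rho1, r):
    """Linear regression r = a + b * rho1 linking narrowband reflectivity
    to TOA broadband albedo, plus the fit RMSE."""
    rho1 = np.asarray(rho1, dtype=float)
    r = np.asarray(r, dtype=float)
    b, a = np.polyfit(rho1, r, 1)
    rmse = np.sqrt(np.mean((a + b * rho1 - r) ** 2))
    return a, b, rmse

# synthetic perfectly-linear simulation data
rho1 = [0.10, 0.20, 0.30, 0.40]
r = [0.05 + 0.9 * x for x in rho1]
print(fit_broadband_albedo(rho1, r))  # recovers intercept 0.05, slope 0.9
```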

  17. Stability assessment of QKD procedures in commercial quantum cryptography systems versus quality of dark channel

    NASA Astrophysics Data System (ADS)

    Jacak, Monika; Melniczuk, Damian; Jacak, Janusz; Jóźwiak, Ireneusz; Gruber, Jacek; Jóźwiak, Piotr

    2015-02-01

    In order to assess the susceptibility of the quantum key distribution (QKD) systems to the hacking attack including simultaneous and frequent system self-decalibrations, we analyze the stability of the QKD transmission organized in two commercially available systems. The first one employs non-entangled photons as flying qubits in the dark quantum channel for communication whereas the second one utilizes the entangled photon pairs to secretly share the cryptographic key. Applying standard methods of the statistical data analysis to the characteristic indicators of the quality of the QKD communication (the raw key exchange rate [RKER] and the quantum bit error rate [QBER]), we have estimated the pace of the self-decalibration of both systems and the repeatability rate in the case of controlled worsening of the dark channel quality.
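    The QBER indicator used above is simple to compute from the sifted keys of the two parties (a minimal sketch; representing keys as bit strings is an assumption for illustration):

```python
def qber(sifted_key_a, sifted_key_b):
    """Quantum bit error rate: fraction of mismatched bits between the
    two parties' sifted keys (same length assumed)."""
    mismatches = sum(a != b for a, b in zip(sifted_key_a, sifted_key_b))
    return mismatches / len(sifted_key_a)

print(qber("01101100", "01001100"))  # one mismatch in 8 bits -> 0.125
```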

  18. A model of the 1.6 GHz scatterometer. [performance of airborne scatterometer used as microwave remote sensor of soil moisture

    NASA Technical Reports Server (NTRS)

    Wang, J. R.

    1977-01-01

    The performance was studied of the 1.6 GHz airborne scatterometer system which is used as one of several Johnson Space Center (JSC) microwave remote sensors to detect moisture content of soil. The system is analyzed with respect to its antenna pattern and coupling, the signal flow in the receiver data channels, and the errors in the signal outputs. The operational principle and the sensitivity of the system, as well as data handling are also described. The finite cross-polarized gains of all four 1.6 GHz scatterometer antennae are found to have profound influence on the cross-polarized backscattered signal returns. If these signals are not analyzed properly, large errors could result in the estimate of the cross-polarized coefficient. It is also found necessary to make corrections to the variations of the aircraft parameters during data reduction in order to minimize the error in the coefficient estimate. Finally, a few recommendations are made to improve the overall performance of the scatterometer system.

  19. Dynamical noise filter and conditional entropy analysis in chaos synchronization.

    PubMed

    Wang, Jiao; Lai, C-H

    2006-06-01

    It is shown that, in a chaotic synchronization system whose driving signal is exposed to channel noise, the estimation of the drive system states can be greatly improved by applying the dynamical noise filtering to the response system states. If the noise is bounded in a certain range, the estimation errors, i.e., the difference between the filtered responding states and the driving states, can be made arbitrarily small. This property can be used in designing an alternative digital communication scheme. An analysis based on the conditional entropy justifies the application of dynamical noise filtering in generating quality synchronization.

  20. Beat-to-beat heart rate estimation fusing multimodal video and sensor data

    PubMed Central

    Antink, Christoph Hoog; Gao, Hanno; Brüser, Christoph; Leonhardt, Steffen

    2015-01-01

    Coverage and accuracy of unobtrusively measured biosignals are generally relatively low compared to clinical modalities. This can be improved by exploiting redundancies in multiple channels with methods of sensor fusion. In this paper, we demonstrate that two modalities, skin color variation and head motion, can be extracted from the video stream recorded with a webcam. Using a Bayesian approach, these signals are fused with a ballistocardiographic signal obtained from the seat of a chair with a mean absolute beat-to-beat estimation error below 25 milliseconds and an average coverage above 90% compared to an ECG reference. PMID:26309754

  1. Beat-to-beat heart rate estimation fusing multimodal video and sensor data.

    PubMed

    Antink, Christoph Hoog; Gao, Hanno; Brüser, Christoph; Leonhardt, Steffen

    2015-08-01

    Coverage and accuracy of unobtrusively measured biosignals are generally relatively low compared to clinical modalities. This can be improved by exploiting redundancies in multiple channels with methods of sensor fusion. In this paper, we demonstrate that two modalities, skin color variation and head motion, can be extracted from the video stream recorded with a webcam. Using a Bayesian approach, these signals are fused with a ballistocardiographic signal obtained from the seat of a chair with a mean absolute beat-to-beat estimation error below 25 milliseconds and an average coverage above 90% compared to an ECG reference.
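    A minimal stand-in for the multi-channel fusion idea described above is inverse-variance weighting of per-channel beat interval estimates, so that channels with lower uncertainty dominate (a sketch only; the paper's Bayesian fusion is more elaborate, and the numbers below are invented):

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance-weighted fusion of redundant estimates of the same
    quantity, e.g. beat-to-beat intervals from several sensing channels."""
    e = np.asarray(estimates, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    return float((w * e).sum() / w.sum())

# noisy video-based estimate vs. cleaner chair-seat BCG estimate (ms)
print(fuse_estimates([820.0, 800.0], [400.0, 100.0]))  # pulled toward 800
```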

  2. Performance analysis of cross-layer design with average PER constraint over MIMO fading channels

    NASA Astrophysics Data System (ADS)

    Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin

    2015-12-01

    In this article, a cross-layer design (CLD) scheme with the dual constraints of imperfect feedback and average packet error rate (PER) is presented for multiple-input multiple-output (MIMO) systems, based on the combination of adaptive modulation and automatic repeat request protocols. The design performance is evaluated over a wireless Rayleigh fading channel. Under the constraints of target PER and average PER, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are developed. An effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. With the thresholds available, analytical expressions for the average SE and PER are provided for performance evaluation. To avoid the performance loss caused by a conventional single estimate, a multiple outdated estimates (MOE) method, which utilises multiple previous channel estimates, is presented for the CLD to improve system performance. Numerical simulations for average PER and SE are consistent with the theoretical analysis, and the developed CLD with an average PER constraint meets the target PER requirement and shows better performance in comparison with the conventional CLD with an instantaneous PER constraint. In particular, the CLD based on the MOE method can significantly increase the system SE and greatly reduce the impact of feedback delay.
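    The switching-threshold mechanism at the heart of adaptive modulation can be sketched as a lookup of the instantaneous SNR against an ordered list of thresholds (the threshold values and mode set below are illustrative; in the CLD scheme above the thresholds are optimized against the average-PER constraint):

```python
import numpy as np

def select_mode(snr_db, thresholds_db=(6.0, 12.0, 18.0)):
    """Pick a transmission mode from the instantaneous SNR using
    switching thresholds: below the first threshold, no transmission;
    each higher region enables a denser constellation."""
    modes = ("no-Tx", "QPSK", "16QAM", "64QAM")
    idx = int(np.searchsorted(thresholds_db, snr_db, side="right"))
    return modes[idx]

print(select_mode(14.2))  # SNR between 12 and 18 dB -> 16QAM
```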

  3. Equalization and detection for digital communication over nonlinear bandlimited satellite communication channels. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gutierrez, Alberto, Jr.

    1995-01-01

    This dissertation evaluates receiver-based methods for mitigating the effects of nonlinear bandlimited signal distortion present in high data rate satellite channels. The effects of the nonlinear bandlimited distortion are illustrated for digitally modulated signals. A lucid development of the low-pass Volterra discrete-time model for a nonlinear communication channel is presented. In addition, finite-state machine models are explicitly developed for a nonlinear bandlimited satellite channel. A nonlinear fixed equalizer based on Volterra series has previously been studied for compensation of noiseless signal distortion due to a nonlinear satellite channel. This dissertation studies adaptive Volterra equalizers on a downlink-limited nonlinear bandlimited satellite channel. We employ as figures of merit performance in the mean-square-error and probability-of-error senses. In addition, a receiver consisting of a fractionally-spaced equalizer (FSE) followed by a Volterra equalizer (FSE-Volterra) is found to give improvement beyond that gained by the Volterra equalizer alone. Significant probability-of-error performance improvement is found for multilevel modulation schemes. The probability-of-error improvement is also found to be more significant for modulation schemes, both constant-amplitude and multilevel, that require higher signal-to-noise ratios (i.e., higher modulation orders) for reliable operation. The maximum likelihood sequence detection (MLSD) receiver for a nonlinear satellite channel, a bank of matched filters followed by a Viterbi detector, serves as a probability-of-error lower bound for the Volterra and FSE-Volterra equalizers; however, this receiver had not previously been evaluated for a specific satellite channel. In this work, an MLSD receiver is evaluated for a specific downlink-limited satellite channel. Because of the bank of matched filters, the MLSD receiver may be high in complexity. Consequently, the probability-of-error performance of a more practical suboptimal MLSD receiver, requiring only a single receive filter, is also evaluated.
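    The structure underlying such Volterra equalizers is a polynomial filter: the output mixes a linear tap vector with higher-order kernels applied to products of input samples. A minimal third-order sketch for a single time step (kernel sizes and values invented for illustration):

```python
import numpy as np

def volterra_output(x, h1, h3):
    """Output of a simple third-order Volterra filter at one time step:
    linear taps h1 over the input window x, plus a cubic kernel h3 applied
    to all triple products of samples in the window."""
    x = np.asarray(x, dtype=float)
    linear = h1 @ x
    cubic = np.einsum('i,j,k,ijk->', x, x, x, h3)
    return float(linear + cubic)

# 2-tap window: linear gain on the newest sample, one cubic self-term
h1 = np.array([1.0, 0.0])
h3 = np.zeros((2, 2, 2))
h3[0, 0, 0] = 0.5
print(volterra_output([1.0, 2.0], h1, h3))  # 1*1 + 0.5*1^3 = 1.5
```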

  4. System Design for Nano-Network Communications

    NASA Astrophysics Data System (ADS)

    ShahMohammadian, Hoda

    The potential applications of nanotechnology in a wide range of areas necessitate nano-networking research. Nano-networking is a new type of networking that has emerged from applying nanotechnology to communication theory. This dissertation therefore presents a framework for physical layer communications in a nano-network and addresses some of the pressing unsolved challenges in designing a molecular communication system. The contribution of this dissertation is to propose well-justified models for signal propagation, noise sources, optimum receiver design and synchronization in molecular communication channels. The design of any communication system is primarily based on the signal propagation channel and noise models. Using Brownian motion and advection molecular statistics, separate signal propagation and noise models are presented for diffusion-based and flow-based molecular communication channels. It is shown that the corrupting noise of molecular channels is uncorrelated and non-stationary, with a signal-dependent magnitude. The next key component of any communication system is the reception and detection process. This dissertation provides a detailed analysis of the effect of the ligand-receptor binding mechanism on the received signal and develops the first optimal receiver design for molecular communications. The bit error rate performance of the proposed receiver is evaluated and the impact of medium motion on the receiver performance is investigated. Another important feature of any communication system is synchronization. In this dissertation, the first blind synchronization algorithm is presented for molecular communication channels. The proposed algorithm uses a non-decision-directed maximum likelihood criterion for estimating the channel delay. The Cramér-Rao lower bound is derived, and the performance of the proposed synchronization algorithm is evaluated by investigating its mean square error.
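    For intuition on maximum-likelihood delay estimation generally: for a known pulse in additive white noise, the ML delay estimate reduces to the lag maximizing the correlation with a template (a textbook sketch only; the dissertation's blind, non-decision-directed algorithm works without a known training sequence):

```python
import numpy as np

def estimate_delay(received, template):
    """Correlation-based delay estimate: the lag at which the received
    signal best matches the known template (ML under white Gaussian noise)."""
    corr = np.correlate(received, template, mode='valid')
    return int(np.argmax(corr))

template = np.array([1.0, 1.0, -1.0])
received = np.concatenate([np.zeros(5), template, np.zeros(4)])  # delay = 5
print(estimate_delay(received, template))  # -> 5
```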

  5. The Surface Water and Ocean Topography Satellite Mission - An Assessment of Swath Altimetry Measurements of River Hydrodynamics

    NASA Technical Reports Server (NTRS)

    Wilson, Matthew D.; Durand, Michael; Alsdorf, Douglas; Chul-Jung, Hahn; Andreadis, Konstantinos M.; Lee, Hyongki

    2012-01-01

    The Surface Water and Ocean Topography (SWOT) satellite mission, scheduled for launch in 2020 with development commencing in 2015, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first routine two-dimensional measurements of water surface elevations, which will allow for the estimation of river and floodplain flows via the water surface slope. In this paper, we characterize the measurements which may be obtained from SWOT and illustrate how they may be used to derive estimates of river discharge. In particular, we show (i) the spatio-temporal sampling scheme of SWOT, (ii) the errors which may be expected in swath altimetry measurements of terrestrial surface water, and (iii) the impacts such errors may have on estimates of water surface slope and river discharge. We illustrate this through a "virtual mission" study for an approximately 300 km reach of the central Amazon river, using a hydraulic model to provide water surface elevations according to the SWOT spatio-temporal sampling scheme (orbit with 78 degree inclination, 22 day repeat and 140 km swath width), to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. Water surface elevation measurements for the Amazon mainstem as may be observed by SWOT were thereby obtained. Using these measurements, estimates of river slope and discharge were derived and compared to those which may be obtained without error, and to those obtained directly from the hydraulic model. It was found that discharge can be reproduced highly accurately from the water height, without knowledge of the detailed channel bathymetry, using a modified Manning's equation, if friction, depth, width and slope are known. Increasing reach length was found to be an effective method to reduce systematic height error in SWOT measurements.
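    The discharge relation invoked above is, in its unmodified textbook form, Manning's equation Q = (1/n) A R^(2/3) S^(1/2) in SI units (a sketch of the standard equation, not the paper's modified variant; the input values below are invented):

```python
def manning_discharge(n, area_m2, hydraulic_radius_m, slope):
    """Manning's equation (SI units): discharge Q in m^3/s from roughness n,
    cross-sectional area A, hydraulic radius R, and water-surface slope S.
    With S observable from SWOT heights, Q follows if n, A and R are known."""
    return (1.0 / n) * area_m2 * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# hypothetical reach: n=0.05, A=1000 m^2, R=8 m, S=1e-4 -> 800 m^3/s
print(manning_discharge(0.05, 1000.0, 8.0, 1e-4))
```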

  6. Application of left- and right-looking SAR stereo to depth measurements of the Ammavaru outflow channel, Lada Terra, Venus

    NASA Technical Reports Server (NTRS)

    Parker, T. J.

    1992-01-01

    Venusian channels are too narrow to be resolved by Magellan's radar altimeter, so they are not visible in the standard topographic data products. Stereo image data, in addition to their benefit to geologic mapping of Venus structures as a whole, are indispensable for measuring the topography across the channels. These measurements can then be used in conjunction with the regional topographic maps based on the altimeter data to produce cross-sectional areas for the channels and estimate the fluid discharge through them. As an example of the application of stereo image data to venusian channels, a number of test depth and profile measurements were made of the large outflow channel system in Lada Terra, centered at 50 deg S latitude, 21 deg E longitude (F-MIDR 50S021). These measurements were made by viewing the cycle 1 and 2 digital F-MIDRs in stereo on a display monitor, to minimize the errors in measuring parallax displacement. The MIDRs are produced at a scale of 75 m/pixel, which corresponds to a vertical scale of about 17 m/pixel when calculating the height of a feature from its parallax displacement. An error in placement determination of 1 pixel was therefore assumed to characterize the vertical accuracy as plus or minus 17 m. When this technique was applied to the outflow channel, it was noted that the walls of the collapsed terrain source and 'trough reach' of the channel are laid over in both the cycle 1 and 2 images. This is evident when examining the distance between features on the plateau and the cliff walls in the two images. The layover 'shifts' the features closer to the apparent edge of the wall relative to the oppositely illuminated image.

  7. ViVaMBC: estimating viral sequence variation in complex populations from Illumina deep-sequencing data using model-based clustering.

    PubMed

    Verbist, Bie; Clement, Lieven; Reumers, Joke; Thys, Kim; Vapirev, Alexander; Talloen, Willem; Wetzels, Yves; Meys, Joris; Aerssens, Jeroen; Bijnens, Luc; Thas, Olivier

    2015-02-22

    Deep-sequencing allows for an in-depth characterization of sequence variation in complex populations. However, technology-associated errors may impede a powerful assessment of low-frequency mutations. Fortunately, base calls are complemented with quality scores which are derived from a quadruplet of intensities, one channel for each nucleotide type in Illumina sequencing. The highest intensity of the four channels determines the base that is called. Mismatch bases can often be corrected by the second-best base, i.e., the base with the second-highest intensity in the quadruplet. A virus variant model-based clustering method, ViVaMBC, is presented that exploits quality scores and second-best base calls for identifying and quantifying viral variants. ViVaMBC is optimized to call variants at the codon level (nucleotide triplets), which enables immediate biological interpretation of the variants with respect to their antiviral drug responses. Using mixtures of HCV plasmids, we show that our method accurately estimates frequencies down to 0.5%. The estimates are unbiased when average coverages of 25,000 are reached. A comparison with the SNP callers V-Phaser2, ShoRAH, and LoFreq shows that ViVaMBC has superb sensitivity and specificity for variants with frequencies above 0.4%. Unlike the competitors, ViVaMBC reports a higher number of false-positive findings with frequencies below 0.4%, which might partially originate from picking up artificial variants introduced by errors in the sample and library preparation steps. ViVaMBC is the first method to call viral variants directly at the codon level. The strength of the approach lies in modeling the error probabilities based on the quality scores. Although the use of second-best base calls appeared very promising in our data-exploration phase, their utility was limited: they provided a slight increase in sensitivity, which does not warrant the additional computational cost of running the offline base caller. Apparently, much of that information is already contained in the quality scores, enabling the model-based clustering procedure to correct the majority of the sequencing errors. Overall, the sensitivity of ViVaMBC is such that technical constraints like PCR errors start to form the bottleneck for low-frequency variant detection.
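
    The error probabilities modeled from quality scores follow the standard Phred convention, P = 10^(-Q/10); the sketch below shows only that conversion and is not the ViVaMBC model itself.

```python
def phred_to_error_prob(q):
    """Phred-scaled quality score Q -> probability that the base call is wrong."""
    return 10.0 ** (-q / 10.0)

# Q=20 corresponds to a 1% error chance; Q=30 to 0.1%.
p20 = phred_to_error_prob(20)
p30 = phred_to_error_prob(30)
```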

  8. Strong Converse Exponents for a Quantum Channel Discrimination Problem and Quantum-Feedback-Assisted Communication

    NASA Astrophysics Data System (ADS)

    Cooney, Tom; Mosonyi, Milán; Wilde, Mark M.

    2016-06-01

    This paper studies the difficulty of discriminating between an arbitrary quantum channel and a "replacer" channel that discards its input and replaces it with a fixed state. The results obtained here generalize those known in the theory of quantum hypothesis testing for binary state discrimination. We show that, in this particular setting, the most general adaptive discrimination strategies provide no asymptotic advantage over non-adaptive tensor-power strategies. This conclusion follows by proving a quantum Stein's lemma for this channel discrimination setting, showing that a constant bound on the Type I error leads to the Type II error decreasing to zero exponentially quickly at a rate determined by the maximum relative entropy between the channels. The strong converse part of the lemma states that any attempt to make the Type II error decay to zero at a rate faster than the channel relative entropy implies that the Type I error necessarily converges to one. We then refine this latter result by identifying the optimal strong converse exponent for this task. As a consequence of these results, we can establish a strong converse theorem for the quantum-feedback-assisted capacity of a channel, sharpening a result due to Bowen. Furthermore, our channel discrimination result demonstrates the asymptotic optimality of a non-adaptive tensor-power strategy in the setting of quantum illumination, as was used in prior work on the topic. The sandwiched Rényi relative entropy is a key tool in our analysis. Finally, by combining our results with recent results of Hayashi and Tomamichel, we find a novel operational interpretation of the mutual information of a quantum channel $\mathcal{N}$ as the optimal Type II error exponent when discriminating between a large number of independent instances of $\mathcal{N}$ and an arbitrary "worst-case" replacer channel chosen from the set of all replacer channels.
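
    The sandwiched Rényi relative entropy invoked above is conventionally defined, for states ρ, σ and α ∈ (0,1) ∪ (1,∞), as:

```latex
\tilde{D}_{\alpha}(\rho \| \sigma) =
  \frac{1}{\alpha - 1}
  \log \operatorname{Tr}\!\left[
    \left( \sigma^{\frac{1-\alpha}{2\alpha}} \, \rho \,
           \sigma^{\frac{1-\alpha}{2\alpha}} \right)^{\alpha}
  \right],
```

    with the ordinary quantum relative entropy recovered in the limit α → 1.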

  9. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties and on considering the results of unsuccessful belief-propagation decodings.

  10. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    DOE PAGES

    Kiktenko, Evgeniy O.; Trushechkin, Anton S.; Lim, Charles Ci Wen; ...

    2017-10-27

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties and on considering the results of unsuccessful belief-propagation decodings.

  11. Symmetric Blind Information Reconciliation for Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Kiktenko, E. O.; Trushechkin, A. S.; Lim, C. C. W.; Kurochkin, Y. V.; Fedorov, A. K.

    2017-10-01

    Quantum key distribution (QKD) is a quantum-proof key-exchange scheme which is fast approaching the communication industry. An essential component in QKD is the information reconciliation step, which is used for correcting the quantum-channel noise errors. The recently suggested blind-reconciliation technique, based on low-density parity-check codes, offers remarkable prospects for efficient information reconciliation without an a priori quantum bit error rate estimation. We suggest an improvement of the blind-information-reconciliation protocol promoting a significant increase in the efficiency of the procedure and reducing its interactivity. The proposed technique is based on introducing symmetry in the operations of the parties and on considering the results of unsuccessful belief-propagation decodings.
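
    A common figure of merit for any reconciliation scheme, blind or not, is the efficiency f = (disclosed bits) / (n · h(QBER)), where h is the binary entropy; the block-size and disclosure numbers below are hypothetical.

```python
import math

def binary_entropy(p):
    """Shannon binary entropy h(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def reconciliation_efficiency(disclosed_bits, block_len, qber):
    """f >= 1, with f = 1 the Shannon limit for error correction."""
    return disclosed_bits / (block_len * binary_entropy(qber))

# Hypothetical run: 2800 syndrome bits disclosed over a 10^4-bit block at 3% QBER.
f = reconciliation_efficiency(2800, 10_000, 0.03)
```

    Blind reconciliation trades extra interaction rounds for the ability to reach a low f without first estimating the QBER.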

  12. Public classical communication in quantum cryptography: Error correction, integrity, and authentication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timofeev, A. V.; Pomozov, D. I.; Makkaveev, A. P.

    2007-05-15

    Quantum cryptography systems combine two communication channels: a quantum and a classical one. (They can be physically implemented in the same fiber-optic link, which is employed as a quantum channel when one-photon states are transmitted and as a classical one when it carries classical data traffic.) Both channels are supposed to be insecure and accessible to an eavesdropper. Error correction in raw keys, interferometer balancing, and other procedures are performed by using the public classical channel. A discussion of the requirements to be met by the classical channel is presented.

  13. Cirrus Cloud Retrieval Using Infrared Sounding Data: Multilevel Cloud Errors.

    NASA Astrophysics Data System (ADS)

    Baum, Bryan A.; Wielicki, Bruce A.

    1994-01-01

    In this study we perform an error analysis for cloud-top pressure retrieval using the High-Resolution Infrared Radiometric Sounder (HIRS/2) 15-µm CO2 channels for the two-layer case of transmissive cirrus overlying an overcast, opaque stratiform cloud. This analysis includes the standard deviation and bias error due to instrument noise and the presence of two cloud layers, the lower of which is opaque. Instantaneous cloud pressure retrieval errors are determined for a range of cloud amounts (0.1-1.0) and cloud-top pressures (850-250 mb). Large cloud-top pressure retrieval errors are found to occur when a lower opaque layer is present underneath an upper transmissive cloud layer in the satellite field of view (FOV). Errors tend to increase with decreasing upper-cloud effective cloud amount and with decreasing cloud height (increasing pressure). Errors in retrieved upper-cloud pressure result in corresponding errors in derived effective cloud amount. For the case in which a HIRS FOV has two distinct cloud layers, the difference between the retrieved and actual cloud-top pressure is positive in all cases, meaning that the retrieved upper-cloud height is lower than the actual upper-cloud height. In addition, errors in retrieved cloud pressure are found to depend upon the lapse rate between the low-level cloud top and the surface. We examined which sounder channel combinations would minimize the total errors in derived cirrus cloud height caused by instrument noise and by the presence of a lower-level cloud. We find that while the sounding channels that peak between 700 and 1000 mb minimize random errors, the sounding channels that peak at 300-500 mb minimize bias errors. For a cloud climatology, the bias errors are most critical.

  14. Calculation of key reduction for B92 QKD protocol

    NASA Astrophysics Data System (ADS)

    Mehic, Miralem; Partila, Pavol; Tovarek, Jaromir; Voznak, Miroslav

    2015-05-01

    It is well known that Quantum Key Distribution (QKD) can be used with the highest level of security for distribution of the secret key, which is further used for symmetric encryption. B92 is one of the oldest QKD protocols. It uses only two non-orthogonal states, each one coding for one bit value. It is much faster and simpler than its predecessors, but with an idealized maximum efficiency of 25% over the quantum channel. B92 consists of several phases in which the initial key is significantly reduced: secret key exchange, extraction of the raw key (sifting), error rate estimation, key reconciliation, and privacy amplification. QKD communication is performed over two channels: the quantum channel and the classical public channel. In order to prevent a man-in-the-middle attack and modification of messages on the public channel, authentication of exchanged values must be performed. We used Wegman-Carter authentication because it provides an upper bound on the symmetric authentication key required. We explain the reduction of the initial key in each of the QKD phases.
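
    Wegman-Carter authentication combines an almost-universal hash with one-time-pad masking of the tag. The toy below uses a polynomial-evaluation hash over a prime field; the prime, keys, and message blocks are illustrative only, not a secure parameter choice.

```python
P = (1 << 61) - 1  # a Mersenne prime; illustrative field size only

def poly_hash(blocks, r):
    """Polynomial-evaluation universal hash: Horner evaluation of the
    message blocks at the secret point r, modulo P."""
    acc = 0
    for b in blocks:
        acc = (acc * r + b) % P
    return acc

def wc_tag(blocks, r, otp):
    """Wegman-Carter tag: hash output masked with one-time-pad key material."""
    return poly_hash(blocks, r) ^ otp

tag = wc_tag([1, 2, 3], r=5, otp=7)       # tag for the toy message [1, 2, 3]
tampered = wc_tag([1, 2, 4], r=5, otp=7)  # any message change shifts the hash
```

    The one-time-pad mask is what lets the same hash key r be reused across messages while keeping the forgery probability information-theoretic.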

  15. Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-01-01

    Most BCI systems that rely on EEG signals employ Fourier-based methods for time-frequency decomposition for feature extraction. The band-limited multiple Fourier linear combiner is well suited for such band-limited signals due to its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multiple-channel EEG is not straightforward and remains challenging. As more channels become available, a spatial filter is required to eliminate noise and preserve the useful information. Moreover, multiple-channel EEG also adds high dimensionality to the frequency feature space. Feature selection is then required to stabilize the performance of the classifier. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes them with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with the covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.
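
    A much-reduced version of the idea, an evolutionary search over a binary feature mask scored by classification error, can be sketched with a (1+1)-EA and a nearest-class-mean classifier; this is not the paper's CMA-ES design, and the synthetic data and parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: only the first 3 of 10 features are informative.
n, d = 200, 10
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :3] += 2.0 * y[:, None]

def error_rate(mask):
    """Nearest-class-mean classification error using only selected features."""
    if not mask.any():
        return 1.0
    Xs = X[:, mask]
    m0, m1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xs - m1, axis=1) < np.linalg.norm(Xs - m0, axis=1)
    return np.mean(pred != y)

# (1+1)-EA: flip each mask bit with probability 1/d, keep the child if no worse.
mask = rng.random(d) < 0.5
best = error_rate(mask)
for _ in range(200):
    child = mask ^ (rng.random(d) < 1.0 / d)
    e = error_rate(child)
    if e <= best:
        mask, best = child, e
```

    The fitness function here is the raw training error; the paper optimizes classification error jointly with spatial-filter weights, which this sketch omits.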

  16. Performance analysis of dual-hop optical wireless communication systems over k-distribution turbulence channel with pointing error

    NASA Astrophysics Data System (ADS)

    Mishra, Neha; Sriram Kumar, D.; Jha, Pranav Kumar

    2017-06-01

    In this paper, we investigate the performance of dual-hop free-space optical (FSO) communication systems under the effect of strong atmospheric turbulence together with misalignment effects (pointing error). We consider a relay-assisted link using the decode-and-forward (DF) relaying protocol between source and destination, with the assumption that channel state information is available at both transmitting and receiving terminals. The atmospheric turbulence channels are modeled by the k-distribution with pointing error impairment. Exact closed-form expressions are derived for the outage probability and bit error rate and illustrated through numerical plots. Further, BER results are compared for different modulation schemes.

  17. Dereverberation and denoising based on generalized spectral subtraction by multi-channel LMS algorithm using a small-scale microphone array

    NASA Astrophysics Data System (ADS)

    Wang, Longbiao; Odani, Kyohei; Kai, Atsuhiko

    2012-12-01

    A blind dereverberation method based on power spectral subtraction (SS) using a multi-channel least mean squares algorithm was previously proposed to suppress reverberant speech without additive noise. The results of isolated word speech recognition experiments showed that this method achieved significant improvements over conventional cepstral mean normalization (CMN) in a reverberant environment. In this paper, we propose a blind dereverberation method based on generalized spectral subtraction (GSS), which has been shown to be effective for noise reduction, instead of power SS. Furthermore, we extend the missing feature theory (MFT), which was initially proposed to enhance robustness against additive noise, to dereverberation. A one-stage dereverberation and denoising method based on GSS is presented to simultaneously suppress both the additive noise and nonstationary multiplicative noise (reverberation). The proposed dereverberation method based on GSS with MFT is evaluated on a large-vocabulary continuous speech recognition task. When additive noise is absent, the dereverberation method based on GSS with MFT using only 2 microphones achieves relative word error reduction rates of 11.4% and 32.6% compared to the dereverberation method based on power SS and to conventional CMN, respectively. For reverberant and noisy speech, the dereverberation and denoising method based on GSS achieves a relative word error reduction rate of 12.8% compared to conventional CMN with a GSS-based additive noise reduction method. We also analyze the factors affecting the compensation parameter estimation for the SS-based dereverberation method, such as the number of channels (microphones), the length of reverberation to be suppressed, and the length of the utterance used for parameter estimation. The experimental results showed that the SS-based method is robust in a variety of reverberant environments for both isolated and continuous speech recognition and under various parameter estimation conditions.
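
    Generalized spectral subtraction replaces the fixed power-domain exponent with an adjustable one; a minimal single-frame sketch (with made-up over-subtraction and flooring parameters) is:

```python
import numpy as np

def gss(mag_noisy, mag_noise, gamma=2.0, alpha=1.0, beta=0.01):
    """Generalized spectral subtraction on magnitude spectra.

    gamma=2 recovers power spectral subtraction; gamma=1 gives magnitude SS.
    alpha is an over-subtraction factor and beta sets the spectral floor.
    """
    cleaned = mag_noisy ** gamma - alpha * mag_noise ** gamma
    floor = beta * mag_noise ** gamma
    return np.maximum(cleaned, floor) ** (1.0 / gamma)

# One toy frame of two bins; the second bin is dominated by the noise estimate.
out = gss(np.array([1.0, 0.5]), np.array([0.3, 0.6]))
```

    For dereverberation, the "noise" spectrum is replaced by an estimate of the late-reverberation spectrum rather than additive noise.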

  18. Blind estimation of blur in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir

    2017-10-01

    Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can sometimes be more severely degraded by blur. When no knowledge is available about the degradations present in the original image, only blind restoration methods can be considered. By blind, we mean no knowledge of the blur point spread function (PSF), the original latent channel, or the noise level. In this study, we address the blind restoration of the degraded channels component-wise, according to a sequential scheme: for each degraded channel, the scheme estimates the blur PSF in a first stage and deconvolves the degraded channel in a second and final stage using the previously estimated PSF. We propose a new component-wise blind method for effectively and accurately estimating the blur PSF. This method follows recent approaches suggesting the detection, selection, and use of sufficiently salient edges in the currently processed channel to support the regularized blur PSF estimation. Several modifications are beneficially introduced in our work. A new selection of salient edges through adequate thresholding of the cumulative distribution of their corresponding gradient magnitudes is introduced. In addition, quasi-automatic and spatially adaptive tuning of the involved regularization parameters is considered. To demonstrate the applicability and higher efficiency of the proposed method, we compare it against the method it originates from and four representative edge-sparsifying regularized methods from the literature already assessed in a previous work. Our attention is mainly paid to the objective analysis (via the l1-norm) of the blur PSF estimation accuracy. The tests are performed on a synthetic hyperspectral image built from samples of classified areas of a real-life hyperspectral image, in order to benefit from a realistic spatial distribution of reference spectral signatures to recover after synthetic degradation. The synthetic hyperspectral image was successively degraded with eight real blurs taken from the literature, each with a different support size. Conclusions, practical recommendations, and perspectives are drawn from the experimental results.

  19. Delay Tracking of Spread-Spectrum Signals for Indoor Optical Ranging

    PubMed Central

    Salido-Monzú, David; Martín-Gorostiza, Ernesto; Lázaro-Galilea, José Luis; Martos-Naya, Eduardo; Wieser, Andreas

    2014-01-01

    Delay tracking of spread-spectrum signals is widely used for ranging in radio frequency based navigation. Its use in non-coherent optical ranging, however, has not been extensively studied since optical channels are less subject to narrowband interference situations where these techniques become more useful. In this work, an early-late delay-locked loop adapted to indoor optical ranging is presented and analyzed. The specific constraints of free-space infrared channels in this context substantially differ from those typically considered in radio frequency applications. The tracking stage is part of an infrared differential range measuring system with application to mobile target indoor localization. Spread-spectrum signals are used in this context to provide accurate ranging while reducing the effect of multipath interferences. The performance of the stage regarding noise and dynamic errors is analyzed and validated, providing expressions that allow an adequate selection of the design parameters depending on the expected input signal characteristics. The behavior of the stage in a general multipath scenario is also addressed to estimate the multipath error bounds. The results, evaluated under realistic conditions corresponding to an 870 nm link with 25 MHz chip-rate, built with low-cost up-to-date devices, show that an overall error below 6% of a chip time can be achieved. PMID:25490585
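
    The early-late discriminator at the heart of such a delay-locked loop can be illustrated on a toy spreading sequence; the code length, samples per chip, and delays below are arbitrary, and noise and multipath are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
chips = rng.choice([-1.0, 1.0], size=255)   # toy PRN-like spreading sequence
sps = 4                                     # samples per chip
sig = np.repeat(chips, sps)                 # rectangular chip pulses
rx = np.roll(sig, 5)                        # "received" copy, delayed 5 samples

def el_discriminator(tau_hat, d=2):
    """Early-late discriminator: ~0 at the true delay; its sign tells the
    loop which way to slew the local code phase."""
    early = np.dot(np.roll(sig, tau_hat - d), rx) / sig.size
    late = np.dot(np.roll(sig, tau_hat + d), rx) / sig.size
    return early - late

on_time = el_discriminator(5)    # balanced at the true delay
early_hyp = el_discriminator(4)  # negative: hypothesis ahead of the true delay
late_hyp = el_discriminator(6)   # positive: hypothesis behind the true delay
```

    In a closed loop, the discriminator output drives a filter that nudges tau_hat toward the zero crossing, which is the delay estimate.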

  20. The influence of the uplink noise on the performance of satellite data transmission systems

    NASA Astrophysics Data System (ADS)

    Dewal, Vrinda P.

    The problem of transmission of binary phase shift keying (BPSK) modulated digital data through a bandlimited nonlinear satellite channel in the presence of uplink and downlink Gaussian noise and intersymbol interference is examined. The satellite transponder is represented by a zero-memory bandpass nonlinearity with AM/AM conversion. The proposed optimum linear receiver structure consists of tapped-delay lines followed by a decision device. The linear receiver is designed to minimize the mean square error, which is a function of the intersymbol interference and the uplink and downlink noise. The minimum mean square error (MMSE) equalizer is derived using Wiener-Kolmogorov theory. In this receiver, the decision about the transmitted signal is made by taking into account the received sequence of the present sample and the interfering past and future samples, which represent the intersymbol interference (ISI). Illustrative examples of the receiver structures are considered for nonlinear channels with symmetrical and asymmetrical frequency responses of the transmitter filter. The transponder nonlinearity is simulated by a polynomial using only the first- and third-order terms. A computer simulation determines the tap-gain coefficients of the MMSE equalizer that adapt to the various uplink and downlink noise levels. The performance of the MMSE equalizer is evaluated in terms of an estimate of the average probability of error.
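
    For a known linear discrete channel and unit-power symbols, the Wiener/MMSE design of a tapped-delay-line equalizer reduces to solving (HHᵀ + σ²I)w = He_delay; the channel taps, window sizes, and noise level below are invented for illustration, and the satellite channel's nonlinearity is not modeled.

```python
import numpy as np

h = np.array([0.1, 1.0, 0.3])   # hypothetical channel impulse response
N, L = 8, len(h)                # symbols in the window, channel length
delay, noise_var = 4, 0.01      # target symbol index and noise variance

# H maps N unit-power symbols to N+L-1 received samples (linear convolution).
H = np.zeros((N + L - 1, N))
for j in range(N):
    H[j:j + L, j] = h

R = H @ H.T + noise_var * np.eye(N + L - 1)  # received autocorrelation matrix
p = H[:, delay]                              # cross-correlation with s[delay]
w = np.linalg.solve(R, p)                    # MMSE (Wiener) tap weights

mmse = 1.0 - p @ w                           # residual mean squared error
```

    The residual MMSE shrinks as the noise variance falls, since this toy channel is exactly invertible from the over-determined observation window.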

  1. State-space estimation of the input stimulus function using the Kalman filter: a communication system model for fMRI experiments.

    PubMed

    Ward, B Douglas; Mazaheri, Yousef

    2006-12-15

    The blood oxygenation level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments in response to input stimuli is temporally delayed and distorted due to the blurring effect of the voxel hemodynamic impulse response function (IRF). Knowledge of the IRF, obtained during the same experiment, or as the result of a separate experiment, can be used to dynamically obtain an estimate of the input stimulus function. Reconstruction of the input stimulus function allows the fMRI experiment to be evaluated as a communication system. The input stimulus function may be considered as a "message" which is being transmitted over a noisy "channel", where the "channel" is characterized by the voxel IRF. Following reconstruction of the input stimulus function, the received message is compared with the transmitted message on a voxel-by-voxel basis to determine the transmission error rate. Reconstruction of the input stimulus function provides insight into actual brain activity during task activation with less temporal blurring, and may be considered as a first step toward estimation of the true neuronal input function.
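
    As a much-simplified illustration of the state-space predict/update machinery involved (a scalar random-walk filter, not the voxelwise IRF deconvolution described above; all parameters are arbitrary):

```python
import numpy as np

def kalman_1d(z, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter over a sequence of observations z.
    q: process-noise variance, r: measurement-noise variance."""
    x, p, out = x0, p0, []
    for zk in z:
        p = p + q                   # predict: state variance grows
        k = p / (p + r)             # Kalman gain
        x = x + k * (zk - x)        # update with the innovation
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

est = kalman_1d(np.ones(50))        # tracking a constant unit-level "input"
```

    In the fMRI setting the state would be the unknown stimulus sequence and the observation model would convolve it with the voxel IRF; this sketch keeps only the recursive estimation core.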

  2. A Streamflow Statistics (StreamStats) Web Application for Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Kula, Stephanie P.; Puskas, Barry M.

    2006-01-01

    A StreamStats Web application was developed for Ohio that implements equations for estimating a variety of streamflow statistics including the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year peak streamflows, mean annual streamflow, mean monthly streamflows, harmonic mean streamflow, and 25th-, 50th-, and 75th-percentile streamflows. StreamStats is a Web-based geographic information system application designed to facilitate the estimation of streamflow statistics at ungaged locations on streams. StreamStats can also serve precomputed streamflow statistics determined from streamflow-gaging station data. The basic structure, use, and limitations of StreamStats are described in this report. To facilitate the level of automation required for Ohio's StreamStats application, the technique used by Koltun (2003) for computing main-channel slope was replaced with a new computationally robust technique. The new channel-slope characteristic, referred to as SL10-85, differed from the National Hydrography Dataset-based channel slope values (SL) reported by Koltun (2003) by an average of -28.3 percent, with the median change being -13.2 percent. In spite of the differences, the two slope measures are strongly correlated. The change in channel slope values resulting from the change in computational method necessitated revision of the full-model equations for flood-peak discharges originally presented by Koltun (2003). Average standard errors of prediction for the revised full-model equations presented in this report increased by a small amount over those reported by Koltun (2003), with increases ranging from 0.7 to 0.9 percent. Mean percentage changes in the revised regression and weighted flood-frequency estimates relative to regression and weighted estimates reported by Koltun (2003) were small, ranging from -0.72 to -0.25 percent and -0.22 to 0.07 percent, respectively.
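
    The SL10-85 characteristic is commonly defined as the slope between the points 10 and 85 percent of the main-channel length upstream of the outlet; a sketch of that computation (with a hypothetical, uniformly sloping profile) is:

```python
import numpy as np

def sl_10_85(dist_upstream_m, elev_m):
    """Slope between the points at 10% and 85% of main-channel length
    above the outlet, interpolated along the channel profile.

    dist_upstream_m: ascending distances along the channel from the outlet;
    elev_m: channel elevations at those distances.
    """
    total = dist_upstream_m[-1]
    e10 = np.interp(0.10 * total, dist_upstream_m, elev_m)
    e85 = np.interp(0.85 * total, dist_upstream_m, elev_m)
    return (e85 - e10) / (0.75 * total)

# A hypothetical uniform 1% grade over a 10 km channel profile.
slope = sl_10_85(np.array([0.0, 10_000.0]), np.array([100.0, 200.0]))
```

    Clipping the lowest 10% and highest 15% of the profile makes the measure less sensitive to outlet backwater and steep headwater reaches than an end-to-end slope.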

  3. Near-field Oblique Remote Sensing of Stream Water-surface Elevation, Slope, and Surface Velocity

    NASA Astrophysics Data System (ADS)

    Minear, J. T.; Kinzel, P. J.; Nelson, J. M.; McDonald, R.; Wright, S. A.

    2014-12-01

    A major challenge for estimating discharges during flood events or in steep channels is the difficulty and hazard inherent in obtaining in-stream measurements. One possible solution is to use near-field remote sensing to obtain simultaneous water-surface elevations, slope, and surface velocities. In this test case, we utilized Terrestrial Laser Scanning (TLS) to remotely measure water-surface elevations and slope in combination with surface velocities estimated from particle image velocimetry (PIV) obtained by video-camera and/or infrared camera. We tested this method at several sites in New Mexico and Colorado using independent validation data consisting of in-channel measurements from survey-grade GPS and Acoustic Doppler Current Profiler (ADCP) instruments. Preliminary results indicate that for relatively turbid or steep streams, TLS collects tens of thousands of water-surface elevations and slopes in minutes, much faster than conventional means and at relatively high precision, at least as good as continuous survey-grade GPS measurements. Estimated surface velocities from this technique are within 15% of measured velocity magnitudes and within 10 degrees from the measured velocity direction (using extrapolation from the shallowest bin of the ADCP measurements). Accurately aligning the PIV results into Cartesian coordinates appears to be one of the main sources of error, primarily due to the sensitivity at these shallow oblique look angles and the low numbers of stationary objects for rectification. Combining remotely-sensed water-surface elevations, slope, and surface velocities produces simultaneous velocity measurements from a large number of locations in the channel and is more spatially extensive than traditional velocity measurements. 
These factors make this technique useful for improving estimates of flow measurements during flood flows and in steep channels while also decreasing the difficulty and hazard associated with making measurements in these conditions.

  4. Performance analysis of an OAM multiplexing-based MIMO FSO system over atmospheric turbulence using space-time coding with channel estimation.

    PubMed

    Zhang, Yan; Wang, Ping; Guo, Lixin; Wang, Wei; Tian, Hongxin

    2017-08-21

    The average bit error rate (ABER) performance of an orbital angular momentum (OAM) multiplexing-based free-space optical (FSO) system with multiple-input multiple-output (MIMO) architecture has been investigated over atmospheric turbulence, considering channel estimation and space-time coding. The impact of different types of space-time coding, modulation orders, turbulence strengths, and receive antenna numbers on the transmission performance of this OAM-FSO system is also taken into account. On the basis of the proposed system model, the analytical expressions of the received signals carried by the k-th OAM mode of the n-th receive antenna are derived for the vertical Bell Labs layered space-time (V-BLAST) and space-time block code (STBC) schemes, respectively. With the help of a channel estimator based on the least squares (LS) algorithm, the zero-forcing with ordered successive interference cancellation (ZF-OSIC) equalizer of the V-BLAST scheme and the Alamouti decoder of the STBC scheme are adopted to mitigate the performance degradation induced by atmospheric turbulence. The results show that the ABERs obtained with channel estimation are in excellent agreement with those of turbulence phase screen simulations. The ABERs of this OAM multiplexing-based MIMO system deteriorate with increasing turbulence strength. Both V-BLAST and STBC schemes can significantly improve the system performance by mitigating the distortions of atmospheric turbulence as well as additive white Gaussian noise (AWGN). In addition, the ABER performance of both space-time coding schemes can be further enhanced by increasing the number of receive antennas for diversity gain, and STBC outperforms V-BLAST in this system for data recovery. This work is beneficial to OAM FSO system design.
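
    The Alamouti combining used by the STBC branch admits a compact noise-free sketch for a two-transmit, one-receive link; the channel gains and symbols below are arbitrary, and a real receiver would divide by estimated (not true) channel coefficients.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Rows are time slots, columns transmit antennas: [[s1, s2], [-s2*, s1*]]."""
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining; with perfect CSI and no noise it recovers s1, s2."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat

# Hypothetical flat-fading gains and QPSK-like symbols, noise-free:
h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j
s1, s2 = 1 + 1j, -1 + 1j
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]   # received sample, slot 1
r2 = h1 * X[1, 0] + h2 * X[1, 1]   # received sample, slot 2
s1_hat, s2_hat = alamouti_decode(r1, r2, h1, h2)
```

    The orthogonal code structure is what makes the two symbol estimates decouple, so each enjoys the full |h1|² + |h2|² diversity gain.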

  5. Beam-Switch Transient Effects in the RF Path of the ICAPA Receive Phased Array Antenna

    NASA Technical Reports Server (NTRS)

    Sands, O. Scott

    2003-01-01

    When the beam of a Phased Array Antenna (PAA) is switched from one pointing direction to another, transient effects in the RF path of the antenna are observed. Testing described in the report has revealed implementation-specific transient effects in the RF channel that are associated with digital clocking pulses that occur with transfer of data from the Beam Steering Controller (BSC) to the digital electronics of the PAA under test. The testing described here provides an initial assessment of the beam-switch phenomena by digitally acquiring time series of the RF communications channel, under CW excitation, during the period of time that the beam-switch transient occurs. Effects are analyzed using time-frequency distributions and instantaneous frequency estimation techniques. The results of tests conducted with CW excitation support further Bit-Error-Rate (BER) testing of the PAA communication channel.

  6. Performance of DPSK with convolutional encoding on time-varying fading channels

    NASA Technical Reports Server (NTRS)

    Mui, S. Y.; Modestino, J. W.

    1977-01-01

    The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.

  7. Suspended sediment fluxes in a tidal wetland: Measurement, controlling factors, and error analysis

    USGS Publications Warehouse

    Ganju, N.K.; Schoellhamer, D.H.; Bergamaschi, B.A.

    2005-01-01

    Suspended sediment fluxes to and from tidal wetlands are of increasing concern because of habitat restoration efforts, wetland sustainability as sea level rises, and potential contaminant accumulation. We measured water and sediment fluxes through two channels on Browns Island, at the landward end of San Francisco Bay, United States, to determine the factors that control sediment fluxes on and off the island. In situ instrumentation was deployed between October 10 and November 13, 2003. Acoustic Doppler current profilers and the index velocity method were employed to calculate water fluxes. Suspended sediment concentrations (SSC) were determined with optical sensors and cross-sectional water sampling. All procedures were analyzed for their contribution to total error in the flux measurement. The inability to close the water balance and determination of constituent concentration were identified as the main sources of error; total error was 27% for net sediment flux. The water budget for the island was computed with an unaccounted input of 0.20 m3 s-1 (22% of mean inflow), after considering channel flow, change in water storage, evapotranspiration, and precipitation. The net imbalance may be a combination of groundwater seepage, overland flow, and flow through minor channels. Change of island water storage, caused by local variations in water surface elevation, dominated the tidally averaged water flux. These variations were mainly caused by wind and barometric pressure change, which alter regional water levels throughout the Sacramento-San Joaquin River Delta. Peak instantaneous ebb flow was 35% greater than peak flood flow, indicating an ebb-dominant system, though dominance varied with the spring-neap cycle. SSC were controlled by wind-wave resuspension adjacent to the island and local tidal currents that mobilized sediment from the channel bed.
During neap tides sediment was imported onto the island, but during spring tides sediment was exported because the main channel became ebb dominant. Over the 34-d monitoring period, 14,000 kg of suspended sediment were imported through the two channels. The water imbalance may affect the sediment balance if the unmeasured water transport pathways are capable of transporting large amounts of sediment. We estimate a maximum of 2,800 kg of sediment may have been exported through unmeasured pathways, giving a minimum net import of 11,200 kg. Sediment flux measurements provide insight into tidal to fortnightly marsh sedimentation processes, especially in complex systems where sedimentation is spatially and temporally variable. © 2005 Estuarine Research Federation.

  8. Reaeration equations derived from U.S. geological survey database

    USGS Publications Warehouse

    Melching, C.S.; Flores, H.E.

    1999-01-01

    Accurate estimation of the reaeration-rate coefficient (K2) is extremely important for waste-load allocation. Currently available K2 estimation equations generally yield poor estimates when applied to stream conditions different from those for which the equations were derived, because they were derived from small databases composed of potentially highly inaccurate measurements. A large data set of K2 measurements made with tracer-gas methods was compiled from U.S. Geological Survey studies. This compilation included 493 reaches on 166 streams in 23 states. Careful screening to detect and eliminate erroneous measurements reduced the data set to 371 measurements. These measurements were divided into four subgroups on the basis of flow regime (channel control or pool and riffle) and stream scale (discharge greater than or less than 0.556 m3/s). Multiple linear regression in logarithms was applied to relate K2 to 12 stream hydraulic and water-quality characteristics. The resulting best-estimation equations had the form of semiempirical equations that included the rate of energy dissipation and discharge or depth and width as variables. For equation verification, a data set of K2 measurements made with tracer-gas procedures by other agencies was compiled from the literature. This compilation included 127 reaches on at least 24 streams in at least seven states. The standard error of estimate obtained when applying the developed equations to the U.S. Geological Survey data set ranged from 44 to 61%, whereas the standard error of estimate was 78% when applied to the verification data set.
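
    The regression-in-logarithms step described above can be sketched as follows (synthetic data; the predictor names, "true" coefficients, and noise level are illustrative assumptions, not the study's actual dataset or fitted equations):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# synthetic hydraulic predictors: energy-dissipation proxy and discharge
slope_vel = rng.lognormal(mean=-2.0, sigma=0.5, size=n)   # assumed S*V term
discharge = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # assumed Q, m3/s
# assumed "true" semiempirical relation: K2 = a * (SV)^0.8 * Q^-0.3 * noise
k2 = 50.0 * slope_vel ** 0.8 * discharge ** -0.3 * rng.lognormal(0.0, 0.1, n)

# fit log10(K2) = c0 + c1*log10(SV) + c2*log10(Q) by ordinary least squares
X = np.column_stack([np.ones(n), np.log10(slope_vel), np.log10(discharge)])
y = np.log10(k2)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# standard error of estimate in percent, the error measure the study reports
resid = y - X @ coef
see_pct = 100.0 * (10.0 ** resid.std(ddof=3) - 1.0)
```

    Fitting in log space turns the power-law form of the semiempirical equations into a linear model, which is why the study's standard errors are naturally expressed as percentages.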

  9. A Game Theoretic Framework for Power Control in Wireless Sensor Networks (POSTPRINT)

    DTIC Science & Technology

    2010-02-01

    which the sensor nodes compute based on past observations. Correspondingly, Pe can only be estimated; for example, with a noncoherent FSK modula...bit error probability for the link (i → j) is given by some inverse function of γj. For example, with a noncoherent FSK modulation scheme, Pe = 0.5e^(−γj)...show the results for two different modulation schemes: DPSK and noncoherent FSK. As expected, with improvement in channel condition, i.e., with increase

  10. Evaluating flow cytometer performance with weighted quadratic least squares analysis of LED and multi-level bead data.

    PubMed

    Parks, David R; El Khettabi, Faysal; Chase, Eric; Hoffman, Robert A; Perfetto, Stephen P; Spidlen, Josef; Wood, James C S; Moore, Wayne A; Brinkman, Ryan R

    2017-03-01

    We developed a fully automated procedure for analyzing data from LED pulses and multilevel bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than that from multilevel bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fits the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. © 2017 International Society for Advancement of Cytometry.
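
    A weighted quadratic fit of variance against mean signal, of the general kind the procedure above performs, can be sketched like this (synthetic bead-like data; the parameter values and the weighting choice are illustrative assumptions, not the flowQB implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic multi-level bead data: variance grows quadratically with signal
mean_signal = np.array([50.0, 200.0, 800.0, 3000.0, 12000.0, 50000.0])
true = (400.0, 2.5, 1e-5)   # background var, 1/(photoelectron scale), CV^2 term
var_true = true[0] + true[1] * mean_signal + true[2] * mean_signal ** 2
n_events = 5000
# measured variances carry chi-square sampling error that shrinks with n_events
var_meas = var_true * rng.chisquare(n_events - 1, size=6) / (n_events - 1)

# weighting: Var(variance estimate) ~ 2*var^2/n, so the residual weight
# passed to np.polyfit (which multiplies residuals) is sqrt(n/2)/var
w = np.sqrt(n_events / 2.0) / var_meas
c2, c1, c0 = np.polyfit(mean_signal, var_meas, deg=2, w=w)
```

    The linear coefficient plays the role of the inverse photoelectron scale and the intercept the role of the background, which is why standard errors on these fitted parameters matter for downstream sensitivity comparisons.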

  11. Simplified Antenna Group Determination of RS Overhead Reduced Massive MIMO for Wireless Sensor Networks.

    PubMed

    Lee, Byung Moo

    2017-12-29

    Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices by exploiting their large number of transmitter (TX) antennas. However, one of the big obstacles to the realization of the massive MIMO system is the overhead of the reference signal (RS), because the number of RSs is proportional to the number of TX antennas and/or related user equipments (UEs). It has already been reported that antenna group-based RS overhead reduction can be very effective for the efficient operation of massive MIMO, but the method of deciding the number of antennas needed in each group remains an open question. In this paper, we propose a simplified scheme for determining the number of antennas needed in each group for RS overhead reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices is a framework for configuring wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) using zero-forcing (ZF) and matched filtering (MF) precoding for RS overhead reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the method of channel estimation. Second, based on the closed-form approximations, we present an efficient algorithm for determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of the SE. It is verified with theoretical analysis and simulation that the proposed algorithm works well, and thus can be used as an important tool for massive MIMO systems to support many distributed IoT devices.

  12. Simplified Antenna Group Determination of RS Overhead Reduced Massive MIMO for Wireless Sensor Networks

    PubMed Central

    2017-01-01

    Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices by exploiting their large number of transmitter (TX) antennas. However, one of the big obstacles to the realization of the massive MIMO system is the overhead of the reference signal (RS), because the number of RSs is proportional to the number of TX antennas and/or related user equipments (UEs). It has already been reported that antenna group-based RS overhead reduction can be very effective for the efficient operation of massive MIMO, but the method of deciding the number of antennas needed in each group remains an open question. In this paper, we propose a simplified scheme for determining the number of antennas needed in each group for RS overhead reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices is a framework for configuring wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) using zero-forcing (ZF) and matched filtering (MF) precoding for RS overhead reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the method of channel estimation. Second, based on the closed-form approximations, we present an efficient algorithm for determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of the SE. It is verified with theoretical analysis and simulation that the proposed algorithm works well, and thus can be used as an important tool for massive MIMO systems to support many distributed IoT devices. PMID:29286339

  13. Improved Temperature Sounding and Quality Control Methodology Using AIRS/AMSU Data: The AIRS Science Team Version 5 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John M.; Iredell, Lena; Keita, Fricky

    2009-01-01

    This paper describes the AIRS Science Team Version 5 retrieval algorithm in terms of its three most significant improvements over the methodology used in the AIRS Science Team Version 4 retrieval algorithm. Improved physics in Version 5 allows for use of AIRS clear column radiances in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of clear column radiances R(sub i) for all channels. This new approach allows for the generation of more accurate values of R(sub i) and T(p) under most cloud conditions. Secondly, Version 5 contains a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 also contains for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS Only sounding methodology, referred to as AIRS Version 5 AO, was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Results are shown comparing the relative performance of the AIRS Version 4, Version 5, and Version 5 AO for the single day, January 25, 2003. The Goddard DISC is now generating and distributing products derived using the AIRS Science Team Version 5 retrieval algorithm. This paper also describes the Quality Control flags contained in the DISC AIRS/AMSU retrieval products and their intended use for scientific research purposes.

  14. A Comparison Between Heliosat-2 and Artificial Neural Network Methods for Global Horizontal Irradiance Retrievals over Desert Environments

    NASA Astrophysics Data System (ADS)

    Ghedira, H.; Eissa, Y.

    2012-12-01

    Global horizontal irradiance (GHI) retrievals at the surface of any given location could be used for preliminary solar resource assessments. For greater accuracy, the direct normal irradiance (DNI) and diffuse horizontal irradiance (DHI) are also required to estimate the global tilt irradiance, mainly used for fixed flat plate collectors. Two different satellite-based models for solar irradiance retrievals have been applied over the desert environment of the United Arab Emirates (UAE). Both models employ channels of the SEVIRI instrument, onboard the geostationary satellite Meteosat Second Generation, as their main inputs. The satellite images used in this study have a temporal resolution of 15 min and a spatial resolution of 3 km. The objective of this study is to compare the GHI retrieved using the Heliosat-2 method with that retrieved by an artificial neural network (ANN) ensemble method over the UAE. The high-resolution visible channel of SEVIRI is used in the Heliosat-2 method to derive the cloud index. The cloud index is then used to compute the cloud transmission, while the cloud-free GHI is computed from the Linke turbidity factor. The product of the cloud transmission and the cloud-free GHI denotes the estimated GHI. A constant underestimation is observed in the estimated GHI over the dataset available in the UAE. Therefore, the cloud-free DHI equation in the model was recalibrated to fix the bias. After recalibration, results over the UAE show a root mean square error (RMSE) value of 10.1% and a mean bias error (MBE) of -0.5%. As for the ANN approach, six thermal channels of SEVIRI were used to estimate the DHI and the total optical depth of the atmosphere (δ). An ensemble approach is employed to obtain better generalizability of the results, as opposed to using one single weak network. The DNI is then computed from the estimated δ using the Beer-Bouguer-Lambert law. The GHI is computed from the DNI and DHI estimates.
The RMSE for the estimated GHI obtained over an independent dataset over the UAE is 7.2% and the MBE is +1.9%. The results obtained by the two methods have shown that both the recalibrated Heliosat-2 and the ANN ensemble methods estimate the GHI at a 15-min resolution with high accuracy. The advantage of the ANN ensemble approach is that it derives the GHI from accurate DNI and DHI estimates. The DNI and DHI estimates are valuable when computing the global tilt irradiance. Also, accurate DNI estimates are beneficial for preliminary site selection for concentrating solar powered plants.

  15. Subarray Processing for Projection-based RFI Mitigation in Radio Astronomical Interferometers

    NASA Astrophysics Data System (ADS)

    Burnett, Mitchell C.; Jeffs, Brian D.; Black, Richard A.; Warnick, Karl F.

    2018-04-01

    Radio Frequency Interference (RFI) is a major problem for observations in Radio Astronomy (RA). Adaptive spatial filtering techniques such as subspace projection are promising candidates for RFI mitigation; however, for radio interferometric imaging arrays, these have primarily been used in engineering demonstration experiments rather than mainstream scientific observations. This paper considers one reason that adoption of such algorithms is limited: RFI decorrelates across the interferometric array because of long baseline lengths. This occurs when the relative RFI time delay along a baseline is large compared to the frequency channel inverse bandwidth used in the processing chain. Maximum achievable excision of the RFI is limited by covariance matrix estimation error when identifying interference subspace parameters, and decorrelation of the RFI introduces errors that corrupt the subspace estimate, rendering subspace projection ineffective over the entire array. In this work, we present an algorithm that overcomes this challenge of decorrelation by applying subspace projection via subarray processing (SP-SAP). Each subarray is designed to have a set of elements with high mutual correlation in the interferer for better estimation of subspace parameters. In an RFI simulation scenario for the proposed ngVLA interferometric imaging array with 15 kHz channel bandwidth for correlator processing, we show that compared to the former approach of applying subspace projection on the full array, SP-SAP improves mitigation of the RFI on the order of 9 dB. An example of improved image synthesis and reduced RFI artifacts for a simulated image “phantom” using the SP-SAP algorithm is presented.
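
    The core subspace-projection operation the paper builds on can be sketched for a small array (a generic rank-1 interferer illustration, not the SP-SAP algorithm itself; the array size, interferer power, and spatial signature are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_ant, n_samp = 8, 4096
# array data: one strong interferer (rank-1 spatial signature) plus white noise
steering = np.exp(2j * np.pi * rng.random(n_ant))    # interferer spatial signature
rfi = 10.0 * (rng.normal(size=n_samp) + 1j * rng.normal(size=n_samp))
x = np.outer(steering, rfi) + (rng.normal(size=(n_ant, n_samp))
                               + 1j * rng.normal(size=(n_ant, n_samp)))
R = x @ x.conj().T / n_samp                          # sample covariance estimate

# subspace projection: project out the dominant eigenvector of R
eigvals, eigvecs = np.linalg.eigh(R)                 # eigenvalues ascending
u = eigvecs[:, -1]                                   # interference subspace (rank 1)
P = np.eye(n_ant) - np.outer(u, u.conj())            # perpendicular projector
R_clean = P @ R @ P.conj().T

power_before = np.real(np.trace(R))
power_after = np.real(np.trace(R_clean))
```

    When the interferer decorrelates across long baselines, its covariance signature is no longer rank-1 over the full array, which is exactly why the paper restricts this projection to highly correlated subarrays.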

  16. Quality assessment of color images based on the measure of just noticeable color difference

    NASA Astrophysics Data System (ADS)

    Chou, Chun-Hsien; Hsu, Yun-Hsiang

    2014-01-01

    Accurate assessment of the quality of color images is an important step in many image processing systems that convey visual information of the reproduced images. An accurate objective image quality assessment (IQA) method is expected to give assessment results that agree closely with subjective assessment. To assess the quality of color images, many approaches simply apply a metric for assessing the quality of grayscale images to each of the three color channels of the color image, neglecting the correlation among the three color channels. In this paper, a metric for assessing color image quality is proposed, in which the model of variable just-noticeable color difference (VJNCD) is employed to estimate the visibility thresholds of distortion inherent in each color pixel. With the estimated visibility thresholds of distortion, the proposed metric measures the average perceptible distortion in terms of the quantized distortion according to the perceptual error map similar to that defined by the National Bureau of Standards (NBS) for converting the color difference enumerated by CIEDE2000 to the objective score of perceptual quality assessment. The perceptual error map in this case is designed for each pixel according to the visibility threshold estimated by the VJNCD model. The performance of the proposed metric is verified by assessing the test images in the LIVE database, and is compared with those of many well-known IQA metrics. Experimental results indicate that the proposed metric is an effective IQA method that can accurately predict the image quality of color images in terms of the correlation between objective scores and subjective evaluation.

  17. A Novel Method for γ-Photon Depth-of-Interaction Detection in Monolithic Scintillation Crystals

    NASA Astrophysics Data System (ADS)

    Pani, Roberto; Bettiol, Marco; Preziosi, Enrico; Borrazzo, Cristian; Pellegrini, Rosanna; González, Antonio J.; Conde, Pablo; Cinti, Maria Nerina; Fabbri, Andrea; Di Castro, Elisabetta; Majewski, Stan

    2016-10-01

    Achieved spatial resolution of PET systems is often limited by the parallax error due to the lack of information about the Depth of Interaction (DoI) inside the crystal of the incoming 511 keV annihilation photons. The smaller the diameter of the PET ring and the thicker the scintillator are, the more this error affects imaging performance. In this work, a DoI calculator suitable for monolithic scintillation crystals and based on the shape of the scintillation light distribution at the photodetector surface has been proposed. To test the estimator performance, a test PET module with a 50 × 50 × 20 mm monolithic LYSO crystal coupled to a 12 × 12 SiPM array has been employed. In addition, for calibration and validation of the method, Geant4 simulations have also been used. The key result of the application of the proposed DoI estimator is a continuous DoI estimation with an average DoI resolution of about 5 mm in the 20 mm-thick crystal. Benefiting from the DoI estimation capabilities of the method, it has also been possible to achieve additional important goals, first of all reducing the parallax error. First, because the scintillation light collection varies as a function of the 3D position of the interaction of the annihilation photon inside the crystal, a method to correct this response variation via a proper 3D look-up-table is proposed. This has led to an improvement of about 35% in energy resolution. Moreover, a DoI-dependent position algorithm has been proposed, allowing an improvement of both planar (X-Y) position linearity and planar spatial resolution. This algorithm is specifically developed for the rows/columns multi-channel readout logic, which reduces the number of independent channels from N × N to N + N, where N is the number of SiPM photodetection elements (12 in our case) in each row and column. This development was performed in the framework of the MINDView PET/MRI brain imaging project.

  18. Methods for estimating magnitude and frequency of floods in Montana based on data through 1983

    USGS Publications Warehouse

    Omang, R.J.; Parrett, Charles; Hull, J.A.

    1986-01-01

    Equations are presented for estimating flood magnitudes for ungaged sites in Montana based on data through 1983. The State was divided into eight regions based on hydrologic conditions, and separate multiple regression equations were developed for each region. These equations relate annual flood magnitudes and frequencies to basin characteristics and are applicable only to natural flow streams. In three of the regions, equations also were developed relating flood magnitudes and frequencies to basin characteristics and channel geometry measurements. The standard errors of estimate for an exceedance probability of 1% ranged from 39% to 87%. Techniques are described for estimating annual flood magnitude and flood frequency information at ungaged sites based on data from gaged sites on the same stream. Included are curves relating flood frequency information to drainage area for eight major streams in the State. Maximum known flood magnitudes in Montana are compared with estimated 1%-chance flood magnitudes and with maximum known floods in the United States. Values of flood magnitudes for selected exceedance probabilities and values of significant basin characteristics and channel geometry measurements for all gaging stations used in the analysis are tabulated. Included are 375 stations in Montana and 28 nearby stations in Canada and adjoining States. (Author's abstract)

  19. Practical scheme to share a secret key through a quantum channel with a 27.6% bit error rate

    NASA Astrophysics Data System (ADS)

    Chau, H. F.

    2002-12-01

    A secret key shared through quantum key distribution between two cooperative players is secure against any eavesdropping attack allowed by the laws of physics. Yet, such a key can be established only when the quantum channel error rate due to eavesdropping or imperfect apparatus is low. Here, a practical quantum key distribution scheme by making use of an adaptive privacy amplification procedure with two-way classical communication is reported. Then, it is proven that the scheme generates a secret key whenever the bit error rate of the quantum channel is less than 0.5 − 0.1√5 ≈ 27.6%, thereby making it the most error resistant scheme known to date.

  20. Equalization for a page-oriented optical memory system

    NASA Astrophysics Data System (ADS)

    Trelewicz, Jennifer Q.; Capone, Jeffrey

    1999-11-01

    In this work, a method of decision-feedback equalization is developed for a digital holographic channel that experiences moderate-to-severe imaging errors. Decision feedback is utilized, not only where the channel is well-behaved, but also near the edges of the camera grid that are subject to a high degree of imaging error. In addition to these effects, the channel is worsened by typical problems of holographic channels, including non-uniform illumination, dropouts, and stuck bits. The approach described in this paper builds on established methods for performing trained and blind equalization on time-varying channels. The approach is tested on experimental data sets. On most of these data sets, the method of equalization described in this work delivers at least an order of magnitude improvement in bit-error rate (BER) before error-correction coding (ECC). When ECC is introduced, the approach is able to recover stored data with no errors for many of the tested data sets. Furthermore, a low BER was maintained even over a range of small alignment perturbations in the system. It is believed that this equalization method can allow cost reductions to be made in page-memory systems, by allowing for a larger image area per page or less complex imaging components, without sacrificing the low BER required by data storage applications.
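
    A minimal trained decision-feedback equalizer of the general kind described above can be sketched as follows (the channel taps, noise level, and tap count are illustrative assumptions, not the paper's holographic channel model):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
bits = rng.integers(0, 2, n)
sym = 2.0 * bits - 1.0                       # BPSK "pixels" on the page
h = np.array([1.0, 0.4, 0.2])                # assumed inter-symbol interference channel
r = np.convolve(sym, h)[:n] + 0.05 * rng.normal(size=n)

# one-tap feedforward, two-tap feedback DFE with known (trained) coefficients
ff = 1.0 / h[0]                              # feedforward scales the cursor tap
fb = h[1:]                                   # feedback cancels postcursor ISI
decisions = np.zeros(n)
past = np.zeros(2)                           # last two decisions, newest first
for k in range(n):
    z = ff * r[k] - fb @ past                # soft decision after ISI cancellation
    decisions[k] = 1.0 if z >= 0 else -1.0
    past = np.array([decisions[k], past[0]])

ber = np.mean(decisions != sym)
```

    Feeding decisions back cancels postcursor interference without amplifying noise, which is the property that makes DFEs attractive near the heavily distorted edges of the camera grid described above; decision errors can, however, propagate.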

  1. Exploring effective multiplicity in multichannel functional near-infrared spectroscopy using eigenvalues of correlation matrices

    PubMed Central

    Uga, Minako; Dan, Ippeita; Dan, Haruka; Kyutoku, Yasushi; Taguchi, Y-h; Watanabe, Eiju

    2015-01-01

    Recent advances in multichannel functional near-infrared spectroscopy (fNIRS) allow wide coverage of cortical areas while entailing the necessity to control family-wise errors (FWEs) due to increased multiplicity. Conventionally, the Bonferroni method has been used to control FWE. While Type I errors (false positives) can be strictly controlled, the application of a large number of channel settings may inflate the chance of Type II errors (false negatives). The Bonferroni-based methods are especially stringent in controlling Type I errors of the most activated channel with the smallest p value. To maintain a balance between Types I and II errors, effective multiplicity (Meff) derived from the eigenvalues of correlation matrices is a method that has been introduced in genetic studies. Thus, we explored its feasibility in multichannel fNIRS studies. Applying the Meff method to three kinds of experimental data with different activation profiles, we performed resampling simulations and found that Meff was controlled at 10 to 15 in a 44-channel setting. Consequently, the number of significantly activated channels remained almost constant regardless of the number of measured channels. We demonstrated that the Meff approach can be an effective alternative to Bonferroni-based methods for multichannel fNIRS studies. PMID:26157982
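
    One way to compute an effective multiplicity from the eigenvalues of a channel correlation matrix is the Nyholt-style estimator sketched below (a generic illustration on synthetic data; the paper's exact Meff formula, e.g. a Li-and-Ji-type variant, may differ):

```python
import numpy as np

def effective_multiplicity(data):
    """Nyholt-style Meff from eigenvalues of the channel correlation matrix.

    data: (n_timepoints, n_channels) array of channel signals.
    Fully correlated channels give Var(eigenvalues) = M, hence Meff = 1;
    independent channels give Var(eigenvalues) = 0, hence Meff = M.
    """
    M = data.shape[1]
    corr = np.corrcoef(data, rowvar=False)
    lam = np.linalg.eigvalsh(corr)
    return 1.0 + (M - 1.0) * (1.0 - np.var(lam, ddof=1) / M)

rng = np.random.default_rng(5)
n, M = 500, 44                               # 44-channel setting as in the study
shared = rng.normal(size=(n, 1))             # assumed common systemic component
data = 0.8 * shared + 0.6 * rng.normal(size=(n, M))
meff = effective_multiplicity(data)
# Bonferroni-style threshold adjusted by Meff instead of the channel count M
alpha_meff = 0.05 / meff
```

    Because correlated channels contribute less than one independent test each, dividing alpha by Meff rather than by M relaxes the correction and reduces Type II errors, which is the balance the abstract describes.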

  2. Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems

    NASA Astrophysics Data System (ADS)

    Mahdi Alavi, S. M.; Saif, Mehrdad

    2013-12-01

    This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring the Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed by using an experimental test-bed fabricated for performance evaluation of estimation techniques operating over wireless networks under realistic radio channel conditions.

  3. Haptic control with environment force estimation for telesurgery.

    PubMed

    Bhattacharjee, Tapomayukh; Son, Hyoung Il; Lee, Doo Yong

    2008-01-01

    Success of telesurgical operations depends on good position tracking ability of the slave device. Improved position tracking of the slave device can lead to safer and less strenuous telesurgical operations. The two-channel force-position control architecture is widely used for better position tracking ability. This architecture requires force sensors for direct force feedback. Force sensors may not be a good choice in the telesurgical environment because of their inherent noise and limitations on placement and space. Hence, environment force estimation is developed using the concept of the robot function parameter matrix and a recursive least squares method. Simulation results show the efficacy of the proposed method. The slave device successfully tracks the position of the master device, and the estimation error quickly becomes negligible.
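
    The recursive least squares parameter estimation mentioned above can be sketched as follows (a generic linear spring-damper contact model is assumed for illustration; the paper's robot function parameter matrix formulation may differ):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares update for the model y ≈ phi @ theta.

    lam is a forgetting factor (< 1 discounts old data for slowly
    varying environments).
    """
    K = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + K * (y - phi @ theta)    # parameter update
    P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    return theta, P

rng = np.random.default_rng(6)
# assumed environment model: force = k*position + b*velocity
k_true, b_true = 120.0, 4.0                  # stiffness (N/m), damping (N·s/m)
theta = np.zeros(2)                          # initial parameter estimate
P = 1e4 * np.eye(2)                          # large initial covariance
for _ in range(500):
    x, v = rng.normal(), rng.normal()        # measured position and velocity
    f = k_true * x + b_true * v + 0.01 * rng.normal()   # noisy measured force
    theta, P = rls_step(theta, P, np.array([x, v]), f)
```

    Once theta has converged, the environment force can be predicted from position and velocity alone, replacing the noisy and hard-to-place force sensor in the feedback channel.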

  4. Performance evaluation of FSO system using wavelength and time diversity over malaga turbulence channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Balaji, K. A.; Prabu, K.

    2018-03-01

    There is an immense demand for high-bandwidth, high-data-rate systems, a demand met by wireless optical communication, or free space optics (FSO). FSO has therefore gained a pivotal role in research, offering the added advantages of cost-effectiveness and licence-free bandwidth. Unfortunately, the optical signal in free space suffers from irradiance and phase fluctuations due to atmospheric turbulence and pointing errors, which deteriorate the signal and degrade the performance of the communication system over longer distances, which is undesirable. In this paper, we consider a polarization shift keying (POLSK) system applied with wavelength and time diversity techniques over the Málaga (M) distribution to mitigate turbulence-induced fading. We derive closed-form mathematical expressions for estimating the system's outage probability and average bit error rate (BER). Ultimately, from the results we can infer that wavelength and time diversity schemes enhance the system's performance.

  5. Simulating and assessing boson sampling experiments with phase-space representations

    NASA Astrophysics Data System (ADS)

    Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.

    2018-04-01

    The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.

  6. Formulating the shear stress distribution in circular open channels based on the Renyi entropy

    NASA Astrophysics Data System (ADS)

    Khozani, Zohreh Sheikh; Bonakdari, Hossein

    2018-01-01

    The principle of maximum entropy is employed to derive the shear stress distribution by maximizing the Renyi entropy subject to some constraints and by assuming that dimensionless shear stress is a random variable. A Renyi entropy-based equation can be used to model the shear stress distribution along the entire wetted perimeter of circular channels and circular channels with flat beds and deposited sediments. A wide range of experimental results for 12 hydraulic conditions with different Froude numbers (0.375 to 1.71) and flow depths (20.3 to 201.5 mm) were used to validate the derived shear stress distribution. For circular channels, model performance improved with increasing flow depth (mean relative error (RE) of 0.0414), deteriorating only slightly at the greatest flow depth (RE of 0.0573). For circular channels with flat beds, the Renyi entropy model predicted the shear stress distribution well at lower sediment depths. The Renyi entropy model results were also compared with Shannon entropy model results. Both models performed well for circular channels, but for circular channels with flat beds the Renyi entropy model displayed superior performance in estimating the shear stress distribution. The Renyi entropy model was highly precise, predicting the shear stress distribution in a circular channel with an RE of 0.0480 and in a circular channel with a flat bed with an RE of 0.0488.

  7. Liquid water path retrieval using the lowest frequency channels of Fengyun-3C Microwave Radiation Imager (MWRI)

    NASA Astrophysics Data System (ADS)

    Tang, Fei; Zou, Xiaolei

    2017-12-01

    The Microwave Radiation Imager (MWRI) on board the Chinese Fengyun-3 (FY-3) satellites provides measurements at 10.65, 18.7, 23.8, 36.5, and 89.0 GHz, with both horizontally and vertically polarized channels. Brightness temperature measurements from channels with central frequencies above 19 GHz on satellite-based microwave imager radiometers have traditionally been used to retrieve cloud liquid water path (LWP) over ocean. The results show that the lowest-frequency channels are the most appropriate for retrieving LWP when its values are large. Therefore, a modified LWP retrieval algorithm is developed for retrieving LWP of different magnitudes, involving not only the high-frequency channels but also the lowest-frequency channels of FY-3 MWRI. The theoretical estimates of the LWP retrieval errors are 0.11 and 0.06 mm for the 10.65- and 18.7-GHz channels and 0.02 and 0.04 mm for the 36.5- and 89.0-GHz channels, respectively. It is also shown that the brightness temperature observations at 10.65 GHz can be utilized to better retrieve LWP greater than 3 mm in the eyewall region of Super Typhoon Neoguri (2014). The spiral structure of clouds within and around Typhoon Neoguri can be well captured by combining the LWP retrievals from different frequency channels.

  8. Encoding and Decoding of Multi-Channel ICMS in Macaque Somatosensory Cortex.

    PubMed

    Dadarlat, Maria C; Sabes, Philip N

    2016-01-01

    Naturalistic control of brain-machine interfaces will require artificial proprioception, potentially delivered via intracortical microstimulation (ICMS). We have previously shown that multi-channel ICMS can guide a monkey's reach to unseen targets in a planar workspace. Here, we expand on that work, asking how ICMS is decoded into target angle and distance by analyzing the performance of a monkey when ICMS feedback was degraded. From the resulting pattern of errors, we found that the animal's estimate of target direction was consistent with a weighted circular-mean strategy, close to the optimal decoding strategy given the ICMS encoding. These results support our previous finding that animals can learn to use this artificial sensory feedback in an efficient and naturalistic manner.
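A weighted circular mean, as in the decoding strategy described above, can be computed from the resultant of weighted unit vectors; the angles and weights here are illustrative, not the study's ICMS channel data:

```python
import numpy as np

def weighted_circular_mean(angles, weights):
    """Angle of the resultant of unit vectors at `angles` scaled by `weights` (radians)."""
    s = np.sum(weights * np.sin(angles))
    c = np.sum(weights * np.cos(angles))
    return np.arctan2(s, c)

# Two equally weighted directions at 0 deg and 90 deg
angles = np.deg2rad([0.0, 90.0])
weights = np.array([1.0, 1.0])
est = weighted_circular_mean(angles, weights)
print(np.rad2deg(est))  # close to 45.0
```

Unlike a naive arithmetic mean of angles, this handles wraparound correctly (e.g. 350 deg and 10 deg average to 0 deg, not 180 deg).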

  9. Assimilation of Precipitation Measurement Missions Microwave Radiance Observations With GEOS-5

    NASA Technical Reports Server (NTRS)

    Jin, Jianjun; Kim, Min-Jeong; McCarty, Will; Akella, Santha; Gu, Wei

    2015-01-01

    The Global Precipitation Measurement (GPM) Core Observatory satellite was launched in February 2014. The GPM Microwave Imager (GMI) is a conically scanning radiometer measuring 13 channels ranging from 10 to 183 GHz and sampling between 65°S and 65°N. This instrument is a successor to the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), which has observed 9 channels at frequencies ranging from 10 to 85 GHz between 40°S and 40°N since 1997. This presentation outlines the base procedures developed to assimilate GMI and TMI radiances in clear-sky conditions, including quality control methods, thinning decisions, and the estimation of observation errors. This presentation also shows the impact of these observations when they are incorporated into the GEOS-5 atmospheric data assimilation system.

  10. Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals

    USGS Publications Warehouse

    Hodgkins, Glenn A.; Martin, Gary R.

    2003-01-01

    This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.
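The regression equations described above relate peak flow to basin characteristics; such relations are typically power laws fit by linear regression in log space. A minimal sketch with synthetic station data (the coefficients and drainage areas are assumptions for illustration, not the report's actual equations):

```python
import numpy as np

# Assumed power-law relation Q_100 = a * A^b between the 100-year peak
# flow and total drainage area A, recovered by a straight-line fit in
# log10 space (the report itself uses generalized-least-squares).
area = np.array([10.0, 50.0, 120.0, 300.0, 800.0])  # drainage areas, mi^2
q100 = 150.0 * area ** 0.75                          # synthetic noiseless "observations"

logA, logQ = np.log10(area), np.log10(q100)
b, loga = np.polyfit(logA, logQ, 1)                  # slope and intercept
a = 10 ** loga
print(round(a, 1), round(b, 3))                      # recovers a = 150.0, b = 0.75
```

With real station data the fit would carry the standard errors of prediction quoted in the report (e.g. -13.1 to +15.0 percent in Region 3).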

  11. Estimation of land surface heat fluxes based on visible infrared imaging radiometer suite data: case study in northern China

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Xin, Xiaozhou; Peng, Zhiqing; Zhang, Hailong; Li, Li; Shao, Shanshan; Liu, Qinhuo

    2017-10-01

    Evapotranspiration (ET) plays an important role in surface-atmosphere interactions and can be monitored using remote sensing data. The Visible Infrared Imaging Radiometer Suite (VIIRS) is a new-generation optical satellite sensor that provides daily global coverage at 375- to 750-m spatial resolution with 22 spectral channels (0.412 to 12.05 μm), and it is capable of monitoring ET from regional to global scales. However, few studies have focused on methods of acquiring ET from VIIRS images. The objective of this study is to introduce an algorithm that uses VIIRS data and meteorological variables to estimate the energy budgets of land surfaces, including the net radiation, soil heat flux, sensible heat flux, and latent heat flux. A single-source model based on the surface energy balance equation is used to obtain surface heat fluxes within the Zhangye oasis in China. The results were validated using observations collected during the HiWATER (Heihe Watershed Allied Telemetry Experimental Research) project. To facilitate comparison, we also used Moderate Resolution Imaging Spectroradiometer (MODIS) data to retrieve the regional surface heat fluxes. The validation results show that it is feasible to estimate the turbulent heat fluxes from the VIIRS sensor and that these data have certain advantages (i.e., a mean bias error in sensible heat flux of 15.23 W m^-2) compared with MODIS data (i.e., a mean bias error in sensible heat flux of -29.36 W m^-2). Error analysis indicates that, in our model, the accuracy of the estimated sensible heat fluxes relies on the errors in the retrieved surface temperatures and the canopy heights.
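In single-source models of this kind, the surface energy balance closes by treating latent heat flux as the residual of the other terms; a minimal sketch with illustrative flux values (not the study's retrievals):

```python
def latent_heat_flux(rn, g, h):
    """Surface energy balance residual: LE = Rn - G - H, all fluxes in W m^-2."""
    return rn - g - h

# Illustrative midday values: net radiation, soil heat flux, sensible heat flux
le = latent_heat_flux(rn=500.0, g=50.0, h=150.0)
print(le)  # 300.0
```

Because LE is a residual, any error in the retrieved surface temperature propagates into H and then directly into LE, which is why the abstract highlights surface-temperature error as a dominant term.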

  12. Design and simulation of sensor networks for tracking Wifi users in outdoor urban environments

    NASA Astrophysics Data System (ADS)

    Thron, Christopher; Tran, Khoi; Smith, Douglas; Benincasa, Daniel

    2017-05-01

    We present a proof-of-concept investigation into the use of sensor networks for tracking WiFi users in outdoor urban environments. Sensors are fixed and are capable of measuring signal power from users' WiFi devices. We derive a maximum likelihood estimate for user location based on instantaneous sensor power measurements. The algorithm takes into account the effects of power control and is self-calibrating, in that the signal power model used by the location algorithm is adjusted and improved as part of the operation of the network. Simulation results to verify the system's performance are presented. The simulation scenario is based on a 1.5 km^2 area of lower Manhattan. The self-calibration mechanism was verified for initial rms (root mean square) errors of up to 12 dB in the channel power estimates: rms errors were reduced by over 60% in 300 track-hours, in systems with limited power control. Under typical operating conditions with (without) power control, location rms errors are about 8.5 (5) meters, with 90% accuracy within 9 (13) meters, for both pedestrian and vehicular users. The distance error distributions for smaller distances (<30 m) are well approximated by an exponential distribution, while the distributions for large distance errors have fat tails. The issue of optimal sensor placement in the sensor network is also addressed. We specify a linear programming algorithm for determining sensor placement for networks with a reduced number of sensors. In our test case, the algorithm produces a network with 18.5% fewer sensors and comparable estimation accuracy. Finally, we discuss future research directions for improving the accuracy and capabilities of sensor network systems in urban environments.
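A maximum likelihood location estimate of this kind can be sketched with a log-distance path-loss model and a brute-force grid search; the sensor layout, path-loss exponent, and shadowing variance below are illustrative assumptions, not the paper's calibrated Manhattan model:

```python
import numpy as np

def loglik(user_xy, sensors, rx_dbm, tx_dbm=20.0, n=3.0, sigma=6.0):
    """Gaussian log-likelihood (up to a constant) of received powers rx_dbm
    under a log-distance path-loss model with shadowing std sigma (dB)."""
    d = np.linalg.norm(sensors - user_xy, axis=1) + 1e-9
    pred = tx_dbm - 10.0 * n * np.log10(d)     # predicted received power, dBm
    return -np.sum((rx_dbm - pred) ** 2) / (2 * sigma ** 2)

sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_xy = np.array([30.0, 60.0])
d_true = np.linalg.norm(sensors - true_xy, axis=1)
rx = 20.0 - 30.0 * np.log10(d_true)            # noiseless synthetic measurements

# Brute-force ML estimate over a 1 m grid
xs = np.arange(0.0, 101.0, 1.0)
grid = [(float(x), float(y)) for x in xs for y in xs]
best = max(grid, key=lambda p: loglik(np.array(p), sensors, rx))
print(best)  # recovers the true location with these noiseless measurements
```

With shadowing noise added, the estimate scatters around the true location, producing the meter-scale rms errors the abstract reports.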

  13. Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.

    NASA Astrophysics Data System (ADS)

    Giridhar, K.

    The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery are required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI.
Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates, and a metric pruning technique are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, and then tracking of the fading parameters is performed during the data cycle using the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator, and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.

  14. Estimating the magnitude and frequency of floods for streams in west-central Florida, 2001

    USGS Publications Warehouse

    Hammett, Kathleen M.; DelCharco, Michael J.

    2005-01-01

    Flood discharges were estimated for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years for 94 streamflow stations in west-central Florida. Most of the stations are located within the 10,000 square-mile, 16-county area that forms the Southwest Florida Water Management District. All stations had at least 10 years of homogeneous record, and none have flood discharges that are significantly affected by regulation or urbanization. Guidelines established by the U.S. Water Resources Council in Bulletin 17B were used to estimate flood discharges from gaging station records. Multiple linear regression analysis was then used to mathematically relate estimates of flood discharge for selected recurrence intervals to explanatory basin characteristics. Contributing drainage area, channel slope, and the percent of total drainage area covered by lakes (percent lake area) were the basin characteristics that provided the best regression estimates. The study area was subdivided into four geographic regions to further refine the regression equations. Region 1 at the northern end of the study area includes large rivers that are characteristic of the rolling karst terrain of northern Florida. Only a small part of Region 1 lies within the boundaries of the Southwest Florida Water Management District. Contributing drainage area and percent lake area were the most statistically significant basin characteristics in Region 1; the prediction error of the regression equations varied with the recurrence interval and ranged from 57 to 69 percent. In the three other regions of the study area, contributing drainage area, channel slope, and percent lake area were the most statistically significant basin characteristics, and are the three characteristics that can be used to best estimate the magnitude and frequency of floods on most streams within the Southwest Florida Water Management District. 
The Withlacoochee River Basin dominates Region 2; the prediction error of the regression models in the region ranged from 65 to 68 percent. The basins that drain into the northern part of Tampa Bay and the upper reaches of the Peace River Basin are in Region 3, which had prediction errors ranging from 54 to 74 percent. Region 4, at the southern end of the study area, had prediction errors that ranged from 40 to 56 percent. Estimates of flood discharge become more accurate as longer periods of record are used for analyses; results of this study should be used in lieu of results from earlier U.S. Geological Survey studies of flood magnitude and frequency in west-central Florida. A comparison of current results with earlier studies indicates that use of a longer period of record with additional high-water events produces substantially higher flood-discharge estimates for many gaging stations. Another comparison indicates that the use of a computed, generalized skew in a previous study in 1979 tended to overestimate flood discharges.

  15. Direct estimation and correction of bias from temporally variable non-stationary noise in a channelized Hotelling model observer.

    PubMed

    Fetterly, Kenneth A; Favazza, Christopher P

    2016-08-07

    Channelized Hotelling model observer (CHO) methods were developed to assess the performance of an x-ray angiography system. The analytical methods included correction for the known bias error due to finite sampling. Detectability indices ([Formula: see text]) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame^-1 resulted in [Formula: see text] estimates which were as much as 2.9× greater than expected of a quantum-limited system. Overestimation of [Formula: see text] was presumed to be a result of bias error due to temporally variable non-stationary noise. Statistical theory which allows for independent contributions of 'signal' from a test object (o) and temporally variable non-stationary noise (ns) was developed. The theory demonstrates that the biased [Formula: see text] is the sum of the detectability indices associated with the test object [Formula: see text] and non-stationary noise ([Formula: see text]). Given the nature of the imaging system and the experimental methods, [Formula: see text] cannot be directly determined independent of [Formula: see text]. However, methods to estimate [Formula: see text] independent of [Formula: see text] were developed. In accordance with the theory, [Formula: see text] was subtracted from experimental estimates of [Formula: see text], providing an unbiased estimate of [Formula: see text]. Estimates of [Formula: see text] exhibited trends consistent with expectations of an angiography system that is quantum limited for high DTD and compromised by detector electronic readout noise for low DTD conditions. Results suggest that these methods provide [Formula: see text] estimates which are accurate and precise for [Formula: see text]. Further, results demonstrated that the source of bias was detector electronic readout noise.
In summary, this work presents theory and methods to test for the presence of bias in Hotelling model observers due to temporally variable non-stationary noise and correct this bias when the temporally variable non-stationary noise is independent and additive with respect to the test object signal.
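A channelized Hotelling detectability index of the kind discussed above can be estimated from channel outputs of signal-present and signal-absent images; this is a generic sketch with synthetic Gaussian channel data (the channel count, sample size, and signal vector are assumptions, not the paper's angiography measurements):

```python
import numpy as np

# Hotelling detectability: d' = sqrt(dm^T S^-1 dm), where dm is the mean
# difference of channel outputs between signal-present and signal-absent
# images and S is the pooled channel covariance. Finite sampling makes
# this estimate biased upward, which is the bias the paper corrects for.
rng = np.random.default_rng(1)
n, nchan = 4000, 6
signal = np.array([1.0, 0.5, 0.2, 0.0, 0.0, 0.0])   # assumed channel response

absent = rng.standard_normal((n, nchan))             # signal-absent channel outputs
present = rng.standard_normal((n, nchan)) + signal   # signal-present channel outputs

dm = present.mean(axis=0) - absent.mean(axis=0)
S = 0.5 * (np.cov(present, rowvar=False) + np.cov(absent, rowvar=False))
dprime = float(np.sqrt(dm @ np.linalg.solve(S, dm)))
print(round(dprime, 2))  # near the true value sqrt(1.29) ~ 1.14
```

Shrinking `n` makes the finite-sampling bias visible: the estimated d' creeps above the true value, analogous to the overestimation reported in the abstract.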

  16. Development, Validation, and Potential Enhancements to the Second-Generation Operational Aerosol Product at the National Environmental Satellite, Data, and Information Service of the National Oceanic and Atmospheric Administration

    NASA Technical Reports Server (NTRS)

    Stowe, Larry L.; Ignatov, Alexander M.; Singh, Ramdas R.

    1997-01-01

    A revised (phase 2) single-channel algorithm for aerosol optical thickness (τ^A_SAT) retrieval over oceans from radiances in channel 1 (0.63 microns) of the Advanced Very High Resolution Radiometer (AVHRR) has been implemented at the National Oceanic and Atmospheric Administration's National Environmental Satellite, Data, and Information Service for the NOAA 14 satellite launched December 30, 1994. It is based on careful validation of its operational predecessor (the phase 1 algorithm), implemented for NOAA 14 in 1989. Both algorithms scale the upward satellite radiances in cloud-free conditions to aerosol optical thickness using an updated radiative transfer model of the ocean and atmosphere. Application of the phase 2 algorithm to three matchup Sun-photometer and satellite data sets, one with NOAA 9 in 1988 and two with NOAA 11 in 1989 and 1991, respectively, shows that the systematic error is less than 10%, with a random error of σ_τ ≈ 0.04. First results of τ^A_SAT retrievals from NOAA 14 using the phase 2 algorithm, and from checking its internal consistency, are presented. A potential two-channel (phase 3) algorithm for the retrieval of an aerosol size parameter, such as the Junge size distribution exponent, by adding either channel 2 (0.83 microns) from the current AVHRR instrument or a 1.6-micron channel to be available on the Tropical Rainfall Measuring Mission and the NOAA-KLM satellites by 1997, is under investigation. The possibility of using this additional information in the retrieval of a more accurate estimate of aerosol optical thickness is being explored.

  17. Joint Channel and Phase Noise Estimation in MIMO-OFDM Systems

    NASA Astrophysics Data System (ADS)

    Ngebani, I. M.; Chuma, J. M.; Zibani, I.; Matlotse, E.; Tsamaase, K.

    2017-05-01

    The combination of multiple-input multiple-output (MIMO) techniques with orthogonal frequency division multiplexing (OFDM), known as MIMO-OFDM, is a promising way of achieving high spectral efficiency in wireless communication systems. However, the performance of MIMO-OFDM systems is highly degraded by radio frequency (RF) impairments such as phase noise. As in the single-input single-output (SISO) case, phase noise in MIMO-OFDM systems results in a common phase error (CPE) and inter-carrier interference (ICI). In this paper, the problem of joint channel and phase noise estimation is tackled in a system with multiple transmit and receive antennas where each antenna is equipped with its own independent oscillator. The technique employed makes use of a novel placement of pilot carriers in the preamble and data portion of the MIMO-OFDM frame. Numerical results using 16- and 64-point quadrature amplitude modulation (QAM) schemes are provided to illustrate the effectiveness of the proposed scheme for MIMO-OFDM systems.
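A simplified illustration of CPE estimation from pilot subcarriers in one OFDM symbol (single antenna branch; the pilot values, channel, and phase offset are synthetic assumptions, not the paper's pilot placement):

```python
import numpy as np

# With known pilots p_k and known channel h_k, a common phase rotation
# e^{j*phi} on all subcarriers is estimated as the angle of the sum of
# matched products rx_k * conj(h_k * p_k).
rng = np.random.default_rng(2)
pilots = np.array([1 + 0j, -1 + 0j, 1 + 0j, -1 + 0j])     # known pilot symbols
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # known channel gains
cpe_true = 0.1                                            # radians (assumed)
rx = h * pilots * np.exp(1j * cpe_true)                   # received pilot subcarriers

cpe_hat = np.angle(np.sum(rx * np.conj(h * pilots)))
print(round(float(cpe_hat), 3))  # recovers ~0.1 rad
```

In practice the channel is only estimated, and ICI adds a noise floor, which is why the paper estimates channel and phase noise jointly rather than sequentially.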

  18. An analysis of carrier phase jitter in an MPSK receiver utilizing map estimation. Ph.D. Thesis Semiannual Status Report, Jul. 1993 - Jan. 1994

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1994-01-01

    The use of 8- and 16-PSK TCM to support satellite communications, in an effort to achieve more bandwidth efficiency in a power-limited channel, has been proposed. This project addresses the problem of carrier phase jitter in an M-PSK receiver utilizing the high-SNR approximation to the maximum a posteriori estimation of carrier phase. In particular, numerical solutions for the 8- and 16-PSK self-noise and phase detector gain in the carrier tracking loop are presented. The effect of changing SNR on the loop noise bandwidth is also discussed. These data are then used to compute the variance of phase error as a function of SNR. Simulation and hardware data are used to verify these calculations. The results show that there is a threshold in the variance of phase error versus SNR curves that is a strong function of SNR and a weak function of loop bandwidth. The M-PSK variance thresholds occur at SNRs in the range of practical interest for the use of 8- and 16-PSK TCM. This suggests that phase error variance is an important consideration in the design of these systems.

  19. Influence of Joint Angle on EMG-Torque Model During Constant-Posture, Torque-Varying Contractions.

    PubMed

    Liu, Pu; Liu, Lukai; Clancy, Edward A

    2015-11-01

    Relating the electromyogram (EMG) to joint torque is useful in various application areas, including prosthesis control, ergonomics and clinical biomechanics. Limited study has related EMG to torque across varied joint angles, particularly when subjects performed force-varying contractions or when optimized modeling methods were utilized. We related the biceps-triceps surface EMG of 22 subjects to elbow torque at six joint angles (spanning 60° to 135°) during constant-posture, torque-varying contractions. Three nonlinear EMGσ-torque models, advanced EMG amplitude (EMGσ) estimation processors (i.e., whitened, multiple-channel) and the duration of data used to train models were investigated. When EMG-torque models were formed separately for each of the six distinct joint angles, a minimum "gold standard" error of 4.01±1.2% MVC(F90) resulted (i.e., error relative to maximum voluntary contraction at 90° flexion). This model structure, however, did not directly facilitate interpolation across angles. The best model which did so achieved a statistically equivalent error of 4.06±1.2% MVC(F90). Results demonstrated that advanced EMGσ processors lead to improved joint torque estimation, as do longer model training durations.
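A bare-bones sketch of a multiple-channel EMG amplitude (EMGσ) estimator of the kind referenced above: rectify each channel, average across channels, then smooth. The whitening stage is omitted for brevity, and the signal is synthetic, not the study's recordings:

```python
import numpy as np

# Synthetic EMG: zero-mean noise whose standard deviation follows a
# slowly varying "amplitude" envelope, recorded on 4 channels.
rng = np.random.default_rng(3)
t = np.arange(2000) / 1000.0                       # 2 s at 1 kHz
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * t)       # assumed true EMG amplitude
emg = envelope * rng.standard_normal((4, t.size))  # 4-channel raw EMG

rectified = np.abs(emg).mean(axis=0)               # rectify and average channels
win = 201                                          # ~0.2 s moving-average smoother
amp = np.convolve(rectified, np.ones(win) / win, mode="same")

# The smoothed estimate tracks the (scaled) true envelope
corr = np.corrcoef(amp[win:-win], envelope[win:-win])[0, 1]
print(round(corr, 2))
```

Averaging across channels reduces estimator variance roughly as 1/sqrt(channels), which is one reason multiple-channel processors improve downstream torque estimation.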

  20. Preliminary Evaluation of GAOFEN-3 Polarimetric and Radiometric Accuracy by Corner Reflectors in Inner Mongolia

    NASA Astrophysics Data System (ADS)

    Shi, L.; Ding, X.; Li, P.; Yang, J.; Zhao, L.; Yang, L.; Chang, Y.; Yan, L.

    2018-04-01

    On August 10, 2016, China launched its first C-band fully polarimetric radar satellite, named Gaofen-3 (GF-3), for urban and agriculture monitoring, landslide detection, ocean applications, etc. According to the design specification, GF-3 is expected to work at -35 dB crosstalk and 0.5 dB channel imbalance, with less than 10 degrees of phase error. The absolute radiometric bias is expected to be less than 1.5 dB in a single scene and 2.0 dB when operating over a long time. To complete the calibration and evaluation, the Institute of Electronics, Chinese Academy of Sciences (IECAS) built a test site in Inner Mongolia and deployed active reflectors (ARs) and trihedral corner reflectors (CRs) to solve for and evaluate the hardware distortion. To the best of the authors' knowledge, the product accuracy of GF-3 has not been comprehensively evaluated in any open publication. The remote sensing community urgently requires a detailed report on product accuracy and stability before any subsequent application. From June to August 2017, IECAS began its second-round ground campaign and deployed 10 CRs to evaluate product distortions. In this paper, we exploit the Inner Mongolia CRs to investigate the polarimetric and radiometric accuracy of the QPSI I Stripmap mode. Although some CRs were found to fall into AR side lobes, the remaining CRs enable us to preliminarily evaluate the accuracy of some specific imaging beams. In the experimental part, the image of July 6, 2017 was checked using 5 trihedral CRs, and the integration estimation method showed crosstalk varying from -42.65 to -32.74 dB and channel imbalance varying from -0.21 to 0.47, with phase error from -2.4 to 0.2 degrees. Compared with the theoretical radar cross-section of a 1.235 m trihedral CR, i.e., 35 dB, the radiometric error varies by about 0.20 ± 0.29 dB in the HH channel and 0.40 ± 0.20 dB in the VV channel.

  1. Optimum Code Rates for Noncoherent MFSK with Errors and Erasures Decoding over Rayleigh Fading Channels

    NASA Technical Reports Server (NTRS)

    Matache, Adina; Ritcey, James A.

    1997-01-01

    In this paper, we analyze the performance of a communication system employing M-ary frequency shift keying (FSK) modulation with errors-and-erasures decoding, using the Viterbi ratio threshold technique for erasure insertion, over Rayleigh fading and AWGN channels.

  2. Assessing the Impact of Pre-gpm Microwave Precipitation Observations in the Goddard WRF Ensemble Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Chambon, Philippe; Zhang, Sara Q.; Hou, Arthur Y.; Zupanski, Milija; Cheung, Samson

    2013-01-01

    The forthcoming Global Precipitation Measurement (GPM) Mission will provide next generation precipitation observations from a constellation of satellites. Since precipitation by nature has large variability and low predictability at cloud-resolving scales, the impact of precipitation data on the skills of mesoscale numerical weather prediction (NWP) is largely affected by the characterization of background and observation errors and the representation of nonlinear cloud/precipitation physics in an NWP data assimilation system. We present a data impact study on the assimilation of precipitation-affected microwave (MW) radiances from a pre-GPM satellite constellation using the Goddard WRF Ensemble Data Assimilation System (Goddard WRF-EDAS). A series of assimilation experiments are carried out in a Weather Research Forecast (WRF) model domain of 9 km resolution in western Europe. Sensitivities to observation error specifications, background error covariance estimated from ensemble forecasts with different ensemble sizes, and MW channel selections are examined through single-observation assimilation experiments. An empirical bias correction for precipitation-affected MW radiances is developed based on the statistics of radiance innovations in rainy areas. The data impact is assessed by full data assimilation cycling experiments for a storm event that occurred in France in September 2010. Results show that the assimilation of MW precipitation observations from a satellite constellation mimicking GPM has a positive impact on the accumulated rain forecasts verified with surface radar rain estimates. The case-study on a convective storm also reveals that the accuracy of ensemble-based background error covariance is limited by sampling errors and model errors such as precipitation displacement and unresolved convective scale instability.

  3. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

Virtually all previously suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given, and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code and for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
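    The small-sample significance problem described above can be illustrated with the standard exact-binomial (Clopper-Pearson) upper confidence bound on an error probability after observing only a few errors; this is a generic sketch, not Massey's specific method:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), summed via log terms for stability."""
    total = 0.0
    for i in range(k + 1):
        log_term = (math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
                    + i * math.log(p) + (n - i) * math.log(1.0 - p))
        total += math.exp(log_term)
    return total

def upper_bound(k, n, conf=0.95):
    """Clopper-Pearson upper confidence bound on the true error probability
    after observing k errors in n independent trials, found by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if binom_cdf(k, n, mid) > 1.0 - conf:
            lo = mid
        else:
            hi = mid
    return hi

# e.g. only 2 decoding errors observed in 10^5 simulated frames:
# the point estimate 2e-5 is uncertain, but the 95% upper bound is still useful
p_hi = upper_bound(2, 100_000)
```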

  4. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.
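    The reliability claim can be illustrated with the standard bounded-distance block error probability for a t-error-correcting length-n code on a binary symmetric channel; the (n, t) values below are illustrative, not the paper's specific schemes:

```python
import math

def block_error_prob(n, t, eps):
    """Probability that a t-error-correcting length-n block code fails on a
    binary symmetric channel with bit error rate eps, under bounded-distance
    decoding (failure iff more than t errors occur in the block)."""
    ok = sum(math.comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(t + 1))
    return 1.0 - ok

# illustrative: a length-255 code correcting t = 16 errors; even at a raw
# error rate of 0.01 per symbol the block failure probability is tiny
p_fail = block_error_prob(255, 16, 0.01)
```

    Raising the raw error rate to 0.1 makes the same code fail almost always, which is why the cascaded construction pairs codes so that the inner stage leaves the outer stage a much cleaner channel.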

  5. Critical heat flux for water boiling in channels. Modern state, typical regularities, unsolved problems, and ways for solving them (a review)

    NASA Astrophysics Data System (ADS)

    Bobkov, V. P.

    2015-02-01

    Some general matters concerned with description of burnout in channels are outlined. Data obtained from experimental investigations on critical heat fluxes (CHF) in different channels, CHF data banks, the main determining parameters, CHF basic dependences, and a system of correction functions are discussed. Two methods for estimating the CHF description errors are analyzed. The influence of operating parameters, transverse sizes of channels, and conditions at their inlet are analyzed. The effects of heat-transfer surface shape and heat supply arrangement are considered for concentric annular channels. The notions of a thermal boundary layer and an elementary thermal cell during burnout in channels with an intricate cross section are defined. New notions for describing CHF in rod assemblies are introduced: bundle effect, thermal misalignment, assembly-section-averaged and local parameters (for an elementary cell), cell-wise CHF analysis in bundles, and standard and nonstandard cells. Possible influence of wall thermophysical properties on CHF in dense assemblies and other effects are considered. Thermal interaction of nonequivalent cells and the effect of heat supply arrangement over the cell perimeter are analyzed. Special attention is paid to description of the effect the heat release nonuniformity along the channels has on CHF. Objectives to be pursued by studies of CHF in channels of different cross-section shapes are formulated.

  6. An integrated study of earth resources in the state of California using remote sensing techniques

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator)

    1975-01-01

The author has identified the following significant results. A weighted stratified double sample design using hardcopy LANDSAT-1 and ground data was utilized in developmental studies for snow water content estimation. Study results gave a correlation coefficient of 0.80 between LANDSAT sample-unit estimates of snow water content and ground subsamples. A basin snow water content estimate allowable error was given as 1.00 percent at the 99 percent confidence level with the same budget level utilized in conventional snow surveys. Several evapotranspiration estimation models were selected for efficient application at each level of data to be sampled. An area estimation procedure for impervious surface types of differing impermeability adjacent to stream channels was developed. This technique employs a double sample of 1:125,000 color infrared high-flight transparency data with ground or large scale photography.

  7. Performance of convolutionally encoded noncoherent MFSK modem in fading channels

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.

    1976-01-01

The performance of a convolutionally encoded noncoherent multiple-frequency shift-keyed (MFSK) modem utilizing Viterbi maximum-likelihood decoding and operating on a fading channel is described. Both the lognormal and classical Rician fading channels are considered for both slow and time-varying channel conditions. Primary interest is in the resulting bit error rate as a function of the ratio between the energy per transmitted information bit and noise spectral density, parameterized by both the fading channel and code parameters. Fairly general upper bounds on bit error probability are provided and compared with simulation results in the two extremes of zero and infinite channel memory. The efficacy of simple block interleaving in combating channel memory effects is thoroughly explored. Both quantized and unquantized receiver outputs are considered.
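    Block interleaving, whose efficacy is examined above, can be sketched minimally: symbols are written into an array row by row and transmitted column by column, so a burst of consecutive channel errors is spread across many codewords.

```python
def interleave(bits, rows, cols):
    """Write the stream row-by-row into a rows x cols array,
    read it out column-by-column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation: write column-by-column, read row-by-row."""
    assert len(bits) == rows * cols
    out = [None] * (rows * cols)
    for idx, b in enumerate(bits):
        c, r = divmod(idx, rows)      # idx-th transmitted symbol sits at (r, c)
        out[r * cols + c] = b
    return out

data = list(range(24))
tx = interleave(data, rows=4, cols=6)
# a burst hitting 4 consecutive transmitted symbols lands in 4 different rows,
# i.e. in 4 different codewords after deinterleaving
rx = deinterleave(tx, rows=4, cols=6)
```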

  8. Study on advanced information processing system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Liu, Jyh-Charn

    1992-01-01

Issues related to the reliability of a redundant system with large main memory are addressed. In particular, the Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as a basis for our presentation. When the system is free of latent faults, the probability of system crash due to nearly-coincident channel faults is shown to be insignificant even when the outputs of computing channels are infrequently voted on. In particular, using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs (with a low hardware overhead) can be used to reduce not only the need for memory realignment, but also the time required to realign channel memories in the rare case that such a need arises. Using CEMs, we have developed two schemes, called Scheme 1 and Scheme 2, to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.
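    The idea of correcting most memory errors "with conventional coding techniques" can be illustrated with a single-error-correcting Hamming(7,4) code; this is a generic sketch of such a code, not the CEM design of the paper:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into 7 bits, with parity bits at
    (1-indexed) positions 1, 2 and 4 of the codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Compute the syndrome; a nonzero syndrome is the 1-indexed position of
    a single flipped bit, which is corrected before extracting the data."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]   # recovered data bits

word = [1, 0, 1, 1]
stored = hamming74_encode(word)
stored[5] ^= 1                        # a single memory bit flip
assert hamming74_correct(stored) == word
```

    A word protected this way is corrected on read-out without consulting the other channels, which is why such maskers reduce both the need for and the cost of memory realignment.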

  9. Study on fault-tolerant processors for advanced launch system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Liu, Jyh-Charn

    1990-01-01

Issues related to the reliability of a redundant system with large main memory are addressed. The Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as a basis for the presentation. When the system is free of latent faults, the probability of system crash due to multiple channel faults is shown to be insignificant even when voting on the outputs of computing channels is infrequent. Using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing redundancy or the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by those CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs (with a very low hardware overhead) can be used to dramatically reduce not only the need for memory realignment, but also the time required to realign channel memories in the rare case that such a need arises. Using CEMs, two different schemes were developed to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.

  10. Testing of Error-Correcting Sparse Permutation Channel Codes

    NASA Technical Reports Server (NTRS)

    Shcheglov, Kirill, V.; Orlov, Sergei S.

    2008-01-01

A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 ("on") bits in a channel block length of N.
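    The sparse-word structure described above (exactly K "on" bits in a block of length N) can be sketched by counting and generating such words; the code below is illustrative, not the NASA program:

```python
import math
import random

def codeword_count(n, k):
    """Number of length-n channel blocks with exactly k 'on' bits."""
    return math.comb(n, k)

def random_codeword(n, k, rng=random.Random(1)):
    """Uniformly random sparse block: k ones placed among n positions."""
    word = [0] * n
    for pos in rng.sample(range(n), k):
        word[pos] = 1
    return word

# sparseness: only K of the N positions carry a pulse
w = random_codeword(64, 4)
```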

  11. Controller certification: The generalized stability margin inference for a large number of MIMO controllers

    NASA Astrophysics Data System (ADS)

    Park, Jisang

In this dissertation, we investigate MIMO stability margin inference of a large number of controllers using pre-established stability margins of a small number of nu-gap-wise adjacent controllers. The generalized stability margin and the nu-gap metric are inherently able to handle MIMO system analysis without the necessity of repeating multiple channel-by-channel SISO analyses. This research consists of three parts: (i) development of a decision support tool for inference of the stability margin, (ii) computational considerations for yielding the maximal stability margin with the minimal nu-gap metric in a less conservative manner, and (iii) experiment design for estimating the generalized stability margin with an assured error bound. A modern problem from aerospace control involves the certification of a large set of potential controllers with either a single plant or a fleet of potential plant systems, with both plants and controllers being MIMO and, for the moment, linear. Experiments on a limited number of controller/plant pairs should establish the stability and a certain level of margin of the complete set. We consider this certification problem for a set of controllers and provide algorithms for selecting an efficient subset for testing. This is done for a finite set of candidate controllers and, at least for SISO plants, for an infinite set. In doing this, the nu-gap metric will be the main tool. We provide a theorem restricting the radius of a ball in the parameter space so that the controller can guarantee a prescribed level of stability and performance if the parameters of the controllers are contained in the ball. Computational examples are given, including one of certification of an aircraft engine controller. The overarching aim is to introduce truly MIMO margin calculations and to understand their efficacy in certifying stability over a set of controllers and in replacing legacy single-loop gain and phase margin calculations.
We consider methods for the computation of the maximal MIMO stability margin b(P̂,C), the minimal nu-gap metric δν, and the maximal difference between these two values, through the use of scaling and weighting functions. We propose simultaneous scaling selections that attempt to maximize the generalized stability margin and minimize the nu-gap. The minimization of the nu-gap by scaling involves a non-convex optimization. We modify the XY-centering algorithm to handle this non-convexity. This is done for applications in controller certification. Estimating the generalized stability margin with an accurate error bound has significant impact on controller certification. We analyze an error bound of the generalized stability margin as the infinity norm of the MIMO empirical transfer function estimate (ETFE). Input signal design to reduce the error on the estimate is also studied. We suggest running the system for a certain amount of time prior to recording of each output data set. The assured upper bound of estimation error can be tuned by the amount of the pre-experiment.
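    In its simplest SISO form, the empirical transfer function estimate mentioned above is the ratio of output and input DFTs; a toy sketch (the pre-experiment suggested above amounts to discarding the initial transient so that this ratio is meaningful):

```python
import numpy as np

def etfe(u, y):
    """Empirical transfer function estimate: ratio of the DFTs of output
    and input. Valid where the input spectrum is nonzero; no smoothing."""
    U = np.fft.rfft(u)
    Y = np.fft.rfft(y)
    return Y / U

# toy system: a pure gain y[t] = 0.5 * u[t], so the ETFE should be ~0.5
rng = np.random.default_rng(0)
u = rng.standard_normal(256)
y = 0.5 * u
G = etfe(u, y)
```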

  12. LDPC-coded MIMO optical communication over the atmospheric turbulence channel using Q-ary pulse-position modulation.

    PubMed

    Djordjevic, Ivan B

    2007-08-06

We describe a coded power-efficient transmission scheme based on repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs the Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement over the uncoded case is found.
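    Q-ary PPM, as used above, places one pulse in a frame of Q time slots, the slot index carrying the symbol value; a minimal sketch with hard-decision demodulation:

```python
def ppm_modulate(symbol, q):
    """Q-ary pulse-position modulation: one pulse in a frame of q slots,
    its position encoding the symbol value."""
    assert 0 <= symbol < q
    frame = [0] * q
    frame[symbol] = 1
    return frame

def ppm_demodulate(frame):
    """Hard-decision demodulation: pick the slot with the largest energy."""
    return max(range(len(frame)), key=lambda i: frame[i])

frame = ppm_modulate(5, q=8)
symbol = ppm_demodulate(frame)
```

    Power efficiency comes from concentrating the symbol energy in a single slot, at the cost of bandwidth (Q slots per log2(Q) bits).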

  13. Evaluation and Assimilation of Cloud Cleared Radiances for AIRS in GEOS-5

    NASA Technical Reports Server (NTRS)

    Liu, Hui-chun

    2008-01-01

The use of clear (cloud-free) channels for AIRS in GEOS-5 had shown positive impact on forecast skills in both hemispheres. However, improvements in forecast skills due to the assimilation of AIRS data are less impressive given that the number of assimilated channels from AIRS is much larger than that from other infrared sounders such as HIRS-3 onboard the NOAA 15-17 satellites. This limited ability of AIRS radiance data to improve forecast skill is mainly due to the fact that channels peaking below clouds are not used in the assimilation, and yet those channels, which have the highest vertical resolving capability of the AIRS instrument, are concentrated in the lower troposphere. On average, the percentage of AIRS footprints completely clear for all channels is less than 10%. The percentage of assimilated AIRS channel radiances, however, ranges from 100% for channels peaking in the upper stratosphere, above the clouds, to no more than 5% in the lower atmosphere due to cloud contamination. Our current ability to model and predict clouds accurately in a global model, and to fully characterize and parameterize the optical properties of cloud particles in a radiative transfer model, are the two major obstacles preventing us from using cloudy radiances directly in the assimilation. To further improve forecast skill using AIRS data, we ought to use the channels peaking below the clouds in the troposphere, which can be accomplished by assimilating cloud-cleared radiances. The cloud-cleared radiance data for AIRS used in this study were obtained from optimal cloud clearing procedures developed by researchers at CIMSS of the University of Wisconsin at Madison to retrieve clear-column radiances for all AIRS channels by collocating multi-band MODIS IR clear radiance observations with the AIRS cloudy radiances on a single-footprint basis. Two adjacent AIRS cloudy footprints are used to retrieve one AIRS cloud-cleared radiance spectrum, and no background information (first guess) is needed.
To assimilate the cloud-cleared radiance data, the errors of the cloud-cleared radiances need to be addressed. The details of convolving AIRS radiances with the MODIS spectral response function and a comparison with MODIS-measured cloud-free radiances will be presented. The range of errors of cloud-cleared radiances for AIRS, derived using collocated MODIS clear and nearby AIRS clear data, will be shown. The NASA global data assimilation model, GEOS-5, is used to evaluate and assimilate the cloud-cleared radiances for AIRS. The residuals between the cloud-cleared brightness temperatures and the simulated brightness temperatures from the background (i.e., OMFs) will be investigated. The quality control procedures will be documented based on error estimation and the OMFs. Finally, the differing impacts of assimilating clear-channel radiances versus cloud-cleared radiances will be addressed.

  14. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.

  15. Digital Detection and Processing of Multiple Quadrature Harmonics for EPR Spectroscopy

    PubMed Central

    Ahmad, R.; Som, S.; Kesselring, E.; Kuppusamy, P.; Zweier, J.L.; Potter, L.C.

    2010-01-01

    A quadrature digital receiver and associated signal estimation procedure are reported for L-band electron paramagnetic resonance (EPR) spectroscopy. The approach provides simultaneous acquisition and joint processing of multiple harmonics in both in-phase and out-of-phase channels. The digital receiver, based on a high-speed dual-channel analog-to-digital converter, allows direct digital down-conversion with heterodyne processing using digital capture of the microwave reference signal. Thus, the receiver avoids noise and nonlinearity associated with analog mixers. Also, the architecture allows for low-Q anti-alias filtering and does not require the sampling frequency to be time-locked to the microwave reference. A noise model applicable for arbitrary contributions of oscillator phase noise is presented, and a corresponding maximum-likelihood estimator of unknown parameters is also reported. The signal processing is applicable for Lorentzian lineshape under nonsaturating conditions. The estimation is carried out using a convergent iterative algorithm capable of jointly processing the in-phase and out-of-phase data in the presence of phase noise and unknown microwave phase. Cramér-Rao bound analysis and simulation results demonstrate a significant reduction in linewidth estimation error using quadrature detection, for both low and high values of phase noise. EPR spectroscopic data are also reported for illustration. PMID:20971667

  16. Digital detection and processing of multiple quadrature harmonics for EPR spectroscopy.

    PubMed

    Ahmad, R; Som, S; Kesselring, E; Kuppusamy, P; Zweier, J L; Potter, L C

    2010-12-01

    A quadrature digital receiver and associated signal estimation procedure are reported for L-band electron paramagnetic resonance (EPR) spectroscopy. The approach provides simultaneous acquisition and joint processing of multiple harmonics in both in-phase and out-of-phase channels. The digital receiver, based on a high-speed dual-channel analog-to-digital converter, allows direct digital down-conversion with heterodyne processing using digital capture of the microwave reference signal. Thus, the receiver avoids noise and nonlinearity associated with analog mixers. Also, the architecture allows for low-Q anti-alias filtering and does not require the sampling frequency to be time-locked to the microwave reference. A noise model applicable for arbitrary contributions of oscillator phase noise is presented, and a corresponding maximum-likelihood estimator of unknown parameters is also reported. The signal processing is applicable for Lorentzian lineshape under nonsaturating conditions. The estimation is carried out using a convergent iterative algorithm capable of jointly processing the in-phase and out-of-phase data in the presence of phase noise and unknown microwave phase. Cramér-Rao bound analysis and simulation results demonstrate a significant reduction in linewidth estimation error using quadrature detection, for both low and high values of phase noise. EPR spectroscopic data are also reported for illustration. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Experimental characterization of a 400 Gbit/s orbital angular momentum multiplexed free-space optical link over 120 m.

    PubMed

    Ren, Yongxiong; Wang, Zhe; Liao, Peicheng; Li, Long; Xie, Guodong; Huang, Hao; Zhao, Zhe; Yan, Yan; Ahmed, Nisar; Willner, Asher; Lavery, Martin P J; Ashrafi, Nima; Ashrafi, Solyman; Bock, Robert; Tur, Moshe; Djordjevic, Ivan B; Neifeld, Mark A; Willner, Alan E

    2016-02-01

We experimentally demonstrate and characterize the performance of a 400-Gbit/s orbital angular momentum (OAM) multiplexed free-space optical link over 120 m on the roof of a building. Four OAM beams, each carrying a 100-Gbit/s quadrature-phase-shift-keyed channel, are multiplexed and transmitted. We investigate the influence of channel impairments on the received power, intermodal crosstalk among channels, and system power penalties. Without laser tracking and compensation systems, the measured received power and crosstalk among OAM channels fluctuate by 4.5 dB and 5 dB, respectively, over 180 s. For a beam displacement of 2 mm that corresponds to a pointing error less than 16.7 μrad, the link bit error rates are below the forward error correction threshold of 3.8×10⁻³ for all channels. Both experimental and simulation results show that power penalties increase rapidly when the displacement increases.
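    The quoted pointing error follows from small-angle geometry, angle ≈ displacement / link length:

```python
# beam displacement at the receiver and link length from the experiment
displacement_m = 2e-3     # 2 mm
link_length_m = 120.0     # 120 m free-space path

pointing_error_rad = displacement_m / link_length_m
pointing_error_urad = pointing_error_rad * 1e6   # ~16.7 microradians
```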

  18. Analysis and compensation of synchronous measurement error for multi-channel laser interferometer

    NASA Astrophysics Data System (ADS)

    Du, Shengwu; Hu, Jinchun; Zhu, Yu; Hu, Chuxiong

    2017-05-01

Dual-frequency laser interferometers have been widely used in precision motion systems as displacement sensors to achieve nanoscale positioning or synchronization accuracy. In a multi-channel laser interferometer synchronous measurement system, signal delays differ between channels, which causes asynchronous measurement and in turn leads to a measurement error termed synchronous measurement error (SME). Based on a signal delay analysis of the measurement system, this paper presents a multi-channel SME framework for synchronous measurement and establishes a model relating SME to motion velocity. Further, a real-time compensation method for SME is proposed. This method has been verified in a self-developed laser interferometer signal processing board (SPB). Experimental results showed that, using this compensation method, at a motion velocity of 0.89 m/s the maximum SME between two measuring channels in the SPB is 1.1 nm. This method is easier to implement and apply in engineering practice than methods that directly pursue ever-smaller signal delays.
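    The velocity dependence of SME follows from the delay mismatch between channels: a delay difference Δt at velocity v displaces the reading by v·Δt. A sketch with an illustrative delay mismatch (the ~1.2 ns value below is inferred from the reported numbers, not stated in the paper):

```python
def synchronous_measurement_error(velocity_m_s, delay_mismatch_s):
    """Position error caused by unequal signal delays between two
    interferometer channels: SME = v * delta_t."""
    return velocity_m_s * delay_mismatch_s

# illustrative: a residual of about 1.1 nm at 0.89 m/s corresponds to a
# delay mismatch of roughly 1.2 ns
sme = synchronous_measurement_error(0.89, 1.24e-9)
```

    Compensation then amounts to subtracting v·Δt from the lagging channel in real time, using the measured velocity, which is why it scales to higher speeds more easily than shrinking the raw delays.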

  19. Adaptive Packet Combining Scheme in Three State Channel Model

    NASA Astrophysics Data System (ADS)

    Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak

    2018-01-01

The two popular techniques of packet-combining-based error correction are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: PC offers better throughput than APC but suffers from a higher packet error rate. Because of the random, time-varying nature of the wireless channel, individual application of the SR ARQ, PC, or APC scheme cannot deliver the desired level of throughput. Better throughput can be achieved if the appropriate transmission scheme is selected according to the channel condition. Based on this approach, an adaptive packet combining scheme has been proposed: it adapts to the channel condition, carrying out transmission using the PC, APC, or SR ARQ scheme as appropriate. Experimentally, the error correction capability and throughput of the proposed scheme were observed to be significantly better than those of the SR ARQ, PC, and APC schemes individually.
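    The combining step at the heart of APC can be sketched as a bitwise majority vote over three received copies of a packet (a simplification: the full APC scheme additionally searches low-reliability bit positions):

```python
def majority_combine(copies):
    """Bitwise majority vote over an odd number of received packet copies."""
    n = len(copies)
    return [1 if sum(bits) > n // 2 else 0
            for bits in zip(*copies)]

sent = [1, 0, 1, 1, 0, 0, 1, 0]
rx1  = [1, 0, 1, 0, 0, 0, 1, 0]   # error in position 3
rx2  = [1, 1, 1, 1, 0, 0, 1, 0]   # error in position 1
rx3  = [1, 0, 1, 1, 0, 1, 1, 0]   # error in position 5
decoded = majority_combine([rx1, rx2, rx3])
```

    As long as no bit position is corrupted in two or more copies, the vote recovers the transmitted packet without retransmission.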

  20. Image navigation and registration performance assessment tool set for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Astrophysics Data System (ADS)

    De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-05-01

The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.
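    The 3-sigma metric defined above is a percentile of the accumulated errors, not literally three standard deviations; a minimal sketch of the computation:

```python
import numpy as np

def three_sigma_metric(errors):
    """INR-style 3-sigma metric: the 99.73rd percentile of the absolute
    registration/navigation errors accumulated over the evaluation period."""
    return np.percentile(np.abs(errors), 99.73)

rng = np.random.default_rng(0)
errors = rng.normal(0.0, 1.0, 100_000)   # toy 24-hour error sample, sigma = 1
metric = three_sigma_metric(errors)      # close to 3 for Gaussian errors
```

    The two definitions coincide only for Gaussian errors; the percentile form stays meaningful for the heavier-tailed error distributions seen in practice.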

  1. Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    DeLuccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  2. Image Navigation and Registration Performance Assessment Tool Set for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24-hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24-hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  3. Rank score and permutation testing alternatives for regression quantile estimates

    USGS Publications Warehouse

    Cade, B.S.; Richards, J.D.; Mielke, P.W.

    2006-01-01

Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic, distributed as a χ² random variable with q degrees of freedom (where q parameters are constrained by H0), and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvement in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles, but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors, but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles.
Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.
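The permutation F-test idea can be illustrated with a toy computation. The sketch below is not the paper's rank score statistic; it is a simplified permutation test on the drop in check (pinball) loss for a single-predictor quantile regression, with the optimizer, starting values, and permutation count chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(resid, tau):
    """Check (pinball) loss minimized by quantile regression."""
    return np.sum(np.where(resid >= 0, tau * resid, (tau - 1.0) * resid))

def fit_qr(x, y, tau):
    """Fit y = b0 + b1*x at quantile tau by direct loss minimization
    (illustrative; production code would use a linear-programming solver)."""
    b_start = np.polyfit(x, y, 1)[::-1]           # OLS start: (intercept, slope)
    res = minimize(lambda b: pinball_loss(y - b[0] - b[1] * x, tau),
                   b_start, method="Nelder-Mead")
    return res.x, res.fun

def perm_test_slope(x, y, tau, n_perm=199, seed=1):
    """Permutation p-value for H0: slope = 0, using the check-loss drop."""
    rng = np.random.default_rng(seed)
    loss_null = pinball_loss(y - np.quantile(y, tau), tau)  # intercept-only fit
    _, loss_full = fit_qr(x, y, tau)
    t_obs = loss_null - loss_full                 # observed drop in check loss
    hits = 0
    for _ in range(n_perm):
        xp = rng.permutation(x)                   # break the x-y association
        _, lf = fit_qr(xp, y, tau)
        hits += (loss_null - lf) >= t_obs
    return t_obs, (1 + hits) / (1 + n_perm)
```

With a strong linear signal the observed loss drop dwarfs the permuted drops and the p-value sits near its minimum of 1/(n_perm + 1).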

  4. Removing ballistocardiogram (BCG) artifact from full-scalp EEG acquired inside the MR scanner with Orthogonal Matching Pursuit (OMP)

    PubMed Central

    Xia, Hongjing; Ruan, Dan; Cohen, Mark S.

    2014-01-01

Ballistocardiogram (BCG) artifact remains a major challenge that renders electroencephalographic (EEG) signals hard to interpret in simultaneous EEG and functional MRI (fMRI) data acquisition. Here, we propose an integrated learning and inference approach that takes advantage of a commercial high-density EEG cap to estimate the BCG contribution in noisy EEG recordings from inside the MR scanner. To reliably estimate the full-scalp BCG artifacts, a near-optimal subset (20 out of 256) of channels was first identified using a modified recording setup. In subsequent recordings inside the MR scanner, the BCG-only signal from this subset of channels was used to generate continuous estimates of the full-scalp BCG artifacts via inference, from which the intended EEG signal was recovered. The reconstruction of the EEG was performed with both a direct subtraction and an optimization scheme. We evaluated the performance on both synthetic and real contaminated recordings, and compared it to the benchmark Optimal Basis Set (OBS) method. In the challenging non-event-related-potential (non-ERP) EEG studies, our reconstruction can yield more than fourteen-fold improvement in reducing the normalized RMS error of EEG signals, compared to OBS. PMID:25120421
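The OMP of the title is the standard greedy sparse-approximation algorithm. A minimal generic sketch (not the authors' full-scalp inference pipeline, and assuming a dictionary with unit-norm columns) looks like:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms (columns of D,
    assumed unit norm) and least-squares refit y on the selected set."""
    support, r = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef          # residual after re-fitting
    return support, coef, r
```

For an orthonormal dictionary the correlations equal the true coefficients, so a k-sparse signal is recovered exactly in k iterations.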

  5. Angular and Seasonal Variation of Spectral Surface Reflectance Ratios: Implications for the Remote Sensing of Aerosol over Land

    NASA Technical Reports Server (NTRS)

    Remer, L. A.; Wald, A. E.; Kaufman, Y. J.

    1999-01-01

We obtain valuable information on the angular and seasonal variability of surface reflectance using a hand-held spectrometer from a light aircraft. The data are used to test a procedure that allows us to estimate visible surface reflectance from the longer-wavelength 2.1 micrometer channel (mid-IR). Estimating or avoiding surface reflectance in the visible is a vital first step in most algorithms that retrieve aerosol optical thickness over land targets. The data indicate that specular reflection found when viewing targets from the forward direction can severely corrupt the relationships between the visible and 2.1 micrometer reflectance that were derived from nadir data. There is a month-by-month variation in the ratios between the visible and the mid-IR, weakly correlated to the Normalized Difference Vegetation Index (NDVI). If specular reflection is not avoided, the errors resulting from estimating surface reflectance from the mid-IR exceed the acceptable limit of Δρ ≈ 0.01 in roughly 40% of the cases, using the current algorithm. This is reduced to 25% of the cases if specular reflection is avoided. An alternative method that uses path radiance rather than explicitly estimating visible surface reflectance results in similar errors. The two methods have different strengths and weaknesses that require further study.
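The visible-from-mid-IR procedure can be sketched with the commonly used nominal ratios (red ≈ ρ2.1/2, blue ≈ ρ2.1/4). The ratios and the Δρ ≈ 0.01 acceptance test below are illustrative assumptions, since the abstract reports that the true ratios vary with view angle and season:

```python
import numpy as np

# Nominal mid-IR-to-visible reflectance ratios (assumed here for illustration).
RED_RATIO, BLUE_RATIO = 0.5, 0.25
ACCEPTABLE_DRHO = 0.01                     # acceptable surface-reflectance error

def estimate_visible(rho_21):
    """Estimate red and blue surface reflectance from the 2.1 um channel."""
    rho_21 = np.asarray(rho_21, dtype=float)
    return RED_RATIO * rho_21, BLUE_RATIO * rho_21

def fraction_exceeding(rho_21, rho_red_measured):
    """Fraction of cases whose estimation error exceeds the acceptable limit."""
    est_red, _ = estimate_visible(rho_21)
    return float(np.mean(np.abs(est_red - rho_red_measured) > ACCEPTABLE_DRHO))
```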

  6. Measurements of Reynolds stress profiles in unstratified tidal flow

    USGS Publications Warehouse

    Stacey, M.T.; Monismith, Stephen G.; Burau, J.R.

    1999-01-01

In this paper we present a method for measuring profiles of turbulence quantities using a broadband acoustic Doppler current profiler (ADCP). The method follows previous work on the continental shelf and extends the analysis to develop estimates of the errors associated with the estimation methods. ADCP data were collected in an unstratified channel and the results of the analysis are compared to theory. This comparison shows that the method provides an estimate of the Reynolds stresses, which is unbiased by Doppler noise, and an estimate of the turbulent kinetic energy (TKE), which is biased by an amount proportional to the Doppler noise. The noise in each of these quantities as well as the bias in the TKE match well with the theoretical values produced by the error analysis. The quantification of profiles of Reynolds stresses simultaneously with the measurement of mean velocity profiles allows for extensive analysis of the turbulence of the flow. In this paper, we examine the relation between the turbulence and the mean flow through the calculation of u*, the friction velocity, and Cd, the coefficient of drag. Finally, we calculate quantities of particular interest in turbulence modeling and analysis, the characteristic lengthscales, including a lengthscale which represents the stream-wise scale of the eddies which dominate the Reynolds stresses. Copyright 1999 by the American Geophysical Union.
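The key property, a Reynolds stress estimate unbiased by Doppler noise, follows from differencing the variances of opposing slant beams: independent noise inflates both variances equally and cancels. A hedged sketch under an assumed beam geometry and sign convention (conventions vary between instruments):

```python
import numpy as np

def reynolds_stress_variance_method(b1, b2, theta_deg=20.0):
    """<u'w'> from the variance difference of an opposing ADCP beam pair
    slanted theta degrees from the vertical (sign convention assumed here).
    Uncorrelated Doppler noise adds equally to both variances and cancels."""
    th = np.radians(theta_deg)
    return (np.var(b1) - np.var(b2)) / (4.0 * np.sin(th) * np.cos(th))

# Synthetic check: project correlated (u, w) onto two opposing slant beams,
# add independent Doppler noise, and recover the stress.
rng = np.random.default_rng(0)
n, th, sigma_noise = 200_000, np.radians(20.0), 0.5
u = rng.standard_normal(n)
w = 0.3 * u + 0.5 * rng.standard_normal(n)
b1 = u * np.sin(th) + w * np.cos(th) + sigma_noise * rng.standard_normal(n)
b2 = -u * np.sin(th) + w * np.cos(th) + sigma_noise * rng.standard_normal(n)
est = reynolds_stress_variance_method(b1, b2)
truth = np.cov(u, w)[0, 1]
```

A single-beam variance estimate of TKE, by contrast, retains the additive noise term, which is the bias the abstract describes.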

  7. Grip Force and 3D Push-Pull Force Estimation Based on sEMG and GRNN

    PubMed Central

    Wu, Changcheng; Zeng, Hong; Song, Aiguo; Xu, Baoguo

    2017-01-01

The estimation of the grip force and the 3D push-pull force (push and pull force in three-dimensional space) from the electromyogram (EMG) signal is of great importance in the dexterous control of the EMG prosthetic hand. In this paper, an action force estimation method based on eight channels of surface EMG (sEMG) and the Generalized Regression Neural Network (GRNN) is proposed to meet the requirements of the force control of the intelligent EMG prosthetic hand. Firstly, the experimental platform, the acquisition of the sEMG, the feature extraction of the sEMG, and the construction of the GRNN are described. Then, the multiple channels of sEMG during hand movement are captured by EMG sensors attached at eight different positions on the arm skin surface. Meanwhile, a grip force sensor and a three-dimensional force sensor are adopted to measure the output force of the human hand. The characteristic matrix of the sEMG and the force signals are used to construct the GRNN. The mean absolute value and the root mean square of the estimation errors, and the correlation coefficients between the actual force and the estimated force, are employed to assess the accuracy of the estimation. Analysis of variance (ANOVA) is also employed to test the difference of the force estimation. Experiments were implemented to verify the effectiveness of the proposed estimation method, and the results show that the output force of the human hand can be correctly estimated using the sEMG and GRNN method. PMID:28713231
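A GRNN is, at its core, a Gaussian-kernel weighted average of training targets (the Nadaraya-Watson form). A minimal sketch, with the smoothing parameter sigma as a free assumption:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """Generalized Regression Neural Network prediction: each query is a
    Gaussian-kernel weighted average of the training targets."""
    X_train = np.atleast_2d(np.asarray(X_train, float))
    X_query = np.atleast_2d(np.asarray(X_query, float))
    # Squared Euclidean distance from every query to every training pattern.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))       # pattern-layer activations
    return (w @ np.asarray(y_train, float)) / w.sum(axis=1)
```

In the paper's setting, `X_train` would hold the eight-channel sEMG feature vectors and `y_train` the measured grip or push-pull force samples.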

  8. Grip Force and 3D Push-Pull Force Estimation Based on sEMG and GRNN.

    PubMed

    Wu, Changcheng; Zeng, Hong; Song, Aiguo; Xu, Baoguo

    2017-01-01

The estimation of the grip force and the 3D push-pull force (push and pull force in three-dimensional space) from the electromyogram (EMG) signal is of great importance in the dexterous control of the EMG prosthetic hand. In this paper, an action force estimation method based on eight channels of surface EMG (sEMG) and the Generalized Regression Neural Network (GRNN) is proposed to meet the requirements of the force control of the intelligent EMG prosthetic hand. Firstly, the experimental platform, the acquisition of the sEMG, the feature extraction of the sEMG, and the construction of the GRNN are described. Then, the multiple channels of sEMG during hand movement are captured by EMG sensors attached at eight different positions on the arm skin surface. Meanwhile, a grip force sensor and a three-dimensional force sensor are adopted to measure the output force of the human hand. The characteristic matrix of the sEMG and the force signals are used to construct the GRNN. The mean absolute value and the root mean square of the estimation errors, and the correlation coefficients between the actual force and the estimated force, are employed to assess the accuracy of the estimation. Analysis of variance (ANOVA) is also employed to test the difference of the force estimation. Experiments were implemented to verify the effectiveness of the proposed estimation method, and the results show that the output force of the human hand can be correctly estimated using the sEMG and GRNN method.

  9. Two-Step Fair Scheduling of Continuous Media Streams over Error-Prone Wireless Channels

    NASA Astrophysics Data System (ADS)

    Oh, Soohyun; Lee, Jin Wook; Park, Taejoon; Jo, Tae-Chang

In wireless cellular networks, streaming of continuous media (with strict QoS requirements) over wireless links is challenging due to their inherent unreliability, characterized by location-dependent, bursty errors. To address this challenge, we present a two-step scheduling algorithm for a base station to provide streaming of continuous media to wireless clients over error-prone wireless links. The proposed algorithm is capable of minimizing the packet loss rate of individual clients in the presence of error bursts by transmitting packets in a round-robin manner and adopting a mechanism for channel prediction and swapping.
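The round-robin-with-swapping idea can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm, and it assumes a boolean per-client, per-slot channel prediction is available:

```python
from typing import List, Sequence

def schedule_with_swapping(n_clients: int, good: Sequence[Sequence[bool]],
                           n_slots: int) -> List[int]:
    """Round-robin slot assignment with swapping: when the scheduled client's
    channel is predicted bad for its slot, swap with a later slot so that
    both affected clients see predicted-good channels.  good[c][t] is the
    channel prediction for client c in slot t (an assumed interface)."""
    sched = [t % n_clients for t in range(n_slots)]
    for t in range(n_slots):
        c = sched[t]
        if good[c][t]:
            continue                            # predicted good: keep the slot
        for t2 in range(t + 1, n_slots):
            c2 = sched[t2]
            if good[c2][t] and good[c][t2]:     # swap only if it helps both
                sched[t], sched[t2] = c2, c
                break
    return sched
```

Swapping preserves each client's slot count (fairness) while steering transmissions away from predicted error bursts.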

  10. Coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ε^−(d−1) error correction cycles. Here ε ≪ 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.

  11. Design of a robust baseband LPC coder for speech transmission over 9.6 kbit/s noisy channels

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Russell, W. H.; Higgins, A. L.

    1982-04-01

    This paper describes the design of a baseband Linear Predictive Coder (LPC) which transmits speech over 9.6 kbit/sec synchronous channels with random bit errors of up to 1%. Presented are the results of our investigation of a number of aspects of the baseband LPC coder with the goal of maximizing the quality of the transmitted speech. Important among these aspects are: bandwidth of the baseband, coding of the baseband residual, high-frequency regeneration, and error protection of important transmission parameters. The paper discusses these and other issues, presents the results of speech-quality tests conducted during the various stages of optimization, and describes the details of the optimized speech coder. This optimized speech coding algorithm has been implemented as a real-time full-duplex system on an array processor. Informal listening tests of the real-time coder have shown that the coder produces good speech quality in the absence of channel bit errors and introduces only a slight degradation in quality for channel bit error rates of up to 1%.

  12. Alternative Regression Equations for Estimation of Annual Peak-Streamflow Frequency for Undeveloped Watersheds in Texas using PRESS Minimization

    USGS Publications Warehouse

    Asquith, William H.; Thompson, David B.

    2008-01-01

The U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, investigated a refinement of the regional regression method and developed alternative equations for estimation of peak-streamflow frequency for undeveloped watersheds in Texas. A common model for estimation of peak-streamflow frequency is based on the regional regression method. The current (2008) regional regression equations for 11 regions of Texas are based on log10 transformations of all regression variables (drainage area, main-channel slope, and watershed shape). Exclusive use of log10 transformation does not fully linearize the relations between the variables. As a result, some systematic bias remains in the current equations. The bias results in overestimation of peak streamflow for both the smallest and largest watersheds, and it increases with increasing recurrence interval. The primary source of the bias is the discernible curvilinear relation in log10 space between peak streamflow and drainage area. Bias is demonstrated by selected residual plots with superimposed LOWESS trend lines. To address the bias, a statistical framework based on minimization of the PRESS statistic through power transformation of drainage area is described and implemented, and the resulting regression equations are reported. The equations derived from PRESS minimization have PRESS statistics and residual standard errors smaller than those of the log10-exclusive equations. Selected residual plots for the PRESS-minimized equations are presented to demonstrate that systematic bias in regional regression equations for peak-streamflow frequency estimation in Texas can be reduced. Because the overall error is similar to the error associated with previous equations and because the bias is reduced, the PRESS-minimized equations reported here provide alternative equations for peak-streamflow frequency estimation.
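The PRESS statistic itself has a convenient closed form for linear regression: each leave-one-out prediction error is the ordinary residual inflated by its hat-matrix leverage, so no refitting is needed. A minimal sketch:

```python
import numpy as np

def press_statistic(X, y):
    """PRESS = sum of squared leave-one-out prediction errors, computed in
    closed form as sum_i (e_i / (1 - h_ii))^2 from a single OLS fit."""
    H = X @ np.linalg.solve(X.T @ X, X.T)      # hat (projection) matrix
    e = y - H @ y                              # ordinary residuals
    h = np.diag(H)                             # leverages h_ii
    return float(np.sum((e / (1.0 - h)) ** 2))
```

Minimizing PRESS over candidate power transformations of drainage area, as the abstract describes, amounts to evaluating this quantity for each transformed design matrix and keeping the smallest.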

  13. Estimation of flood-frequency characteristics of small urban streams in North Carolina

    USGS Publications Warehouse

    Robbins, J.C.; Pope, B.F.

    1996-01-01

A statewide study was conducted to develop methods for estimating the magnitude and frequency of floods of small urban streams in North Carolina. This type of information is critical in the design of bridges, culverts and water-control structures, establishment of flood-insurance rates and flood-plain regulation, and for other uses by urban planners and engineers. Concurrent records of rainfall and runoff data collected in small urban basins were used to calibrate rainfall-runoff models. Historic rainfall records were used with the calibrated models to synthesize a long-term record of annual peak discharges. The synthesized record of annual peak discharges were used in a statistical analysis to determine flood-frequency distributions. These frequency distributions were used with distributions from previous investigations to develop a database for 32 small urban basins in the Blue Ridge-Piedmont, Sand Hills, and Coastal Plain hydrologic areas. The study basins ranged in size from 0.04 to 41.0 square miles. Data describing the size and shape of the basin, level of urban development, and climate and rural flood characteristics also were included in the database. Estimation equations were developed by relating flood-frequency characteristics to basin characteristics in a generalized least-squares regression analysis. The most significant basin characteristics are drainage area, impervious area, and rural flood discharge. The model error and prediction errors for the estimating equations were less than those for the national flood-frequency equations previously reported. Resulting equations, which have prediction errors generally less than 40 percent, can be used to estimate flood-peak discharges for 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals for small urban basins across the State assuming negligible, sustainable, in-channel detention or basin storage.

  14. Experimental studies of high-accuracy RFID localization with channel impairments

    NASA Astrophysics Data System (ADS)

    Pauls, Eric; Zhang, Yimin D.

    2015-05-01

Radio frequency identification (RFID) systems present an incredibly cost-effective and easy-to-implement solution to close-range localization. One of the important applications of a passive RFID system is to determine the reader position through multilateration based on the estimated distances between the reader and multiple distributed reference tags obtained from, e.g., the received signal strength indicator (RSSI) readings. In practice, the achievable accuracy of passive RFID reader localization suffers from many factors, such as the distorted RSSI reading due to channel impairments in terms of the susceptibility to reader antenna patterns and multipath propagation. Previous studies have shown that the accuracy of passive RFID localization can be significantly improved by properly modeling and compensating for such channel impairments. The objective of this paper is to report experimental study results that validate the effectiveness of such approaches for high-accuracy RFID localization. We also examine a number of practical issues arising in the underlying problem that limit the accuracy of reader-tag distance measurements and, therefore, of the estimated reader position. These issues include the variations in tag radiation characteristics for similar tags, effects of tag orientations, and reader RSS quantization and measurement errors. As such, this paper reveals valuable insights into the issues and solutions toward achieving high-accuracy passive RFID localization.
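Multilateration from estimated reader-tag distances can be sketched as a linearized least-squares fix: subtracting the first range equation removes the quadratic terms, leaving a linear system in the unknown position. The anchor layout below is illustrative:

```python
import numpy as np

def multilaterate(anchors, dists):
    """Linearized least-squares position fix from distances to reference tags.
    Subtracting the first range equation |x - p_i|^2 = d_i^2 from the others
    cancels |x|^2 and yields a linear system A x = b."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    p0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - p0)
    b = d0**2 - d[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With RSSI-derived distances, the residual of this fit also gives a quick consistency check on the distance model.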

  15. A Frequency-Domain Multipath Parameter Estimation and Mitigation Method for BOC-Modulated GNSS Signals

    PubMed Central

    Sun, Chao; Feng, Wenquan; Du, Songlin

    2018-01-01

As multipath is one of the dominating error sources for high accuracy Global Navigation Satellite System (GNSS) applications, multipath mitigation approaches are employed to minimize this hazardous error in receivers. Binary offset carrier modulation (BOC), as a modernized signal structure, is adopted to achieve significant enhancement. However, because of its multi-peak autocorrelation function, conventional multipath mitigation techniques for binary phase shift keying (BPSK) signals are not optimal. Currently, non-parametric and parametric approaches have been studied specifically aiming at multipath mitigation for BOC signals. Non-parametric techniques, such as Code Correlation Reference Waveforms (CCRW), usually have good feasibility with simple structures, but suffer from low universal applicability for different BOC signals. Parametric approaches can thoroughly eliminate multipath error by estimating multipath parameters. The problems with this category are high computational complexity and vulnerability to noise. To tackle the problem, we present a practical parametric multipath estimation method in the frequency domain for BOC signals. The received signal is transferred to the frequency domain to separate out the multipath channel transfer function for multipath parameter estimation. During this process, we apply segmentation and averaging to reduce both the noise effect and the computational load. The performance of the proposed method is evaluated and compared with the previous work in three scenarios. Results indicate that the proposed averaging-Fast Fourier Transform (averaging-FFT) method achieves good robustness in severe multipath environments with lower computational load for both low-order and high-order BOC signals. PMID:29495589
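The segment-and-average frequency-domain estimate can be sketched Welch-style: average cross- and auto-spectra over segments, then divide to obtain the channel transfer function. This is a generic reconstruction (per-segment circular convolution assumed), not the paper's exact BOC processing chain:

```python
import numpy as np

def estimate_channel_fft(x, y, seg_len):
    """Welch-style transfer-function estimate H(f) = S_xy(f) / S_xx(f):
    split reference x and received y into segments, average the cross- and
    auto-spectra over segments, then divide.  Averaging over K segments
    reduces the noise variance by roughly 1/K and keeps each FFT short."""
    n_seg = len(x) // seg_len
    X = np.fft.fft(x[: n_seg * seg_len].reshape(n_seg, seg_len), axis=1)
    Y = np.fft.fft(y[: n_seg * seg_len].reshape(n_seg, seg_len), axis=1)
    Sxy = np.mean(np.conj(X) * Y, axis=0)      # averaged cross-spectrum
    Sxx = np.mean(np.abs(X) ** 2, axis=0)      # averaged auto-spectrum
    return Sxy / Sxx
```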

  16. Robust transceiver design for reciprocal M × N interference channel based on statistical linearization approximation

    NASA Astrophysics Data System (ADS)

    Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad

    2017-12-01

    This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC), under imperfect channel state information (CSI). In this paper, two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has, respectively, M and N antennas and IC operates in a time division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of signal-to-interference-plus-noise ratio (SINR). On the other hand, the second algorithm tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. Taylor expansion is exploited to approximate the effect of CSI imperfection on mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate sum rate performance of the proposed algorithms and the advantage of incorporating variation minimization into the transceiver design.

  17. A coupled channel study of HN2 unimolecular decay based on a global ab initio potential surface

    NASA Technical Reports Server (NTRS)

    Koizumi, Hiroyasu; Schatz, George C.; Walch, Stephen P.

    1991-01-01

The unimolecular decay lifetimes of several vibrational states of HN2 are determined on the basis of an accurate coupled channel dynamics study using a global analytical potential surface. The surface reproduces the ab initio points with an rms error of 0.08 kcal/mol for energies below 20 kcal/mol. Modifications to the potential that describe the effect of improving the basis set in the ab initio calculations are provided. Converged coupled channel calculations are performed for the ground rotational state of HN2 to determine the lifetimes of the lowest ten vibrational states. Only the ground vibrational state (000) and first excited bend (001) are found to have lifetimes longer than 1 ps. The lifetimes of these states are estimated at 3 × 10⁻⁹ s and 2 × 10⁻¹⁰ s, respectively. Variation of these results with quality of the ab initio calculations is not more than a factor of 5.

  18. Investigation of magneto-hemodynamic flow in a semi-porous channel using orthonormal Bernstein polynomials

    NASA Astrophysics Data System (ADS)

    Hosseini, E.; Loghmani, G. B.; Heydari, M.; Rashidi, M. M.

    2017-07-01

In this paper, the problem of the magneto-hemodynamic laminar viscous flow of a conducting physiological fluid in a semi-porous channel under a transverse magnetic field is investigated numerically. Using a Berman's similarity transformation, the two-dimensional momentum conservation partial differential equations can be written as a system of nonlinear ordinary differential equations incorporating Lorentzian magneto-hydrodynamic body force terms. A new computational method based on the operational matrix of derivative of orthonormal Bernstein polynomials for solving the resulting differential systems is introduced. Moreover, by using the residual correction process, two types of error estimates are provided and reported to show the strength of the proposed method. Graphical and tabular results are presented to investigate the influence of the Hartmann number (Ha) and the transpiration Reynolds number (Re) on velocity profiles in the channel. The results are compared with those obtained by previous works to confirm the accuracy and efficiency of the proposed scheme.

  19. Content-based multiple bitstream image transmission over noisy channels.

    PubMed

    Cao, Lei; Chen, Chang Wen

    2002-01-01

In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, and therefore enables high-performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.

  20. Random access to mobile networks with advanced error correction

    NASA Technical Reports Server (NTRS)

    Dippold, Michael

    1990-01-01

A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit by the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that a high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft-decision decoding, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.

  1. Effects of data selection on the assimilation of AIRS data

    NASA Technical Reports Server (NTRS)

Joiner, Joanna; Brin, E.; Treadon, R.; Derber, J.; VanDelst, P.; DeSilva, A.; Marshall, J. Le; Poli, P.; Atlas, R.; Cruz, C.

    2006-01-01

    The Atmospheric InfraRed Sounder (AIRS), flying aboard NASA's Earth Observing System (EOS) Aqua satellite with the Advanced Microwave Sounding Unit-A (AMSU-A), has been providing data for use in numerical weather prediction (NWP) and data assimilation systems (DAS) for over three years. The full AIRS data set is currently not transmitted in near-real-time (NRT) to the NWP centers. Instead, data sets with reduced spatial and spectral information are produced and made available in NRT. In this paper, we evaluate the use of different channel selections and error specifications. We achieved significant positive impact from the Aqua AIRS/AMSU-A combination in both hemispheres during our experimental time period of January 2003. The best results were obtained using a set of 156 channels that did not include any in the 6.7micron water vapor band. The latter have a large influence on both temperature and humidity analyses. If observation and background errors are not properly specified, the partitioning of temperature and humidity information from these channels will not be correct, and this can lead to a degradation in forecast skill. We found that changing the specified channel errors had a significant effect on the amount of data that entered into the analysis as a result of quality control thresholds that are related to the errors. However, changing the channel errors within a relatively small window did not significantly impact forecast skill with the 155 channel set. We also examined the effects of different types of spatial data reduction on assimilated data sets and NWP forecast skill. Whether we picked the center or the warmest AIRS pixel in a 3x3 array affected the amount of data ingested by the analysis but had a negligible impact on the forecast skill.

  2. Design of two-channel filter bank using nature inspired optimization based fractional derivative constraints.

    PubMed

    Kuldeep, B; Singh, V K; Kumar, A; Singh, G K

    2015-01-01

In this article, a novel approach for 2-channel linear phase quadrature mirror filter (QMF) bank design based on a hybrid of gradient based optimization and optimization of fractional derivative constraints is introduced. For the purpose of this work, recently proposed nature inspired optimization techniques such as cuckoo search (CS), modified cuckoo search (MCS) and wind driven optimization (WDO) are explored for the design of QMF bank. A 2-channel QMF bank is also designed with the particle swarm optimization (PSO) and artificial bee colony (ABC) nature inspired optimization techniques. The design problem is formulated in the frequency domain as the sum of the L2 norm of error in the passband, stopband and transition band at quadrature frequency. The contribution of this work is the novel hybrid combination of gradient based optimization (Lagrange multiplier method) and nature inspired optimization (CS, MCS, WDO, PSO and ABC) and its usage for optimizing the design problem. Performance of the proposed method is evaluated by passband error (ϕp), stopband error (ϕs), transition band error (ϕt), peak reconstruction error (PRE), stopband attenuation (As) and computational time. The design examples illustrate the ingenuity of the proposed method. Results are also compared with the other existing algorithms, and it was found that the proposed method gives the best results in terms of peak reconstruction error and transition band error while it is comparable in terms of passband and stopband error. Results show that the proposed method is successful for both lower and higher order 2-channel QMF bank design. A comparative study of various nature inspired optimization techniques is also presented, and the study singles out CS as the best QMF optimization technique. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
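The paper's figures of merit (band errors and peak reconstruction error of the two-channel bank) can be evaluated for any candidate lowpass prototype. The band edges and error definitions below are illustrative assumptions, not the paper's exact objective:

```python
import numpy as np

def freq_response(h, w):
    """Frequency response H(e^{jw}) of an FIR filter h at frequencies w."""
    return np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h

def qmf_metrics(h, wp=0.4 * np.pi, ws=0.6 * np.pi, n=512):
    """Passband/stopband errors and peak reconstruction error of a 2-channel
    QMF bank built from lowpass prototype h, with the highpass branch taken
    as the mirror H0(pi - w).  Band edges wp, ws are illustrative choices."""
    w = np.linspace(0.0, np.pi, n)
    H = np.abs(freq_response(h, w))
    # Overall amplitude distortion of the analysis/synthesis cascade.
    T = H**2 + np.abs(freq_response(h, np.pi - w)) ** 2
    phi_p = float(np.mean((H[w <= wp] - H[0]) ** 2))    # passband flatness
    phi_s = float(np.mean(H[w >= ws] ** 2))             # stopband leakage
    pre = float(np.max(np.abs(T - T[0])))               # peak recon. error
    return phi_p, phi_s, pre
```

For the two-tap Haar prototype the distortion function is exactly flat, so the peak reconstruction error vanishes while the band errors remain large, which is the trade-off longer optimized prototypes improve.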

  3. Improved Determination of Surface and Atmospheric Temperatures Using Only Shortwave AIRS Channels: The AIRS Version 6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2010-01-01

AIRS was launched on EOS Aqua on May 4, 2002 together with AMSU-A and HSB to form a next generation polar orbiting infrared and microwave atmosphere sounding system (Pagano et al 2003). The theoretical approach used to analyze AIRS/AMSU/HSB data in the presence of clouds in the AIRS Science Team Version 3 at-launch algorithm, and that used in the Version 4 post-launch algorithm, have been published previously. Significant theoretical and practical improvements have been made in the analysis of AIRS/AMSU data since the Version 4 algorithm. Most of these have already been incorporated in the AIRS Science Team Version 5 algorithm (Susskind et al 2010), now being used operationally at the Goddard DISC. The AIRS Version 5 retrieval algorithm contains three significant improvements over Version 4. Improved physics in Version 5 allowed for use of AIRS clear column radiances (R(sub i)) in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations were used primarily in the generation of clear column radiances (R(sub i)) for all channels. This new approach allowed for the generation of accurate Quality Controlled values of R(sub i) and T(p) under more stressing cloud conditions. Secondly, Version 5 contained a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 contained for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS Only sounding methodology was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail.
Susskind et al 2010 show that Version 5 AIRS Only soundings are only slightly degraded relative to the AIRS/AMSU soundings, even at large fractional cloud cover.

  4. Self-Noise of the STS-2 and sensitivity of its computation to errors in alignment of sensors

    NASA Astrophysics Data System (ADS)

    Gerner, Andreas; Sleeman, Reinoud; Grasemann, Bernhard; Lenhardt, Wolfgang

    2016-04-01

The assessment of a seismometer's self-noise is an important part of establishing its health, quality, and suitability. A spectral coherence technique proposed by Sleeman et al. (2006), using synchronously recorded data from triples of collocated and co-aligned seismometers, has been shown to be a very robust and reliable way to estimate the self-noise of modern broadband seismic sensors. Previous work has demonstrated that the resulting self-noise spectra, primarily in the frequency range of Earth's microseisms, are considerably affected by small errors in the alignment of the sensors. Because of this sensitivity of the 3-channel correlation technique to misalignment, numerical rotation of the recorded traces prior to self-noise computation can be performed to find the best possible alignment by searching for minimum self-noise values. In this study we focus on the sensitivity of the 3-channel correlation technique to misalignment, and investigate the possibility of completely removing the microseism signal from self-noise estimates for each of the sensors' three components separately. Data from a long-term installation of four STS-2 sensors, specifically intended for self-noise studies, at the Conrad Observatory (Austria) in a collaboration between the KNMI (Netherlands) and the ZAMG (Austria) provide a reliable basis for an accurate sensitivity analysis and self-noise assessment. Our work has resulted in undisturbed self-noise estimates for the vertical components; our current focus is on improving the alignment of the horizontal axes and verifying the manufacturer's specification regarding the orthogonality of all three components. The tools and methods developed within this research can help to quickly establish consistent self-noise models, including estimates of orthogonality and alignment, which facilitates comparison of different models and provides a means to test the quality and accuracy of a seismic sensor over its life span.
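The cancellation at the heart of the three-channel technique can be sketched with synthetic data: for collocated sensors x, y, z seeing the same ground motion, the Sleeman et al. (2006) estimator N_xx = P_xx − P_yx·P_xz/P_yz removes the coherent signal and leaves only the noise spectrum of sensor x. The noise levels, segment length, and Welch-style averaging below are illustrative assumptions, not parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nseg = 1 << 16, 64
common = rng.normal(size=n)                      # shared ground motion seen by all sensors
x, y, z = (common + 0.1 * rng.normal(size=n) for _ in range(3))

def cross_psd(a, b, nseg=nseg):
    """Segment-averaged cross-spectrum (unnormalized Welch-style estimate)."""
    m = len(a) // nseg
    A = np.fft.rfft(a[: nseg * m].reshape(nseg, m), axis=1)
    B = np.fft.rfft(b[: nseg * m].reshape(nseg, m), axis=1)
    return (A * B.conj()).mean(axis=0)

# Sleeman et al. (2006): N_xx = P_xx - P_yx * P_xz / P_yz cancels the coherent signal
pxx, pyx = cross_psd(x, x), cross_psd(y, x)
pxz, pyz = cross_psd(x, z), cross_psd(y, z)
noise_x = (pxx - pyx * pxz / pyz).real           # estimate of sensor x's own noise PSD
```

Because the shared signal appears identically in every cross-spectrum, it cancels bin by bin, leaving a spectrum roughly 100x below the total power here, consistent with the 0.1 noise amplitude.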

  5. Low-mobility channel tracking for MIMO-OFDM communication systems

    NASA Astrophysics Data System (ADS)

    Pagadarai, Srikanth; Wyglinski, Alexander M.; Anderson, Christopher R.

    2013-12-01

It is now well understood that by exploiting the available additional spatial dimensions, multiple-input multiple-output (MIMO) communication systems provide capacity gains over single-input single-output systems without increasing the overall transmit power or requiring additional bandwidth. However, these large capacity gains are feasible only when perfect knowledge of the channel is available at the receiver. Consequently, when the channel knowledge is imperfect, as is common in practical settings, the impact on the achievable capacity needs to be evaluated. In this study, we begin with a general MIMO framework and specialize it to the case of orthogonal frequency division multiplexing (OFDM) systems by decoupling channel estimation from data detection. Cyclic-prefixed OFDM systems have attracted widespread interest due to several appealing characteristics, not least of which is the fact that a single-tap frequency-domain equalizer per subcarrier is sufficient due to the circulant structure of the resulting channel matrix. We consider a low-mobility wireless channel which exhibits inter-block channel variations and apply Kalman tracking when MIMO-OFDM communication is performed. Furthermore, we consider the signal transmission to contain a stream of training and information symbols followed by information symbols alone. By relying on predicted channel states when training symbols are absent, we aim to understand how the improvements in channel capacity are affected by imperfect channel knowledge. We show that the Kalman recursion can be simplified by the optimal minimum mean square error training design. Using the simplified recursion, we derive capacity upper and lower bounds to evaluate the performance of the system.
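The Kalman tracking idea can be sketched for a single channel tap modeled as a first-order Gauss-Markov (AR(1)) process observed through noisy pilots; the coefficients and noise levels are illustrative assumptions, and the paper's MIMO-OFDM setting replaces this scalar with a matrix recursion.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.999                 # AR(1) fading coefficient: low mobility -> slow variation
q = 1.0 - a**2            # process noise variance keeps the tap power at 1
r = 0.1                   # pilot observation noise variance
n = 2000

h = np.empty(n)           # true channel tap (real scalar for clarity)
h[0] = rng.normal()
for t in range(1, n):
    h[t] = a * h[t-1] + np.sqrt(q) * rng.normal()
y = h + np.sqrt(r) * rng.normal(size=n)      # noisy pilot-based observations

h_hat = np.zeros(n)
p = 1.0                                      # posterior error variance
for t in range(n):
    h_pred = a * h_hat[t-1] if t else 0.0    # predict from the AR(1) model
    p_pred = a**2 * p + q
    g = p_pred / (p_pred + r)                # Kalman gain
    h_hat[t] = h_pred + g * (y[t] - h_pred)
    p = (1.0 - g) * p_pred

mse_kalman = float(np.mean((h_hat - h)**2))
mse_raw = float(np.mean((y - h)**2))
```

Because the tap varies slowly (a close to 1), the filter averages many past pilots and its MSE falls well below the raw per-pilot observation noise; the same mechanism lets predicted channel states bridge the pilot-free stretches described in the abstract.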

  6. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder, and the log-likelihood ratios (LLRs) of these bits are modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms non-source-controlled decoding, by up to 5 dB in terms of PSNR for various reconstructed images.
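The feedback step can be sketched as follows. The weighting schedule w(SNR) below is a hypothetical placeholder (the paper derives it from decoding statistics), and the function name is illustrative; the point is only the mechanism of reinforcing or flipping the LLRs of bits the source decoder has flagged.

```python
import numpy as np

def weight_llrs(llr, known_correct, known_error, snr_db):
    """Scale the LLRs of bits flagged by the JPEG2000 error-resilience check.
    w(SNR) is an assumed schedule, not the paper's: larger weight at lower SNR."""
    w = 1.0 + 0.5 * max(0.0, 5.0 - snr_db)
    out = np.asarray(llr, dtype=float).copy()
    out[known_correct] *= w          # reinforce bits known to be correct
    out[known_error] *= -w           # flip and reinforce bits known to be wrong
    return out

llr = np.array([2.0, -1.0, 0.5])
out = weight_llrs(llr, known_correct=[0], known_error=[1], snr_db=3.0)
```

At 3 dB the assumed schedule gives w = 2, so the flagged-correct bit's LLR doubles and the flagged-error bit's LLR flips sign and doubles, steering the next sum-product iteration.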

  7. Quantum Capacity under Adversarial Quantum Noise: Arbitrarily Varying Quantum Channels

    NASA Astrophysics Data System (ADS)

    Ahlswede, Rudolf; Bjelaković, Igor; Boche, Holger; Nötzel, Janis

    2013-01-01

We investigate entanglement transmission over an unknown channel in the presence of a third party (called the adversary), which is enabled to choose the channel from a given set of memoryless but non-stationary channels without informing the legitimate sender and receiver about the particular choice that he made. This channel model is called an arbitrarily varying quantum channel (AVQC). We derive a quantum version of Ahlswede's dichotomy for classical arbitrarily varying channels. This includes a regularized formula for the common randomness-assisted capacity for entanglement transmission of an AVQC. Quite surprisingly, and in contrast to the classical analog of the problem involving the maximal and average error probability, we find that the capacity for entanglement transmission of an AVQC always equals its strong subspace transmission capacity. These results are accompanied by different notions of symmetrizability (zero-capacity conditions) as well as by conditions for an AVQC to have a capacity described by a single-letter formula. In the final part of the paper the capacity of the erasure-AVQC is computed and some light is shed on the connection between AVQCs and zero-error capacities. Additionally, we show by entirely elementary and operational arguments motivated by the theory of AVQCs that the quantum, classical, and entanglement-assisted zero-error capacities of quantum channels are generically zero and are discontinuous at every positivity point.

  8. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.

    2001-01-01

    We completed the formulation of the smoothness penalty functional this past quarter. We used a simplified procedure for estimating the statistics of the FCA solution spectral coefficients from the results of the unconstrained, low-truncation FCA (stopping criterion) solutions. During the current reporting period we have completed the calculation of GEOS-2 model-equivalent brightness temperatures for the 6.7 micron and 11 micron window channels used in the GOES imagery for all 10 cases from August 1999. These were simulated using the AER-developed Optimal Spectral Sampling (OSS) model.

  9. Modeling coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ɛ^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
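The qualitative difference between coherent and Pauli error accumulation can be illustrated with a single-qubit toy model (this is not the paper's repetition-code analysis): under repeated coherent X rotations the error amplitude adds, giving failure probability sin^2(n·ɛ), while the Pauli-twirled approximation adds probabilities instead.

```python
import numpy as np

eps, n_cycles = 0.05, 10
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.cos(eps) * I2 - 1j * np.sin(eps) * X       # coherent over-rotation exp(-i*eps*X)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # start in |0><0|
for _ in range(n_cycles):
    rho = U @ rho @ U.conj().T
p_coherent = rho[1, 1].real                       # amplitudes add: sin^2(n*eps)

p_flip = np.sin(eps) ** 2                         # per-cycle flip prob of the twirled channel
p_pauli = 0.5 * (1.0 - (1.0 - 2.0 * p_flip) ** n_cycles)   # probabilities add
```

After 10 cycles the coherent failure probability (about 0.23) exceeds the Pauli prediction (about 0.024) by nearly an order of magnitude, mirroring the abstract's warning that beyond a certain number of cycles the Pauli model underestimates logical failure.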

  10. The Error Structure of the SMAP Single and Dual Channel Soil Moisture Retrievals

    NASA Astrophysics Data System (ADS)

    Dong, Jianzhi; Crow, Wade T.; Bindlish, Rajat

    2018-01-01

Knowledge of the temporal error structure for remotely sensed surface soil moisture retrievals can improve our ability to exploit them for hydrologic and climate studies. This study employs a triple collocation analysis to investigate both the total variance and temporal autocorrelation of errors in Soil Moisture Active Passive (SMAP) products generated from two separate soil moisture retrieval algorithms, the vertically polarized brightness temperature-based single-channel algorithm (SCA-V, the current baseline SMAP algorithm) and the dual-channel algorithm (DCA). A key assumption made in SCA-V is that real-time vegetation opacity can be accurately captured using only a climatology for vegetation opacity. Results demonstrate that while SCA-V generally outperforms DCA, SCA-V can produce larger total errors when this assumption is significantly violated by interannual variability in vegetation health and biomass. Furthermore, larger autocorrelated errors in SCA-V retrievals are found in areas with relatively large vegetation opacity deviations from climatological expectations. This implies that a significant portion of the autocorrelated error in SCA-V is attributable to the violation of its vegetation opacity climatology assumption and suggests that utilizing a real (as opposed to climatological) vegetation opacity time series in the SCA-V algorithm would reduce the magnitude of autocorrelated soil moisture retrieval errors.
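The error-variance half of a triple collocation analysis can be sketched with synthetic data: given three products with independent, zero-mean errors around a common truth, each product's error variance follows from the covariance matrix. The product labels and noise levels below are illustrative, not SMAP values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
truth = rng.normal(0.25, 0.05, n)        # synthetic surface soil moisture "truth"
x = truth + rng.normal(0, 0.02, n)       # e.g. an SCA-V-like retrieval
y = truth + rng.normal(0, 0.03, n)       # e.g. a DCA-like retrieval
z = truth + rng.normal(0, 0.04, n)       # e.g. an independent third product

def tc_error_variances(x, y, z):
    """Covariance-notation triple collocation (independent, zero-mean errors assumed)."""
    c = np.cov(np.vstack([x, y, z]))
    ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex, ey, ez

ex, ey, ez = tc_error_variances(x, y, z)
```

Each expression subtracts the truth variance (estimated from the cross-covariances) from a product's total variance, recovering the injected error variances of 0.0004, 0.0009, and 0.0016; estimating the temporal autocorrelation of the errors, as the study also does, requires a lagged extension of this idea.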

  11. Identification and compensation of the temperature influences in a miniature three-axial accelerometer based on the least squares method

    NASA Astrophysics Data System (ADS)

    Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae

    2017-06-01

The paper presents a way to obtain an intelligent miniaturized three-axial accelerometric sensor, based on on-line estimation and compensation of the sensor errors generated by environmental temperature variation. Taking into account that the value of this error is a strongly nonlinear, complex function of the environmental temperature and of the acceleration exciting the sensor, its correction may not be done off-line, and it requires the presence of an additional temperature sensor. The proposed identification methodology for the error model is based on the least-squares method, which processes off-line the numerical values obtained from experimental testing of the accelerometer for different values of acceleration applied to its sensitivity axes and for different operating temperatures. A final analysis of the error level after compensation highlights the best variant for the matrix in the error model. The paper presents the results of the experimental testing of the accelerometer on all three sensitivity axes, the identification of the error models on each axis by using the least-squares method, and the validation of the obtained models with experimental values. For all three detection channels, the absolute maximum acceleration error due to environmental temperature variation was reduced by almost two orders of magnitude.
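The identification step can be sketched for one sensitivity axis with a least-squares fit of a temperature-dependent bias model; the quadratic model, its coefficients, and the noise level below are hypothetical, not the paper's identified error matrix.

```python
import numpy as np

# Synthetic calibration data: temperature-induced bias on one sensitivity axis
# (polynomial form and coefficients are illustrative assumptions)
rng = np.random.default_rng(2)
T = np.linspace(-20.0, 60.0, 17)                      # test temperatures [degC]
true_c = np.array([0.5, -0.02, 3e-4])                 # hypothetical bias coefficients
bias = true_c[0] + true_c[1] * T + true_c[2] * T**2   # bias error [m/s^2]
meas = bias + rng.normal(0.0, 0.005, T.size)          # measured error with sensor noise

# Least-squares identification of the error model coefficients
A = np.vstack([np.ones_like(T), T, T**2]).T
c_hat, *_ = np.linalg.lstsq(A, meas, rcond=None)

# On-line compensation: subtract the modeled bias at the measured temperature
residual = meas - A @ c_hat
```

After subtracting the fitted model, the residual error collapses to the sensor-noise floor, which is the mechanism behind the roughly two-orders-of-magnitude reduction reported in the abstract.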

  12. Improving AIRS Radiance Spectra in High Contrast Scenes Using MODIS

    NASA Technical Reports Server (NTRS)

    Pagano, Thomas S.; Aumann, Hartmut H.; Manning, Evan M.; Elliott, Denis A.; Broberg, Steven E.

    2015-01-01

    The Atmospheric Infrared Sounder (AIRS) on the EOS Aqua Spacecraft was launched on May 4, 2002. AIRS acquires hyperspectral infrared radiances in 2378 channels ranging in wavelength from 3.7-15.4 microns with spectral resolution of better than 1200, and spatial resolution of 13.5 km with global daily coverage. The AIRS is designed to measure temperature and water vapor profiles for improvement in weather forecast accuracy and improved understanding of climate processes. As with most instruments, the AIRS Point Spread Functions (PSFs) are not the same for all detectors. When viewing a non-uniform scene, this causes a significant radiometric error in some channels that is scene dependent and cannot be removed without knowledge of the underlying scene. The magnitude of the error depends on the combination of non-uniformity of the AIRS spatial response for a given channel and the non-uniformity of the scene, but is typically only noticeable in about 1% of the scenes and about 10% of the channels. The current solution is to avoid those channels when performing geophysical retrievals. In this effort we use data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument to provide information on the scene uniformity that is used to correct the AIRS data. For the vast majority of channels and footprints the technique works extremely well when compared to a Principal Component (PC) reconstruction of the AIRS channels. In some cases where the scene has high inhomogeneity in an irregular pattern, and in some channels, the method can actually degrade the spectrum. Most of the degraded channels appear to be slightly affected by random noise introduced in the process, but those with larger degradation may be affected by alignment errors in the AIRS relative to MODIS or uncertainties in the PSF. 
Despite these errors, the methodology shows the ability to correct AIRS radiances in non-uniform scenes under some of the worst case conditions and improves the ability to match AIRS and MODIS radiances in non-uniform scenes.

  13. A Framework of Temporal-Spatial Descriptors-Based Feature Extraction for Improved Myoelectric Pattern Recognition.

    PubMed

    Khushaba, Rami N; Al-Timemy, Ali H; Al-Ani, Ahmed; Al-Jumaily, Adel

    2017-10-01

The extraction of accurate and efficient descriptors of muscular activity plays an important role in tackling the challenging problem of myoelectric control of powered prostheses. In this paper, we present a new feature extraction framework that aims to give an enhanced representation of muscular activities by increasing the amount of information that can be extracted from individual and combined electromyogram (EMG) channels. We propose to use time-domain descriptors (TDDs) in estimating the EMG signal power spectrum characteristics, a step that preserves the computational power required for the construction of spectral features. Subsequently, TDD is used in a process that involves: 1) representing the temporal evolution of the EMG signals by progressively tracking the correlation between the TDD extracted from each analysis time window and a nonlinearly mapped version of it across the same EMG channel and 2) representing the spatial coherence between the different EMG channels, which is achieved by calculating the correlation between the TDD extracted from the differences of all possible combinations of pairs of channels and their nonlinearly mapped versions. The proposed temporal-spatial descriptors (TSDs) are validated on multiple sparse and high-density (HD) EMG data sets collected from a number of intact-limbed and amputee subjects performing a large number of hand and finger movements. Classification results showed significant reductions in the achieved error rates in comparison to other methods, with an improvement of at least 8% on average across all subjects. Additionally, the proposed TSDs performed particularly well on HD-EMG problems, with average classification errors of <5% across all subjects using window lengths of only 50 ms.

  14. The analysis to understand temporal variation and long-range transport of aerosol over Northeast-Asia Using COMS, MI

    NASA Astrophysics Data System (ADS)

    KIM, M.; Kim, J.

    2016-12-01

Numerous efforts to retrieve aerosol optical properties (AOPs) from satellite measurements have accumulated over recent decades, resulting in several qualified datasets that can be used to analyze the spatiotemporal characteristics of AOPs. However, limited instrument lifetimes restrict the temporal window over which long-term AOP variation can be analyzed. From this point of view, a single-channel algorithm, which uses a single visible channel to retrieve aerosol optical depth (AOD), has the advantage of extending the time domain of the analysis. The Korean geostationary earth orbit (GEO) satellite, the Communication, Ocean and Meteorological Satellite (COMS), carries the single-channel Meteorological Imager (MI), which can also be utilized for the retrieval of AOPs. Since GEO satellite measurements allow continuous monitoring of AOPs over Northeast Asia, we can analyze the spatiotemporal characteristics of aerosol using MI observations. In this study, we investigate the trend of AOD and discuss the impact of long-range transport of aerosol on its temporal variation. Since the year 2010, when COMS was launched, AODs over Northeast China and the Yellow Sea region show decreases of 3.02% and 2.74% per year, respectively, which are significant trends despite the short 5-year period. The decreasing behavior appears to be associated with the recent decreasing frequency of dust events over the region, but other Northeast Asian regions do not show a clear temporal change. The accuracy of the retrieved AOD contributes to the uncertainty of this trend analysis: according to the error analysis, cloud contamination and errors in bright-surface reflectance limit the accuracy of the AOD. Therefore, improvements to the cloud masking process and the surface reflectance estimation in the developed single-channel MI algorithm will be required in future work.

  15. Performance Bounds on Two Concatenated, Interleaved Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Dolinar, Samuel

    2010-01-01

A method has been developed for computing bounds on the performance of a code comprising two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n,k), where n (n > k) is the total number of code bits associated with k information bits and n − k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni,ki). The output of the inner code is transmitted over an additive-white-Gaussian-noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words.
Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of the derivation of the equations would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver. The bounds calculated by use of the method were compared with results of numerical simulations of performances of the systems to show the regions where the bounds are tight (see figure).
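A classic realization of the block interleaver mentioned above writes I codewords row-wise into an array and reads the array out column-wise; the word count and length in this sketch are illustrative.

```python
import numpy as np

def block_interleave(codewords):
    """Write I codewords row-wise, read the array out column-wise."""
    return np.asarray(codewords).T.reshape(-1)

def block_deinterleave(stream, n_words, word_len):
    """Invert the column-wise read to recover the original codewords."""
    return stream.reshape(word_len, n_words).T

words = np.arange(12).reshape(3, 4)      # I = 3 outer codewords of length n = 4
tx = block_interleave(words)             # -> [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
rx = block_deinterleave(tx, 3, 4)
```

Reading out column-wise spreads a burst of up to I consecutive channel errors across I different outer codewords, so each codeword presents at most one extra error to the outer decoder, which is what makes the burst-error statistics in the bound derivation tractable.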

  16. Matrix structure for information-driven polarimeter design

    NASA Astrophysics Data System (ADS)

    Alenin, Andrey S.

Estimating the polarization of light has been shown to have merit in a wide variety of applications between UV and LWIR wavelengths. These tasks include target identification, estimation of atmospheric aerosol properties, and biomedical applications, among others. In all of these applications, polarization sensing has been shown to assist in discrimination ability; however, due to the nature of many phenomena, it is difficult to add polarization sensing everywhere. The goal of this dissertation is to decrease the associated penalties of using polarimetry, and thereby broaden its applicability to other areas. First, the class of channeled polarimeter systems is generalized to relate the Fourier domains of applied modulations to the resulting information channels. The quality of reconstruction is maximized by virtue of using linear system manipulations rather than arithmetic derived by hand, while revealing system properties that allow for immediate performance estimation. Besides identifying optimal systems in terms of equally weighted variance (EWV), a way to redistribute the error between all the information channels is presented. The result of this development often leads to superficial changes that can improve signal-to-noise ratio (SNR) by up to a factor of three compared to existing designs in the literature. Second, the class of partial Mueller matrix polarimeters (pMMPs) is inspected with regard to their capacity to match the level of discrimination performance achieved by full systems. The concepts of structured decomposition and the reconstructables matrix are developed to provide insight into the Mueller subspace coverage of pMMPs, while yielding a pMMP basis that allows the formation of ten classes of pMMP systems. A method for evaluating such systems under a multi-objective optimization of noise resilience and space coverage is provided. An example is presented for which the number of measurements was reduced by half.
Third, the novel developments intended for channeled and partial systems are combined to form a previously undiscussed class of channeled partial Mueller matrix polarimeters (c-pMMPs). These systems leverage the gained understanding in manipulating the structure of the measurement to design modulations such that the desired pieces of information are mapped into channels with favorable reconstruction characteristics.

  17. Adaptive color halftoning for minimum perceived error using the blue noise mask

    NASA Astrophysics Data System (ADS)

    Yu, Qing; Parker, Kevin J.

    1997-04-01

Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moire patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moire patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root-mean-square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two color planes and an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.

  18. Microwave properties of a quiet sea

    NASA Technical Reports Server (NTRS)

    Stacey, J.

    1985-01-01

The microwave flux responses of a quiet sea are observed at five microwave frequencies and with both horizontal and vertical polarizations at each frequency--a simultaneous 10 channel receiving system. The measurements are taken from Earth orbit with an articulating antenna. The 10 channel responses are taken simultaneously since they share a common articulating collector with a multifrequency feed. The plotted flux responses show: (1) the effects of the relative, on-axis gain of the collecting aperture for each frequency; (2) the effects of polarization rotation in the output responses of the receiver when the collecting aperture mechanically rotates about a feed that is fixed; (3) the difference between the flux magnitudes for the horizontal and vertical channels, at each of the five frequencies and for each pointing position, over a 44 degree scan angle; and (4) the RMS value of the clutter--as reckoned over the interval of a full swath for each of the 10 channels. The clutter is derived from the standard error of estimate of the plotted swath response for each channel. The expected value of the background temperature is computed for each of the three quiet seas. The background temperature includes contributions from the cosmic background, the downwelling path, the sea surface, and the upwelling path.

  19. High-precision approach to localization scheme of visible light communication based on artificial neural networks and modified genetic algorithms

    NASA Astrophysics Data System (ADS)

    Guan, Weipeng; Wu, Yuxiang; Xie, Canyu; Chen, Hao; Cai, Ye; Chen, Yingcong

    2017-10-01

An indoor positioning algorithm based on visible light communication (VLC) is presented. This algorithm is used to calculate a three-dimensional (3-D) coordinate in an indoor optical wireless environment, which includes sufficient orders of multipath reflections from the reflecting surfaces of the room. Leveraging the global optimization ability of the genetic algorithm (GA), an innovative framework for 3-D position estimation based on a modified genetic algorithm is proposed. Unlike other techniques using VLC for positioning, the proposed system can achieve indoor 3-D localization without making assumptions about the height or acquiring the orientation angle of the mobile terminal. Simulation results show that an average localization error of less than 1.02 cm can be achieved. In addition, in most VLC positioning systems the effect of reflection is neglected, which limits performance: the results become less accurate in real scenarios, and the positioning errors at the corners are relatively larger than elsewhere. We therefore take the first-order reflection into consideration and use an artificial neural network to model the nonlinear channel. The studies show that with nonlinear matching of the direct and reflected channels, the average positioning error at the four corners decreases from 11.94 cm to 0.95 cm. The algorithm thus emerges as an effective and practical method for indoor localization, outperforming other existing indoor wireless localization approaches.

  20. Bayesian sparse channel estimation

    NASA Astrophysics Data System (ADS)

    Chen, Chulong; Zoltowski, Michael D.

    2012-05-01

    In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high data rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially for wideband and ultra-wideband systems. In order to exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make the compressed channel estimation more feasible for practical applications, it is investigated from a perspective of Bayesian learning. Under the Bayesian learning framework, the large-scale compressed sensing problem, as well as large time delay for the estimation of the doubly selective channel over multiple consecutive OFDM symbols, can be avoided. Simulation studies show a significant improvement in channel estimation MSE and less computing time compared to the conventional compressed channel estimation techniques.
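One of the conventional compressed channel estimation baselines that Bayesian approaches are typically compared against can be sketched with orthogonal matching pursuit (OMP) on a sparse multipath channel; the tap count, pilot count, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_taps, n_pilots, k = 64, 32, 3            # sparse channel: 3 active taps out of 64
h = np.zeros(n_taps)
support_true = rng.choice(n_taps, k, replace=False)
h[support_true] = rng.uniform(0.5, 1.5, k) * rng.choice([-1.0, 1.0], k)
A = rng.normal(size=(n_pilots, n_taps)) / np.sqrt(n_pilots)   # pilot measurement matrix
y = A @ h + 0.01 * rng.normal(size=n_pilots)                  # noisy pilot observations

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily grow the tap support, then least-squares fit."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))       # most correlated column
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s                           # residual after projection
    h_hat = np.zeros(A.shape[1])
    h_hat[support] = x_s
    return h_hat

h_hat = omp(A, y, k)
rel_err = np.linalg.norm(h_hat - h) / np.linalg.norm(h)
```

With only 32 pilot measurements, far fewer than the 64 unknown taps, the sparse structure lets OMP recover the channel to within a few percent; the article's Bayesian learning framework targets the same pilot reduction while avoiding the large-scale optimization such greedy or convex solvers entail.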

  1. Speaker diarization system on the 2007 NIST rich transcription meeting recognition evaluation

    NASA Astrophysics Data System (ADS)

    Sun, Hanwu; Nwe, Tin Lay; Koh, Eugene Chin Wei; Bin, Ma; Li, Haizhou

    2007-09-01

    This paper presents a speaker diarization system developed at the Institute for Infocomm Research (I2R) for the NIST Rich Transcription 2007 (RT-07) evaluation task. We describe in detail our primary approaches to speaker diarization under the Multiple Distant Microphones (MDM) condition in the conference room scenario. Our proposed system consists of six modules: 1) a normalized least-mean-square (NLMS) adaptive filter for speaker direction estimation via Time Difference of Arrival (TDOA); 2) initial speaker clustering via a two-stage TDOA histogram distribution quantization approach; 3) multiple-microphone speaker data alignment via GCC-PHAT Time Delay Estimation (TDE) among all the distant microphone channel signals; 4) a speaker clustering algorithm based on GMM modeling; 5) non-speech removal via a speech/non-speech verification mechanism; and 6) silence removal via a "Double-Layer Windowing" (DLW) method. We achieve an error rate of 31.02% on the 2006 Spring (RT-06s) MDM evaluation task and a competitive overall error rate of 15.32% on the NIST Rich Transcription 2007 (RT-07) MDM evaluation task.
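    The GCC-PHAT time-delay estimation used for microphone alignment can be sketched in a few lines of NumPy. This is a generic textbook implementation, not the authors' system, and the signals and the 37-sample delay below are synthetic.

```python
import numpy as np

def gcc_phat(sig, ref, fs=1.0):
    """Estimate the delay of `sig` relative to `ref` (in seconds) via GCC-PHAT."""
    n = sig.size + ref.size                 # zero-pad to avoid circular wrap
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12          # phase transform: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    delay = int(np.argmax(np.abs(cc))) - max_shift
    return delay / fs

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
delayed = np.concatenate((np.zeros(37), x))[:4096]  # x delayed by 37 samples
print(gcc_phat(delayed, x))
```

The phase transform whitens the cross-spectrum, which sharpens the correlation peak and makes the estimate robust to reverberation, the property that motivates its use across distant microphone channels.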

  2. Security proof of a three-state quantum-key-distribution protocol without rotational symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fung, C.-H.F.; Lo, H.-K.

    2006-10-15

    Standard security proofs of quantum-key-distribution (QKD) protocols often rely on symmetry arguments. In this paper, we prove the security of a three-state protocol that does not possess rotational symmetry. The three-state QKD protocol we consider involves three qubit states, where the first two states |0_z> and |1_z> can contribute to key generation, and the third state |+> = (|0_z> + |1_z>)/√2 is for channel estimation. This protocol has been proposed and implemented experimentally in some frequency-based QKD systems where the three states can be prepared easily. Thus, building on the security of this three-state protocol, we prove that these QKD schemes are, in fact, unconditionally secure against any attacks allowed by quantum mechanics. The main task in our proof is to upper bound the phase error rate of the qubits given the observed bit error rates. Unconditional security can then be proved not only for the ideal case of a single-photon source and perfect detectors, but also for the realistic case of a phase-randomized weak coherent light source and imperfect threshold detectors. Our upper bound on the phase error rate is independent of the loss in the channel. Also, we compare the three-state protocol with the Bennett-Brassard 1984 (BB84) protocol. For the single-photon source case, our result proves that the BB84 protocol strictly tolerates a higher quantum bit error rate than the three-state protocol, while for the coherent-source case, the BB84 protocol achieves a higher key generation rate and secure distance than the three-state protocol when a decoy-state method is used.

  3. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels, each with independent pulse digitization and a FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and a low error rate.

  4. Improving TCP Network Performance by Detecting and Reacting to Packet Reordering

    NASA Technical Reports Server (NTRS)

    Kruse, Hans; Ostermann, Shawn; Allman, Mark

    2003-01-01

    There are many factors governing the performance of TCP-based applications traversing satellite channels. The end-to-end performance of TCP is known to be degraded by the reordering, delay, noise and asymmetry inherent in geosynchronous systems. This result has been largely based on experiments that evaluate the performance of TCP in single flow tests. While single flow tests are useful for deriving information on the theoretical behavior of TCP and allow for easy diagnosis of problems, they do not represent a broad range of realistic situations and therefore cannot be used to authoritatively comment on performance issues. The experiments discussed in this report test TCP's performance in a more dynamic environment with competing traffic flows from hundreds of TCP connections running simultaneously across the satellite channel. Another aspect we investigate is TCP's reaction to bit errors on satellite channels. TCP interprets loss as a sign of network congestion. This causes TCP to reduce its transmission rate, leading to reduced performance when loss is due to corruption. We allowed the bit error rate on our satellite channel to vary widely and tested the performance of TCP as a function of these bit error rates. Our results show that the average performance of TCP on satellite channels is good even under loss conditions as high as a bit error rate of 10^-5.
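    A back-of-envelope model shows why a bit error rate of 10^-5 is severe for TCP. Assuming independent bit errors (a simplification; the report notes satellite errors can behave differently), a packet is corrupted unless every bit survives. The 1500-byte segment size below is a common default, not a figure from the report.

```python
# Probability that a packet of `packet_bits` bits is corrupted on a channel
# with independent bit errors at rate `ber`: the packet survives only if
# every single bit survives.
def packet_error_prob(ber, packet_bits):
    return 1.0 - (1.0 - ber) ** packet_bits

# A 1500-byte TCP segment at the bit error rates discussed above.
for ber in (1e-7, 1e-6, 1e-5):
    p = packet_error_prob(ber, 1500 * 8)
    print(f"BER={ber:.0e}: P(segment corrupted) = {p:.4f}")
```

At BER 10^-5 roughly one segment in nine is lost to corruption, and every such loss is misread by TCP as congestion.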

  5. Comparative study of signalling methods for high-speed backplane transceiver

    NASA Astrophysics Data System (ADS)

    Wu, Kejun

    2017-11-01

    A combined analysis of transient simulation and statistical methods is proposed for a comparative study of signalling methods applied to high-speed backplane transceivers. This method enables fast and accurate signal-to-noise ratio and symbol error rate estimation of a serial link based on a four-dimension design space, including channel characteristics, noise scenarios, equalisation schemes, and signalling methods. The proposed combined analysis method chooses an efficient sampling size for performance evaluation. A comparative study of non-return-to-zero (NRZ), PAM-4, and four-phase shifted sinusoid symbol (PSS-4) signalling using parameterised behaviour-level simulation shows that PAM-4 and PSS-4 have substantial advantages over conventional NRZ in most cases. A comparison between PAM-4 and PSS-4 shows that PAM-4 suffers significant bit-error-rate degradation as the noise level increases.
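    For intuition about the NRZ/PAM-4 comparison, the textbook AWGN symbol-error-rate formulas (not the paper's combined transient/statistical method) already expose the trade-off: at equal per-symbol SNR, PAM-4 packs four levels into the same amplitude range and pays an error-rate penalty.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pam_ser(snr_db, m):
    """Symbol error rate of M-PAM over AWGN at per-symbol SNR Es/N0 (dB).
    Standard result: SER = 2(1 - 1/M) Q(sqrt(6 SNR / (M^2 - 1)))."""
    snr = 10 ** (snr_db / 10)
    return 2 * (1 - 1 / m) * q_func(math.sqrt(6 * snr / (m * m - 1)))

# NRZ is 2-PAM; compare against PAM-4 at the same symbol SNR.
for snr_db in (10, 14, 18):
    print(f"{snr_db} dB: NRZ SER={pam_ser(snr_db, 2):.2e}, "
          f"PAM-4 SER={pam_ser(snr_db, 4):.2e}")
```

The gap widens as noise grows, which is consistent with the paper's observation that PAM-4 degrades significantly at higher noise levels; PAM-4's advantage comes instead from halving the symbol rate on band-limited channels.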

  6. Observer-based output consensus of a class of heterogeneous multi-agent systems with unmatched disturbances

    NASA Astrophysics Data System (ADS)

    Zhang, Jiancheng; Zhu, Fanglai

    2018-03-01

    In this paper, the output consensus of a class of linear heterogeneous multi-agent systems with unmatched disturbances is considered. Firstly, based on the relative output information among neighboring agents, we propose an asymptotic sliding-mode-based consensus control scheme, under which the output consensus error converges to zero by removing the disturbances from the output channels. Secondly, in order to reach the consensus goal, we design a novel high-order unknown input observer for each agent. It can estimate not only each agent's states and disturbances, but also the disturbances' high-order derivatives, which are required in the aforementioned control scheme. The observer-based consensus control laws and the convergence analysis of the consensus error dynamics are given. Finally, a simulation example is provided to verify the validity of our methods.

  7. Parameter Estimation with Almost No Public Communication for Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano

    2018-06-01

    One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel as, for example, the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint as the whole raw keys can be used for both parameter estimation and secret key generation, without compromising the security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in a MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.

  8. An investigation of error characteristics and coding performance

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1992-01-01

    The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink are studied. The EOS transmits picture frame data to the ground via the Telemetry Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian which may result in non-random errors output by the demodulator. That is, the downlink channel cannot be modeled by a simple memoryless Gaussian-noise channel. From previous experience, it is believed that those errors are bursty. The research proceeded by developing a computer based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN was written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.

  9. Technique for estimating the 2- to 500-year flood discharges on unregulated streams in rural Missouri

    USGS Publications Warehouse

    Alexander, Terry W.; Wilson, Gary L.

    1995-01-01

    A generalized least-squares regression technique was used to relate the 2- to 500-year flood discharges from 278 selected streamflow-gaging stations to statistically significant basin characteristics. The regression relations (estimating equations) were defined for three hydrologic regions (I, II, and III) in rural Missouri. Ordinary least-squares regression analyses indicate that drainage area (Regions I, II, and III) and main-channel slope (Regions I and II) are the only basin characteristics needed for computing the 2- to 500-year design-flood discharges at gaged or ungaged stream locations. The resulting generalized least-squares regression equations provide a technique for estimating the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood discharges on unregulated streams in rural Missouri. The regression equations for Regions I and II were developed from streamflow-gaging stations with drainage areas ranging from 0.13 to 11,500 square miles and 0.13 to 14,000 square miles, and main-channel slopes ranging from 1.35 to 150 feet per mile and 1.20 to 279 feet per mile. The regression equations for Region III were developed from streamflow-gaging stations with drainage areas ranging from 0.48 to 1,040 square miles. Standard errors of estimate for the generalized least-squares regression equations in Regions I, II, and III ranged from 30 to 49 percent.
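    Regional regression equations of this kind typically take the power-law form Q_T = aA^bS^c, which is linear in log space and can be fitted by least squares. The sketch below uses synthetic gage data with an invented exponent set, not the report's Missouri dataset or its generalized least-squares weighting.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "gaging stations": drainage area (mi^2) and main-channel slope
# (ft/mi) over wide ranges, generated from a known power law plus scatter.
area = 10 ** rng.uniform(0, 3, 40)
slope = 10 ** rng.uniform(0.2, 2, 40)
q100 = 80.0 * area ** 0.7 * slope ** 0.3 * 10 ** (0.05 * rng.standard_normal(40))

# Q = a * A^b * S^c  =>  log Q = log a + b log A + c log S (ordinary LS fit).
X = np.column_stack([np.ones(area.size), np.log10(area), np.log10(slope)])
fit, *_ = np.linalg.lstsq(X, np.log10(q100), rcond=None)
log_a, b, c = fit
print(f"Q100 ~ {10 ** log_a:.0f} * A^{b:.2f} * S^{c:.2f}")
```

Fitting in log space also explains why the report's standard errors are quoted in percent: a constant error in log Q corresponds to a constant relative error in Q.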

  10. Layered video transmission over multirate DS-CDMA wireless systems

    NASA Astrophysics Data System (ADS)

    Kondi, Lisimachos P.; Srinivasan, Deepika; Pados, Dimitris A.; Batalama, Stella N.

    2003-05-01

    In this paper, we consider the transmission of video over wireless direct-sequence code-division multiple access (DS-CDMA) channels. A layered (scalable) video source codec is used and each layer is transmitted over a different CDMA channel. Spreading codes with different lengths are allowed for each CDMA channel (multirate CDMA). Thus, a different number of chips per bit can be used for the transmission of each scalable layer. For a given fixed energy value per chip and chip rate, the selection of a spreading code length affects the transmitted energy per bit and bit rate for each scalable layer. An MPEG-4 source encoder is used to provide a two-layer SNR scalable bitstream. Each of the two layers is channel-coded using Rate-Compatible Punctured Convolutional (RCPC) codes. Then, the data are interleaved, spread, carrier-modulated and transmitted over the wireless channel. A multipath Rayleigh fading channel is assumed. At the other end, we assume the presence of an antenna array receiver. After carrier demodulation, multiple-access-interference suppressing despreading is performed using space-time auxiliary vector (AV) filtering. The choice of the AV receiver is dictated by realistic channel fading rates that limit the data record available for receiver adaptation and redesign. Indeed, AV filter short-data-record estimators have been shown to exhibit superior bit-error-rate performance in comparison with LMS, RLS, SMI, or 'multistage nested Wiener' adaptive filter implementations. Our experimental results demonstrate the effectiveness of multirate DS-CDMA systems for wireless video transmission.

  11. Generalized Split-Window Algorithm for Estimate of Land Surface Temperature from Chinese Geostationary FengYun Meteorological Satellite (FY-2C) Data

    PubMed Central

    Tang, Bohui; Bi, Yuyun; Li, Zhao-Liang; Xia, Jun

    2008-01-01

    On the basis of the radiative transfer theory, this paper addressed the estimate of Land Surface Temperature (LST) from the Chinese first operational geostationary meteorological satellite FengYun-2C (FY-2C) data in two thermal infrared channels (IR1, 10.3-11.3 μm and IR2, 11.5-12.5 μm), using the Generalized Split-Window (GSW) algorithm proposed by Wan and Dozier (1996). The coefficients in the GSW algorithm corresponding to a series of overlapping ranges of the mean emissivity, the atmospheric Water Vapor Content (WVC), and the LST were derived using a statistical regression method from the numerical values simulated with an accurate atmospheric radiative transfer model MODTRAN 4 over a wide range of atmospheric and surface conditions. The simulation analysis showed that the LST could be estimated by the GSW algorithm with the Root Mean Square Error (RMSE) less than 1 K for the sub-ranges with the Viewing Zenith Angle (VZA) less than 30° or for the sub-ranges with VZA less than 60° and the atmospheric WVC less than 3.5 g/cm2 provided that the Land Surface Emissivities (LSEs) are known. In order to determine the range for the optimum coefficients of the GSW algorithm, the LSEs could be derived from the data in MODIS channels 31 and 32 provided by MODIS/Terra LST product MOD11B1, or be estimated either according to the land surface classification or using the method proposed by Jiang et al. (2006); and the WVC could be obtained from MODIS total precipitable water product MOD05, or be retrieved using Li et al.'s method (2003). The sensitivity and error analyses in terms of the uncertainty of the LSE and WVC as well as the instrumental noise were performed. In addition, in order to compare the different formulations of the split-window algorithms, several recently proposed split-window algorithms were used to estimate the LST with the same simulated FY-2C data. The result of the intercomparison showed that most of the algorithms give comparable results.
PMID:27879744

  12. Generalized Split-Window Algorithm for Estimate of Land Surface Temperature from Chinese Geostationary FengYun Meteorological Satellite (FY-2C) Data.

    PubMed

    Tang, Bohui; Bi, Yuyun; Li, Zhao-Liang; Xia, Jun

    2008-02-14

    On the basis of the radiative transfer theory, this paper addressed the estimate of Land Surface Temperature (LST) from the Chinese first operational geostationary meteorological satellite FengYun-2C (FY-2C) data in two thermal infrared channels (IR1, 10.3-11.3 μm and IR2, 11.5-12.5 μm), using the Generalized Split-Window (GSW) algorithm proposed by Wan and Dozier (1996). The coefficients in the GSW algorithm corresponding to a series of overlapping ranges of the mean emissivity, the atmospheric Water Vapor Content (WVC), and the LST were derived using a statistical regression method from the numerical values simulated with an accurate atmospheric radiative transfer model MODTRAN 4 over a wide range of atmospheric and surface conditions. The simulation analysis showed that the LST could be estimated by the GSW algorithm with the Root Mean Square Error (RMSE) less than 1 K for the sub-ranges with the Viewing Zenith Angle (VZA) less than 30° or for the sub-ranges with VZA less than 60° and the atmospheric WVC less than 3.5 g/cm² provided that the Land Surface Emissivities (LSEs) are known. In order to determine the range for the optimum coefficients of the GSW algorithm, the LSEs could be derived from the data in MODIS channels 31 and 32 provided by MODIS/Terra LST product MOD11B1, or be estimated either according to the land surface classification or using the method proposed by Jiang et al. (2006); and the WVC could be obtained from MODIS total precipitable water product MOD05, or be retrieved using Li et al.'s method (2003). The sensitivity and error analyses in terms of the uncertainty of the LSE and WVC as well as the instrumental noise were performed. In addition, in order to compare the different formulations of the split-window algorithms, several recently proposed split-window algorithms were used to estimate the LST with the same simulated FY-2C data. The result of the intercomparison showed that most of the algorithms give comparable results.
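    The GSW algorithm combines the two brightness temperatures with emissivity-dependent coefficients. The sketch below uses the standard Wan-Dozier functional form, but the coefficients are made up for illustration; the actual FY-2C coefficients are regression-derived per sub-range of emissivity, WVC, and VZA and are not reproduced here.

```python
def gsw_lst(t1, t2, e1, e2, coef):
    """Generalized split-window: two thermal-channel brightness
    temperatures (K) and the channel emissivities -> LST (K)."""
    c0, a1, a2, a3, b1, b2, b3 = coef
    eps = 0.5 * (e1 + e2)          # mean emissivity
    d_eps = e1 - e2                # emissivity difference
    p = a1 + a2 * (1.0 - eps) / eps + a3 * d_eps / eps ** 2
    m = b1 + b2 * (1.0 - eps) / eps + b3 * d_eps / eps ** 2
    # Mean temperature term plus split-window (difference) correction.
    return c0 + p * 0.5 * (t1 + t2) + m * 0.5 * (t1 - t2)

# Made-up coefficients of plausible magnitude -- NOT the fitted FY-2C values.
coef = (-0.4, 1.0, 0.17, -0.27, 4.9, 15.6, -33.2)
lst = gsw_lst(t1=295.0, t2=293.5, e1=0.97, e2=0.96, coef=coef)
print(f"{lst:.2f} K")
```

The channel difference t1 - t2 carries the atmospheric water-vapor correction, which is why the coefficients must be re-fitted per WVC sub-range.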

  13. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of a pass-through measurement is smoothed and distorted due to convolution of the magnetization with the magnetometer sensor response. Although several studies have restored high-resolution paleomagnetic signals through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization and successfully restores fine-scale magnetization variations, including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimate and can be resolved using the new algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent results and estimates of the error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
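    The effect of deconvolving a smoothed pass-through measurement can be illustrated with a Tikhonov-regularized inverse, a much simpler stand-in for the paper's ABIC-minimizing estimator; the signal, Gaussian sensor response, and noise level below are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200

# "True" magnetization: a smooth record containing a sharp excursion.
true = np.sin(np.linspace(0, 3 * np.pi, n))
true[90:100] -= 2.0

# Gaussian sensor response (width chosen arbitrarily for this sketch).
taps = np.arange(-15, 16)
resp = np.exp(-0.5 * (taps / 5.0) ** 2)
resp /= resp.sum()

# Convolution expressed as a banded matrix: measured = A @ true + noise.
A = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 15), min(n, i + 16)):
        A[i, j] = resp[i - j + 15]
measured = A @ true + 0.001 * rng.standard_normal(n)

# Tikhonov deconvolution with a second-difference smoothness penalty.
D = np.diff(np.eye(n), n=2, axis=0)
lam = 1e-4
est = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ measured)

err_raw = np.abs(measured - true).mean()
err_dec = np.abs(est - true).mean()
print(f"mean abs error: raw {err_raw:.3f}, deconvolved {err_dec:.3f}")
```

The regularization parameter controls the trade-off between noise amplification and sharpness; the ABIC framework in the paper effectively selects that trade-off from the data rather than by hand.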

  14. Quantum subsystems: Exploring the complementarity of quantum privacy and error correction

    NASA Astrophysics Data System (ADS)

    Jochym-O'Connor, Tomas; Kribs, David W.; Laflamme, Raymond; Plosker, Sarah

    2014-09-01

    This paper addresses and expands on the contents of the recent Letter [Phys. Rev. Lett. 111, 030502 (2013), 10.1103/PhysRevLett.111.030502] discussing private quantum subsystems. Here we prove several previously presented results, including a condition for a given random unitary channel to not have a private subspace (although this does not mean that private communication cannot occur, as was previously demonstrated via private subsystems) and algebraic conditions that characterize when a general quantum subsystem or subspace code is private for a quantum channel. These conditions can be regarded as the private analog of the Knill-Laflamme conditions for quantum error correction, and we explore how the conditions simplify in some special cases. The bridge between quantum cryptography and quantum error correction provided by complementary quantum channels motivates the study of a new, more general definition of quantum error-correcting code, and we initiate this study here. We also consider the concept of complementarity for the general notion of a private quantum subsystem.

  15. Superdense coding interleaved with forward error correction

    DOE PAGES

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

    Superdense coding promises increased classical capacity and communication security, but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance of near-term demonstrations of superdense coding.

  16. Extraction of tidal channel networks from airborne scanning laser altimetry

    NASA Astrophysics Data System (ADS)

    Mason, David C.; Scott, Tania R.; Wang, Hai-Jing

    Tidal channel networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. This paper describes a semi-automatic technique developed to extract networks from high-resolution LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, then a high-level processing stage improves the network using domain knowledge. The approach adopted at low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher level processing includes a channel repair mechanism. The algorithm may be extended to extract networks from aerial photographs as well as LiDAR data. Its performance is illustrated using LiDAR data of two study sites, the River Ems, Germany and the Venice Lagoon. For the River Ems data, the error of omission for the automatic channel extractor is 26%, partly because numerous small channels are lost because they fall below the edge threshold, though these are less than 10 cm deep and unlikely to be hydraulically significant. The error of commission is lower, at 11%. For the Venice Lagoon data, the error of omission is 14%, but the error of commission is 42%, due partly to the difficulty of interpreting channels in these natural scenes.
As a benchmark, previous work has shown that this type of algorithm specifically designed for extracting tidal networks from LiDAR data is able to achieve substantially improved results compared with those obtained using standard algorithms for drainage network extraction from Digital Terrain Models.

  17. Automated biodosimetry using digital image analysis of fluorescence in situ hybridization specimens.

    PubMed

    Castleman, K R; Schulze, M; Wu, Q

    1997-11-01

    Fluorescence in situ hybridization (FISH) of metaphase chromosome spreads is valuable for monitoring the radiation dose to circulating lymphocytes. At low dose levels, the number of cells that must be examined to estimate aberration frequencies is quite large. An automated microscope that can perform this analysis autonomously on suitably prepared specimens promises to make practical the large-scale studies that will be required for biodosimetry in the future. This paper describes such an instrument that is currently under development. We use metaphase specimens in which the five largest chromosomes have been hybridized with different-colored whole-chromosome painting probes. An automated multiband fluorescence microscope locates the spreads and counts the number of chromosome components of each color. Digital image analysis is used to locate and isolate the cells, count chromosome components, and estimate the proportions of abnormal cells. Cells exhibiting more than two chromosomal fragments in any color correspond to a clastogenic event. These automatically derived counts are corrected for statistical bias and used to estimate the overall rate of chromosome breakage. Overlap of fluorophore emission spectra prohibits isolation of the different chromosomes into separate color channels. Image processing effectively isolates each fluorophore to a single monochrome image, simplifying the task of counting chromosome fragments and reducing the error in the algorithm. Using proportion estimation, we remove the bias introduced by counting errors, leaving accuracy restricted by sample size considerations alone.
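    The bias-removal step can be illustrated with a two-class mixture model: if the counter's misclassification rates are known, the observed class proportions are a linear mixture of the true ones and can be inverted. All rates below are invented for illustration and are not the instrument's measured error rates.

```python
import numpy as np

# Column-stochastic confusion matrix (invented rates):
# M[i, j] = P(cell counted as class i | true class j),
# classes: 0 = normal, 1 = aberrant.
M = np.array([[0.95, 0.10],
              [0.05, 0.90]])

true = np.array([0.98, 0.02])            # true cell proportions
observed = M @ true                       # what the counter reports on average
unbiased = np.linalg.solve(M, observed)   # invert the mixture to remove bias
print(unbiased)
```

After this correction the remaining uncertainty comes only from the finite number of cells scored, which is the sample-size limit the abstract refers to.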

  18. Precision and Radiosonde Validation of Satellite Gridpoint Temperature Anomalies. Part II: A Tropospheric Retrieval and Trends during 1979-90.

    NASA Astrophysics Data System (ADS)

    Spencer, Roy W.; Christy, John R.

    1992-08-01

    TIROS-N satellite Microwave Sounding Unit (MSU) channel 2 data from different view angles across the MSU scan swath are combined to remove the influence of the lower stratosphere and much of the upper troposphere on the measured brightness temperatures. The retrieval provides a sharper averaging kernel than the raw channel 2 weighting function, with a peak lowered from 50 kPa to 70 kPa and with only slightly more surface influence than raw channel 2. Monthly 2.5° gridpoint anomalies of this tropospheric retrieval compared between simultaneously operating satellites indicate close agreement, 0.15°C in the tropics to around 0.30°C over much of the higher latitudes. The agreement is not as close as with raw channel 2 anomalies because synoptic-scale temperature gradient information across the 2000-km swath of the MSU is lost in the retrieval procedure and because the retrieval involves the magnification of a small difference between two large numbers. Single gridpoint monthly anomaly correlations between the satellite measurements and the radiosonde calculations range from around 0.95 at high latitudes to below 0.8 in the tropical west Pacific, with standard errors of estimate of 0.16°C at Guam to around 0.50°C at high-latitude continental stations. Calculation of radiosonde temperature with a static weighting function instead of the radiative transfer equation degrades the standard errors by an average of less than 0.04°C. Of various standard tropospheric layers, the channel 2 retrieval anomalies correlate best with radiosonde 100-50- or 100-40-kPa-thickness anomalies. A comparison between global and hemispheric anomalies computed for raw channel 2 data versus the tropospheric retrieval shows a correction in the 1979-90 time series for the volcano-induced stratospheric warming of 1982-83, which was independently observed by MSU channel 4.
This correction leads to a slightly greater tropospheric warming trend in the 12-year time series (1979-90) for the tropospheric retrieval [0.039°C (±0.03°C) per decade] than for channel 2 alone [0.022°C (±0.02°C) per decade].

  19. Communications protocol

    NASA Technical Reports Server (NTRS)

    Zhou, Xiaoming (Inventor); Baras, John S. (Inventor)

    2010-01-01

    The present invention relates to an improved communications protocol which increases the efficiency of transmission in return channels on a multi-channel slotted Aloha system by incorporating advanced error correction algorithms, selective retransmission protocols and the use of reserved channels to satisfy the retransmission requests.

  20. Radiometric analysis of the longwave infrared channel of the Thematic Mapper on LANDSAT 4 and 5

    NASA Technical Reports Server (NTRS)

    Schott, John R.; Volchok, William J.; Biegel, Joseph D.

    1986-01-01

    The first objective was to evaluate the postlaunch radiometric calibration of the LANDSAT Thematic Mapper (TM) band 6 data. The second objective was to determine to what extent surface temperatures could be computed from the TM band 6 data using atmospheric propagation models. To accomplish this, ground truth data were compared to a single TM-4 band 6 data set. This comparison indicated satisfactory agreement over a narrow temperature range. The atmospheric propagation model (modified LOWTRAN 5A) was used to predict surface temperature values based on the radiance at the spacecraft. The aircraft data were calibrated using a multi-altitude profile calibration technique which had been extensively tested in previous studies. This aircraft calibration permitted measurement of surface temperatures based on the radiance reaching the aircraft. When these temperature values are evaluated, an error in the satellite's ability to predict surface temperatures can be estimated. This study indicated that by carefully accounting for various sensor calibration and atmospheric propagation effects, the expected error (1 standard deviation) in surface temperature would be 0.9 K. This assumes no error in surface emissivity and no sampling error due to target location. These results indicate that the satellite calibration is within nominal limits to within this study's ability to measure error.

  1. Gaussian Hypothesis Testing and Quantum Illumination.

    PubMed

    Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario

    2017-09-22

    Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.

  2. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

    The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to the unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors since it requires the joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are firstly estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed well independent of position errors. Finally, position errors are estimated based on the Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence, which occurs in the conventional method, can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.

  3. GOCI Yonsei aerosol retrieval version 2 products: an improved algorithm and error analysis with uncertainty estimation from 5-year validation over East Asia

    NASA Astrophysics Data System (ADS)

    Choi, Myungje; Kim, Jhoon; Lee, Jaehwa; Kim, Mijin; Park, Young-Je; Holben, Brent; Eck, Thomas F.; Li, Zhengqiang; Song, Chul H.

    2018-01-01

    The Geostationary Ocean Color Imager (GOCI) Yonsei aerosol retrieval (YAER) version 1 algorithm was developed to retrieve hourly aerosol optical depth at 550 nm (AOD) and other subsidiary aerosol optical properties over East Asia. The GOCI YAER AOD had accuracy comparable to ground-based and other satellite-based observations but still had errors because of uncertainties in surface reflectance and simple cloud masking. In addition, near-real-time (NRT) processing was not possible because a monthly database for each year encompassing the day of retrieval was required for the determination of surface reflectance. This study describes the improved GOCI YAER algorithm version 2 (V2) for NRT processing with improved accuracy based on updates to the cloud-masking and surface-reflectance calculations using a multi-year Rayleigh-corrected reflectance and wind speed database, and inversion channels for surface conditions. The improved GOCI AOD τG is closer to that of the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) AOD than was the case for AOD from the YAER V1 algorithm. The V2 τG has a lower median bias and higher ratio within the MODIS expected error range (0.60 for land and 0.71 for ocean) compared with V1 (0.49 for land and 0.62 for ocean) in a validation test against Aerosol Robotic Network (AERONET) AOD τA from 2011 to 2016. A validation using the Sun-Sky Radiometer Observation Network (SONET) over China shows similar results. The bias of the error (τG - τA) lies between -0.1 and 0.1 and is a function of AERONET AOD and Ångström exponent (AE), scattering angle, normalized difference vegetation index (NDVI), cloud fraction and homogeneity of retrieved AOD, and observation time, month, and year. In addition, the diagnostic and prognostic expected error (PEE) of τG are estimated. 
The estimated PEE of GOCI V2 AOD is well correlated with the actual error over East Asia, and the GOCI V2 AOD over South Korea has a higher ratio within PEE than that over China and Japan.
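    The "ratio within expected error" statistic used in such validations can be sketched as follows. This is an illustrative computation, not the GOCI team's code; the envelope half-width a + b·τA below uses the MODIS-style constants a = 0.05 and b = 0.15 as an assumed example, and the matchup values are made up.

    ```python
    import numpy as np

    def fraction_within_ee(tau_sat, tau_aeronet, a=0.05, b=0.15):
        """Fraction of satellite AOD retrievals falling inside the
        expected-error envelope tau_A +/- (a + b * tau_A)."""
        tau_sat = np.asarray(tau_sat, dtype=float)
        tau_a = np.asarray(tau_aeronet, dtype=float)
        ee = a + b * tau_a                       # envelope half-width per matchup
        inside = np.abs(tau_sat - tau_a) <= ee   # boolean per matchup
        return inside.mean()

    # toy matchups: satellite retrievals vs. ground-truth (AERONET) AOD
    tau_a   = np.array([0.10, 0.30, 0.50, 0.80])
    tau_sat = np.array([0.12, 0.28, 0.70, 0.85])
    print(fraction_within_ee(tau_sat, tau_a))    # 3 of 4 matchups inside: 0.75
    ```

    Comparing this fraction across algorithm versions (as the abstract does with 0.60 vs. 0.49 over land) gives a single scalar summary of retrieval quality.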

  4. SVM-Based Spectral Analysis for Heart Rate from Multi-Channel WPPG Sensor Signals.

    PubMed

    Xiong, Jiping; Cai, Lisang; Wang, Fei; He, Xiaowei

    2017-03-03

    Although wrist-type photoplethysmographic (hereafter referred to as WPPG) sensor signals can measure heart rate quite conveniently, the subjects' hand movements can cause strong motion artifacts that heavily contaminate the WPPG signals. Hence, it is challenging to accurately estimate heart rate from WPPG signals during intense physical activities. The WPPG method has attracted increasing attention thanks to the popularity of wrist-worn wearable devices. In this paper, a mixed approach called Mix-SVM is proposed; it uses multi-channel WPPG sensor signals and simultaneous acceleration signals to measure heart rate. Firstly, we combine principal component analysis and adaptive filtering to remove part of the motion artifacts. Due to the strong correlation between motion artifacts and acceleration signals, the further denoising problem is regarded as a sparse signal reconstruction problem. Then, we use a spectrum subtraction method to eliminate motion artifacts effectively. Finally, the spectral peak corresponding to heart rate is sought by an SVM-based spectral analysis method. On the public PPG database from the 2015 IEEE Signal Processing Cup, the average absolute error was 1.01 beats per minute and the Pearson correlation was 0.9972. These results confirm that the proposed Mix-SVM approach has potential for multi-channel WPPG-based heart rate estimation in the presence of intense physical exercise.
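    The final step of any such pipeline, locating the spectral peak that corresponds to heart rate, can be sketched on a synthetic signal. This is a minimal illustration of peak picking on a clean windowed signal, not the Mix-SVM method itself; the 125 Hz sampling rate and the 42-210 BPM search band are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 125.0                                 # assumed WPPG sampling rate (Hz)
    t = np.arange(0, 8, 1 / fs)                # one 8-second analysis window
    ppg = np.sin(2 * np.pi * 1.5 * t)          # synthetic pulse at 1.5 Hz = 90 BPM
    ppg += 0.1 * rng.standard_normal(t.size)   # mild residual noise after denoising

    # Locate the dominant spectral peak inside a plausible heart-rate band.
    spec = np.abs(np.fft.rfft(ppg))
    freqs = np.fft.rfftfreq(ppg.size, d=1 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.5)     # 42-210 BPM search range
    peak_hz = freqs[band][np.argmax(spec[band])]
    print(round(peak_hz * 60))                 # estimated heart rate in BPM
    ```

    On this clean signal the peak picker recovers the 90 BPM rate; the hard part addressed by the paper is making the spectrum this clean despite motion artifacts.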

  5. Field assessment of noncontact stream gauging using portable surface velocity radars (SVR)

    NASA Astrophysics Data System (ADS)

    Welber, Matilde; Le Coz, Jérôme; Laronne, Jonathan B.; Zolezzi, Guido; Zamler, Daniel; Dramais, Guillaume; Hauet, Alexandre; Salvaro, Martino

    2016-02-01

    The applicability of a portable, commercially available surface velocity radar (SVR) for noncontact stream gauging was evaluated through a series of field-scale experiments carried out in a variety of sites and deployment conditions. Comparisons with various concurrent techniques showed acceptable agreement with velocity profiles, with larger uncertainties close to the banks. In addition to discharge error sources shared with intrusive velocity-area techniques, SVR discharge estimates are affected by flood-induced changes in the bed profile and by the selection of a depth-averaged to surface velocity ratio, or velocity coefficient (α). Cross-sectional averaged velocity coefficients showed smaller fluctuations and closer agreement with theoretical values than those computed on individual verticals, especially in channels with high relative roughness. Our findings confirm that α = 0.85 is a valid default value, with a preferred site-specific calibration to avoid underestimation of discharge in very smooth channels (relative roughness ˜ 0.001) and overestimation in very rough channels (relative roughness > 0.05). Theoretically derived and site-calibrated values of α also give accurate SVR-based discharge estimates (within 10%) for low and intermediate roughness flows (relative roughness 0.001 to 0.05). Moreover, discharge uncertainty does not exceed 10% even for a limited number of SVR positions along the cross section (particularly advantageous to gauge unsteady flood flows and very large floods), thereby extending the range of validity of rating curves.
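    The velocity-area discharge computation with the default velocity coefficient α = 0.85 can be sketched as below. The verticals (surface velocities, depths, subsection widths) are made-up example values, not data from the study.

    ```python
    def svr_discharge(surface_vel, depth, width, alpha=0.85):
        """Mid-section velocity-area discharge from SVR surface velocities.
        alpha converts each surface velocity to a depth-averaged velocity."""
        return sum(alpha * v * d * w
                   for v, d, w in zip(surface_vel, depth, width))

    # one vertical per SVR position across the cross section (toy values)
    v = [0.8, 1.2, 1.0]   # surface velocities (m/s)
    d = [0.5, 1.0, 0.6]   # depths (m)
    w = [2.0, 2.0, 2.0]   # subsection widths (m)
    print(round(svr_discharge(v, d, w), 3))  # discharge in m^3/s: 3.74
    ```

    The study's finding that α should be calibrated per site amounts to replacing the 0.85 default with a value fitted for the channel's relative roughness.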

  6. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    PubMed

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
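    The failure of the normal-theory weighting can be demonstrated with a short simulation. The quantal parameters below (2 release sites, release probability 0.9) are assumed toy values; the general moment formula Var(s²) = μ4/n − σ⁴(n−3)/(n(n−1)) is the standard result for i.i.d. samples that the abstract's estimators build on.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 10, 200_000           # responses per variance estimate, repetitions

    # Non-normal "synaptic" amplitudes: 2 release sites, release probability 0.9
    N_sites, p = 2, 0.9
    data = rng.binomial(N_sites, p, size=(reps, n)).astype(float)
    s2 = data.var(axis=1, ddof=1)   # unbiased sample variances
    empirical = s2.var()            # observed variance of the variance

    sigma2 = N_sites * p * (1 - p)                           # true variance
    mu4 = sigma2 * (1 + 3 * (N_sites - 2) * p * (1 - p))     # binomial 4th central moment
    normal_based = 2 * sigma2**2 / (n - 1)                   # valid for normal data only
    general = mu4 / n - sigma2**2 * (n - 3) / (n * (n - 1))  # moment-based formula
    print(normal_based, general, empirical)
    ```

    The normal-based value underestimates the simulated variance of the variance by roughly a factor of two here, while the moment-based formula matches it, which is exactly why weights built on the normal assumption mislead the fit at central synapses.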

  7. A passive microwave technique for estimating rainfall and vertical structure information from space. Part 2: Applications to SSM/I data

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Giglio, Louis

    1994-01-01

    A multichannel physical approach for retrieving rainfall and its vertical structure from Special Sensor Microwave/Imager (SSM/I) observations is examined. While a companion paper was devoted exclusively to the description of the algorithm, its strengths, and its limitations, the main focus of this paper is to report on the results, applicability, and expected accuracies from this algorithm. Some examples are given that compare retrieved results with ground-based radar data from different geographical regions to illustrate the performance and utility of the algorithm under distinct rainfall conditions. More quantitative validation is accomplished using two months of radar data from Darwin, Australia, and the radar network over Japan. Instantaneous comparisons at Darwin indicate that root-mean-square errors for 1.25 deg areas over water are 0.09 mm/h compared to the mean rainfall value of 0.224 mm/h while the correlation exceeds 0.9. Similar results are obtained over the Japanese validation site with rms errors of 0.615 mm/h compared to the mean of 0.0880 mm/h and a correlation of 0.9. Results are less encouraging over land with root-mean-square errors somewhat larger than the mean rain rates and correlations of only 0.71 and 0.62 for Darwin and Japan, respectively. These validation studies are further used in combination with the theoretical treatment of expected accuracies developed in the companion paper to define error estimates on a broader scale than individual radar sites from which the errors may be analyzed. Comparisons with simpler techniques that are based on either emission or scattering measurements are used to illustrate the fact that the current algorithm, while better correlated with the emission methods over water, cannot be reduced to either of these simpler methods.

  8. Navigation of the autonomous vehicle reverse movement

    NASA Astrophysics Data System (ADS)

    Rachkov, M.; Petukhov, S.

    2018-02-01

    The paper presents a mathematical formulation of the vehicle reverse motion along a multi-link polygonal trajectory consisting of rectilinear segments interconnected by nodal points. The relevance of the problem stems from the need to solve a number of tasks: to save the vehicle in the event of a communication break by returning along the trajectory already traversed, to avoid a turn on the ground amid constrained obstacles or dangerous conditions, or to execute a partial return for a subsequent bypass of the obstacle and continuation of the forward movement. The navigation method assumes that landmarks observed during forward movement are used to elaborate the reverse path. To measure landmarks on board, a block of cameras is placed on a vehicle controlled by the operator through the radio channel. Errors in estimating deviation from the nominal trajectory of motion are determined using multidimensional correlation analysis based on the dynamics of the lateral deviation error and the vehicle speed error. The experimental results showed relatively high accuracy in determining the state vector, providing vehicle reverse motion relative to the reference trajectory with a practically acceptable error while returning to the start point.

  9. Performance Analysis of Iterative Channel Estimation and Multiuser Detection in Multipath DS-CDMA Channels

    NASA Astrophysics Data System (ADS)

    Li, Husheng; Betz, Sharon M.; Poor, H. Vincent

    2007-05-01

    This paper examines the performance of decision feedback based iterative channel estimation and multiuser detection in channel coded aperiodic DS-CDMA systems operating over multipath fading channels. First, explicit expressions describing the performance of channel estimation and parallel interference cancellation based multiuser detection are developed. These results are then combined to characterize the evolution of the performance of a system that iterates among channel estimation, multiuser detection and channel decoding. Sufficient conditions for convergence of this system to a unique fixed point are developed.

  10. Comparison of different source calculations in two-nucleon channel at large quark mass

    NASA Astrophysics Data System (ADS)

    Yamazaki, Takeshi; Ishikawa, Ken-ichi; Kuramashi, Yoshinobu

    2018-03-01

    We investigate a systematic error coming from higher excited state contributions in the energy shift of a light nucleus in the two-nucleon channel by comparing two different source calculations with the exponential and wall sources. Since it is hard to obtain a clear signal of the wall-source correlation function in a plateau region, we employ a large quark mass, with the pion mass at 0.8 GeV, in quenched QCD. We discuss the systematic error in the spin-triplet channel of the two-nucleon system, and the volume dependence of the energy shift.

  11. Estimation of color filter array data from JPEG images for improved demosaicking

    NASA Astrophysics Data System (ADS)

    Feng, Wei; Reeves, Stanley J.

    2006-02-01

    On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.

  12. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    Engh, G.J. van den; Stokdijk, W.

    1992-09-22

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate. 17 figs.

  13. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
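    A Monte Carlo sketch shows why negative exponential turbulence is so punishing and why the multiple-aperture (MIMO) setups studied in the Letter matter. This is a single-aperture OOK simulation with assumed unit-mean fading and perfect channel knowledge at the receiver, not the Letter's analytical MIMO expression.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 2_000_000
    snr = 10 ** (25 / 10)                 # 25 dB average electrical SNR

    # Negative exponential irradiance fading (unit mean), OOK with IM/DD,
    # adaptive decision threshold at half the faded "on" level (perfect CSI).
    I = rng.exponential(1.0, n)           # per-symbol channel gain
    bits = rng.integers(0, 2, n)
    r = bits * I * np.sqrt(snr) + rng.normal(0.0, 1.0, n)
    decided = (r > I * np.sqrt(snr) / 2).astype(int)
    ber = np.mean(decided != bits)
    print(ber)                            # stays at a few percent despite 25 dB SNR
    ```

    Deep fades dominate the error rate (diversity order one), so the average BER falls only inversely with SNR; adding transmit/receive apertures multiplies the diversity order, which is the regime the Letter's closed-form expression quantifies.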

  14. On the performance evaluation of LQAM-MPPM techniques over exponentiated Weibull fading free-space optical channels

    NASA Astrophysics Data System (ADS)

    Khallaf, Haitham S.; Elfiqi, Abdulaziz E.; Shalaby, Hossam M. H.; Sampei, Seiichi; Obayya, Salah S. A.

    2018-06-01

    We investigate the performance of hybrid L-ary quadrature-amplitude modulation-multi-pulse pulse-position modulation (LQAM-MPPM) techniques over an exponentiated Weibull (EW) fading free-space optical (FSO) channel, considering both weather and pointing-error effects. Upper-bound and approximate tight upper-bound expressions for the bit-error rate (BER) of LQAM-MPPM techniques over EW FSO channels are obtained, taking into account the effects of fog, beam divergence, and pointing error. Setup block diagrams for both the transmitter and receiver of the LQAM-MPPM/FSO system are introduced and illustrated. The BER expressions are evaluated numerically, and the results reveal that the LQAM-MPPM technique outperforms ordinary LQAM and MPPM schemes under different fading levels and weather conditions. Furthermore, the effect of the modulation index is investigated; it turns out that a modulation index greater than 0.4 is required to optimize the system performance. Finally, pointing error introduces a large power penalty on the LQAM-MPPM system performance. Specifically, at a BER of 10^-9, pointing error introduces power penalties of about 45 and 28 dB for receiver aperture sizes of DR = 50 and 200 mm, respectively.

  15. Ground Vibration Attenuation Measurement using Triaxial and Single Axis Accelerometers

    NASA Astrophysics Data System (ADS)

    Mohammad, A. H.; Yusoff, N. A.; Madun, A.; Tajudin, S. A. A.; Zahari, M. N. H.; Chik, T. N. T.; Rahman, N. A.; Annuar, Y. M. N.

    2018-04-01

    Peak particle velocity (PPV) is one of the important terms describing the vibration amplitude level, especially for a traveling wave attenuating with distance. Vibration measurement using a triaxial accelerometer is needed to obtain an accurate PPV value, but detailed measurement is limited by the size and the available channels of the data acquisition module. In this paper, an attempt to estimate accurate PPV has been made by using only one triaxial accelerometer together with multiple single-axis accelerometers for the ground vibration measurement. A field test was conducted on soft ground using nine single-axis accelerometers and a triaxial accelerometer installed at nine receiver locations R1 to R9. Based on the obtained results, the method shows convincing similarity between the actual PPV and the calculated PPV, with an error ratio of 0.97. With the designed method, vibration measurement equipment size can be reduced and fewer channels are required.
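    The difference between the resultant PPV from a triaxial sensor and the value seen by a single-axis sensor can be sketched with synthetic velocity components; the amplitudes and phase offsets below are assumed toy values, not field data.

    ```python
    import numpy as np

    t = np.linspace(0, 1, 1000)
    # toy particle-velocity components (mm/s) at one receiver, 10 Hz wave
    vx = 2.0 * np.sin(2 * np.pi * 10 * t)
    vy = 1.0 * np.sin(2 * np.pi * 10 * t + 0.5)
    vz = 0.5 * np.sin(2 * np.pi * 10 * t + 1.0)

    # True resultant PPV: peak of the instantaneous vector magnitude.
    ppv_vector = np.sqrt(vx**2 + vy**2 + vz**2).max()
    # What a single-axis measurement would report at best.
    ppv_axis = max(abs(vx).max(), abs(vy).max(), abs(vz).max())
    print(round(ppv_vector, 2), round(ppv_axis, 2))
    ```

    The vector magnitude always bounds the single-axis value from above, which is why a triaxial measurement (or a calibration between the two, as attempted in the paper) is needed for an accurate PPV.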

  16. Stability Analysis of Multi-Sensor Kalman Filtering over Lossy Networks

    PubMed Central

    Gao, Shouwan; Chen, Pengpeng; Huang, Dan; Niu, Qiang

    2016-01-01

    This paper studies the remote Kalman filtering problem for a distributed system setting with multiple sensors that are located at different physical locations. Each sensor encapsulates its own measurement data into one single packet and transmits the packet to the remote filter via a distinct lossy channel. For each communication channel, a time-homogeneous Markov chain is used to model the normal operating condition of packet delivery and losses. Based on the Markov model, a necessary and sufficient condition is obtained, which can guarantee the stability of the mean estimation error covariance. In particular, the stability condition is explicitly expressed as a simple inequality whose parameters are the spectral radius of the system state matrix and the transition probabilities of the Markov chains. In contrast to the existing related results, our method imposes less restrictive conditions on systems. Finally, the results are illustrated by simulation examples. PMID:27104541
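    The flavor of such a spectral-radius inequality can be sketched with the classical bound ρ(A)²(1−p) < 1 for i.i.d. packet arrival probability p. This illustrative condition is an assumption standing in for the paper's exact Markov-chain inequality; the system matrix below is made up.

    ```python
    import numpy as np

    def stable_under_loss(A, p_arrive):
        """Illustrative check of the classical bound rho(A)^2 * (1 - p) < 1,
        where p is the packet arrival probability of the channel."""
        rho = max(abs(np.linalg.eigvals(A)))
        return rho**2 * (1 - p_arrive) < 1

    A = np.array([[1.2, 0.1],
                  [0.0, 0.9]])         # unstable open-loop dynamics, rho = 1.2
    print(stable_under_loss(A, 0.5))   # 1.44 * 0.5 = 0.72 < 1: stable
    print(stable_under_loss(A, 0.2))   # 1.44 * 0.8 = 1.152 >= 1: unstable
    ```

    The trade-off the inequality captures is direct: the more unstable the dynamics (larger ρ(A)), the more reliable the channel must be for the error covariance to stay bounded.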

  17. Transmission over UWB channels with OFDM system using LDPC coding

    NASA Astrophysics Data System (ADS)

    Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech

    2009-06-01

    Hostile wireless environments require the use of sophisticated signal processing methods. The paper concerns Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the transmission system with OFDM modulation was combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system were decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, was also used in the presented system. It is placed after the SPA (sum-product algorithm) decoder and is conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, depending on the type of parity-check matrices: randomly generated and deterministically constructed, the latter optimized for a practical decoder architecture implemented in an FPGA device.

  18. Performance of data-compression codes in channels with errors. Final report, October 1986-January 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-10-01

    Huffman codes, comma-free codes, and block codes with shift indicators are important candidate message-compression codes for improving the efficiency of communications systems. This study was undertaken to determine whether these codes could be used to increase the throughput of the fixed very-low-frequency (FVLF) communication system. This application involves the use of compression codes in a channel with errors.
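    The core hazard that makes compression codes fragile in a channel with errors can be demonstrated with a toy prefix (Huffman-style) code; the code table below is invented for illustration and is not taken from the report.

    ```python
    # Toy prefix code: a single flipped bit can desynchronize the decoder,
    # corrupting symbols well past the bit that was actually hit.
    code = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
    decode = {v: k for k, v in code.items()}

    def encode(msg):
        return ''.join(code[ch] for ch in msg)

    def decode_stream(bits):
        out, cur = [], ''
        for b in bits:
            cur += b
            if cur in decode:          # a complete codeword has been seen
                out.append(decode[cur])
                cur = ''
        return ''.join(out)

    msg = 'abcdabcd'
    bits = encode(msg)
    corrupt = bits[:2] + ('1' if bits[2] == '0' else '0') + bits[3:]  # flip bit 2
    print(decode_stream(bits))      # original message recovered
    print(decode_stream(corrupt))   # decoding desynchronizes after the error
    ```

    This error-propagation behavior, absent in fixed-length block codes with shift indicators, is exactly the trade-off such a study must weigh against the compression gain.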

  19. Response Errors Explain the Failure of Independent-Channels Models of Perception of Temporal Order

    PubMed Central

    García-Pérez, Miguel A.; Alcalá-Quintana, Rocío

    2012-01-01

    Independent-channels models of perception of temporal order (also referred to as threshold models or perceptual latency models) have been ruled out because two formal properties of these models (monotonicity and parallelism) are not borne out by data from ternary tasks in which observers must judge whether stimulus A was presented before, after, or simultaneously with stimulus B. These models generally assume that observed responses are authentic indicators of unobservable judgments, but blinks, lapses of attention, or errors in pressing the response keys (maybe, but not only, motivated by time pressure when reaction times are being recorded) may make observers misreport their judgments or simply guess a response. We present an extension of independent-channels models that considers response errors and we show that the model produces psychometric functions that do not satisfy monotonicity and parallelism. The model is illustrated by fitting it to data from a published study in which the ternary task was used. The fitted functions describe very accurately the absence of monotonicity and parallelism shown by the data. These characteristics of empirical data are thus consistent with independent-channels models when response errors are taken into consideration. The implications of these results for the analysis and interpretation of temporal order judgment data are discussed. PMID:22493586
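    The extension described above can be sketched with a simple latency-difference model: the judged order depends on a Gaussian latency difference and a simultaneity window, and with probability eps the observer emits a random key press instead of the judgment. All parameter values (sigma, tau, eps) below are assumed for illustration, not the paper's fits.

    ```python
    import math

    def Phi(z):
        """Standard normal CDF."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def observed_probs(delta, sigma=60.0, tau=40.0, eps=0.05):
        """Ternary temporal-order judgment probabilities with response errors.
        delta: stimulus onset asynchrony (ms); eps: random key-press rate."""
        p_a = Phi((-tau - delta) / sigma)          # judged "A first"
        p_b = 1.0 - Phi((tau - delta) / sigma)     # judged "B first"
        p_s = 1.0 - p_a - p_b                      # judged "simultaneous"
        mix = lambda p: (1 - eps) * p + eps / 3.0  # random press with prob eps
        return mix(p_a), mix(p_s), mix(p_b)

    for delta in (-150.0, 0.0, 150.0):
        print(delta, [round(p, 3) for p in observed_probs(delta)])
    ```

    The mixing term puts a floor of eps/3 under every response category, so the observed psychometric functions never reach 0 or 1 and need not satisfy the monotonicity and parallelism predicted when responses are taken as authentic judgments.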

  20. Modeling the response of a monopulse radar to impulsive jamming signals using the Block Oriented System Simulator (BOSS)

    NASA Astrophysics Data System (ADS)

    Long, Jeffrey K.

    1989-09-01

    This thesis developed computer models of two types of amplitude-comparison monopulse processors using the Block Oriented System Simulator (BOSS) software package and determined the response of these models to impulsive input signals. The study was an effort to determine the susceptibility of monopulse tracking radars to impulsive jamming signals. Two types of amplitude-comparison monopulse receivers were modeled, one using logarithmic amplifiers and the other using automatic gain control (AGC) for signal normalization. Simulations of both types of systems were run under various conditions of gain or frequency imbalance between the two receiver channels. The resulting errors from the imbalanced simulations were compared to the outputs of similar baseline simulations with no electrical imbalances. The accuracy of both types of processors was directly affected by gain or frequency imbalances in their receiver channels. In most cases, it was possible to generate both positive and negative angular errors, depending on the type and degree of mismatch between the channels. The system most susceptible to induced errors was a frequency-imbalanced processor using AGC circuitry. Any errors introduced are a function of the degree of mismatch between the channels and would therefore be difficult to exploit reliably.
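    How a channel gain imbalance biases an amplitude-comparison monopulse processor can be sketched with two squinted Gaussian beam patterns; the squint, beamwidth, and imbalance values below are assumed for illustration, not the thesis's BOSS models.

    ```python
    import numpy as np

    def monopulse_error(theta, squint=2.0, beamwidth=6.0, gain_imbalance_db=0.0):
        """Amplitude-comparison monopulse error (A - B) / (A + B) for two
        squinted Gaussian beams; the imbalance is applied to channel B."""
        g = lambda off: np.exp(-4 * np.log(2) * (off / beamwidth) ** 2)
        a = g(theta - squint)
        b = g(theta + squint) * 10 ** (gain_imbalance_db / 20.0)
        return (a - b) / (a + b)

    theta = np.linspace(-3, 3, 6001)       # angle off boresight (deg)
    for imb in (0.0, 1.0):                 # dB of gain mismatch in one channel
        err = monopulse_error(theta, gain_imbalance_db=imb)
        zero = theta[np.argmin(np.abs(err))]
        print(imb, round(zero, 3))         # null position shifts with imbalance
    ```

    A balanced processor nulls exactly at boresight; a 1 dB mismatch moves the null a fraction of a beamwidth off target, which is the angular error mechanism the thesis exercised with impulsive inputs.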

  1. Field evaluation of distance-estimation error during wetland-dependent bird surveys

    USGS Publications Warehouse

    Nadeau, Christopher P.; Conway, Courtney J.

    2012-01-01

    Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (x̄error = -9 m, s.d.error = 47 m) and when estimating distances to real birds during field trials (x̄error = 39 m, s.d.error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations of distance to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. 
Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.

  2. Ion channel-mediated uptake of cationic vital dyes into live cells: a potential source of error when assessing cell viability.

    PubMed

    Bukhari, Maurish; Burm, Hayley; Samways, Damien S K

    2016-10-01

    Ionic "vital dyes" are commonly used to assess cell viability based on the idea that their permeation is contingent on a loss of membrane integrity. However, the possibility that dye entry is conducted into live cells by endogenous membrane transporters must be recognized and controlled for. Several cation-selective plasma membrane-localized ion channels, including the adenosine 5'-triphosphate (ATP)-gated P2X receptors, have been reported to conduct entry of the DNA-binding fluorescence dye, YO-PRO-1, into live cells. Extracellular ATP often becomes elevated as a result of release from dying cells, and so it is possible that activation of P2X channels on neighboring live cells could lead to exaggerated estimation of cytotoxicity. Here, we screened a number of fluorescent vital dyes for ion channel-mediated uptake in HEK293 cells expressing recombinant P2X2, P2X7, or TRPV1 channels. Our data shows that activation of all three channels caused substantial uptake and nuclear accumulation of YO-PRO-1, 4',6-diamidino-2-phenylindole (DAPI), and Hoechst 33258 into transfected cells and did so well within the time period usually used for incubation of cells with vital dyes. In contrast, channel activation in the presence of propidium iodide and SYTOX Green caused no measurable uptake and accumulation during a 20-min exposure, suggesting that these dyes are not likely to exhibit measurable uptake through these particular ion channels during a conventional cell viability assay. Caution is encouraged when choosing and employing cationic dyes for the purpose of cell viability assessment, particularly when there is a likelihood of cells expressing ion channels permeable to large ions.

  3. Scalable video transmission over Rayleigh fading channels using LDPC codes

    NASA Astrophysics Data System (ADS)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate the important problem of efficiently utilizing available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding, ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.

  4. Bias in the Wagner-Nelson estimate of the fraction of drug absorbed.

    PubMed

    Wang, Yibin; Nedelman, Jerry

    2002-04-01

To examine and quantify bias in the Wagner-Nelson estimate of the fraction of drug absorbed resulting from the estimation error of the elimination rate constant (k), measurement error of the drug concentration, and the truncation error in the area under the curve (AUC). Bias in the Wagner-Nelson estimate was derived as a function of post-dosing time (t), k, the ratio of the absorption rate constant to k (r), and the coefficient of variation for estimates of k (CVk) or for the observed concentration (CV%), by assuming a one-compartment model and using an independent estimate of k. The derived functions were used for evaluating the bias with r = 0.5, 3, or 6; k = 0.1 or 0.2; CVk = 0.2 or 0.4; and CV% = 0.2 or 0.4; for t = 0 to 30 or 60. Estimation error of k resulted in an upward bias in the Wagner-Nelson estimate that could lead to the estimate of the fraction absorbed being greater than unity. The bias resulting from the estimation error of k inflates the fraction-absorbed vs. time profiles mainly in the early post-dosing period. The magnitude of the bias in the Wagner-Nelson estimate resulting from estimation error of k was mainly determined by CVk. The bias in the Wagner-Nelson estimate resulting from estimation error in k can be dramatically reduced by use of the mean of several independent estimates of k, as in studies for development of an in vivo-in vitro correlation. The truncation error in the AUC can introduce a negative bias in the Wagner-Nelson estimate. This can partially offset the bias resulting from estimation error of k in the early post-dosing period. Measurement error of concentration does not introduce bias in the Wagner-Nelson estimate. Estimation error of k results in an upward bias in the Wagner-Nelson estimate, mainly in the early drug absorption phase. The truncation error in AUC can result in a downward bias, which may partially offset the upward bias due to estimation error of k in the early absorption phase. Measurement error of concentration does not introduce bias. The joint effect of estimation error of k and truncation error in AUC can result in a non-monotonic fraction-of-drug-absorbed vs. time profile. However, only estimation error of k can lead to a Wagner-Nelson estimate of the fraction of drug absorbed greater than unity.
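
    The upward bias from a noisy estimate of k can be illustrated with a small Monte Carlo sketch of the Wagner-Nelson calculation. The one-compartment parameters (k = 0.1, r = 3) match the paper's evaluation grid, but the sampling scheme and the lognormal-free noise model are illustrative assumptions:

```python
import numpy as np

def wagner_nelson(t, conc, k):
    """Wagner-Nelson fraction absorbed, F(t) = (C(t) + k*AUC(0,t)) / (k*AUC(0,inf)),
    with trapezoidal AUC and tail extrapolation AUC(0,inf) = AUC(0,T) + C(T)/k."""
    auc = np.concatenate(([0.0], np.cumsum(np.diff(t) * (conc[1:] + conc[:-1]) / 2.0)))
    auc_inf = auc[-1] + conc[-1] / k
    return (conc + k * auc) / (k * auc_inf)

# One-compartment model with first-order absorption, unit dose/volume.
k, ka = 0.1, 0.3
t = np.linspace(0.0, 60.0, 601)
conc = (ka / (ka - k)) * (np.exp(-k * t) - np.exp(-ka * t))
f_true = wagner_nelson(t, conc, k)

# Unbiased but noisy estimates of k (CVk = 0.4): because F depends nonlinearly
# (convexly) on k, averaging the resulting profiles reveals an upward bias early on.
rng = np.random.default_rng(0)
k_hats = k * (1.0 + 0.4 * rng.standard_normal(2000))
k_hats = k_hats[k_hats > 0.02]   # discard non-physical draws
f_mc = np.mean([wagner_nelson(t, conc, kh) for kh in k_hats], axis=0)
mean_early_bias = float(np.mean((f_mc - f_true)[t <= 10.0]))
```

    Individual noisy-k profiles can exceed unity early on, exactly the behavior the abstract warns about.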

  5. Transforming Surface Water Hydrology Through SWOT Altimetry

    NASA Astrophysics Data System (ADS)

    Alsdorf, Douglas; Mognard, Nelly; Rodriguez, Ernesto

    2013-09-01

SWOT will measure water surface elevations across rivers, lakes, wetlands, and reservoirs with a 120 km wide swath using decimeter-scale pixels having centimetric-scale height accuracies. Nothing like this "water surface topography" has been collected on a consistent basis by any method. Thus, SWOT will provide a transformative measurement for global hydrology. Storage change measurements from SWOT are expected to have an error of 10% or better for water bodies of (250 m)2 and larger. Discharge estimation is complicated by the lack of channel bathymetric knowledge. Nevertheless, two model-based studies of the Ohio River suggest SWOT discharge errors will be about 10%. Important questions will be addressed via SWOT measurements, e.g., (1) What is the water balance of the Congo Basin and indeed of any basin? (2) Where does a wetland receive its water: from upland runoff or from an adjacent river? (3) What are the implications for transboundary rivers?

  6. Investigation of adaptive filtering and MDL mitigation based on space-time block-coding for spatial division multiplexed coherent receivers

    NASA Astrophysics Data System (ADS)

    Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi

    2017-07-01

In this paper, we explored the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, in which the weight matrices of frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm can achieve a 43.6% enhancement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM) and 64-QAM, respectively, at their respective bit-error rates (BER) under minimum-mean-square-error (MMSE) equalization.

  7. A hybrid demodulation method of fiber-optic Fabry-Perot pressure sensor

    NASA Astrophysics Data System (ADS)

    Yu, Le; Lang, Jianjun; Pan, Yong; Wu, Di; Zhang, Min

    2013-12-01

Fiber-optic Fabry-Perot pressure sensors have been widely applied to measure downhole pressure in oilfields. For multiple wells, it takes a long time (tens of seconds) to demodulate the downhole pressure values of all wells with a single demodulation system, while equipping every well with its own system is costly, which heavily limits the sensor's application in oilfields. In the present paper, a new hybrid demodulation method, combining the windowed nonequispaced discrete Fourier transform (nDFT) method with a segment-search minimum mean square error (MMSE) estimation method, was developed, by which the demodulation time can be reduced to 200 ms, i.e., measuring 10 channels/wells takes less than 2 s. Besides, experimental results showed that the demodulated cavity length of the fiber-optic Fabry-Perot sensor has a maximum error of 0.5 nm, and consequently the pressure measurement accuracy can reach 0.4% F.S.
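
    A minimal sketch of the MMSE search for such a demodulator, assuming an idealized two-beam interference model and a prior interval for the cavity length narrow enough to exclude fringe-order ambiguity; all wavelengths and lengths are hypothetical, and the paper's windowed nDFT coarse stage is not reproduced:

```python
import numpy as np

def demodulate_cavity_length(wl_nm, spectrum, l_lo, l_hi):
    """Coarse segment scan plus fine refinement, minimizing the mean square error
    between the measured spectrum and the two-beam interference model
    I(lam) = 0.5*(1 + cos(4*pi*L/lam))."""
    def mse(l):
        model = 0.5 * (1.0 + np.cos(4.0 * np.pi * l / wl_nm))
        return float(np.mean((spectrum - model) ** 2))

    coarse = np.arange(l_lo, l_hi, 10.0)                 # 10 nm segments
    l0 = coarse[int(np.argmin([mse(l) for l in coarse]))]
    fine = np.arange(l0 - 10.0, l0 + 10.0, 0.05)         # 0.05 nm refinement
    return float(fine[int(np.argmin([mse(l) for l in fine]))])

wl = np.linspace(1510.0, 1590.0, 401)   # interrogation band, nm
true_l = 150_123.4                      # ~150 um cavity, nm
measured = 0.5 * (1.0 + np.cos(4.0 * np.pi * true_l / wl))
l_hat = demodulate_cavity_length(wl, measured, 148_000.0, 152_000.0)
```

    With a noise-free spectrum the two-stage scan recovers the cavity length to well within the 0.5 nm figure quoted above.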

  8. Characterization and error analysis of an operational retrieval algorithm for estimating column ozone and aerosol properties from ground-based ultra-violet irradiance measurements

    NASA Astrophysics Data System (ADS)

    Taylor, Thomas E.; L'Ecuyer, Tristan; Slusser, James; Stephens, Graeme; Krotkov, Nick; Davis, John; Goering, Christian

    2005-08-01

Extensive sensitivity and error characteristics of a recently developed optimal estimation retrieval algorithm which simultaneously determines aerosol optical depth (AOD), aerosol single scatter albedo (SSA) and total ozone column (TOC) from ultra-violet irradiances are described. The algorithm inverts measured diffuse and direct irradiances at 7 channels in the UV spectral range obtained from the United States Department of Agriculture's (USDA) UV-B Monitoring and Research Program's (UVMRP) network of 33 ground-based UV-MFRSR instruments to produce aerosol optical properties and TOC at all seven wavelengths. Sensitivity studies of the Tropospheric Ultra-violet/Visible (TUV) radiative transfer model performed for various operating modes (Delta-Eddington versus n-stream Discrete Ordinate) over domains of AOD, SSA, TOC, asymmetry parameter and surface albedo show that the solutions are well constrained. Realistic input error budgets and diagnostic and error outputs from the retrieval are analyzed to demonstrate the atmospheric conditions under which the retrieval provides useful and significant results. After optimizing the algorithm for the USDA site in Panther Junction, Texas, the retrieval algorithm was run on a cloud-screened set of irradiance measurements for the month of May 2003. Comparisons to independently derived AODs are favorable, with root mean square (RMS) differences of about 3% to 7% at 300 nm and less than 1% at 368 nm on May 12 and 22, 2003. This retrieval method will be used to build an aerosol climatology and provide ground-truthing of satellite measurements by running it operationally on the USDA UV network database.

  9. Estimating flood hydrographs for urban basins in North Carolina

    USGS Publications Warehouse

    Mason, R.R.; Bales, J.D.

    1996-01-01

A dimensionless hydrograph for North Carolina was developed from data collected in 29 urban and urbanizing basins in the State. The dimensionless hydrograph can be used with an estimate of peak flow and basin lagtime to synthesize a design flood hydrograph for urban basins in North Carolina. Peak flows can be estimated from a number of available techniques; a procedure for estimating basin lagtime from main channel length, stream slope, and percentage of impervious area was developed from data collected at 50 sites and is presented in this report. The North Carolina dimensionless hydrograph provides satisfactory predictions of flood hydrographs in all regions of the State except for basins in or near Asheville, where the method overestimated 11 of 12 measured hydrographs. A previously developed dimensionless hydrograph for urban basins in the Piedmont and upper Coastal Plain of South Carolina provides better flood-hydrograph predictions for the Asheville basins and has a standard error of 21 percent as compared to 41 percent for the North Carolina dimensionless hydrograph.
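
    The synthesis step amounts to scaling the dimensionless ordinates by the estimated peak flow and basin lagtime. A sketch follows; the ordinates below are hypothetical placeholders, not the report's tabulated North Carolina values:

```python
import numpy as np

# Hypothetical dimensionless ordinates (time/lagtime, discharge/peak flow).
t_ratio = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 2.0, 2.5, 3.0])
q_ratio = np.array([0.0, 0.12, 0.40, 0.78, 1.0, 0.85, 0.62, 0.30, 0.12, 0.0])

def synthesize_hydrograph(peak_flow, lagtime):
    """Scale the dimensionless hydrograph by estimated peak flow and basin lagtime."""
    return t_ratio * lagtime, q_ratio * peak_flow

# E.g., an estimated 1200 cfs peak and a 2.5 h lagtime for some urban basin.
time_hr, flow_cfs = synthesize_hydrograph(peak_flow=1200.0, lagtime=2.5)
```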

  10. Using sediment 'fingerprints' to assess sediment-budget errors, north Halawa Valley, Oahu, Hawaii, 1991-92

    USGS Publications Warehouse

    Hill, B.R.; DeCarlo, E.H.; Fuller, C.C.; Wong, M.F.

    1998-01-01

Reliable estimates of sediment-budget errors are important for interpreting sediment-budget results. Sediment-budget errors are commonly considered equal to sediment-budget imbalances, which may underestimate actual sediment-budget errors if they include compensating positive and negative errors. We modified the sediment 'fingerprinting' approach to qualitatively evaluate compensating errors in an annual (1991) fine (<63 µm) sediment budget for the North Halawa Valley, a mountainous, forested drainage basin on the island of Oahu, Hawaii, during construction of a major highway. We measured concentrations of aeolian quartz and 137Cs in sediment sources and fluvial sediments, and combined concentrations of these aerosols with the sediment budget to construct aerosol budgets. Aerosol concentrations were independent of the sediment budget, hence aerosol budgets were less likely than sediment budgets to include compensating errors. Differences between sediment-budget and aerosol-budget imbalances therefore provide a measure of compensating errors in the sediment budget. The sediment-budget imbalance equalled 25% of the fluvial fine-sediment load. Aerosol-budget imbalances were equal to 19% of the fluvial 137Cs load and 34% of the fluvial quartz load. The reasonably close agreement between sediment- and aerosol-budget imbalances indicates that compensating errors in the sediment budget were not large and that the sediment-budget imbalance is a reliable measure of sediment-budget error. We attribute at least one-third of the 1991 fluvial fine-sediment load to highway construction. Continued monitoring indicated that highway construction produced 90% of the fluvial fine-sediment load during 1992. Erosion of channel margins and attrition of coarse particles provided most of the fine sediment produced by natural processes. Hillslope processes contributed relatively minor amounts of sediment.

  11. Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback

    PubMed Central

    Hwang, Ing-Shiou; Lin, Yen-Ting; Huang, Wei-Min; Yang, Zong-Ru; Hu, Chia-Ling; Chen, Yi-Ching

    2017-01-01

    Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13–35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band. PMID:28125658
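
    Sample entropy, used above to characterize force fluctuations, can be sketched in a few lines. This is a common SampEn formulation; the parameter choices m = 2 and r = 0.2·SD are conventional defaults, not necessarily the study's:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) = -ln(A/B): B counts pairs of length-m templates and A pairs
    of length-(m+1) templates matching within tolerance r (Chebyshev distance),
    self-matches excluded. r is r_factor times the signal's standard deviation."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    return -np.log(match_count(m + 1) / match_count(m))

rng = np.random.default_rng(3)
se_noise = sample_entropy(rng.standard_normal(500))                # irregular
se_sine = sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 500)))  # regular
```

    Higher values indicate less regular signals, which is why error-augmentation feedback raising the sample entropy of force fluctuations signals richer gradation behavior.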

  12. Utilization of all Spectral Channels of IASI for the Retrieval of the Atmospheric State

    NASA Astrophysics Data System (ADS)

    Del Bianco, S.; Cortesi, U.; Carli, B.

    2010-12-01

The retrieval of atmospheric state parameters from broadband measurements acquired by high spectral resolution sensors, such as the Infrared Atmospheric Sounding Interferometer (IASI) onboard the Meteorological Operational (MetOp) platform, generally requires dealing with a prohibitively large number of spectral elements available from a single observation (8461 samples in the case of IASI, covering the 645-2760 cm-1 range with a resolution of 0.5 cm-1 and a spectral sampling of 0.25 cm-1). Most inversion algorithms developed for both operational and scientific analysis of IASI spectra perform a reduction of the data - typically based on channel selection, super-channel clustering or Principal Component Analysis (PCA) techniques - in order to handle the high dimensionality of the problem. Accordingly, simultaneous processing of all IASI channels has received relatively little attention. Here we prove the feasibility of a retrieval approach exploiting all spectral channels of IASI to extract information on water vapor, temperature and ozone profiles. This multi-target retrieval removes the systematic errors due to interfering parameters and makes channel selection no longer necessary. The challenging computation is made possible by the use of a coarse spectral grid for the forward model calculation and by the abatement of the associated modeling errors through the use of a variance-covariance matrix of the residuals that takes into account all the forward model errors.

  13. New coherent laser communication detection scheme based on channel-switching method.

    PubMed

    Liu, Fuchuan; Sun, Jianfeng; Ma, Xiaoping; Hou, Peipei; Cai, Guangyu; Sun, Zhiwei; Lu, Zhiyong; Liu, Liren

    2015-04-01

A new coherent laser communication detection scheme based on the channel-switching method is proposed. The detection front end of this scheme comprises a 90° optical hybrid and two balanced photodetectors, which output the in-phase (I) channel and quadrature-phase (Q) channel signal currents, respectively. With this method, ultrahigh-speed analog-to-digital conversion of the I- or Q-channel signal is not required. The phase error between the signal and local lasers is obtained by a simple analog circuit. Using the phase error signal, the signals of the I and Q channels are switched alternately. The principle of this detection scheme is presented. Moreover, the sensitivity of this scheme is compared with that of homodyne detection with an optical phase-locked loop. An experimental setup was constructed to verify the proposed detection scheme. The offline processing procedure and results are presented. This scheme can be realized with a simple structure and has potential applications in cost-effective high-speed laser communication.

  14. Analysis of automatic repeat request methods for deep-space downlinks

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Ekroot, L.

    1995-01-01

    Automatic repeat request (ARQ) methods cannot increase the capacity of a memoryless channel. However, they can be used to decrease the complexity of the channel-coding system to achieve essentially error-free transmission and to reduce link margins when the channel characteristics are poorly predictable. This article considers ARQ methods on a power-limited channel (e.g., the deep-space channel), where it is important to minimize the total power needed to transmit the data, as opposed to a bandwidth-limited channel (e.g., terrestrial data links), where the spectral efficiency or the total required transmission time is the most relevant performance measure. In the analysis, we compare the performance of three reference concatenated coded systems used in actual deep-space missions to that obtainable by ARQ methods using the same codes, in terms of required power, time to transmit with a given number of retransmissions, and achievable probability of word error. The ultimate limits of ARQ with an arbitrary number of retransmissions are also derived.
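
    Under the idealized assumptions of independent attempts and perfect error detection, the basic ARQ arithmetic looks like this (a sketch, not the article's full power-limited analysis):

```python
def residual_word_error(p_word, n_retx):
    """Probability a word is still in error after the first send plus up to
    n_retx retransmissions (independent attempts, perfect error detection)."""
    return p_word ** (n_retx + 1)

def expected_transmissions(p_word, n_retx=None):
    """Mean transmissions per word; unlimited retries give the geometric 1/(1-p)."""
    if n_retx is None:
        return 1.0 / (1.0 - p_word)
    # E[T] = sum_{i=0..n} P(first i attempts all fail) = sum of p**i
    return sum(p_word ** i for i in range(n_retx + 1))
```

    For example, a 1% word-error channel with two allowed retransmissions leaves a residual word-error probability of 1e-6 while costing only marginally more than one transmission per word on average.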

  15. A Channelization-Based DOA Estimation Method for Wideband Signals

    PubMed Central

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Moreover, the parallel processing architecture makes it easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566
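
    The incoherent sub-channel averaging idea can be sketched with a simplified single-source simulation; a conventional beamscan stands in for the paper's subspace estimators, and all array and signal parameters are hypothetical:

```python
import numpy as np

def beamscan_doa(snapshots, wavelength, spacing, grid_deg):
    """Conventional narrowband beamscan: steer across a grid of angles and return
    the angle maximizing the output power of the sample covariance."""
    m, n = snapshots.shape
    r_cov = snapshots @ snapshots.conj().T / n
    powers = []
    for theta in np.deg2rad(grid_deg):
        a = np.exp(-2j * np.pi * spacing * np.arange(m) * np.sin(theta) / wavelength)
        powers.append(np.real(a.conj() @ r_cov @ a))
    return float(grid_deg[int(np.argmax(powers))])

# 8-element ULA, half-wavelength spacing at the top sub-channel frequency, one
# source at 20 degrees; three frequency sub-channels processed independently.
rng = np.random.default_rng(1)
m, true_doa, c = 8, 20.0, 3e8
spacing = 0.5 * c / 2e9
grid = np.linspace(-60.0, 60.0, 241)
estimates = []
for f in (1.2e9, 1.6e9, 2.0e9):
    lam = c / f
    a = np.exp(-2j * np.pi * spacing * np.arange(m) * np.sin(np.deg2rad(true_doa)) / lam)
    s = (rng.standard_normal(200) + 1j * rng.standard_normal(200)) / np.sqrt(2)
    noise = 0.1 * (rng.standard_normal((m, 200)) + 1j * rng.standard_normal((m, 200)))
    estimates.append(beamscan_doa(np.outer(a, s) + noise, lam, spacing, grid))
doa_hat = float(np.mean(estimates))   # arithmetic-mean (incoherent) combining
```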

  16. Performance evaluation of spatial compounding in the presence of aberration and adaptive imaging

    NASA Astrophysics Data System (ADS)

    Dahl, Jeremy J.; Guenther, Drake; Trahey, Gregg E.

    2003-05-01

    Spatial compounding has been used for years to reduce speckle in ultrasonic images and to resolve anatomical features hidden behind the grainy appearance of speckle. Adaptive imaging restores image contrast and resolution by compensating for beamforming errors caused by tissue-induced phase errors. Spatial compounding represents a form of incoherent imaging, whereas adaptive imaging attempts to maintain a coherent, diffraction-limited aperture in the presence of aberration. Using a Siemens Antares scanner, we acquired single channel RF data on a commercially available 1-D probe. Individual channel RF data was acquired on a cyst phantom in the presence of a near field electronic phase screen. Simulated data was also acquired for both a 1-D and a custom built 8x96, 1.75-D probe (Tetrad Corp.). The data was compounded using a receive spatial compounding algorithm; a widely used algorithm because it takes advantage of parallel beamforming to avoid reductions in frame rate. Phase correction was also performed by using a least mean squares algorithm to estimate the arrival time errors. We present simulation and experimental data comparing the performance of spatial compounding to phase correction in contrast and resolution tasks. We evaluate spatial compounding and phase correction, and combinations of the two methods, under varying aperture sizes, aperture overlaps, and aberrator strength to examine the optimum configuration and conditions in which spatial compounding will provide a similar or better result than adaptive imaging. We find that, in general, phase correction is hindered at high aberration strengths and spatial frequencies, whereas spatial compounding is helped by these aberrators.
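
    Arrival-time error estimation between channels is the core of such phase correction. Below is a simple cross-correlation sketch with parabolic sub-sample refinement, used here as a stand-in for the least-mean-squares estimator mentioned above; all signal parameters are hypothetical:

```python
import numpy as np

def arrival_time_error(ref, ch, fs):
    """Delay of `ch` relative to `ref` from the cross-correlation peak, refined
    to sub-sample precision with a three-point parabolic fit around the peak."""
    xc = np.correlate(ch, ref, mode="full")
    k = int(np.argmax(xc))
    delta = 0.0
    if 0 < k < len(xc) - 1:
        denom = xc[k - 1] - 2.0 * xc[k] + xc[k + 1]
        if denom != 0.0:
            delta = 0.5 * (xc[k - 1] - xc[k + 1]) / denom
    return (k - (len(ref) - 1) + delta) / fs

fs = 40e6                                   # 40 MHz sampling
t = np.arange(256) / fs
ref = np.exp(-((t - 3e-6) ** 2) / (2 * (0.3e-6) ** 2)) * np.cos(2 * np.pi * 5e6 * t)
ch = np.roll(ref, 3)                        # channel arriving 3 samples (75 ns) late
tau = arrival_time_error(ref, ch, fs)
```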

  17. Parallel processing spacecraft communication system

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

    An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used; and the one which is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID. Each channel ID can be separately processed in parallel. This obviates the problem of waiting for error correction processing. If that channel number is zero, however, it indicates that the frame of data represents a critical command only. That data is handled in a special way, independent of the software. Otherwise, the processed data further handled using special double buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.

  18. Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel

    NASA Astrophysics Data System (ADS)

    Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin

    2016-10-01

An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity modulation/direct detection (IM/DD) optical communication systems. Using a nonparametric histogram and weighted least-squares linear fitting in the tail regions, the LLR is estimated and used for the soft-decision decoding of low-density parity-check (LDPC) codes. This method adapts well to the three main kinds of IM/DD optical channel, i.e., the chi-square channel, the Webb-Gaussian channel and the additive white Gaussian noise (AWGN) channel, with a negligible performance penalty from the channel estimation.
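
    A sketch of histogram-based LLR estimation with a weighted linear fit for sparsely populated bins, demonstrated on a Gaussian channel where the true LLR is known to be linear; the bin edges, count threshold, and fitting details are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def histogram_llr(samples0, samples1, edges, min_count=20):
    """Per-bin LLR(y) = ln p(y|1) - ln p(y|0) from training histograms. Bins with
    fewer than min_count hits in either class are filled by a linear fit over the
    well-populated bins, weighted by occupancy (a stand-in for the tail-region
    fitting described above)."""
    h0, _ = np.histogram(samples0, bins=edges)
    h1, _ = np.histogram(samples1, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    good = (h0 >= min_count) & (h1 >= min_count)
    llr = np.empty(centers.size)
    llr[good] = np.log(h1[good] / len(samples1)) - np.log(h0[good] / len(samples0))
    slope, intercept = np.polyfit(centers[good], llr[good], 1, w=np.sqrt(h0 + h1)[good])
    llr[~good] = slope * centers[~good] + intercept   # extrapolate sparse bins
    return centers, llr

# BPSK over AWGN: the true LLR is linear, 2*y/sigma**2 (= 3.125*y for sigma = 0.8).
rng = np.random.default_rng(4)
sigma = 0.8
y0 = -1.0 + sigma * rng.standard_normal(100_000)
y1 = 1.0 + sigma * rng.standard_normal(100_000)
centers, llr = histogram_llr(y0, y1, np.linspace(-4.0, 4.0, 41))
```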

  19. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). 
Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
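
    The screening-and-spread procedure can be sketched directly; the precipitation values below are hypothetical:

```python
import numpy as np

def bias_error_estimate(base, products, tolerance=0.5):
    """GPCP-style screening: keep product estimates within +/- tolerance (50%) of
    the base estimate and take their standard deviation as the bias error."""
    products = np.asarray(products, dtype=float)
    included = products[np.abs(products - base) <= tolerance * base]
    return included.std(), included

base = 3.0                                # base (GPCP) estimate, mm/day
products = [2.8, 3.3, 3.1, 5.2, 2.6]      # other products; 5.2 lies outside +/-50%
s, used = bias_error_estimate(base, products)
relative_bias = s / used.mean()           # s/m, as in the text
```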

  20. Window-Based Channel Impulse Response Prediction for Time-Varying Ultra-Wideband Channels.

    PubMed

    Al-Samman, A M; Azmi, M H; Rahman, T A; Khan, I; Hindia, M N; Fattouh, A

    2016-01-01

    This work proposes channel impulse response (CIR) prediction for time-varying ultra-wideband (UWB) channels by exploiting the fast movement of channel taps within delay bins. Considering the sparsity of UWB channels, we introduce a window-based CIR (WB-CIR) to approximate the high temporal resolutions of UWB channels. A recursive least square (RLS) algorithm is adopted to predict the time evolution of the WB-CIR. For predicting the future WB-CIR tap of window wk, three RLS filter coefficients are computed from the observed WB-CIRs of the left wk-1, the current wk and the right wk+1 windows. The filter coefficient with the lowest RLS error is used to predict the future WB-CIR tap. To evaluate our proposed prediction method, UWB CIRs are collected through measurement campaigns in outdoor environments considering line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios. Under similar computational complexity, our proposed method provides an improvement in prediction errors of approximately 80% for LOS and 63% for NLOS scenarios compared with a conventional method.
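
    A sketch of the window-selection idea: one RLS predictor per window, with the lowest-error filter used for the final prediction. The filter order, forgetting factor, and the toy tap trajectories are illustrative assumptions:

```python
import numpy as np

class RlsPredictor:
    """One-step linear predictor over the last `order` samples, updated by RLS."""
    def __init__(self, order=3, lam=0.98, delta=100.0):
        self.w = np.zeros(order)
        self.p = delta * np.eye(order)
        self.lam = lam
        self.err = np.inf   # squared a priori error of the latest update

    def step(self, history, target):
        x = np.asarray(history, dtype=float)
        e = target - self.w @ x
        k = self.p @ x / (self.lam + x @ self.p @ x)
        self.w += k * e
        self.p = (self.p - np.outer(k, x @ self.p)) / self.lam
        self.err = e * e

def predict_next_tap(left, current, right, order=3):
    """Train one RLS filter on each window's tap series (left, current, right)
    and predict the current window's next tap with the lowest-error filter."""
    filters = []
    for series in (left, current, right):
        f = RlsPredictor(order)
        for i in range(order, len(series)):
            f.step(series[i - order:i], series[i])
        filters.append(f)
    best = min(filters, key=lambda f: f.err)
    return best.w @ np.asarray(current[-order:], dtype=float)

# Toy tap trajectories: slowly varying sinusoids standing in for WB-CIR taps.
t = np.arange(40)
left, current, right = np.cos(0.2 * t + 0.3), np.cos(0.2 * t), np.cos(0.2 * t - 0.3)
next_tap = predict_next_tap(left, current, right)
```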

  2. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel; the method centers on the channel-optimized design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed without regard to channel errors.
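
    A greedy marginal-returns allocation is a simple stand-in for the steepest-descent bit assignment described above, using the high-rate distortion model sigma_i^2 * 2**(-2*b_i); the coefficient variances and bit budget are hypothetical:

```python
import numpy as np

def greedy_bit_allocation(variances, total_bits, max_bits=8):
    """Assign bits one at a time to the coefficient whose modeled distortion
    sigma_i^2 * 2**(-2*b_i) drops the most; ties go to the first coefficient."""
    variances = np.asarray(variances, dtype=float)
    bits = np.zeros(len(variances), dtype=int)
    for _ in range(total_bits):
        d_now = variances * 2.0 ** (-2 * bits)
        gain = d_now - variances * 2.0 ** (-2 * (bits + 1))
        gain[bits >= max_bits] = -np.inf   # per-coefficient bit cap
        bits[int(np.argmax(gain))] += 1
    return bits

# Hypothetical transform-coefficient variances, sorted by energy.
bits = greedy_bit_allocation([16.0, 4.0, 1.0, 0.25], total_bits=8)
```

    Because the modeled distortion is convex in the bit count, this greedy rule reaches the same allocation a descent method would for this model.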

  3. Robust vector quantization for noisy channels

    NASA Technical Reports Server (NTRS)

    Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.

    1988-01-01

The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.
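
    The binary word assignment problem can be sketched with a greedy pairwise-swapping search over a toy scalar codebook; this is a simplified relative of the algorithms discussed, and the bit-error rate and codebook values are hypothetical:

```python
import numpy as np
from itertools import combinations

def expected_channel_distortion(codebook, assign, ber):
    """Expected squared error from single-bit index errors: `assign[i]` is the
    binary word carried by codevector i; codevectors are used equiprobably."""
    n, nbits = len(codebook), int(np.log2(len(codebook)))
    inv = np.argsort(assign)                 # word -> codevector index
    d = 0.0
    for i in range(n):
        for b in range(nbits):
            j = inv[assign[i] ^ (1 << b)]    # decoded vector after bit b flips
            d += ber * (codebook[i] - codebook[j]) ** 2 / n
    return d

def binary_switching(codebook, ber=0.01, sweeps=10):
    """Greedy index reassignment: accept any pairwise swap of two codevectors'
    binary words that lowers the expected channel distortion."""
    assign = np.arange(len(codebook))
    best = expected_channel_distortion(codebook, assign, ber)
    for _ in range(sweeps):
        improved = False
        for i, j in combinations(range(len(codebook)), 2):
            assign[i], assign[j] = assign[j], assign[i]
            d = expected_channel_distortion(codebook, assign, ber)
            if d < best:
                best, improved = d, True
            else:
                assign[i], assign[j] = assign[j], assign[i]   # revert
        if not improved:
            break
    return assign, best

codebook = np.array([0.0, 3.0, 1.0, 2.0])   # toy scalar codebook
assign, d_opt = binary_switching(codebook)
d_identity = expected_channel_distortion(codebook, np.arange(4), 0.01)
```

    Note the transmission rate is untouched: only the labeling of the codevectors changes, exactly as in the schemes above.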

  4. A cascaded coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Shu, L.; Kasami, T.

    1985-01-01

    A cascade coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are evaluated. They seem to be quite suitable for satellite down-link error control.

  5. A cascaded coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, T.; Lin, S.

    1985-01-01

    A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.

  6. Carrier recovery techniques on satellite mobile channels

    NASA Technical Reports Server (NTRS)

    Vucetic, B.; Du, J.

    1990-01-01

    An analytical method and a stored channel model were used to evaluate error performance of uncoded quadrature phase shift keying (QPSK) and M-ary phase shift keying (MPSK) trellis coded modulation (TCM) over shadowed satellite mobile channels in the presence of phase jitter for various carrier recovery techniques.

  7. Fade-resistant forward error correction method for free-space optical communications systems

    DOEpatents

    Johnson, Gary W.; Dowla, Farid U.; Ruggiero, Anthony J.

    2007-10-02

    Free-space optical (FSO) laser communication systems offer exceptionally wide-bandwidth, secure connections between platforms that cannot otherwise be connected via physical means such as optical fiber or cable. However, FSO links are subject to strong channel fading due to atmospheric turbulence and beam pointing errors, limiting practical performance and reliability. We have developed a fade-tolerant architecture based on forward error correcting codes (FECs) combined with delayed, redundant sub-channels. This redundancy is made feasible through dense wavelength division multiplexing (WDM) and/or high-order M-ary modulation. Experiments and simulations show that error-free communication is feasible even when faced with fades that are tens of milliseconds long. We describe plans for practical implementation of a complete system operating at 2.5 Gbps.

  8. Remote sensing of channels and riparian zones with a narrow-beam aquatic-terrestrial LIDAR

    Treesearch

    Jim McKean; Dave Nagel; Daniele Tonina; Philip Bailey; Charles Wayne Wright; Carolyn Bohn; Amar Nayegandhi

    2009-01-01

    The high-resolution Experimental Advanced Airborne Research LIDAR (EAARL) is a new technology for cross-environment surveys of channels and floodplains. EAARL measurements of basic channel geometry, such as wetted cross-sectional area, are within a few percent of those from control field surveys. The largest channel mapping errors are along stream banks. The LIDAR data...

  9. Study of a co-designed decision feedback equalizer, deinterleaver, and decoder

    NASA Technical Reports Server (NTRS)

    Peile, Robert E.; Welch, Loyd

    1990-01-01

    A technique that promises better quality data from band-limited channels at lower received power in digital transmission systems is presented. Data transmission in such systems often suffers from intersymbol interference (ISI) and noise. Two separate techniques, channel coding and equalization, have caused considerable advances in the state of communication systems, and both concern themselves with removing the undesired effects of a communication channel. Equalizers mitigate the ISI, whereas coding schemes are used to incorporate error correction. In the past, most of the research in these two areas has been carried out separately. However, the individual techniques have strengths and weaknesses that are complementary in many applications: an integrated approach realizes gains in excess of those of a simple juxtaposition. Coding schemes have been successfully used in cascade with linear equalizers, which in the absence of ISI provide excellent performance. However, when both ISI and the noise level are relatively high, nonlinear receivers like the decision feedback equalizer (DFE) perform better. The DFE has its drawbacks: it suffers from error propagation. The technique presented here takes advantage of interleaving to integrate the two approaches so that the error propagation in the DFE can be reduced with the help of the error correction provided by the decoder. The results of simulations carried out for both binary and non-binary channels confirm that significant gain can be obtained by codesigning the equalizer and decoder. Although only systems with time-invariant channels and a simple DFE with linear filters were examined, the technique is fairly general and can easily be modified for more sophisticated equalizers to obtain even larger gains.

  10. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Abstract Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
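    Of the standard-error corrections compared above, the bootstrap is the easiest to sketch generically. The fragment below shows a plain nonparametric bootstrap standard error for an arbitrary estimator (here the sample mean, not the TSRI estimator itself, which would require the full two-stage model); the function name and data are illustrative.

```python
import random
import statistics

def bootstrap_se(data, estimator, n_boot=2000, seed=1):
    """Nonparametric bootstrap standard error of an estimator:
    resample the data with replacement, re-estimate, take the
    standard deviation of the replicates."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        reps.append(estimator(sample))
    return statistics.stdev(reps)

data = [2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 2.8, 2.4]
se = bootstrap_se(data, statistics.mean)
```

    For the sample mean this should land close to the analytic standard error s/sqrt(n); for a TSRI estimator the resampling would have to wrap both stages of the fit.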

  11. Evaluation of aquifer heterogeneity effects on river flow loss using a transition probability framework

    USGS Publications Warehouse

    Engdahl, N.B.; Vogler, E.T.; Weissmann, G.S.

    2010-01-01

    River-aquifer exchange is considered within a transition probability framework along the Rio Grande in Albuquerque, New Mexico, to provide a stochastic estimate of aquifer heterogeneity and river loss. Six plausible hydrofacies configurations were determined using categorized drill core and wetland survey data processed through the TPROGS geostatistical package. A base case homogeneous model was also constructed for comparison. River loss was simulated for low, moderate, and high Rio Grande stages and several different riverside drain stage configurations. Heterogeneity effects were quantified by determining the mean and variance of the K field for each realization compared to the root-mean-square (RMS) error of the observed groundwater head data. Simulation results showed that the heterogeneous models produced smaller estimates of loss than the homogeneous approximation. Differences between heterogeneous and homogeneous model results indicate that the use of a homogeneous K in a regional-scale model may result in an overestimation of loss but comparable RMS error. We find that the simulated river loss is dependent on the aquifer structure and is most sensitive to the volumetric proportion of fines within the river channel. Copyright 2010 by the American Geophysical Union.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
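    A Hamming code of the kind applied above to the small encrypted portion can be sketched in a few lines. Below is a standard Hamming(7,4) encoder/decoder that corrects any single bit error per codeword; it is a minimal illustration, not the paper's implementation.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into a 7-bit codeword with
    parity bits at positions 1, 2 and 4 (1-indexed)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one bit error and return the 4 data bits."""
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-indexed position of the error
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

msg = [1, 0, 1, 1]
code = hamming74_encode(msg)
corrupted = code[:]
corrupted[3] ^= 1                     # one bit flipped in the channel
```

    Applying this only to the encrypted fraction of the frame is what keeps the rate overhead small in the scheme above.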

  13. Performance Analysis of Amplify-and-Forward Relaying FSO/SC-QAM Systems over Weak Turbulence Channels and Pointing Error Impairments

    NASA Astrophysics Data System (ADS)

    Trung, Ha Duyen

    2017-12-01

    In this paper, the end-to-end performance of a free-space optical (FSO) communication system combined with Amplify-and-Forward (AF)-assisted or fixed-gain relaying technology using subcarrier quadrature amplitude modulation (SC-QAM) over weak atmospheric turbulence channels modeled by a log-normal distribution with pointing error impairments is studied. More specifically, unlike previous studies on AF relaying FSO communication systems without pointing error effects, the pointing error effect is studied here by taking into account the influence of beamwidth, aperture size and jitter variance. In addition, a combination of these models is used to analyze the combined effect of atmospheric turbulence and pointing errors on AF relaying FSO/SC-QAM systems. Finally, an analytical expression is derived to evaluate the average symbol error rate (ASER) performance of such systems. The numerical results show the impact of pointing errors on the performance of AF relaying FSO/SC-QAM systems and how proper values of aperture size and beamwidth can improve the performance of such systems. Some analytical results are confirmed by Monte-Carlo simulations.

  14. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  15. Optimum Cyclic Redundancy Codes for Noisy Channels

    NASA Technical Reports Server (NTRS)

    Posner, E. C.; Merkey, P.

    1986-01-01

    Capabilities and limitations of cyclic redundancy codes (CRC's) for detecting transmission errors in data sent over relatively noisy channels (e.g., voice-grade telephone lines or very-high-density storage media) are discussed in a 16-page report. Due to the prevalent use of bytes (multiples of 8 bits) in data transmission, the report is primarily concerned with cases in which both the block length and the number of redundant bits (check bits for use in error detection) included in each block are multiples of 8 bits.
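    The error-detection mechanism in question is easy to illustrate with a bitwise CRC over byte-aligned data. The sketch below uses the common CRC-8 polynomial x^8 + x^2 + x + 1 (0x07) rather than any specific code from the report; any CRC whose generator polynomial has more than one term detects every single-bit error.

```python
def crc8(data: bytes, poly=0x07, init=0x00) -> int:
    """Bitwise CRC-8 (polynomial x^8 + x^2 + x + 1), MSB first."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift out the top bit; XOR in the polynomial when it was 1.
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

frame = b"hello, channel"
check = crc8(frame)
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # single-bit channel error
```

    In a byte-aligned design like the one studied above, the check value would simply be appended to the block as one more redundant byte.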

  16. A function space approach to smoothing with applications to model error estimation for flexible spacecraft control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1981-01-01

    A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

  17. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  18. Fast QC-LDPC code for free space optical communication

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong

    2017-02-01

    Free Space Optical (FSO) communication systems use the atmosphere as a propagation medium. Hence the atmospheric turbulence effects lead to multiplicative noise related to signal intensity. In order to suppress the signal fading induced by multiplicative noise, we propose a fast Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) code for FSO communication systems. As a linear block code based on a sparse matrix, the performance of QC-LDPC codes is extremely close to the Shannon limit. Current studies of LDPC codes in FSO communications mainly focus on Gaussian and Rayleigh channels. In this study, the LDPC code is designed over an atmospheric turbulence channel that is neither Gaussian nor Rayleigh, which is closer to the practical situation. Based on the characteristics of the atmospheric channel, which is modeled as logarithmic-normal distribution and K-distribution, we designed a special QC-LDPC code and deduced the log-likelihood ratio (LLR). An irregular QC-LDPC code with variable rates for fast coding is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes, with high efficiency at low rates, stability at high rates, and a smaller number of iterations. The result of belief propagation (BP) decoding shows that the bit error rate (BER) clearly decreases as the Signal-to-Noise Ratio (SNR) increases. Therefore, LDPC channel coding technology can effectively improve the performance of FSO. At the same time, the post-decoding BER continues to fall as the SNR increases, without exhibiting an error-floor phenomenon.

  19. Cascade and parallel combination (CPC) of adaptive filters for estimating heart rate during intensive physical exercise from photoplethysmographic signal

    PubMed Central

    Islam, Mohammad Tariqul; Tanvir Ahmed, Sk.; Zabir, Ishmam; Shahnaz, Celia

    2018-01-01

    Photoplethysmographic (PPG) signal is getting popularity for monitoring heart rate in wearable devices because of simplicity of construction and low cost of the sensor. The task becomes very difficult due to the presence of various motion artefacts. In this study, an algorithm based on cascade and parallel combination (CPC) of adaptive filters is proposed in order to reduce the effect of motion artefacts. First, preliminary noise reduction is performed by averaging two channel PPG signals. Next, in order to reduce the effect of motion artefacts, a cascaded filter structure consisting of three cascaded adaptive filter blocks is developed where three-channel accelerometer signals are used as references to motion artefacts. To further reduce the effect of noise, a scheme based on convex combination of two such cascaded adaptive noise cancelers is introduced, where two widely used adaptive filters, namely recursive least squares and least mean squares filters, are employed. Heart rates are estimated from the noise reduced PPG signal in spectral domain. Finally, an efficient heart rate tracking algorithm is designed based on the nature of the heart rate variability. The performance of the proposed CPC method is tested on a widely used public database. It is found that the proposed method offers very low estimation error and a smooth heart rate tracking with simple algorithmic approach. PMID:29515812
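    The building block of such a scheme, an adaptive noise canceller driven by a motion reference, can be sketched with a plain LMS filter. The example below is a simplified single-reference stand-in for the cascaded RLS/LMS structure described above; the signals and parameters are synthetic.

```python
import math

def lms_cancel(primary, reference, mu=0.02, taps=4):
    """LMS adaptive noise canceller: subtract the part of `primary`
    that is linearly predictable from `reference` (the artefact)."""
    w = [0.0] * taps
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # artefact estimate
        e = primary[n] - y                         # cleaned sample
        w = [wk + 2 * mu * e * xk for wk, xk in zip(w, x)]
        out.append(e)
    return out

# Synthetic "PPG" tone plus an artefact correlated with the reference.
n = 2000
ref = [math.sin(0.21 * i) for i in range(n)]
clean = [math.sin(0.05 * i) for i in range(n)]
noisy = [c + 0.8 * r for c, r in zip(clean, ref)]
cleaned = lms_cancel(noisy, ref)
```

    After the filter converges, the residual error is far below the original artefact power; the convex combination in the paper blends two such cancellers to get the better of both.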

  20. Flood-frequency characteristics of Wisconsin streams

    USGS Publications Warehouse

    Walker, John F.; Peppler, Marie C.; Danz, Mari E.; Hubbard, Laura E.

    2017-05-22

    Flood-frequency characteristics for 360 gaged sites on unregulated rural streams in Wisconsin are presented for percent annual exceedance probabilities ranging from 0.2 to 50 using a statewide skewness map developed for this report. Equations of the relations between flood-frequency and drainage-basin characteristics were developed by multiple-regression analyses. Flood-frequency characteristics for ungaged sites on unregulated, rural streams can be estimated by use of the equations presented in this report. The State was divided into eight areas of similar physiographic characteristics. The most significant basin characteristics are drainage area, soil saturated hydraulic conductivity, main-channel slope, and several land-use variables. The standard error of prediction for the equation for the 1-percent annual exceedance probability flood ranges from 56 to 70 percent for Wisconsin Streams; these values are larger than results presented in previous reports. The increase in the standard error of prediction is likely due to increased variability of the annual-peak discharges, resulting in increased variability in the magnitude of flood peaks at higher frequencies. For each of the unregulated rural streamflow-gaging stations, a weighted estimate based on the at-site log Pearson type III analysis and the multiple regression results was determined. The weighted estimate generally has a lower uncertainty than either the Log Pearson type III or multiple regression estimates. For regulated streams, a graphical method for estimating flood-frequency characteristics was developed from the relations of discharge and drainage area for selected annual exceedance probabilities. Graphs for the major regulated streams in Wisconsin are presented in the report.

  1. Validation of MODIS-derived bidirectional reflectivity retrieval algorithm in mid-infrared channel with field measurements.

    PubMed

    Tang, Bo-Hui; Wu, Hua-; Li, Zhao-Liang; Nerry, Françoise

    2012-07-30

    This work addressed the validation of the MODIS-derived bidirectional reflectivity retrieval algorithm in the mid-infrared (MIR) channel, proposed by Tang and Li [Int. J. Remote Sens. 29, 4907 (2008)], with ground-measured data, which were collected from a field campaign that took place in June 2004 at the ONERA (Office National d'Etudes et de Recherches Aérospatiales) center of Fauga-Mauzac, on the PIRRENE (Programme Interdisciplinaire de Recherche sur la Radiométrie en Environnement Extérieur) experiment site [Opt. Express 15, 12464 (2007)]. The leaving-surface spectral radiances measured by a BOMEM (MR250 Series) Fourier transform interferometer were used to calculate the ground brightness temperatures with the combination of the inversion of the Planck function and the spectral response functions of MODIS channels 22 and 23, and then to estimate the ground brightness temperature without the contribution of the solar direct beam and the bidirectional reflectivity by using Tang and Li's proposed algorithm. On the other hand, the simultaneously measured atmospheric profiles were used to obtain the atmospheric parameters and then to calculate the ground brightness temperature without the contribution of the solar direct beam, based on the atmospheric radiative transfer equation in the MIR region. Comparison of those two kinds of brightness temperature obtained by the two different methods indicated that the Root Mean Square Error (RMSE) between the brightness temperatures estimated respectively using Tang and Li's algorithm and the atmospheric radiative transfer equation is 1.94 K. In addition, comparison of the hemispherical-directional reflectances derived by Tang and Li's algorithm with those obtained from the field measurements showed that the RMSE is 0.011, which indicates that Tang and Li's algorithm is feasible for retrieving the bidirectional reflectivity in the MIR channel from MODIS data.
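    The brightness-temperature step described above amounts to inverting the Planck function at the channel wavelength. A minimal round-trip sketch, assuming a monochromatic channel at roughly the MODIS band-22 centre wavelength rather than integrating over the actual spectral response function:

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
K = 1.380649e-23     # Boltzmann constant (J/K)

def planck_radiance(wavelength, temp):
    """Spectral radiance B(lambda, T), W m^-3 sr^-1 (per metre of wavelength)."""
    return (2 * H * C**2 / wavelength**5) / (math.exp(H * C / (wavelength * K * temp)) - 1)

def brightness_temperature(wavelength, radiance):
    """Invert the Planck function: T = (hc/(lambda k)) / ln(1 + 2hc^2/(lambda^5 L))."""
    return (H * C / (wavelength * K)) / math.log(1 + 2 * H * C**2 / (wavelength**5 * radiance))

lam = 3.97e-6                        # assumed channel wavelength (m), ~MODIS band 22
L = planck_radiance(lam, 300.0)
T = brightness_temperature(lam, L)   # round trip recovers the temperature
```

    In the actual processing chain the radiance is first weighted by the channel's spectral response, so the inversion is applied to a band-averaged quantity rather than a single wavelength.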

  2. Cirrus Horizontal Heterogeneity Effects on Cloud Optical Properties Retrieved from MODIS VNIR to TIR Channels as a Function of the Spatial Resolution

    NASA Astrophysics Data System (ADS)

    Fauchez, T.; Platnick, S. E.; Sourdeval, O.; Wang, C.; Meyer, K.; Cornet, C.; Szczap, F.

    2017-12-01

    Cirrus are an important part of the Earth radiation budget, but an assessment of their role remains highly uncertain. Cirrus optical properties such as Cloud Optical Thickness (COT) and ice crystal effective particle size (Re) are often retrieved with a combination of Visible/Near InfraRed (VNIR) and ShortWave-InfraRed (SWIR) reflectance channels. Alternatively, Thermal InfraRed (TIR) techniques, such as the Split Window Technique (SWT), have demonstrated better sensitivity to thin cirrus. However, current satellite operational products for both retrieval methods assume that cloudy pixels are horizontally homogeneous (Plane Parallel and Homogeneous Approximation, PPHA) and independent (Independent Pixel Approximation, IPA). The impact of these approximations on cirrus retrievals needs to be understood and, as far as possible, corrected. Horizontal heterogeneity effects can be more easily estimated and corrected in the TIR range because they are mainly dominated by the PPHA bias, which primarily depends on the COT subpixel heterogeneity. For solar reflectance channels, in addition to the PPHA bias, the IPA can lead to significant retrieval errors if there is large photon transport between cloudy columns, in addition to brightening and shadowing effects that are more difficult to quantify. The effects of cirrus horizontal heterogeneity are studied here on COT and Re retrievals obtained using simulated MODIS reflectances at 0.86 and 2.11 μm and radiances at 8.5, 11.0 and 12.0 μm, for spatial resolutions ranging from 50 m to 10 km. For each spatial resolution, simulated TOA reflectances and radiances are combined for cloud optical property retrievals with a research-level optimal estimation retrieval method (OEM). The impact of horizontal heterogeneity on the retrieved products is assessed for different solar geometries and various combinations of the five channels.

  3. Feedback power control strategies in wireless sensor networks with joint channel decoding.

    PubMed

    Abrardo, Andrea; Ferrari, Gianluigi; Martalò, Marco; Perna, Fabio

    2009-01-01

    In this paper, we derive feedback power control strategies for block-faded multiple access schemes with correlated sources and joint channel decoding (JCD). In particular, upon the derivation of the feasible signal-to-noise ratio (SNR) region for the considered multiple access schemes, i.e., the multidimensional SNR region where error-free communications are, in principle, possible, two feedback power control strategies are proposed: (i) a classical feedback power control strategy, which aims at equalizing all link SNRs at the access point (AP), and (ii) an innovative optimized feedback power control strategy, which tries to make the network operational point fall in the feasible SNR region at the lowest overall transmit energy consumption. These strategies will be referred to as "balanced SNR" and "unbalanced SNR," respectively. While they require, in principle, an unlimited power control range at the sources, we also propose practical versions with a limited power control range. We preliminarily consider a scenario with orthogonal links and ideal feedback. Then, we analyze the robustness of the proposed power control strategies to possible non-idealities, in terms of residual multiple access interference and noisy feedback channels. Finally, we successfully apply the proposed feedback power control strategies to a limiting case of the class of considered multiple access schemes, namely a central estimating officer (CEO) scenario, where the sensors observe noisy versions of a common binary information sequence and the AP's goal is to estimate this sequence by properly fusing the soft-output information output by the JCD algorithm.
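    In the orthogonal-link case, the "balanced SNR" strategy has a particularly simple closed form: each source inverts its own channel gain so that every received SNR equals the target. A hedged sketch with hypothetical names (the optimized "unbalanced SNR" strategy would instead search the feasible SNR region):

```python
def balanced_snr_powers(gains, noise_power, target_snr, p_max=None):
    """Per-source transmit powers equalizing received SNRs at the AP.
    With orthogonal links, SNR_i = g_i * p_i / N, so p_i = target * N / g_i.
    Optionally clip to a practical maximum power (limited control range)."""
    powers = [target_snr * noise_power / g for g in gains]
    if p_max is not None:
        powers = [min(p, p_max) for p in powers]
    return powers

gains = [1.0, 0.25, 0.5]          # illustrative link power gains
N0 = 1e-3                         # receiver noise power
powers = balanced_snr_powers(gains, N0, target_snr=10.0)
snrs = [g * p / N0 for g, p in zip(gains, powers)]
```

    The `p_max` clip mirrors the paper's practical limited-range variants: a deeply faded source saturates at its maximum power and its SNR falls below the common target.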

  4. Using LiDAR to Estimate Surface Erosion Volumes within the Post-storm 2012 Bagley Fire

    NASA Astrophysics Data System (ADS)

    Mikulovsky, R. P.; De La Fuente, J. A.; Mondry, Z. J.

    2014-12-01

    The total post-storm 2012 Bagley fire sediment budget of the Squaw Creek watershed in the Shasta-Trinity National Forest was estimated using many methods. A portion of the budget was quantitatively estimated using LiDAR. Simple workflows were designed to estimate the eroded volumes of debris slides, fill failures, gullies, altered channels and streams. LiDAR was also used to estimate depositional volumes. Thorough manual mapping of large erosional features using the ArcGIS 10.1 Geographic Information System was required, as these mapped features determined the eroded volume boundaries in 3D space. The 3D pre-erosional surface for each mapped feature was interpolated based on the boundary elevations. A surface difference calculation was run using the estimated pre-erosional surfaces and LiDAR surfaces to determine the volume of sediment potentially delivered into the stream system. In addition, cross sections of altered channels and streams were taken using stratified random selection based on channel gradient and stream order, respectively. The original pre-storm surfaces of channel features were estimated using the cross sections and erosion depth criteria. The open source software Inkscape was used to estimate cross-sectional areas for randomly selected channel features, which were then averaged for each channel gradient and stream order class. The average areas were then multiplied by the length of each class to estimate the total eroded volume of altered channels and streams. Finally, reservoir and in-channel depositional volumes were estimated by mapping channel forms and generating specific reservoir elevation zones associated with depositional events. The in-channel areas and zones within the reservoir were multiplied by estimated and field-observed sediment thicknesses to attain a best-guess sediment volume. In-channel estimates included re-occupying stream channel cross sections established before the fire.
    Once volumes were calculated, other erosion processes of the Bagley sedimentation study, such as surface soil erosion, were combined to estimate the total fire and storm sediment budget for the Squaw Creek watershed. The LiDAR-based measurement workflows can be easily applied to other sediment budget studies using one high-resolution LiDAR dataset.
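    The core surface-difference calculation above reduces to summing positive elevation losses over grid cells. A toy sketch of that step, with small pure-Python grids standing in for the interpolated pre-erosional surface and the LiDAR DEM (names illustrative):

```python
def eroded_volume(pre, post, cell_area):
    """Volume lost between a pre-event and post-event elevation grid.
    Sums only cells that lowered (erosion); deposition is ignored."""
    vol = 0.0
    for row_pre, row_post in zip(pre, post):
        for z0, z1 in zip(row_pre, row_post):
            if z1 < z0:
                vol += (z0 - z1) * cell_area
    return vol

pre  = [[10.0, 10.0], [10.0, 10.0]]   # interpolated pre-erosional surface (m)
post = [[ 9.5, 10.0], [ 9.0, 10.2]]   # post-event LiDAR surface: two cells eroded
v = eroded_volume(pre, post, cell_area=1.0)
```

    Summing the complementary positive differences (post above pre) would give the depositional volume in the same pass.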

  5. All-digital multicarrier demodulators for on-board processing satellites in mobile communication systems

    NASA Astrophysics Data System (ADS)

    Yim, Wan Hung

    Economical operation of future satellite systems for mobile communications can only be fulfilled by using dedicated on-board processing satellites, which would allow both cheap earth terminals and lower space segment costs. With on-board modems and codecs, the up-link and down-link can be optimized separately. An attractive scheme is to use frequency-division multiple access/single channel per carrier (FDMA/SCPC) on the up-link and time division multiplexing (TDM) on the down-link. This scheme allows mobile terminals to transmit a narrow-band, low-power signal, resulting in smaller dishes and high power amplifiers (HPA's) with lower output power. On the up-link, there are hundreds to thousands of FDM channels to be demodulated on-board. The most promising approach is the use of all-digital multicarrier demodulators (MCD's), where analog and digital hardware are efficiently shared among channels, and digital signal processing (DSP) is used at an early stage to take advantage of very large scale integration (VLSI) implementation. A MCD consists of a channellizer for separation of frequency division multiplexing (FDM) channels, followed by individual demodulators for each channel. Major research areas in MCD's are in multirate DSP and optimal estimation for synchronization, which form the basis of the thesis. Complex signal theories are central to the development of structured approaches for the sampling and processing of bandpass signals, which are the foundations of both channellizer and demodulator design. In multirate DSP, polyphase theories replace many ad-hoc, tedious and error-prone design procedures. For example, a polyphase-matrix discrete Fourier transform (DFT) channellizer includes all efficient filter bank techniques as special cases.
    Also, a polyphase-lattice filter is derived, not only for sampling rate conversion but also capable of sampling phase variation, which is required for symbol timing adjustment in all-digital demodulators. In modulation schemes, a systematic survey is reported, based on two expressions that include all formats in linear and constant envelope modulation. In synchronization techniques, classifications according to the criterion of statistical optimization, the data dependency, and the method of parameter extraction reflect the inherent complexity and performance of numerous existing algorithms. The designs of two new algorithms are presented: a differential-decision frequency error detector that is simple and fast, and a dual-comb-filter frequency/timing error detector that is targeted at VLSI implementation. The real-time implementation of a complete 4 x 16 kb/s MCD for the T-SAT project is described in detail, which proved many of the structured design concepts developed in this thesis. The requirements of software tools for various levels of simulation in multirate DSP and communications are analyzed. This led to the implementation of a data-flow oriented simulation system, which was used in all research work in the thesis.

  6. Bias in error estimation when using cross-validation for model selection.

    PubMed

    Varma, Sudhir; Simon, Richard

    2006-02-23

    Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids while Leave-One-Out CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroids classifier with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For the SVM with optimal parameters the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance.
The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating true error of a classifier developed using a well defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
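    The bias described above, and the nested-CV remedy, can be reproduced in a few lines. The sketch below is an illustrative simulation, not the authors' code: it tunes the feature count of a simple nearest-centroid rule on "null" data, so the true error is 50% by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def nc_error(Xtr, ytr, Xte, yte, k):
    # Nearest-centroid rule on the k features with the largest
    # class-mean separation in the training split.
    m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    idx = np.argsort(-np.abs(m0 - m1))[:k]
    pred = (np.linalg.norm(Xte[:, idx] - m1[idx], axis=1)
            < np.linalg.norm(Xte[:, idx] - m0[idx], axis=1)).astype(int)
    return float(np.mean(pred != yte))

def cv_error(X, y, k, folds=5):
    order = rng.permutation(len(y))
    errs = []
    for f in np.array_split(order, folds):
        tr = np.setdiff1d(order, f)
        errs.append(nc_error(X[tr], y[tr], X[f], y[f], k))
    return float(np.mean(errs))

params = [1, 2, 4, 8, 16]
naive, nested = [], []
for _ in range(10):                                  # 10 independent "null" datasets
    X, y = rng.normal(size=(40, 20)), rng.integers(0, 2, 40)
    # Naive: report the CV error of the best-tuned parameter (biased low).
    naive.append(min(cv_error(X, y, k) for k in params))
    # Nested: tune k inside each outer fold, score on the held-out fold.
    order, errs = rng.permutation(len(y)), []
    for f in np.array_split(order, 5):
        tr = np.setdiff1d(order, f)
        k_best = min(params, key=lambda k: cv_error(X[tr], y[tr], k))
        errs.append(nc_error(X[tr], y[tr], X[f], y[f], k_best))
    nested.append(float(np.mean(errs)))

print(f"naive CV estimate : {np.mean(naive):.3f}")   # optimistically low
print(f"nested CV estimate: {np.mean(nested):.3f}")  # near the true 50% error
```

    The naive estimate takes a minimum over tuning candidates and so drifts below the true error; the nested estimate scores each tuned rule only on data the tuning never saw.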

  7. Channel Estimation for Filter Bank Multicarrier Systems in Low SNR Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driggs, Jonathan; Sibbett, Taylor; Moradi, Hussein

    Channel estimation techniques are crucial for reliable communications. This paper is concerned with channel estimation in a filter bank multicarrier spread spectrum (FBMCSS) system. We explore two channel estimator options: (i) a method that makes use of a periodic preamble and mimics the channel estimation techniques that are widely used in OFDM-based systems; and (ii) a method that stays within the traditional realm of filter bank signal processing. For the case where the channel noise is white, both methods are analyzed in detail and their performance is compared against their respective Cramer-Rao Lower Bounds (CRLB). Advantages and disadvantages of the two methods under different channel conditions are given to provide insight to the reader as to when one will outperform the other.
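    The white-noise case can be illustrated with a generic least-squares channel estimate from a known preamble, compared against the corresponding CRLB. This is a minimal sketch with assumed sizes (a 4-tap channel, a 64-sample BPSK preamble), not the paper's filter-bank estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed sizes: an L-tap channel, an N-sample known BPSK preamble, white noise.
L, N, sigma2, trials = 4, 64, 0.1, 2000
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)
p = rng.choice([-1.0, 1.0], size=N)

# Convolution matrix X such that the received preamble is y = X @ h + noise.
X = np.array([[p[n - k] if n - k >= 0 else 0.0 for k in range(L)]
              for n in range(N)])
XtX_inv = np.linalg.inv(X.T @ X)

mse = 0.0
for _ in range(trials):
    noise = np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))
    y = X @ h + noise
    h_hat = XtX_inv @ X.T @ y                 # least-squares channel estimate
    mse += float(np.sum(np.abs(h_hat - h) ** 2)) / trials

crlb = sigma2 * float(np.trace(XtX_inv))      # bound for white Gaussian noise
print(f"MSE = {mse:.5f}, CRLB = {crlb:.5f}")  # LS attains the bound here
```

    In the linear white-Gaussian model the least-squares estimator is efficient, so the simulated MSE matches the bound up to Monte Carlo noise; colored noise or structured preambles change this picture.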

  8. Enhancement of the NMSU Channel Error Simulator to Provide User-Selectable Link Delays

    NASA Technical Reports Server (NTRS)

    Horan, Stephen; Wang, Ru-Hai

    2000-01-01

    This is the third in a continuing series of reports describing the development of the Space-to-Ground Link Simulator (SGLS) to be used for testing data transfers under simulated space channel conditions. The SGLS is based upon Virtual Instrument (VI) software techniques for managing the error generation, link data rate configuration, and, now, selection of the link delay value. In this report we detail the changes that needed to be made to the SGLS VI configuration to permit link delays to be added to the basic error generation and link data rate control capabilities. This was accomplished by modifying the rate-splitting VIs to include a buffer to hold the incoming data for the duration selected by the user to emulate the channel link delay. In sample tests of this configuration, the TCP/IP(sub ftp) service and the SCPS(sub fp) service were used to transmit 10-KB data files using both symmetric (both forward and return links set to 115200 bps) and unsymmetric (forward link set at 2400 bps and a return link set at 115200 bps) link configurations. Transmission times were recorded at bit error rates of 0 through 10(exp -5) to give an indication of the link performance. In these tests, we noted separate timings for the protocol setup time to initiate the file transfer and the variation in the actual file transfer time caused by channel errors. Both protocols showed similar performance to that seen earlier for the symmetric and unsymmetric channels. This time, the delays in establishing the file protocol also showed that these delays could double the transmission time and need to be accounted for in mission planning. Both protocols also showed a difficulty in transmitting large data files over large link delays. In these tests, there was no clear favorite between the TCP/IP(sub ftp) and the SCPS(sub fp). Based upon these tests, further testing is recommended to extend the results to different file transfer configurations.
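    The delay mechanism described above, a buffer that holds incoming data for the selected link delay, can be sketched as a simple FIFO (a generic illustration, not the SGLS VI code):

```python
from collections import deque

def delayed_link(samples, delay_samples):
    """Emulate a fixed link delay with a FIFO buffer (idle fill of zeros)."""
    buf = deque([0] * delay_samples)          # pre-filled to create the delay
    out = []
    for s in samples:
        buf.append(s)                         # incoming data enters the buffer
        out.append(buf.popleft())             # delayed data leaves it
    return out

print(delayed_link([1, 2, 3, 4, 5], 2))       # → [0, 0, 1, 2, 3]
```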

  9. Parallel computers - Estimate errors caused by imprecise data

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne

    1991-01-01

    A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. A software device is used to produce an ideal solution to the problem, when the computer is capable of computing errors of arbitrary programs. The software engineering aspect of this problem is to describe a device for computing the error estimates in software terms and then to provide precise numbers with error estimates to the user. The feasibility of the program capable of computing both some quantity and its error estimate in the range of possible measurement errors is demonstrated.
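    A minimal sketch of the idea, computing a quantity together with a guaranteed error bound by propagating worst-case intervals through each arithmetic operation (the class name and rules here are illustrative, not the authors' software device):

```python
# Each value carries a worst-case error bound; arithmetic propagates both.
class Approx:
    def __init__(self, value, err):
        self.value, self.err = value, err
    def __add__(self, other):
        return Approx(self.value + other.value, self.err + other.err)
    def __mul__(self, other):
        # |ab - a'b'| <= |a|*db + |b|*da + da*db when |a-a'|<=da, |b-b'|<=db
        e = (abs(self.value) * other.err + abs(other.value) * self.err
             + self.err * other.err)
        return Approx(self.value * other.value, e)

# Imprecise measurements: 3.0 +/- 0.1 and 4.0 +/- 0.2
a, b = Approx(3.0, 0.1), Approx(4.0, 0.2)
r = a * b + a
print(r.value, r.err)   # 15.0 with bound 0.6 + 0.4 + 0.02 + 0.1 = 1.12
```

    Each operation here is independent of the others, which is what makes the scheme a natural fit for the parallel evaluation the authors discuss.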

  10. Optimization design of the tuning method for FBG spectroscopy based on the numerical analysis of all-fiber Raman temperature lidar

    NASA Astrophysics Data System (ADS)

    Wang, Li; Wang, Jun; Bao, Dong; Yang, Rong; Yan, Qing; Gao, Fei; Hua, Dengxin

    2018-01-01

    All fiber Raman temperature lidar for space borne platform has been proposed for profiling of the temperature with high accuracy. Fiber Bragg grating (FBG) is proposed as the spectroscopic system of Raman lidar because of good wavelength selectivity, high spectral resolution and high out-of-band rejection rate. Two sets of FBGs at visible wavelength 532 nm as Raman spectroscopy system are designed for extracting the rotational Raman spectra of atmospheric molecules, which intensities depend on the atmospheric temperature. The optimization design of the tuning method of an all-fiber rotational Raman spectroscopy system is analyzed and tested for estimating the potential temperature inversion error caused by the instability of FBG. The cantilever structure with temperature control device is designed to realize the tuning and stabilization of the central wavelengths of FBGs. According to numerical calculation of FBG and finite element analysis of the cantilever structure, the center wavelength offset of FBG is 11.03 nm/°C with the temperature change in the spectroscopy system. By experimental observation, the center wavelength offset of surface-bonded FBG is 9.80 nm/°C with temperature changing when subjected to certain strain for the high quantum number channel, while 10.01 nm/°C for the low quantum number channel. The tunable wavelength range of FBG is from 528.707 nm to 529.014 nm for the high quantum number channel and from 530.226 nm to 530.547 nm for the low quantum number channel. The temperature control accuracy of the FBG spectroscopy system is up to 0.03 °C, the corresponding potential atmospheric temperature inversion error is 0.04 K based on the numerical analysis of all-fiber Raman temperature lidar. The fine tuning and stabilization of the FBG wavelength realize the elaborate spectroscope of Raman lidar system. The conclusion is of great significance for the application of FBG spectroscopy system for space-borne platform Raman lidar.

  11. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    PubMed

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
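    For the linear case, a TSRI fit and a bootstrap standard error can be sketched as follows. This is an illustrative simulation with assumed effect sizes, not the authors' analysis; in the linear model the TSRI point estimate coincides with two-stage least squares:

```python
import numpy as np

rng = np.random.default_rng(8)

def tsri(z, x, y):
    # Stage 1: regress exposure on instrument; keep the residual.
    Z = np.column_stack([np.ones(len(z)), z])
    res = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Stage 2: regress outcome on exposure plus the stage-1 residual.
    X2 = np.column_stack([np.ones(len(z)), x, res])
    return X2, np.linalg.lstsq(X2, y, rcond=None)[0]

n = 5000
z = rng.integers(0, 3, n).astype(float)        # instrument (genotype 0/1/2)
u = rng.normal(size=n)                         # unobserved confounder
x = 0.5 * z + u + rng.normal(size=n)           # exposure
y = 0.3 * x + u + rng.normal(size=n)           # outcome; true causal slope 0.3

X2, beta = tsri(z, x, y)
resid = y - X2 @ beta
naive_se = float(np.sqrt(np.sum(resid**2) / (n - 3)
                         * np.linalg.inv(X2.T @ X2)[1, 1]))

boot = []
for _ in range(500):                           # resample individuals jointly
    idx = rng.integers(0, n, n)
    boot.append(tsri(z[idx], x[idx], y[idx])[1][1])
se_boot = float(np.std(boot, ddof=1))
print(f"slope {beta[1]:.3f}, naive SE {naive_se:.4f}, bootstrap SE {se_boot:.4f}")
```

    The bootstrap resamples individuals so that both stages are refitted together, which is the property the corrected analytic standard errors (Newey, Terza) are designed to capture.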

  12. 3D bubble reconstruction using multiple cameras and space carving method

    NASA Astrophysics Data System (ADS)

    Fu, Yucheng; Liu, Yang

    2018-07-01

    An accurate measurement of bubble shape and size has a significant value in understanding the behavior of bubbles that exist in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume, surface area, and other parameters. The 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information of individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape based on the recorded high-speed images from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm × 1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure the bubble volume with an error of less than 2% compared with the syringe reading. The conventional two-camera system has an error around 10%. The one-camera system has an error greater than 25%. The visualization of a 3D bubble rising demonstrates the wall influence on bubble rotation angle and aspect ratio. This also explains the large error that exists in the single camera measurement.
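    The essence of space carving, removing every voxel that projects outside any silhouette, can be sketched with three orthogonal views of a synthetic sphere (a toy illustration, not the four-camera pipeline):

```python
import numpy as np

# Carve a cubic voxel grid using binary silhouettes from three orthogonal
# views of a synthetic sphere: keep only voxels consistent with every view.
n = 64
ax = (np.arange(n) + 0.5) / n - 0.5               # voxel centres in (-0.5, 0.5)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
truth = X**2 + Y**2 + Z**2 <= 0.4**2              # ground-truth sphere

sil = [truth.any(axis=i) for i in range(3)]       # one silhouette per view

carved = np.ones_like(truth)
carved &= sil[0][None, :, :]                      # view along x
carved &= sil[1][:, None, :]                      # view along y
carved &= sil[2][:, :, None]                      # view along z

print(f"true volume fraction:   {truth.mean():.4f}")
print(f"carved volume fraction: {carved.mean():.4f}")
```

    The carved "visual hull" always contains the true object, so carving alone overestimates volume; adding more, well-placed views tightens the hull, which is consistent with the error trend across one, two, and four cameras reported above.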

  13. GOME Total Ozone and Calibration Error Derived Using Version 8 TOMS Algorithm

    NASA Technical Reports Server (NTRS)

    Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.

    2003-01-01

    The Global Ozone Monitoring Experiment (GOME) is a hyper-spectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 Ozone Algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins Bands, the TOMS algorithm uses differential absorption between a pair of wavelengths including the local structure as well as the background continuum. This makes the TOMS Algorithm more sensitive to ozone, but it also makes the algorithm more sensitive to instrument calibration errors. While calibration adjustments are not needed for the fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS Algorithm to GOME. Using spectral discrimination at near ultraviolet wavelength channels unabsorbed by ozone, the GOME wavelength dependent calibration drift is estimated and then checked using pair justification. In addition, the day-one calibration offset is estimated based on the residuals of the Version 8 TOMS Algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly to +5% in normalized radiance at 331 nm relative to 385 nm by mid 2000. The 1b detector appears to be quite well behaved throughout this time period.

  14. Free space optical ultra-wideband communications over atmospheric turbulence channels.

    PubMed

    Davaslioğlu, Kemal; Cağiral, Erman; Koca, Mutlu

    2010-08-02

    A hybrid impulse radio ultra-wideband (IR-UWB) communication system in which UWB pulses are transmitted over long distances through free space optical (FSO) links is proposed. FSO channels are characterized by random fluctuations in the received light intensity mainly due to the atmospheric turbulence. For this reason, theoretical detection error probability analysis is presented for the proposed system for a time-hopping pulse-position modulated (TH-PPM) UWB signal model under weak, moderate and strong turbulence conditions. For the optical system output distributed over radio frequency UWB channels, composite error analysis is also presented. The theoretical derivations are verified via simulation results, which indicate a computationally and spectrally efficient UWB-over-FSO system.

  15. Extending the impulse response in order to reduce errors due to impulse noise and signal fading

    NASA Technical Reports Server (NTRS)

    Webb, Joseph A.; Rolls, Andrew J.; Sirisena, H. R.

    1988-01-01

    A finite impulse response (FIR) digital smearing filter was designed to produce maximum intersymbol interference and maximum extension of the impulse response of the signal in a noiseless binary channel. A matched FIR desmearing filter at the receiver then reduced the intersymbol interference to zero. Signal fades were simulated by means of 100 percent signal blockage in the channel. Smearing and desmearing filters of length 256, 512, and 1024 were used for these simulations. Results indicate that impulse response extension by means of bit smearing appears to be a useful technique for correcting errors due to impulse noise or signal fading in a binary channel.
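    The principle can be illustrated with an all-pass frequency-domain smearing filter and its matched (phase-conjugate) desmearer, rather than the authors' specific FIR designs: the cascade reconstructs the data exactly, while an isolated impulse hitting the channel is spread out by the desmearer and its peak amplitude greatly reduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# All-pass "smearing" filter: unit magnitude, random phase (kept conjugate-
# symmetric so the impulse response is real). The matched desmearer
# conjugates the phase, so smear -> desmear is an exact identity.
N = 1024
phase = rng.uniform(0, 2 * np.pi, N // 2 - 1)
H = np.ones(N, dtype=complex)
H[1:N // 2] = np.exp(1j * phase)
H[N // 2 + 1:] = np.conj(H[1:N // 2][::-1])

smear = lambda x: np.real(np.fft.ifft(np.fft.fft(x) * H))
desmear = lambda x: np.real(np.fft.ifft(np.fft.fft(x) * np.conj(H)))

bits = rng.choice([-1.0, 1.0], N)
rx = desmear(smear(bits))                    # no residual intersymbol interference
print(np.max(np.abs(rx - bits)))             # numerically zero

hit = np.zeros(N)
hit[100] = 1.0                               # an isolated impulse-noise hit
print(np.max(np.abs(desmear(hit))))          # peak spread far below 1.0
```

    The impulse's unit energy is spread over the whole block, so its peak after desmearing scales roughly like 1/sqrt(N), which is the mechanism by which smearing trades isolated hits for low-level noise.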

  16. Flight evaluation of differential GPS aided inertial navigation systems

    NASA Technical Reports Server (NTRS)

    Mcnally, B. David; Paielli, Russell A.; Bach, Ralph E., Jr.; Warner, David N., Jr.

    1992-01-01

    Algorithms are described for integration of Differential Global Positioning System (DGPS) data with Inertial Navigation System (INS) data to provide an integrated DGPS/INS navigation system. The objective is to establish the benefits that can be achieved through various levels of integration of DGPS with INS for precision navigation. An eight-state Kalman filter integration was implemented in real-time on a twin turbo-prop transport aircraft to evaluate system performance during terminal approach and landing operations. A fully integrated DGPS/INS system is also presented which models accelerometer and rate-gyro measurement errors plus position, velocity, and attitude errors. The fully integrated system was implemented off-line using range-domain (seventeen-state) and position-domain (fifteen-state) Kalman filters. Both filter integration approaches were evaluated using data collected during the flight test. Flight-test data consisted of measurements from a 5-channel Precision Code GPS receiver, a strap-down Inertial Navigation Unit (INU), and GPS satellite differential range corrections from a ground reference station. The aircraft was laser tracked to determine its true position. Results indicate that there is no significant improvement in positioning accuracy with the higher levels of DGPS/INS integration. All three systems provided high-frequency (e.g., 20 Hz) estimates of position and velocity. The fully integrated system provided estimates of inertial sensor errors which may be used to improve INS navigation accuracy should GPS become unavailable, and improved estimates of acceleration, attitude, and body rates which can be used for guidance and control. Precision Code DGPS/INS positioning accuracy (root-mean-square) was 1.0 m cross-track and 3.0 m vertical. (This AGARDograph was sponsored by the Guidance and Control Panel.)
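    The benefit of even a low-order integration can be sketched with a one-axis, two-state (position/velocity) Kalman filter that corrects a biased accelerometer with periodic position fixes. All values below are illustrative, not the flight-test configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

# One-axis sketch: the INS double-integrates a biased, noisy accelerometer;
# a two-state Kalman filter corrects it with 1 Hz position fixes.
dt, steps = 0.1, 600
q_acc, r_pos, bias = 0.5, 3.0, 0.3            # accel noise, fix variance, accel bias

F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])
H = np.array([[1.0, 0.0]])
Q = np.outer(B, B) * q_acc**2

x_true = np.zeros(2)
x_ins = np.zeros(2)                           # free-inertial (dead reckoning)
x = np.zeros(2)                               # filter state
P = np.eye(2) * 10.0
errs_kf, errs_ins = [], []

for k in range(steps):
    a_true = np.sin(0.02 * k)
    a_meas = a_true + bias + rng.normal(0, q_acc)
    x_true = F @ x_true + B * a_true
    x_ins = F @ x_ins + B * a_meas            # unaided INS drifts with the bias
    x = F @ x + B * a_meas                    # predict with measured acceleration
    P = F @ P @ F.T + Q
    if k % 10 == 0:                           # position fix every 1 s
        z = x_true[0] + rng.normal(0, np.sqrt(r_pos))
        S = H @ P @ H.T + r_pos
        K = P @ H.T / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    errs_kf.append(abs(x[0] - x_true[0]))
    errs_ins.append(abs(x_ins[0] - x_true[0]))

print(f"mean |pos error|: KF {np.mean(errs_kf):.2f} m, INS-only {np.mean(errs_ins):.1f} m")
```

    Adding accelerometer-bias states, as the fifteen- and seventeen-state filters above do, would let the filter estimate the drift source itself rather than merely correcting its effect at each fix.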

  17. Low-flow traveltime, longitudinal-dispersion, and reaeration characteristics of the Souris River from Lake Darling Dam to J Clark Salyer National Wildlife Refuge, North Dakota

    USGS Publications Warehouse

    Wesolowski, E.A.; Nelson, R.A.

    1987-01-01

    As part of the Souris River water-quality assessment, traveltime, longitudinal-dispersion, and reaeration measurements were made during September 1983 on segments of the 186-mile reach of the Souris River from Lake Darling Dam to the J. Clark Salyer National Wildlife Refuge. The primary objective was to determine traveltime, longitudinal-dispersion, and reaeration coefficients during low flow. Streamflow in the reach ranged from 10.5 to 47.0 cubic feet per second during the measurement period. On the basis of channel and hydraulic characteristics, the 186-mile reach was subdivided into five subreaches that ranged from 18 to 55 river miles in length. Within each subreach, representative test reaches that ranged from 5.0 to 9.1 river miles in length were selected for tracer injection and sample collection. Standard fluorometric techniques were used to measure traveltime and longitudinal dispersion, and a modified tracer technique that used ethylene and propane gas was used to measure reaeration. Mean test-reach velocities ranged from 0.05 to 0.30 foot per second, longitudinal-dispersion coefficients ranged from 4.2 to 61 square feet per second, and reaeration coefficients based on propane ranged from 0.39 to 1.66 per day. Predictive reaeration coefficients obtained from 18 equations (8 semiempirical and 10 empirical) were compared with each measured reaeration coefficient by use of an error-of-estimate analysis. The predictive reaeration coefficients ranged from 0.0008 to 3.4 per day. A semiempirical equation that produced coefficients most similar to the measured coefficients had the smallest absolute error of estimate (0.35). The smallest absolute error of estimate for the empirical equations was 0.41.

  18. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  19. Average capacity optimization in free-space optical communication system over atmospheric turbulence channels with pointing errors.

    PubMed

    Liu, Chao; Yao, Yong; Sun, Yun Xu; Xiao, Jun Jun; Zhao, Xin Hui

    2010-10-01

    A model is proposed to study the average capacity optimization in free-space optical (FSO) channels, accounting for effects of atmospheric turbulence and pointing errors. For a given transmitter laser power, it is shown that both transmitter beam divergence angle and beam waist can be tuned to maximize the average capacity. Meanwhile, their optimum values strongly depend on the jitter and operation wavelength. These results can be helpful for designing FSO communication systems.
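    The existence of an optimal beam waist can be reproduced with a toy Monte Carlo model: assume received power scales as (1/w²)·exp(−2r²/w²) for waist w and Rayleigh-distributed radial pointing error r, with unit-mean log-normal turbulence. All model constants below are assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: a narrow beam concentrates power but is easily missed under
# pointing jitter; a wide beam is robust but dilute. Capacity peaks between.
jitter, g0, trials = 0.5, 100.0, 20000
r2 = jitter**2 * (rng.normal(size=trials)**2 + rng.normal(size=trials)**2)
turb = np.exp(rng.normal(-0.08, 0.4, trials))        # log-normal, E[turb] = 1

ws = np.linspace(0.2, 3.0, 30)
cap = [float(np.mean(np.log2(1 + g0 * turb * np.exp(-2 * r2 / w**2) / w**2)))
       for w in ws]
i_opt = int(np.argmax(cap))
print(f"optimal waist ~ {ws[i_opt]:.2f}")            # interior of the grid
```

    As the jitter grows, the optimum shifts toward wider beams, which is the dependence on jitter and wavelength that the abstract highlights.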

  20. Evaluating Effects of Floodplain Constriction Along a High Energy Gravel-Bed River: Snake River, WY

    NASA Astrophysics Data System (ADS)

    Leonard, Christina M.

    This study examined approximately 66 km of the Snake River, WY, USA, spanning a natural reach within Grand Teton National Park and a reach immediately downstream that is confined by artificial levees. We linked the channel adjustments observed within these two reaches between 2007 and 2012 to sediment transport processes by developing a morphological sediment budget. A pair of digital elevation models (DEMs) was generated by fusing LiDAR topography with depth estimates derived from optical image data within wetted channels. Errors for both components of the DEMs (LiDAR and optical bathymetry) were propagated through the DEM of difference and sediment budget calculations. Our results indicated that even with the best available methods for acquiring high resolution topographic data over large areas, the uncertainty associated with bed elevation estimates implied that net volumetric changes were not statistically significant. In addition to the terrain analysis, we performed a tracer study to assess the mobility of different grain size classes in different morphological units. Grain sizes, hydraulic conditions, and flow resistance characteristics along cross-sections were used to calculate critical discharges for entrainment, but this bulk characterization of fluid driving forces failed to predict bed mobility. Our results indicated that over seasonal timescales specific grain classes were not preferentially entrained. Surface and subsurface grain size data were used to calculate armoring and dimensionless sediment transport ratios for both reaches; sediment supply exceeded transport capacity in the natural reach and vice versa in the confined reach. We used a conceptual model to describe channel adjustments to lateral constriction by levees. We suggest that the levees initially focused flow energy and incised the bed, resulting in bed armoring.
Bed armoring promoted channel widening, but levees prevented this and instead the channel migrated more rapidly within the constricted braidplain, eroding vegetated islands and bars and excavating sediment from the reach.

  1. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in southeast Alaska.

    Treesearch

    Willem W.S. van Hees

    2002-01-01

    Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
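    The two SE approaches compared in the study can be sketched generically for a ratio-of-means estimator. The data below are synthetic, and the linearized formula is the standard Taylor approximation, not necessarily the exact variance estimator used in the inventories:

```python
import numpy as np

rng = np.random.default_rng(4)

# Ratio-of-means estimator R = ybar / xbar on synthetic paired plot data.
n = 200
x = rng.gamma(5.0, 1.0, n)                    # e.g. measured area per plot
y = 2.0 * x + rng.normal(0, 1.0, n)           # e.g. volume per plot

R = y.mean() / x.mean()

# Traditional (linearized / Taylor) standard error of the ratio estimator.
resid = y - R * x
se_lin = float(np.sqrt(np.sum(resid**2) / (n * (n - 1))) / x.mean())

# Bootstrap standard error: resample whole plots with replacement.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(y[idx].mean() / x[idx].mean())
se_boot = float(np.std(boot, ddof=1))

print(f"R = {R:.3f}, linearized SE = {se_lin:.4f}, bootstrap SE = {se_boot:.4f}")
```

    With simple random sampling the two agree closely; the bootstrap becomes more attractive when the design (mapped plots, boundary overlap) makes the linearized variance awkward to derive.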

  2. Symbol Error Rate of Underlay Cognitive Relay Systems over Rayleigh Fading Channel

    NASA Astrophysics Data System (ADS)

    Ho van, Khuong; Bao, Vo Nguyen Quoc

    Underlay cognitive systems allow secondary users (SUs) to access the licensed band allocated to primary users (PUs) for better spectrum utilization with the power constraint imposed on SUs such that their operation does not harm the normal communication of PUs. This constraint, which limits the coverage range of SUs, can be offset by relaying techniques that take advantage of shorter range communication for lower path loss. Symbol error rate (SER) analysis of underlay cognitive relay systems over fading channels has not been reported in the literature. This paper fills this gap. The derived SER expressions are validated by simulations and show that underlay cognitive relay systems suffer a high error floor for any modulation level.
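    As a baseline for such SER analyses, the non-cognitive single-link case is easy to verify by simulation: BPSK over flat Rayleigh fading has the well-known average error rate 0.5·(1 − √(ḡ/(1+ḡ))) at mean SNR ḡ. This checks the simulation machinery only; it is not the paper's cognitive-relay expressions:

```python
import numpy as np

rng = np.random.default_rng(5)

# BPSK over flat Rayleigh fading at mean SNR g (here 10 dB): Monte Carlo
# SER versus the closed-form average 0.5 * (1 - sqrt(g / (1 + g))).
g = 10 ** (10.0 / 10)
n = 1_000_000

bits = rng.choice([-1.0, 1.0], n)
h = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)   # E|h|^2 = 1
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * g)
r = h * bits + noise
dec = np.sign(np.real(np.conj(h) * r))        # coherent matched-filter detection
ser_sim = float(np.mean(dec != bits))

ser_theory = 0.5 * (1 - np.sqrt(g / (1 + g)))
print(f"simulated {ser_sim:.5f} vs theory {ser_theory:.5f}")
```

    In the underlay setting the SU transmit power, and hence g, is capped by the PU interference constraint, which is the mechanism behind the error floor reported above.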

  3. Study to determine cloud motion from meteorological satellite data

    NASA Technical Reports Server (NTRS)

    Clark, B. B.

    1972-01-01

    Processing techniques were tested for deducing cloud motion vectors from overlapped portions of pairs of pictures made from meteorological satellites. This was accomplished by programming and testing techniques for estimating pattern motion by means of cross correlation analysis with emphasis placed upon identifying and reducing errors resulting from various factors. Techniques were then selected and incorporated into a cloud motion determination program which included a routine which would select and prepare sample array pairs from the preprocessed test data. The program was then subjected to limited testing with data samples selected from the Nimbus 4 THIR data provided by the 11.5 micron channel.
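    The core cross-correlation step can be sketched with FFTs: the peak of the circular cross correlation of two frames recovers an integer pattern displacement (a generic illustration, not the Nimbus processing code):

```python
import numpy as np

rng = np.random.default_rng(6)

# Recover a pattern displacement from two frames via the peak of the
# FFT-based circular cross correlation.
n = 128
k = np.fft.fftfreq(n)
lowpass = (np.abs(k)[:, None] < 0.1) & (np.abs(k)[None, :] < 0.1)
frame1 = np.real(np.fft.ifft2(np.fft.fft2(rng.normal(size=(n, n))) * lowpass))
frame2 = np.roll(frame1, (5, -3), axis=(0, 1))    # "cloud motion" of (5, -3)

xcorr = np.real(np.fft.ifft2(np.fft.fft2(frame2) * np.conj(np.fft.fft2(frame1))))
peak = np.unravel_index(int(np.argmax(xcorr)), xcorr.shape)
shift = tuple(int((p + n // 2) % n - n // 2) for p in peak)
print(shift)                                      # → (5, -3)
```

    Real imagery adds the error sources the study had to address: pattern evolution between frames, edge effects from the finite array, and brightness changes, all of which flatten or bias the correlation peak.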

  4. Experiments and 3D simulations of flow structures in junctions and their influence on location of flowmeters.

    PubMed

    Mignot, E; Bonakdari, H; Knothe, P; Lipeme Kouyi, G; Bessette, A; Rivière, N; Bertrand-Krajewski, J-L

    2012-01-01

    Open-channel junctions are common occurrences in sewer networks and flow rate measurement often occurs near these singularities. Local flow structures are 3D; they affect the representativeness of local flow measurements and thus lead to deviations in the flow rate estimation. The present study aims (i) to measure and simulate the flow pattern in a junction flow, (ii) to analyse the impact of the junction on the velocity distribution according to the distance from the junction and thus (iii) to evaluate the typical error derived from the computation of the flow rate close to the junction.

  5. Preliminary GOES-R ABI navigation and registration assessment results

    NASA Astrophysics Data System (ADS)

    Tan, B.; Dellomo, J.; Wolfe, R. E.; Reth, A. D.

    2017-12-01

    The US Geostationary Operational Environmental Satellite - R Series (GOES-R) was launched on November 19, 2016, and was designated GOES-16 upon reaching geostationary orbit ten days later. The Advanced Baseline Imager (ABI) is the primary instrument on the GOES-R series for imaging Earth's surface and atmosphere to aid in weather prediction and climate monitoring. We developed algorithms and software for independent verification of the ABI Image Navigation and Registration (INR). Since late January 2017, four INR metrics have been continuously generated to monitor the ABI INR performance: navigation (NAV) error, channel-to-channel registration (CCR) error, frame-to-frame registration (FFR) error, and within-frame registration (WIFR) error. In this paper, we will describe the fundamental algorithm used for the image registration and briefly discuss the processing flow of the INR Performance Assessment Tool Set (IPATS) developed for ABI INR. The assessment of the accuracy shows that the IPATS measurement error is about 1/20 of a pixel. Then the GOES-16 NAV assessment results, the primary metric, from January to August 2017 are presented. The INR has improved over time as post-launch tests were performed and corrections were applied. The mean NAV error of the visible and near infrared (VNIR) channels dropped from 20 μrad in January to around 5 μrad (+/-4 μrad, 1 σ) in June, while the mean NAV error of long wave infrared (LWIR) channels dropped from around 70 μrad in January to around 5 μrad (+/-15 μrad, 1 σ) in June. A full global ABI image is composed of 22 east-west direction swaths. The swath-wise NAV error analysis shows that there was some variation in the mean swath-wise NAV errors. The variations are about as much as 20% of the scene NAV mean errors. As expected, the swaths over the tropical area have far fewer valid assessments (matchups) than those in the mid-latitude region due to cloud coverage. 
It was also found that there was a rotation (clocking) of the focal plane of LWIR that was seen in both the NAV and CCR results. The rotation was corrected by an INR update in June 2017. Through deep-dive examinations of the scenes with large mean and/or variation in INR errors, we validated that IPATS is an excellent tool for assessing and improving the GOES-16 ABI INR and is also useful in INR long-term monitoring.

  6. The best of both worlds: automated CMP polishing of channel-cut monochromators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasman, Elina; Erdmann, Mark; Stoupin, Stanislav

    2015-09-03

    The use of a channel-cut monochromator is the most straightforward method to ensure that the two reflection surfaces maintain alignment between crystallographic planes without the need for complicated alignment mechanisms. Three basic characteristics that affect monochromator performance are: subsurface damage which contaminates spectral purity; surface roughness which reduces efficiency due to scattering; and surface figure error which imparts intensity structure and coherence distortion in the beam. Standard chemical-mechanical polishing processes and equipment are used when the diffracting surface is easily accessible, such as for single-bounce monochromators. Due to the inaccessibility of the surfaces inside a channel-cut monochromator for polishing, these optics are generally wet-etched for their final processing. This results in minimal subsurface damage, but very poor roughness and figure error. A new CMP channel polishing instrument design is presented which allows the internal diffracting surface quality of channel-cut crystals to approach that of conventional single-bounce monochromators.

  7. Attenuation and bit error rate for four co-propagating spatially multiplexed optical communication channels of exactly same wavelength in step index multimode fibers

    NASA Astrophysics Data System (ADS)

    Murshid, Syed H.; Chakravarty, Abhijit

    2011-06-01

    Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse through the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric circular donut shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut shaped independent channels can be separated either with the help of bulk optics or integrated concentric optical detectors. This presents the experimental setup and results for a four channel SDM system. The attenuation and bit error rate for individual channels of such a system is also presented.

  8. MSG SEVIRI Applications for Weather and Climate: Cloud Properties and Calibrations

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Nguyen, Louis; Smith, William L.; Palikonda, Rabindra; Doelling, David R.; Ayers, J. Kirk; Trepte, Qing Z.; Chang, Fu-Lung

    2006-01-01

    SEVIRI data are cross-calibrated against the corresponding Aqua and Terra MODIS channels. Compared to Terra MODIS, no significant trends are evident in the 0.65, 0.86, and 1.6 micron channel gains between May 2004 and May 2006, indicating excellent stability in the solar-channel sensors. On average, the corresponding Terra reflectances are 12, 14, and 1% greater than their SEVIRI counterparts. The Terra 3.8-micron channel brightness temperatures T are 7 and 4 K greater than their SEVIRI counterparts during day and night, respectively. The average differences between T for MODIS and SEVIRI 8.6, 10.8, 12.0, and 13.3-micron channels are between 0.5 and 2 K. The cloud properties are being derived hourly over Europe and, in initial comparisons, agree well with surface observations. Errors caused by residual calibration uncertainties, terminator conditions, and inaccurate temperature and humidity profiles are still problematic. Future versions will address those errors and the effects of multilayered clouds.

  9. Reconstruction of lightning channel geometry by localizing thunder sources

    NASA Astrophysics Data System (ADS)

    Bodhika, J. A. P.; Dharmarathna, W. G. D.; Fernando, Mahendra; Cooray, Vernon

    2013-09-01

    Thunder is generated by a shock wave created by the sudden expansion of air in the lightning channel due to its extreme temperature rise. Even though the highest-amplitude thunder signatures are generated at the return stroke stage, thunder signals generated at other events, such as preliminary breakdown pulses, can also have amplitudes large enough to be recorded with a sensitive system. In this study, we attempted to reconstruct the lightning channel geometry of cloud and ground flashes by locating the temporal and spatial variations of thunder sources. Six lightning flashes were reconstructed using the recorded thunder signatures. Possible effects due to atmospheric conditions were neglected. Numerical calculations suggest that the time resolution of the recorded signal and a 10 m/s error in the speed of sound lead to 2% and 3% errors, respectively, in the calculated coordinates. Reconstructed channel geometries for cloud and ground flashes agreed with visual observations. The results suggest that the lightning channel can be successfully reconstructed using this technique.
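    The speed-of-sound sensitivity quoted above can be checked with a minimal sketch: range a thunder source from its propagation delay, then perturb the assumed speed of sound by 10 m/s. The source distance here is hypothetical; the abstract's ~3% figure applies to full 3-D coordinates.

```python
# Minimal sensitivity sketch: a 10 m/s error in the assumed speed of sound
# biases the range inferred from a thunder propagation delay. The flash
# distance below is a hypothetical value, not from the study.

C_TRUE = 343.0    # nominal speed of sound in air, m/s
C_ERR = 10.0      # speed-of-sound uncertainty considered above, m/s

def range_from_delay(delay_s, c):
    """Source range implied by a measured thunder propagation delay."""
    return c * delay_s

true_range = 1500.0                 # m, hypothetical flash distance
delay = true_range / C_TRUE         # the delay a sensor would measure
biased = range_from_delay(delay, C_TRUE + C_ERR)
rel_error = abs(biased - true_range) / true_range
print(f"relative range error: {rel_error:.1%}")   # ~2.9%, consistent with the ~3% quoted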

  10. Moderate Deviation Analysis for Classical Communication over Quantum Channels

    NASA Astrophysics Data System (ADS)

    Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco

    2017-11-01

    We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.

  11. Statistical separability and classification of land use classes using image-100. [Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Kumar, R.; Niero, M.

    1977-01-01

    The author has identified the following significant results. The statistical separability of land use classes in subsets of one to four spectral channels was investigated. Using ground observations and aerial photography, the MSS data of LANDSAT were analyzed with the Image-100. In the subsets of one to three spectral channels, channel 4; channels 4 and 7; and channels 4, 5, and 7 were found to be the best choices (ch. 4: 0.5 to 0.6 microns, ch. 5: 0.6 to 0.7 microns, ch. 6: 0.7 to 0.8 microns, and ch. 7: 0.8 to 1.1 microns). For the single-cell option of the Image-100, the errors of omission varied from 5% for the industrial class to 46% for the institutional class. The errors of commission varied from 11% for the commercial class to 39% for the industrial class. On the whole, the sample classifier gave considerably more accurate results than the single-cell or multicell options.

  12. A-posteriori error estimation for second order mechanical systems

    NASA Astrophysics Data System (ADS)

    Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter

    2012-06-01

    One important issue in the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. Where safety questions are concerned, knowledge of the error introduced by the reduction of the flexible degrees of freedom is very important. In this work, an a-posteriori error estimator for linear first-order systems is extended to the error estimation of mechanical second-order systems. Due to the special second-order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be applied to moment-matching-based, Gramian-matrix-based, or modal model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
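    The idea behind such estimators can be sketched generically for a first-order system x' = Ax reduced by projection onto a basis V: integrate the norm of the residual (defect) of the lifted reduced solution to bound the true reduction error. This is only an illustration of the residual principle, not the paper's second-order estimator; the matrices are random test data.

```python
import numpy as np

# Residual-based a-posteriori error bound for a projection-reduced linear
# system x' = A x, x ~ V x_r. A generic sketch of the principle only; the
# paper's estimator exploits second-order mechanical structure.

rng = np.random.default_rng(0)
n, k = 20, 5
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable random test system
V, _ = np.linalg.qr(rng.standard_normal((n, k)))      # orthonormal reduction basis
Ar = V.T @ A @ V                                      # reduced system matrix

xr = rng.standard_normal(k)
x = V @ xr                  # start inside the reduced subspace: zero initial error
dt, steps = 1e-3, 1000
err_bound = 0.0
for _ in range(steps):
    residual = A @ (V @ xr) - V @ (Ar @ xr)           # defect of the lifted reduced state
    err_bound += dt * np.linalg.norm(residual)        # crude integrated bound (ignores decay)
    x = x + dt * (A @ x)                              # full model, forward Euler
    xr = xr + dt * (Ar @ xr)                          # reduced model, same scheme

true_err = np.linalg.norm(x - V @ xr)
print(true_err, err_bound)   # the bound dominates the true reduction error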

  13. GOCI Yonsei aerosol retrieval version 2 aerosol products: improved algorithm description and error analysis with uncertainty estimation from 5-year validation over East Asia

    NASA Astrophysics Data System (ADS)

    Choi, M.; Kim, J.; Lee, J.; KIM, M.; Park, Y. J.; Holben, B. N.; Eck, T. F.; Li, Z.; Song, C. H.

    2017-12-01

    The Geostationary Ocean Color Imager (GOCI) Yonsei aerosol retrieval (YAER) version 1 algorithm was developed to retrieve hourly aerosol optical depth at 550 nm (AOD) and other subsidiary aerosol optical properties over East Asia. The GOCI YAER AOD showed accuracy comparable to ground-based and other satellite-based observations, but it still had errors due to uncertainties in surface reflectance and simple cloud masking. It was also incapable of near-real-time (NRT) processing because the determination of surface reflectance required a monthly database from each year encompassing the day of retrieval. This study describes the improvement of the GOCI YAER algorithm to version 2 (V2), which achieves NRT processing with improved accuracy through modified cloud masking, surface reflectance determined from a multi-year Rayleigh-corrected reflectance and wind speed database, and inversion channels selected per surface condition. The improved GOCI V2 AOD is comparable to the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) AOD. In validation against Aerosol Robotic Network (AERONET) AOD from 2011 to 2016, V2 shows a reduced median bias and an increased fraction of retrievals within the expected error range (i.e., the absolute expected error range of MODIS AOD) compared to V1. Validation against the Sun-Sky Radiometer Observation Network (SONET) over China shows similar results. The error bias remains within the -0.1 to 0.1 range as a function of AERONET AOD and AE, scattering angle, NDVI, cloud fraction and homogeneity of retrieved AOD, observation time, month, and year. The diagnostic and prognostic expected errors (DEE and PEE, respectively) are also estimated. The estimated multiple PEE of GOCI V2 AOD matches the actual error well over East Asia, and the GOCI V2 AOD over Korea shows a higher fraction within the PEE than over China and Japan. Hourly AOD products based on the improved GOCI YAER algorithm could contribute to a better understanding of aerosols over East Asia, from long-term climate change to short-term air quality monitoring and forecasting, especially of rapid diurnal variation and transboundary transport.
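    The "fraction within expected error" statistic used in such validations can be sketched as below, using the MODIS-style envelope ±(0.05 + 0.15·AOD) that the abstract refers to. The satellite/AERONET pairs here are synthetic illustration data, not GOCI retrievals.

```python
# Fraction of satellite AOD retrievals within a MODIS-style expected-error
# (EE) envelope +/-(0.05 + 0.15*AOD_reference). Illustration data only.

def within_ee(aod_sat, aod_ref):
    """True if a satellite AOD falls inside the expected-error envelope."""
    return abs(aod_sat - aod_ref) <= 0.05 + 0.15 * aod_ref

# (satellite retrieval, AERONET reference) pairs -- synthetic examples
pairs = [(0.21, 0.20), (0.55, 0.40), (0.08, 0.10), (0.90, 0.85)]
frac_within = sum(within_ee(s, r) for s, r in pairs) / len(pairs)
print(frac_within)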

  14. Color filter array design based on a human visual model

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Reeves, Stanley J.

    2004-05-01

    To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
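    The sequential backward selection step mentioned above can be sketched generically: start from all candidate positions and greedily drop the one whose removal degrades the criterion the least. A toy "spread" criterion stands in here for the paper's perceptual Wiener-reconstruction error, and the positions are illustrative.

```python
import itertools

# Generic sequential backward selection. The cost function is a toy
# stand-in for the perceptual reconstruction-error criterion of the paper.

def sbs(candidates, keep, cost):
    """Greedily drop the element whose removal increases the cost the least."""
    selected = list(candidates)
    while len(selected) > keep:
        best = min(range(len(selected)),
                   key=lambda i: cost(selected[:i] + selected[i + 1:]))
        selected.pop(best)
    return selected

def spread_cost(positions):
    # lower is better: reward a large minimum pairwise spacing
    return -min(abs(a - b) for a, b in itertools.combinations(positions, 2))

chosen = sbs([0, 1, 2, 7, 8, 15], 3, spread_cost)
print(chosen)   # the three best-spread positions survive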

  15. Optimal estimation of large structure model errors. [in Space Shuttle controller design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.

  16. Real-time Neuroimaging and Cognitive Monitoring Using Wearable Dry EEG

    PubMed Central

    Mullen, Tim R.; Kothe, Christian A.E.; Chi, Mike; Ojeda, Alejandro; Kerth, Trevor; Makeig, Scott; Jung, Tzyy-Ping; Cauwenberghs, Gert

    2015-01-01

    Goal: We present and evaluate a wearable high-density dry-electrode EEG system and an open-source software framework for online neuroimaging and state classification. Methods: The system integrates a 64-channel dry EEG form factor with wireless data streaming for online analysis. A real-time software framework is applied, including adaptive artifact rejection, cortical source localization, multivariate effective connectivity inference, data visualization, and cognitive state classification from connectivity features using a constrained logistic regression approach (ProxConn). We evaluate the system identification methods on simulated 64-channel EEG data. Then we evaluate system performance, using ProxConn and a benchmark ERP method, in classifying response errors in 9 subjects using the dry EEG system. Results: Simulations yielded high accuracy (AUC = 0.97±0.021) for real-time cortical connectivity estimation. Response error classification using cortical effective connectivity (sdDTF) was significantly above chance, with similar performance (AUC) for cLORETA (0.74±0.09) and LCMV (0.72±0.08) source localization. Cortical ERP-based classification was equivalent to ProxConn for cLORETA (0.74±0.16) but significantly better for LCMV (0.82±0.12). Conclusion: We demonstrated the feasibility of real-time cortical connectivity analysis and cognitive state classification from high-density wearable dry EEG. Significance: This paper is the first validated application of these methods to 64-channel dry EEG. The work addresses a need for robust real-time measurement and interpretation of complex brain activity in the dynamic environment of the wearable setting. Such advances can have broad impact in research, medicine, and brain-computer interfaces. The pipelines are made freely available in the open-source SIFT and BCILAB toolboxes. PMID:26415149

  17. Sloppy-slotted ALOHA

    NASA Technical Reports Server (NTRS)

    Crozier, Stewart N.

    1990-01-01

    Random access signaling, which allows slotted packets to spill over into adjacent slots, is investigated. It is shown that sloppy-slotted ALOHA can always provide higher throughput than conventional slotted ALOHA. The degree of improvement depends on the timing error distribution. Throughput performance is presented for Gaussian timing error distributions, modified to include timing error corrections. A general channel capacity lower bound, independent of the specific timing error distribution, is also presented.
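    The effect of slot-timing errors analysed above can be illustrated with a small Monte Carlo: packets nominally fill one slot, but a Gaussian start-time offset lets them overlap neighbouring slots and collide with traffic there. This toy simulation only quantifies the throughput that jitter costs an ordinary slotted system, i.e. the baseline the sloppy-slotted protocol is designed to improve on; all parameters are made up.

```python
import numpy as np

# Slotted ALOHA with Gaussian slot-timing errors (toy baseline, not the
# sloppy-slotted protocol itself). Unit-length packets; a packet succeeds
# iff no other packet starts within one packet length of it.

rng = np.random.default_rng(1)

def throughput(g, sigma, n_slots=20000):
    """Successful packets per slot at offered load g and timing jitter sigma."""
    counts = rng.poisson(g, n_slots)                  # arrivals per slot
    starts = np.concatenate([slot + rng.normal(0.0, sigma, c)
                             for slot, c in enumerate(counts)])
    starts.sort()
    ok = np.ones(starts.size, dtype=bool)
    gaps = np.diff(starts)
    ok[1:] &= gaps >= 1.0                             # collision with predecessor
    ok[:-1] &= gaps >= 1.0                            # collision with successor
    return ok.sum() / n_slots

ideal = throughput(1.0, 0.0)     # ~1/e, classical slotted ALOHA at G = 1
jittered = throughput(1.0, 0.2)  # timing errors shrink the usable throughput
print(ideal, jittered)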

  18. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks.

    PubMed

    Eppenhof, Koen A J; Pluim, Josien P W

    2018-04-01

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.

  19. Positivity, discontinuity, finite resources, and nonzero error for arbitrarily varying quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boche, H., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de; Nötzel, J., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de

    2014-12-15

    This work is motivated by a quite general question: under which circumstances are the capacities of information transmission systems continuous? The research is explicitly carried out on finite arbitrarily varying quantum channels (AVQCs). We give an explicit example that answers, in the affirmative, the recent question of whether the transmission of messages over AVQCs can benefit from assistance by distribution of randomness between the legitimate sender and receiver. The specific class of channels introduced in that example is then extended to show that the unassisted capacity does have discontinuity points, while it is known that the randomness-assisted capacity is always continuous in the channel. We characterize the discontinuity points and prove that the unassisted capacity is always continuous around its positivity points. After having established shared randomness as an important resource, we quantify the interplay between the distribution of finite amounts of randomness between the legitimate sender and receiver, the (nonzero) probability of a decoding error with respect to the average error criterion, and the number of messages that can be sent over a finite number of channel uses. We relate our results to the entanglement transmission capacities of finite AVQCs, where the role of shared randomness is not yet well understood, and give a new sufficient criterion for the entanglement transmission capacity with randomness assistance to vanish.

  20. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  1. Piezo-Phototronic Effect Controlled Dual-Channel Visible light Communication (PVLC) Using InGaN/GaN Multiquantum Well Nanopillars.

    PubMed

    Du, Chunhua; Jiang, Chunyan; Zuo, Peng; Huang, Xin; Pu, Xiong; Zhao, Zhenfu; Zhou, Yongli; Li, Linxuan; Chen, Hong; Hu, Weiguo; Wang, Zhong Lin

    2015-12-02

    Visible light communication (VLC) simultaneously provides illumination and communication via light-emitting diodes (LEDs). Keeping a low bit error rate is essential to communication quality, and holding a stable brightness level is pivotal for the illumination function. For the first time, a piezo-phototronic effect controlled visible light communication (PVLC) system based on InGaN/GaN multiquantum well nanopillars is demonstrated, in which the information is coded by mechanical straining. This approach of force coding also helps to avoid LED blinking, so it has less impact on illumination and is much safer to the eyes than electrical on/off VLC. The two-channel transmission mode of the system shows great superiority in error self-validation and error self-elimination in comparison to conventional VLC. This two-channel PVLC system provides a suitable way to carry out noncontact, reliable communication under complex circumstances. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Towards Holography via Quantum Source-Channel Codes.

    PubMed

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-14

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  3. Towards Holography via Quantum Source-Channel Codes

    NASA Astrophysics Data System (ADS)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  4. PyFLOWGO: An open-source platform for simulation of channelized lava thermo-rheological properties

    NASA Astrophysics Data System (ADS)

    Chevrel, Magdalena Oryaëlle; Labroquère, Jérémie; Harris, Andrew J. L.; Rowland, Scott K.

    2018-02-01

    Lava flow advance can be modeled by tracking the evolution of the thermo-rheological properties of a control volume of lava as it cools and crystallizes. An example of such a model was conceived by Harris and Rowland (2001), who developed a 1-D model, FLOWGO, in which the velocity of a control volume flowing down a channel depends on rheological properties computed along the thermal path estimated via a heat-balance box model. We provide here an updated version of FLOWGO written in Python, an open-source, modern, and flexible language. Our software, named PyFLOWGO, allows selection of the heat fluxes and rheological models of the user's choice to simulate the thermo-rheological evolution of the lava control volume. We describe its architecture, which offers more flexibility while reducing the risk of error when changing models, in comparison to the previous FLOWGO version. Three cases are tested using actual data from channel-fed lava flow systems, and the results are discussed in terms of model validation and convergence. PyFLOWGO is open source and packaged in a Python library to be imported and reused in any Python program (https://github.com/pyflowgo/pyflowgo).

  5. Performance analysis of replication ALOHA for fading mobile communications channels

    NASA Technical Reports Server (NTRS)

    Yan, Tsun-Yee; Clare, Loren P.

    1986-01-01

    This paper describes an ALOHA random access protocol for fading communications channels. A two-state Markov model is used for the channel error process to account for the channel fading memory. The ALOHA protocol is modified to send multiple contiguous copies of a message at each transmission attempt. Both pure and slotted ALOHA channels are considered. The analysis is applicable to fading environments where the channel memory is short compared to the propagation delay. It is shown that smaller delay may be achieved using replications, which in noisy conditions can also improve throughput.
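    The two-state Markov (Gilbert-Elliott style) error model and the benefit of contiguous copies can be sketched as below. All transition and error probabilities are illustrative, not taken from the paper.

```python
import numpy as np

# Two-state Markov channel with memory: copies sent back-to-back see
# correlated states, yet replication still raises per-message success.
# All probabilities are illustrative stand-ins.

rng = np.random.default_rng(2)
P_GB, P_BG = 0.05, 0.20        # good->bad and bad->good transition probabilities
PE = {0: 0.01, 1: 0.50}        # per-copy error probability in good (0) / bad (1) state
P_BAD = P_GB / (P_GB + P_BG)   # stationary probability of the bad state

def message_success(n_copies, trials=20000):
    """Fraction of messages with at least one error-free copy."""
    wins = 0
    for _ in range(trials):
        state = 1 if rng.random() < P_BAD else 0      # draw the initial state
        got = False
        for _ in range(n_copies):
            got = got or (rng.random() >= PE[state])  # did this copy get through?
            # channel state evolves between contiguous copies (fading memory)
            p_next_bad = P_GB if state == 0 else 1.0 - P_BG
            state = 1 if rng.random() < p_next_bad else 0
        wins += got
    return wins / trials

single = message_success(1)
triple = message_success(3)
print(single, triple)   # three contiguous copies beat a single transmission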

  6. A TDM link with channel coding and digital voice.

    NASA Technical Reports Server (NTRS)

    Jones, M. W.; Tu, K.; Harton, P. L.

    1972-01-01

    The features of a TDM (time-division multiplexed) link model are described. A PCM telemetry sequence was coded for error correction and multiplexed with a digitized voice channel. An all-digital implementation of a variable-slope delta modulation algorithm was used to digitize the voice channel. The results of extensive testing are reported. The measured coding gain and the system performance over a Gaussian channel are compared with theoretical predictions and computer simulations. Word intelligibility scores are reported as a measure of voice channel performance.

  7. Communication Channel Estimation and Waveform Design: Time Delay Estimation on Parallel, Flat Fading Channels

    DTIC Science & Technology

    2010-02-01

    ... channels, so the channel gain is known on each realization and used in a coherent matched filter (channel model 1A); and (c) Rayleigh channels with noncoherent matched filters, averaged over Rayleigh channel realizations (channel model 3). MSEs are ...

  8. NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid

    NASA Astrophysics Data System (ADS)

    Thomas, Togis; Gupta, K. K.

    2016-03-01

    Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to the other wired technologies currently used for communication. In the smart grid, PLC is used to support low-rate communication on the low-voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and the load location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show an acceptable performance in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications.
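    The ABCD-parameter modelling step can be sketched as follows: cascade the two-port matrices of line sections by multiplication, then convert the overall matrix to a voltage transfer function between source and load. The line constants, impedances, and section lengths below are assumed illustration values, not measured NB-PLC data.

```python
import numpy as np

# ABCD-parameter channel sketch: cascade transmission-line sections and
# convert to a voltage transfer function. All constants are illustrative.

def line_abcd(gamma, z0, length):
    """ABCD matrix of a uniform transmission-line section."""
    gl = gamma * length
    return np.array([[np.cosh(gl), z0 * np.sinh(gl)],
                     [np.sinh(gl) / z0, np.cosh(gl)]])

def transfer(abcd, zs, zl):
    """V_load / V_source for a two-port between source and load impedances."""
    a, b = abcd[0]
    c, d = abcd[1]
    return zl / ((a * zl + b) + zs * (c * zl + d))

f = 100e3                                  # 100 kHz, inside the 5-500 kHz NB-PLC band
gamma = 1e-5 + 1j * 2 * np.pi * f / 2e8    # assumed propagation constant, per metre
z0 = 50.0                                  # assumed characteristic impedance, ohms
abcd = line_abcd(gamma, z0, 200.0) @ line_abcd(gamma, z0, 300.0)  # two cascaded sections
h = transfer(abcd, zs=10.0, zl=100.0)
print(abs(h))                              # channel gain magnitude at 100 kHz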

  9. Error monitoring issues for common channel signaling

    NASA Astrophysics Data System (ADS)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring, with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS), as well as for developing procedures for the high-speed SS7 links currently under consideration by standards bodies.
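    The error monitor under analysis is a leaky-bucket counter. The sketch below uses the conventional SS7 signal-unit error rate monitor shape (count up on each errored unit, leak one count per 256 units, changeover at 64), but treats the traffic and error processes as simple Bernoulli stand-ins rather than the paper's bursty models.

```python
import random

# Leaky-bucket error monitor sketch (SS7 SUERM-style parameters; the error
# process here is i.i.d. Bernoulli, a simplification of the paper's models).

def changeovers(error_rate, n_units=200_000, threshold=64, leak_interval=256,
                seed=3):
    """Count how often the monitor would take the link out of service."""
    rng = random.Random(seed)
    counter, events = 0, 0
    for i in range(1, n_units + 1):
        if rng.random() < error_rate:
            counter += 1                          # count up on each errored unit
        if i % leak_interval == 0 and counter > 0:
            counter -= 1                          # periodic leak
        if counter >= threshold:
            events += 1                           # changeover: link out of service
            counter = 0                           # restart monitoring
    return events

print(changeovers(1e-4), changeovers(2e-2))   # quiet link vs. error-prone link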

  10. Interactive Video Coding and Transmission over Heterogeneous Wired-to-Wireless IP Networks Using an Edge Proxy

    NASA Astrophysics Data System (ADS)

    Pei, Yong; Modestino, James W.

    2004-12-01

    Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload are quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.

  11. Prediction of transmission distortion for wireless video communication: analysis.

    PubMed

    Chen, Zhifeng; Wu, Dapeng

    2012-03-01

    Transmitting video over wireless is a challenging problem since video may be seriously distorted due to packet errors caused by wireless channels. The capability of predicting transmission distortion (i.e., video distortion caused by packet errors) can assist in designing video encoding and transmission schemes that achieve maximum video quality or minimum end-to-end video distortion. This paper is aimed at deriving formulas for predicting transmission distortion. The contribution of this paper is twofold. First, we identify the governing law that describes how the transmission distortion process evolves over time and analytically derive the transmission distortion formula as a closed-form function of video frame statistics, channel error statistics, and system parameters. Second, we identify, for the first time, two important properties of transmission distortion. The first property is that the clipping noise, which is produced by nonlinear clipping, causes decay of propagated error. The second property is that the correlation between motion-vector concealment error and propagated error is negative and has dominant impact on transmission distortion, compared with other correlations. Due to these two properties and elegant error/distortion decomposition, our formula provides not only more accurate prediction but also lower complexity than the existing methods.

  12. Channel Training for Analog FDD Repeaters: Optimal Estimators and Cramér-Rao Bounds

    NASA Astrophysics Data System (ADS)

    Wesemann, Stefan; Marzetta, Thomas L.

    2017-12-01

    For frequency division duplex channels, a simple pilot loop-back procedure has been proposed that allows the estimation of the UL & DL channels at an antenna array without relying on any digital signal processing at the terminal side. For this scheme, we derive the maximum likelihood (ML) estimators for the UL & DL channel subspaces, formulate the corresponding Cramér-Rao bounds and show the asymptotic efficiency of both (SVD-based) estimators by means of Monte Carlo simulations. In addition, we illustrate how to compute the underlying (rank-1) SVD with quadratic time complexity by employing the power iteration method. To enable power control for the data transmission, knowledge of the channel gains is needed. Assuming that the UL & DL channels have on average the same gain, we formulate the ML estimator for the channel norm, and illustrate its robustness against strong noise by means of simulations.
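    The rank-1 SVD via power iteration mentioned above can be sketched in a few lines; each iteration costs O(mn), hence the quadratic time complexity. The measurement matrix here is synthetic (a rank-1 outer product plus noise), not real array data.

```python
import numpy as np

# Rank-1 SVD by power iteration on Y^T Y, compared against LAPACK's SVD.
# Y is a synthetic rank-1 signal plus noise.

rng = np.random.default_rng(0)
m, n = 64, 32
u_true = rng.standard_normal(m)
v_true = rng.standard_normal(n)
Y = 5.0 * np.outer(u_true / np.linalg.norm(u_true),
                   v_true / np.linalg.norm(v_true))
Y += 0.1 * rng.standard_normal((m, n))               # additive noise

v = rng.standard_normal(n)
for _ in range(50):              # power iteration: O(mn) per step
    v = Y.T @ (Y @ v)
    v /= np.linalg.norm(v)
sigma = np.linalg.norm(Y @ v)    # dominant singular value
u = Y @ v / sigma                # dominant left singular vector

U, S, Vt = np.linalg.svd(Y)      # full SVD as a reference (matches up to sign)
print(abs(sigma - S[0]), abs(abs(v @ Vt[0]) - 1.0))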

  13. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
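    The composite scheme described above blends a forward TFN forecast with an ARIMA backcast across the estimation gap. A minimal sketch, assuming simple linear position weights (an illustrative stand-in for the report's lead-dependent weighting):

    ```python
    import numpy as np

    def composite_estimate(tfn_forecast, arima_backcast):
        """Blend forecasts made from the gap's start with backcasts made
        from its end.

        The weight on the forecast ramps from 1 at the first missing day
        to 0 at the last, so each daily estimate leans on the nearer
        measured record; the error is then largest near the center of the
        interval, as observed in the study.
        """
        f = np.asarray(tfn_forecast, dtype=float)
        b = np.asarray(arima_backcast, dtype=float)
        w = np.linspace(1.0, 0.0, len(f))     # forecast weight by position
        return w * f + (1.0 - w) * b
    ```

    Because the weights meet the measured record at full strength on both ends, the composite also gives the gradual transition between estimated and measured flows noted in the abstract.
    
    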

  14. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.

  15. Telemetry advances in data compression and channel coding

    NASA Technical Reports Server (NTRS)

    Miller, Warner H.; Morakis, James C.; Yeh, Pen-Shu

    1990-01-01

    Addressed in this paper is the dependence of telecommunication channel, forward error correcting coding and source data compression coding on integrated circuit technology. Emphasis is placed on real time high speed Reed Solomon (RS) decoding using full custom VLSI technology. Performance curves of NASA's standard channel coder and a proposed standard lossless data compression coder are presented.

  16. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  17. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
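    The propagation of a wind-speed measurement error through a power curve can be sketched to first order as sigma_P ≈ |dP/dv|·sigma_v. The function below is illustrative only: the paper uses 28 Lagrange-fitted curves and full probability densities, whereas here a generic cubic curve stands in.

    ```python
    import numpy as np

    def relative_power_error(v, rel_speed_error, power_curve, dv=1e-3):
        """First-order propagation of a relative wind-speed error through
        a power curve: sigma_P / P ≈ |dP/dv| * (rel_speed_error * v) / P.
        The derivative is taken numerically by central differences."""
        v = np.asarray(v, dtype=float)
        dPdv = (power_curve(v + dv) - power_curve(v - dv)) / (2.0 * dv)
        return np.abs(dPdv) * rel_speed_error * v / power_curve(v)

    # Illustrative stand-in for a fitted curve: in the cubic region a 10%
    # speed error maps to roughly a 30% power error, while near rated power
    # the curve flattens and the propagated error shrinks. Averaging over
    # real curves and wind distributions is what yields aggregate figures
    # like the 5% reported above.
    cubic_region = lambda v: 0.5 * 1.225 * np.pi * 40.0**2 * 0.45 * v**3
    ```

    The rotor radius, air density, and power coefficient in `cubic_region` are typical placeholder values, not taken from the study.
    
    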

  18. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444

  19. A Probabilistic Mass Estimation Algorithm for a Novel 7- Channel Capacitive Sample Verification Sensor

    NASA Technical Reports Server (NTRS)

    Wolf, Michael

    2012-01-01

    A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel, and also addresses how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate, but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases, the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate for the single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of each channel's variance.
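    The inverse-variance blending step described above can be sketched as follows; the function name and interface are illustrative, but the weighting rule is the standard product-of-Gaussians fusion the abstract describes.

    ```python
    import numpy as np

    def fuse_gaussian_estimates(means, variances):
        """Combine per-channel Gaussian estimates N(m_i, s_i^2) into a
        single Gaussian by inverse-variance weighting.

        Channels with large calibration variance contribute little to the
        fused mean, and the fused variance is smaller than any individual
        channel's, so the combined estimate also carries a certainty value.
        """
        w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
        fused_var = 1.0 / w.sum()
        fused_mean = fused_var * (w * np.asarray(means, dtype=float)).sum()
        return fused_mean, fused_var
    ```

    With equal variances this reduces to a plain average of the seven channel estimates; unequal variances shift the result toward the better-calibrated channels.
    
    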

  20. Methods for estimating the magnitude and frequency of peak discharges of rural, unregulated streams in Virginia

    USGS Publications Warehouse

    Bisese, James A.

    1995-01-01

    Methods are presented for estimating the peak discharges of rural, unregulated streams in Virginia. A Pearson Type III distribution is fitted to the logarithms of the unregulated annual peak-discharge records from 363 stream-gaging stations in Virginia to estimate the peak discharge at these stations for recurrence intervals of 2 to 500 years. Peak-discharge characteristics for 284 unregulated stations are divided into eight regions based on physiographic province, and regressed on basin characteristics, including drainage area, main channel length, main channel slope, mean basin elevation, percentage of forest cover, mean annual precipitation, and maximum rainfall intensity. Regression equations for each region are computed by use of the generalized least-squares method, which accounts for spatial and temporal correlation between nearby gaging stations. This regression technique weights the significance of each station to the regional equation based on the length of records collected at each station, the correlation between annual peak discharges among the stations, and the standard deviation of the annual peak discharge for each station. Drainage area proved to be the only significant explanatory variable in four regions, while other regions have as many as three significant variables. Standard errors of the regression equations range from 30 to 80 percent. Alternate equations using drainage area only are provided for the five regions with more than one significant explanatory variable. Methods and sample computations are provided to estimate peak discharges at gaged and ungaged sites in Virginia for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, and to adjust the regression estimates for sites on gaged streams where nearby gaging-station records are available.

  1. Quantum channels and memory effects

    NASA Astrophysics Data System (ADS)

    Caruso, Filippo; Giovannetti, Vittorio; Lupo, Cosmo; Mancini, Stefano

    2014-10-01

    Any physical process can be represented as a quantum channel mapping an initial state to a final state. Hence it can be characterized from the point of view of communication theory, i.e., in terms of its ability to transfer information. Quantum information provides a theoretical framework and the proper mathematical tools to accomplish this. In this context the notions of codes and communication capacities have been introduced by generalizing them from the classical Shannon theory of information transmission and error correction. The underlying assumption of this approach is to consider the channel not as acting on a single system, but on sequences of systems, which, when properly initialized, allow one to overcome the noisy effects induced by the physical process under consideration. While most of the work produced so far has been focused on the case in which a given channel transformation acts identically and independently on the various elements of the sequence (memoryless configuration in jargon), correlated error models appear to be a more realistic way to approach the problem. A slightly different, yet conceptually related, notion of correlated errors applies to a single quantum system which evolves continuously in time under the influence of an external disturbance which acts on it in a non-Markovian fashion. This leads to the study of memory effects in quantum channels: a fertile ground where interesting novel phenomena emerge at the intersection of quantum information theory and other branches of physics. A survey is taken of the field of quantum channel theory while also embracing these specific and complex settings.

  2. Channel modelling for free-space optical inter-HAP links using adaptive ARQ transmission

    NASA Astrophysics Data System (ADS)

    Parthasarathy, S.; Giggenbach, D.; Kirstädter, A.

    2014-10-01

    Free-space optical (FSO) communication systems have seen significant developments in recent years due to the growing need for very high data rates and tap-proof communication. The operation of an FSO link is suited to a diverse variety of applications such as satellites, High Altitude Platforms (HAPs), Unmanned Aerial Vehicles (UAVs), aircraft, ground stations and other areas involving both civil and military situations. FSO communication systems face challenges due to different effects of the atmospheric channel. The FSO channel primarily suffers from scintillation effects due to Index of Refraction Turbulence (IRT). In addition, acquisition and pointing become more difficult because of the high directivity of the transmitted beam: mispointing of the transmitted beam and tracking errors at the receiver generate additional fading of the optical signal. High Altitude Platforms (HAPs) are quasi-stationary vehicles operating in the stratosphere. The slowly varying but precisely determined time-of-flight of the inter-HAP channel adds to its characteristics. To propose a suitable ARQ scheme, proper theoretical understanding of optical atmospheric propagation and modeling of a specific-scenario FSO channel is required. In this paper, a bi-directional symmetrical inter-HAP link has been selected and modeled. The inter-HAP channel model is then investigated via simulations in terms of optical scintillation induced by IRT and in the presence of pointing errors. The performance characteristic of the model is then quantified in terms of fading statistics, from which the Packet Error Probability (PEP) is calculated. Based on the PEP characteristics, we propose suitable ARQ schemes.
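    The step from fading statistics to a packet error probability can be illustrated with the simplest possible model, assuming independent bit errors within a packet. This is my simplification for illustration: real scintillation fades are time-correlated, so the abstract's PEP is derived from the fading statistics themselves rather than a fixed BER.

    ```python
    def packet_error_probability(ber, packet_bits):
        """PEP under the simplifying assumption of i.i.d. bit errors:
        a packet is lost if any one of its bits is corrupted.
        A first-cut link between channel error rate and ARQ design."""
        return 1.0 - (1.0 - ber) ** packet_bits
    ```

    For example, a 1000-bit packet at a bit error rate of 1e-4 already has a PEP near 10%, which is the kind of figure that drives the choice and parameterization of an ARQ scheme.
    
    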

  3. Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error

    PubMed Central

    Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong

    2013-01-01

    A novel wildfire segmentation algorithm is proposed with the help of sample-training-based 2D histogram θ-division and minimum error. Based on the minimum error principle and the 2D color histogram, θ-division methods were presented recently, but the application of prior knowledge to them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. Then we define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the suitable channel is selected. To further improve the accuracy, a combination approach is presented that couples θ-division with other segmentation methods such as GMM. Our approach is tested on real images, and the experiments demonstrate its efficiency for wildfire segmentation. PMID:23878526

  4. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

    The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity, which refers to the precision of error estimation, is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. 
We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
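    The variance decomposition underlying this argument can be illustrated numerically. The identity is Var(est − true) = Var(est) + Var(true) − 2·Cov(est, true): when the correlation collapses, the deviation variance grows even if the variance of the estimated error itself is unchanged. The distribution parameters below are illustrative, not from the paper.

    ```python
    import numpy as np

    # Synthetic paired (true error, estimated error) samples with a weak
    # correlation, as observed in high-dimensional settings.
    rng = np.random.default_rng(0)
    rho = 0.2
    cov = [[1.0, rho], [rho, 1.0]]
    true_err, est_err = rng.multivariate_normal([0.2, 0.2], cov, size=200_000).T

    # Both sides of the decomposition (population-style moments, ddof=0):
    lhs = np.var(est_err - true_err)
    rhs = (np.var(est_err) + np.var(true_err)
           - 2.0 * np.cov(est_err, true_err, ddof=0)[0, 1])
    # Here lhs ≈ rhs ≈ 2*(1 - rho) = 1.6; with rho = 0.9 it would drop to 0.2.
    ```

    The point of the demonstration is that precision loss can be driven almost entirely by the covariance term, i.e. by decorrelation rather than by the marginal variances.
    
    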

  5. Error estimates for ice discharge calculated using the flux gate approach

    NASA Astrophysics Data System (ADS)

    Navarro, F. J.; Sánchez Gámez, P.

    2017-12-01

    Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While there are well-established procedures for estimating the errors in velocity, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use Operation IceBridge GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice velocities calculated from Sentinel-1 SAR data to get the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
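    The parabolic and quartic approximations compared above have simple closed forms for an idealized valley profile; this sketch (my idealization, not the authors' exact parameterization) shows why the quartic model encloses more area for the same centerline depth.

    ```python
    def valley_cross_section_area(width, max_depth, p=2):
        """Area under an idealized valley profile
        d(x) = max_depth * (1 - (2x/width)**p), integrated over the full
        width: area = width * max_depth * p / (p + 1).

        p=2 gives the parabolic model (2/3 of the bounding rectangle),
        p=4 the quartic one (4/5 of it), so for the same centerline depth
        the quartic systematically returns a larger cross-section.
        """
        return width * max_depth * p / (p + 1)
    ```

    For a 1.2 km wide, 300 m deep gate, the quartic model yields a cross-section 20% larger than the parabolic one, the same order as the per-glacier area errors quoted in the abstract.
    
    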

  6. Neural-Network Approach to Hyperspectral Data Analysis for Volcanic Ash Clouds Monitoring

    NASA Astrophysics Data System (ADS)

    Piscini, Alessandro; Ventress, Lucy; Carboni, Elisa; Grainger, Roy Gordon; Del Frate, Fabio

    2015-11-01

    In this study three artificial neural networks (ANN) were implemented in order to emulate a retrieval model and to estimate the ash aerosol optical depth (AOD), particle effective radius (reff) and cloud height of a volcanic eruption using hyperspectral remotely sensed data. The ANNs were trained using a selection of Infrared Atmospheric Sounding Interferometer (IASI) channels in the Thermal Infrared (TIR) as inputs, and the corresponding ash parameters obtained from the Oxford retrievals as target outputs. The retrieval is demonstrated for the eruption of the Eyjafjallajökull volcano (Iceland) that occurred in 2010. The validation yielded root mean square error (RMSE) values between neural network outputs and targets lower than the standard deviation (STD) of the corresponding target outputs, demonstrating the feasibility of estimating volcanic ash parameters with an ANN approach and its value in near-real-time monitoring activities, owing to its fast application. High accuracy was achieved for reff and cloud height estimation, while a decrease in accuracy was observed when applying the NN approach to AOD estimation, in particular for values not well characterized during the NN training phase.

  7. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  8. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Work on partial unit memory codes continued; it was shown that for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit memory codes. The effect of phase-lock loop (PLL) tracking error on coding system performance was studied by using the channel cut-off rate as the measure of quality of a modulation system. Work on optimum modulation signal sets for a non-white Gaussian channel considered a heuristic selection rule based on a water-filling argument. The use of error correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.

  9. Self-error-rejecting photonic qubit transmission in polarization-spatial modes with linear optical elements

    NASA Astrophysics Data System (ADS)

    Jiang, YuXiao; Guo, PengLiang; Gao, ChengYan; Wang, HaiBo; Alzahrani, Faris; Hobiny, Aatef; Deng, FuGuo

    2017-12-01

    We present an original self-error-rejecting photonic qubit transmission scheme for both the polarization and spatial states of photon systems transmitted over collective noise channels. In our scheme, we use simple linear-optical elements, including half-wave plates, 50:50 beam splitters, and polarization beam splitters, to convert spatial-polarization modes into different time bins. By using postselection in different time bins, the success probability of obtaining the uncorrupted states approaches 1/4 for single-photon transmission, which is not influenced by the coefficients of noisy channels. Our self-error-rejecting transmission scheme can be generalized to hyperentangled n-photon systems and is useful in practical high-capacity quantum communications with photon systems in two degrees of freedom.

  10. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    NASA Astrophysics Data System (ADS)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error, and that the errors can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, it is possible for the posterior flux estimates to differ more from the target fluxes than the prior estimates do, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. 
The systematic evaluation of the different components of the inversion model can help in the understanding of the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates can be obtained for the western region of AB/SK, but not for the eastern region of ON. This indicates that a real observation-based inversion for the annual provincial emissions will likely work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.

  11. Re-Evaluation of the 1921 Peak Discharge at Skagit River near Concrete, Washington

    USGS Publications Warehouse

    Mastin, M.C.

    2007-01-01

    The peak discharge record at the U.S. Geological Survey (USGS) gaging station at Skagit River near Concrete, Washington, is a key record that has come under intense scrutiny by the scientific and lay communities in the last 4 years. A peak discharge of 240,000 cubic feet per second for the flood on December 13, 1921, was determined in 1923 by USGS hydrologist James Stewart by means of a slope-area measurement. USGS then determined the peak discharges of three other large floods on the Skagit River (1897, 1909, and 1917) by extending the stage-discharge rating through the 1921 flood measurement. The 1921 estimate of peak discharge was recalculated by Flynn and Benson of the USGS after a channel roughness verification was completed based on the 1949 flood on the Skagit River. The 1949 recalculation indicated that the peak discharge probably was 6.2 percent lower than Stewart's original estimate, but the USGS did not officially change the peak discharge from Stewart's estimate because it was not more than a 10-percent change (which is the USGS guideline for revising peak flows) and the estimate already had error bands of 15 percent. All these flood peaks are now being used by the U.S. Army Corps of Engineers to determine the 100-year flood discharge for the Skagit River Flood Study, so any method to confirm or improve the 1921 peak discharge estimate is warranted. During the last 4 years, two floods have occurred on the Skagit River (2003, 2006) that have enabled the USGS to collect additional data, do further analysis, and yet again re-evaluate the 1921 peak discharge estimate. Since 1949, an island/bar in the study reach has reforested itself. This has complicated the flow hydraulics and made the most recent recalculation of the 1921 flood based on channel roughness verification that used 2003 and 2006 flood data less reliable. 
However, this recent recalculation did indicate that the original peak-discharge calculation by Stewart may be high, and it added to a body of evidence that indicates a revision in the 1921 peak discharge estimate is appropriate. The USGS has determined that a lower peak-discharge estimate (5.0 percent lower), similar to the 1949 estimates, is most appropriate based on (1) a recalculation of the 1921 flood using a channel roughness verification from the 1949 flood data, (2) a recalculation of the 1921 flood using a channel roughness verification from 2003 and 2006 flood data, and (3) straight-line extension of the stage-discharge relation at the gage based on current-meter discharge measurements. Given the significance of the 1921 flood peak, revising the estimate is appropriate even though the change is less than the 10-percent guideline established by the USGS for revision. Revising the peak is warranted because all work subsequent to 1921 points to the 1921 peak being lower than originally published.

  12. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
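The tabulation step that turns many realizations into 90% and 95% confidence envelopes is, in essence, an empirical quantile of the localization errors. A minimal sketch in pure Python (the synthetic error sample is invented for illustration; the paper's envelopes come from its aquifer realizations):

```python
import random

def empirical_quantile(values, q):
    """Smallest sample value below which a fraction q of the sample lies."""
    s = sorted(values)
    k = min(len(s) - 1, max(0, int(q * len(s)) - 1))
    return s[k]

random.seed(0)
# Stand-in for source-localization errors (e.g. metres) tabulated over
# many random aquifer/well-layout realizations.
errors = [abs(random.gauss(0.0, 25.0)) for _ in range(1000)]

env90 = empirical_quantile(errors, 0.90)  # 90% of realizations localize within this radius
env95 = empirical_quantile(errors, 0.95)
print(env90 <= env95)  # True: the 95% envelope is never tighter than the 90% one
```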

  13. Estimation of suspended sediment concentration from turbidity measurements using artificial neural networks.

    PubMed

    Bayram, Adem; Kankal, Murat; Onsoy, Hizir

    2012-07-01

    Suspended sediment concentration (SSC) is generally determined from direct measurement of the sediment concentration of a river or from sediment transport equations. Direct measurement is very costly and cannot be conducted at all river gauge stations, yet a correct estimate of the suspended sediment amount carried by a river is very important in terms of water pollution, channel navigability, reservoir filling, fish habitat, river aesthetics and scientific interests. This study investigates the feasibility of using turbidity as a surrogate for SSC, as in situ turbidity meters are being increasingly used to generate continuous records of SSC in rivers. For this reason, regression analysis (RA) and artificial neural networks (ANNs) were employed to estimate SSC based on in situ turbidity measurements. The SSC was first experimentally determined for surface water samples collected from six monitoring stations along the main branch of the stream Harsit, Eastern Black Sea Basin, Turkey. A total of 144 values per variable were obtained on a fortnightly basis between March 2009 and February 2010. For the ANN method, the 144 data points were split into training, testing and validation sets of 108, 24 and 12, respectively. The smallest mean absolute error (MAE) and root mean square error (RMSE) values for the validation set were obtained with the ANN method, at 11.40 and 17.87, respectively, compared with 19.12 and 25.09 for RA. It was concluded that turbidity could be a surrogate for SSC in the streams, and that the ANN method used for the estimation of SSC provided acceptable results.
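The two error measures used to compare the ANN and RA fits are the standard MAE and RMSE. A minimal sketch (the observed/predicted values are invented; the study's figures come from its 12-sample validation set):

```python
import math

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean square error; penalizes large misses more heavily than MAE."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

observed  = [120.0, 80.0, 45.0, 200.0]   # SSC values, e.g. mg/L (illustrative)
predicted = [110.0, 95.0, 40.0, 190.0]

print(mae(observed, predicted))             # 10.0
print(round(rmse(observed, predicted), 2))  # 10.61
```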

  14. Concurrent signal combining and channel estimation in digital communications

    DOEpatents

    Ormesher, Richard C [Albuquerque, NM; Mason, John J [Albuquerque, NM

    2011-08-30

    In the reception of digital information transmitted on a communication channel, a characteristic exhibited by the communication channel during transmission of the digital information is estimated based on a communication signal that represents the digital information and has been received via the communication channel. Concurrently with the estimating, the communication signal is used to decide what digital information was transmitted.

  15. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-01-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one, the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.

  16. Modeling the Zeeman effect in high altitude SSMIS channels for numerical weather prediction profiles: comparing a fast model and a line-by-line model

    NASA Astrophysics Data System (ADS)

    Larsson, R.; Milz, M.; Rayer, P.; Saunders, R.; Bell, W.; Booton, A.; Buehler, S. A.; Eriksson, P.; John, V.

    2015-10-01

    We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. For the same channel, there is a 1.2 K average difference between the fast model and the sensor measurement, with 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. For the same channel, there is a 1.3 K average difference between the fast model and the sensor measurement, with 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies, causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to the limited altitude range of the numerical weather prediction profiles. 
We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels to better constrain the upper atmospheric temperatures.

  17. Modeling the Zeeman effect in high-altitude SSMIS channels for numerical weather prediction profiles: comparing a fast model and a line-by-line model

    NASA Astrophysics Data System (ADS)

    Larsson, Richard; Milz, Mathias; Rayer, Peter; Saunders, Roger; Bell, William; Booton, Anna; Buehler, Stefan A.; Eriksson, Patrick; John, Viju O.

    2016-03-01

    We present a comparison of a reference and a fast radiative transfer model using numerical weather prediction profiles for the Zeeman-affected high-altitude Special Sensor Microwave Imager/Sounder channels 19-22. We find that the models agree well for channels 21 and 22 compared to the channels' system noise temperatures (1.9 and 1.3 K, respectively) and the expected profile errors at the affected altitudes (estimated to be around 5 K). For channel 22 there is a 0.5 K average difference between the models, with a standard deviation of 0.24 K for the full set of atmospheric profiles. Concerning the same channel, there is 1.2 K on average between the fast model and the sensor measurement, with 1.4 K standard deviation. For channel 21 there is a 0.9 K average difference between the models, with a standard deviation of 0.56 K. Regarding the same channel, there is 1.3 K on average between the fast model and the sensor measurement, with 2.4 K standard deviation. We consider the relatively small model differences as a validation of the fast Zeeman effect scheme for these channels. Both channels 19 and 20 have smaller average differences between the models (at below 0.2 K) and smaller standard deviations (at below 0.4 K) when both models use a two-dimensional magnetic field profile. However, when the reference model is switched to using a full three-dimensional magnetic field profile, the standard deviation to the fast model is increased to almost 2 K due to viewing geometry dependencies, causing up to ±7 K differences near the equator. The average differences between the two models remain small despite changing magnetic field configurations. We are unable to compare channels 19 and 20 to sensor measurements due to limited altitude range of the numerical weather prediction profiles. 
We recommend that numerical weather prediction software using the fast model take the available fast Zeeman scheme into account for data assimilation of the affected sensor channels to better constrain the upper atmospheric temperatures.

  18. Transport parameter estimation from lymph measurements and the Patlak equation.

    PubMed

    Watson, P D; Wolf, M B

    1992-01-01

    Two methods of estimating protein transport parameters for plasma-to-lymph transport data are presented. Both use IBM-compatible computers to obtain least-squares parameters for the solvent drag reflection coefficient and the permeability-surface area product using the Patlak equation. A matrix search approach is described, and the speed and convenience of this are compared with a commercially available gradient method. The results from both of these methods were different from those of a method reported by Reed, Townsley, and Taylor [Am. J. Physiol. 257 (Heart Circ. Physiol. 26): H1037-H1041, 1989]. It is shown that the Reed et al. method contains a systematic error. It is also shown that diffusion always plays an important role for transmembrane transport at the exit end of a membrane channel under all conditions of lymph flow rate and that the statement that diffusion becomes zero at high lymph flow rate depends on a mathematical definition of diffusion.
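The "matrix search" described here is a grid search over the two transport parameters. A sketch under the commonly used Patlak form C_L/C_P = (1 - sigma) / (1 - sigma*exp(-Pe)) with Pe = J_v*(1 - sigma)/PS, where sigma is the solvent drag reflection coefficient and PS the permeability-surface area product (the equation form is recalled from the general literature, and the grids, flow rates and noise-free synthetic data are illustrative, not the authors' lymph measurements):

```python
import math

def patlak_ratio(jv, sigma, ps):
    """Lymph-to-plasma concentration ratio predicted by the Patlak equation."""
    pe = jv * (1 - sigma) / ps            # Peclet number
    return (1 - sigma) / (1 - sigma * math.exp(-pe))

# Noise-free synthetic "lymph" data generated from known parameters.
sigma_true, ps_true = 0.8, 2.0
flows = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]   # lymph flow rates J_v (arbitrary units)
data = [patlak_ratio(jv, sigma_true, ps_true) for jv in flows]

# Matrix (grid) search for the least-squares parameter pair.
best = min(
    ((i / 100, j / 10) for i in range(0, 96, 5) for j in range(5, 51, 5)),
    key=lambda p: sum((patlak_ratio(jv, *p) - d) ** 2 for jv, d in zip(flows, data)),
)
print(best)  # (0.8, 2.0): the grid point matching the generating parameters
```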

  19. A comparison between one year of daily global irradiation from ground-based measurements versus meteosat images from seven locations in Tunisia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Djemaa, A.B.; Delorme, C.

    1992-01-01

    Three numerical images from METEOSAT B2 per day have been processed over a period of 12 months, from October 1985 to September 1986, to estimate the daily values of available solar radiation in Tunisia. The methodology used, GISTEL, on the images of the 'visible' channel of METEOSAT, is described. Results are compared with measured radiation values from seven stations of the 'Institut de la Meteorologie de Tunisie'. Among more than 2,200 measured-estimated daily pairs, a high percentage, 89%, show a relative error of ±10%. Many figures concerning Sidi-Bou-Said, Kairouan, Thala, and Gafsa are presented to show the capability of GISTEL to map the daily available solar radiation with a sufficient spatial resolution in countries where radiation measurements are too scarce.

  20. A Comparative Study of Co-Channel Interference Suppression Techniques

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Satorius, Ed; Paparisto, Gent; Polydoros, Andreas

    1997-01-01

    We describe three methods of combating co-channel interference (CCI): a cross-coupled phase-locked loop (CCPLL), a phase-tracking circuit (PTC), and joint Viterbi estimation based on the maximum likelihood principle. In the case of co-channel FM-modulated voice signals, the CCPLL and PTC methods typically outperform the maximum likelihood estimators when the modulation parameters are dissimilar. However, as the modulation parameters become identical, joint Viterbi estimation provides a more robust estimate of the co-channel signals and does not suffer as much from "signal switching", which especially plagues the CCPLL approach. Good performance for the PTC requires both dissimilar modulation parameters and a priori knowledge of the co-channel signal amplitudes. The CCPLL and joint Viterbi estimators, on the other hand, incorporate accurate amplitude estimates. In addition, application of the joint Viterbi algorithm to demodulating co-channel digital (BPSK) signals in a multipath environment is discussed. It is shown in this case that if the interference is sufficiently small, a single trellis model is most effective in demodulating the co-channel signals.

  1. Topological Interference Management for K-User Downlink Massive MIMO Relay Network Channel.

    PubMed

    Selvaprabhu, Poongundran; Chinnadurai, Sunil; Li, Jun; Lee, Moon Ho

    2017-08-17

    In this paper, we study the emergence of topological interference alignment and the characterizing features of a multi-user broadcast interference relay channel. We propose an alternative transmission strategy named the relay space-time interference alignment (R-STIA) technique, in which a K-user multiple-input-multiple-output (MIMO) interference channel has massive antennas at the transmitter and relay. Severe interference from unknown transmitters affects the downlink relay network channel and degrades the system performance. An additional (unintended) receiver is introduced in the proposed R-STIA technique to overcome the above problem, since it has the ability to decode the desired signals for the intended receiver by considering cooperation between the receivers. The additional receiver also helps in recovering and reconstructing the interference signals with limited channel state information at the relay (CSIR). The Alamouti space-time transmission technique and minimum mean square error (MMSE) linear precoder are also used in the proposed scheme to detect the presence of interference signals. Numerical results show that the proposed R-STIA technique achieves a better performance in terms of the bit error rate (BER) and sum-rate compared to the existing broadcast channel schemes.

  2. Topological Interference Management for K-User Downlink Massive MIMO Relay Network Channel

    PubMed Central

    Li, Jun; Lee, Moon Ho

    2017-01-01

    In this paper, we study the emergence of topological interference alignment and the characterizing features of a multi-user broadcast interference relay channel. We propose an alternative transmission strategy named the relay space-time interference alignment (R-STIA) technique, in which a K-user multiple-input-multiple-output (MIMO) interference channel has massive antennas at the transmitter and relay. Severe interference from unknown transmitters affects the downlink relay network channel and degrades the system performance. An additional (unintended) receiver is introduced in the proposed R-STIA technique to overcome the above problem, since it has the ability to decode the desired signals for the intended receiver by considering cooperation between the receivers. The additional receiver also helps in recovering and reconstructing the interference signals with limited channel state information at the relay (CSIR). The Alamouti space-time transmission technique and minimum mean square error (MMSE) linear precoder are also used in the proposed scheme to detect the presence of interference signals. Numerical results show that the proposed R-STIA technique achieves a better performance in terms of the bit error rate (BER) and sum-rate compared to the existing broadcast channel schemes. PMID:28817071

  3. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    PubMed

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  4. Multiple-access relaying with network coding: iterative network/channel decoding with imperfect CSI

    NASA Astrophysics Data System (ADS)

    Vu, Xuan-Thang; Renzo, Marco Di; Duhamel, Pierre

    2013-12-01

    In this paper, we study the performance of the four-node multiple-access relay channel with binary Network Coding (NC) in various Rayleigh fading scenarios. In particular, two relay protocols, decode-and-forward (DF) and demodulate-and-forward (DMF), are considered. In the first case, channel decoding is performed at the relay before NC and forwarding. In the second case, only demodulation is performed at the relay. The contributions of the paper are as follows: (1) two joint network/channel decoding (JNCD) algorithms, which take into account possible decoding errors at the relay, are developed for both DF and DMF relay protocols; (2) both perfect channel state information (CSI) and imperfect CSI at the receivers are studied. In addition, we propose a practical method to forward the relay's error characterization to the destination (quantization of the BER), which results in a fully practical scheme. (3) We show by simulation that the number of pilot symbols only affects the coding gain but not the diversity order, and that quantization accuracy affects both coding gain and diversity order. Moreover, when compared with recent results using the DMF protocol, our proposed DF protocol algorithm shows an improvement of 4 dB in fully interleaved Rayleigh fading channels and 0.7 dB in block Rayleigh fading channels.
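The binary NC operation at the relay is a bitwise XOR of the two users' packets; a destination that has decoded one user directly can recover the other from the relayed combination. A toy sketch (modulation, fading and channel decoding are omitted; the packet contents are illustrative):

```python
def xor_bits(a, b):
    """Binary network coding: bitwise XOR of two equal-length packets."""
    return [x ^ y for x, y in zip(a, b)]

pkt1 = [1, 0, 1, 1, 0]   # user 1's packet (illustrative)
pkt2 = [0, 1, 1, 0, 0]   # user 2's packet (illustrative)

relayed = xor_bits(pkt1, pkt2)        # what the relay forwards
recovered2 = xor_bits(pkt1, relayed)  # destination already decoded pkt1

print(recovered2 == pkt2)  # True: XOR is its own inverse
```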

  5. Fifty-year flood-inundation maps for La Lima, Honduras

    USGS Publications Warehouse

    Mastin, Mark C.; Olsen, T.D.

    2002-01-01

    After the devastating floods caused by Hurricane Mitch in 1998, maps of the areas and depths of the 50-year-flood inundation at 15 municipalities in Honduras were prepared as a tool for agencies involved in reconstruction and planning. This report, which is one in a series of 15, presents maps of areas in the municipality of La Lima that would be inundated by Rio Chamelecon with a discharge of 500 cubic meters per second, the approximate capacity of the river channel through the city of La Lima. The 50-year flood (2,400 cubic meters per second), the original design flow to be mapped, would inundate the entire area surveyed for this municipality. Because water-surface elevations of the 50-year flood could not be mapped properly without substantially expanding the area of the survey, the available data were used instead to estimate the channel capacity of Rio Chamelecon in La Lima by trial-and-error runs of different flows in a numerical model and to estimate the increase in height of levees needed to contain flows of 1,000 and 2,400 cubic meters per second. Geographic Information System (GIS) coverages of the flood inundation are available on a computer in the municipality of La Lima as part of the Municipal GIS project and on the Internet at the Flood Hazard Mapping Web page (http://mitchnts1.cr.usgs.gov/projects/floodhazard.html). These coverages allow users to view the flood inundation in much more detail than is possible using the maps in this report. Water-surface elevations for various discharges on Rio Chamelecon at La Lima were determined using HEC-RAS, a one-dimensional, steady-flow, step-backwater computer program. The channel and floodplain cross sections used in HEC-RAS were developed from an airborne light-detection-and-ranging (LIDAR) topographic survey of the area and ground surveys at three bridges. 
Top-of-levee or top-of-channel-bank elevations and locations at the cross sections were critical to estimating the channel capacity of Rio Chamelecon. These elevations and locations are provided along with the water-surface elevations for the 500-cubic-meter-per-second flow of Rio Chamelecon. Also, water-surface elevations of the 1,000 and 2,400 cubic-meter-per-second flows are provided, assuming that the existing levees are raised to contain the flows.

  6. Active phase correction of high resolution silicon photonic arrayed waveguide gratings

    DOE PAGES

    Gehl, M.; Trotter, D.; Starbuck, A.; ...

    2017-03-10

    Arrayed waveguide gratings provide flexible spectral filtering functionality for integrated photonic applications. Achieving narrow channel spacing requires long optical path lengths which can greatly increase the footprint of devices. High index contrast waveguides, such as those fabricated in silicon-on-insulator wafers, allow tight waveguide bends which can be used to create much more compact designs. Both the long optical path lengths and the high index contrast contribute to significant optical phase error as light propagates through the device. Thus, silicon photonic arrayed waveguide gratings require active or passive phase correction following fabrication. We present the design and fabrication of compact silicon photonic arrayed waveguide gratings with channel spacings of 50, 10 and 1 GHz. The largest device, with 11 channels of 1 GHz spacing, has a footprint of only 1.1 cm2. Using integrated thermo-optic phase shifters, the phase error is actively corrected. We present two methods of phase error correction and demonstrate state-of-the-art cross-talk performance for high index contrast arrayed waveguide gratings. As a demonstration of possible applications, we perform RF channelization with 1 GHz resolution. In addition, we generate unique spectral filters by applying non-zero phase offsets calculated by the Gerchberg-Saxton algorithm.
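Conceptually, active correction drives each arm's thermo-optic shifter so that the arm's measured phase error is cancelled modulo 2π. A schematic sketch (the measured errors are invented, and heater calibration and the paper's two actual correction methods are not modeled):

```python
import math

def heater_settings(phase_errors):
    """Choose, for each arrayed-waveguide arm, the non-negative thermo-optic
    phase that cancels the measured fabrication error modulo 2*pi."""
    return [(-e) % (2 * math.pi) for e in phase_errors]

measured = [0.3, -1.2, 2.9, 0.0]   # per-arm phase errors in radians (illustrative)
applied = heater_settings(measured)

# After correction, every arm's total phase is congruent to 0 mod 2*pi.
residual = [(e + a) % (2 * math.pi) for e, a in zip(measured, applied)]
print(all(min(r, abs(r - 2 * math.pi)) < 1e-9 for r in residual))  # True
```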

  7. Active phase correction of high resolution silicon photonic arrayed waveguide gratings.

    PubMed

    Gehl, M; Trotter, D; Starbuck, A; Pomerene, A; Lentine, A L; DeRose, C

    2017-03-20

    Arrayed waveguide gratings provide flexible spectral filtering functionality for integrated photonic applications. Achieving narrow channel spacing requires long optical path lengths which can greatly increase the footprint of devices. High index contrast waveguides, such as those fabricated in silicon-on-insulator wafers, allow tight waveguide bends which can be used to create much more compact designs. Both the long optical path lengths and the high index contrast contribute to significant optical phase error as light propagates through the device. Therefore, silicon photonic arrayed waveguide gratings require active or passive phase correction following fabrication. Here we present the design and fabrication of compact silicon photonic arrayed waveguide gratings with channel spacings of 50, 10 and 1 GHz. The largest device, with 11 channels of 1 GHz spacing, has a footprint of only 1.1 cm2. Using integrated thermo-optic phase shifters, the phase error is actively corrected. We present two methods of phase error correction and demonstrate state-of-the-art cross-talk performance for high index contrast arrayed waveguide gratings. As a demonstration of possible applications, we perform RF channelization with 1 GHz resolution. Additionally, we generate unique spectral filters by applying non-zero phase offsets calculated by the Gerchberg-Saxton algorithm.

  8. Feedback Power Control Strategies in Wireless Sensor Networks with Joint Channel Decoding

    PubMed Central

    Abrardo, Andrea; Ferrari, Gianluigi; Martalò, Marco; Perna, Fabio

    2009-01-01

    In this paper, we derive feedback power control strategies for block-faded multiple access schemes with correlated sources and joint channel decoding (JCD). In particular, upon the derivation of the feasible signal-to-noise ratio (SNR) region for the considered multiple access schemes, i.e., the multidimensional SNR region where error-free communications are, in principle, possible, two feedback power control strategies are proposed: (i) a classical feedback power control strategy, which aims at equalizing all link SNRs at the access point (AP), and (ii) an innovative optimized feedback power control strategy, which tries to make the network operational point fall in the feasible SNR region at the lowest overall transmit energy consumption. These strategies will be referred to as “balanced SNR” and “unbalanced SNR,” respectively. While they require, in principle, an unlimited power control range at the sources, we also propose practical versions with a limited power control range. We preliminarily consider a scenario with orthogonal links and ideal feedback. Then, we analyze the robustness of the proposed power control strategies to possible non-idealities, in terms of residual multiple access interference and noisy feedback channels. Finally, we successfully apply the proposed feedback power control strategies to a limiting case of the class of considered multiple access schemes, namely a central estimating officer (CEO) scenario, where the sensors observe noisy versions of a common binary information sequence and the AP's goal is to estimate this sequence by properly fusing the soft-output information produced by the JCD algorithm. PMID:22291536
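In the CEO limiting case, the AP's fusion of the sensors' noisy observations of a common binary sequence can be illustrated, in its simplest hard-decision form, by a per-bit majority vote (the paper fuses soft JCD outputs instead; the flip probability and sensor count below are invented):

```python
import random

def majority_fuse(observations):
    """Per-bit majority vote across sensors; a hard-decision stand-in for
    the soft-output fusion performed at the access point."""
    n = len(observations)
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*observations)]

random.seed(1)
truth = [random.randint(0, 1) for _ in range(200)]   # common binary sequence
# Each of 7 sensors observes the sequence through a 10%-flip binary channel.
sensors = [[b ^ (random.random() < 0.1) for b in truth] for _ in range(7)]

fused = majority_fuse(sensors)
fused_errors = sum(f != t for f, t in zip(fused, truth))
single_errors = sum(s != t for s, t in zip(sensors[0], truth))
print(fused_errors, single_errors)  # fusion leaves far fewer bit errors
```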

  9. Cement bond evaluation method in horizontal wells using segmented bond tool

    NASA Astrophysics Data System (ADS)

    Song, Ruolong; He, Li

    2018-06-01

    Most of the existing cement evaluation technologies suffer from tool eccentralization due to gravity in highly deviated wells and horizontal wells. This paper proposes a correction method to lessen the effects of tool eccentralization on evaluation results of cement bond using a segmented bond tool, which has an omnidirectional sonic transmitter and eight segmented receivers evenly arranged around the tool 2 ft from the transmitter. Using a 3-D finite difference parallel numerical simulation method, we investigate the logging responses of the centred and eccentred segmented bond tool in a variety of bond conditions. From the numerical results, we find that the tool eccentricity and channel azimuth can be estimated from the measured sector amplitudes. The average sector amplitude measured with an eccentred tool can be corrected to the centred-tool value, and the corrected amplitude is then used to calculate the channel size. The proposed method is applied to both synthetic and field data. For synthetic data, it turns out that this method can estimate the tool eccentricity with small error, and the bond map is improved after correction. For field data, the tool eccentricity has a good agreement with the measured well deviation angle. Though this method still suffers from low accuracy in calculating the channel azimuth, the credibility of the corrected bond map is improved, especially in horizontal wells. It gives us a choice to evaluate the bond condition in horizontal wells using an existing logging tool. The numerical results in this paper can aid in understanding measurements of the segmented tool in both vertical and horizontal wells.

  10. Lightning energetics: Estimates of energy dissipation in channels, channel radii, and channel-heating risetimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borovsky, J.E.

    1998-05-01

    In this report, several lightning-channel parameters are calculated with the aid of an electrodynamic model of lightning. The electrodynamic model describes dart leaders and return strokes as electromagnetic waves that are guided along conducting lightning channels. According to the model, electrostatic energy is delivered to the channel by a leader, where it is stored around the outside of the channel; subsequently, the return stroke dissipates this locally stored energy. In this report this lightning-energy-flow scenario is developed further. Then the energy dissipated per unit length in lightning channels is calculated, where this quantity is now related to the linear charge density on the channel, not to the cloud-to-ground electrostatic potential difference. Energy conservation is then used to calculate the radii of lightning channels: their initial radii at the onset of return strokes and their final radii after the channels have pressure expanded. Finally, the risetimes for channel heating during return strokes are calculated by defining an energy-storage radius around the channel and by estimating the radial velocity of energy flow toward the channel during a return stroke. In three appendices, values for the linear charge densities on lightning channels are calculated, estimates of the total length of branch channels are obtained, and values for the cloud-to-ground electrostatic potential difference are estimated. © 1998 American Geophysical Union

  11. Aerosol anomalies in Nimbus-7 coastal zone color scanner data obtained in Japan area

    NASA Technical Reports Server (NTRS)

    Fukushima, Hajime; Sugimori, Yasuhiro; Toratani, Mitsuhiro; Smith, Raymond C.; Yasuda, Yoshizumi

    1989-01-01

    About 400 CZCS (coastal zone color scanner) scenes covering the Japan area in November 1978-May 1982 were processed to study the applicability of the Gordon-Clark atmospheric correction scheme which produces water-leaving radiances Lw at 443 nm, 520 nm, and 550 nm as well as phytoplankton pigment maps. Typical spring-fall aerosol radiance in the images was found to be 0.8-1.5 micro-W/sq cm-nm-sr, which is about 50 percent more than reported for the US eastern coastal images. The correction for about half the data resulted in negative Lw (443) values, implying overestimation of the aerosol effect for this channel. Several possible reasons for this are considered, including deviation of the aerosol optical thickness tau(a) at 443 nm from that estimated by Angstrom's exponential law, which the algorithm assumes. The analysis shows that, assuming the use of the Gordon-Clark algorithm, and for a pigment concentration of about 1 microgram/l, -40 percent to +100 percent error in satellite estimates is common. Although this does not fully explain the negative Lw (443) in the satellite data, it seems to contribute to the problem significantly, together with other error sources, including one in the sensor calibration.
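The Ångström-law assumption examined in this record can be written τ_a(λ) = τ_a(λ_0)·(λ/λ_0)^(−α). A sketch extrapolating aerosol optical thickness from a red reference band down to the 443 nm channel (the reference value and exponent are illustrative, not values from the study):

```python
def angstrom_extrapolate(tau_ref, lam_ref, lam, alpha):
    """Aerosol optical thickness under Angstrom's exponential law:
    tau(lam) = tau(lam_ref) * (lam / lam_ref) ** (-alpha)."""
    return tau_ref * (lam / lam_ref) ** (-alpha)

tau_670 = 0.10   # illustrative aerosol optical thickness at a 670 nm reference band
alpha = 1.0      # illustrative Angstrom exponent

tau_443 = angstrom_extrapolate(tau_670, 670.0, 443.0, alpha)
# If the true tau(443) falls below this power-law extrapolation, the
# correction overestimates the aerosol effect, pushing Lw(443) negative.
print(round(tau_443, 4))  # 0.1512
```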

  12. Design of a fuzzy differential evolution algorithm to predict non-deposition sediment transport

    NASA Astrophysics Data System (ADS)

    Ebtehaj, Isa; Bonakdari, Hossein

    2017-12-01

    Since the flow entering a sewer contains solid matter, deposition at the bottom of the channel is inevitable. It is difficult to understand the complex, three-dimensional mechanism of sediment transport in sewer pipelines. Therefore, a method to estimate the limiting velocity is necessary for optimal designs. Due to the inability of gradient-based algorithms to train Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for non-deposition sediment transport prediction, a new hybrid ANFIS method based on a differential evolutionary algorithm (ANFIS-DE) is developed. The training and testing performance of ANFIS-DE is evaluated using a wide range of dimensionless parameters gathered from the literature. The input combination used to estimate the densimetric Froude number (Fr) includes the volumetric sediment concentration (CV), the ratio of median particle diameter to hydraulic radius (d/R), the ratio of median particle diameter to pipe diameter (d/D) and the overall friction factor of sediment (λs). The testing results are compared with the ANFIS model and regression-based equation results. The ANFIS-DE technique predicted sediment transport at the limit of deposition with lower root mean square error (RMSE = 0.323) and mean absolute percentage error (MAPE = 0.065) and higher accuracy (R² = 0.965) than the ANFIS model and regression-based equations.
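
    The differential evolution strategy used to train the hybrid model can be sketched in miniature. This is a generic DE/rand/1/bin optimizer applied to a toy least-squares problem, not the ANFIS-DE implementation; the population size, F, and CR values are assumptions:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           iters=200, seed=0):
    """Minimal DE/rand/1/bin: mutate with the scaled difference of two
    random members, crossover with the target, keep the better vector."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            fc = f(trial)
            if fc < cost[i]:            # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Toy objective: recover (2, 3) from noiseless samples of y = 2 + 3x by
# minimizing RMSE, standing in for the Fr-prediction error in the paper.
data = [(x, 2.0 + 3.0 * x) for x in range(10)]
rmse = lambda p: (sum((p[0] + p[1] * x - y) ** 2 for x, y in data)
                  / len(data)) ** 0.5
best, err = differential_evolution(rmse, [(-10.0, 10.0), (-10.0, 10.0)])
print([round(v, 2) for v in best], round(err, 4))
```

    Because DE needs only objective values, never gradients, it can train fuzzy membership parameters where gradient-based ANFIS training stalls, which is the motivation the abstract gives.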

  13. Extraction of incident irradiance from LWIR hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Lahaie, Pierre

    2014-10-01

    The atmospheric correction of thermal hyperspectral imagery can be separated into two distinct processes: Atmospheric Compensation (AC) and Temperature and Emissivity Separation (TES). TES requires as input, at each pixel, the ground-leaving radiance and the atmospheric downwelling irradiance, which are the outputs of the AC process. Extracting the downwelling irradiance from imagery requires assumptions about the nature of some of the pixels, the sensor, and the atmosphere. Another difficulty is that the sensor's spectral response is often not well characterized. To deal with this unknown, we defined a spectral mean operator that is used to filter the ground-leaving radiance and a downwelling irradiance computed with MODTRAN. A user selects a number of pixels in the image for which the emissivity is assumed to be known. The emissivity of these pixels is assumed to be smooth, so that the downwelling irradiance is the only spectrally fast-varying variable. Using these assumptions we built an algorithm to estimate the downwelling irradiance. The algorithm is applied to all the selected pixels, and the estimated irradiance is the average of the resulting estimates over the spectral channels. The algorithm performs well in simulation, and results are shown for errors in the assumed emissivity and in the atmospheric profiles. Sensor noise mainly influences the required number of pixels.
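
    A minimal sketch of a spectral mean operator of the kind described, here a simple centered moving average over adjacent channels (the window width is an assumption), which suppresses spectrally fast variations while preserving a smooth baseline:

```python
import math

def spectral_mean(values, width=9):
    """Sketch of a spectral mean operator: a centered moving average over
    adjacent spectral channels (width assumed odd), shrunk at the edges."""
    half = width // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# A smooth baseline (slowly varying, emissivity-like term) plus a fast
# spectral ripple (irradiance-like term); the operator keeps the baseline.
spectrum = [1.0 + 0.001 * i + 0.2 * math.sin(2.5 * i) for i in range(100)]
smoothed = spectral_mean(spectrum)
```

    The difference between the raw and filtered spectra isolates the fast-varying component, which is the separation the assumed-smooth-emissivity argument relies on.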

  14. LDPC-PPM Coding Scheme for Optical Communication

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael

    2009-01-01

    In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
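
    The bit-to-PPM mapping and the Poisson point-process channel model can be sketched as follows; the signal and background photon means are illustrative assumptions, and the Poisson sampler is a generic inversion method, not flight code:

```python
import math
import random

def bits_to_ppm(bits, bits_per_symbol):
    """Map bits to PPM symbols: each group of bits selects one of
    M = 2**bits_per_symbol pulse slots."""
    return [int("".join(map(str, bits[i:i + bits_per_symbol])), 2)
            for i in range(0, len(bits), bits_per_symbol)]

def ppm_slot_counts(symbol, M, ns, nb, rng):
    """Poisson point-process channel: the pulsed slot collects signal plus
    background photons (mean ns + nb); the other slots background only (nb)."""
    counts = []
    for slot in range(M):
        mean = ns + nb if slot == symbol else nb
        # Poisson sample by inversion (Knuth's method, fine for small means)
        limit, k, p = math.exp(-mean), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        counts.append(k)
    return counts

rng = random.Random(1)
syms = bits_to_ppm([1, 0, 1, 1], 2)   # 2 bits/symbol -> 4-PPM
print(syms)                            # [2, 3]
counts = ppm_slot_counts(syms[0], 4, ns=5.0, nb=0.1, rng=rng)
print(counts)
```

    A demodulator would form slot likelihoods from such counts; in the LDPC-PPM scheme those soft values feed the LDPC decoder.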

  15. An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers.

    PubMed

    Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan

    2017-11-18

    Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccuracy in tracking error estimation degrades the ability of the signal tracking loops and the accuracy of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or by a Kalman filter-based pre-filter. Pre-filters can be divided into two categories: coherent and non-coherent. This paper focuses on performance improvements to the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration, which are the basis of tracking error estimation, are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived from the mathematical model of coherent integration. Secondly, the statistical properties of the observation noise of the non-coherent pre-filter are studied through Monte Carlo simulation so that the observation noise variance matrix can be set correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced into the non-coherent pre-filter design, which extends its effective working range for carrier phase error estimation from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively in a carefully designed experimental scenario. The pre-filters outperform the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, compared with the coherent pre-filter.
The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when carrier-to-noise density belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz.
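
    A minimal sketch of the four-quadrant arctangent (ATAN2) discriminator, whose (-0.5, 0.5) cycle unambiguous range is the benchmark the enhanced pre-filter matches; the prompt correlator outputs here are noiseless and illustrative:

```python
import math

def atan2_discriminator(i_p, q_p):
    """Four-quadrant arctangent (ATAN2) phase discriminator: estimates the
    carrier phase error (rad) from the prompt correlator outputs. Its
    unambiguous range is (-0.5, 0.5) cycle, i.e. (-pi, pi) rad."""
    return math.atan2(q_p, i_p)

# Noiseless prompt correlator outputs for a true phase error of 0.3 cycle:
true_err_cycles = 0.3
i_p = math.cos(2 * math.pi * true_err_cycles)
q_p = math.sin(2 * math.pi * true_err_cycles)
est_cycles = atan2_discriminator(i_p, q_p) / (2 * math.pi)
print(round(est_cycles, 3))  # 0.3
```

    A two-quadrant arctangent (atan(Q/I)) would fold this 0.3 cycle error into (-0.25, 0.25) cycle, which is why extending the pre-filter's range to (-0.5, 0.5) cycle matters.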

  16. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters

    PubMed Central

    Park, Chan Gook

    2018-01-01

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms. PMID:29690539
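
    The measurement-update step at the heart of each cascaded filter can be sketched with a scalar Kalman update; the heading-error value and noise levels below are hypothetical, and the real TCKF uses full state vectors rather than a scalar:

```python
import random

def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse the state estimate x (with
    variance P) with a measurement z (with variance R)."""
    K = P / (P + R)           # Kalman gain
    return x + K * (z - x), (1 - K) * P

# Hypothetical: a constant 2.0 deg heading error observed through noisy
# course-angle-error measurements (0.5 deg std), as in the second stage.
rng = random.Random(42)
x, P = 0.0, 10.0              # initial estimate and variance
for _ in range(50):
    z = 2.0 + rng.gauss(0.0, 0.5)
    x, P = kalman_update(x, P, z, R=0.25)
print(round(x, 2), round(P, 4))
```

    With no process noise the update reduces to a recursive weighted mean, so P shrinks toward zero and x converges on the measured error, which is how repeated course-angle-error measurements pin down the heading error.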

  17. Channel Estimation and Pilot Design for Massive MIMO Systems with Block-Structured Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Lv, ZhuoKai; Yang, Tiejun; Zhu, Chunhua

    2018-03-01

    By utilizing compressive sensing (CS), channel estimation methods can reduce the pilot overhead and improve spectrum efficiency. This correspondence explores channel estimation and pilot design for massive MIMO systems with the help of block-structured CS. A pilot design scheme based on stochastic search is proposed to minimize the block coherence of the aggregate system matrix. Moreover, a block sparsity adaptive matching pursuit (BSAMP) algorithm under the common sparsity model is proposed so that the channel can be estimated precisely. Simulation results show that the proposed superimposed pilot design and the BSAMP algorithm provide better channel estimation than existing methods.
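
    The BSAMP algorithm itself is not specified in the abstract, but the greedy sparse-recovery idea behind such CS channel estimators can be illustrated with plain orthogonal matching pursuit on a partial-Hadamard pilot matrix; the dimensions, tap positions, and matrix choice are all illustrative assumptions:

```python
def hadamard(n):
    """Sylvester construction of an n x n (+1/-1) Hadamard matrix."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def solve(M, b):
    """Solve a small dense linear system M x = b by Gaussian elimination."""
    n = len(b)
    A = [M[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily estimate a k-sparse channel h
    from pilot measurements y = A h."""
    m, n = len(A), len(A[0])
    support, r, x = [], y[:], []
    for _ in range(k):
        # pick the column most correlated with the current residual
        corr = [abs(sum(A[i][j] * r[i] for i in range(m))) for j in range(n)]
        support.append(max((j for j in range(n) if j not in support),
                           key=corr.__getitem__))
        # least squares over the chosen columns via normal equations
        G = [[sum(A[i][p] * A[i][q] for i in range(m)) for q in support]
             for p in support]
        b = [sum(A[i][p] * y[i] for i in range(m)) for p in support]
        x = solve(G, b)
        r = [y[i] - sum(A[i][p] * xv for p, xv in zip(support, x))
             for i in range(m)]
    h = [0.0] * n
    for p, xv in zip(support, x):
        h[p] = xv
    return h

# 8 pilot measurements (partial Hadamard rows) of a 16-tap channel with
# two nonzero taps; OMP with k = 2 recovers both from y = A h.
A = hadamard(16)[:8]
h_true = [0.0] * 16
h_true[3], h_true[7] = 1.0, -0.5
y = [sum(A[i][j] * h_true[j] for j in range(16)) for i in range(8)]
h_est = omp(A, y, k=2)
```

    The stochastic pilot search described in the abstract plays the role of choosing the measurement matrix so that its (block) coherence stays low, which is what makes greedy recovery like this reliable.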

  18. Physical Validation of TRMM TMI and PR Monthly Rain Products Over Oklahoma

    NASA Technical Reports Server (NTRS)

    Fisher, Brad L.

    2004-01-01

    The Tropical Rainfall Measuring Mission (TRMM) provides monthly rainfall estimates using data collected by the TRMM satellite. These estimates cover a substantial fraction of the earth's surface. The physical validation of TRMM estimates involves corroborating the accuracy of spaceborne estimates of areal rainfall by inferring errors and biases from ground-based rain estimates. The TRMM error budget consists of two major sources of error: retrieval and sampling. Sampling errors are intrinsic to the process of estimating monthly rainfall and occur because the satellite extrapolates monthly rainfall from a small subset of measurements collected only during satellite overpasses. Retrieval errors, on the other hand, are related to the process of collecting measurements while the satellite is overhead. One of the big challenges confronting the TRMM validation effort is how to best estimate these two main components of the TRMM error budget, which are not easily decoupled. This four-year study computed bulk sampling and retrieval errors for the TRMM microwave imager (TMI) and the precipitation radar (PR) by applying a technique that sub-samples gauge data at TRMM overpass times. Gridded monthly rain estimates are then computed from the monthly bulk statistics of the collected samples, providing a sensor-dependent gauge rain estimate that is assumed to include a TRMM equivalent sampling error. The sub-sampled gauge rain estimates are then used in conjunction with the monthly satellite and gauge (without sub-sampling) estimates to decouple retrieval and sampling errors. The computed mean sampling errors for the TMI and PR were 5.9% and 7.7%, respectively, in good agreement with theoretical predictions. The PR year-to-year retrieval biases exceeded corresponding TMI biases, but it was found that these differences were partially due to negative TMI biases during cold months and positive TMI biases during warm months.
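
    The sub-sampling technique can be sketched as follows: a gauge record is sampled only at overpass times, and the difference between the sub-sampled and full monthly means isolates a sampling-error term. The rain statistics and overpass schedule below are invented for illustration:

```python
import random

def monthly_mean(series, sample_times=None):
    """Mean rain rate over a month; if sample_times is given, use only the
    values observed at those (overpass) times."""
    vals = series if sample_times is None else [series[t] for t in sample_times]
    return sum(vals) / len(vals)

# Hypothetical month of hourly gauge rain rates (mm/h): 10% wet hours,
# exponentially distributed intensities when it rains.
rng = random.Random(7)
hours = 30 * 24
rain = [rng.expovariate(2.0) if rng.random() < 0.1 else 0.0
        for _ in range(hours)]

true_mean = monthly_mean(rain)
# Roughly one overpass per day, at varying hours
overpasses = sorted(rng.sample(range(hours), 30))
sampled_mean = monthly_mean(rain, overpasses)
sampling_error_pct = (sampled_mean - true_mean) / true_mean * 100.0
print(round(true_mean, 4), round(sampled_mean, 4),
      round(sampling_error_pct, 1))
```

    Because the sub-sampled gauge estimate shares the satellite's overpass schedule, comparing it against the full gauge record exposes the sampling component of the error budget while leaving retrieval error to the satellite-versus-gauge comparison.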

  19. GPM Precipitation Estimates over the Walnut Gulch Experimental Watershed/LTAR site in Southeastern Arizona

    NASA Astrophysics Data System (ADS)

    Goodrich, D. C.; Tan, J.; Petersen, W. A.; Unkrich, C. C.; Demaria, E. M.; Hazenberg, P.; Lakshmi, V.

    2017-12-01

    Precipitation profiles from the GPM Core Observatory Dual-frequency Precipitation Radar (DPR) form part of the a priori database used in GPM Goddard Profiling (GPROF) algorithm passive microwave radiometer retrievals of rainfall. The GPROF retrievals are in turn used as high quality precipitation estimates in gridded products such as IMERG. Due to the high and variable emissivity of land surfaces, GPROF performs precipitation retrievals as a function of surface classes. As such, different surface types may possess different error characteristics, especially over arid regions where high quality ground measurements are often lacking. Importantly, the emissive properties of land also result in GPROF rainfall estimates being driven primarily by the higher frequency radiometer channels (e.g., > 89 GHz), where precipitation signals are most sensitive to coupling between the ice phase and rainfall production. In this study, we evaluate the rainfall estimates from the Ku channel of the DPR as well as GPROF estimates from various passive microwave sensors. Our evaluation is conducted at the level of individual satellite pixels (5 to 15 km in diameter), against a dense network of weighing rain gauges (90 gauges in 150 km²) in the USDA-ARS Walnut Gulch Experimental Watershed and Long-Term Agroecosystem Research (LTAR) site in southeastern Arizona. The multiple gauges in each satellite pixel and precise accumulation about the overpass time allow a spatially and temporally representative comparison between the satellite estimates and the ground reference. Over Walnut Gulch, both the Ku and GPROF estimates struggle to discriminate between rain and no rain: probabilities of detection are relatively high, but false alarm ratios are also high. The rain intensities possess a negative bias across nearly all sensors. It is likely that the storm types, arid conditions, and highly variable precipitation regime present a challenge to both rainfall retrieval algorithms. 
An array of ground-based sensors is being deployed during the 2017 monsoon season to better understand possible reasons for this discrepancy.
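
    Probability of detection and false alarm ratio, the delineation scores the study reports, come from a rain/no-rain contingency table; the pixel pairs and 0.1 mm/h threshold below are illustrative:

```python
def detection_scores(satellite, gauge, threshold=0.1):
    """Rain/no-rain contingency scores (rates in mm/h): probability of
    detection POD = hits / (hits + misses), and false alarm ratio
    FAR = false_alarms / (hits + false_alarms)."""
    hits = misses = false_alarms = 0
    for s, g in zip(satellite, gauge):
        if g >= threshold:
            if s >= threshold:
                hits += 1
            else:
                misses += 1
        elif s >= threshold:
            false_alarms += 1
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Hypothetical (satellite, gauge) pixel pairs in mm/h
pairs = [(0.0, 0.0), (1.2, 0.9), (0.4, 0.0), (0.0, 0.6),
         (2.0, 1.5), (0.3, 0.0), (0.8, 1.1), (0.0, 0.0)]
pod, far = detection_scores([s for s, _ in pairs], [g for _, g in pairs])
print(round(pod, 2), round(far, 2))  # 0.75 0.4
```

    "Relatively high POD but high FAR", the pattern found over Walnut Gulch, corresponds to catching most gauge-observed rain while also flagging rain in many dry pixels.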

  20. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.
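
    The flavor of such an estimator can be conveyed with a simple shrinkage sketch that pulls each sample variance toward the grand mean; the fixed shrinkage weight here is an arbitrary assumption, whereas a true empirical Bayes estimator would derive it from the data:

```python
def shrink_variances(sample_vars, weight=0.5):
    """Shrinkage sketch: pull each sample variance toward the grand mean.
    The weight (assumed here) plays the role that the estimated prior
    strength would play in a genuine empirical Bayes estimator."""
    grand = sum(sample_vars) / len(sample_vars)
    return [weight * grand + (1.0 - weight) * v for v in sample_vars]

sample_vars = [0.5, 2.0, 1.1, 3.4]    # hypothetical noisy variance estimates
shrunk = shrink_variances(sample_vars)
print([round(v, 3) for v in shrunk])  # [1.125, 1.875, 1.425, 2.575]
```

    When each individual variance is estimated from few observations, trading a little bias for a large variance reduction in this way lowers the average squared error loss, which is the property the abstract claims for the smooth empirical Bayes estimator.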
