A review of channel selection algorithms for EEG signal processing
NASA Astrophysics Data System (ADS)
Alotaiby, Turky; El-Samie, Fathi E. Abd; Alshebeili, Saleh A.; Ahmad, Ishtiaq
2015-12-01
Digital processing of electroencephalography (EEG) signals is now widely used in applications such as seizure detection/prediction, motor imagery classification, mental task classification, emotion classification, sleep state classification, and drug effects diagnosis. With the large number of EEG channels acquired, it has become apparent that efficient channel selection algorithms are needed, with varying importance from one application to another. The main purpose of the channel selection process is threefold: (i) to reduce the computational complexity of any processing task performed on EEG signals by selecting the relevant channels and hence extracting the features of major importance, (ii) to reduce the amount of overfitting that may arise from the use of unnecessary channels and thereby improve performance, and (iii) to reduce the setup time in some applications. Signal processing tools such as time-domain analysis, power spectral estimation, and the wavelet transform have been used for feature extraction, and hence for channel selection, in most channel selection algorithms. In addition, different evaluation approaches such as filtering, wrapper, embedded, hybrid, and human-based techniques have been widely used to evaluate the selected subset of channels. In this paper, we survey recent developments in EEG channel selection methods along with their applications and classify these methods according to the evaluation approach.
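A filter-style channel selector of the kind surveyed can be sketched in a few lines. The scoring criterion below (absolute between-class variance difference) and the channel names in the example are illustrative stand-ins for the Fisher score, mutual information, or other measures the surveyed methods actually use:

```python
def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def select_channels(class_a, class_b, k):
    """Toy filter-approach channel selection: rank each channel by the
    absolute difference of its per-class signal variance (a stand-in
    for the discriminative scores used in real selectors) and keep
    the k highest-scoring channels. class_a/class_b map a channel
    name to a list of samples recorded under each class."""
    scores = {ch: abs(variance(class_a[ch]) - variance(class_b[ch]))
              for ch in class_a}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

A wrapper or embedded approach would instead score candidate subsets by classifier performance; the ranking-and-truncate structure above is what makes filter methods cheap.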
EDMC: An enhanced distributed multi-channel anti-collision algorithm for RFID reader system
NASA Astrophysics Data System (ADS)
Zhang, YuJing; Cui, Yinghua
2017-05-01
In this paper, we propose an enhanced distributed multi-channel reader anti-collision algorithm for RFID environments, based on the existing distributed multi-channel reader anti-collision algorithm (DiMCA). We propose a monitoring method to decide whether a reader has received the latest control news after it selects its data channel. Simulation results show that the proposed algorithm improves interrogation delay.
Silva, Adão; Gameiro, Atílio
2014-01-01
We present in this work a low-complexity algorithm to solve the sum rate maximization problem in multiuser MIMO broadcast channels with downlink beamforming. Our approach decouples the user selection problem from the resource allocation problem, and its main goal is to create a set of quasi-orthogonal users. The proposed algorithm exploits physical metrics of the wireless channels that can be easily computed, in such a way that a null space projection power can be approximated efficiently. Based on the derived metrics, we present a mathematical model that describes the dynamics of the user selection process, which renders the user selection problem an integer linear program. Numerical results show that our approach is highly efficient at forming groups of quasi-orthogonal users compared to previously proposed algorithms in the literature. Our user selection algorithm achieves a large portion of the optimum user selection sum rate (90%) for a moderate number of active users. PMID:24574928
Dai, Shengfa; Wei, Qingguo
2017-01-01
The common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, using a large number of channels makes common spatial pattern prone to over-fitting and the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of the whole channel set to save computational time and improve classification accuracy. In this paper, a novel method based on the backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional binary vector, with each component representing one channel. A population of binary codes is generated randomly at the beginning, and channels are then selected according to the evolution of these codes. The number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with far fewer channels compared to standard common spatial pattern with all channels.
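The binary encoding and the combined objective described above can be sketched as follows. For brevity, the backtracking search optimization algorithm itself is replaced here by a simple bit-flip hill climber, and `error_rate` is a caller-supplied stand-in for the cross-validated classifier error; both names are assumptions for illustration:

```python
import random

def objective(code, error_rate, lam=0.1):
    """Fitness in the spirit of the paper: classification error plus a
    penalty proportional to the fraction of selected channels."""
    return error_rate(code) + lam * sum(code) / len(code)

def search_channels(n_channels, error_rate, iters=200, seed=0):
    """Much-simplified stand-in for backtracking search optimization:
    random bit-flip hill climbing over binary channel-selection codes.
    A 1 at position i means channel i is kept."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(n_channels)]
    best_f = objective(best, error_rate)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(n_channels)] ^= 1   # flip one channel bit
        f = objective(cand, error_rate)
        if f < best_f:                          # keep strictly better codes
            best, best_f = cand, f
    return best, best_f
```

The population-based crossover and historical-population memory that distinguish the real backtracking search algorithm are omitted; only the representation and objective are faithful to the abstract.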
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that step-size is a critical parameter that controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an invariable step-size cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results are shown to demonstrate that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286
Experiences with serial and parallel algorithms for channel routing using simulated annealing
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1988-01-01
Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back out of local minima that may be encountered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented imposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By lifting that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of the transformation utilizes a number of heuristics while still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
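The annealing loop that underlies such routers is generic; the routing-specific parts live entirely in the `cost` function (which would penalize net overlaps) and the `neighbor` move generator (the allowed transformations). A minimal sketch of that loop, with routing state abstracted away:

```python
import math
import random

def anneal(initial, neighbor, cost, t0=10.0, alpha=0.95, steps=2000, seed=1):
    """Generic simulated-annealing minimizer of the kind used for
    channel routing: worse moves (e.g. transformations that create
    net overlaps) are accepted with probability exp(-delta/T), which
    lets the search back out of local minima as T cools."""
    rng = random.Random(seed)
    state, c = initial, cost(initial)
    best, best_c = state, c
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        cc = cost(cand)
        # always accept improvements; accept worse moves with Boltzmann prob.
        if cc < c or rng.random() < math.exp(-(cc - c) / t):
            state, c = cand, cc
            if c < best_c:
                best, best_c = state, c
        t *= alpha                     # geometric cooling schedule
    return best, best_c
```

For channel routing, `state` would be a track assignment for each net segment and `cost` a weighted sum of channel width, wire length, and overlap penalty, per the abstract's cost-function-controlled overlap idea.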
NASA Astrophysics Data System (ADS)
Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar
2011-12-01
This paper extends the recently introduced variable step-size (VSS) approach to a family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, the optimal selection of input regressors is performed during adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
Low complexity adaptive equalizers for underwater acoustic communications
NASA Astrophysics Data System (ADS)
Soflaei, Masoumeh; Azmi, Paeiz
2014-08-01
Interference caused by scattering from the surface and reflection from the bottom is one of the most important obstacles to reliable communication in shallow water channels. One of the best ways to address this problem is to use adaptive equalizers, whose performance depends strongly on the convergence rate and misadjustment error of the underlying adaptive algorithms. In this paper, the affine projection algorithm (APA), the selective regressor APA (SR-APA), the family of selective partial update (SPU) algorithms, the family of set-membership (SM) algorithms, and the selective partial update selective regressor APA (SPU-SR-APA) are compared with conventional algorithms such as least mean square (LMS) in underwater acoustic communications. We apply experimental data from the Strait of Hormuz to demonstrate the efficiency of the proposed methods over a shallow water channel. We observe that the steady-state mean square error (MSE) of the SR-APA, SPU-APA, SPU-normalized least mean square (SPU-NLMS), SPU-SR-APA, SM-APA and SM-NLMS algorithms decreases in comparison with the LMS algorithm. These algorithms also have better convergence rates than LMS-type algorithms.
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
NASA Astrophysics Data System (ADS)
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
We propose a globally convergent baud-spaced blind equalization method in this paper. The method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The unimodal cost function relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. For nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with full channel surfing reinitialization strategy, while comparable performance is obtained for constant modulus signals.
Sniffer Channel Selection for Monitoring Wireless LANs
NASA Astrophysics Data System (ADS)
Song, Yuan; Chen, Xian; Kim, Yoo-Ah; Wang, Bing; Chen, Guanling
Wireless sniffers are often used to monitor APs in wireless LANs (WLANs) for network management, fault detection, traffic characterization, and optimizing deployment. It is cost effective to deploy single-radio sniffers that can monitor multiple nearby APs. However, since nearby APs often operate on orthogonal channels, a sniffer needs to switch among multiple channels to monitor its nearby APs. In this paper, we formulate and solve two optimization problems on sniffer channel selection. Both problems require that each AP be monitored by at least one sniffer. In addition, one optimization problem requires minimizing the maximum number of channels that a sniffer listens to, and the other requires minimizing the total number of channels that the sniffers listen to. We propose a novel LP-relaxation based algorithm, and two simple greedy heuristics for the above two optimization problems. Through simulation, we demonstrate that all the algorithms are effective in achieving their optimization goals, and the LP-based algorithm outperforms the greedy heuristics.
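The second optimization problem (minimizing the total number of channels listened to, subject to every AP being monitored) is a set-cover-style problem, and the paper's greedy heuristic can be sketched directly. The data layout below is an assumption for illustration, and the sketch assumes every AP is in range of at least one sniffer:

```python
def greedy_channel_selection(sniffers, ap_channel):
    """Greedy heuristic for sniffer channel selection: repeatedly assign
    the (sniffer, channel) pair that newly covers the most unmonitored
    APs, until every AP is monitored by at least one sniffer.
    sniffers:   {sniffer_id: set of APs in radio range}
    ap_channel: {ap_id: operating channel}
    Returns {sniffer_id: set of channels it must listen on}."""
    uncovered = set(ap_channel)
    assignment = {s: set() for s in sniffers}
    while uncovered:
        # pick the pair whose channel hears the most still-uncovered APs
        s, ch = max(
            ((s, c) for s in sniffers
             for c in {ap_channel[a] for a in sniffers[s]}),
            key=lambda sc: sum(1 for a in sniffers[sc[0]]
                               if ap_channel[a] == sc[1] and a in uncovered),
        )
        assignment[s].add(ch)
        uncovered -= {a for a in sniffers[s] if ap_channel[a] == ch}
    return assignment
```

The LP-relaxation algorithm in the paper rounds a fractional solution instead; the greedy version trades its approximation guarantee for simplicity.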
Routing channels in VLSI layout
NASA Astrophysics Data System (ADS)
Cai, Hong
A number of algorithms for the automatic routing of interconnections in Very Large Scale Integration (VLSI) building-block layouts are presented. Algorithms for the topological definition of channels, the global routing and the geometrical definition of channels are presented. In contrast to traditional approaches, the definition and ordering of the channels is done after the global routing. This approach has the advantage that global routing information can be taken into account to select the optimal channel structure. A polynomial algorithm for the channel definition and ordering problem is presented. The existence of a conflict-free channel structure is guaranteed by enforcing a sliceable placement. Algorithms for finding the shortest connection path are described. A separate algorithm is developed for the power net routing, because the two power nets must be planarly routed with variable wire width. An integrated placement and routing system for generating building-block layouts is briefly described. Some experimental results and design experiences in using the system are also presented. Very good results are obtained.
RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) scheme based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least square (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
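The core of the proposal, a one-tap RLS tracker per frequency bin, can be sketched as follows. For brevity the forgetting factor is held fixed; the paper's contribution is precisely to adapt `lam` with an LMS rule, which is omitted here:

```python
def rls_one_tap(pilot, received, lam=0.95, delta=1e-2):
    """One-tap RLS channel estimate for a single subcarrier.
    pilot/received are per-block scalar observations; lam is the
    forgetting factor (adapted by LMS in the paper, fixed here),
    delta the inverse of the initial covariance."""
    h, p = 0.0, 1.0 / delta
    for x, y in zip(pilot, received):
        k = p * x / (lam + p * x * x)   # RLS gain
        e = y - h * x                   # a priori estimation error
        h += k * e                      # channel tap update
        p = (p - k * x * p) / lam       # covariance recursion
    return h
```

A smaller `lam` forgets faster and tracks fast Rayleigh fading better but is noisier in slow channels, which is the trade-off that motivates adapting it on-line.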
NASA Astrophysics Data System (ADS)
Foronda, Augusto; Ohta, Chikara; Tamaki, Hisashi
Dirty paper coding (DPC) is a strategy that achieves the capacity region of multiple input multiple output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One solution, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum rate capacity as DPC with an exhaustive search over the entire user set. Suboptimal user group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance throughput and fairness among users, respectively. However, they are not throughput optimal: fairness and throughput decrease when user queue lengths differ as a result of unequal channel quality. Therefore, we propose two different scheduling algorithms: a throughput optimal scheduling algorithm (ZFBF-TO) and a reduced complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy and, at every time slot, must select users based on channel quality, queue length, and orthogonality among users. Moreover, the proposed algorithms produce the rate and power allocation for the selected users based on a modified water-filling method. We analyze the schedulers' complexity, and numerical results show that ZFBF-RC provides throughput and fairness improvements over the ZFBF-SUS and PF-ZFBF scheduling algorithms.
A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs
Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan
2015-01-01
In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, a channel load forecasting algorithm based on a recursive Kalman filter and a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load is kept within a predefined range and channel congestion is thereby prevented. Based on this method, the CLF-BTPC, a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey involving the collection of floating car data along a major traffic road in Changchun City is employed. By comparing the forecasts with the measured channel loads, the proposed KF-BCLF algorithm is shown to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network. PMID:26571042
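The recursive Kalman filter at the heart of such a forecaster reduces, in the scalar case, to a few lines. This sketch assumes a random-walk state model for the channel busy ratio; the paper instead drives the state with a multiple regression on traffic factors, which is omitted here:

```python
def kalman_forecast(loads, q=1e-3, r=1e-2):
    """Scalar Kalman filter tracking channel load (busy ratio) in the
    spirit of KF-BCLF. loads is the sequence of measured loads; q is
    the assumed process noise variance (random-walk model), r the
    measurement noise variance. Returns the one-step-ahead forecast."""
    x, p = loads[0], 1.0
    for z in loads[1:]:
        p += q                    # predict: state variance grows
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # correct with the measured load
        p *= (1 - k)              # shrink posterior variance
    return x
```

A power controller like CLF-BTPC would then lower beacon power whenever this forecast exceeds the predefined load threshold, acting before congestion occurs rather than after.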
Hemmateenejad, Bahram; Akhond, Morteza; Miri, Ramin; Shamsipur, Mojtaba
2003-01-01
A QSAR algorithm, principal component-genetic algorithm-artificial neural network (PC-GA-ANN), has been applied to a set of newly synthesized calcium channel blockers, which are of special interest because of their role in cardiac disease. A data set of 124 1,4-dihydropyridines bearing different ester substituents at the C-3 and C-5 positions of the dihydropyridine ring and nitroimidazolyl, phenylimidazolyl, and methylsulfonylimidazolyl groups at the C-4 position, with known Ca(2+) channel binding affinities, was employed in this study. Ten different sets of descriptors (837 descriptors) were calculated for each molecule. Principal component analysis was used to compress the descriptor groups into principal components. The most significant descriptors of each set were selected and used as input for the ANN. The genetic algorithm (GA) was used to select the best set of extracted principal components. A feed-forward artificial neural network with a back-propagation of error algorithm was used to model the nonlinear relationship between the selected principal components and the biological activity of the dihydropyridines. A comparison between PC-GA-ANN and routine PC-ANN shows that the first model yields better prediction ability.
Evaluation of Dynamic Channel and Power Assignment for Cognitive Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syed A. Ahmad; Umesh Shukla; Ryan E. Irwin
2011-03-01
In this paper, we develop a unifying optimization formulation to describe the Dynamic Channel and Power Assignment (DCPA) problem and an evaluation method for comparing DCPA algorithms. DCPA refers to the allocation of transmit power and frequency channels to links in a cognitive network so as to maximize the total number of feasible links while minimizing the aggregate transmit power. We apply our evaluation method to five algorithms representative of DCPA used in the literature. This comparison illustrates the tradeoffs between control modes (centralized versus distributed) and channel/power assignment techniques. We estimate the complexity of each algorithm. Through simulations, we evaluate the effectiveness of the algorithms in achieving feasible link allocations in the network, as well as their power efficiency. Our results indicate that, when few channels are available, the effectiveness of all algorithms is comparable and thus the one with smallest complexity should be selected. The Least Interfering Channel and Iterative Power Assignment (LICIPA) algorithm does not require cross-link gain information, has the overall lowest run time, and the highest feasibility ratio of all the distributed algorithms; however, this comes at the cost of higher average power per link.
NASA Technical Reports Server (NTRS)
Zhou, Xiaoming (Inventor); Baras, John S. (Inventor)
2010-01-01
The present invention relates to an improved communications protocol which increases the efficiency of transmission in return channels on a multi-channel slotted Aloha system by incorporating advanced error correction algorithms, selective retransmission protocols, and the use of reserved channels to satisfy retransmission requests.
HF band filter bank multi-carrier spread spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laraway, Stephen Andrew; Moradi, Hussein; Farhang-Boroujeny, Behrouz
This paper describes modifications to the filter bank multicarrier spread spectrum (FB-MC-SS) system, presented in [1] and [2], to enable transmission of this waveform in the HF skywave channel. FB-MC-SS is well suited for the HF channel because it performs well in channels with frequency selective fading and interference. This paper describes new algorithms for packet detection, timing recovery, and equalization that are suitable for the HF channel. Also, an algorithm for optimizing the peak to average power ratio (PAPR) of the FB-MC-SS waveform is presented. Application of this algorithm results in a waveform with low PAPR. Simulation results using a wide band HF channel model demonstrate the robustness of this system over a wide range of delay and Doppler spreads.
Automatic detection and classification of artifacts in single-channel EEG.
Olund, Thomas; Duun-Henriksen, Jonas; Kjaer, Troels W; Sorensen, Helge B D
2014-01-01
Ambulatory EEG monitoring can provide medical doctors with important diagnostic information without hospitalizing the patient. These recordings are, however, more exposed to noise and artifacts than clinically recorded EEG. An automatic artifact detection and classification algorithm for single-channel EEG is proposed to help identify these artifacts. Features are extracted from the EEG signal and its wavelet subbands. Subsequently, a selection algorithm is applied to identify the best discriminating features. A non-linear support vector machine is used to discriminate among different artifact classes using the selected features. Single-channel (Fp1-F7) EEG recordings were obtained from experiments with 12 healthy subjects performing artifact-inducing movements. The dataset was used to construct and validate the model. Both subject-specific and generic implementations are investigated. The detection algorithm yields an average sensitivity and specificity above 95% for both the subject-specific and generic models. The classification algorithm shows a mean accuracy of 78% and 64% for the subject-specific and generic model, respectively. The classification model was additionally validated on a reference dataset with similar results.
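The feature-extraction step can be sketched with two cheap time-domain features that are commonly used for EEG artifact detection; the nearest-centroid rule below is a deliberately simple stand-in for the paper's wavelet features and support vector machine:

```python
def features(window):
    """Two cheap time-domain features for an EEG window: signal
    variance (artifacts are usually high-amplitude) and 'line length'
    (sum of absolute first differences, sensitive to abrupt jumps)."""
    m = sum(window) / len(window)
    var = sum((v - m) ** 2 for v in window) / len(window)
    line_len = sum(abs(a - b) for a, b in zip(window[1:], window))
    return (var, line_len)

def nearest_centroid(train, sample):
    """Stand-in for the paper's SVM: classify a window by the closest
    class centroid in feature space. train maps a class label to a
    list of training windows."""
    cents = {}
    for label, wins in train.items():
        fs = [features(w) for w in wins]
        cents[label] = tuple(sum(col) / len(col) for col in zip(*fs))
    f = features(sample)
    return min(cents, key=lambda L: sum((a - b) ** 2
                                        for a, b in zip(cents[L], f)))
```

Swapping the centroid rule for a kernel SVM, and the two features for wavelet-subband statistics plus a selection step, recovers the structure the abstract describes.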
High capacity low delay packet broadcasting multiaccess schemes for satellite repeater systems
NASA Astrophysics Data System (ADS)
Bose, S. K.
1980-12-01
Demand assigned packet radio schemes using satellite repeaters can achieve high capacities but often exhibit relatively large delays under low traffic conditions when compared to random access. Several schemes which improve delay performance at low traffic while retaining high capacity are presented and analyzed. These schemes allow random access attempts by users who are waiting for channel assignments. Their performance is considered in the context of a multiple point communication system carrying fixed length messages between geographically distributed (ground) user terminals which are linked via a satellite repeater. Channel assignments are made following a BCC queueing discipline by a (ground) central controller on the basis of requests correctly received over a collision type access channel. In TBACR Scheme A, some of the forward message channels are set aside for random access transmissions; the rest are used in a demand assigned mode. Schemes B and C operate all their forward message channels in a demand assignment mode but, by means of appropriate algorithms for trailer channel selection, allow random access attempts on unassigned channels. The latter scheme also introduces framing and slotting of the time axis to implement a more efficient algorithm for trailer channel selection than the former.
Competitive Parallel Processing For Compression Of Data
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Antony R. H.
1990-01-01
Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in channel of limited bandwidth. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from information-rich source like color-television camera compressed by several processors, each operating with different algorithm. Referee processor selects momentarily-best compressed output.
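The referee idea can be mimicked in software with the standard library's compressors standing in for the parallel processors: run several algorithms on the same block, keep the momentarily-best (smallest) output, and tag it so the receiver knows which decoder to apply:

```python
import bz2
import lzma
import zlib

def best_compression(data: bytes):
    """Software analogue of the competitive-parallel scheme: compress
    the same block with several algorithms and let a 'referee' keep
    the smallest result, returning (algorithm_name, compressed_bytes)
    so the receiver can pick the matching decompressor."""
    candidates = {
        "zlib": zlib.compress(data, 9),
        "bz2": bz2.compress(data, 9),
        "lzma": lzma.compress(data),
    }
    name = min(candidates, key=lambda k: len(candidates[k]))
    return name, candidates[name]
```

In the patented system the compressors run concurrently on dedicated hardware, so the referee's choice adds no latency; the sequential sketch above only reproduces the selection logic.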
Hu, Yi; Loizou, Philipos C
2010-06-01
Attempts to develop noise-suppression algorithms that can significantly improve speech intelligibility in noise by cochlear implant (CI) users have met with limited success. This is partly because algorithms were sought that would work equally well in all listening situations. Accomplishing this has been quite challenging given the variability in the temporal/spectral characteristics of real-world maskers. A different approach is taken in the present study, focused on the development of environment-specific noise suppression algorithms. The proposed algorithm selects a subset of the envelope amplitudes for stimulation based on the signal-to-noise ratio (SNR) of each channel. Binary classifiers, trained using data collected from a particular noisy environment, are first used to classify the mixture envelopes of each channel as either target-dominated (SNR ≥ 0 dB) or masker-dominated (SNR < 0 dB). Only target-dominated channels are subsequently selected for stimulation. Results with CI listeners indicated substantial improvements (by nearly 44 percentage points at 5 dB SNR) in intelligibility with the proposed algorithm when tested with sentences embedded in three real-world maskers. The present study demonstrated that the environment-specific approach to noise reduction has the potential to restore speech intelligibility in noise to a level close to that attained in quiet.
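The channel-selection rule itself is simple once the per-channel SNR decision is available. The sketch below uses an oracle SNR (true signal envelope known) in place of the trained binary classifiers, and the envelope-subtraction noise estimate is an illustrative assumption:

```python
import math

def select_target_dominated(signal_env, noisy_env, threshold_db=0.0):
    """Keep a channel's envelope for stimulation only when its SNR is
    at or above threshold_db (target-dominated). signal_env holds the
    clean per-channel envelopes (an oracle standing in for the paper's
    trained classifiers); noisy_env the observed mixture envelopes.
    Returns a boolean keep/discard decision per channel."""
    keep = []
    for s, y in zip(signal_env, noisy_env):
        n = y - s                                    # oracle noise envelope
        snr_db = 10 * math.log10((s * s + 1e-12) / (n * n + 1e-12))
        keep.append(snr_db >= threshold_db)
    return keep
```

In the deployed algorithm the classifier replaces the oracle, which is what makes the approach environment-specific: the decision boundary is learned from that environment's masker statistics.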
Spectrum Access In Cognitive Radio Using a Two-Stage Reinforcement Learning Approach
NASA Astrophysics Data System (ADS)
Raj, Vishnu; Dias, Irene; Tholeti, Thulasi; Kalyani, Sheetal
2018-02-01
With the advent of the 5th generation of wireless standards and an increasing demand for higher throughput, methods to improve the spectral efficiency of wireless systems have become very important. In the context of cognitive radio, a substantial increase in throughput is possible if the secondary user can make smart decisions regarding which channel to sense and when or how often to sense. Here, we propose an algorithm to not only select a channel for data transmission but also to predict how long the channel will remain unoccupied so that the time spent on channel sensing can be minimized. Our algorithm learns in two stages: a reinforcement learning approach for channel selection and a Bayesian approach to determine the optimal duration for which sensing can be skipped. Comparisons with other learning methods are provided through extensive simulations. We show that the number of sensing operations is minimized with a negligible increase in primary interference; this implies that less energy is spent by the secondary user on sensing while higher throughput is achieved by reducing the time spent sensing.
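The first (channel-selection) stage can be illustrated with a standard multi-armed bandit learner. The sketch below uses UCB1 over simulated channel idle probabilities; this is a generic stand-in, not the authors' exact learner, and the idle probabilities are hypothetical:

```python
import math, random

def ucb1_channel_selection(free_prob, horizon=5000, seed=0):
    """UCB1 bandit: at each step sense the channel with the best upper
    confidence bound on its observed idle probability."""
    rng = random.Random(seed)
    n = len(free_prob)
    counts = [0] * n          # times each channel was sensed
    values = [0.0] * n        # running mean of observed idleness
    for t in range(1, horizon + 1):
        if t <= n:
            ch = t - 1        # play each arm once to initialize
        else:
            ch = max(range(n), key=lambda i: values[i] +
                     math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < free_prob[ch] else 0.0
        counts[ch] += 1
        values[ch] += (reward - values[ch]) / counts[ch]
    return counts

counts = ucb1_channel_selection([0.2, 0.8, 0.5])
```

Over time the learner concentrates its sensing on the channel that is most often idle, which is the behavior the abstract's channel-selection stage relies on; the paper's second (Bayesian duration) stage is not shown.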
Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification.
Wang, Yubo; Veluvolu, Kalyana C
2017-01-01
Most BCI systems that rely on EEG signals employ Fourier-based methods for time-frequency decomposition for feature extraction. The band-limited multiple Fourier linear combiner is well-suited for such band-limited signals due to its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multi-channel EEG is not straightforward and remains challenging. As more channels become available, a spatial filter is required to eliminate noise and preserve the useful information. Moreover, multi-channel EEG adds high dimensionality to the frequency feature space, so feature selection is required to stabilize the performance of the classifier. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes it with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with the covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.
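The real-valued evolutionary search can be sketched with a much simpler (1+λ) evolution strategy in place of CMA-ES. The solution vector here stands in for the paper's joint encoding of spatial-filter weights and feature-selection parameters, and the toy cost function is purely illustrative:

```python
import random

def one_plus_lambda_es(cost, x0, sigma=0.3, lam=8, iters=200, seed=1):
    """Minimal (1+lambda) evolution strategy with a shrinking mutation step:
    a simplified stand-in for CMA-ES over a real-valued solution vector."""
    rng = random.Random(seed)
    best, best_f = list(x0), cost(x0)
    for _ in range(iters):
        for _ in range(lam):
            cand = [x + rng.gauss(0.0, sigma) for x in best]
            f = cost(cand)
            if f < best_f:              # elitist acceptance
                best, best_f = cand, f
        sigma *= 0.98                   # anneal the mutation strength
    return best, best_f

# toy cost standing in for classification error: distance to ideal weights
target = [0.5, -1.0, 2.0]
sol, err = one_plus_lambda_es(
    lambda w: sum((a - b) ** 2 for a, b in zip(w, target)), [0.0, 0.0, 0.0])
```

CMA-ES additionally adapts a full covariance matrix for the mutation distribution, which is what gives it the edge reported in the abstract; this sketch only shows the encode-and-evolve loop.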
Improvement in detection of small wildfires
NASA Astrophysics Data System (ADS)
Sleigh, William J.
1991-12-01
Detecting and imaging small wildfires with an Airborne Scanner is done against generally high background levels. The Airborne Scanner System used is a two-channel thermal IR scanner, with one channel selected for imaging the terrain and the other channel sensitive to hotter targets. If a relationship can be determined between the two channels that quantifies the background signal for hotter targets, then an algorithm can be determined that removes the background signal in that channel leaving only the fire signal. The relationship can be determined anywhere between various points in the signal processing of the radiometric data from the radiometric input to the quantized output of the system. As long as only linear operations are performed on the signal, the relationship will only depend on the system gain and offsets within the range of interest. The algorithm can be implemented either by using a look-up table or performing the calculation in the system computer. The current presentation will describe the algorithm, its derivation, and its implementation in the Firefly Wildfire Detection System by means of an off-the-shelf commercial scanner. Improvement over the previous algorithm used and the margin gained for improving the imaging of the terrain will be demonstrated.
NASA Astrophysics Data System (ADS)
Yoon, Jong Rak; Park, Kyu-Chil; Park, Jihyun
2015-07-01
Transmitted signals are markedly affected by sea surface and bottom boundaries in shallow water. The time-variant reflection signals from such boundaries characterize the channel as a frequency-selective fading channel and cause intersymbol interference (ISI) in underwater acoustic communication. A channel-estimate-based equalizer is usually adopted to compensate for the reflected signals in this kind of acoustic channel. In this study, we apply two approaches for packet and continuous data transmission in a quadrature phase shift keying (QPSK) system. One is the use of a two-dimensional (2D) rotation matrix in a non-frequency-selective channel. The other is the use of two types of equalizers — the feed-forward equalizer (FFE) and the decision-directed equalizer (DDE) — with a normalized least mean square (NLMS) algorithm in a frequency-selective channel. The percentage improvement of packet transmission is notably better than that of continuous transmission.
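The NLMS update used by both equalizer types is standard. A minimal real-valued feed-forward sketch follows; the tap count, step size, and toy attenuation-only channel are arbitrary choices for illustration, not the paper's settings:

```python
import random

def nlms_equalizer(received, desired, num_taps=4, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive feed-forward equalizer: update the tap weights
    so the filter output tracks the known training symbols."""
    w = [0.0] * num_taps
    out = []
    for n in range(len(received)):
        x = [received[n - k] if n - k >= 0 else 0.0 for k in range(num_taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))
        e = desired[n] - y
        norm = sum(xk * xk for xk in x) + eps   # input power normalization
        w = [wk + (mu / norm) * e * xk for wk, xk in zip(w, x)]
        out.append(y)
    return w, out

# train on a random BPSK-like sequence through a toy channel (pure 0.5 gain)
rng = random.Random(0)
train = [1.0 if rng.random() < 0.5 else -1.0 for _ in range(300)]
received = [0.5 * s for s in train]
w, _ = nlms_equalizer(received, train)
```

For this noiseless gain-only channel the equalizer should learn an inverse gain of about 2 in its first tap.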
Warren, Kristen M; Harvey, Joshua R; Chon, Ki H; Mendelson, Yitzhak
2016-03-07
Photoplethysmographic (PPG) waveforms are used to acquire pulse rate (PR) measurements from pulsatile arterial blood volume. PPG waveforms are highly susceptible to motion artifacts (MA), limiting the implementation of PR measurements in mobile physiological monitoring devices. Previous studies have shown that multichannel photoplethysmograms can successfully acquire diverse signal information during simple, repetitive motion, leading to differences in motion tolerance across channels. In this paper, we investigate the performance of a custom-built multichannel forehead-mounted photoplethysmographic sensor under a variety of intense motion artifacts. We introduce an advanced multichannel template-matching algorithm that chooses the channel with the least motion artifact to calculate PR for each time instant. We show that for a wide variety of random motion, channels respond differently to motion artifacts, and the multichannel estimate outperforms single-channel estimates in terms of motion tolerance, signal quality, and PR errors. We have acquired 31 data sets consisting of PPG waveforms corrupted by random motion and show that the accuracy of PR measurements achieved was increased by up to 2.7 bpm when the multichannel-switching algorithm was compared to individual channels. The percentage of PR measurements with error ≤ 5 bpm during motion increased by 18.9% when the multichannel switching algorithm was compared to the mean PR from all channels. Moreover, our algorithm enables automatic selection of the best signal fidelity channel at each time point among the multichannel PPG data.
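The channel-switching step above is essentially a per-segment quality comparison. A sketch using Pearson correlation against each channel's pulse template as the signal-quality index (the authors' template-matching algorithm is richer than this; names are illustrative):

```python
def correlation(a, b):
    """Pearson correlation, used here as a signal-quality index.
    Assumes non-constant inputs (nonzero variance)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def best_channel(segments, templates):
    """Pick the channel whose current segment best matches its pulse template;
    the pulse rate would then be computed from that channel only."""
    scores = [correlation(s, t) for s, t in zip(segments, templates)]
    return max(range(len(scores)), key=scores.__getitem__)

templates = [[0.0, 1.0, 0.0, -1.0], [0.0, 1.0, 0.0, -1.0]]
segments = [[0.0, 1.0, 0.0, -1.0], [1.0, 0.0, -1.0, 0.0]]  # channel 1 corrupted
ch = best_channel(segments, templates)
```

At each time instant the estimate switches to whichever channel currently tolerates the motion best, which is the mechanism behind the reported multichannel gains.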
NASA Technical Reports Server (NTRS)
Nguyen, Tien Manh
1989-01-01
MT's algorithm was developed as an aid in the design of space telecommunications systems that employ simultaneous range/command/telemetry operations. This algorithm provides selection of modulation indices for: (1) suppression of undesired signals to achieve desired link performance margins and/or to allow for a specified performance degradation in the data channel (command/telemetry) due to the presence of undesired signals (interferers); and (2) optimum power division between the carrier, the range, and the data channel. A software program using this algorithm was developed for use with MathCAD software. This software program, called the MT program, provides the computation of optimum modulation indices for all possible cases that are recommended by the Consultative Committee for Space Data Systems (CCSDS) (with emphasis on the squarewave, NASA/JPL ranging system).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C; Adcock, A; Azevedo, S
2010-12-28
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and de Hoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
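The GCV criterion itself is easy to demonstrate. The sketch below applies it to a simple moving-average smoother instead of a cubic smoothing spline (the Hutchinson and de Hoog spline machinery is far more involved); for a linear smoother the trace of the hat matrix plays the role of the effective degrees of freedom, approximated here as n/k:

```python
def moving_average(y, k):
    """Centered moving average of window k, shrinking at the edges."""
    half = k // 2
    out = []
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        out.append(sum(y[lo:hi]) / (hi - lo))
    return out

def gcv_select(y, windows):
    """Pick the smoothing level minimizing the GCV score
    GCV(k) = (RSS/n) / (1 - dof/n)^2 with dof approximated by n/k."""
    n = len(y)
    best_k, best_score = None, float("inf")
    for k in windows:
        y_hat = moving_average(y, k)
        rss = sum((a - b) ** 2 for a, b in zip(y, y_hat))
        dof = n / k
        score = (rss / n) / (1.0 - dof / n) ** 2
        if score < best_score:
            best_k, best_score = k, score
    return best_k

# constant signal plus alternating +/-1 noise: GCV favors heavier smoothing
noisy = [1.0 + (1.0 if i % 2 == 0 else -1.0) for i in range(20)]
k = gcv_select(noisy, [3, 5])
```

The same automatic trade-off (residual fit vs. effective degrees of freedom) is what selects the spline smoothing parameter in the paper.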
A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers
NASA Astrophysics Data System (ADS)
Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair
We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with the multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the direction of arrivals, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by using path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.
Interim Calibration Report for the SMMR Simulator
NASA Technical Reports Server (NTRS)
Gloersen, P.; Cavalieri, D.
1979-01-01
The calibration data obtained during the fall 1978 Nimbus-G underflight mission with the scanning multichannel microwave radiometer (SMMR) simulator on board the NASA CV-990 aircraft were analyzed and an interim calibration algorithm was developed. Data selected for the analysis consisted of in flight sky, first-year sea ice, and open water observations, as well as ground based observations of fixed targets with varied temperatures of selected instrument components. For most of the SMMR channels, a good fit to the selected data set was obtained with the algorithm.
A framework with Cuckoo algorithm for discovering regular plans in mobile clients
NASA Astrophysics Data System (ADS)
Tsiligaridis, John
2017-09-01
In a mobile computing system, broadcasting has become a very interesting and challenging research issue. The server continuously broadcasts data to mobile users; the data can be inserted into customized-size relations and broadcast as a Regular Broadcast Plan (RBP) over multiple channels. Given the data size for each provided service, two algorithms, the Basic Regular Algorithm (BRA) and the Partition Value Algorithm (PVA), can provide static and dynamic RBP construction with multiple-constraint solutions, respectively. Servers have to define the data size of the services and can provide a feasible RBP working with many broadcasting plan operations. The operations become more complicated when there are many kinds of services and the sizes of the data sets are unknown to the server. To that end, a framework has been developed that also gives the ability to select low- or high-capacity channels for servicing. Theorems with new analytical results provide direct conditions stating the existence of solutions for the RBP problem with the compound criterion. Two kinds of solutions are provided: the equal and the non-equal subrelation solutions. The Cuckoo Search (CS) algorithm with Lévy flight behavior has been selected for the optimization. The CS for RBP (CSRP) is developed by applying the theorems to the discovery of RBPs. An additional change to CS has been made in order to strengthen the local search. The CS can also discover RBPs with the minimum number of channels. With these capabilities, modern servers can be upgraded to discover RBPs using fewer channels.
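A minimal Cuckoo Search loop, assuming the usual ingredients: Lévy-flight moves relative to the current best nest plus abandonment of the worst fraction pa. All parameters and the toy cost function are illustrative; this is not the paper's CSRP variant:

```python
import math, random

def levy_step(beta, rng):
    """Mantegna's algorithm for a heavy-tailed Levy step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    return rng.gauss(0, sigma_u) / abs(rng.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(cost, dim, n_nests=15, pa=0.25, iters=400, seed=3):
    """Minimal Cuckoo Search: Levy flights around the best nest, elitist
    acceptance, and re-randomization of the worst pa fraction of nests."""
    rng = random.Random(seed)
    nests = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n_nests)]
    fit = [cost(x) for x in nests]
    for _ in range(iters):
        best = nests[min(range(n_nests), key=fit.__getitem__)][:]
        # Levy-flight phase: propose a move for each nest, keep it if better
        for i in range(n_nests):
            cand = [x + 0.1 * levy_step(1.5, rng) * (x - bx)
                    + 0.02 * rng.gauss(0, 1)
                    for x, bx in zip(nests[i], best)]
            f = cost(cand)
            if f < fit[i]:
                nests[i], fit[i] = cand, f
        # abandonment phase: rebuild a fraction pa of the worst nests
        order = sorted(range(n_nests), key=fit.__getitem__, reverse=True)
        for i in order[: int(pa * n_nests)]:
            nests[i] = [rng.uniform(-2, 2) for _ in range(dim)]
            fit[i] = cost(nests[i])
    b = min(range(n_nests), key=fit.__getitem__)
    return nests[b], fit[b]

best, fbest = cuckoo_search(lambda x: sum(v * v for v in x), dim=2)
```

In the paper the cost would instead score candidate RBPs against the compound criterion; the Lévy flights are what give CS its mix of local refinement and occasional long exploratory jumps.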
NASA Astrophysics Data System (ADS)
Fan, Tong-liang; Wen, Yu-cang; Kadri, Chaibou
Orthogonal frequency-division multiplexing (OFDM) is robust against frequency-selective fading because of the increase in symbol duration. However, the time-varying nature of the channel causes inter-carrier interference (ICI), which destroys the orthogonality of the sub-carriers and severely degrades system performance. To alleviate the detrimental effect of ICI, there is a need for ICI mitigation within one OFDM symbol. We propose an iterative inter-carrier interference (ICI) estimation and cancellation technique for OFDM systems based on regularized constrained total least squares. In the proposed scheme, ICI is not treated as additional additive white Gaussian noise (AWGN). The effect of ICI and inter-symbol interference (ISI) on channel estimation is regarded as a perturbation of the channel. We propose a novel algorithm for channel estimation based on regularized constrained total least squares. Computer simulations show that significant improvement can be obtained by the proposed scheme in fast fading channels.
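The regularized estimation step can be illustrated with plain Tikhonov-regularized least squares, a simplification of the paper's regularized constrained total least squares. Here `A` would stand for the pilot-convolution matrix and `h` for the channel estimate (all names hypothetical):

```python
def regularized_ls(A, y, lam):
    """Solve (A^T A + lam*I) h = A^T y by Gaussian elimination: a
    Tikhonov-regularized least-squares estimate, where the regularizer
    keeps the solution stable when A is ill-conditioned."""
    n = len(A[0])
    # form the regularized normal equations
    M = [[sum(A[r][i] * A[r][j] for r in range(len(A)))
          + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    b = [sum(A[r][i] * y[r] for r in range(len(A))) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    h = [0.0] * n
    for i in range(n - 1, -1, -1):
        h[i] = (b[i] - sum(M[i][j] * h[j] for j in range(i + 1, n))) / M[i][i]
    return h

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
h = regularized_ls(A, [2.0, -1.0, 1.0], lam=1e-9)
```

Total least squares additionally models perturbations in `A` itself, which is how the paper captures ICI/ISI as a channel perturbation rather than as extra AWGN.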
Shan, Haijun; Xu, Haojie; Zhu, Shanan; He, Bin
2015-10-21
For sensorimotor rhythm based brain-computer interface (BCI) systems, classification of different motor imageries (MIs) remains a crucial problem. An important aspect is how many scalp electrodes (channels) should be used in order to reach optimal performance in classifying motor imaginations. While previous research on channel selection has mainly focused on MI task paradigms without feedback, the present work aims to investigate the optimal channel selection in MI task paradigms with real-time feedback (two-class control and four-class control paradigms). In the present study, three datasets, recorded respectively from an MI task experiment and from two-class and four-class control experiments, were analyzed offline. Multiple frequency-spatial synthesized features were comprehensively extracted from every channel, and a new enhanced method, IterRelCen, was proposed to perform channel selection. IterRelCen was constructed based on the Relief algorithm, but was enhanced in two respects: a changed target sample selection strategy and the adoption of iterative computation, making it more robust in feature selection. Finally, a multiclass support vector machine was applied as the classifier. The smallest number of channels that yielded the best classification accuracy was considered optimal. One-way ANOVA was employed to test the significance of the performance improvement between using the optimal channels, all channels, and three typical MI channels (C3, C4, Cz). The results show that the proposed method outperformed other channel selection methods, achieving average classification accuracies of 85.2, 94.1, and 83.2 % for the three datasets, respectively. Moreover, the channel selection results reveal that the average numbers of optimal channels were significantly different among the three MI paradigms. It is demonstrated that IterRelCen has a strong ability for feature selection.
In addition, the results have shown that the numbers of optimal channels in the three different motor imagery BCI paradigms are distinct. From a MI task paradigm, to a two-class control paradigm, and to a four-class control paradigm, the number of required channels for optimizing the classification accuracy increased. These findings may provide useful information to optimize EEG based BCI systems, and further improve the performance of noninvasive BCI.
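IterRelCen builds on the Relief family of feature-weighting algorithms. For context, a basic Relief scorer is sketched below, without the iterative-computation and target-sample enhancements the paper adds (data and names are illustrative):

```python
def relief_scores(X, labels):
    """Basic Relief: a feature earns weight when it separates each sample from
    its nearest miss (other class) and loses weight when it differs from the
    nearest hit (same class)."""
    n, m = len(X), len(X[0])
    w = [0.0] * m
    for i in range(n):
        def dist(j):
            return sum((X[i][f] - X[j][f]) ** 2 for f in range(m))
        hits = [j for j in range(n) if j != i and labels[j] == labels[i]]
        misses = [j for j in range(n) if labels[j] != labels[i]]
        h = min(hits, key=dist)    # nearest hit
        ms = min(misses, key=dist) # nearest miss
        for f in range(m):
            w[f] += abs(X[i][f] - X[ms][f]) - abs(X[i][f] - X[h][f])
    return [v / n for v in w]

# feature 0 separates the classes, feature 1 is constant (uninformative)
X = [[0.0, 5.0], [0.1, 5.0], [1.0, 5.0], [0.9, 5.0]]
scores = relief_scores(X, [0, 0, 1, 1])
```

Ranking channels by such scores and keeping the smallest top-ranked set that maximizes accuracy is the general shape of the selection procedure described above.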
Mustapha, Ibrahim; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A.; Sali, Aduwati; Mohamad, Hafizal
2015-01-01
It is well known that clustering partitions a network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While the topic of energy efficiency has been well investigated in conventional wireless sensor networks, the latter has not been extensively explored. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs for neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm, and the obtained simulation results show convergence, learning and adaptability of the algorithm to a dynamic environment towards achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach. PMID:26287191
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng
2006-12-01
An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and the squared-error variation, denoted by (e²(n), Δe²(n)), into a forgetting factor λ(n). For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size μ(n). This receiver is capable of providing both fast convergence/tracking capability and small steady-state misadjustment compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB in bit-error-rate (BER) performance for multipath fading channels.
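The core idea, letting the squared error drive the adaptation gain, can be sketched with a crude proportional rule standing in for the paper's fuzzy inference system (all constants are arbitrary):

```python
import random

def vss_lms(x, d, num_taps=3, mu_min=0.01, mu_max=0.5, alpha=4.0):
    """Normalized LMS with an error-driven step size: a large error yields a
    large step (fast tracking), a small error a small step (low steady-state
    misadjustment). The fuzzy controller in the paper makes this mapping
    nonlinear; here it is a simple saturated proportional rule."""
    w = [0.0] * num_taps
    for n in range(len(x)):
        u = [x[n - k] if n - k >= 0 else 0.0 for k in range(num_taps)]
        e = d[n] - sum(wk * uk for wk, uk in zip(w, u))
        mu = min(mu_max, mu_min + alpha * e * e)  # step size grows with e^2
        norm = sum(v * v for v in u) + 1e-8
        w = [wk + (mu / norm) * e * uk for wk, uk in zip(w, u)]
    return w

# identify a toy 2-tap channel from a random +/-1 excitation
rng = random.Random(0)
x = [1.0 if rng.random() < 0.5 else -1.0 for _ in range(400)]
d = [x[n] + 0.5 * (x[n - 1] if n >= 1 else 0.0) for n in range(400)]
w = vss_lms(x, d)
```

The same trade-off motivates the fuzzy-controlled forgetting factor in the RLS variant: aggressive adaptation while the error is large, conservative updates once it is small.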
NASA Astrophysics Data System (ADS)
Smith, J.; Gambacorta, A.; Barnet, C.; Smith, N.; Goldberg, M.; Pierce, B.; Wolf, W.; King, T.
2016-12-01
This work presents an overview of the NPP and J1 CrIS high-resolution operational channel selection. Our methodology focuses on the spectral sensitivity characteristics of the available channels in order to maximize information content and spectral purity. These aspects are key to ensuring accuracy in the retrieval products, particularly for trace gases. We will provide a demonstration of its global optimality by analyzing different test cases that are of particular interest to our JPSS Proving Ground and Risk Reduction user applications. A focus will be on high-resolution trace gas retrieval capability in the context of the Alaska fire initiatives.
The sequence relay selection strategy based on stochastic dynamic programming
NASA Astrophysics Data System (ADS)
Zhu, Rui; Chen, Xihao; Huang, Yangchao
2017-07-01
Relay-assisted (RA) networks with relay node selection are an effective way to improve channel capacity and convergence performance. However, most existing research on relay selection does not consider the statistical channel state information or the selection cost. This shortcoming limits the performance and applicability of RA networks in practical scenarios. To overcome this drawback, a sequence relay selection strategy (SRSS) is proposed, and its performance upper bound is analyzed in this paper. Furthermore, to make SRSS more practical, a novel threshold determination algorithm based on stochastic dynamic programming (SDP) is given to work with SRSS. Numerical results are also presented to exhibit the performance of SRSS with SDP.
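The threshold-determination idea can be illustrated with textbook backward induction for a sequential selection problem: accept the current relay whenever its value exceeds the continuation value of waiting. The discrete relay-value distribution and per-observation cost below are hypothetical, and this omits the paper's channel model entirely:

```python
def sdp_thresholds(values, probs, n_stages, cost=0.0):
    """Backward induction for sequential relay selection with i.i.d. relay
    values: at each stage, accept if the observed value exceeds the expected
    value of continuing (minus the observation cost)."""
    cont = 0.0                 # after the last stage we must accept anything
    thresholds = []
    for _ in range(n_stages):
        thresholds.append(cont)
        cont = sum(p * max(v, cont) for v, p in zip(values, probs)) - cost
    thresholds.reverse()       # thresholds[0] applies at the first stage
    return thresholds, cont

# two equally likely relay values {0, 1}, two selection stages, no cost
thresholds, value = sdp_thresholds([0.0, 1.0], [0.5, 0.5], n_stages=2)
```

With two stages the first-stage threshold is 0.5 (only a value-1 relay beats waiting) and the expected value of the strategy is 0.75; each earlier stage's threshold is simply the continuation value computed one step later.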
Kupinski, M. K.; Clarkson, E.
2015-01-01
We present a new method for computing optimized channels for channelized quadratic observers (CQO) that is feasible for high-dimensional image data. The method for calculating channels is applicable in general and optimal for Gaussian distributed image data. Gradient-based algorithms for determining the channels are presented for five different information-based figures of merit (FOMs). Analytic solutions for the optimum channels for each of the five FOMs are derived for the case of equal mean data for both classes. The optimum channels for three of the FOMs under the equal mean condition are shown to be the same. This result is critical since some of the FOMs are much easier to compute. Implementing the CQO requires a set of channels and the first- and second-order statistics of channelized image data from both classes. The dimensionality reduction from M measurements to L channels is a critical advantage of CQO since estimating image statistics from channelized data requires smaller sample sizes and inverting a smaller covariance matrix is easier. In a simulation study we compare the performance of ideal and Hotelling observers to CQO. The optimal CQO channels are calculated using both eigenanalysis and a new gradient-based algorithm for maximizing Jeffrey's divergence (J). Optimal channel selection without eigenanalysis makes the J-CQO on large-dimensional image data feasible. PMID:26366764
An enhanced multi-channel bacterial foraging optimization algorithm for MIMO communication system
NASA Astrophysics Data System (ADS)
Palanimuthu, Senthilkumar Jayalakshmi; Muthial, Chandrasekaran
2017-04-01
Channel estimation and optimisation are the main challenging tasks in Multi Input Multi Output (MIMO) wireless communication systems. In this work, a Multi-Channel Bacterial Foraging Optimization Algorithm approach is proposed for the selection of antennas in a transmission area. The main advantage of this method is that it effectively reduces the loss of bandwidth during data transmission. Here, we consider channel estimation and optimisation for improving the transmission speed and reducing the unused bandwidth. Initially, the message is given to the input of the communication system. Then, the symbol mapping process is performed to convert the message into signals, which are encoded with a space-time encoding technique. The single signal is divided into multiple signals, which are given to the input of the space-time precoder, and multiplexing is applied for transmission channel estimation. In this paper, the Rayleigh channel is selected based on the bandwidth range; this is a Gaussian-distribution-type channel. Then demultiplexing, the reverse of multiplexing, is applied to the obtained signal to split the combined signal arriving from the medium back into the original information signals. Furthermore, the long-term evolution technique is used for scheduling time to channels during transmission, and the hidden Markov model technique is employed to predict the channel state information. Finally, the signals are decoded and the reconstructed signal is obtained after the scheduling process. The experimental results evaluate the performance of the proposed MIMO communication system in terms of bit error rate, mean squared error, average throughput, outage capacity and signal-to-interference-plus-noise ratio.
Introduction of the ASGARD code (Automated Selection and Grouping of events in AIA Regional Data)
NASA Astrophysics Data System (ADS)
Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv K.; Fayock, Brian
2017-08-01
We have developed the ASGARD code to automatically detect and group brightenings ("events") in AIA data. The event selection and grouping can be optimized to the respective dataset with a multitude of control parameters. The code was initially written for IRIS data, but has since been optimized for AIA. However, the underlying algorithm is not limited to either and could be used for other data as well. Results from datasets in various AIA channels show that brightenings are reliably detected and that coherent coronal structures can be isolated by using the obtained information about the start, peak, and end times of events. We are presently working on a follow-up algorithm to automatically determine the heating and cooling timescales of coronal structures. This will be done by correlating the information from different AIA channels with different temperature responses. We will present the code and preliminary results.
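The detect-and-group bookkeeping (start, peak, and end per event) can be sketched as a threshold-plus-run-grouping pass over a single pixel's light curve. ASGARD's actual selection uses many more control parameters; the threshold and minimum-length knobs here are only representative:

```python
def detect_events(signal, threshold, min_len=2):
    """Flag samples above threshold and group consecutive flagged samples into
    events, reporting (start, peak, end) indices for each event that lasts at
    least min_len samples."""
    events = []
    start = None
    # appending a below-threshold sentinel closes any run at the end
    for i, v in enumerate(signal + [threshold - 1]):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            if i - start >= min_len:
                peak = max(range(start, i), key=signal.__getitem__)
                events.append((start, peak, i - 1))
            start = None
    return events

curve = [0, 0, 1, 3, 2, 0, 0, 5, 0, 1, 2, 1, 0]
events = detect_events(curve, threshold=0.5)
```

The single-sample spike at index 7 is rejected by the minimum-length rule; the two surviving events carry exactly the start/peak/end information the abstract describes using to isolate coherent structures.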
NASA Astrophysics Data System (ADS)
Vahidi, Vahid; Saberinia, Ebrahim; Regentova, Emma E.
2017-10-01
A channel estimation (CE) method based on compressed sensing (CS) is proposed to estimate the sparse and doubly selective (DS) channel for hyperspectral image transmission from unmanned aerial vehicles to ground stations. The proposed method contains three steps: (1) an a priori estimate of the channel by orthogonal matching pursuit (OMP), (2) calculation of the linear minimum mean square error (LMMSE) estimate of the received pilots given the estimated channel, and (3) estimation of the complex amplitudes and Doppler shifts of the channel using the enhanced received pilot data in a second round of a CS algorithm. The proposed method is named DS-LMMSE-OMP, and its performance is evaluated by simulating transmission of AVIRIS hyperspectral data over the communication channel and assessing its fidelity for automated analysis after demodulation. The performance of the DS-LMMSE-OMP approach is compared with that of two other state-of-the-art CE methods. The simulation results exhibit up to an 8-dB gain in bit-error-rate performance and a 50% improvement in hyperspectral image classification accuracy.
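Step (1) relies on greedy sparse recovery. The sketch below shows the greedy atom-selection core as matching pursuit; full OMP additionally re-solves a least-squares problem over all selected atoms at each iteration, and for orthonormal atoms (as in this toy dictionary) the two coincide:

```python
def matching_pursuit(D, y, n_atoms):
    """Greedy sparse recovery: repeatedly pick the dictionary atom (column of
    D) most correlated with the residual and peel off its contribution.
    Assumes unit-norm atoms so the correlation doubles as the coefficient."""
    m, n = len(D), len(D[0])
    r = list(y)                    # residual starts as the measurement
    coeffs = {}
    for _ in range(n_atoms):
        corr = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]
        j = max(range(n), key=lambda k: abs(corr[k]))
        coeffs[j] = coeffs.get(j, 0.0) + corr[j]
        for i in range(m):
            r[i] -= corr[j] * D[i][j]
    return coeffs, r

# toy orthonormal dictionary; y is 2-sparse in it
D = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
coeffs, resid = matching_pursuit(D, [0.0, 3.0, 0.5], n_atoms=2)
```

In the paper, the atoms would correspond to delay/Doppler candidates of the doubly selective channel, and the recovered coefficients seed the subsequent LMMSE refinement.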
Maximum likelihood positioning algorithm for high-resolution PET scanners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gross-Weege, Nicolas, E-mail: nicolas.gross-weege@pmi.rwth-aachen.de, E-mail: schulz@pmi.rwth-aachen.de; Schug, David; Hallen, Patrick
2016-06-15
Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II {sup D} PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms.
Additionally, the authors demonstrated that the performance of the ML algorithm is less prone to missing channel information. A likelihood filter visually improved the image quality, i.e., the peak-to-valley increased up to a factor of 3 for 2-mm-diameter phantom rods by rejecting 87% of the coincidences. A relative improvement of the energy resolution of up to 12.8% was also measured rejecting 91% of the coincidences. Conclusions: The developed ML algorithm increases the sensitivity by correctly handling missing channel information without influencing energy resolution or image quality. Furthermore, the authors showed that energy resolution and image quality can be improved substantially by rejecting events that do not comply well with the single-gamma-interaction model, such as Compton-scattered events.« less
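To make the COG-versus-ML comparison above concrete, here is a minimal one-dimensional sketch. The detector geometry (4 readout channels under 8 crystals) and the Gaussian light-distribution templates are invented for illustration; in the paper the PDFs come from measured data under a single-gamma-interaction model.

```python
import math

# Toy 1-D detector: 4 readout channels under 8 crystals. Each crystal's
# expected light distribution (PDF) is assumed Gaussian here (an assumption).
_raw = {c: [math.exp(-((ch - (c + 0.5) / 2.0) ** 2) / 0.5) for ch in range(4)]
        for c in range(8)}
TEMPLATES = {c: [v / sum(t) for v in t] for c, t in _raw.items()}

def cog_position(signal):
    """Center-of-gravity position estimate, in channel units."""
    return sum(ch * s for ch, s in enumerate(signal)) / sum(signal)

def ml_crystal(signal):
    """Most likely crystal: maximize the Poisson log-likelihood of the
    measured light distribution under each crystal's template."""
    def loglik(template):
        return sum(s * math.log(p) for s, p in zip(signal, template))
    return max(TEMPLATES, key=lambda c: loglik(TEMPLATES[c]))
```

Where the COG yields only a continuous position that must then be binned, the ML step returns a crystal index directly and can also flag events whose best likelihood is poor (the basis of the likelihood filter described above).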
Simakov, Nikolay A.
2010-01-01
A soft repulsion (SR) model of short-range interactions between mobile ions and protein atoms is introduced in the framework of a continuum representation of the protein and solvent. The Poisson-Nernst-Planck (PNP) theory of ion transport through biological channels is modified to incorporate this soft-wall protein model. Two sets of SR parameters are introduced: the first is parameterized for all essential amino acid residues using all-atom molecular dynamics simulations; the second is a truncated Lennard-Jones potential. We have further designed an energy-based algorithm for the determination of the ion-accessible volume, which is appropriate for a particular system discretization. The effects of these models of short-range interaction were tested by computing current-voltage characteristics of the α-hemolysin channel. The introduced SR potentials significantly improve the prediction of channel selectivity. In addition, we studied the effect of the choice of space-dependent diffusion coefficient distributions on the predicted current-voltage properties. We conclude that the diffusion coefficient distributions largely affect total currents but have little effect on rectification, selectivity or reversal potential. The PNP-SR algorithm is implemented in a new, efficient parallel Poisson, Poisson-Boltzmann and PNP equation solver, also incorporated in the HARLEM graphical molecular modeling package. PMID:21028776
Subspace techniques to remove artifacts from EEG: a quantitative analysis.
Teixeira, A R; Tome, A M; Lang, E W; Martins da Silva, A
2008-01-01
In this work we discuss and apply projective subspace techniques to both multichannel and single-channel recordings. The single-channel approach is based on singular spectrum analysis (SSA), and the multichannel approach uses the extended infomax algorithm implemented in the open-source toolbox EEGLAB. Both approaches are evaluated using artificial mixtures of a set of selected EEG signals. The latter were selected visually to contain, as the dominant activity, one of the characteristic bands of an electroencephalogram (EEG). The evaluation is performed in both the time and frequency domains using correlation coefficients and the coherence function, respectively.
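The single-channel SSA step described above can be sketched compactly: embed the signal in a trajectory matrix, keep the leading singular components, and diagonal-average back to a time series. The window length, rank, and test signal below are illustrative choices, not the paper's settings.

```python
import numpy as np

def ssa_denoise(x, window, rank):
    """Singular spectrum analysis: keep `rank` leading components of the
    trajectory matrix, then diagonal-average (Hankelize) back to a signal."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                 # low-rank approximation
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):                                        # diagonal averaging
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return out / counts

# A clean 10 Hz "alpha-band" tone buried in noise (sampling rate 250 Hz assumed).
t = np.arange(0, 2, 1 / 250)
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.8 * rng.standard_normal(t.size)
denoised = ssa_denoise(noisy, window=50, rank=2)
```

Rank 2 is used because a single sinusoid occupies a two-dimensional subspace of the trajectory matrix.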
Development of microwave rainfall retrieval algorithm for climate applications
NASA Astrophysics Data System (ADS)
KIM, J. H.; Shin, D. B.
2014-12-01
With the satellite datasets accumulated over decades, satellite-based data can contribute to sustained climate applications. Level-3 products from microwave sensors for climate applications can be obtained from several algorithms. For example, the Microwave Emission brightness Temperature Histogram (METH) algorithm produces level-3 rainfall directly, whereas the Goddard profiling (GPROF) algorithm first generates instantaneous rainfall and a temporal and spatial averaging process then leads to level-3 products. The rainfall algorithm developed in this study follows a similar approach of averaging instantaneous rainfall. However, the algorithm is designed to produce instantaneous rainfall at an optimal resolution showing reduced non-linearity in the brightness temperature (TB)-rain rate (R) relations. This resolution tends to make effective use of the emission channels, whose footprints are relatively larger than those of the scattering channels. The algorithm is mainly composed of a priori databases (DBs) and a Bayesian inversion module. The DB contains massive pairs of simulated microwave TBs and rain rates, obtained by WRF (version 3.4) and RTTOV (version 11.1) simulations. To improve the accuracy and efficiency of the retrieval process, a data-mining technique is additionally applied. The entire DB is classified into eight types based on the Köppen climate classification criteria using reanalysis data. Among these sub-DBs, the single sub-DB that presents the most similar physical characteristics is selected by considering the thermodynamics of the input data. When the Bayesian inversion is applied to the selected DB, instantaneous rain rates at 6-hour intervals are retrieved. The retrieved monthly mean rainfalls are statistically compared with CMAP and GPCP.
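The Bayesian inversion step pairs an observed brightness temperature with an a priori database of simulated (TB, rain rate) pairs and returns a likelihood-weighted posterior mean. The miniature database, single channel, and error standard deviation below are all invented for illustration; the paper's DB is multi-channel and far larger.

```python
import math

# Toy a priori database: simulated (brightness temperature in K, rain rate in mm/h).
DB = [(270.0, 0.0), (265.0, 1.0), (258.0, 3.0), (250.0, 6.0), (240.0, 12.0)]
SIGMA = 3.0  # assumed observation-error standard deviation (K)

def bayesian_rain(tb_obs):
    """Posterior-mean rain rate: weight each DB entry by a Gaussian
    likelihood of the observed brightness temperature."""
    weights = [math.exp(-0.5 * ((tb_obs - tb) / SIGMA) ** 2) for tb, _ in DB]
    z = sum(weights)
    return sum(w * r for w, (_, r) in zip(weights, DB)) / z
```

With several channels, the exponent becomes a sum of per-channel misfits (or a full error-covariance quadratic form), but the weighting idea is the same.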
Multi channel thermal hydraulic analysis of gas cooled fast reactor using genetic algorithm
NASA Astrophysics Data System (ADS)
Drajat, R. Z.; Su'ud, Z.; Soewono, E.; Gunawan, A. Y.
2012-05-01
There are three analyses to be done in the design process of a nuclear reactor: neutronic analysis, thermal hydraulic analysis and thermodynamic analysis. The focus of this article is the thermal hydraulic analysis, which plays a very important role in system efficiency and in the selection of the optimal design. The analysis is performed for a Gas Cooled Fast Reactor (GFR) cooled with helium (He). The heat from nuclear fission reactions in the reactor is distributed by conduction in the fuel elements and then transferred by convection to the fluid flowing in the cooling channels. Temperature changes in the coolant channels cause a pressure drop at the top of the reactor core. The governing equations in each channel consist of the mass, momentum and energy balances together with the ideal gas equation. The problem reduces to finding the flow rate in each channel such that the pressure drops at the top of the reactor core are all equal. The problem is solved numerically with the genetic algorithm method, yielding the flow rate and temperature distribution in each channel.
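The core search problem, distributing a fixed total coolant flow so that all channel pressure drops are equal, can be illustrated with a toy genetic algorithm. The quadratic pressure-drop model, resistances, and GA parameters below are all assumptions for illustration, not the paper's thermal hydraulic model.

```python
import random

random.seed(1)
K = [1.0, 1.5, 2.0]   # assumed hydraulic resistance of each coolant channel
TOTAL = 3.0           # total coolant mass flow to distribute

def pressure_drops(flows):
    """Toy model: pressure drop in channel i is K[i] * flow[i]**2."""
    return [k * m ** 2 for k, m in zip(K, flows)]

def fitness(flows):
    """Negative spread between channel pressure drops (we want them equal)."""
    dp = pressure_drops(flows)
    return -(max(dp) - min(dp))

def normalize(flows):
    s = sum(flows)
    return [TOTAL * f / s for f in flows]

def genetic_search(pop_size=60, generations=300):
    pop = [normalize([random.uniform(0.1, 1.1) for _ in K]) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)    # crossover: gene-wise midpoint
            child = [(x + y) / 2 * random.uniform(0.97, 1.03)  # small mutation
                     for x, y in zip(a, b)]
            children.append(normalize(child))
        pop = survivors + children
    return max(pop, key=fitness)
```

Renormalizing every candidate keeps the total-flow constraint satisfied exactly, so the GA only searches the flow split.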
Extraction of tidal channel networks from airborne scanning laser altimetry
NASA Astrophysics Data System (ADS)
Mason, David C.; Scott, Tania R.; Wang, Hai-Jing
Tidal channel networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. This paper describes a semi-automatic technique developed to extract networks from high-resolution LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, then a high-level processing stage improves the network using domain knowledge. The approach adopted at low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher-level processing includes a channel repair mechanism. The algorithm may be extended to extract networks from aerial photographs as well as LiDAR data. Its performance is illustrated using LiDAR data of two study sites, the River Ems, Germany, and the Venice Lagoon. For the River Ems data, the error of omission for the automatic channel extractor is 26%, partly because numerous small channels fall below the edge threshold, though these are less than 10 cm deep and unlikely to be hydraulically significant. The error of commission is lower, at 11%. For the Venice Lagoon data, the error of omission is 14%, but the error of commission is 42%, due partly to the difficulty of interpreting channels in these natural scenes.
As a benchmark, previous work has shown that this type of algorithm specifically designed for extracting tidal networks from LiDAR data is able to achieve substantially improved results compared with those obtained using standard algorithms for drainage network extraction from Digital Terrain Models.
NASA Astrophysics Data System (ADS)
Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang
2017-07-01
The purpose of this study is to improve reconstruction precision and to better reproduce the surface colors of spectral images. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper; the principal component with weighted visual features is the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences of the reconstructions are compared. The channel response values are obtained with a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space outperforms that based on the traditional principal component space. Accordingly, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human vision is achieved.
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high-channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
Siuly; Li, Yan; Paul Wen, Peng
2014-03-01
Motor imagery (MI) task classification provides an important basis for designing brain-computer interface (BCI) systems. If MI tasks are reliably distinguished through identifying typical patterns in electroencephalography (EEG) data, motor-disabled people could communicate with a device by composing sequences of these mental states. In our earlier study, we developed a cross-correlation based logistic regression (CC-LR) algorithm for the classification of MI tasks for BCI applications, but its performance was not satisfactory. This study develops a modified version of the CC-LR algorithm exploring a suitable feature set that can improve the performance. The modified CC-LR algorithm uses the C3 electrode channel (in the international 10-20 system) as a reference channel for the cross-correlation (CC) technique and applies three diverse feature sets separately as the input to the logistic regression (LR) classifier. The present algorithm investigates which feature set best characterizes the distribution of MI task-based EEG data. This study also provides insight into how to select a reference channel for the CC technique with EEG signals, considering the anatomical structure of the human brain. The proposed algorithm is compared with eight of the most recently reported well-known methods, including the BCI III Winner algorithm. The findings indicate that the modified CC-LR algorithm has the potential to improve the identification performance of MI tasks in BCI systems, providing a classification improvement over the existing methods tested.
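The CC step above correlates each EEG channel against the C3 reference and summarizes the cross-correlogram with a few statistics fed to the classifier. A minimal sketch follows; the particular summary statistics are assumptions in the spirit of the method, not the paper's exact feature sets.

```python
def cross_corr(x, ref):
    """Full cross-correlation sequence: r[k] = sum_i x[i] * ref[i - k],
    for lags k = -(len(ref)-1) .. len(x)-1."""
    n, m = len(x), len(ref)
    out = []
    for k in range(-(m - 1), n):
        s = 0.0
        for i in range(n):
            j = i - k
            if 0 <= j < m:
                s += x[i] * ref[j]
        out.append(s)
    return out

def cc_features(x, ref):
    """Mean, max, min, and energy of the cross-correlogram, an illustrative
    feature vector to feed a logistic regression classifier."""
    r = cross_corr(x, ref)
    energy = sum(v * v for v in r)
    return [sum(r) / len(r), max(r), min(r), energy]
```

The resulting short feature vector per channel, rather than the raw samples, is what keeps the LR classifier small.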
High-Assurance System Support through 3-D Integration
2007-11-09
algorithms), tagging, and in selected systems, offensive mechanisms. For example, we can exploit the control plane to tag all traffic traveling...
Cassani, Raymundo; Falk, Tiago H.; Fraga, Francisco J.; Kanda, Paulo A. M.; Anghinah, Renato
2014-01-01
Over the last decade, electroencephalography (EEG) has emerged as a reliable tool for the diagnosis of cortical disorders such as Alzheimer's disease (AD). EEG signals, however, are susceptible to several artifacts, such as ocular, muscular, movement, and environmental. To overcome this limitation, existing diagnostic systems commonly depend on experienced clinicians to manually select artifact-free epochs from the collected multi-channel EEG data. Manual selection, however, is a tedious and time-consuming process, rendering the diagnostic system “semi-automated.” Notwithstanding, a number of EEG artifact removal algorithms have been proposed in the literature. The (dis)advantages of using such algorithms in automated AD diagnostic systems, however, have not been documented; this paper aims to fill this gap. Here, we investigate the effects of three state-of-the-art automated artifact removal (AAR) algorithms (both alone and in combination with each other) on AD diagnostic systems based on four different classes of EEG features, namely, spectral, amplitude modulation rate of change, coherence, and phase. The three AAR algorithms tested are statistical artifact rejection (SAR), blind source separation based on second order blind identification and canonical correlation analysis (BSS-SOBI-CCA), and wavelet enhanced independent component analysis (wICA). Experimental results based on 20-channel resting-awake EEG data collected from 59 participants (20 patients with mild AD, 15 with moderate-to-severe AD, and 24 age-matched healthy controls) showed the wICA algorithm alone outperforming other enhancement algorithm combinations across three tasks: diagnosis (control vs. mild vs. moderate), early detection (control vs. mild), and disease progression (mild vs. moderate), thus opening the doors for fully-automated systems that can assist clinicians with early detection of AD, as well as disease severity progression assessment. PMID:24723886
RAC-multi: reader anti-collision algorithm for multichannel mobile RFID networks.
Shin, Kwangcheol; Song, Wonil
2010-01-01
At present, RFID is installed on mobile devices such as mobile phones or PDAs and provides a means to obtain information about objects equipped with an RFID tag over multi-channel telecommunication networks. To use mobile RFID, reader collision problems should be addressed, given that readers are continuously moving. Moreover, in a multichannel environment for mobile RFID, interference between adjacent channels should be considered. This work first defines a new concept of a reader collision problem between adjacent channels and then suggests a novel reader anti-collision algorithm for RFID readers that use multiple channels. To avoid interference with adjacent channels, the suggested algorithm separates the data channels into odd- and even-numbered channels and allocates odd-numbered channels to readers first. It also sets an unused channel between the control channel and the data channels to ensure that control messages and the signal of the adjacent channel experience no interference. Experimental results show that the suggested algorithm achieves throughput improvements ranging from 29% to 46% for tag identification compared to the GENTLE reader anti-collision algorithm for multichannel RFID networks.
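The odd-first allocation policy described above is simple enough to sketch directly. The channel numbering (0 = control, 1 = guard band, data channels from 2) is an assumption for illustration; the paper only specifies the odd/even split and the guard channel next to the control channel.

```python
def allocate_channel(busy, num_data=8):
    """Channel 0 is the control channel and channel 1 is left idle as a guard
    band; data channels 2..num_data+1 are tried odd-numbered first, so early
    readers never occupy adjacent channels. Returns None if all are busy."""
    data = range(2, num_data + 2)
    odd = [c for c in data if c % 2 == 1]
    even = [c for c in data if c % 2 == 0]
    for c in odd + even:
        if c not in busy:
            return c
    return None
```

Until more than half the data channels are in use, every allocated pair of channels is separated by an idle one, which is what suppresses adjacent-channel interference.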
Escudero, Javier; Hornero, Roberto; Abásolo, Daniel; Fernández, Alberto; Poza, Jesús
2007-01-01
The aim of this study was to improve the diagnosis of Alzheimer's disease (AD) patients by applying a blind source separation (BSS) and component selection procedure to their magnetoencephalogram (MEG) recordings. MEGs from 18 AD patients and 18 control subjects were decomposed with the algorithm for multiple unknown signals extraction (AMUSE). MEG channels and components were characterized by their mean frequency, spectral entropy, approximate entropy, and Lempel-Ziv complexity. Using Student's t-test, the components which accounted for the most significant differences between groups were selected. Then, these relevant components were used to partially reconstruct the MEG channels. By means of a linear discriminant analysis, we found that the BSS-preprocessed MEGs classified the subjects with an accuracy of 80.6%, whereas 72.2% accuracy was obtained without the BSS and component selection procedure.
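The component selection step above can be sketched as a per-component two-sample t-test between groups, keeping only components whose statistic exceeds a cutoff. Welch's t and the fixed |t| threshold below stand in for the paper's significance test and are assumptions for illustration.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic between two groups of per-subject component features."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

def select_components(features_grp1, features_grp2, threshold=2.0):
    """Keep component indices whose between-group |t| exceeds the threshold.
    Each argument is a list of per-component lists of subject features."""
    keep = []
    for idx, (fa, fb) in enumerate(zip(features_grp1, features_grp2)):
        if abs(welch_t(fa, fb)) > threshold:
            keep.append(idx)
    return keep
```

Only the surviving components are then used to partially reconstruct the channels before classification.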
NASA Technical Reports Server (NTRS)
Arduini, R. F.; Aherron, R. M.; Samms, R. W.
1984-01-01
A computational model of the deterministic and stochastic processes involved in multispectral remote sensing was designed to evaluate the performance of sensor systems and data processing algorithms for spectral feature classification. Accuracy in distinguishing between categories of surfaces or between specific types is developed as a means to compare sensor systems and data processing algorithms. The model allows studies to be made of the effects of variability of the atmosphere and of surface reflectance, as well as the effects of channel selection and sensor noise. Examples of these effects are shown.
Evaluation of stochastic differential equation approximation of ion channel gating models.
Bruce, Ian C
2009-04-01
Fox and Lu derived an algorithm based on stochastic differential equations for approximating the kinetics of ion channel gating that is simpler and faster than "exact" algorithms for simulating Markov process models of channel gating. However, the approximation may not be sufficiently accurate to predict statistics of action potential generation in some cases. The objective of this study was to develop a framework for analyzing the inaccuracies and determining their origin. Simulations of a patch of membrane with voltage-gated sodium and potassium channels were performed using an exact algorithm for the kinetics of channel gating and the approximate algorithm of Fox and Lu. The Fox and Lu algorithm assumes that channel gating particle dynamics have a stochastic term that is uncorrelated, zero-mean Gaussian noise, whereas the results of this study demonstrate that in many cases the stochastic term in the Fox and Lu algorithm should be correlated and non-Gaussian noise with a non-zero mean. The results indicate that: (i) the source of the inaccuracy is that the Fox and Lu algorithm does not adequately describe the combined behavior of the multiple activation particles in each sodium and potassium channel, and (ii) the accuracy does not improve with increasing numbers of channels.
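The contrast between the two simulation styles can be shown on a single population of two-state gating particles: an "exact" Markov simulation flips each particle individually, while the Fox-Lu style replaces the population with one SDE whose noise term is zero-mean Gaussian. The rates, particle count, and time step below are invented for illustration (the paper uses full Hodgkin-Huxley sodium and potassium kinetics).

```python
import math
import random

random.seed(42)
ALPHA, BETA = 0.5, 0.3           # assumed opening/closing rates (per ms)
N, DT, STEPS = 1000, 0.01, 2000  # 1000 gating particles, 20 ms simulated

def exact_fraction():
    """'Exact' Markov simulation: every particle flips independently."""
    open_count = 0
    for _ in range(STEPS):
        opened = sum(random.random() < ALPHA * DT for _ in range(N - open_count))
        closed = sum(random.random() < BETA * DT for _ in range(open_count))
        open_count += opened - closed
    return open_count / N

def fox_lu_fraction():
    """Fox-Lu-style SDE: deterministic drift plus zero-mean Gaussian noise."""
    x = 0.0
    for _ in range(STEPS):
        drift = ALPHA * (1 - x) - BETA * x
        diffusion = math.sqrt(max(0.0, (ALPHA * (1 - x) + BETA * x) / N))
        x += drift * DT + diffusion * math.sqrt(DT) * random.gauss(0.0, 1.0)
        x = min(1.0, max(0.0, x))                # keep the fraction in [0, 1]
    return x
```

Both trajectories relax to the steady state ALPHA/(ALPHA+BETA); the study's point is that for multi-particle channels the Gaussian, uncorrelated noise assumption of the SDE breaks down in ways a single-particle sketch cannot show.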
NASA Astrophysics Data System (ADS)
Huo, Yanfeng; Duan, Minzheng; Tian, Wenshou; Min, Qilong
2015-08-01
A differential optical absorption spectroscopy (DOAS)-like algorithm is developed to retrieve the column-averaged dry-air mole fraction of carbon dioxide from ground-based hyper-spectral measurements of the direct solar beam. Unlike the spectral fitting method, which minimizes the difference between the observed and simulated spectra, the ratios of multiple channel pairs (one weak and one strong absorption channel) are used for the retrieval from measurements in the shortwave infrared (SWIR) band. Based on sensitivity tests, a super channel pair is carefully selected to reduce the effects of solar lines, water vapor, air temperature, pressure, instrument noise, and frequency shift on retrieval errors. The new algorithm reduces the computational cost, and the retrievals are less sensitive to temperature and H2O uncertainty than those of the spectral fitting method. Multi-day Total Carbon Column Observing Network (TCCON) measurements under clear-sky conditions at two sites (Tsukuba and Bremen) are used to derive xxxx for algorithm evaluation and validation. The DOAS-like results agree very well with those of the TCCON algorithm after correction of an airmass-dependent bias.
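The channel-pair idea can be shown with a two-channel Beer-Lambert toy model: the log of the weak/strong intensity ratio cancels factors common to both channels and leaves only the differential absorption, from which the column follows. The cross-sections and intensities below are invented for illustration.

```python
import math

# Toy absorption cross-sections for a weak/strong channel pair (assumptions),
# in units of inverse column amount.
SIGMA_WEAK, SIGMA_STRONG = 1.0e-4, 5.0e-4
I0_WEAK, I0_STRONG = 1.0, 1.0    # assumed equal top-of-atmosphere intensities

def simulate(column, airmass=1.0):
    """Beer-Lambert direct-beam intensities for the two channels."""
    return (I0_WEAK * math.exp(-SIGMA_WEAK * column * airmass),
            I0_STRONG * math.exp(-SIGMA_STRONG * column * airmass))

def retrieve_column(i_weak, i_strong, airmass=1.0):
    """DOAS-like retrieval: the log of the channel-pair ratio cancels
    common (gray) attenuation, leaving only the differential absorption."""
    return math.log(i_weak / i_strong) / ((SIGMA_STRONG - SIGMA_WEAK) * airmass)
```

In the real algorithm, several such pairs are combined and the pair is chosen so that interfering effects (solar lines, H2O, temperature) are minimized.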
Adaptive recurrence quantum entanglement distillation for two-Kraus-operator channels
NASA Astrophysics Data System (ADS)
Ruan, Liangzhong; Dai, Wenhan; Win, Moe Z.
2018-05-01
Quantum entanglement serves as a valuable resource for many important quantum operations. A pair of entangled qubits can be shared between two agents by first preparing a maximally entangled qubit pair at one agent, and then sending one of the qubits to the other agent through a quantum channel. In this process, the deterioration of entanglement is inevitable since the noise inherent in the channel contaminates the qubit. To address this challenge, various quantum entanglement distillation (QED) algorithms have been developed. Among them, recurrence algorithms have advantages in terms of implementability and robustness. However, the efficiency of recurrence QED algorithms has not been investigated thoroughly in the literature. This paper puts forth two recurrence QED algorithms that adapt to the quantum channel to tackle the efficiency issue. The proposed algorithms have guaranteed convergence for quantum channels with two Kraus operators, which include phase-damping and amplitude-damping channels. Analytical results show that the convergence speed of these algorithms is improved from linear to quadratic and one of the algorithms achieves the optimal speed. Numerical results confirm that the proposed algorithms significantly improve the efficiency of QED.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buzatu, Adrian; /McGill U.
2006-08-01
Improving our ability to identify the top quark pair (t{bar t}) primary vertex (PV) on an event-by-event basis is essential for many analyses in the lepton-plus-jets channel performed by the Collider Detector at Fermilab (CDF) Collaboration. We compare the algorithm currently used by CDF (A1) with another algorithm (A2) using Monte Carlo simulation at high instantaneous luminosities. We confirm that A1 is more efficient than A2 at selecting the t{bar t} PV at all PV multiplicities, both with efficiencies larger than 99%. Event selection rejects events with a distance larger than 5 cm along the proton beam between the t{bar t} PV and the charged lepton. We find flat distributions for the signal over background significance of this cut for all cut values larger than 1 cm, for all PV multiplicities and for both algorithms. We conclude that any cut value larger than 1 cm is acceptable for both algorithms under the Tevatron's expected instantaneous luminosity improvements.
CFTLB: a novel cross-layer fault tolerant and load balancing protocol for WMN
NASA Astrophysics Data System (ADS)
Krishnaveni, N. N.; Chitra, K.
2017-12-01
A wireless mesh network (WMN) forms a wireless backbone for multi-hop transmission among the routers and clients in an extensible coverage area. To improve the throughput of WMNs with multiple gateways (GWs), several issues related to GW selection, load balancing and frequent link failures, caused by dynamic obstacles and channel interference, should be addressed. This paper presents a novel cross-layer fault tolerant and load balancing (CFTLB) protocol to overcome these issues. Initially, the neighbouring GWs are searched and the channel load, estimated when a new node arrives, is calculated; the GW with the least channel load is selected. The proposed algorithm finds alternate GWs and calculates channel availability under high-load scenarios: if the current load on a GW is high, another GW is found and its channel availability is calculated. The protocol then initiates channel switching and establishes communication with the mesh client effectively. The hashing technique used in CFTLB verifies the status of the packets, and the protocol achieves better performance than existing protocols in terms of router average throughput, overall throughput, average channel access time, end-to-end delay, communication overhead and average data loss in the channel.
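The least-load gateway selection described above reduces to a minimum over monitored loads, with a threshold that triggers channel switching. The record layout and threshold below are assumptions for illustration.

```python
def select_gateway(gateways, load_threshold=0.8):
    """Pick the gateway with the least channel load. The second return value
    flags whether even the best gateway is overloaded, i.e., whether channel
    switching should be initiated (threshold is an illustrative assumption)."""
    best = min(gateways, key=lambda g: g["load"])
    return best["id"], best["load"] > load_threshold
```

In CFTLB this decision is re-evaluated as nodes arrive, so load stays balanced across gateways.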
Fingerprint recognition of alien invasive weeds based on the texture character and machine learning
NASA Astrophysics Data System (ADS)
Yu, Jia-Jia; Li, Xiao-Li; He, Yong; Xu, Zheng-Hao
2008-11-01
A multi-spectral imaging technique based on texture analysis and machine learning was proposed to discriminate alien invasive weeds with similar outlines but different categories. The objectives of this study were to investigate the feasibility of using multi-spectral imaging, especially the near-infrared (NIR) channel (800 nm +/- 10 nm), to find the weeds' fingerprints, and to validate the performance with specific eigenvalues from the co-occurrence matrix. Veronica polita Fries, Veronica persica Poir., longtube ground ivy and Lamium amplexicaule Linn., which behave differently in the field and are alien invasive species in China, were selected for this study. 307 weed leaf images were randomly selected for the calibration set, while the remaining 207 samples formed the prediction set. All images were pretreated with a Wallis filter to reduce noise from uneven lighting. The gray-level co-occurrence matrix was applied to extract texture characters, whose different algorithms capture the density, randomness, correlation, contrast and homogeneity of texture. Three channels (a green channel at 550 nm +/- 10 nm, a red channel at 650 nm +/- 10 nm and the NIR channel at 800 nm +/- 10 nm) were each used to calculate the eigenvalues. Least-squares support vector machines (LS-SVM) were applied to discriminate the weed categories from the co-occurrence matrix eigenvalues. Finally, a recognition ratio of 83.35% was obtained with the NIR channel, better than the results for the green channel (76.67%) and the red channel (69.46%). The prediction results of 81.35% indicated that the selected eigenvalues reflect the main characteristics of the weeds' fingerprints based on multi-spectral imaging (especially the NIR channel) and the LS-SVM model.
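The gray-level co-occurrence matrix and two of the Haralick-style features mentioned above (contrast and homogeneity) can be computed in a few lines. The offset, number of gray levels, and feature choice below are illustrative defaults, not the paper's settings.

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset, normalized to sum 1.
    `image` is a list of rows of integer gray levels in [0, levels)."""
    counts = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[image[y][x]][image[y2][x2]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def contrast_homogeneity(p):
    """Two classic texture features from a normalized GLCM."""
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))
    return contrast, homogeneity
```

Computing such features per channel (green, red, NIR) gives the eigenvalue vectors that feed the LS-SVM classifier.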
Optimization of spectral bands for hyperspectral remote sensing of forest vegetation
NASA Astrophysics Data System (ADS)
Dmitriev, Egor V.; Kozoderov, Vladimir V.
2013-10-01
Optimization principles for selecting the most informative spectral channels in hyperspectral remote sensing data processing serve to enhance the efficiency of the high-performance computers employed. The problem of pattern recognition of remotely sensed land surface objects, with an emphasis on forests, is outlined from the point of view of spectral channel optimization on the processed hyperspectral images. The relevant computational procedures are tested using images obtained by a Russian-made hyperspectral camera installed on a gyro-stabilized platform for airborne flight campaigns. A Bayesian classifier is used for the pattern recognition of forests of different tree species and ages. A probabilistically optimal algorithm constructed on the basis of the maximum likelihood principle is described to minimize the probability of misclassification given by this classifier. The classification error is the major category used to estimate the accuracy of the applied algorithm by the known holdout cross-validation method. Details of the related techniques are presented. Results are shown of selecting the spectral channels of the camera while processing the images, bearing in mind radiometric distortions that diminish the classification accuracy. Spectral channels are selected for the subclasses obtained from the proposed validation techniques, and confusion matrices are constructed that characterize the age composition of the classified pine species as well as the broad age-class recognition for the pine and birch species with fully illuminated crown parts.
2014-01-01
We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of convergence speed and steady-state error by combining a smooth approximation l0-norm (SL0) penalty on the coefficients with the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
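The zero-attractor mechanism described above can be illustrated on a simpler relative: a sign-shrinkage (zero-attractor) LMS filter, which pulls small taps toward zero each update. This is explicitly not the paper's SL0-APA; the channel, step sizes, and shrinkage strength are all assumptions for illustration of the attractor idea.

```python
import random

random.seed(7)
TRUE_CH = [0.0] * 16
TRUE_CH[3], TRUE_CH[9] = 1.0, -0.5        # sparse channel: two nonzero taps

def za_lms(samples, mu=0.02, rho=1e-4):
    """LMS with a zero-attractor term: each update adds the usual gradient
    step plus a small pull of every tap toward zero (sign shrinkage)."""
    w = [0.0] * len(TRUE_CH)
    x = [0.0] * len(TRUE_CH)
    for _ in range(samples):
        x = [random.gauss(0, 1)] + x[:-1]              # shift in a new input sample
        d = sum(h * xi for h, xi in zip(TRUE_CH, x))   # noiseless channel output
        e = d - sum(wi * xi for wi, xi in zip(w, x))   # a priori error
        w = [wi + mu * e * xi
             - rho * (1 if wi > 0 else -1 if wi < 0 else 0)
             for wi, xi in zip(w, x)]
    return w
```

The attractor biases the active taps slightly but keeps the many zero taps pinned near zero, which is the source of the faster convergence and lower steady-state error on sparse channels.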
NASA Astrophysics Data System (ADS)
Chang, Huan; Yin, Xiao-li; Cui, Xiao-zhou; Zhang, Zhi-chao; Ma, Jian-xin; Wu, Guo-hua; Zhang, Li-jia; Xin, Xiang-jun
2017-12-01
Practical orbital angular momentum (OAM)-based free-space optical (FSO) communications commonly experience serious performance degradation and crosstalk due to atmospheric turbulence. In this paper, we propose a wave-front sensorless adaptive optics (WSAO) system with a modified Gerchberg-Saxton (GS)-based phase retrieval algorithm to correct distorted OAM beams. We use the spatial phase perturbation (SPP) GS algorithm with a distorted probe Gaussian beam as the only input. The principle and parameter selections of the algorithm are analyzed, and the performance of the algorithm is discussed. The simulation results show that the proposed adaptive optics (AO) system can significantly compensate for distorted OAM beams in single-channel or multiplexed OAM systems, which provides new insights into adaptive correction systems using OAM beams.
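The classic Gerchberg-Saxton iteration underlying the modified algorithm above alternates between two planes, enforcing the known amplitude in each while keeping the evolving phase. The sketch below is the plain GS loop, not the paper's SPP variant, and the grid size and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def gerchberg_saxton(src_amp, far_amp, iters=200):
    """Plain GS phase retrieval: iterate between the source plane and the
    far field (related by a 2-D FFT), enforcing the measured amplitude in
    each plane and carrying the phase forward."""
    phase = rng.uniform(0, 2 * np.pi, src_amp.shape)  # random initial phase
    for _ in range(iters):
        far = np.fft.fft2(src_amp * np.exp(1j * phase))
        far = far_amp * np.exp(1j * np.angle(far))    # enforce far-field amplitude
        src = np.fft.ifft2(far)
        phase = np.angle(src)                         # keep the retrieved phase
    return phase
```

The far-field amplitude error of this iteration is non-increasing, which is why the loop reliably converges toward a consistent phase estimate.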
Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications
NASA Astrophysics Data System (ADS)
Qian, Xuewen; Deng, Honggui; He, Hailang
2017-10-01
Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and severely deteriorate communication performance. Traditional compressive-sensing-based channel estimation algorithms cannot be employed in this situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems, based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency-response estimation.
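A minimal sketch of why an impulse-like pilot makes channel estimation easy: correlating the received signal against shifted copies of the pilot reads off the taps directly. A Zadoff-Chu sequence is used here as a stand-in for the paper's ZCC pair (the sequence choice, length, and tap values are assumptions).

```python
import numpy as np

def zadoff_chu(N, u=1):
    """Odd-length Zadoff-Chu sequence: perfect periodic autocorrelation."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

def corr_channel_estimate(rx, pilot, L):
    """Estimate the first L channel taps by circular cross-correlation
    of the received block with the known pilot."""
    N = len(pilot)
    return np.array([rx @ np.conj(np.roll(pilot, k)) for k in range(L)]) / N
```

Because the pilot's periodic autocorrelation is an impulse, the correlation at lag k isolates tap h[k] exactly in the noiseless case.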
Adaptive software-defined coded modulation for ultra-high-speed optical transport
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Zhang, Yequn
2013-10-01
In optically-routed networks, different wavelength channels carrying traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is impacted differently by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of the data destination, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the code rate matched to the OSNR range into which the current channel OSNR falls. To avoid frame-synchronization issues, we keep the codeword length fixed, independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them are described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal-constellation size selection so that the product (bits per symbol) x (code rate) is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection combined with LDPC coding in a turbo-equalization fashion. Finally, to address the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive, energy-efficient hybrid coded-modulation scheme that, in addition to amplitude, phase, and polarization state, employs the spatial modes as additional basis functions for multidimensional coded modulation.
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate the problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity-check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes; a cyclic redundancy check (CRC) is used to detect transmission errors. The second scheme uses a product-code structure consisting of a constant-rate LDPC/CRC code across the rows of the 'blocks' of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes, we use fixed-length source packets protected with unequal forward error correction coding, ensuring strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. A rate-distortion optimization algorithm is developed for the selection of source and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions; both proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
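The Lagrangian rate-selection step can be illustrated abstractly: among candidate (source rate, channel code rate) operating points, pick the one minimizing D + lambda * R. The `operating_points` structure and its field names below are hypothetical, not from the paper.

```python
def select_rates(operating_points, lam):
    """Lagrangian rate-distortion selection (sketch): each candidate operating
    point carries a total rate R and an expected distortion D; the chosen point
    minimizes the Lagrangian cost D + lam * R."""
    return min(operating_points, key=lambda p: p["distortion"] + lam * p["rate"])
```

Sweeping `lam` traces out the convex hull of the rate-distortion points, which is how a rate budget is mapped to a specific source/channel rate pair.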
A motion-classification strategy based on sEMG-EEG signal combination for upper-limb amputees.
Li, Xiangxin; Samuel, Oluwarotimi Williams; Zhang, Xu; Wang, Hui; Fang, Peng; Li, Guanglin
2017-01-07
Most modern motorized prostheses are controlled with surface electromyography (sEMG) recorded on the residual muscles of amputated limbs. However, the residual muscles are usually limited, especially after above-elbow amputations, and may not provide enough sEMG for the control of prostheses with multiple degrees of freedom. Signal fusion is a possible approach to the problem of insufficient control commands, where non-EMG signals are combined with sEMG signals to provide sufficient information for motion-intention decoding. In this study, a motion-classification method that combines sEMG and electroencephalography (EEG) signals was proposed and investigated in order to improve the control performance of upper-limb prostheses. Four transhumeral amputees without any form of neurological disease were recruited in the experiments. Five motion classes, including hand-open, hand-close, wrist-pronation, wrist-supination, and no-movement, were specified. While the motions were performed, sEMG and EEG signals were simultaneously acquired from the skin surface and scalp of the amputees, respectively. The two types of signals were independently preprocessed and then combined as a parallel control input. Four time-domain features were extracted and fed into a classifier trained by the linear discriminant analysis (LDA) algorithm for motion recognition. In addition, channel selection was performed using the sequential forward selection (SFS) algorithm to optimize the performance of the proposed method. The classification performance achieved by the fusion of sEMG and EEG signals was significantly better than that obtained from a single signal source of either sEMG or EEG. An increment of more than 14% in classification accuracy was achieved when using a combination of 32-channel sEMG and 64-channel EEG.
Furthermore, based on the SFS algorithm, two optimized electrode arrangements (10-channel sEMG + 10-channel EEG, and 10-channel sEMG + 20-channel EEG) were obtained, with classification accuracies of 84.2% and 87.0%, respectively, which were about 7.2% and 10% higher than the accuracy obtained using only the 32-channel sEMG input. This study demonstrated the feasibility of fusing sEMG and EEG signals to improve motion-classification accuracy for above-elbow amputees, which might enhance the control performance of multifunctional myoelectric prostheses in clinical application. The study was approved by the ethics committee of the Institutional Review Board of Shenzhen Institutes of Advanced Technology, under reference number SIAT-IRB-150515-H0077.
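The SFS wrapper used above is a greedy loop: repeatedly add the channel that most improves a classification score. In this sketch a nearest-centroid score stands in for the paper's LDA accuracy, and all names and data are illustrative.

```python
import numpy as np

def centroid_accuracy(Xs, y):
    """Stand-in scoring function (the paper scores subsets with LDA accuracy):
    classify each sample to the nearer of the two class centroids."""
    m0, m1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - m1, axis=1) < np.linalg.norm(Xs - m0, axis=1)).astype(int)
    return (pred == y).mean()

def sfs_select(X, y, score_fn, k):
    """Sequential forward selection over channels (columns of X)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best = max(remaining, key=lambda c: score_fn(X[:, selected + [c]], y))
        selected.append(best)
        remaining.remove(best)
    return selected
```

On synthetic data with two informative channels buried among noise channels, the greedy loop recovers the informative pair.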
Computer Aided Synthesis of Measurement Schemes for Telemetry Applications
1997-09-02
5.2.5. Frame structure generation. The algorithm generating the frame structure should take as inputs the sampling frequency requirements of the channels ... these channels into the frame structure. Generally there can be many ways to divide channels among groups. The algorithm implemented in ... groups) first. The algorithm uses the function "try_permutation" recursively to distribute channels among the groups, and the function "try_subtable" ...
NASA Astrophysics Data System (ADS)
Chakraborty, Tamal; Saha Misra, Iti
2016-03-01
Secondary Users (SUs) in a Cognitive Radio Network (CRN) face unpredictable interruptions in transmission due to the random arrival of Primary Users (PUs), leading to spectrum handoff or dropping instances. An efficient spectrum handoff algorithm thus becomes one of the indispensable components in a CRN, especially for real-time communication like Voice over IP (VoIP). In this regard, this paper investigates the effects of spectrum handoff on the Quality of Service (QoS) for VoIP traffic in CRN and proposes a real-time spectrum handoff algorithm in two phases. The first phase (VAST: VoIP-based Adaptive Sensing and Transmission) adaptively varies the channel sensing and transmission durations to make intelligent dropping decisions. The second phase (ProReact: Proactive and Reactive Handoff) deploys efficient channel selection mechanisms during spectrum handoff for resuming communication. Extensive performance analysis in analytical and simulation models confirms a decrease in spectrum handoff delay for VoIP SUs of more than 40% and 60% compared to existing proactive and reactive algorithms, respectively, and ensures at least a 10% reduction in call-dropping probability with respect to previous works in this domain. The effective SU transmission duration is also maximized under the proposed algorithm, making it suitable for successful VoIP communication.
Segmenting texts from outdoor images taken by mobile phones using color features
NASA Astrophysics Data System (ADS)
Liu, Zongyi; Zhou, Hanning
2011-01-01
Recognizing text in low-resolution images taken by mobile phones has wide applications. It has been shown that good image binarization can substantially improve the performance of OCR engines. In this paper, we present a framework to segment text from outdoor images taken by mobile phones using color features. The framework consists of three steps: (i) initial processing, including image enhancement, binarization, and noise filtering, where we binarize the input images in each RGB channel and apply component-level noise filtering; (ii) grouping components into blocks using color features, where we compute component similarities by dynamically adjusting the weights of the RGB channels and merge groups hierarchically; and (iii) block selection, where we use run-length features and a Support Vector Machine (SVM) classifier. We tested the algorithm on 13 outdoor images taken by an old-style LG-64693 mobile phone with 640x480 resolution. We compared the segmentation results with Tsar's algorithm, a state-of-the-art camera text detection algorithm, and show that our algorithm is more robust, particularly in terms of false alarm rates. In addition, we evaluated the impact of our algorithm on ABBYY FineReader, one of the most popular commercial OCR engines on the market.
Rehan, Waqas; Fischer, Stefan; Rehan, Maaz
2016-09-12
Wireless sensor networks (WSNs) have become more and more diversified and are today able to also support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may result in inducing additional overheads, such as energy consumption, delays and, therefore, data loss. One of the solutions is to perform stream-based channel allocation where channel handshaking is performed once before transmitting the whole data stream. Deciding stream-based channel allocation is more critical in case of multichannel WSNs where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we will focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For performing channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), that can perform channel quality estimation on the basis of both current and past values of channel rank estimation. 
In the end, simulations are made using MATLAB, and the results show that the Extended version of NEAMCBTC algorithm (Ext-NEAMCBTC) outperforms the compared techniques in terms of channel quality and stability assessment. It also minimizes channel switching overheads (in terms of switching delays and energy consumption) for accommodating stream-based communication in multichannel WSNs.
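A toy version of a CRM-style composite metric makes the idea concrete: reward channels whose RSSI is stable (low standard deviation) and whose mean LQI is high. The weighting, normalization, and thresholds below are illustrative assumptions, not the paper's exact CRM definition.

```python
import numpy as np

def channel_rank_measure(rssi, lqi, w=0.5):
    """Composite channel-rank sketch combining RSSI stability and mean LQI."""
    stability = 1.0 / (1.0 + np.std(rssi))     # in (0, 1]; 1 = perfectly stable RSSI
    quality = np.mean(lqi) / 255.0             # LQI is reported on 0-255 by typical 802.15.4 radios
    return w * stability + (1.0 - w) * quality

def classify_channel(crm, good=0.7, bad=0.4):
    """Demarcate a channel as good / intermediate / bad by CRM thresholds (hypothetical cutoffs)."""
    return "good" if crm >= good else ("bad" if crm <= bad else "intermediate")
```

A supervised predictor (the NEC family in the paper) would then be trained to estimate this rank from recent link measurements rather than computing it after the fact.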
PMID:27626429
Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel
Akbari, Mohsen; Manesh, Mohsen Riahi
2014-01-01
In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using maximal ratio combining (MRC), provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which degrades system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary algorithms, namely, particle swarm optimization (PSO) and genetic algorithm (GA), for diversity combining of signals travelling across imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components so as to maximize the SNR and minimize the bit error rate (BER). The results indicate that the proposed method eliminates the need for channel estimation and can outperform the conventional diversity combining methods. PMID:25045725
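For reference, the conventional MRC baseline that the evolutionary combiners are compared against can be written directly when the channel is perfectly known: each branch is weighted by its conjugate channel gain over its noise variance, and the post-combining SNR is the sum of the per-branch SNRs. This is a textbook sketch; the variable names are illustrative.

```python
import numpy as np

def mrc_combine(r, h, noise_var):
    """Maximal ratio combining with perfect CSI: weight branch k by
    conj(h_k)/noise_var, then normalize for an unbiased symbol estimate."""
    w = np.conj(h) / noise_var
    return (w @ r) / (w @ h)

def combined_snr(h, noise_var, symbol_energy=1.0):
    """Post-combining SNR of MRC equals the sum of the branch SNRs."""
    return symbol_energy * np.sum(np.abs(h) ** 2 / noise_var)
```

With imperfect channel estimates these weights are no longer optimal, which is the gap the ICA/PSO/GA combiners aim to close by searching the weight space directly.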
NASA Astrophysics Data System (ADS)
Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.
2015-09-01
We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
NASA Astrophysics Data System (ADS)
Lim, H.; Choi, M.; Kim, J.; Go, S.; Chan, P.; Kasai, Y.
2017-12-01
This study retrieves aerosol optical properties (AOPs) with a spectral matching method using three visible channels and one near-infrared channel (470, 510, 640, and 860 nm). The method requires a look-up table (LUT) constructed from radiative transfer modeling. Cloud detection is one of the most important processes for guaranteeing the quality of AOPs. Since the AHI has several infrared channels, which are very advantageous for cloud detection, clouds can be removed by using the brightness temperature difference (BTD) and a spatial variability test. The Yonsei Aerosol Retrieval (YAER) algorithm is primarily applied over dark surfaces, so bright surfaces (e.g., desert, snow) are masked first. We then account for the reflectance characteristics of land and ocean surfaces using the three visible channels. The known surface-reflectivity problem at high latitudes is addressed in this algorithm by selecting appropriate channels through improved tests. Over land, we retrieve the AOPs by obtaining the visible surface reflectance from the relationship between the near-infrared reflectance and the shortwave-infrared normalized difference vegetation index (NDVIswir). Because ESR tends to be underestimated over urban and cropland areas, we improved the visible surface reflectance by accounting for the urban effect. In this version, the ocean surface reflectance uses the updated Cox and Munk method, which accounts for the ocean bidirectional reflectance distribution function (BRDF); its inputs include wind speed, chlorophyll concentration, and salinity. Based on validation against sun-photometer measurements from the AErosol RObotic NETwork (AERONET), we confirm that the quality of the aerosol optical depth (AOD) from the YAER algorithm is comparable to the product from the Japan Aerospace Exploration Agency (JAXA) retrieval algorithm. Future updates include improved land surface reflectance via a hybrid approach and consideration of non-spherical aerosols. This will further improve the quality of the YAER algorithm, particularly retrievals of dust particles over bright surfaces in East Asia.
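The spectral-matching core of a LUT-based retrieval can be sketched as a nearest-fit search: pick the LUT entry whose simulated top-of-atmosphere reflectances across the channels best match the observation. The reflectance and AOD values below are hypothetical, not from the YAER LUT.

```python
import numpy as np

def retrieve_aod(obs_refl, lut_refl, lut_aod):
    """Least-squares spectral matching over a precomputed LUT (sketch).

    obs_refl: (C,) observed reflectances in C channels.
    lut_refl: (M, C) simulated reflectances for M candidate AOD values.
    lut_aod:  (M,) the AOD value associated with each LUT row.
    """
    cost = np.sum((lut_refl - obs_refl) ** 2, axis=1)
    return lut_aod[np.argmin(cost)]
```

A production algorithm interpolates between LUT nodes and weights channels by their uncertainty, but the selection principle is the same.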
Estimation of saturated pixel values in digital color imaging
Zhang, Xuemei; Brainard, David H.
2007-01-01
Pixel saturation, where the incident light at a pixel causes one of the color channels of the camera sensor to respond at its maximum value, can produce undesirable artifacts in digital color images. We present a Bayesian algorithm that estimates what the saturated channel's value would have been in the absence of saturation. The algorithm uses the non-saturated responses from the other color channels, together with a multivariate Normal prior that captures the correlation in response across color channels. The appropriate parameters for the prior may be estimated directly from the image data, since most image pixels are not saturated. Given the prior, the responses of the non-saturated channels, and the fact that the true response of the saturated channel is known to be greater than the saturation level, the algorithm returns the optimal expected mean square estimate for the true response. Extensions of the algorithm to the case where more than one channel is saturated are also discussed. Both simulations and examples with real images are presented to show that the algorithm is effective. PMID:15603065
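The conditional-mean computation behind this idea can be sketched as follows: condition the multivariate normal prior on the unsaturated channels, then take the mean of the resulting univariate normal truncated below at the saturation level. This is a simplified single-saturated-channel illustration with assumed prior parameters, not the paper's full algorithm.

```python
import math
import numpy as np

def estimate_saturated(obs, sat_idx, sat_level, mu, Sigma):
    """Estimate a saturated channel's true value (sketch).

    obs: values of the unsaturated channels (in index order, sat_idx omitted).
    mu, Sigma: prior mean and covariance over all channels.
    """
    idx = [i for i in range(len(mu)) if i != sat_idx]
    S_oo = Sigma[np.ix_(idx, idx)]
    S_so = Sigma[sat_idx, idx]
    # Conditional normal of the saturated channel given the observed channels.
    m = mu[sat_idx] + S_so @ np.linalg.solve(S_oo, obs - mu[idx])
    s = math.sqrt(Sigma[sat_idx, sat_idx] - S_so @ np.linalg.solve(S_oo, S_so))
    # Mean of that normal truncated below at the saturation level.
    a = (sat_level - m) / s
    pdf = math.exp(-0.5 * a * a) / math.sqrt(2 * math.pi)
    sf = 0.5 * math.erfc(a / math.sqrt(2))          # P(Z > a)
    return m + s * pdf / sf
```

The cross-channel covariance does the real work: a highly correlated prior lets the unsaturated channels pin down the saturated one tightly.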
A multichannel block-matching denoising algorithm for spectral photon-counting CT images.
Harrison, Adam P; Xu, Ziyue; Pourmorteza, Amir; Bluemke, David A; Mollura, Daniel J
2017-06-01
We present a denoising algorithm designed for a whole-body prototype photon-counting computed tomography (PCCT) scanner with up to 4 energy thresholds and associated energy-binned images. Spectral PCCT images can exhibit low signal-to-noise ratios (SNRs) due to the limited photon counts in each simultaneously-acquired energy bin. To help address this, our denoising method exploits the correlation and exact alignment between energy bins, adapting the highly effective block-matching 3D (BM3D) denoising algorithm for PCCT. The original single-channel BM3D algorithm operates patch-by-patch. For each small patch in the image, a patch grouping action collects similar patches from the rest of the image, which are then collaboratively filtered together. The resulting performance hinges on accurate patch grouping. Our improved multi-channel version, called BM3D_PCCT, incorporates two improvements. First, BM3D_PCCT uses a more accurate shared patch grouping based on the image reconstructed from photons detected in all 4 energy bins. Second, BM3D_PCCT performs a cross-channel decorrelation, adding a further dimension to the collaborative filtering process. These two improvements produce a more effective algorithm for PCCT denoising. Preliminary results compare BM3D_PCCT against BM3D_Naive, which denoises each energy bin independently. Experiments use a three-contrast PCCT image of a canine abdomen. Within five regions of interest, selected from paraspinal muscle, liver, and visceral fat, BM3D_PCCT reduces the noise standard deviation by 65.0%, compared to 40.4% for BM3D_Naive. Attenuation values of the contrast agents in calibration vials also cluster much tighter to their respective lines of best fit. Mean angular differences (in degrees) for the original, BM3D_Naive, and BM3D_PCCT images, respectively, were 15.61, 7.34, and 4.45 (iodine); 12.17, 7.17, and 4.39 (gadolinium); and 12.86, 6.33, and 3.96 (bismuth).
We outline a multi-channel denoising algorithm tailored for spectral PCCT images, demonstrating improved performance over an independent, yet state-of-the-art, single-channel approach. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
Proceedings of the Conference on Moments and Signal
NASA Astrophysics Data System (ADS)
Purdue, P.; Solomon, H.
1992-09-01
The focus of this paper is (1) to describe systematic methodologies for selecting nonlinear transformations for blind equalization algorithms (and thus new types of cumulants), and (2) to give an overview of the existing blind equalization algorithms and point out their strengths as well as weaknesses. It is shown that all blind equalization algorithms belong to one of the following three categories, depending on where the nonlinear transformation is applied to the data: (1) the Bussgang algorithms, where the nonlinearity is at the output of the adaptive equalization filter; (2) the polyspectra (or Higher-Order Spectra) algorithms, where the nonlinearity is at the input of the adaptive equalization filter; and (3) the algorithms where the nonlinearity is inside the adaptive filter, i.e., the nonlinear filter or neural network. We describe methodologies for selecting nonlinear transformations based on various optimality criteria such as MSE or MAP. We illustrate that such existing algorithms as Sato, Benveniste-Goursat, Godard or CMA, Stop-and-Go, and Donoho are indeed special cases of the Bussgang family of techniques when the nonlinearity is memoryless. We present results demonstrating that the polyspectra-based algorithms exhibit a faster convergence rate than Bussgang algorithms. However, this improved performance comes at the expense of more computations per iteration. We also show that blind equalizers based on nonlinear filters or neural networks are better suited to channels that have nonlinear distortions.
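As a concrete member of the Bussgang family, the Godard/CMA equalizer applies a memoryless nonlinearity to the equalizer output: the error term penalizes deviation of the output modulus from a constant. The tap count, step size, and center-spike initialization below are illustrative choices.

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
    """Godard p=2 / constant modulus algorithm (CMA) sketch: a Bussgang-type
    blind equalizer whose nonlinearity acts on the equalizer output y."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                   # center-spike initialization
    for k in range(n_taps, len(x) + 1):
        u = x[k - n_taps:k][::-1]          # regressor, most recent sample first
        y = w @ u
        e = y * (np.abs(y) ** 2 - R2)      # constant-modulus error term
        w = w - mu * e * np.conj(u)        # stochastic-gradient tap update
    return w
```

No training sequence is needed: the update only uses the statistic that the transmitted constellation has (near-)constant modulus.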
NASA Astrophysics Data System (ADS)
Tomiwa, K. G.
2017-09-01
The search for new physics in the H → γγ + MET channel relies on how well the missing transverse energy (MET) is reconstructed. The MET algorithm used by the ATLAS experiment in turn uses input objects such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (the hardest-vertex method and the Neural Network method). Comparing these algorithms on the nominal Standard Model sample and the Beyond Standard Model sample, we find that the Neural Network method of primary-vertex selection performs better overall than the hardest-vertex method.
Levine, Judah
2016-01-01
A method is presented for synchronizing the time of a clock to a remote time standard when the channel connecting the two has significant delay variation that can be described only statistically. The method compares the Allan deviation of the channel fluctuations to the free-running stability of the local clock, and computes the optimum interval between requests based on one of three selectable requirements: (1) choosing the highest possible accuracy, (2) choosing the best tradeoff of cost vs. accuracy, or (3) minimizing the number of requests to realize a specific accuracy. Once the interval between requests is chosen, the final step is to steer the local clock based on the received data. A typical adjustment algorithm, which supports both the statistical considerations based on the Allan deviation comparison and the timely detection of errors is included as an example. PMID:26529759
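The Allan-deviation comparison at the heart of the method starts from an estimate like the following overlapping estimator applied to time-error samples; the channel's curve is then compared against the local clock's free-running curve to pick the polling interval. Variable names are illustrative.

```python
import numpy as np

def allan_deviation(x, tau0, m):
    """Overlapping Allan deviation of time-error samples x (seconds),
    taken every tau0 seconds, at averaging interval m * tau0."""
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]   # second differences at lag m
    return np.sqrt(0.5 * np.mean(d ** 2)) / (m * tau0)
```

For white phase noise (typical of network-delay jitter) the Allan deviation falls as roughly 1/tau, while a free-running clock's instability eventually rises with tau; the crossover of the two curves marks the interval beyond which polling more often no longer helps.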
Multiuser TOA Estimation Algorithm in DS-CDMA Sparse Channel for Radiolocation
NASA Astrophysics Data System (ADS)
Kim, Sunwoo
This letter considers multiuser time delay estimation in a sparse channel environment for radiolocation. The generalized successive interference cancellation (GSIC) algorithm is used to eliminate the multiple access interference (MAI). To adapt GSIC to sparse channels the alternating maximization (AM) algorithm is considered, and the continuous time delay of each path is estimated without requiring a priori known data sequences.
Application of Reinforcement Learning in Cognitive Radio Networks: Models and Algorithms
Yau, Kok-Lim Alvin; Poh, Geong-Sen; Chien, Su Fong; Al-Rawi, Hasan A. A.
2014-01-01
Cognitive radio (CR) enables unlicensed users to exploit underutilized portions of the licensed spectrum whilst minimizing interference to licensed users. Reinforcement learning (RL), an artificial intelligence approach, has been applied to enable each unlicensed user to observe and carry out optimal actions for performance enhancement in a wide range of schemes in CR, such as dynamic channel selection and channel sensing. This paper presents new discussions of RL in the context of CR networks. It provides an extensive review of how most schemes have been approached using the traditional and enhanced RL algorithms through state, action, and reward representations. Examples of enhancements on RL that do not appear in the traditional RL approach are rules and cooperative learning. This paper also reviews performance enhancements brought about by the RL algorithms, as well as open issues. This paper aims to establish a foundation in order to spark new research interests in this area. Our discussion is presented in a tutorial manner so that it is comprehensible to readers outside the specialty of RL and CR. PMID:24995352
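A toy example of RL-driven dynamic channel selection: a stateless Q-learning sketch in which the unlicensed user receives reward 1 when the sensed channel happens to be idle. The environment model (fixed per-channel idle probabilities, unknown to the learner) and all parameters are hypothetical.

```python
import random

def learn_channel_values(idle_prob, episodes=5000, alpha=0.05, eps=0.1, seed=0):
    """Stateless Q-learning for channel selection (sketch): actions are channels,
    reward is 1 when the chosen channel is free of primary-user activity."""
    rng = random.Random(seed)
    q = [0.0] * len(idle_prob)
    for _ in range(episodes):
        if rng.random() < eps:                         # explore a random channel
            a = rng.randrange(len(q))
        else:                                          # exploit the best-known channel
            a = max(range(len(q)), key=q.__getitem__)
        reward = 1.0 if rng.random() < idle_prob[a] else 0.0
        q[a] += alpha * (reward - q[a])                # incremental value update
    return q
```

After learning, each q-value approximates the corresponding channel's idle probability, so the greedy policy settles on the channel with the most spectrum opportunities.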
NASA Astrophysics Data System (ADS)
Silva, João Carlos; Souto, Nuno; Cercas, Francisco; Dinis, Rui
An MMSE (Minimum Mean Square Error) DS-CDMA (Direct Sequence-Code Division Multiple Access) receiver coupled with a low-complexity iterative interference suppression algorithm was devised for a MIMO/BLAST (Multiple Input, Multiple Output / Bell Laboratories Layered Space Time) system in order to improve system performance, considering frequency-selective fading channels. The scheme is compared against the simple MMSE receiver, for both QPSK and 16QAM modulations, under SISO (Single Input, Single Output) and MIMO systems, the latter with 2Tx by 2Rx and 4Tx by 4Rx antennas (MIMO order 2 and 4, respectively). To assess its performance in an existing system, the uncoded UMTS HSDPA (High Speed Downlink Packet Access) standard was considered.
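The linear MMSE detection step that such iterative schemes build on can be written compactly, assuming unit-energy symbols and perfect channel knowledge (a textbook sketch, not the paper's full receiver):

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE MIMO detection: x_hat = (H^H H + sigma^2 I)^-1 H^H y,
    where H is the (Nr x Nt) channel matrix and y the received vector."""
    Nt = H.shape[1]
    A = H.conj().T @ H + noise_var * np.eye(Nt)
    return np.linalg.solve(A, H.conj().T @ y)
```

The sigma^2 regularization is what distinguishes MMSE from zero-forcing: it avoids noise amplification when H is poorly conditioned, at the cost of a small bias. Iterative interference suppression then refines these soft estimates layer by layer.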
Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes.
Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M
2018-04-12
Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.
PMID:29649114
Two algorithms for neural-network design and training with application to channel equalization.
Sweatman, C Z; Mulgrew, B; Gibson, G J
1998-01-01
We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.
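The abstract does not spell out the PLSA update rule, but the error-correction approach it builds on is the classic perceptron rule: adjust the weights only when a training point is misclassified. A minimal sketch in Python (the toy data and parameters are illustrative, not from the paper):

```python
import numpy as np

def perceptron_train(X, y, epochs=50, lr=1.0):
    """Classic error-correction perceptron: update weights only on misclassified points."""
    w = np.zeros(X.shape[1] + 1)  # last entry is the bias term
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xb, y):  # labels yi in {-1, +1}
            if yi * (w @ xi) <= 0:  # misclassified (or on the boundary)
                w += lr * yi * xi
                errors += 1
        if errors == 0:  # converged: all points correctly classified
            break
    return w

# Linearly separable toy set: class is the sign of the first coordinate
X = np.array([[2.0, 1.0], [1.5, -1.0], [-2.0, 0.5], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)
preds = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
```

For separable data like this the rule converges in finitely many updates; the slab algorithms in the paper add the highly constrained parameter space on top of this basic mechanism.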
Hamada, Yuki; O'Connor, Ben L.; Orr, Andrew B.; ...
2016-03-26
Understanding the spatial patterns of ephemeral streams is crucial for understanding how hydrologic processes influence the abundance and distribution of wildlife habitats in desert regions. Available methods for mapping ephemeral streams at the watershed scale typically underestimate the size of channel networks. Although remote sensing is an effective means of collecting data and obtaining information on large, inaccessible areas, conventional techniques for extracting channel features are not sufficient in regions that have small topographic gradients and subtle target-background spectral contrast. By using very high resolution multispectral imagery, we developed a new algorithm that applies landscape information to map ephemeral channels in desert regions of the Southwestern United States where utility-scale solar energy development is occurring. Knowledge about landscape features and structures was integrated into the algorithm through a series of spectral transformations and spatial statistical operations. The algorithm extracted ephemeral stream channels at a local scale, identifying approximately 900% more ephemeral streams than the U.S. Geological Survey's National Hydrography Dataset. The accuracy of the algorithm in detecting channel areas was as high as 92%, and its accuracy in delineating channel center lines was 91% when compared to a subset of channel networks digitized from the very high resolution imagery. Although the algorithm captured stream channels in desert landscapes across various channel sizes and forms, it often underestimated stream headwaters and channels obscured by bright soils and sparse vegetation.
While further improvement is warranted, the algorithm provides an effective means of obtaining detailed information about ephemeral streams, and it could make a significant contribution toward improving the hydrological modelling of desert environments.
Stratiform/convective rain delineation for TRMM microwave imager
NASA Astrophysics Data System (ADS)
Islam, Tanvir; Srivastava, Prashant K.; Dai, Qiang; Gupta, Manika; Wan Jaafar, Wan Zurina
2015-10-01
This article investigates the potential of machine learning algorithms to delineate stratiform/convective (S/C) rain regimes for a passive microwave imager, taking calibrated brightness temperatures as the only spectral parameters. The algorithms have been implemented for the Tropical Rainfall Measuring Mission (TRMM) microwave imager (TMI), and calibrated as well as validated taking the Precipitation Radar (PR) S/C information as the target class variables. Two different algorithms are explored for the delineation. The first is a metaheuristic adaptive boosting algorithm that includes the real, gentle, and modest versions of AdaBoost. The second is classical linear discriminant analysis, including the Fisher's and penalized versions. Furthermore, prior to the development of the delineation algorithms, a feature selection analysis was conducted for a total of 85 features, comprising combinations of brightness temperatures from 10 GHz to 85 GHz and some derived indexes, such as the scattering index, polarization corrected temperature, and polarization difference, with the help of the mutual-information-aided minimal redundancy maximal relevance (mRMR) criterion. It was found that the polarization corrected temperature at 85 GHz and the features derived from the "addition" operator associated with the 85 GHz channels have good statistical dependency on the S/C target class variables. Further, it is shown how the mRMR feature selection technique helps to reduce the number of features without deteriorating the results when applied through the machine learning algorithms. The proposed scheme is able to delineate the S/C rain regimes with reasonable accuracy. Based on the statistical validation over the validation period, the Matthews correlation coefficients are in the range of 0.60-0.70.
Since the proposed method does not rely on any a priori information, it is well suited to other microwave sensors having channels similar to the TMI's. The method could also benefit the constellation sensors in the Global Precipitation Measurement (GPM) mission era.
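The Matthews correlation coefficient used for validation above is computed directly from the binary confusion matrix; the counts below are hypothetical, chosen only so the result lands in the reported 0.60-0.70 range.

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """MCC from binary confusion-matrix counts; a zero denominator maps to 0 by convention."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical delineation counts, with "convective" as the positive class
mcc = matthews_corrcoef(tp=60, tn=120, fp=20, fn=10)
```

Unlike plain accuracy, MCC stays informative when the two classes are imbalanced, which is typical for S/C delineation since stratiform rain dominates most scenes.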
A Degree Distribution Optimization Algorithm for Image Transmission
NASA Astrophysics Data System (ADS)
Jiang, Wei; Yang, Junjie
2016-09-01
Luby Transform (LT) codes are the first practical implementation of digital fountain codes. The coding behavior of an LT code is mainly decided by the degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of finite encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme for sparse degrees of LT codes is introduced; then the probability distribution is optimized over the selected degrees. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause loss of synchronization between the encoder and the decoder. Therefore the proposed algorithm is designed for the image transmission scenario. Moreover, optimal class partition is studied for image transmission with unequal error protection. The experimental results are quite promising: compared with an LT code with the robust soliton distribution, the proposed algorithm noticeably improves the final quality of the recovered images at the same overhead.
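For reference, the robust soliton distribution that the proposed algorithm is compared against, together with one LT encoding step, can be sketched as follows (a straightforward transcription of Luby's definition; the parameters c and delta are illustrative):

```python
import math
import random

def robust_soliton(k, c=0.1, delta=0.5):
    """Luby's robust soliton degree distribution over degrees 1..k."""
    R = c * math.log(k / delta) * math.sqrt(k)
    # Ideal soliton component rho(d)
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Correction component tau(d) with a spike near k/R
    spike = int(round(k / R))
    tau = [0.0] * k
    for d in range(1, k + 1):
        if d < spike:
            tau[d - 1] = R / (d * k)
        elif d == spike:
            tau[d - 1] = R * math.log(R / delta) / k
    mu = [r + t for r, t in zip(rho, tau)]
    Z = sum(mu)  # normalize so the probabilities sum to 1
    return [m / Z for m in mu]

def lt_encode_symbol(source, dist, rng):
    """One LT codeword: draw a degree from dist, XOR that many distinct source symbols."""
    degree = rng.choices(range(1, len(source) + 1), weights=dist)[0]
    chosen = rng.sample(range(len(source)), degree)
    value = 0
    for i in chosen:
        value ^= source[i]
    return chosen, value

rng = random.Random(0)
dist = robust_soliton(16)
chosen, value = lt_encode_symbol(list(range(16)), dist, rng)
```

The optimization described in the abstract would replace this fixed distribution with one tuned over a sparse set of degrees for short block lengths.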
Investigation of cloud/water vapor motion winds from geostationary satellite
NASA Technical Reports Server (NTRS)
Nieman, Steve; Velden, Chris; Hayden, Kit; Menzel, Paul
1993-01-01
Work has primarily focused on three tasks: (1) comparison of wind fields produced at MSFC with the CO2 autowind/autoeditor system newly installed in NESDIS operations; (2) evaluation of techniques for improved tracer selection through use of cloud classification predictors; and (3) development of a height assignment algorithm using water vapor channel radiances. The contract goal is to improve the CIMSS wind system by developing new techniques and better assimilating existing ones. The work reported here was done in collaboration with the NESDIS scientists working on the operational winds software, so that NASA-funded research can benefit NESDIS operational algorithms.
Enhancing the Selection of Backoff Interval Using Fuzzy Logic over Wireless Ad Hoc Networks
Ranganathan, Radha; Kannan, Kathiravan
2015-01-01
IEEE 802.11 is the de facto standard for medium access over wireless ad hoc networks. The collision avoidance mechanism (i.e., random binary exponential backoff, BEB) of the IEEE 802.11 DCF (distributed coordination function) is inefficient and unfair, especially under heavy load. In the literature, many algorithms have been proposed to tune the contention window (CW) size. However, these algorithms make every node select its backoff interval within [0, CW] in a random and uniform manner. This randomness is incorporated to avoid collisions among the nodes, but the random backoff interval can change the optimal order and frequency of channel access among competing nodes, which results in unfairness and increased delay. In this paper, we propose an algorithm that schedules medium access in a fair and effective manner. This algorithm enhances IEEE 802.11 DCF with an additional level of contention resolution that prioritizes the contending nodes according to their queue lengths and waiting times. Each node computes its unique backoff interval using fuzzy logic based on input parameters collected from contending nodes through overhearing. We evaluate our algorithm against the IEEE 802.11 and GDCF (gentle distributed coordination function) protocols using the ns-2.35 simulator and show that our algorithm achieves good performance. PMID:25879066
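For context, the baseline BEB mechanism that the fuzzy scheme replaces draws the backoff uniformly from [0, CW] and doubles CW after each collision; a minimal sketch, assuming the typical 802.11 contention-window bounds:

```python
import random

CW_MIN, CW_MAX = 15, 1023  # typical 802.11 DCF contention-window bounds

def next_backoff(cw, collided, rng):
    """Baseline BEB: double the window (capped at CW_MAX) after a collision,
    reset to CW_MIN after a success, then draw slots uniformly from [0, cw]."""
    cw = min(2 * (cw + 1) - 1, CW_MAX) if collided else CW_MIN
    return cw, rng.randint(0, cw)

rng = random.Random(1)
cw, slots = next_backoff(CW_MIN, collided=True, rng=rng)  # window grows to 31
```

The uniform draw in the last line is exactly the randomness the paper objects to: it ignores how long each node has waited and how backlogged it is, which is what the fuzzy backoff computation takes into account instead.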
NASA Astrophysics Data System (ADS)
Zhang, Hao; Chen, Minghua; Parekh, Abhay; Ramchandran, Kannan
2011-09-01
We design a distributed multi-channel P2P Video-on-Demand (VoD) system using "plug-and-play" helpers. Helpers are heterogeneous "micro-servers" with limited storage, bandwidth and number of users they can serve simultaneously. Our proposed system has the following salient features: (1) it jointly optimizes over helper-user connection topology, video storage distribution and transmission bandwidth allocation; (2) it minimizes server load, and is adaptable to varying supply and demand patterns across multiple video channels irrespective of video popularity; and (3) it is fully distributed and requires little or no maintenance overhead. The combinatorial nature of the problem and the system demand for distributed algorithms make the problem uniquely challenging. By utilizing Lagrangian decomposition and Markov chain approximation based arguments, we address this challenge by designing two distributed algorithms running in tandem: a primal-dual storage and bandwidth allocation algorithm and a "soft-worst-neighbor-choking" topology-building algorithm. Our scheme provably converges to a near-optimal solution, and is easy to implement in practice. Packet-level simulation results show that the proposed scheme achieves minimum server load under highly heterogeneous combinations of supply and demand patterns, and is robust to system dynamics of user/helper churn, user/helper asynchrony, and random delays in the network.
Li, Junfeng; Yang, Lin; Zhang, Jianping; Yan, Yonghong; Hu, Yi; Akagi, Masato; Loizou, Philipos C
2011-05-01
A large number of single-channel noise-reduction algorithms have been proposed, based largely on mathematical principles. Most of these algorithms, however, have been evaluated with English speech. Given the different perceptual cues used by native listeners of different languages, including tonal languages, it is of interest to examine whether there are any language effects when the same noise-reduction algorithm is used to process noisy speech in different languages. This study undertakes a comparative evaluation of various single-channel noise-reduction algorithms applied to noisy speech in three languages: Chinese, Japanese, and English. Clean speech signals (Chinese words and Japanese words) were first corrupted by three types of noise at two signal-to-noise ratios and then processed by five single-channel noise-reduction algorithms. The processed signals were finally presented to normal-hearing listeners for recognition. Intelligibility evaluation showed that the majority of noise-reduction algorithms did not improve speech intelligibility. Consistent with a previous study with the English language, the Wiener filtering algorithm produced small, but statistically significant, improvements in intelligibility for car and white noise conditions. Significant differences between the performances of noise-reduction algorithms across the three languages were observed.
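The abstract does not give implementation details of the evaluated Wiener filtering algorithm; a minimal single-frame, frequency-domain sketch of Wiener-type noise reduction (the test signal, noise level, and flat PSD estimate are all illustrative) looks like this:

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=1e-3):
    """Per-frequency Wiener gain SNR/(1+SNR), with the SNR
    approximated by (|Y|^2 - |N|^2) / |N|^2 and floored at a small value."""
    snr = np.maximum(noisy_psd - noise_psd, 0.0) / np.maximum(noise_psd, 1e-12)
    return np.maximum(snr / (1.0 + snr), floor)

def denoise_frame(frame, noise_psd):
    """Apply the Wiener gain to one analysis frame in the frequency domain."""
    spec = np.fft.rfft(frame)
    gain = wiener_gain(np.abs(spec) ** 2, noise_psd)
    return np.fft.irfft(gain * spec, n=len(frame))

# Illustrative frame: a sinusoid plus white noise with a known flat noise PSD
rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
clean = np.sin(2 * np.pi * 8 * t / n)
noisy = clean + 0.3 * rng.standard_normal(n)
noise_psd = np.full(n // 2 + 1, 0.3 ** 2 * n)  # expected |N_k|^2 for white noise
out = denoise_frame(noisy, noise_psd)
```

Real systems apply this frame by frame with overlap-add and a running noise-PSD estimate; the key point for the study above is that such gains reduce noise energy without necessarily improving intelligibility.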
Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model
NASA Astrophysics Data System (ADS)
Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef
2016-10-01
We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many training images and what training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate of change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the patterns represented in the set of training images as the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
Utilization of all Spectral Channels of IASI for the Retrieval of the Atmospheric State
NASA Astrophysics Data System (ADS)
Del Bianco, S.; Cortesi, U.; Carli, B.
2010-12-01
The retrieval of atmospheric state parameters from broadband measurements acquired by high spectral resolution sensors, such as the Infrared Atmospheric Sounding Interferometer (IASI) onboard the Meteorological Operational (MetOp) platform, generally requires dealing with a prohibitively large number of spectral elements available from a single observation (8461 samples in the case of IASI, covering the 645-2760 cm-1 range with a resolution of 0.5 cm-1 and a spectral sampling of 0.25 cm-1). Most inversion algorithms developed for both operational and scientific analysis of IASI spectra perform a reduction of the data - typically based on channel selection, super-channel clustering or Principal Component Analysis (PCA) techniques - in order to handle the high dimensionality of the problem. Accordingly, simultaneous processing of all IASI channels has received relatively little attention. Here we prove the feasibility of a retrieval approach exploiting all spectral channels of IASI to extract information on water vapor, temperature and ozone profiles. This multi-target retrieval removes the systematic errors due to interfering parameters and makes channel selection no longer necessary. The challenging computation is made possible by the use of a coarse spectral grid for the forward model calculation and by the abatement of the associated modeling errors through the use of a variance-covariance matrix of the residuals that takes into account all the forward model errors.
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimensions of the input feature vector (outer factor) and its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were added to the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6-8 channels of the model, with a principal component feature vector covering at least 90% cumulative variance, are adequate for a classification task of 3-5 pattern classes, considering the trade-off between time consumption and classification rate.
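The "90% cumulative variance" criterion used above for PCA-based dimension reduction can be sketched as follows; the synthetic sensor data are illustrative, not the wine or tea data sets:

```python
import numpy as np

def n_components_for_variance(X, target=0.90):
    """Smallest number of principal components whose cumulative explained
    variance reaches the target fraction (90% in the abstract)."""
    Xc = X - X.mean(axis=0)
    # Singular values give the component variances: var_i proportional to s_i^2
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    cum = np.cumsum(var) / var.sum()
    return int(np.searchsorted(cum, target) + 1)

# Illustrative sensor-array responses: 3 latent factors mixed into 10 channels
rng = np.random.default_rng(0)
latent = rng.standard_normal((200, 3))
mixing = rng.standard_normal((3, 10))
X = latent @ mixing + 0.01 * rng.standard_normal((200, 10))
k = n_components_for_variance(X, target=0.90)
```

Because the toy data has only three latent factors, at most three components are needed to pass 90%; on real electronic-nose data the count depends on how correlated the sensor channels are.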
Array signal recovery algorithm for a single-RF-channel DBF array
NASA Astrophysics Data System (ADS)
Zhang, Duo; Wu, Wen; Fang, Da Gang
2016-12-01
An array signal recovery algorithm based on sparse signal reconstruction theory is proposed for a single-RF-channel digital beamforming (DBF) array. A single-RF-channel antenna array is a low-cost antenna array in which signals are obtained from all antenna elements by only one microwave digital receiver. The spatially parallel array signals are converted into time-sequence signals, which are then sampled by the system. The proposed algorithm uses these time-sequence samples to recover the original parallel array signals by exploiting the second-order sparse structure of the array signals. Additionally, an optimization method based on the artificial bee colony (ABC) algorithm is proposed to improve the reconstruction performance. Using the proposed algorithm, the motion compensation problem for the single-RF-channel DBF array can be solved effectively, and the angle and Doppler information for the target can be simultaneously estimated. The effectiveness of the proposed algorithms is demonstrated by the results of numerical simulations.
Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network
Lin, Kai; Wang, Di; Hu, Long
2016-01-01
With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods. PMID:27376302
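The fusion-driven model above rests on Dempster-Shafer evidence theory; its core operation, Dempster's rule of combination, can be sketched as follows (the mass assignments for the two "sensors" are hypothetical):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions over frozenset focal
    elements; mass assigned to conflicting (empty-intersection) pairs is
    renormalized away."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Two hypothetical sensor nodes judging whether data belongs to class A or B;
# frozenset('AB') is the "either" hypothesis carrying the remaining uncertainty
m1 = {frozenset('A'): 0.6, frozenset('B'): 0.1, frozenset('AB'): 0.3}
m2 = {frozenset('A'): 0.5, frozenset('B'): 0.2, frozenset('AB'): 0.3}
m = dempster_combine(m1, m2)
```

Combining the two bodies of evidence sharpens the belief in class A, which is the kind of content-based decision the CMNC classification step would feed into its channel assignment.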
Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.
2012-01-01
Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion-system-equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel augments only the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response when the augmented channel is in place.
Infrared traffic image enhancement algorithm based on dark channel prior and gamma correction
NASA Astrophysics Data System (ADS)
Zheng, Lintao; Shi, Hengliang; Gu, Ming
2017-07-01
The infrared traffic images acquired by intelligent traffic surveillance equipment have low contrast, few hierarchical differences in perception, and a blurred visual effect. Infrared traffic image enhancement is therefore an indispensable step in nearly all infrared-imaging-based traffic engineering applications. In this paper, we propose an infrared traffic image enhancement algorithm based on the dark channel prior and gamma correction. The dark channel prior, well known as an image dehazing method, is used here for infrared image enhancement for the first time. In the proposed algorithm, the original degraded infrared traffic image is first transformed with the dark channel prior to obtain an initial enhanced result. A further adjustment based on the gamma curve is needed because the initial enhanced result has low brightness. Comprehensive validation experiments reveal that the proposed algorithm outperforms current state-of-the-art algorithms.
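A minimal sketch of the two building blocks named in the title, the dark channel (per-pixel minimum over color channels and a local patch) and gamma correction, under illustrative parameters (the paper's exact pipeline may differ):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: minimum over color channels, then over a local patch."""
    mins = img.min(axis=2)          # per-pixel minimum across channels
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):          # local minimum filter over the patch
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def gamma_correct(img, gamma=0.6):
    """Gamma < 1 brightens the low-intensity range of the initial result."""
    return np.clip(img, 0.0, 1.0) ** gamma

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(8, 8, 3))  # illustrative image in [0, 1]
dc = dark_channel(img)
bright = gamma_correct(np.full((4, 4), 0.25), gamma=0.5)
```

In the paper these two steps are chained: the dark-channel-based transform provides the initial enhancement, and the gamma curve lifts its overall brightness.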
Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.
Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata
2008-09-01
A novel algorithmic scheme for the numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates computation and reduces the computational demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency.
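The enhanced PNP solver itself is not reproduced in the abstract; the role of an adjustable relaxation coefficient can be illustrated on the much simpler 1-D Poisson problem with successive over-relaxation (a toy stand-in, not the authors' 3-D scheme):

```python
import numpy as np

def poisson_1d_sor(rho, h, omega=1.8, tol=1e-10, max_iter=20000):
    """Gauss-Seidel with relaxation coefficient omega (SOR) for -phi'' = rho,
    phi(0) = phi(1) = 0 on a uniform grid with spacing h."""
    n = len(rho)
    phi = np.zeros(n + 2)  # interior points plus the two boundary points
    for _ in range(max_iter):
        max_update = 0.0
        for i in range(1, n + 1):
            # Gauss-Seidel target value from the 3-point stencil
            new = 0.5 * (phi[i - 1] + phi[i + 1] + h * h * rho[i - 1])
            update = omega * (new - phi[i])  # over-relaxed step
            phi[i] += update
            max_update = max(max_update, abs(update))
        if max_update < tol:
            break
    return phi[1:-1]

# rho = pi^2 sin(pi x) has the exact solution phi = sin(pi x) on [0, 1]
n = 49
x = np.linspace(0, 1, n + 2)[1:-1]
h = 1.0 / (n + 1)
phi = poisson_1d_sor(np.pi ** 2 * np.sin(np.pi * x), h)
```

Choosing omega well (here a fixed 1.8; the paper adapts it) can cut the iteration count by an order of magnitude relative to plain Gauss-Seidel (omega = 1), which is the kind of acceleration the abstract reports for the 3-D case.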
Multi-carrier Communications over Time-varying Acoustic Channels
NASA Astrophysics Data System (ADS)
Aval, Yashar M.
Acoustic communication is an enabling technology for many autonomous undersea systems, such as those used for ocean monitoring, offshore oil and gas industry, aquaculture, or port security. There are three main challenges in achieving reliable high-rate underwater communication: the bandwidth of acoustic channels is extremely limited, the propagation delays are long, and the Doppler distortions are more pronounced than those found in wireless radio channels. In this dissertation we focus on assessing the fundamental limitations of acoustic communication, and designing efficient signal processing methods that can overcome these limitations. We address the fundamental question of acoustic channel capacity (achievable rate) for single-input-multi-output (SIMO) acoustic channels using a per-path Rician fading model, and focusing on two scenarios: narrowband channels where the channel statistics can be approximated as frequency-independent, and wideband channels where the nominal path loss is frequency-dependent. In each scenario, we compare several candidate power allocation techniques, and show that assigning uniform power across all frequencies for the first scenario, and assigning uniform power across a selected frequency-band for the second scenario, are the best practical choices in most cases, because the long propagation delay renders the feedback information outdated for power allocation based on the estimated channel response. We quantify our results using the channel information extracted from the 2010 Mobile Acoustic Communications Experiment (MACE'10). Next, we focus on achieving reliable high-rate communication over underwater acoustic channels. Specifically, we investigate orthogonal frequency division multiplexing (OFDM) as the state-of-the-art technique for dealing with frequency-selective multipath channels, and propose a class of methods that compensate for the time-variation of the underwater acoustic channel.
These methods are based on multiple-FFT demodulation, and are implemented as partial (P), shaped (S), fractional (F), and Taylor series expansion (T) FFT demodulation. They replace the conventional FFT demodulation with a few FFTs and a combiner. The input to each FFT is a specific transformation of the input signal (P, S, F, T), while the combiner performs a weighted summation of the FFT outputs. We design an adaptive algorithm of stochastic gradient type to learn the combiner weights for coherent and differentially coherent detection. The algorithm is cast into the framework of multiple receiving elements to take advantage of spatial diversity. Synthetic data, as well as experimental data from the MACE'10 experiment, are used to demonstrate the performance of the proposed methods, showing significant improvement over conventional detection techniques with or without inter-carrier interference equalization (5-7 dB on average over multiple hours), as well as improved bandwidth efficiency.
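The stochastic-gradient combiner described above is essentially an LMS adaptation of the per-branch weights; a minimal complex-valued sketch on synthetic branch outputs (the dimensions, step size, and signals are illustrative):

```python
import numpy as np

def lms_combiner(branches, desired, mu=0.05, epochs=50):
    """LMS (stochastic gradient) adaptation of combiner weights w so that
    the combiner output w^H x_k tracks the desired symbol d_k."""
    w = np.zeros(branches.shape[0], dtype=complex)
    for _ in range(epochs):
        for k in range(branches.shape[1]):
            x = branches[:, k]
            e = desired[k] - np.vdot(w, x)  # a-priori error: d_k - w^H x_k
            w += mu * np.conj(e) * x        # complex LMS weight update
    return w

# Illustrative two-branch toy: the desired output is a fixed, noiseless
# linear combination of the branch signals, so LMS can recover it exactly
rng = np.random.default_rng(0)
branches = rng.standard_normal((2, 64)) + 1j * rng.standard_normal((2, 64))
true_w = np.array([0.8 + 0.2j, 0.3 - 0.5j])
desired = branches.T @ np.conj(true_w)  # d_k = true_w^H x_k
w = lms_combiner(branches, desired)
```

In the dissertation's setting the branches are the P/S/F/T FFT outputs per subcarrier and the desired symbols come from decisions or pilots, with one such combiner per receiving element.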
A multi-channel biomimetic neuroprosthesis to support treadmill gait training in stroke patients.
Chia, Noelia; Ambrosini, Emilia; Baccinelli, Walter; Nardone, Antonio; Monticone, Marco; Ferrigno, Giancarlo; Pedrocchi, Alessandra; Ferrante, Simona
2015-01-01
This study presents an innovative multi-channel neuroprosthesis that induces a biomimetic activation of the main lower-limb muscles during treadmill gait training to be used in the rehabilitation of stroke patients. The electrostimulation strategy replicates the physiological muscle synergies used by healthy subjects to walk on a treadmill at their self-selected speed. This strategy is mapped to the current gait sub-phases, which are identified in real time by a custom algorithm. This algorithm divides the gait cycle into six sub-phases, based on two inertial sensors placed laterally on the shanks. Therefore, the pre-defined stimulation profiles are expanded or stretched based on the actual gait pattern of each single subject. A preliminary experimental protocol, involving 10 healthy volunteers, was carried out to extract the muscle synergies and validate the gait-detection algorithm, which were afterwards used in the development of the neuroprosthesis. The feasibility of the neuroprosthesis was tested on one healthy subject who simulated different gait patterns, and a chronic stroke patient. The results showed the correct functioning of the system. A pilot study of the neurorehabilitation treatment for stroke patients is currently being carried out.
NASA Astrophysics Data System (ADS)
Lv, ZhuoKai; Yang, Tiejun; Zhu, Chunhua
2018-03-01
By utilizing compressive sensing (CS), channel estimation methods can reduce the number of pilots and improve spectrum efficiency. In this correspondence, channel estimation and pilot design are explored with the help of block-structured CS in massive MIMO systems. A pilot design scheme based on stochastic search is proposed to minimize the block coherence of the aggregate system matrix. Moreover, a block sparsity adaptive matching pursuit (BSAMP) algorithm under the common sparsity model is proposed so that the channel can be estimated precisely. Simulation results show that the proposed superimposed pilot design and the BSAMP algorithm provide better channel estimation than existing methods.
Sparsity-aware multiple relay selection in large multi-hop decode-and-forward relay networks
NASA Astrophysics Data System (ADS)
Gouissem, A.; Hamila, R.; Al-Dhahir, N.; Foufou, S.
2016-12-01
In this paper, we propose and investigate two novel techniques to perform multiple relay selection in large multi-hop decode-and-forward relay networks. The two proposed techniques exploit sparse signal recovery theory to select multiple relays using the orthogonal matching pursuit algorithm, and outperform state-of-the-art techniques in terms of outage probability and computational complexity. To reduce the amount of collected channel state information (CSI), we propose a limited-feedback scheme where only a limited number of relays feed back their CSI. Furthermore, a detailed performance-complexity tradeoff investigation is conducted for the different studied techniques and verified by Monte Carlo simulations.
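A minimal sketch of selection via orthogonal matching pursuit, the algorithm named above: the columns of A play the role of candidate relays' channel signatures, and OMP greedily picks the k columns that best explain the observation, re-fitting by least squares at each step (the setup is illustrative, not the paper's system model):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A,
    re-solving the least-squares fit on the current support each step."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x, sorted(support)

# Illustrative setup: 20 candidate relays, only two actually contribute to y
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
x_true = np.zeros(20)
x_true[3] = 2.0
x_true[11] = -1.5
y = A @ x_true
x_hat, support = omp(A, y, k=2)
```

The greedy column choice followed by an orthogonal re-fit is what keeps OMP's complexity low relative to exhaustive subset search, which is the complexity advantage the abstract claims for the proposed techniques.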
Water Quality Monitoring for Lake Constance with a Physically Based Algorithm for MERIS Data.
Odermatt, Daniel; Heege, Thomas; Nieke, Jens; Kneubühler, Mathias; Itten, Klaus
2008-08-05
A physically based algorithm is used for automatic processing of MERIS level 1B full-resolution data. The algorithm was originally designed with input variables allowing optimization for different sensors (i.e., channel recalibration and weighting), aquatic regions (i.e., specific inherent optical properties), or atmospheric conditions (i.e., aerosol models). For operational use, however, a lake-specific parameterization is required, representing an approximation of the spatio-temporal variation in atmospheric and hydro-optical conditions and accounting for sensor properties. The algorithm performs atmospheric correction with a look-up table (LUT) for at-sensor radiance, and a downhill simplex inversion of chl-a, sm and y from subsurface irradiance reflectance. These outputs are enhanced by a selective filter, which makes use of the retrieval residuals. Regular chl-a sampling measurements by the lake's protection authority coinciding with MERIS acquisitions were used for parameterization, training and validation.
Color enhancement and image defogging in HSI based on Retinex model
NASA Astrophysics Data System (ADS)
Gao, Han; Wei, Ping; Ke, Jun
2015-08-01
Retinex is a luminance perceptual algorithm based on color constancy. It performs well in color enhancement. But in some cases the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. Unlike other Retinex algorithms, we implement the Retinex algorithms in HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve the quality of the image. Moreover, the algorithms presented in this paper perform well in image defogging. In contrast with traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround image filter to estimate the light information, which should be removed from the intensity channel. We then subtract the light information from the intensity channel to obtain the reflection image, which includes only the attributes of the objects in the image. Using the reflection image and the parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better performance on the color deviation problem and in image defogging, a visible improvement in image quality for human contrast perception is also observed.
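The pipeline described (Gaussian center-surround estimate of the light, log-domain subtraction on the intensity channel, scaling by α) can be sketched as below. The final rescaling step and the exact role of α are assumptions, since the abstract does not give the enhancement mapping.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr_intensity(rgb, sigma=30.0, alpha=1.0):
    """Single-scale Retinex on the intensity channel of HSI.
    `sigma` (surround scale) and the use of `alpha` as a plain gain
    are illustrative assumptions."""
    eps = 1e-6
    intensity = rgb.mean(axis=2)                 # I of HSI = (R+G+B)/3
    light = gaussian_filter(intensity, sigma)    # center-surround light estimate
    reflection = np.log(intensity + eps) - np.log(light + eps)
    enhanced = alpha * reflection
    lo, hi = enhanced.min(), enhanced.max()      # rescale to [0, 1] for display
    return (enhanced - lo) / (hi - lo + eps)

img = np.random.default_rng(1).random((32, 32, 3))
out = ssr_intensity(img)
```

Working in HSI rather than RGB means hue is left untouched, which is how this family of methods avoids the color deviation of per-channel RGB Retinex.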
Channel estimation based on quantized MMP for FDD massive MIMO downlink
NASA Astrophysics Data System (ADS)
Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie
2016-10-01
In this paper, we consider channel estimation for massive MIMO systems operating in frequency division duplexing (FDD) mode. By exploiting the sparsity of propagation paths in the massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which reduces the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods, including the LS, LMMSE, CoSaMP and conventional MMP estimators.
NASA Astrophysics Data System (ADS)
Yan, H.; Zheng, M. J.; Zhu, D. Y.; Wang, H. T.; Chang, W. S.
2015-07-01
When using the clutter suppression interferometry (CSI) algorithm to perform signal processing in a three-channel wide-area surveillance radar system, the primary concern is to effectively suppress the ground clutter. However, a portion of a moving target's energy is also lost in the process of channel cancellation, which is often neglected in conventional applications. In this paper, we first investigate the two-dimensional (radial velocity dimension and squint angle dimension) residual amplitude of moving targets after channel cancellation with the CSI algorithm. Then, a new approach is proposed to increase the two-dimensional detection probability of moving targets by retaining the maximum value of the three channel cancellation results in a non-uniformly spaced channel system. In addition, a theoretical expression for the false alarm probability of the proposed approach is derived. Simulation results validate the effectiveness of the proposed approach compared with conventional approaches in a uniformly spaced channel system. To our knowledge, this is the first time that the two-dimensional detection probability of the CSI algorithm has been studied.
Refining atmosphere light to improve the dark channel prior algorithm
NASA Astrophysics Data System (ADS)
Gan, Ling; Li, Dagang; Zhou, Can
2017-05-01
The defogged image obtained through the dark channel prior algorithm has some shortcomings, such as color distortion, dim light, and loss of detail near the observer. The main reason is that the atmospheric light is estimated as a single value, and its variation with scene depth is not considered. We therefore model the atmospheric light, one parameter of the defogging model. Firstly, we discretize the atmospheric light into equivalent points and build a discrete model of the light. Secondly, we build several rough candidate models by analyzing the relationship between the atmospheric light and the medium transmission. Finally, by analyzing the results of many experiments qualitatively and quantitatively, we select and optimize the model. Although this method slightly increases computation time, the evaluation metrics (histogram correlation coefficient and peak signal-to-noise ratio) improve significantly, and the defogged result conforms better to human visual perception. The color and the details near the observer in the defogged image are also better than those achieved by the original method.
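For context, the baseline that the paper refines estimates one global atmospheric light from the dark channel (He et al.'s prior): the per-pixel channel minimum is min-filtered over a patch, and the brightest dark-channel pixels vote for the atmospheric color. A minimal sketch of that baseline, with illustrative patch size and percentile:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of an RGB image in [0,1]: per-pixel minimum over
    the three channels, then a local minimum filter over a patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_atmosphere(img, patch=15, frac=0.001):
    """Baseline single-value atmospheric light: mean color of the
    brightest `frac` of dark-channel pixels. The paper's refinement
    replaces this constant with a depth-dependent model (not shown)."""
    dc = dark_channel(img, patch)
    n = max(1, int(frac * dc.size))
    idx = np.argpartition(dc.ravel(), -n)[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

img = np.random.default_rng(2).random((64, 64, 3))
A = estimate_atmosphere(img)
```

It is exactly the single constant `A` returned here that the abstract argues should vary with scene depth.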
IDMA-Based MAC Protocol for Satellite Networks with Consideration on Channel Quality
2014-01-01
In order to overcome the shortcomings of existing medium access control (MAC) protocols based on TDMA or CDMA in satellite networks, the interleave division multiple access (IDMA) technique is introduced into satellite communication networks. A novel wide-band IDMA MAC protocol based on channel quality is proposed in this paper, consisting of a dynamic power allocation algorithm, a rate adaptation algorithm, and a call admission control (CAC) scheme. Firstly, the power allocation algorithm, combining IDMA SINR-evolution and channel quality prediction, is developed to guarantee high power efficiency even in poor channel conditions. Secondly, an effective rate adaptation algorithm, based on accurate per-timeslot channel information and rate degradation, is realized. Moreover, based on channel quality prediction, a CAC scheme combining the new power allocation algorithm, rate scheduling, and buffering strategies is proposed for emerging IDMA systems; it can support a variety of traffic types and offer quality of service (QoS) guarantees corresponding to different priority levels. Simulation results show that the new wide-band IDMA MAC protocol can accurately estimate the available resources, considering the effect of multiuser detection (MUD) and the QoS requirements of multimedia traffic, leading to low outage probability as well as high overall system throughput. PMID:25126592
Statistical Feature Extraction for Artifact Removal from Concurrent fMRI-EEG Recordings
Liu, Zhongming; de Zwart, Jacco A.; van Gelderen, Peter; Kuo, Li-Wei; Duyn, Jeff H.
2011-01-01
We propose a set of algorithms for sequentially removing artifacts related to MRI gradient switching and cardiac pulsations from electroencephalography (EEG) data recorded during functional magnetic resonance imaging (fMRI). Special emphases are directed upon the use of statistical metrics and methods for the extraction and selection of features that characterize gradient and pulse artifacts. To remove gradient artifacts, we use a channel-wise filtering based on singular value decomposition (SVD). To remove pulse artifacts, we first decompose data into temporally independent components and then select a compact cluster of components that possess sustained high mutual information with the electrocardiogram (ECG). After the removal of these components, the time courses of remaining components are filtered by SVD to remove the temporal patterns phase-locked to the cardiac markers derived from the ECG. The filtered component time courses are then inversely transformed into multi-channel EEG time series free of pulse artifacts. Evaluation based on a large set of simultaneous EEG-fMRI data obtained during a variety of behavioral tasks, sensory stimulations and resting conditions showed excellent data quality and robust performance attainable by the proposed methods. These algorithms have been implemented as a Matlab-based toolbox made freely available for public access and research use. PMID:22036675
Evaluation of brightness temperature from a forward model of ground-based microwave radiometer
NASA Astrophysics Data System (ADS)
Rambabu, S.; Pillai, J. S.; Agarwal, A.; Pandithurai, G.
2014-06-01
Ground-based microwave radiometers have received great attention in recent years due to their capability to profile temperature and humidity at high temporal and vertical resolution in the lower troposphere. Retrieving these parameters from measurements of radiometric brightness temperature (TB) involves an inversion algorithm, which uses background information from a forward model. In the present study, the development and evaluation of this forward model for a ground-based microwave radiometer, being developed by the Society for Applied Microwave Electronics Engineering and Research (SAMEER) of India, is presented. Initially, the absorption coefficient and weighting function at different frequencies were analyzed to select the channels. Further, the range of variation of TB for these selected channels for the year 2011, over the two stations Mumbai and Delhi, is discussed. Finally, the forward-model-simulated TBs are compared with radiometer-measured TBs at Mahabaleshwar (73.66°E, 17.93°N) to evaluate the model. There is good agreement between model simulations and radiometer observations, which suggests that these forward model simulations can be used as background for inversion models retrieving temperature and humidity profiles.
NASA Astrophysics Data System (ADS)
Saito, Takahiro; Takahashi, Hiromi; Komatsu, Takashi
2006-02-01
The Retinex theory was first proposed by Land, and deals with the separation of irradiance from reflectance in an observed image. The separation problem is ill-posed. Land and others proposed various Retinex separation algorithms. Recently, Kimmel and others proposed a variational framework that unifies previous Retinex algorithms, such as the Poisson-equation-type Retinex algorithms developed by Horn and others, and presented a Retinex separation algorithm based on the time-evolution of a linear diffusion process. However, Kimmel's separation algorithm cannot achieve physically rational separation if the true irradiance varies among color channels. To cope with this problem, we introduce a nonlinear diffusion process into the time-evolution. Moreover, for the extension to color images, we present two approaches to treating the color channels: the independent approach, which treats each color channel separately, and the collective approach, which treats all color channels collectively. The latter approach outperforms the former. Furthermore, we apply our separation algorithm to high-quality chroma keying, in which, before a foreground frame and a background frame are combined into an output image, the color of each pixel in the foreground frame is spatially adaptively corrected through transformation of the separated irradiance. Experiments demonstrate the superiority of our separation algorithm over Kimmel's.
Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun
2018-01-01
Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study has proposed a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength and the parameters in a recognition algorithm. We formulated the design problem as an optimisation problem and used an experiment based on a hierarchical algorithm to solve it. The evaluation experiments using translucent plastics objects showed that the use of the proposed system resulted in an effective solution with a wide FOV, recognition of all objects and 0.32 mm and 0.4° maximal positional and angular errors when all the RGB (red, green and blue) for illumination and R channel image for recognition were used. Though all the RGB illumination and grey scale images also provided recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved by using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and parameters in the recognition algorithm and that tuning all the RGB illumination is desirable even when single-channel or grey-scale images are used for recognition. PMID:29786665
Development of GK-2A cloud optical and microphysical properties retrieval algorithm
NASA Astrophysics Data System (ADS)
Yang, Y.; Yum, S. S.; Um, J.
2017-12-01
Cloud and aerosol radiative forcing is known to be one of the largest uncertainties in climate change prediction. To reduce this uncertainty, remote sensing observations of cloud radiative and microphysical properties have been used since the 1970s, and the corresponding remote sensing techniques and instruments have been developed. As part of this effort, Geo-KOMPSAT-2A (Geostationary Korea Multi-Purpose Satellite-2A, GK-2A) will be launched in 2018. On GK-2A, the Advanced Meteorological Imager (AMI) is the primary instrument, with 3 visible, 3 near-infrared, and 10 infrared channels. To retrieve the optical and microphysical properties of clouds using AMI measurements, a preliminary version of a new cloud retrieval algorithm for GK-2A was developed and several validation tests were conducted. This algorithm retrieves cloud optical thickness (COT), cloud effective radius (CER), liquid water path (LWP), and ice water path (IWP), so we named it the Daytime Cloud Optical thickness, Effective radius and liquid and ice Water path (DCOEW) algorithm. The DCOEW uses cloud reflectance at visible and near-infrared channels as input data. An optimal estimation (OE) approach, which requires appropriate a priori values and measurement error information, is used to retrieve COT and CER. LWP and IWP are calculated using previously determined empirical relationships between COT/CER and cloud water path. To validate the retrieved cloud properties, we compared DCOEW output data with other operational satellite data. For COT and CER validation, we used two different data sets. To compare against algorithms that use cloud reflectance at visible and near-IR channels as input, the MODIS MYD06 cloud product was selected. For validation against cloud products based on microwave measurements, COT (2B-TAU) and CER (2C-ICE) data retrieved from the CloudSat cloud profiling radar (W-band, 94 GHz) were used. For cloud water path validation, AMSR-2 Level-3 cloud liquid water data were used. Detailed results will be shown at the conference.
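The optimal estimation step mentioned above follows the standard Rodgers Gauss-Newton form. The sketch below shows one such iteration with a toy linear forward model standing in for the real radiative transfer; the state vector (e.g. COT and CER), covariances, and channel count are all illustrative.

```python
import numpy as np

def oe_step(x, xa, y, forward, jacobian, Sa_inv, Se_inv):
    """One Gauss-Newton iteration of optimal estimation:
    x+ = x + (Sa^-1 + K^T Se^-1 K)^-1 [K^T Se^-1 (y - F(x)) - Sa^-1 (x - xa)],
    balancing measurement fit against the a priori state xa."""
    K = jacobian(x)
    H = Sa_inv + K.T @ Se_inv @ K           # approximate Hessian of the cost
    g = K.T @ Se_inv @ (y - forward(x)) - Sa_inv @ (x - xa)
    return x + np.linalg.solve(H, g)

rng = np.random.default_rng(0)
K0 = rng.standard_normal((5, 2))            # 5 channels, 2 state variables
forward = lambda x: K0 @ x                  # toy linear "radiative transfer"
jacobian = lambda x: K0
Sa_inv, Se_inv = np.eye(2), np.eye(5)       # illustrative covariances
xa = np.zeros(2)                            # a priori state
y = forward(np.array([1.0, -2.0]))          # synthetic measurement
x_hat = oe_step(xa, xa, y, forward, jacobian, Sa_inv, Se_inv)
```

With a linear forward model a single step reaches the maximum a posteriori solution; the nonlinear retrieval iterates this update to convergence.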
Belief propagation decoding of quantum channels by passing quantum messages
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2017-07-01
The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.
1D-VAR Retrieval Using Superchannels
NASA Technical Reports Server (NTRS)
Liu, Xu; Zhou, Daniel; Larar, Allen; Smith, William L.; Schluessel, Peter; Mango, Stephen; SaintGermain, Karen
2008-01-01
Since modern ultra-spectral remote sensors have thousands of channels, it is difficult to include all of them in a 1D-var retrieval system. We describe a physical inversion algorithm which includes all available channels for atmospheric temperature, moisture, cloud, and surface parameter retrievals. Both the forward model and the inversion algorithm compress the channel radiances into super channels, obtained by projecting the radiance spectra onto a set of pre-calculated eigenvectors. The forward model provides both super-channel properties and Jacobians in EOF space directly. For ultra-spectral sensors such as the Infrared Atmospheric Sounding Interferometer (IASI) and the NPOESS Airborne Sounder Testbed Interferometer (NAST), a compression ratio of more than 80 can be achieved, leading to a significant reduction in the computations involved in an inversion process. Results of applying the algorithm to real IASI and NAST data will be shown.
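The projection onto pre-calculated eigenvectors is ordinary EOF (principal component) compression of the radiance spectra. A minimal sketch, with an illustrative 800-channel spectrum compressed to 5 super channels (ratio 160, consistent with the "more than 80" claim); the real system derives its eigenvectors from a training set of forward-model spectra rather than from the data being compressed.

```python
import numpy as np

def make_superchannels(spectra, n_eof):
    """Project radiance spectra onto leading eigenvectors (EOFs).
    spectra: (n_samples, n_channels). Returns the super-channel
    amplitudes and the reconstruction from them."""
    mean = spectra.mean(axis=0)
    X = spectra - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    E = Vt[:n_eof]                      # leading EOFs, shape (n_eof, n_channels)
    coeffs = X @ E.T                    # super-channel amplitudes
    recon = coeffs @ E + mean           # reconstruction check
    return coeffs, recon

rng = np.random.default_rng(4)
basis = rng.standard_normal((5, 800))              # 5 underlying spectral modes
spectra = rng.standard_normal((200, 5)) @ basis    # synthetic rank-5 spectra
coeffs, recon = make_superchannels(spectra, n_eof=5)
```

Retrievals then work entirely in the `coeffs` space, which is why the Jacobians are also provided in EOF space.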
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.
2018-06-01
Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and frequently angular undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher quality images and since energy channels are mutually correlated it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.
Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods
NASA Astrophysics Data System (ADS)
Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong
2008-12-01
Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm in the low SNR condition. Simulation results show the superior performance of our proposed methods.
Zhang, Xiong; Zhao, Yacong; Zhang, Yu; Zhong, Xuefei; Fan, Zhaowen
2018-01-01
The novel human-computer interface (HCI) using bioelectrical signals as input is a valuable tool to improve the lives of people with disabilities. In this paper, surface electromyography (sEMG) signals induced by four classes of wrist movements were acquired from four sites on the lower arm with our designed system. Forty-two features were extracted from the time, frequency and time-frequency domains. Optimal channels were determined from the single-channel classification performance rank. Optimal features were selected according to a modified entropy criterion (EC) and the Fisher discrimination (FD) criterion. The feature selection results were evaluated by four different classifiers and compared with other conventional feature subsets. In online tests, the wearable system acquired real-time sEMG signals. The selected features and trained classifier model were used to control a telecar through four different paradigms in a designed environment with simple obstacles. Performance was evaluated based on travel time (TT) and recognition rate (RR). The results of hardware evaluation verified the feasibility of our acquisition system and ensured signal quality. Single-channel analysis results indicated that the channel located on the extensor carpi ulnaris (ECU) performed best, with a mean classification accuracy of 97.45% for all movement pairs. Channels placed on the ECU and the extensor carpi radialis (ECR) were selected according to the accuracy rank. Experimental results showed that the proposed FD method was better than other feature selection methods and single-type features. The combination of FD and random forest (RF) performed best in offline analysis, with 96.77% multi-class RR. Online results illustrated that the state-machine paradigm with a 125 ms window had the highest maneuverability and was closest to real-life control. Subjects could accomplish online sessions with the three sEMG-based paradigms in average times of 46.02, 49.06 and 48.08 s, respectively. These experiments validate the feasibility of the proposed real-time wearable HCI system and algorithms, providing a potential assistive device interface for persons with disabilities. PMID:29543737
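A Fisher-discrimination feature criterion of the kind named above scores each feature by its between-class variance relative to its within-class variance. The sketch below is a generic per-feature Fisher score, not the paper's exact modified criterion, and the synthetic data are illustrative.

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher ratio: between-class variance over
    within-class variance. Higher = more class-separable feature."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)

rng = np.random.default_rng(5)
y = np.repeat([0, 1], 100)
X = rng.standard_normal((200, 3))
X[:, 0] += 3.0 * y                 # only feature 0 separates the classes
scores = fisher_score(X, y)
```

Ranking features by this score and keeping the top ones is the usual way such a criterion feeds the downstream classifiers (RF, etc.).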
NASA Astrophysics Data System (ADS)
Chang, Kai-Wei; L'Ecuyer, Tristan S.; Kahn, Brian H.; Natraj, Vijay
2017-05-01
Hyperspectral instruments such as Atmospheric Infrared Sounder (AIRS) have spectrally dense observations effective for ice cloud retrievals. However, due to the large number of channels, only a small subset is typically used. It is crucial that this subset of channels be chosen to contain the maximum possible information about the retrieved variables. This study describes an information content analysis designed to select optimal channels for ice cloud retrievals. To account for variations in ice cloud properties, we perform channel selection over an ensemble of cloud regimes, extracted with a clustering algorithm, from a multiyear database at a tropical Atmospheric Radiation Measurement site. Multiple satellite viewing angles over land and ocean surfaces are considered to simulate the variations in observation scenarios. The results suggest that AIRS channels near wavelengths of 14, 10.4, 4.2, and 3.8 μm contain the most information. With an eye toward developing a joint AIRS-MODIS (Moderate Resolution Imaging Spectroradiometer) retrieval, the analysis is also applied to combined measurements from both instruments. While application of this method to MODIS yields results consistent with previous channel sensitivity studies, the analysis shows that this combination may yield substantial improvement in cloud retrievals. MODIS provides most information on optical thickness and particle size, aided by a better constraint on cloud vertical placement from AIRS. An alternate scenario where cloud top boundaries are supplied by the active sensors in the A-train is also explored. The more robust cloud placement afforded by active sensors shifts the optimal channels toward the window region and shortwave infrared, further constraining optical thickness and particle size.
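Information-content channel selection of this kind is typically done greedily: at each step, add the channel that most increases the Shannon information content (log-determinant of the posterior-to-prior precision). The sketch below is a generic Rodgers-style sequential selection under an independent-noise assumption, not the authors' code; Jacobian sizes and noise level are illustrative.

```python
import numpy as np

def select_channels(K, Sa, noise_var, n_pick):
    """Greedy channel selection maximizing information content.
    K: (n_channels, n_state) Jacobian; Sa: prior covariance;
    each channel is assumed to add independent noise of variance
    `noise_var`."""
    picked = []
    S_inv = np.linalg.inv(Sa)               # running posterior precision
    for _ in range(n_pick):
        best, best_gain = None, -np.inf
        for i in range(K.shape[0]):
            if i in picked:
                continue
            Si = S_inv + np.outer(K[i], K[i]) / noise_var
            gain = 0.5 * np.linalg.slogdet(Si)[1]
            if gain > best_gain:
                best, best_gain = i, gain
        picked.append(best)
        S_inv += np.outer(K[best], K[best]) / noise_var
    return picked

rng = np.random.default_rng(6)
K = 0.1 * rng.standard_normal((20, 2))      # 20 channels, 2 retrieved variables
K[7] *= 50.0                                # channel 7 is far more sensitive
sel = select_channels(K, np.eye(2), noise_var=1.0, n_pick=3)
```

Running the selection over an ensemble of cloud regimes, as the abstract describes, amounts to repeating this with regime-specific Jacobians and pooling the picks.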
NASA Astrophysics Data System (ADS)
Rousseau, Yannick Y.; Van de Wiel, Marco J.; Biron, Pascale M.
2017-10-01
Meandering river channels are often associated with cohesive banks. Yet only a few river modelling packages include geotechnical and plant effects. Existing packages are solely compatible with single-threaded channels, require a specific mesh structure, derive lateral migration rates from hydraulic properties, determine stability based on friction angle, rely on nonphysical assumptions to describe cutoffs, or exclude floodplain processes and vegetation. In this paper, we evaluate the accuracy of a new geotechnical module that was developed and coupled with Telemac-Mascaret to address these limitations. Innovatively, the newly developed module relies on a fully configurable, universal genetic algorithm with tournament selection that permits it (1) to assess geotechnical stability along potentially unstable slope profiles intersecting liquid-solid boundaries, and (2) to predict the shape and extent of slump blocks while considering mechanical plant effects, bank hydrology, and the hydrostatic pressure caused by flow. The profiles of unstable banks are altered while ensuring mass conservation. Importantly, the new stability module is independent of mesh structure and can operate efficiently along multithreaded channels, cutoffs, and islands. Data collected along a 1.5-km-long reach of the semialluvial Medway Creek, Canada, over a period of 3.5 years are used to evaluate the capacity of the coupled model to accurately predict bank retreat in meandering river channels and to evaluate the extent to which the new model can be applied to a natural river reach located in a complex environment. Our results indicate that key geotechnical parameters can indeed be adjusted to fit observations, even with a minimal calibration effort, and that the model correctly identifies the location of the most severely eroded bank regions. 
The combined use of genetic and spatial analysis algorithms, in particular for the evaluation of geotechnical stability independently of the hydrodynamic mesh, permits the consideration of biophysical conditions for an extended river reach with complex bank geometries, with only a minor increase in run time. Further improvements with respect to plant representation could assist scientists in better understanding channel-floodplain interactions and in evaluating channel designs in river management projects.
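The tournament selection operator named in the abstract is standard in genetic algorithms: sample a few individuals at random and keep the fittest. The sketch below shows only that operator; the module's slope-profile encoding and geotechnical fitness function are not reproduced, and the toy genomes and fitness are assumptions.

```python
import random

def tournament_select(population, fitness, k=3, rng=None):
    """Pick k individuals uniformly at random and return the fittest.
    Larger k increases selection pressure."""
    rng = rng or random.Random()
    contenders = rng.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitness[i])
    return population[best]

rng = random.Random(42)
pop = list(range(10))             # toy genomes 0..9
fit = list(pop)                   # toy fitness: larger genome is fitter
winners = [tournament_select(pop, fit, k=3, rng=rng) for _ in range(1000)]
```

Repeated tournaments bias reproduction toward fitter slope profiles while still letting weaker candidates through occasionally, which keeps the search from collapsing prematurely.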
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papantoni-Kazakos, P.; Paterakis, M.
1988-07-01
For many communication applications with time constraints (e.g., transmission of packetized voice messages), a critical performance measure is the percentage of messages transmitted within a given amount of time after their generation at the transmitting station. This report presents a random-access algorithm (RAA) suitable for time-constrained applications. Performance analysis demonstrates that significant message-delay improvement is attained at the expense of minimal traffic loss. Also considered is the case of noisy channels, where the noise effect appears as erroneously observed channel feedback. Error sensitivity analysis shows that the proposed random-access algorithm is insensitive to feedback channel errors. Window Random-Access Algorithms (RAAs) are considered next. These algorithms constitute an important subclass of Multiple-Access Algorithms (MAAs); they are distributive, and they attain high throughput and low delays by controlling the number of simultaneously transmitting users.
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimension of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, comprising three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case, the results showed that the average correct classification rate increased as more principal components were added to the feature vector. In the latter case, the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6∼8 channels of the model, with principal component feature vectors capturing at least 90% of the cumulative variance, are adequate for a classification task of 3∼5 pattern classes, considering the trade-off between time consumption and classification rate. PMID:22736979
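The PCA-based dimension reduction described above can be sketched with a minimal NumPy implementation (an illustrative assumption mirroring the abstract's 90% cumulative-variance criterion; `pca_reduce` is a hypothetical name, not the authors' code):

```python
import numpy as np

def pca_reduce(X, var_keep=0.90):
    """Project the rows of X onto the fewest principal components
    whose cumulative explained variance reaches var_keep."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / np.sum(s**2)                      # explained-variance ratios
    k = int(np.searchsorted(np.cumsum(var), var_keep) + 1)
    return Xc @ Vt[:k].T, k                        # scores and component count
```

For example, a two-feature data set in which one feature is an exact multiple of the other collapses to a single principal component.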
Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.
NASA Astrophysics Data System (ADS)
Giridhar, K.
The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer, and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI.
Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates, and a metric pruning technique are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, and then tracking of the fading parameters is performed during the data cycle using the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator, and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.
Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation.
Baumgärtel, Regina M; Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M A; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias
2015-12-30
In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. © The Author(s) 2015.
NASA Technical Reports Server (NTRS)
Peterson, Harold; Koshak, William J.
2009-01-01
An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective that involves improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize results. Several thousands of lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.
A hierarchical exact accelerated stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Orendorff, David; Mjolsness, Eric
2012-12-01
A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.
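HiER-leap accelerates the exact stochastic simulation algorithm (SSA); for context, the baseline Gillespie direct method over a set of reaction channels can be sketched as follows (a minimal illustration of the underlying SSA, not the HiER-leap algorithm itself):

```python
import numpy as np

def gillespie(x0, stoich, rates, t_end, rng):
    """Gillespie direct-method SSA.
    stoich: (n_reactions, n_species) state-change vectors.
    rates(x): propensity of each reaction channel in state x."""
    t, x = 0.0, np.asarray(x0, dtype=float)
    traj = [(0.0, tuple(x0))]
    while t < t_end:
        a = rates(x)
        a0 = a.sum()
        if a0 <= 0:                        # no channel can fire
            break
        t += rng.exponential(1.0 / a0)     # waiting time to next event
        j = rng.choice(len(a), p=a / a0)   # which channel fires
        x = x + stoich[j]
        traj.append((t, tuple(x)))
    return traj
```

HiER-leap's contribution is to avoid simulating one event at a time by bounding propensities within blocks of channels and sampling blocks in parallel with an accept/reject synchronization step.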
System for Processing Coded OFDM Under Doppler and Fading
NASA Technical Reports Server (NTRS)
Tsou, Haiping; Darden, Scott; Lee, Dennis; Yan, Tsun-Yee
2005-01-01
An advanced communication system has been proposed for transmitting and receiving coded digital data conveyed as a form of quadrature amplitude modulation (QAM) on orthogonal frequency-division multiplexing (OFDM) signals in the presence of such adverse propagation-channel effects as large dynamic Doppler shifts and frequency-selective multipath fading. Such adverse channel effects are typical of data communications between mobile units or between mobile and stationary units (e.g., telemetric transmissions from aircraft to ground stations). The proposed system incorporates novel signal processing techniques intended to reduce the losses associated with adverse channel effects while maintaining compatibility with the high-speed physical layer specifications defined for wireless local area networks (LANs) as the standard 802.11a of the Institute of Electrical and Electronics Engineers (IEEE 802.11a). OFDM is a multi-carrier modulation technique that is widely used for wireless transmission of data in LANs and in metropolitan area networks (MANs). OFDM has been adopted in IEEE 802.11a and some other industry standards because it affords robust performance under frequency-selective fading. However, its intrinsic frequency-diversity feature is highly sensitive to synchronization errors; this sensitivity poses a challenge to preserve coherence between the component subcarriers of an OFDM system in order to avoid intercarrier interference in the presence of large dynamic Doppler shifts as well as frequency-selective fading. As a result, heretofore, the use of OFDM has been limited primarily to applications involving small or zero Doppler shifts. The proposed system includes a digital coherent OFDM communication system that would utilize enhanced 802.11a-compatible signal-processing algorithms to overcome effects of frequency-selective fading and large dynamic Doppler shifts.
The overall transceiver design would implement a two-frequency-channel architecture (see figure) that would afford frequency diversity for reducing the adverse effects of multipath fading. By using parallel concatenated convolutional codes (also known as Turbo codes) across the dual-channel and advanced OFDM signal processing within each channel, the proposed system is intended to achieve at least an order of magnitude improvement in received signal-to-noise ratio under adverse channel effects while preserving spectral efficiency.
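For background, the core OFDM operation underlying the proposed system (one IFFT per symbol plus a cyclic prefix) can be sketched as follows (an illustrative fragment of basic OFDM modulation, not the proposed dual-channel transceiver):

```python
import numpy as np

def ofdm_modulate(symbols, cp_len):
    """One OFDM symbol: map frequency-domain QAM symbols to the time
    domain with an IFFT, then prepend a cyclic prefix (a copy of the
    symbol's tail) to absorb multipath delay spread."""
    time = np.fft.ifft(symbols)
    return np.concatenate([time[-cp_len:], time])
```

The cyclic prefix turns linear channel convolution into circular convolution, which is what lets the receiver equalize each subcarrier independently after an FFT.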
Algorithmic complexity of quantum capacity
NASA Astrophysics Data System (ADS)
Oskouei, Samad Khabbazi; Mancini, Stefano
2018-04-01
We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
Protective and control relays as coal-mine power-supply ACS subsystem
NASA Astrophysics Data System (ADS)
Kostin, V. N.; Minakova, T. E.
2017-10-01
The paper presents instantaneous selective short-circuit protection for the cabling of the underground part of a coal mine, together with central control algorithms, forming a subsystem of the coal-mine power-supply automated control system. In order to improve the reliability of the electricity supply and reduce mining-equipment downtime, a dual-channel relay protection and central control system is proposed as a subsystem of the coal-mine power-supply automated control system (PS ACS).
Coordinated Beamforming for MISO Interference Channel: Complexity Analysis and Efficient Algorithms
2010-01-01
Excerpts: The cyclic coordinate descent algorithm is also known as the nonlinear Gauss-Seidel iteration [32]. There are several studies of this type of ... It can be shown that the above BB gradient projection direction is always a descent direction. The R-linear convergence of the BB method has ... convergence (to a KKT solution) of the inexact pricing algorithm for the MISO interference channel. The latter is interesting since the convergence of the original pricing ...
Combined Dust Detection Algorithm by Using MODIS Infrared Channels over East Asia
NASA Technical Reports Server (NTRS)
Park, Sang Seo; Kim, Jhoon; Lee, Jaehwa; Lee, Sukjo; Kim, Jeong Soo; Chang, Lim Seok; Ou, Steve
2014-01-01
A new dust detection algorithm is developed by combining the results of multiple dust detection methods using IR channels onboard the MODerate resolution Imaging Spectroradiometer (MODIS). The Brightness Temperature Difference (BTD) between two wavelength channels has been widely used in previous dust detection methods. However, BTD methods have limitations in identifying the offset values of the BTD needed to discriminate clear-sky areas. The current algorithm overcomes the disadvantages of previous dust detection methods by considering the Brightness Temperature Ratio (BTR) values of the dual wavelength channels with a 30-day composite, the optical properties of the dust particles, the variability of surface properties, and cloud contamination. Therefore, the current algorithm shows improvements in detecting dust-loaded regions over land during daytime. Finally, the confidence index of the current dust algorithm is given for 10 × 10 pixel blocks of the MODIS observations. From January to June 2006, the results of the current algorithm agree with those found using the fine mode fraction (FMF) and aerosol index (AI) from MODIS and the Ozone Monitoring Instrument (OMI) for 64 to 81% of cases. To avoid errors due to anthropogenic aerosol, the agreement between the results of the current algorithm and the OMI AI was also evaluated over non-polluted land, where it ranges from 60 to 67%. In addition, the developed algorithm shows statistically significant results at four AErosol RObotic NETwork (AERONET) sites in East Asia.
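The classic BTD test that the combined algorithm builds on can be sketched as follows (a simplified illustration; the fixed `offset` threshold is precisely the limitation the abstract's BTR and 30-day-composite approach is designed to overcome):

```python
import numpy as np

def btd_dust_mask(bt11, bt12, offset=0.0):
    """Flag pixels whose 11-micron minus 12-micron brightness-temperature
    difference falls below an offset; a negative BTD is a common
    signature of airborne mineral dust in IR imagery."""
    return (bt11 - bt12) < offset
```

A combined algorithm would intersect several such masks (BTD, BTR, surface and cloud tests) and report a per-block confidence rather than a single hard threshold.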
Super-resolution for imagery from integrated microgrid polarimeters.
Hardie, Russell C; LeMaster, Daniel A; Ratliff, Bradley M
2011-07-04
Imagery from microgrid polarimeters is obtained by using a mosaic of pixel-wise micropolarizers on a focal plane array (FPA). Each distinct polarization image is obtained by subsampling the full FPA image. Thus, the effective pixel pitch for each polarization channel is increased and the sampling frequency is decreased. As a result, aliasing artifacts from such undersampling can corrupt the true polarization content of the scene. Here we present the first multi-channel multi-frame super-resolution (SR) algorithms designed specifically for the problem of image restoration in microgrid polarization imagers. These SR algorithms can be used to address aliasing and other degradations, without sacrificing field of view or compromising optical resolution with an anti-aliasing filter. The new SR methods are designed to exploit correlation between the polarimetric channels. One of the new SR algorithms uses a form of regularized least squares and has an iterative solution. The other is based on the faster adaptive Wiener filter SR method. We demonstrate that the new multi-channel SR algorithms are capable of providing significant enhancement of polarimetric imagery and that they outperform their independent channel counterparts.
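The subsampling that produces each polarization channel, and hence the aliasing the SR algorithms target, can be sketched for a 2 × 2 microgrid (an illustrative assumption about the mosaic layout, not the authors' restoration code):

```python
import numpy as np

def demux_microgrid(fpa):
    """Split a 2x2 microgrid FPA image into its four polarization
    channels by subsampling; each channel is sampled at half the
    full-array rate, which is where aliasing enters."""
    return {(i, j): fpa[i::2, j::2] for i in range(2) for j in range(2)}
```

Each extracted channel has twice the effective pixel pitch of the full array, which is why multi-frame SR with inter-channel correlation can recover detail that a single subsampled channel cannot.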
NASA Astrophysics Data System (ADS)
Heilman, Jesse Alan
The search for the production of four top quarks decaying in the dileptonic channel in proton-proton collisions at the LHC is presented. The analysis utilises the data recorded by the CMS experiment at √s = 13 TeV in 2015, corresponding to an integrated luminosity of 2.6 inverse femtobarns. A boosted decision tree algorithm is used to select signal and suppress background events. Upper limits on dileptonic four top quark production of 14.9 times the predicted standard model cross section (observed) and 22.3 +16.2/−8.4 times the predicted standard model cross section (expected) are calculated at the 95% confidence level. A combination is then performed with a parallel analysis of the single-lepton channel to extend the reach of the search.
Bargaining and the MISO Interference Channel
NASA Astrophysics Data System (ADS)
Nokleby, Matthew; Swindlehurst, A. Lee
2009-12-01
We examine the MISO interference channel under cooperative bargaining theory. Bargaining approaches such as the Nash and Kalai-Smorodinsky solutions have previously been used in wireless networks to strike a balance between max-sum efficiency and max-min equity in users' rates. However, cooperative bargaining for the MISO interference channel has only been studied extensively for the two-user case. We present an algorithm that finds the optimal Kalai-Smorodinsky beamformers for an arbitrary number of users. We also consider joint scheduling and beamformer selection, using gradient ascent to find a stationary point of the Kalai-Smorodinsky objective function. When interference is strong, the flexibility allowed by scheduling compensates for the performance loss due to local optimization. Finally, we explore the benefits of power control, showing that power control provides nontrivial throughput gains when the number of transmitter/receiver pairs is greater than the number of transmit antennas.
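For intuition, the Kalai-Smorodinsky point on a convex, comprehensive rate region can be found by bisection along the ray toward the utopia point (a generic sketch assuming an origin disagreement point and a feasibility oracle; this is not the paper's beamformer algorithm):

```python
def kalai_smorodinsky(feasible, utopia, tol=1e-6):
    """Kalai-Smorodinsky point: the largest feasible point on the ray
    from the disagreement point (here the origin) to the utopia point
    of per-user maximum rates, located by bisection on the scale factor."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if feasible([mid * u for u in utopia]):
            lo = mid          # scaled point achievable: push outward
        else:
            hi = mid          # infeasible: pull back
    return [lo * u for u in utopia]
```

For the simplex region sum(r) ≤ 1 with utopia (1, 1), the KS point is (0.5, 0.5): both users concede the same fraction of their maximum rate.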
An Algorithm For Climate-Quality Atmospheric Profiling Continuity From EOS Aqua To Suomi-NPP
NASA Astrophysics Data System (ADS)
Moncet, J. L.
2015-12-01
We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to hyperspectral sounding instrument data from Suomi-NPP, EOS Aqua, and other spacecraft. The current focus is on data from the S-NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) instruments as well as the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua. The algorithm development at Atmospheric and Environmental Research (AER) has common heritage with the optimal estimation (OE) algorithm operationally processing S-NPP data in the Interface Data Processing Segment (IDPS), but the ESDR algorithm has a flexible, modular software structure to support experimentation and collaboration and has several features adapted to the climate orientation of ESDRs. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. The radiative transfer component uses an enhanced version of optimal spectral sampling (OSS) with updated spectroscopy, treatment of emission that is not in local thermodynamic equilibrium (non-LTE), efficiency gains with "global" optimal sampling over all channels, and support for channel selection. The algorithm is designed for adaptive treatment of clouds, with capability to apply "cloud clearing" or simultaneous cloud parameter retrieval, depending on conditions. We will present retrieval results demonstrating the impact of a new capability to perform the retrievals on a sigma or hybrid vertical grid (as opposed to a fixed pressure grid), which particularly affects profile accuracy over land with variable terrain height and with sharp vertical structure near the surface. In addition, we will show impacts of alternative treatments of regularization of the inversion.
While OE algorithms typically implement regularization by using background estimates from climatological or numerical forecast data, those sources are problematic for climate applications due to the imprint of biases from past climate analyses or from model error.
NASA Astrophysics Data System (ADS)
Guegan, Loic; Murad, Nour Mohammad; Bonhommeau, Sylvain
2018-03-01
This paper deals with modeling of the over-sea radio channel, with the aim of localizing sea turtles off the coast of Reunion Island and on Europa Island in the Mozambique Channel. To model this radio channel, a framework measurement protocol is proposed. The measured over-sea channel is integrated into the localization algorithm to estimate the turtle trajectory using the Power of Arrival (PoA) technique, compared against GPS localization. Moreover, cross-correlation is used to characterize the over-sea propagation channel. First measurements of the radio channel on the Reunion Island coast, combined with the PoA algorithm, show an error of 18 m for 45% of the approximated points.
Smartphone-Based Indoor Localization with Bluetooth Low Energy Beacons
Zhuang, Yuan; Yang, Jun; Li, You; Qi, Longning; El-Sheimy, Naser
2016-01-01
Indoor wireless localization using Bluetooth Low Energy (BLE) beacons has attracted considerable attention after the release of the BLE protocol. In this paper, we propose an algorithm that uses the combination of a channel-separate polynomial regression model (PRM), channel-separate fingerprinting (FP), outlier detection and extended Kalman filtering (EKF) for smartphone-based indoor localization with BLE beacons. The proposed algorithm uses FP and PRM to estimate the target's location and the distances between the target and BLE beacons, respectively. We compare the performance of distance estimation using a separate PRM for each of the three advertisement channels (the separate strategy) with that of an aggregate PRM generated by combining information from all channels (the aggregate strategy). The FP-based location estimation results of the separate and aggregate strategies are also compared. It was found that the separate strategy provides higher accuracy; thus, it is preferable to adopt PRM and FP for each BLE advertisement channel separately. Furthermore, to enhance the robustness of the algorithm, a two-level outlier detection mechanism is designed. Distance and location estimates obtained from PRM and FP are passed to the first outlier detection stage to generate improved distance estimates for the EKF. After the EKF process, a second outlier detection algorithm based on statistical testing is performed to remove the remaining outliers. The proposed algorithm was evaluated by various field experiments. Results show that the proposed algorithm achieved an accuracy of <2.56 m 90% of the time with dense deployment of BLE beacons (1 beacon per 9 m), which is 35.82% better than the <3.99 m of the Propagation Model (PM) + EKF algorithm and 15.77% more accurate than the <3.04 m of the FP + EKF algorithm.
With sparse deployment (1 beacon per 18 m), the proposed algorithm achieves an accuracy of <3.88 m 90% of the time, which is 49.58% more accurate than the <8.00 m of the PM + EKF algorithm and 21.41% better than the <4.94 m of the FP + EKF algorithm. Therefore, the proposed algorithm is especially useful for improving localization accuracy in environments with sparse beacon deployment. PMID:27128917
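A channel-separate PRM of the kind described can be sketched as a per-channel polynomial fit of distance against received signal strength (hypothetical helper names; the paper's exact model form and degree are not reproduced here):

```python
import numpy as np

def fit_prm(rssi, dist, deg=3):
    """Fit one advertisement channel's PRM: distance modeled as a
    polynomial in the RSSI measured on that channel."""
    return np.polyfit(rssi, dist, deg)

def prm_distance(coef, rssi):
    """Evaluate the fitted PRM at a new RSSI reading."""
    return np.polyval(coef, rssi)
```

The separate strategy would fit one coefficient vector per BLE advertisement channel (37, 38, 39) and select the model matching the channel on which each packet was received.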
Linear methods for reducing EMG contamination in peripheral nerve motor decodes.
Kagan, Zachary B; Wendelken, Suzanne; Page, David M; Davis, Tyler; Hutchinson, Douglas T; Clark, Gregory A; Warren, David J
2016-08-01
Signals recorded from the peripheral nervous system (PNS) with high channel count penetrating microelectrode arrays, such as the Utah Slanted Electrode Array (USEA), often have electromyographic (EMG) signals contaminating the neural signal. This common-mode signal source may prevent single neural units from successfully being detected, thus hindering motor decode algorithms. Reducing this EMG contamination may lead to more accurate motor decode performance. A virtual reference (VR), created by a weighted linear combination of signals from a subset of all available channels, can be used to reduce this EMG contamination. Four methods of determining individual channel weights and six different methods of selecting subsets of channels were investigated (24 different VR types in total). The methods of determining individual channel weights were equal weighting, regression-based weighting, and two different proximity-based weightings. The subsets of channels were selected by a radius-based criteria, such that a channel was included if it was within a particular radius of inclusion from the target channel. These six radii of inclusion were 1.5, 2.9, 3.2, 5, 8.4, and 12.8 electrode-distances; the 12.8 electrode radius includes all USEA electrodes. We found that application of a VR improves the detectability of neural events via increasing the SNR, but we found no statistically meaningful difference amongst the VR types we examined. The computational complexity of implementation varies with respect to the method of determining channel weights and the number of channels in a subset, but does not correlate with VR performance. Hence, we examined the computational costs of calculating and applying the VR and based on these criteria, we recommend an equal weighting method of assigning weights with a 3.2 electrode-distance radius of inclusion. Further, we found empirically that application of the recommended VR will require less than 1 ms for 33.3 ms of data from one USEA.
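The recommended equal-weighting virtual reference can be sketched as follows (a minimal illustration assuming 2-D electrode coordinates expressed in electrode-distance units; not the authors' implementation):

```python
import numpy as np

def virtual_reference(data, coords, target, radius):
    """Equal-weight virtual reference: subtract the mean of all channels
    within `radius` (electrode distances) of the target channel, which
    suppresses common-mode contamination such as EMG.
    data: (n_channels, n_samples); coords: (n_channels, 2)."""
    d = np.linalg.norm(coords - coords[target], axis=1)
    subset = np.flatnonzero((d <= radius) & (np.arange(len(d)) != target))
    vr = data[subset].mean(axis=0)          # equal weights over the subset
    return data[target] - vr
```

Because the EMG contamination is largely common across nearby electrodes, subtracting the local mean leaves the channel-specific neural events intact while raising their SNR.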
NASA Astrophysics Data System (ADS)
Zeng, Rongping; Badano, Aldo; Myers, Kyle J.
2017-04-01
We showed in our earlier work that the choice of reconstruction methods does not affect the optimization of DBT acquisition parameters (angular span and number of views) using simulated breast phantom images in detecting lesions with a channelized Hotelling observer (CHO). In this work we investigate whether the model-observer based conclusion is valid when using humans to interpret images. We used previously generated DBT breast phantom images and recruited human readers to find the optimal geometry settings associated with two reconstruction algorithms, filtered back projection (FBP) and simultaneous algebraic reconstruction technique (SART). The human reader results show that image quality trends as a function of the acquisition parameters are consistent between FBP and SART reconstructions. The consistent trends confirm that the optimization of DBT system geometry is insensitive to the choice of reconstruction algorithm. The results also show that humans perform better in SART reconstructed images than in FBP reconstructed images. In addition, we applied CHOs with three commonly used channel models, Laguerre-Gauss (LG) channels, square (SQR) channels and sparse difference-of-Gaussian (sDOG) channels. We found that LG channels predict human performance trends better than SQR and sDOG channel models for the task of detecting lesions in tomosynthesis backgrounds. Overall, this work confirms that the choice of reconstruction algorithm is not critical for optimizing DBT system acquisition parameters.
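The channelized Hotelling observer used in this line of work can be sketched generically (an assumed minimal implementation with arbitrary channel templates; the study's LG, SQR, and sDOG channel models are not reproduced):

```python
import numpy as np

def cho_snr(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer detectability (SNR).
    channels: (n_pixels, n_channels) template matrix; images are
    projected into channel space before the Hotelling computation."""
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ channels
    dmean = vs.mean(axis=0) - vn.mean(axis=0)      # mean channel response difference
    S = np.atleast_2d(0.5 * (np.cov(vs.T) + np.cov(vn.T)))  # pooled covariance
    return float(np.sqrt(dmean @ np.linalg.solve(S, dmean)))
```

Channelization reduces the covariance estimation problem from the pixel dimension to a handful of channels, which is what makes the observer tractable on realistic image sets.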
Reduced-rank technique for joint channel estimation in TD-SCDMA systems
NASA Astrophysics Data System (ADS)
Kamil Marzook, Ali; Ismail, Alyani; Mohd Ali, Borhanuddin; Sali, Adawati; Khatun, Sabira
2013-02-01
In time division-synchronous code division multiple access systems, increasing the system capacity by inserting the largest possible number of users in one time slot (TS) requires adding more estimation processes to estimate the joint channel matrix for the whole system. The increase in the number of channel parameters due to the increase in the number of users in one TS directly affects the precision of the estimator's performance. This article presents a novel channel estimation with low complexity, which relies on reducing the rank order of the total channel matrix H. The proposed method exploits the rank deficiency of H to reduce the number of parameters that characterise this matrix. The adopted reduced-rank technique is based on the truncated singular value decomposition algorithm. The algorithms for reduced-rank joint channel estimation (JCE) are derived and compared against traditional full-rank JCEs: least squares (LS, or Steiner) and enhanced (LS or MMSE) algorithms. Simulation results of the normalised mean square error showed the superiority of reduced-rank estimators. In addition, the channel impulse responses found by the reduced-rank estimator for all active users offer considerable performance improvement over the conventional estimator across the channel window length.
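The truncated-SVD step at the heart of the reduced-rank JCE can be sketched as follows; the function name and interface are illustrative, not the paper's:

```python
import numpy as np

def reduced_rank_estimate(H, r):
    """Rank-r approximation of a joint channel matrix H via truncated SVD:
    keep only the r largest singular values/vectors, discarding the
    parameters associated with the (near-)null space of H."""
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vh[:r, :]
```

When H is rank-deficient (or nearly so), the discarded components carry mostly noise, which is what gives the reduced-rank estimator its NMSE advantage.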
Hue-preserving and saturation-improved color histogram equalization algorithm.
Song, Ki Sun; Kang, Hee; Kang, Moon Gi
2016-06-01
In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, whereas the local HE method sometimes produces undesirable results due to its block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method in the local HE method to avoid the artifacts, while global and local contrasts are enhanced. There are two ways to apply the proposed CE algorithm to color images: processing only the luminance channel, or processing each color channel independently. However, both approaches can incur excessive or reduced saturation and color degradation. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving the hue and producing better performance than existing methods in terms of objective evaluation metrics.
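The hue-preserving idea of scaling all three channels by a common ratio can be sketched with a plain global HE on the intensity channel; this is a simplified stand-in for the paper's channel-adaptive equalization, with all names and choices illustrative:

```python
import numpy as np

def hue_preserving_he(rgb):
    """rgb: uint8 array (H, W, 3). Equalize the intensity channel globally,
    then scale each pixel's R, G, B by the same ratio so their relative
    proportions (and hence the hue) are unchanged."""
    img = rgb.astype(np.float64)
    inten = img.mean(axis=2)
    hist, _ = np.histogram(inten, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    eq = 255.0 * cdf[np.clip(inten, 0, 255).astype(int)]
    ratio = np.where(inten > 0, eq / np.maximum(inten, 1e-9), 0.0)
    out = img * ratio[..., None]           # common ratio preserves hue
    return np.clip(out, 0, 255).astype(np.uint8)
```

The clipping step is where naive ratio scaling can lose saturation, which is one of the problems the proposed channel-adaptive scheme addresses.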
Multi-Class Motor Imagery EEG Decoding for Brain-Computer Interfaces
Wang, Deng; Miao, Duoqian; Blohm, Gunnar
2012-01-01
Recent studies show that scalp electroencephalography (EEG) as a non-invasive interface has great potential for brain-computer interfaces (BCIs). However, one factor that has limited practical applications of EEG-based BCIs so far is the difficulty of decoding brain signals reliably and efficiently. This paper proposes a new robust processing framework for decoding of multi-class motor imagery (MI) that is based on five main processing steps. (i) Raw EEG segmentation without the need for visual artifact inspection. (ii) Considering that EEG recordings are often contaminated not just by electrooculography (EOG) but also by other types of artifacts, we propose to first implement an automatic artifact correction method that combines regression analysis with independent component analysis for recovering the original source signals. (iii) The significant difference between frequency components based on event-related (de-)synchronization and sample entropy is then used to find non-contiguous discriminating rhythms. After spectral filtering using the discriminating rhythms, a channel selection algorithm is used to select only relevant channels. (iv) Feature vectors are extracted based on the inter-class diversity and time-varying dynamic characteristics of the signals. (v) Finally, a support vector machine is employed for four-class classification. We tested our proposed algorithm on experimental data obtained from dataset 2a of BCI competition IV (2008). The overall four-class kappa values (between 0.41 and 0.80) were comparable to other models but without requiring any removal of artifact-contaminated trials. The performance showed that multi-class MI tasks can be reliably discriminated using artifact-contaminated EEG recordings from a few channels. This may be a promising avenue for online robust EEG-based BCI applications. PMID:23087607
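Sample entropy, used in step (iii) to locate discriminating rhythms, can be computed as in this simplified sketch; the symmetric template-counting shortcut here is a common variant, not necessarily the authors' exact formulation:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D signal x, with tolerance r given as a
    fraction of the signal's standard deviation. Lower values mean a
    more regular (predictable) signal."""
    x = np.asarray(x, float)
    tol = r * x.std()

    def match_pairs(mm):
        # all length-mm templates, compared pairwise (Chebyshev distance)
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        n = len(templ)
        return ((d <= tol).sum() - n) / 2      # exclude self-matches

    b, a = match_pairs(m), match_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

A regular rhythm (e.g. a sinusoid) yields a much lower SampEn than broadband noise, which is what makes it a useful rhythm-discrimination feature.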
NASA Astrophysics Data System (ADS)
Chen, Huaiyu; Cao, Li
2017-06-01
To address multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of ordinary auditory-filtering-based broadband MUSIC, and then propose a new broadband MUSIC algorithm with gammatone auditory filtering that controls the selection of frequency components and detects the ascending segment of the direct-sound component. The proposed algorithm restricts processing to frequency components within the band of interest at the multichannel bandpass-filtering stage. Detecting the direct-sound component of each source is also proposed to suppress room-reverberation interference; its merits are fast computation and avoiding a more complex de-reverberation algorithm. Besides, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitude for every speech frame. Both simulations and experiments in a real reverberant room show that the proposed method performs well. Dynamic multiple-sound-source localization results indicate that the average absolute azimuth error of the proposed algorithm is smaller and its histogram shows higher angular resolution.
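The per-channel MUSIC step underlying any broadband MUSIC variant can be sketched for a uniform linear array as follows; the array geometry, scan grid, and interface are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def music_spectrum(R, n_src, angles, d=0.5):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array.
    R: (M, M) sample covariance; n_src: assumed number of sources;
    angles: candidate DOAs in degrees; d: element spacing in wavelengths."""
    M = R.shape[0]
    _, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = vecs[:, :M - n_src]             # noise-subspace eigenvectors
    p = []
    for th in np.deg2rad(angles):
        a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(th))
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)                   # peaks near the true DOAs
```

In a broadband scheme these pseudo-spectra are computed per frequency channel and combined; the amplitude weighting described above would enter at that combination stage.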
Pinning impulsive control algorithms for complex network
NASA Astrophysics Data System (ADS)
Sun, Wen; Lü, Jinhu; Chen, Shihua; Yu, Xinghuo
2014-03-01
In this paper, we further investigate the synchronization of complex dynamical networks via pinning control, in which a selection of nodes is controlled at discrete times. Different from most existing work, the pinning control algorithms utilize only impulsive signals at discrete time instants, which may greatly improve communication channel efficiency and reduce control cost. Two classes of algorithms are designed, one for strongly connected complex networks and another for non-strongly connected complex networks. It is suggested that in a strongly connected network with suitable coupling strength, a single controller at any one of the network's nodes can always pin the network to its homogeneous solution. In the non-strongly connected case, the location and minimum number of nodes needed to pin the network are determined by the Frobenius normal form of the coupling matrix. In addition, the coupling matrix is not necessarily symmetric or irreducible. Illustrative examples are then given to validate the proposed pinning impulsive control algorithms.
Ren, Peng; Qian, Jiansheng
2016-01-01
This study proposes a novel cross-layer, power-efficient and anti-fading clustering protocol tailored to the time-varying fading characteristics of channels in the monitoring of coal mine faces with wireless sensor networks. The number of active sensor nodes and a sliding window are set up such that the optimal number of cluster heads (CHs) is selected in each round. Based on a stable expected number of CHs, CH selection is assessed from the channel efficiency between nodes and the base station, probed with a dedicated frame, jointly with the surplus energy of each node. Moreover, the sending power of a node in different periods is regulated by the signal fade margin method. The simulation results demonstrate that, compared with several common algorithms, the power-efficient and fading-aware clustering with a cross-layer (PEAFC-CL) protocol features a stable network topology and adaptability under time-varying signal fading, which effectively prolongs the lifetime of the network and reduces network packet loss, making it more applicable to the complex and variable environment of a coal mine face. PMID:27338380
Stewart, C M; Newlands, S D; Perachio, A A
2004-12-01
Rapid and accurate discrimination of single units from extracellular recordings is a fundamental process for the analysis and interpretation of electrophysiological recordings. We present an algorithm that performs detection, characterization, discrimination, and analysis of action potentials from extracellular recording sessions. The program was entirely written in LabVIEW (National Instruments), and requires no external hardware devices or a priori information about action potential shapes. Waveform events are detected by scanning the digital record for voltages that exceed a user-adjustable trigger. Detected events are characterized to determine nine different time and voltage levels for each event. Various algebraic combinations of these waveform features are used as axis choices for 2-D Cartesian plots of events. The user selects axis choices that generate distinct clusters. Multiple clusters may be defined as action potentials by manually generating boundaries of arbitrary shape. Events defined as action potentials are validated by visual inspection of overlain waveforms. Stimulus-response relationships may be identified by selecting any recorded channel for comparison to continuous and average cycle histograms of binned unit data. The algorithm includes novel aspects of feature analysis and acquisition, including higher acquisition rates for electrophysiological data compared to other channels. The program confirms that electrophysiological data may be discriminated with high-speed and efficiency using algebraic combinations of waveform features derived from high-speed digital records.
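The trigger-based event detection described above amounts to finding rising threshold crossings in the digitized trace; a minimal sketch, not the LabVIEW implementation:

```python
import numpy as np

def detect_events(trace, thresh):
    """Return the sample indices where `trace` first exceeds `thresh`
    (rising crossings only), mimicking a user-adjustable trigger scan."""
    above = trace > thresh
    # rising edge: below-threshold sample followed by above-threshold sample
    return np.flatnonzero(~above[:-1] & above[1:]) + 1
```

Each detected index would then anchor a waveform snippet from which the time and voltage features used for cluster plots are measured.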
Efficient Implementation of a Symbol Timing Estimator for Broadband PLC.
Nombela, Francisco; García, Enrique; Mateos, Raúl; Hernández, Álvaro
2015-08-21
Broadband Power Line Communications (PLC) have taken advantage of research advances in multi-carrier modulations to mitigate frequency-selective fading, and their adoption opens up a myriad of applications in the fields of sensory and automation systems, multimedia connectivity, and smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires a highly accurate symbol timing estimation for reliable recovery of transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of synchronization algorithms previously proposed for wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, together with its efficient implementation in an FPGA; the obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for its implementation in a Xilinx Kintex FPGA.
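The estimator's core idea, cross-correlating the received signal against a known Zadoff-Chu sequence and picking the correlation peak, can be sketched as follows; the sequence length and root are illustrative, and no PLC channel model is applied:

```python
import numpy as np

def zadoff_chu(u, N):
    """Root-u Zadoff-Chu sequence of odd length N: constant amplitude
    with ideal periodic autocorrelation, which gives a sharp peak."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

def estimate_timing(rx, ref):
    """Symbol timing by cross-correlation peak. np.correlate conjugates
    its second argument, so this is a matched-filter search."""
    corr = np.abs(np.correlate(rx, ref, mode='valid'))
    return int(np.argmax(corr))
```

A hardware implementation would replace the direct correlation with a pipelined FPGA structure, but the peak-picking principle is the same.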
NASA Astrophysics Data System (ADS)
Ling, Jun
Achieving reliable underwater acoustic communications (UAC) has long been recognized as a challenging problem owing to the scarce bandwidth available and the reverberant spread in both time and frequency domains. To pursue high data rates, we consider a multi-input multi-output (MIMO) UAC system, and our focus is placed on two main issues regarding a MIMO UAC system: (1) channel estimation, which involves the design of the training sequences and the development of a reliable channel estimation algorithm, and (2) symbol detection, which requires interference cancelation schemes due to simultaneous transmission from multiple transducers. To enhance channel estimation performance, we present a cyclic approach for designing training sequences with good auto- and cross-correlation properties, and a channel estimation algorithm called the iterative adaptive approach (IAA). Sparse channel estimates can be obtained by combining IAA with the Bayesian information criterion (BIC). Moreover, we present sparse learning via iterative minimization (SLIM) and demonstrate that SLIM gives similar performance to IAA but at a much lower computational cost. Furthermore, an extension of the SLIM algorithm is introduced to estimate the sparse and frequency modulated acoustic channels. The extended algorithm is referred to as generalization of SLIM (GoSLIM). Regarding symbol detection, a linear minimum mean-squared error based detection scheme, called RELAX-BLAST, which is a combination of vertical Bell Labs layered space-time (V-BLAST) algorithm and the cyclic principle of the RELAX algorithm, is presented and it is shown that RELAX-BLAST outperforms V-BLAST. We show that RELAX-BLAST can be implemented efficiently by making use of the conjugate gradient method and diagonalization properties of circulant matrices. This fast implementation approach requires only simple fast Fourier transform operations and facilitates parallel implementations. 
The effectiveness of the proposed MIMO schemes is verified by both computer simulations and experimental results obtained by analyzing the measurements acquired in multiple in-water experiments.
Wanting Wang; John J. Qu; Xianjun Hao; Yongqiang Liu; William T. Sommers
2006-01-01
Traditional fire detection algorithms mainly rely on hot spot detection using thermal infrared (TIR) channels with fixed or contextual thresholds. Three solar reflectance channels (0.65 μm, 0.86 μm, and 2.1 μm) were recently adopted into the MODIS version 4 contextual algorithm to improve the active fire detection. In the southeastern United...
Multi-channel, passive, short-range anti-aircraft defence system
NASA Astrophysics Data System (ADS)
Gapiński, Daniel; Krzysztofik, Izabela; Koruba, Zbigniew
2018-01-01
The paper presents a novel method for tracking several air targets simultaneously. The developed concept concerns a multi-channel, passive, short-range anti-aircraft defence system based on the programmed selection of air targets and an algorithm of simultaneous synchronisation of several modified optical scanning seekers. The above system is supposed to facilitate simultaneous firing of several self-guided infrared rocket missiles at many different air targets. From the available information, it appears that, currently, there are no passive self-guided seekers that fulfil such tasks. This paper contains theoretical discussions and simulations of simultaneous detection and tracking of many air targets by mutually integrated seekers of several rocket missiles. The results of computer simulation research have been presented in a graphical form.
NASA Astrophysics Data System (ADS)
Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena
2017-02-01
In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are now becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, namely, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization. Simulations conclude that the proposed parallel hybrid multi-objective Bat algorithm generates OGRs more efficiently than the original multi-objective Bat algorithm and the other existing algorithms. The PHMOBA has higher convergence and success rates than the original MOBA. The efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, whereas for the original MOBA it is 85%. Finally, the implications for further research are also discussed.
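A Golomb ruler is a set of marks whose pairwise differences are all distinct, and an OGR is the shortest such ruler for a given number of marks. Checking the Golomb property is straightforward; this is a utility sketch, unrelated to the Bat-algorithm search itself:

```python
from itertools import combinations

def is_golomb(marks):
    """True if every pairwise difference between marks is distinct,
    i.e. the marks form a Golomb ruler. The ruler length is
    max(marks) - min(marks), which the OGR search minimizes."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))
```

In WDM channel allocation, marks map to channel positions, so distinct differences keep FWM products from landing on active channels.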
Joint channel estimation and multi-user detection for multipath fading channels in DS-CDMA systems
NASA Astrophysics Data System (ADS)
Wu, Sau-Hsuan; Kuo, C.-C. Jay
2002-11-01
The technique of joint blind channel estimation and multiple access interference (MAI) suppression for an asynchronous code-division multiple-access (CDMA) system is investigated in this research. To identify and track dispersive time-varying fading channels and to avoid the phase ambiguity that comes with second-order-statistics approaches, a sliding-window scheme using the expectation maximization (EM) algorithm is proposed. The complexity of joint channel equalization and symbol detection for all users increases exponentially with system loading and channel memory. The situation is exacerbated if strong inter-symbol interference (ISI) exists. To reduce the complexity and the number of samples required for channel estimation, a blind multiuser detector is developed. Together with multi-stage interference cancellation using soft outputs provided by this detector, our algorithm can track fading channels with no phase ambiguity even when channel gains attenuate close to zero.
Rezaee, Kh.; Azizi, E.; Haddadnia, J.
2016-01-01
Background Epilepsy is a severe disorder of the central nervous system that predisposes the person to recurrent seizures. Fifty million people worldwide suffer from epilepsy; after Alzheimer's and stroke, it is the third most widespread nervous disorder. Objective In this paper, an algorithm to detect the onset of epileptic seizures based on the analysis of brain electrical signals (EEG) is proposed. 844 hours of EEG were recorded from 23 pediatric patients consecutively, with 163 occurrences of seizures. Signals had been collected from Children's Hospital Boston with a sampling frequency of 256 Hz through 18 channels in order to assess epilepsy surgery. By selecting effective features from seizure and non-seizure signals of each individual and putting them into two categories, the proposed algorithm detects the onset of seizures quickly and with high sensitivity. Method In this algorithm, L-sec epochs of signals are represented as third-order tensors in spatial, spectral and temporal spaces by applying the wavelet transform. Then, after applying general tensor discriminant analysis (GTDA) to the tensors and calculating the mapping matrix, feature vectors are extracted. GTDA increases the sensitivity of the algorithm by retaining data rather than deleting them. Finally, K-nearest neighbors (KNN) is used to classify the selected features. Results Simulating the algorithm on the standard dataset shows that it is capable of detecting 98 percent of seizures with an average delay of 4.7 seconds and an average false-detection rate of three errors per 24 hours. Conclusion Today, the lack of an automated system to detect or predict seizure onset is strongly felt. PMID:27672628
Analytical optimal pulse shapes obtained with the aid of genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés
2015-09-28
We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.
ID card number detection algorithm based on convolutional neural network
NASA Astrophysics Data System (ADS)
Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan
2018-04-01
In this paper, a new detection algorithm based on a convolutional neural network is presented to realize fast and convenient ID information extraction in multiple scenarios. The algorithm runs on a mobile device with the Android operating system to locate and extract the ID number: it exploits the characteristic color distribution of the ID card to select the appropriate channel component; applies threshold segmentation, noise suppression and morphological processing to binarize the image; corrects horizontal tilt using image rotation and the projection method; and finally extracts single characters by the projection method and recognizes them with a convolutional neural network. Tests show that processing a single ID number image, from extraction to recognition, takes about 80 ms with an accuracy of about 99%, so the algorithm can be applied in practical production and living environments.
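The final projection-based character split can be sketched as follows, assuming a binarized number strip where foreground pixels are 1; this is a simplified stand-in for the paper's pipeline:

```python
import numpy as np

def split_characters(binary):
    """Segment single characters from a binarized number strip by
    vertical projection: columns with no foreground pixels separate
    adjacent characters. Returns one sub-image per character."""
    proj = binary.sum(axis=0)                       # foreground per column
    cols = proj > 0
    # locate runs of consecutive non-empty columns via edge detection
    edges = np.flatnonzero(np.diff(np.r_[0, cols.astype(int), 0]))
    starts, ends = edges[0::2], edges[1::2]
    return [binary[:, s:e] for s, e in zip(starts, ends)]
```

Each returned sub-image would then be normalized and fed to the CNN classifier for recognition.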
Cross contrast multi-channel image registration using image synthesis for MR brain images.
Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L
2017-02-01
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Xi, Songnan; Zoltowski, Michael D.
2008-04-01
Multiuser multiple-input multiple-output (MIMO) systems are considered in this paper. We continue our research on uplink transmit beamforming design for multiple users under the assumption that the full multiuser channel state information, which is the collection of the channel state information between each of the users and the base station, is known not only to the receiver but also to all the transmitters. We propose an algorithm for designing optimal beamforming weights in terms of maximizing the signal-to-interference-plus-noise ratio (SINR). Through statistical modeling, we decouple the original mathematically intractable optimization problem and achieve a closed-form solution. As in our previous work, the minimum mean-squared error (MMSE) receiver with successive interference cancellation (SIC) is adopted for multiuser detection. The proposed scheme is compared with an existing jointly optimized transceiver design, referred to as the joint transceiver in this paper, and our previously proposed eigen-beamforming algorithm. Simulation results demonstrate that our algorithm, with much less computational burden, accomplishes almost the same performance as the joint transceiver for spatially independent MIMO channels and even better performance for spatially correlated MIMO channels. It also consistently outperforms our previously proposed eigen-beamforming algorithm.
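A standard closed form for SINR-maximizing weights, w proportional to R_in^{-1} h with R_in the interference-plus-noise covariance, illustrates the kind of solution involved; this is a textbook MVDR-style sketch, not the paper's decoupled multiuser derivation:

```python
import numpy as np

def max_sinr_weights(h, R_in):
    """Max-SINR beamformer for desired channel vector h and
    interference-plus-noise covariance R_in: w = R_in^{-1} h,
    normalized to unit norm (scaling does not affect SINR)."""
    w = np.linalg.solve(R_in, h)
    return w / np.linalg.norm(w)
```

Intuitively, the inverse covariance de-emphasizes spatial directions occupied by interference while keeping gain toward h.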
Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces
Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.
2015-01-01
Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. 
PMID:25265627
Channel coding for underwater acoustic single-carrier CDMA communication system
NASA Astrophysics Data System (ADS)
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on the direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. An UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that with RA, Turbo and LDPC coding, the UWA/SCCDMA system achieves a BER below 10^-6 in underwater acoustic channels at low signal-to-noise ratios (SNRs) from -12 dB to -10 dB, which is about two orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.
GREAT: a gradient-based color-sampling scheme for Retinex.
Lecca, Michela; Rizzi, Alessandro; Serapioni, Raul Paolo
2017-04-01
Modeling the local color spatial distribution is a crucial step for the algorithms of the Milano Retinex family. Here we present GREAT, a novel, noise-free Milano Retinex implementation based on an image-aware spatial color sampling. For each channel of a color input image, GREAT computes a 2D set of edges whose magnitude exceeds a pre-defined threshold. Then GREAT re-scales the channel intensity of each image pixel, called target, by the average of the intensities of the selected edges weighted by a function of their positions, gradient magnitudes, and intensities relative to the target. In this way, GREAT enhances the input image, adjusting its brightness, contrast and dynamic range. The use of the edges as pixels relevant to color filtering is justified by the importance that edges play in human color sensation. The name GREAT comes from the expression "Gradient RElevAnce for ReTinex," which refers to the threshold-based definition of a gradient relevance map for edge selection and thus for image color filtering.
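GREAT's per-channel re-scaling by weighted edge intensities can be sketched as follows; the threshold, the purely spatial Gaussian weighting, and all parameter values here are illustrative assumptions (the paper's weight also involves gradient magnitudes and relative intensities):

```python
import numpy as np

def great_channel(ch, thresh=0.1, sigma=50.0):
    """Sketch of the GREAT idea on one channel with values in [0, 1]:
    select pixels whose gradient magnitude exceeds `thresh`, then
    rescale each target pixel by a distance-weighted average of the
    selected edge intensities (Retinex-style local white reference)."""
    gy, gx = np.gradient(ch.astype(float))
    mag = np.hypot(gx, gy)
    ey, ex = np.nonzero(mag > thresh)      # relevant edge pixels
    vals = ch[ey, ex].astype(float)
    out = np.empty_like(ch, dtype=float)
    for y in range(ch.shape[0]):
        for x in range(ch.shape[1]):
            w = np.exp(-((ey - y) ** 2 + (ex - x) ** 2) / (2 * sigma ** 2))
            ref = np.sum(w * vals) / np.sum(w)
            out[y, x] = np.clip(ch[y, x] / max(ref, 1e-9), 0.0, 1.0)
    return out
```

The double loop is for clarity only; a practical implementation would vectorize or subsample the edge set.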
Somers, Ben; Bertrand, Alexander
2016-12-01
Chronic, 24/7 EEG monitoring requires the use of highly miniaturized EEG modules, which only measure a few EEG channels over a small area. For improved spatial coverage, a wireless EEG sensor network (WESN) can be deployed, consisting of multiple EEG modules, which interact through short-distance wireless communication. In this paper, we aim to remove eye blink artifacts in each EEG channel of a WESN by optimally exploiting the correlation between EEG signals from different modules, under stringent communication bandwidth constraints. We apply a distributed canonical correlation analysis (CCA-)based algorithm, in which each module only transmits an optimal linear combination of its local EEG channels to the other modules. The method is validated on both synthetic and real EEG data sets, with emulated wireless transmissions. While strongly reducing the amount of data that is shared between nodes, we demonstrate that the algorithm achieves the same eye blink artifact removal performance as the equivalent centralized CCA algorithm, which is at least as good as other state-of-the-art multi-channel algorithms that require a transmission of all channels. Due to their potential for extreme miniaturization, WESNs are viewed as an enabling technology for chronic EEG monitoring. However, multi-channel analysis is hampered in WESNs due to the high energy cost for wireless communication. This paper shows that multi-channel eye blink artifact removal is possible with a significantly reduced wireless communication between EEG modules.
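The CCA building block, finding directions in two multichannel signals that maximize their correlation, can be sketched via QR decompositions and an SVD; this is a standard numerical recipe for centralized CCA, not the distributed WESN algorithm itself:

```python
import numpy as np

def cca(X, Y):
    """Canonical correlation analysis of X (n, p) and Y (n, q):
    returns canonical weight matrices A, B and the canonical
    correlations s (singular values of the whitened cross-product)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, Rx = np.linalg.qr(X)
    Qy, Ry = np.linalg.qr(Y)
    U, s, Vh = np.linalg.svd(Qx.T @ Qy)
    A = np.linalg.solve(Rx, U)      # canonical weights for X
    B = np.linalg.solve(Ry, Vh.T)   # canonical weights for Y
    return A, B, s
```

For artifact removal, components with high canonical correlation to an eye-blink reference (or between modules during blinks) are projected out before the channels are reconstructed.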
Single-channel mixed signal blind source separation algorithm based on multiple ICA processing
NASA Astrophysics Data System (ADS)
Cheng, Xiefeng; Li, Ji
2017-01-01
Motivated by the problem of separating the fetal heart sound signal from the mixed signal acquired by an electronic stethoscope, this paper puts forward a single-channel blind source separation algorithm based on multiple rounds of ICA processing. First, empirical mode decomposition (EMD) splits the single-channel mixed signal into multiple orthogonal signal components, which are then processed by ICA; the resulting independent signal components are called the independent sub-components of the mixed signal. Next, the independent sub-components are combined with the single-channel mixed signal, expanding the single channel into multiple signals and turning the under-determined blind source separation problem into a well-posed one. A further ICA pass then yields an estimate of the source signal. Finally, if the separation is unsatisfactory, the previous separation result is combined with the single-channel mixed signal and ICA is applied repeatedly until the desired estimate of the source signal is obtained. Simulation results show that the algorithm separates single-channel mixed physiological signals effectively.
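A rough sketch of the expand-then-ICA idea, with two loudly flagged substitutions: a moving-average multi-scale decomposition stands in for EMD (which would need an external library), and a minimal symmetric FastICA with a tanh nonlinearity stands in for a production ICA; all names and window sizes are illustrative:

```python
import numpy as np

def expand_single_channel(x, windows=(8, 50)):
    """Stand-in for EMD: multi-scale detail signals from moving averages.
    Returns the details at each scale plus the final residual trend."""
    comps, prev = [], x
    for w in windows:
        smooth = np.convolve(x, np.ones(w) / w, mode="same")
        comps.append(prev - smooth)   # detail at this scale
        prev = smooth
    comps.append(prev)                # residual trend
    return np.vstack(comps)

def whiten(X, n_keep):
    """Center and whiten, keeping the n_keep strongest principal components."""
    Xc = X - X.mean(1, keepdims=True)
    vals, vecs = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
    idx = np.argsort(vals)[::-1][:n_keep]
    return (vecs[:, idx] / np.sqrt(vals[idx])).T @ Xc

def fastica(Z, n_iter=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity) on whitened data Z."""
    n, T = Z.shape
    rng = np.random.default_rng(seed)
    W = np.linalg.qr(rng.standard_normal((n, n)))[0]
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        Wn = G @ Z.T / T - np.diag((1 - G**2).mean(1)) @ W
        U, _, Vt = np.linalg.svd(Wn)
        W = U @ Vt                    # symmetric decorrelation
    return W @ Z
```

In the single-channel setting, the rows returned by `expand_single_channel` (optionally together with the mixture itself) are whitened and passed to `fastica`; separation quality then depends on how well the decomposition spreads the sources across components.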
Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network.
Choi, Sangil; Park, Jong Hyuk
2016-12-02
Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM.
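The interference-factor idea can be illustrated with a small greedy sketch: visit the multicast tree in BFS order and give each node the channel that adds the least pairwise interference with already-assigned nodes. The data structures and the greedy rule are illustrative simplifications, not the published MICA algorithm:

```python
from collections import deque

def greedy_channel_assignment(edges, interference, n_channels, root):
    """Greedy interference-aware channel assignment on a multicast tree.
    `interference[frozenset({u, v})]` is a pairwise interference factor;
    the inputs are illustrative, not the paper's exact MICA model."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    channel = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in channel:
                continue
            def added(c):
                # interference this choice adds w.r.t. assigned same-channel nodes
                return sum(interference.get(frozenset({v, w}), 0.0)
                           for w, cw in channel.items() if cw == c)
            channel[v] = min(range(n_channels), key=added)
            queue.append(v)
    return channel
```

On a four-node path with two channels, the greedy pass alternates channels and drives the same-channel interference to zero.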
Integrated segmentation of cellular structures
NASA Astrophysics Data System (ADS)
Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo
2011-03-01
Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over- and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding-based binarization process and seed detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection are used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J.
1983-01-01
Progress on the registration of TM data to digital topographic data; on comparison of TM, MSS, and NOAA meteorological satellite data for snowcover mapping; and on radiative transfer models for atmospheric correction is reported. Some methods for analyzing the spatial contiguity of snow within the snow-covered area were selected. The methods are based on a two-channel version of the grey-level co-occurrence matrix, combined with edge detection derived from an algorithm for computing slopes and exposures from digital terrain data.
Robust Control for the Mercury Laser Altimeter
NASA Technical Reports Server (NTRS)
Rosenberg, Jacob S.
2006-01-01
Mercury Laser Altimeter Science Algorithms is a software system for controlling the laser altimeter aboard the Messenger spacecraft, which is to enter into orbit about Mercury in 2011. The software will control the altimeter by dynamically modifying hardware inputs for gain, threshold, channel-disable flags, range-window start location, and range-window width, by using ranging information provided by the spacecraft and noise counts from instrument hardware. In addition, because of severe bandwidth restrictions, the software also selects returns for downlink.
The ground state of the Frenkel-Kontorova model
NASA Astrophysics Data System (ADS)
Babushkin, A. Yu.; Abkaryan, A. K.; Dobronets, B. S.; Krasikov, V. S.; Filonov, A. N.
2016-09-01
The continual approximation of the ground state of the discrete Frenkel-Kontorova model is tested using a symmetric algorithm of numerical simulation. A "kaleidoscope effect" is found, which means that the curves representing the dependences of the relative extension of an N-atom chain vary periodically with increasing N. Stairs of structural transitions for N ≫ 1 are analyzed by the channel selection method with the approximation N = ∞. Images of commensurable and incommensurable structures are constructed. The commensurable-incommensurable phase transitions are stepwise.
Modeling of glow discharge in a gas flow
NASA Astrophysics Data System (ADS)
Galeev, I. G.; Asadullin, T. Ya
2017-11-01
The positive-column discharge plasma in an electronegative gas flow is described by a two-dimensional system of integro-differential equations written in the “narrow channel” approximation. An efficient algorithm for solving this system of equations is suggested. In this work, an implicit solution method with selection of the electric-field gradient along the channel is used. The simulation of discharge characteristics was conducted under various boundary conditions at the inlet of the discharge area and with various total discharge currents.
NASA Astrophysics Data System (ADS)
Huang, Chengjun; Chen, Xiang; Cao, Shuai; Qiu, Bensheng; Zhang, Xu
2017-08-01
Objective. To realize accurate muscle force estimation, a novel framework is proposed in this paper which can extract the input of the prediction model from the appropriate activation area of the skeletal muscle. Approach. Surface electromyographic (sEMG) signals from the biceps brachii muscle during isometric elbow flexion were collected with a high-density (HD) electrode grid (128 channels) and the external force at three contraction levels was measured at the wrist synchronously. The sEMG envelope matrix was factorized into a matrix of basis vectors with each column representing an activation pattern and a matrix of time-varying coefficients by a nonnegative matrix factorization (NMF) algorithm. The activation pattern with the highest activation intensity, which was defined as the sum of the absolute values of the time-varying coefficient curve, was considered as the major activation pattern, and its channels with high weighting factors were selected to extract the input activation signal of a force estimation model based on the polynomial fitting technique. Main results. Compared with conventional methods using the whole channels of the grid, the proposed method could significantly improve the quality of force estimation and reduce the electrode number. Significance. The proposed method provides a way to find proper electrode placement for force estimation, which can be further employed in muscle heterogeneity analysis, myoelectric prostheses and the control of exoskeleton devices.
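The NMF-based channel selection step can be sketched as follows; the multiplicative-update NMF, the normalization used to resolve the scale ambiguity, and the synthetic envelope matrix in the usage below are illustrative assumptions rather than the authors' exact processing chain:

```python
import numpy as np

def nmf(V, r, n_iter=300, seed=0):
    """Nonnegative matrix factorization V ~ W @ H via multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 0.1
    H = rng.random((r, V.shape[1])) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def select_channels(V, r=2, top_k=3):
    """V: nonnegative sEMG envelope matrix (channels x time). Pick the
    activation pattern with the highest activation intensity (sum of its
    time-varying coefficients) and return its top-weighted channels."""
    W, H = nmf(V, r)
    norms = np.linalg.norm(W, axis=0) + 1e-12
    W, H = W / norms, H * norms[:, None]      # resolve the scale ambiguity
    major = np.argmax(np.abs(H).sum(axis=1))  # major activation pattern
    return np.argsort(W[:, major])[::-1][:top_k]
```

On a synthetic envelope matrix where channels 0-2 carry a strong activation pattern and channels 5-7 a weaker one, the selection returns the strongly activated channels.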
Enhancing a Simple MODIS Cloud Mask Algorithm for the Landsat Data Continuity Mission
NASA Technical Reports Server (NTRS)
Wilson, Michael J.; Oreopoulos, Lazarous
2011-01-01
The presence of clouds in images acquired by the Landsat series of satellites is usually an undesirable, but generally unavoidable, fact. With the emphasis of the program being on land imaging, the suspended liquid/ice particles of which clouds are made fully or partially obscure the desired observational target. Knowing the amount and location of clouds in a Landsat scene is therefore valuable information for scene selection, for making clear-sky composites from multiple scenes, and for scheduling future acquisitions. The two instruments in the upcoming Landsat Data Continuity Mission (LDCM) will include new channels that will enhance our ability to detect high clouds, which are often also thin in the sense that a large fraction of solar radiation can pass through them. This work studies the potential impact of these new channels on enhancing LDCM's cloud detection capabilities compared to previous Landsat missions. We revisit a previously published scheme for cloud detection and add new tests to capture more of the thin clouds that are harder to detect with the more limited arsenal of channels. Since there are no Landsat data yet that include the new LDCM channels, we resort to data from another instrument, MODIS, which has these bands as well as the other bands of LDCM, to test the capabilities of our new algorithm. By comparing our revised scheme's performance against that of the official MODIS cloud detection scheme, we conclude that the new scheme performs better than the earlier scheme, which was not very good at thin cloud detection.
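The threshold-test style of cloud masking described here can be caricatured in a few lines; the band combination mimics the idea of adding a thin-cirrus test, but the threshold values below are invented for illustration and are not the published ones:

```python
import numpy as np

def simple_cloud_mask(refl_vis, refl_cirrus, bt_11um):
    """Flag a pixel cloudy if any test fires. The cirrus test mimics the
    role of a 1.38-um-like water-vapor-absorbing channel for thin clouds;
    all thresholds are illustrative assumptions."""
    bright = refl_vis > 0.3       # thick clouds are bright in the visible
    cold = bt_11um < 250.0        # high clouds are cold in the IR window
    thin = refl_cirrus > 0.015    # thin cirrus stands out in the cirrus band
    return bright | cold | thin
```

Each test catches a different cloud regime; the union is what lets an added cirrus-band test recover thin clouds missed by brightness and temperature tests alone.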
NASA Astrophysics Data System (ADS)
Stark, Giordon; Atlas Collaboration
2015-04-01
The Global Feature Extraction (gFEX) module is a Level 1 jet trigger system planned for installation in ATLAS during the Phase 1 upgrade in 2018. The gFEX selects large-radius jets for capturing Lorentz-boosted objects by means of wide-area jet algorithms refined by subjet information. The architecture of the gFEX permits event-by-event local pile-up suppression for these jets using the same subtraction techniques developed for offline analyses. The gFEX architecture is also suitable for other global event algorithms such as missing transverse energy (MET), centrality for heavy ion collisions, and "jets without jets." The gFEX will use 4 processor FPGAs to perform calculations on the incoming data and a hybrid APU-FPGA for slow control of the module. The gFEX is unique in both design and implementation; it substantially enhances the selectivity of the L1 trigger and increases sensitivity to key physics channels.
Hyperspectral data discrimination methods
NASA Astrophysics Data System (ADS)
Casasent, David P.; Chen, Xuewen
2000-12-01
Hyperspectral data provide spectral response information that gives a detailed chemical, moisture, and other description of the constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistic small training set sizes, and the need to employ discriminatory features and still achieve good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin-infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS), in which the level of a specific channel constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.
Multifocus watermarking approach based on discrete cosine transform.
Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila
2016-05-01
Image fusion consolidates data and information from various images of the same scene into a single image. Each of the source images may represent a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that utilizes the Discrete Cosine Transform (DCT) to join the source images into a single compact image containing a more accurate depiction of the scene than any of the individual source images. In addition, the fused image preserves image quality without distorted appearance or loss of data. The DCT algorithm is considered efficient in image fusion. The proposed scheme is performed in five steps: (1) The RGB colour input image is split into its three channels R, G, and B for each source image. (2) The DCT algorithm is applied to each channel (R, G, and B). (3) The variance values are computed for the corresponding 8 × 8 blocks of each channel. (4) Each block of the R channel of the source images is compared with the corresponding block based on the variance value, and the block with the maximum variance value is selected to be the block in the new image; this process is repeated for all channels of the source images. (5) The inverse discrete cosine transform is applied to each fused channel to convert coefficient values back to pixel values, and all the channels are combined to generate the fused image. The proposed technique can potentially avoid unwanted side effects such as blurring or blocking artifacts that reduce the quality of the fused image. The proposed approach is evaluated using three measures: the average of Q(abf), standard deviation, and peak signal-to-noise ratio. The experimental results of the proposed technique show good results compared with older techniques. © 2016 Wiley Periodicals, Inc.
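The per-block DCT-domain variance comparison at the heart of the scheme can be sketched for a single channel as follows (a toy orthonormal DCT-II matrix and block loop; the helper names are illustrative):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    M[0] /= np.sqrt(2)
    return M

def fuse_channel(a, b, bs=8):
    """For each bs x bs block, keep the source block whose 2-D DCT
    coefficients have the larger variance (a proxy for local detail)."""
    D = dct_matrix(bs)
    out = np.empty_like(a)
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            Ba = a[i:i + bs, j:j + bs]
            Bb = b[i:i + bs, j:j + bs]
            va = (D @ Ba @ D.T).var()   # variance of DCT coefficients
            vb = (D @ Bb @ D.T).var()
            out[i:i + bs, j:j + bs] = Ba if va >= vb else Bb
    return out
```

Because the winning block is copied in the pixel domain, the inverse DCT of step (5) is implicit here; a faithful implementation would transform, select coefficients, and invert explicitly.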
NASA Astrophysics Data System (ADS)
Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad
2017-12-01
This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC) under imperfect channel state information (CSI). Two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has M and N antennas, respectively, and the IC operates in a time division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of the signal-to-interference-plus-noise ratio (SINR). The second algorithm, on the other hand, tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. Taylor expansion is exploited to approximate the effect of CSI imperfection on the mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate the sum rate performance of the proposed algorithms and the advantage of incorporating variance minimization into the transceiver design.
NASA Technical Reports Server (NTRS)
Meyer, Kerry; Yang, Yuekui; Platnick, Steven
2016-01-01
This paper presents an investigation of the expected uncertainties of a single channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud-temperature-threshold-based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODIS daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single channel COT retrieval is feasible for EPIC. For ice clouds, single channel retrieval errors are minimal (less than 2 percent) due to the particle- size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10 percent, although for thin clouds (COT less than 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.
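The single-channel retrieval reduces to inverting a monotonic reflectance-versus-COT relation for a fixed assumed CER. A minimal sketch with an invented lookup table (the numbers below are placeholders, not radiative-transfer output):

```python
import numpy as np

# Hypothetical forward lookup table: visible reflectance as a monotonic
# function of COT for one fixed assumed CER (values are made up).
LUT_COT = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
LUT_REFL = np.array([0.05, 0.10, 0.16, 0.25, 0.42, 0.58, 0.72, 0.84, 0.90])

def retrieve_cot(reflectance):
    """Single-channel retrieval: invert the reflectance-vs-COT curve by
    interpolation. A real retrieval would use radiative-transfer LUTs per
    assumed phase/CER and per sun-view geometry."""
    return np.interp(reflectance, LUT_REFL, LUT_COT)
```

The fixed-CER assumption is exactly what drives the phase-dependent error budget discussed in the abstract: for ice the visible curve barely depends on particle size, while for liquid it does.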
A three-dimensional spectral algorithm for simulations of transition and turbulence
NASA Technical Reports Server (NTRS)
Zang, T. A.; Hussaini, M. Y.
1985-01-01
A spectral algorithm for simulating three-dimensional, incompressible, parallel shear flows is described. It applies to the channel, to the parallel boundary layer, and to other shear flows with one wall-bounded and two periodic directions. Representative applications to the channel and to the heated boundary layer are presented.
Wearable EEG via lossless compression.
Dufort, Guillermo; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo
2016-08-01
This work presents a wearable multi-channel EEG recording system featuring a lossless compression algorithm. The algorithm, based on a previously reported algorithm by the authors, exploits the temporal correlation between samples at different sampling times and the spatial correlation between different electrodes across the scalp. The low-power platform is able to compress, by a factor between 2.3 and 3.6, up to 300 sps from 64 channels with a power consumption of 176 μW/ch. The performance of the algorithm compares favorably with the best compression rates reported to date in the literature.
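The role of temporal and spatial correlation in such a coder can be illustrated with a toy estimate: predict each sample from either the previous sample (temporal) or a reference channel (spatial), and compare the entropy of the residuals against the raw samples. The predictors and the entropy model are illustrative, not the authors' coder:

```python
import numpy as np

def entropy_bits(x):
    """Empirical first-order entropy (bits/sample) of an integer sequence."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def predictive_bits(X, ref=0):
    """For each channel pick the cheaper of a temporal predictor (previous
    sample) and a spatial predictor (reference channel), and total the
    entropy-coded size estimate."""
    total = 0.0
    for i, ch in enumerate(X):
        candidates = [np.diff(ch)]            # temporal residual
        if i != ref:
            candidates.append(ch - X[ref])    # spatial residual
        total += min(entropy_bits(r) * r.size for r in candidates)
    return total
```

On correlated random-walk data mimicking multi-channel EEG, the residual streams need far fewer bits than the raw samples, which is the entire basis for lossless predictive compression.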
Control channels in the brain and their influence on brain executive functions
NASA Astrophysics Data System (ADS)
Meng, Qinglei; Choa, Fow-Sen; Hong, Elliot; Wang, Zhiguang; Islam, Mohammad
2014-05-01
In a computer network there are distinct data channels and control channels: massive amounts of visual information are transported through data channels, while the information streams are routed and controlled by intelligent algorithms through "control channels". Recent studies on cognition and consciousness have shown that the brain's control channels are closely related to the brainwave beta (14-40 Hz) and alpha (7-13 Hz) oscillations. The high-beta wave is used by the brain to synchronize local neural activities, and the alpha oscillation is used for desynchronization. When two sensory inputs are simultaneously presented to a person, the high-beta is used to select one of the inputs and the alpha is used to deselect the other, so that only one input gets the attention. In this work we demonstrated that we can scan a person's brain using the binaural beats technique and identify the individual's preferred control channels. The identified control channels can then be used to influence the subject's brain executive functions. In the experiment, an EEG measurement system was used to record and identify a subject's control channels. After these channels were identified, the subject was asked to do Stroop tests. Binaural beats were again used to produce these control-channel frequencies in the subject's brain while we recorded the completion time of each test. We found that the high-beta signal indeed sped up the subject's executive function performance and reduced the time to complete incongruent tests, while the alpha signal did not seem to be able to slow down the executive function performance.
Based on the CSI regional segmentation indoor localization algorithm
NASA Astrophysics Data System (ADS)
Zeng, Xi; Lin, Wei; Lan, Jingwei
2017-08-01
To address the high cost and low accuracy of indoor positioning, a method based on Channel State Information (CSI) regional segmentation is proposed. Because CSI is stable and robust against the multipath effect, we use it to segment the localization area into regions. The method acquires the CSI of different links to pinpoint the region containing the target, which improves positioning accuracy and reduces the cost of the fingerprint localization algorithm.
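The region-fingerprint idea can be sketched in a few lines: an offline phase averages CSI amplitude vectors per region, and an online phase matches a new measurement to the nearest fingerprint. The data layout and distance metric are illustrative assumptions:

```python
import numpy as np

def build_fingerprints(training):
    """training: {region_name: (n_samples, n_subcarriers) CSI amplitude array}.
    Offline phase: one mean CSI vector per region."""
    return {region: m.mean(axis=0) for region, m in training.items()}

def locate(fingerprints, csi):
    """Online phase: return the region whose fingerprint is nearest
    (Euclidean distance) to the observed CSI vector."""
    return min(fingerprints, key=lambda r: np.linalg.norm(csi - fingerprints[r]))
```

Matching to a region rather than a point is what keeps the fingerprint database, and hence the cost of the offline survey, small.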
On Channel-Discontinuity-Constraint Routing in Wireless Networks
Sankararaman, Swaminathan; Efrat, Alon; Ramasubramanian, Srinivasan; Agarwal, Pankaj K.
2011-01-01
Multi-channel wireless networks are increasingly deployed as infrastructure networks, e.g. in metro areas. Network nodes frequently employ directional antennas to improve spatial throughput. In such networks, between two nodes, it is of interest to compute a path with a channel assignment for the links such that the path and link bandwidths are the same. This is achieved when any two consecutive links are assigned different channels, termed as “Channel-Discontinuity-Constraint” (CDC). CDC-paths are also useful in TDMA systems, where, preferably, consecutive links are assigned different time-slots. In the first part of this paper, we develop a t-spanner for CDC-paths using spatial properties; a sub-network containing O(n/θ) links, for any θ > 0, such that CDC-paths increase in cost by at most a factor t = (1−2 sin (θ/2))−2. We propose a novel distributed algorithm to compute the spanner using an expected number of O(n log n) fixed-size messages. In the second part, we present a distributed algorithm to find minimum-cost CDC-paths between two nodes using O(n2) fixed-size messages, by developing an extension of Edmonds’ algorithm for minimum-cost perfect matching. In a centralized implementation, our algorithm runs in O(n2) time improving the previous best algorithm which requires O(n3) running time. Moreover, this running time improves to O(n/θ) when used in conjunction with the spanner developed. PMID:24443646
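The channel-discontinuity constraint itself is easy to state as a shortest-path problem over (node, last-channel) states. The sketch below is the straightforward Dijkstra formulation, not the paper's faster matching-based algorithm:

```python
import heapq

def cdc_shortest_path(links, src, dst):
    """Minimum-cost path in which consecutive links use different channels.
    links: {(u, v): [(channel, cost), ...]} for undirected links. Dijkstra
    over (node, last-channel) states enforces the CDC."""
    adj = {}
    for (u, v), opts in links.items():
        adj.setdefault(u, []).extend((v, c, w) for c, w in opts)
        adj.setdefault(v, []).extend((u, c, w) for c, w in opts)
    best = {}
    heap = [(0.0, src, -1)]              # -1 = no incoming channel yet
    while heap:
        d, u, last = heapq.heappop(heap)
        if u == dst:
            return d
        if best.get((u, last), float("inf")) < d:
            continue
        for v, c, w in adj.get(u, []):
            if c == last:                # CDC: consecutive links must switch channel
                continue
            nd = d + w
            if nd < best.get((v, c), float("inf")):
                best[(v, c)] = nd
                heapq.heappush(heap, (nd, v, c))
    return float("inf")
```

A cheap link can be useless if reaching it forces a same-channel continuation, which is why the state must carry the channel of the last link, not just the node.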
NASA Technical Reports Server (NTRS)
Yuan, Lu; LeBlanc, James
1998-01-01
This thesis investigates the effects of the High Power Amplifier (HPA) and the filters over a satellite or telemetry channel. The Volterra series expression is presented for the nonlinear channel with memory, and the algorithm is based on the finite-state machine model. A RAM-based algorithm operating on the receiver side, the Pre-cursor Enhanced RAM-FSE Canceler (PERC), is developed. A high-order modulation scheme, 16-QAM, is used for simulation; the results show that PERC provides an efficient and reliable method to transmit data on the bandlimited nonlinear channel. The contribution of the PERC algorithm is that it includes both pre-cursors and post-cursors as the RAM address lines, and it suggests a new way to make decisions on the pre-addresses. Compared with the RAM-DFE structure that only includes post-addresses, the BER versus Eb/N0 performance of PERC is substantially enhanced. Experiments are performed for PERC algorithms with different parameters on AWGN channels, and the results are compared and analyzed. The investigation in this thesis includes software simulation and hardware verification. Hardware was set up to collect actual TWT data. Simulations on both the software-generated data and the real-world data were performed. Practical limitations are considered for the hardware-collected data. Simulation results verified the reliability of the PERC algorithm. This work was conducted at NMSU in the Center for Space Telemetering and Telecommunications Systems in the Klipsch School of Electrical and Computer Engineering Department.
Progress towards NASA MODIS and Suomi NPP Cloud Property Data Record Continuity
NASA Astrophysics Data System (ADS)
Platnick, S.; Meyer, K.; Holz, R.; Ackerman, S. A.; Heidinger, A.; Wind, G.; Platnick, S. E.; Wang, C.; Marchant, B.; Frey, R.
2017-12-01
The Suomi NPP VIIRS imager provides an opportunity to extend the 17+ year EOS MODIS climate data record into the next-generation operational era. Similar to MODIS, VIIRS provides visible through IR observations at moderate spatial resolution with a 1330 LT equatorial crossing consistent with MODIS on the Aqua platform. However, unlike MODIS, VIIRS lacks key water vapor and CO2 absorbing channels used for high cloud detection and cloud-top property retrievals. In addition, there is a significant mismatch in the spectral location of the 2.2 μm shortwave-infrared channels used for cloud optical/microphysical retrievals and cloud thermodynamic phase. Given these instrument differences between MODIS EOS and VIIRS S-NPP/JPSS, a merged MODIS-VIIRS cloud record to serve the science community in the coming decades requires different algorithm approaches than those used for MODIS alone. This new approach includes two parallel efforts: (1) imager-only algorithms using only the spectral channels common to VIIRS and MODIS (i.e., eliminating use of the MODIS CO2 and NIR/IR water vapor channels); since these algorithms are run with similar spectral observations, they provide a basis for establishing a continuous cloud data record across the two imagers; and (2) merged imager and sounder measurements (i.e., MODIS-AIRS, VIIRS-CrIS) in lieu of the higher-spatial-resolution MODIS absorption channels absent on VIIRS. The MODIS-VIIRS continuity algorithm for cloud optical property retrievals leverages heritage algorithms that produce the existing MODIS cloud mask (MOD35), the optical and microphysical properties product (MOD06), and the NOAA AWG Cloud Height Algorithm (ACHA). We discuss our progress towards merging the MODIS observational record with VIIRS in order to generate cloud optical property climate data record continuity across the observing systems.
In addition, we summarize efforts to reconcile apparent radiometric biases between analogous imager channels, a critical consideration for obtaining inter-sensor climate data record continuity.
NASA Technical Reports Server (NTRS)
Stowe, Larry L.; Ignatov, Alexander M.; Singh, Ramdas R.
1997-01-01
A revised (phase 2) single-channel algorithm for aerosol optical thickness, τ^A_SAT, retrieval over oceans from radiances in channel 1 (0.63 microns) of the Advanced Very High Resolution Radiometer (AVHRR) has been implemented at the National Oceanic and Atmospheric Administration's National Environmental Satellite Data and Information Service for the NOAA 14 satellite launched December 30, 1994. It is based on careful validation of its operational predecessor (the phase 1 algorithm), implemented in 1989. Both algorithms scale the upward satellite radiances in cloud-free conditions to aerosol optical thickness using an updated radiative transfer model of the ocean and atmosphere. Application of the phase 2 algorithm to three matchup Sun-photometer and satellite data sets, one with NOAA 9 in 1988 and two with NOAA 11 in 1989 and 1991, respectively, shows that the systematic error is less than 10%, with a random error of σ_τ ≈ 0.04. First results of τ^A_SAT retrievals from NOAA 14 using the phase 2 algorithm, and from checking its internal consistency, are presented. The potential two-channel (phase 3) algorithm for the retrieval of an aerosol size parameter, such as the Junge size distribution exponent, by adding either channel 2 (0.83 microns) from the current AVHRR instrument, or a 1.6-micron channel to be available on the Tropical Rainfall Measurement Mission and the NOAA-KLM satellites by 1997, is under investigation. The possibility of using this additional information in the retrieval of a more accurate estimate of aerosol optical thickness is being explored.
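The scaling step from cloud-free radiance to optical thickness can be pictured as inverting a monotonic lookup table computed off-line by the radiative transfer model. All grid values below are invented placeholders for one fixed sun/view geometry, not the operational NESDIS tables.

```python
import numpy as np

# Hypothetical forward-model output: cloud-free top-of-atmosphere channel-1
# reflectance as a monotonic function of aerosol optical thickness tau
# (all numbers invented for illustration).
TAU_GRID = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8])
REFL_GRID = np.array([0.010, 0.018, 0.025, 0.039, 0.065, 0.110])

def retrieve_tau(reflectance):
    """Scale an observed reflectance to tau by inverting the lookup
    table with linear interpolation (clamped at the table ends)."""
    return float(np.interp(reflectance, REFL_GRID, TAU_GRID))
```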
Progress towards MODIS and VIIRS Cloud Optical Property Data Record Continuity
NASA Astrophysics Data System (ADS)
Meyer, K.; Platnick, S. E.; Wind, G.; Amarasinghe, N.; Holz, R.; Ackerman, S. A.; Heidinger, A. K.
2016-12-01
The launch of Suomi NPP in the fall of 2011 began the next generation of U.S. operational polar-orbiting Earth observations, and its VIIRS imager provides an opportunity to extend the 15+ year climate data record of MODIS EOS. Similar to MODIS, VIIRS provides visible through IR observations at moderate spatial resolution with a 1330 LT equatorial crossing consistent with MODIS on the Aqua platform. However, unlike MODIS, VIIRS lacks key water vapor and CO2 absorbing channels used for high cloud detection and cloud-top property retrievals, and there is a significant change in the spectral location of the 2.1 μm shortwave-infrared channel used for cloud optical/microphysical retrievals and cloud thermodynamic phase. Given these instrument differences between MODIS EOS and VIIRS S-NPP/JPSS, we discuss our progress towards merging the MODIS observational record with VIIRS in order to generate cloud optical property climate data record continuity across the observing systems. The MODIS-VIIRS continuity algorithm for cloud optical property retrievals leverages heritage algorithms that produce the existing MODIS cloud optical and microphysical properties product (MOD06); the NOAA AWG/CLAVR-x cloud-top property algorithm and a common MODIS-VIIRS cloud mask feed into the optical property algorithm. To account for the different channel sets of MODIS and VIIRS, each algorithm nominally uses a subset of channels common to both imagers. Data granule and aggregated examples from the current version of the continuity algorithm (MODAWG) will be shown. In addition, efforts to reconcile apparent radiometric biases between analogous channels of the two imagers, a critical consideration for obtaining inter-sensor climate data record continuity, will be discussed.
Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio
2018-03-01
To provide a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 are exposed in two different Varian linacs. They are read with an EPSON V800 flatbed scanner. The Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis such as the standard deviation and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. Also, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show a Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented. With the aid of this model and the use of Monte Carlo techniques, the uncertainties of dose estimates for single-channel and multichannel algorithms are estimated. The application of the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry.
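The Monte Carlo stage of such an uncertainty model can be sketched as sampling the scanner reading noise and pushing each sample through the calibration function. The Gaussian noise model and the trivial calibration used in the test are assumptions for illustration; a full multi-stage model would also sample the calibration-fit parameters.

```python
import numpy as np

def dose_uncertainty(pv_mean, pv_sigma, calib, n_samples=100_000, seed=0):
    """Monte Carlo propagation of reading noise through a calibration
    function `calib` (pixel value -> dose).  Returns (bias, standard
    deviation) of the dose estimate; the sampled doses themselves give a
    numerical representation of the output probability density."""
    rng = np.random.default_rng(seed)
    pv = rng.normal(pv_mean, pv_sigma, n_samples)   # noisy scanner readings
    doses = calib(pv)
    ref = calib(np.array([pv_mean]))[0]             # dose at the noise-free reading
    return doses.mean() - ref, doses.std()
```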
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Zhu, Ye; Wang, Chunhui; Yu, Xiaosong; Liu, Chuan; Liu, Binglin; Zhang, Jie
2017-07-01
With the capacity increase in optical networks enabled by spatial division multiplexing (SDM) technology, spatial division multiplexing elastic optical networks (SDM-EONs) have attracted much attention from both academia and industry. Super-channels are an important type of service provisioning in SDM-EONs. This paper focuses on the issue of super-channel construction in SDM-EONs. A mixed super-channel-oriented routing, spectrum, and core assignment (MS-RSCA) algorithm is proposed for SDM-EONs, taking inter-core crosstalk into account. Simulation results show that MS-RSCA can improve spectrum resource utilization and reduce blocking probability significantly compared with the baseline RSCA algorithms.
NASA Astrophysics Data System (ADS)
Kim, Hyo-Su; Kim, Dong-Hoi
The dynamic channel allocation (DCA) scheme in multi-cell systems causes a serious inter-cell interference (ICI) problem for some existing calls when channels for new calls are allocated. Such a problem can be addressed by an advanced centralized DCA design that is able to minimize ICI. Thus, in this paper, a centralized DCA is developed for the downlink of multi-cell orthogonal frequency division multiple access (OFDMA) systems with full spectral reuse. In practice, however, the search space of channel assignment for a centralized DCA scheme in multi-cell systems grows exponentially with the number of required calls, channels, and cells; the assignment becomes an NP-hard problem, and finding an optimum channel allocation is currently intractable. In this paper, we propose an ant colony optimization (ACO) based DCA scheme using a low-complexity ACO algorithm, a kind of heuristic algorithm, in order to solve the aforementioned problem. Simulation results demonstrate significant improvements compared to the existing schemes in terms of grade of service (GoS) and the forced termination probability of existing calls, without degrading the average throughput of the system.
Orio, Patricio; Soudry, Daniel
2012-01-01
Background The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as in MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable – allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. 
Also, this DA method was considerably more efficient than MC methods, except when short time steps or low channel numbers were used. PMID:22629320
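A diffusion approximation of this kind can be sketched, for the simplest two-state (closed/open) channel, as an Euler-Maruyama integration in which the noise term scales as 1/sqrt(N). This toy version only starts at the steady state and is not the paper's kinetic-scheme-general derivation; rates, step size, and bounds handling are illustrative.

```python
import numpy as np

def simulate_open_fraction(alpha, beta, n_channels, dt=0.01, steps=5000, seed=1):
    """Euler-Maruyama integration of a diffusion approximation for the
    open fraction x of a two-state channel population.  The drift is the
    deterministic rate equation; the diffusion term scales as
    1/sqrt(n_channels), so fluctuations vanish for large populations."""
    rng = np.random.default_rng(seed)
    x = alpha / (alpha + beta)                  # start at the steady state
    out = np.empty(steps)
    for i in range(steps):
        drift = alpha * (1.0 - x) - beta * x
        diffusion = np.sqrt((alpha * (1.0 - x) + beta * x) / n_channels)
        x += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        x = min(max(x, 0.0), 1.0)               # keep the fraction in [0, 1]
        out[i] = x
    return out
```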
An FPGA-based trigger for the phase II of the MEG experiment
NASA Astrophysics Data System (ADS)
Baldini, A.; Bemporad, C.; Cei, F.; Galli, L.; Grassi, M.; Morsani, F.; Nicolò, D.; Ritt, S.; Venturini, M.
2016-07-01
For phase II of MEG, we are developing a combined trigger and DAQ system. Here we focus on the trigger side, which performs an on-line reconstruction of detector signals and event selection within 450 μs of event occurrence. Trigger concentrator boards (TCBs) are under development to gather data from different crates, each connected to a set of detector channels, and to run higher-level algorithms that issue a trigger in the case of a candidate signal event. We describe the major features of the new system, in comparison with phase I, as well as its performance in terms of selection efficiency and background rejection.
Communication target object recognition for D2D connection with feature size limit
NASA Astrophysics Data System (ADS)
Ok, Jiheon; Kim, Soochang; Kim, Young-hoon; Lee, Chulhee
2015-03-01
Recently, a new concept of device-to-device (D2D) communication, called "point-and-link communication," has attracted great attention due to its intuitive and simple operation. This approach enables users to communicate with target devices without any pre-identification information such as SSIDs or MAC addresses, by selecting the target image displayed on the user's own device. In this paper, we present an efficient object matching algorithm that can be applied to look(point)-and-link communications for mobile services. Due to the limited channel bandwidth and low computational power of mobile terminals, the matching algorithm should satisfy low-complexity, low-memory, and real-time requirements. To meet these requirements, we propose fast and robust feature extraction that considers both descriptor size and processing time. The proposed algorithm utilizes an HSV color histogram, SIFT (Scale Invariant Feature Transform) features, and object aspect ratios. To reduce the descriptor size to under 300 bytes, a limited number of SIFT keypoints are chosen as feature points and histograms are binarized while maintaining the required performance. Experimental results show the robustness and efficiency of the proposed algorithm.
Filter bank common spatial patterns in mental workload estimation.
Arvaneh, Mahnaz; Umilta, Alberto; Robertson, Ian H
2015-01-01
EEG-based workload estimation technology provides a real-time means of assessing mental workload. Such technology can effectively enhance the performance of human-machine interaction and the learning process. When designing workload estimation algorithms, a crucial signal processing component is the feature extraction step. Despite several studies in this field, the spatial properties of the EEG signals have mostly been neglected. Since EEG inherently has a poor spatial resolution, features extracted individually from each EEG channel may not be sufficiently efficient. This problem becomes more pronounced when we use low-cost but convenient EEG sensors with limited stability, which is the case in practical scenarios. To address this issue, in this paper we introduce a filter bank common spatial patterns algorithm combined with a feature selection method to extract spatio-spectral features discriminating different mental workload levels. To evaluate the proposed algorithm, we carry out a comparative analysis between two representative types of working memory tasks using data recorded from an Emotiv EPOC headset, a mobile low-cost EEG recording device. The experimental results showed that the proposed spatial filtering algorithm outperformed state-of-the-art algorithms in terms of classification accuracy.
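The core CSP step of such a filter-bank pipeline can be sketched as whitening the composite covariance and diagonalizing one class in the whitened space, taking log-variance features from the extreme spatial filters. This is a generic CSP sketch, not the paper's exact implementation; in the filter-bank variant it is repeated once per band-passed copy of the signal before feature selection.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial patterns for two classes of trials, each of shape
    (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance, then diagonalise class A in the
    # whitened space; equivalent to solving ca w = lambda (ca + cb) w.
    evals, evecs = np.linalg.eigh(ca + cb)
    whiten = evecs / np.sqrt(evals)
    s_vals, s_vecs = np.linalg.eigh(whiten.T @ ca @ whiten)
    w = whiten @ s_vecs                    # columns are spatial filters
    order = np.argsort(s_vals)
    pick = np.r_[order[:n_pairs], order[-n_pairs:]]
    return w[:, pick].T                    # (2 * n_pairs, n_channels)

def log_var_features(trial, filters):
    """Log normalised variance of the spatially filtered signals."""
    v = (filters @ trial).var(axis=1)
    return np.log(v / v.sum())
```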
Electronics and triggering challenges for the CMS High Granularity Calorimeter
NASA Astrophysics Data System (ADS)
Lobanov, A.
2018-02-01
The High Granularity Calorimeter (HGCAL), presently being designed by the CMS collaboration to replace the CMS endcap calorimeters for the High Luminosity phase of the LHC, will feature six million channels distributed over 52 longitudinal layers. The requirements for the front-end electronics are extremely challenging, including a high dynamic range (0.2 fC-10 pC), low noise (~2000 e-, to allow calibration on single minimum-ionising particles throughout the detector lifetime), and low power consumption (~20 mW/channel), as well as the need to select and transmit trigger information with high granularity. Exploiting the intrinsic precision-timing capabilities of silicon sensors also requires careful design of the front-end electronics as well as the whole system, particularly clock distribution. The harsh radiation environment and the requirement to keep the whole detector as dense as possible will require novel solutions to the on-detector electronics layout. Processing the data from the HGCAL imposes equally large challenges on the off-detector electronics, both for the hardware and the incorporated algorithms. We present an overview of the complete electronics architecture, as well as the performance of prototype components and algorithms.
Evaluation of a multi-channel algorithm for reducing transient sounds.
Keshavarzi, Mahmoud; Baer, Thomas; Moore, Brian C J
2018-05-15
The objective was to evaluate and select appropriate parameters for a multi-channel transient reduction (MCTR) algorithm for detecting and attenuating transient sounds in speech. In each trial, the same sentence was played twice. A transient sound was presented in both sentences, but its level varied across the two depending on whether or not it had been processed by the MCTR and on the "strength" of the processing. The participant indicated their preference for which one was better and by how much in terms of the balance between the annoyance produced by the transient and the audibility of the transient (they were told that the transient should still be audible). Twenty English-speaking participants were tested, 10 with normal hearing and 10 with mild-to-moderate hearing-impairment. Frequency-dependent linear amplification was provided for the latter. The results for both participant groups indicated that sounds processed using the MCTR were preferred over the unprocessed sounds. For the hearing-impaired participants, the medium and strong settings of the MCTR were preferred over the weak setting. The medium and strong settings of the MCTR reduced the annoyance produced by the transients while maintaining their audibility.
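One band of such a detector can be sketched as frame-energy thresholding against a running median. The frame length, threshold ratio, and attenuation gain below are invented parameters, not the evaluated weak/medium/strong settings; the evaluated MCTR applies this kind of detector independently in multiple frequency channels.

```python
import numpy as np

def reduce_transients(x, frame=64, ratio=8.0, gain=0.25):
    """Attenuate frames whose energy exceeds `ratio` times the median
    frame energy (single-band sketch of transient reduction)."""
    y = x.astype(float).copy()
    n = len(y) // frame
    energy = np.array([np.mean(y[i*frame:(i+1)*frame] ** 2) for i in range(n)])
    threshold = ratio * (np.median(energy) + 1e-12)
    for i in range(n):
        if energy[i] > threshold:
            y[i*frame:(i+1)*frame] *= gain    # soften the detected transient
    return y
```

Choosing `gain` above zero mirrors the evaluation criterion: the transient is attenuated but kept audible.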
Gao, Wei-Wei; Shen, Jian-Xin; Wang, Yu-Liang; Liang, Chun; Zuo, Jing
2013-02-01
In order to automatically detect hemorrhages in fundus images and develop an automated diabetic retinopathy screening system, a novel algorithm named locally adaptive region growing based on multi-template matching was established and studied. Firstly, the spectral signature of the major anatomical structures in the fundus was studied, so that the right channel among the RGB channels could be selected for each segmentation object. Secondly, the fundus image was preprocessed by means of HSV brightness correction and contrast limited adaptive histogram equalization (CLAHE). Then, seeds for region growing were found by removing the optic disc and vessels from the result of normalized cross-correlation (NCC) template matching on the preprocessed image with several templates. Finally, locally adaptive region growing segmentation was used to find the exact contours of hemorrhages, and automated detection of the lesions was accomplished. The approach was tested on 90 fundus images of different resolutions with variable color, brightness, and quality. Results suggest that the approach can quickly and effectively detect hemorrhages in fundus images, and that it is stable and robust. As a result, the approach can meet clinical demands.
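The final segmentation step can be sketched as region growing in which a pixel joins the region only if it stays close to the running region mean, which makes the criterion locally adaptive. Seed placement by template matching is omitted here, and the tolerance is an invented parameter.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    stays within `tol` of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(image[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    q.append((ny, nx))
    return mask
```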
Liu, Tao; Djordjevic, Ivan B
2014-12-29
In this paper, we first describe an optimal signal constellation design algorithm suitable for coherent optical channels dominated by linear phase noise. Then, we modify this algorithm to be suitable for channels dominated by nonlinear phase noise. In the optimization procedure, the proposed algorithm uses the cumulative log-likelihood function instead of the Euclidean distance. Further, an LDPC coded modulation scheme is proposed to be used in combination with signal constellations obtained by the proposed algorithm. Monte Carlo simulations indicate that the LDPC-coded modulation schemes employing the new constellation sets, obtained by our new signal constellation design algorithm, significantly outperform the corresponding QAM constellations in terms of transmission distance and have better nonlinearity tolerance.
Real-time image dehazing using local adaptive neighborhoods and dark-channel-prior
NASA Astrophysics Data System (ADS)
Valderrama, Jesus A.; Díaz-Ramírez, Víctor H.; Kober, Vitaly; Hernandez, Enrique
2015-09-01
A real-time algorithm for single-image dehazing is presented. The algorithm is based on calculation of local neighborhoods of a hazed image inside a moving window. The local neighborhoods are constructed by computing rank-order statistics. Next, the dark-channel-prior approach is applied to the local neighborhoods to estimate the transmission function of the scene. With the suggested approach there is no need to apply a refining algorithm, such as soft matting, to the estimated transmission. To achieve high-rate signal processing, the proposed algorithm is implemented exploiting massive parallelism on a graphics processing unit (GPU). Computer simulations are carried out to test the performance of the proposed algorithm in terms of dehazing efficiency and speed of processing. These tests are performed using several synthetic and real images. The obtained results are analyzed and compared with those of existing dehazing algorithms.
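The transmission estimate at the heart of the method can be sketched with a plain dark-channel computation over a moving window. A sliding minimum stands in for the paper's rank-order neighborhoods, and the atmospheric-light estimate is simplified to the image maximum; the usual estimate uses the brightest dark-channel pixels.

```python
import numpy as np

def dark_channel(img, patch=7):
    """Per-pixel minimum over the colour channels followed by a local
    minimum over a square window (edge-padded)."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for y in range(mins.shape[0]):
        for x in range(mins.shape[1]):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1):
    """Recover scene radiance with the dark-channel prior: estimate the
    transmission t, clip it away from zero, and invert the haze model
    I = J * t + A * (1 - t)."""
    A = img.max()
    t = np.clip(1.0 - omega * dark_channel(img / A), t0, 1.0)
    return (img - A) / t[..., None] + A
```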
A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching
Li, Ming; Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Wang, Lei; Pan, Yuanjin; Zhang, Peng
2017-01-01
Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a better seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially in dense urban areas. Our solution is also direction-independent, which gives it better adaptability and robustness for stitching images. PMID:28885547
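The dynamic programming backbone can be sketched with a single energy channel (the paper accumulates two, hence "dual-channel"): accumulate costs row by row, then backtrack the cheapest 8-connected vertical seam.

```python
import numpy as np

def best_seam(energy):
    """Minimum-cost vertical seam through an energy map by dynamic
    programming.  Returns the column index of the seam in each row."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            cost[y, x] += cost[y - 1, lo:hi].min()   # cheapest predecessor
    # Backtrack from the cheapest bottom cell.
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]
```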
User's Guide for Mixed-Size Sediment Transport Model for Networks of One-Dimensional Open Channels
Bennett, James P.
2001-01-01
This user's guide describes a mathematical model for predicting the transport of mixed sizes of sediment by flow in networks of one-dimensional open channels. The simulation package is useful for general sediment routing problems, prediction of erosion and deposition following dam removal, and scour in channels at road embankment crossings or other artificial structures. The model treats input hydrographs as stepwise steady-state, and the flow computation algorithm automatically switches between sub- and supercritical flow as dictated by channel geometry and discharge. A variety of boundary conditions including weirs and rating curves may be applied both external and internal to the flow network. The model may be used to compute flow around islands and through multiple openings in embankments, but the network must be 'simple' in the sense that the flow directions in all channels can be specified before simulation commences. The location and shape of channel banks are user specified, and all bed-elevation changes take place between these banks and above a user-specified bedrock elevation. Computation of sediment transport emphasizes the sand-size range (0.0625-2.0 millimeter), but the user may select any desired range of particle diameters including silt and finer (<0.0625 millimeter). As part of data input, the user may set the original bed-sediment composition of any number of layers of known thickness. The model computes the time evolution of total transport and the size composition of bed- and suspended-load sand through any cross section of interest. It also tracks bed-surface elevation and size composition. The model is written in the FORTRAN programming language for implementation on personal computers using the WINDOWS operating system and, along with certain graphical output display capability, is accessed from a graphical user interface (GUI).
The GUI provides a framework for selecting input files and parameters of a number of components of the sediment-transport process. There are no restrictions in the use of the model as to numbers of channels, channel junctions, cross sections per channel, or points defining the cross sections. Following completion of the simulation computations, the GUI accommodates display of longitudinal plots of either bed elevation and size composition, or of transport rate and size composition of the various components, for individual channels and selected times during the simulation period. For individual cross sections, the GUI also allows display of time series of transport rate and size composition of the various components and of bed elevation and size composition.
Sensitivity analysis of a new SWIR-channel measuring tropospheric CH 4 and CO from space
NASA Astrophysics Data System (ADS)
Jongma, Rienk T.; Gloudemans, Annemieke M. S.; Hoogeveen, Ruud W. M.; Aben, Ilse; de Vries, Johan; Escudero-Sanz, Isabel; van den Oord, Gijsbertus; Levelt, Pieternel F.
2006-08-01
In preparation for future atmospheric space missions, a consortium of Dutch organizations is performing design studies on a nadir-viewing grating-based imaging spectrometer with OMI and SCIAMACHY heritage. The spectrometer measures selected species (O3, NO2, HCHO, H2O, SO2, aerosols (optical depth, type and absorption index), CO and CH4) with sensitivity down to the Earth's surface, thus addressing science issues on air quality and climate. It includes 3 UV-VIS channels continuously covering the 270-490 nm range, a NIR channel covering the 710-775 nm range, and a SWIR channel covering the 2305-2385 nm range. This instrument concept, named TROPOMI, is part of the TRAQ mission proposal to ESA in response to the Call for Earth Explorer Ideas 2005 and, named TROPI, part of the CAMEO proposal prepared for the US NRC decadal study call on Earth science and applications from space. The SWIR channel is optional in the TROPOMI/TRAQ instrument and included as baseline in the TROPI/CAMEO instrument. This paper focuses on derivation of the instrument requirements for the SWIR channel by presenting the results of retrieval studies. Synthetic detector spectra are generated by the combination of a forward model and an instrument simulator that includes the properties of state-of-the-art detector technology. The synthetic spectra are input to the CO and CH4 IMLM retrieval algorithm originally developed for SCIAMACHY. The required accuracy of the Level-2 SWIR data products defines the main instrument parameters such as spectral resolution and sampling, telescope aperture, detector temperature, and optical bench temperature. The impact of selected calibration and retrieval errors on the Level-2 products has been characterized. The current status of the SWIR-channel optical design, with its demanding requirements on ground-pixel size, spectral resolution, and signal-to-noise ratio, will be presented.
NASA Astrophysics Data System (ADS)
Choi, Wonjoon; Yoon, Myungchul; Roh, Byeong-Hee
Eavesdropping on backward channels in RFID environments may cause severe privacy problems, because it exposes personal information related to the tags that each person carries. However, most existing RFID tag security schemes focus on forward channel protection. In this paper, we propose a simple but effective method to solve the backward channel eavesdropping problem, based on a randomized-tree walking algorithm, for securing tag ID information and privacy in RFID-based applications. In order to show the efficiency of the proposed scheme, we derive two performance models for the cases when CRC is used and not used. It is shown that the proposed method can lower the probability of eavesdropping on backward channels to nearly zero.
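The underlying tree-walking singulation can be sketched as a prefix search: the reader extends a bit prefix until exactly one tag matches. This toy version is deterministic; the proposed randomized variant has tags answer with fresh random aliases so that the backward channel never carries true IDs.

```python
def tree_walk(tag_ids, bits):
    """Binary tree-walking singulation over fixed-length bit-string IDs.
    The reader queries prefixes; a collision (more than one match) splits
    the prefix, and a unique match reads out the tag."""
    found = []
    stack = [""]
    while stack:
        prefix = stack.pop()
        matches = [t for t in tag_ids if t.startswith(prefix)]
        if len(matches) == 1:
            found.append(matches[0])          # singulated: read the full ID
        elif len(matches) > 1 and len(prefix) < bits:
            stack += [prefix + "0", prefix + "1"]   # split on a collision
    return found
```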
NASA Astrophysics Data System (ADS)
Hortos, William S.
1997-04-01
The use of artificial neural networks (NNs) to address the channel assignment problem (CAP) for cellular time-division multiple access and code-division multiple access networks has previously been investigated by this author and many others. The investigations to date have been based on a hexagonal cell structure established by omnidirectional antennas at the base stations. No account was taken of the use of spatial isolation enabled by directional antennas to reduce interference between mobiles. Any reduction in interference translates into increased capacity and consequently alters the performance of the NNs. Previous studies have sought to improve the performance of Hopfield-Tank network algorithms and self-organizing feature map algorithms applied primarily to static channel assignment (SCA) for cellular networks that handle uniformly distributed, stationary traffic in each cell for a single type of service. The resulting algorithms minimize energy functions representing interference constraints and ad hoc conditions that promote convergence to optimal solutions. While the structures of the derived neural network algorithms (NNAs) offer the potential advantages of inherent parallelism and adaptability to changing system conditions, this potential has yet to be fulfilled for the CAP in emerging mobile networks. The next-generation communication infrastructures must accommodate dynamic operating conditions. Macrocell topologies are being refined to microcells and picocells that can be dynamically sectored by adaptively controlled directional antennas and programmable transceivers. These networks must support the time-varying demands of personal communication services (PCS) that simultaneously carry voice, data, and video and thus require new dynamic channel assignment (DCA) algorithms.
This paper examines the impact of dynamic cell sectoring and geometric conditioning on NNAs developed for SCA in omnicell networks with stationary traffic to improve the metrics of convergence rate and call blocking. Genetic algorithms (GAs) are also considered in PCS networks as a means to overcome the known weakness of Hopfield NNAs in determining global minima. The resulting GAs for DCA in PCS networks are compared to improved DCA algorithms based on Hopfield NNs for stationary cellular networks. Algorithm performance is compared on the basis of rate of convergence, blocking probability, analytic complexity, and parametric sensitivity to transient traffic demands and channel interference.
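The GA approach considered above can be illustrated with a toy static channel assignment: a minimal sketch, assuming a hypothetical compatibility matrix of minimum channel separations and a fitness that simply counts interference-constraint violations (the energy functions and traffic models in the paper are far more elaborate).

```python
import numpy as np

def ga_channel_assignment(compat, n_channels, pop=40, gens=200, seed=0):
    """Toy genetic algorithm for static channel assignment.

    compat[i][j] is the minimum channel separation required between cells
    i and j (0 = no constraint); a violation occurs whenever
    |ch[i] - ch[j]| < compat[i][j] for i != j.
    """
    rng = np.random.default_rng(seed)
    n_cells = compat.shape[0]

    def violations(ch):
        diff = np.abs(ch[:, None] - ch[None, :])
        mask = ~np.eye(n_cells, dtype=bool)
        return int(np.sum((diff < compat) & mask) // 2)

    popu = rng.integers(0, n_channels, size=(pop, n_cells))
    best, best_fit = None, np.inf
    for _ in range(gens):
        fits = np.array([violations(ind) for ind in popu])
        if fits.min() < best_fit:
            best_fit = fits.min()
            best = popu[fits.argmin()].copy()
        if best_fit == 0:
            break
        # tournament selection: lower violation count wins
        idx = rng.integers(0, pop, size=(pop, 2))
        winners = np.where(fits[idx[:, 0]] <= fits[idx[:, 1]],
                           idx[:, 0], idx[:, 1])
        parents = popu[winners]
        # one-point crossover on consecutive pairs, then mutation
        children = parents.copy()
        cut = rng.integers(1, n_cells, size=pop)
        for k in range(0, pop - 1, 2):
            c = cut[k]
            children[k, c:], children[k + 1, c:] = (
                parents[k + 1, c:].copy(), parents[k, c:].copy())
        mut = rng.random(children.shape) < 0.02
        children[mut] = rng.integers(0, n_channels, size=mut.sum())
        popu = children
    return best, best_fit
```

With a fully connected four-cell cluster requiring distinct channels, the GA finds a violation-free assignment in a handful of generations.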
Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors
NASA Technical Reports Server (NTRS)
Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.
2009-01-01
A novel radiative transfer model and a physical inversion algorithm based on principal component analysis are presented. Instead of dealing with channel radiances directly, the new approach fits the principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses the radiances into a much smaller dimension, making both the forward model and the inversion algorithm more efficient.
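The compression step can be sketched as follows, using synthetic spectra in place of real channel radiances; the number of retained components, the spectral shapes, and the noise level are all illustrative assumptions, not the paper's values.

```python
import numpy as np

# Training set of synthetic spectra: rows = profiles, cols = channels.
rng = np.random.default_rng(1)
n_profiles, n_channels, n_pc = 500, 300, 20
base = np.sin(np.linspace(0, 6 * np.pi, n_channels))
radiances = (base
             + 0.05 * rng.standard_normal((n_profiles, n_channels))
             + rng.standard_normal((n_profiles, 1))
             * np.cos(np.linspace(0, 2 * np.pi, n_channels)))

# Principal components of the mean-removed radiances via SVD
mean = radiances.mean(axis=0)
_, _, vt = np.linalg.svd(radiances - mean, full_matrices=False)
eofs = vt[:n_pc]                        # leading components

scores = (radiances - mean) @ eofs.T    # compress: 300 channels -> 20 scores
reconstructed = scores @ eofs + mean    # expand back to channel space

rms = np.sqrt(np.mean((reconstructed - radiances) ** 2))
```

The forward model and inversion then operate on the 20 scores rather than the 300 channel radiances, which is the source of the efficiency gain.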
NASA Astrophysics Data System (ADS)
Meng, Luming; Sheong, Fu Kit; Zeng, Xiangze; Zhu, Lizhe; Huang, Xuhui
2017-07-01
Constructing Markov state models from large-scale molecular dynamics simulation trajectories is a promising approach to dissect the kinetic mechanisms of complex chemical and biological processes. Combined with transition path theory, Markov state models can be applied to identify all pathways connecting any conformational states of interest. However, the identified pathways can be too complex to comprehend, especially for multi-body processes where numerous parallel pathways with comparable flux probability often coexist. Here, we have developed a path lumping method to group these parallel pathways into metastable path channels for analysis. We define the similarity between two pathways as the intercrossing flux between them and then apply the spectral clustering algorithm to lump these pathways into groups. We demonstrate the power of our method by applying it to two systems: a 2D-potential consisting of four metastable energy channels and the hydrophobic collapse process of two hydrophobic molecules. In both cases, our algorithm successfully reveals the metastable path channels. We expect this path lumping algorithm to be a promising tool for revealing unprecedented insights into the kinetic mechanisms of complex multi-body processes.
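The lumping step described above can be sketched with spectral clustering on a pathway-similarity matrix; the similarity values here are synthetic stand-ins for the intercrossing fluxes computed by the method.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def lump_pathways(similarity, n_channels, seed=0):
    """Lump parallel pathways into path channels by spectral clustering.

    similarity[i, j] is the (symmetric) intercrossing flux between
    pathways i and j: large values mean the two pathways exchange flux
    easily and should fall into the same metastable path channel.
    """
    w = (similarity + similarity.T) / 2.0
    d = w.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    lap = np.eye(len(w)) - d_inv_sqrt[:, None] * w * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(lap)
    embed = vecs[:, :n_channels]        # eigenvectors of smallest eigenvalues
    embed /= np.linalg.norm(embed, axis=1, keepdims=True)
    _, labels = kmeans2(embed, n_channels, minit='++', seed=seed)
    return labels
```

On a matrix with two blocks of mutually high-flux pathways, the clustering recovers the two path channels.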
Automatic channel trimming for control systems: A concept
NASA Technical Reports Server (NTRS)
Vandervoort, R. J.; Sykes, H. A.
1977-01-01
Set of bias signals added to channel inputs automatically normalize differences between channels. Algorithm and second feedback loop compute trim biases. Concept could be applied to regulators and multichannel servosystems for remote manipulators in undersea mining.
Zhang, Guosong; Hovem, Jens M.; Dong, Hefeng
2012-01-01
Underwater communication channels are often complicated, and in particular multipath propagation may cause intersymbol interference (ISI). This paper addresses how to remove ISI, and evaluates the performance of three different receiver structures and their implementations. Using real data collected in a high-frequency (10–14 kHz) field experiment, the receiver structures are evaluated by off-line data processing. The three structures are multichannel decision feedback equalizer (DFE), passive time reversal receiver (passive-phase conjugation (PPC) with a single channel DFE), and the joint PPC with multichannel DFE. In sparse channels, dominant arrivals represent the channel information, and the matching pursuit (MP) algorithm which exploits the channel sparseness has been investigated for PPC processing. In the assessment, it is found that: (1) it is advantageous to obtain spatial gain using the adaptive multichannel combining scheme; and (2) the MP algorithm improves the performance of communications using PPC processing. PMID:22438755
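The matching pursuit idea used above for sparse channels can be sketched as follows: a dictionary of delayed copies of a known probe signal is searched greedily for the dominant arrivals. The probe, tap values, and stopping rule are illustrative assumptions.

```python
import numpy as np

def matching_pursuit(probe, received, max_taps=5, tol=0.05):
    """Estimate a sparse channel impulse response by matching pursuit.

    Greedily picks the delay/amplitude pairs (dominant arrivals) that
    best explain the received signal, stopping when the residual energy
    falls below tol times the received energy.
    """
    n = len(received)
    delays = n - len(probe) + 1
    # dictionary: column d is the probe delayed by d samples
    dictionary = np.zeros((n, delays))
    for d in range(delays):
        dictionary[d:d + len(probe), d] = probe
    norms = np.linalg.norm(dictionary, axis=0)
    residual = received.astype(float).copy()
    taps = {}
    for _ in range(max_taps):
        corr = dictionary.T @ residual / norms
        d = int(np.argmax(np.abs(corr)))
        amp = corr[d] / norms[d]        # projection onto the chosen atom
        taps[d] = taps.get(d, 0.0) + amp
        residual -= amp * dictionary[:, d]
        if np.linalg.norm(residual) < tol * np.linalg.norm(received):
            break
    return taps
```

Because only the dominant arrivals are kept, the estimate exploits the channel sparseness in the same spirit as the MP-based PPC processing evaluated in the paper.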
Integer cosine transform compression for Galileo at Jupiter: A preliminary look
NASA Technical Reports Server (NTRS)
Ekroot, L.; Dolinar, S.; Cheung, K.-M.
1993-01-01
The Galileo low-gain antenna mission has a severely rate-constrained channel over which we wish to send large amounts of information. Because of this link pressure, compression techniques for image and other data are being selected. The compression technique that will be used for images is the integer cosine transform (ICT). This article investigates the compression performance of Galileo's ICT algorithm as applied to Galileo images taken during the early portion of the mission and to images that simulate those expected from the encounter at Jupiter.
NASA Astrophysics Data System (ADS)
Wibisana, H.; Zainab, S.; Dara K., A.
2018-01-01
Chlorophyll-a is one of the parameters used to detect the presence of fish populations, as well as one of the parameters used to assess water quality. Chlorophyll concentrations have been investigated extensively, including chlorophyll-a mapping with remote sensing satellites. Mapping of chlorophyll concentration is used to obtain an optimal picture of the condition of waters that are often used as fishing areas by fishermen. Remote sensing is a technological breakthrough for broadly monitoring the condition of waters, and to obtain a complete picture of aquatic conditions an algorithm is needed that can estimate the chlorophyll concentration at points scattered across the capture-fisheries research area. Remote sensing algorithms have been widely used by researchers to detect chlorophyll content; the channels of Landsat 8 imagery corresponding to the mapping of chlorophyll concentrations are channels 4, 3, and 2. With multiple channels from Landsat-8 satellite imagery used for chlorophyll detection, an optimal algorithm can be formulated to obtain the best estimate of chlorophyll-a concentration in the research area. From these calculations, a suitable algorithm for conditions along the coast of Pasuruan can be identified: the green channel gives a good correlation of R2 = 0.853 with the algorithm Chlorophyll-a (mg/m3) = 0.093 (R(-0)Red - 3.7049. From this result it can be concluded that the green channel correlates well with the chlorophyll concentration distributed along the coast of Pasuruan.
Design of a Novel Flexible Capacitive Sensing Mattress for Monitoring Sleeping Respiratory
Chang, Wen-Ying; Huang, Chien-Chun; Chen, Chi-Chun; Chang, Chih-Cheng; Yang, Chin-Lung
2014-01-01
In this paper, an algorithm to extract respiration signals using a flexible projected capacitive sensing mattress (FPCSM) designed for personal health assessment is proposed. Unlike the interfaces of conventional measurement systems for polysomnography (PSG) and other contemporary alternatives, the proposed FPCSM uses a projected capacitive sensing capability that is not worn or attached to the body. The FPCSM is composed of a multi-electrode sensor array that can not only observe gestures and motion behaviors but also function as a respiration monitor during sleep using the proposed approach. To improve long-term monitoring when body movement is possible, the FPCSM enables the selection of data from the sensing array, and the methodology selects the electrodes with the optimal signals after applying a channel reduction algorithm that counts the reversals in the capacitive sensing signals as a quality indicator. The simple algorithm is implemented in the time domain. The FPCSM system was used in experimental tests and simultaneously compared with a commercial PSG system for verification. Multiple synchronous measurements were performed at different body contact locations, and parallel data sets were collected. The experimental comparison yields a correlation coefficient of 0.88 between the FPCSM and PSG, demonstrating the feasibility of the system design. PMID:25420152
Extraction of incident irradiance from LWIR hyperspectral imagery
NASA Astrophysics Data System (ADS)
Lahaie, Pierre
2014-10-01
The atmospheric correction of thermal hyperspectral imagery can be separated into two distinct processes: atmospheric compensation (AC) and temperature and emissivity separation (TES). TES requires as input, at each pixel, the ground-leaving radiance and the atmospheric downwelling irradiance, which are the outputs of the AC process. The extraction of the downwelling irradiance from imagery requires assumptions about the nature of some of the pixels, the sensor, and the atmosphere. Another difficulty is that the sensor's spectral response is often not well characterized. To deal with this unknown, we define a spectral mean operator that is used to filter the ground-leaving radiance, together with a computation of the downwelling irradiance from MODTRAN. A user selects a number of pixels in the image for which the emissivity is assumed to be known. The emissivity of these pixels is assumed to be smooth, so that the only spectrally fast-varying component comes from the downwelling irradiance. Using these assumptions, we built an algorithm to estimate the downwelling irradiance. The algorithm is applied to all the selected pixels, and the estimated irradiance is the average over the spectral channels of the resulting computations. The algorithm performs well in simulation, and results are shown for errors in the assumed emissivity and in the atmospheric profiles. Sensor noise mainly influences the required number of pixels.
Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui
2016-06-01
Electroencephalogram (EEG) signals are used broadly in the medical field. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease, and sleep disorders. This paper presents a new method that extracts and selects features from multi-channel EEG signals. The research focuses on three main points. First, a simple random sampling (SRS) technique is used to extract features from the time domain of the EEG signals. Second, a sequential feature selection (SFS) algorithm is applied to select the key features and reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS_SVM) classifier, which classifies the EEG signals from the features extracted and selected by the SRS and SFS stages. The experimental results show that the method achieves 99.90%, 99.80%, and 100% for classification accuracy, sensitivity, and specificity, respectively.
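The SRS-plus-SFS pipeline can be sketched as below. The summary statistics, the toy two-class data, and the nearest-centroid scorer (a self-contained stand-in for the LS_SVM classifier) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def srs_features(epoch, n_samples=64):
    """Simple random sampling: summary statistics of a random subset."""
    pick = rng.choice(len(epoch), size=n_samples, replace=False)
    sub = epoch[pick]
    return np.array([sub.mean(), sub.std(), sub.min(), sub.max()])

def accuracy(X, y, cols):
    """Nearest-centroid score on selected feature columns
    (stand-in for the LS_SVM classifier in the paper)."""
    Xs = X[:, cols]
    cents = np.array([Xs[y == c].mean(axis=0) for c in np.unique(y)])
    pred = np.argmin(((Xs[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

def sfs(X, y, n_keep):
    """Greedy sequential forward selection of feature columns."""
    chosen = []
    while len(chosen) < n_keep:
        scores = [(accuracy(X, y, chosen + [j]), j)
                  for j in range(X.shape[1]) if j not in chosen]
        _, best_j = max(scores)
        chosen.append(best_j)
    return chosen

# Toy data: class 1 epochs have larger variance (seizure-like activity)
epochs = np.concatenate([rng.standard_normal((40, 256)),
                         3.0 * rng.standard_normal((40, 256))])
y = np.repeat([0, 1], 40)
X = np.array([srs_features(e) for e in epochs])
picked = sfs(X, y, 2)
```

On this toy problem the forward selection keeps the variance-sensitive statistics, which is enough to separate the two classes almost perfectly.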
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Formaggio, A. R.; Dossantos, J. R.; Dias, L. A. V.
1984-01-01
An automatic pre-processing technique called Principal Components (PRINCO) was evaluated for analyzing digitized LANDSAT data on land use and vegetation cover of the Brazilian cerrados. The chosen pilot area, 223/67 of MSS/LANDSAT 3, was classified on a GE Image-100 System through a maximum-likelihood algorithm (MAXVER). The same procedure was applied to the PRINCO-treated image. PRINCO consists of a linear transformation performed on the original bands in order to eliminate the information redundancy of the LANDSAT channels. After PRINCO, only two channels were used, thus reducing the computational effort. The grey levels of the original channels and the PRINCO channels for the five identified classes (grassland, "cerrado", burned areas, anthropic areas, and gallery forest) were obtained through the MAXVER algorithm, which also provided the average performance for both cases. To evaluate the results, the Jeffreys-Matusita distance (JM-distance) between classes was computed. The classification matrix obtained through MAXVER after PRINCO pre-processing showed approximately the same average performance in class separability.
Recognition of neural brain activity patterns correlated with complex motor activity
NASA Astrophysics Data System (ADS)
Kurkin, Semen; Musatov, Vyacheslav Yu.; Runnova, Anastasia E.; Grubov, Vadim V.; Efremova, Tatyana Yu.; Zhuravlev, Maxim O.
2018-04-01
In this paper, a technique based on artificial neural networks was developed for recognizing and classifying patterns corresponding to imagined movements in electroencephalograms (EEGs) obtained from a group of untrained subjects. The optimal type, topology, training algorithms, and parameters of the neural networks were selected for the most accurate and fastest recognition and classification of patterns in multi-channel EEGs associated with imagined movements. The influence of the number and choice of analyzed channels of the multichannel EEG on the quality of recognition of imagined movements was also studied, and optimal electrode configurations were obtained. The effect of pre-processing the EEG signals on the accuracy of recognizing imagined movements is also analyzed.
P300 Chinese input system based on Bayesian LDA.
Jin, Jing; Allison, Brendan Z; Brunner, Clemens; Wang, Bei; Wang, Xingyu; Zhang, Jianhua; Neuper, Christa; Pfurtscheller, Gert
2010-02-01
A brain-computer interface (BCI) is a new communication channel between humans and computers that translates brain activity into recognizable command and control signals. Attended events can evoke P300 potentials in the electroencephalogram. Hence, the P300 has been used in BCI systems to spell, control cursors or robotic devices, and other tasks. This paper introduces a novel P300 BCI to communicate Chinese characters. To improve classification accuracy, an optimization algorithm (particle swarm optimization, PSO) is used for channel selection (i.e., identifying the best electrode configuration). The effects of different electrode configurations on classification accuracy were tested by Bayesian linear discriminant analysis offline. The offline results from 11 subjects show that this new P300 BCI can effectively communicate Chinese characters and that the features extracted from the electrodes obtained by PSO yield good performance.
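The PSO channel-selection step can be sketched with the Kennedy-Eberhart binary variant below. The fitness here is a hypothetical stand-in that rewards masks close to a known informative subset; in the paper the fitness would be the offline Bayesian LDA classification accuracy.

```python
import numpy as np

def binary_pso(fitness, n_bits, n_particles=20, iters=60, seed=0):
    """Binary particle swarm optimization over channel subsets.

    Each particle is a 0/1 mask over electrodes; velocities are squashed
    with a sigmoid into per-bit probabilities of setting the bit to 1.
    """
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=(n_particles, n_bits))
    v = np.zeros((n_particles, n_bits))
    pbest = x.copy()
    pbest_fit = np.array([fitness(p) for p in x])
    g = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(v.shape), rng.random(v.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        prob = 1.0 / (1.0 + np.exp(-v))          # sigmoid squashing
        x = (rng.random(v.shape) < prob).astype(int)
        fits = np.array([fitness(p) for p in x])
        improved = fits > pbest_fit
        pbest[improved], pbest_fit[improved] = x[improved], fits[improved]
        g = pbest[np.argmax(pbest_fit)].copy()
    return g, pbest_fit.max()

# Hypothetical fitness: reward masks close to a "true" informative subset
true_mask = np.zeros(16, dtype=int)
true_mask[[2, 5, 8, 11]] = 1
fit = lambda m: -np.sum(np.abs(m - true_mask))
best, best_fit = binary_pso(fit, 16)
```

The swarm converges toward the informative electrode subset without evaluating all 2^16 configurations exhaustively.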
An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm
Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En
2015-01-01
A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction. PMID:26287193
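The NEO detection stage can be sketched in software as follows; the smoothing window and the threshold rule (a multiple of the mean NEO output) are common illustrative choices, not necessarily the circuit's exact parameters.

```python
import numpy as np

def neo_detect(x, window=3, k=8.0):
    """Nonlinear energy operator (NEO) spike detection.

    psi[n] = x[n]^2 - x[n-1]*x[n+1] emphasizes high-frequency,
    high-energy transients; samples whose smoothed psi exceeds k times
    its mean are flagged as spike candidates.
    """
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    kernel = np.ones(window) / window
    smooth = np.convolve(psi, kernel, mode='same')   # short moving average
    thr = k * smooth.mean()
    return np.where(smooth > thr)[0]
```

A sharp transient injected into low-level noise is flagged near its true location, while the baseline stays below threshold.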
NASA Astrophysics Data System (ADS)
Liu, Chong-xin; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Tian, Qing-hua; Tian, Feng; Wang, Yong-jun; Rao, Lan; Mao, Yaya; Li, Deng-ao
2018-01-01
During the last decade, the orthogonal frequency division multiplexing radio-over-fiber (OFDM-ROF) system with adaptive modulation technology has attracted great interest due to its capability of raising the spectral efficiency dramatically, reducing the effects of the fiber link or wireless channel, and improving the communication quality. In this study, based on a theoretical analysis of nonlinear distortion and frequency-selective fading of the transmitted signal, a low-complexity adaptive modulation algorithm is proposed in combination with sub-carrier grouping technology. This algorithm achieves optimal system performance by calculating the average combined signal-to-noise ratio of each group and dynamically adjusting the modulation format according to preset thresholds and the user's requirements. At the same time, the algorithm takes the sub-carrier group as the smallest unit in the initial bit allocation and the subsequent bit adjustment, so its complexity is only 1/M (where M is the number of sub-carriers in each group) of that of the Fischer algorithm, which is much smaller than many classic adaptive modulation algorithms, such as the Hughes-Hartogs and Chow algorithms, and is in line with the development direction of green, high-speed communication. Simulation results show that the performance of the OFDM-ROF system with the improved algorithm is much better than without adaptive modulation, with the BER of the former one to two orders of magnitude lower than that of the latter as the SNR increases. This low-complexity adaptive modulation algorithm is thus extremely useful for the OFDM-ROF system.
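The group-wise adaptation can be sketched as below: subcarriers are processed in groups, the average SNR of each group is compared against switching thresholds, and the whole group gets one modulation order. The thresholds and the modulation ladder are hypothetical values for illustration.

```python
import numpy as np

def assign_modulation(snr_db, group_size, thresholds=(6.0, 12.0, 18.0, 24.0)):
    """Group-wise adaptive modulation.

    The average SNR of each sub-carrier group is compared against
    hypothetical switching thresholds (dB) to pick one modulation order
    for every sub-carrier in that group: off, BPSK, QPSK, 16-QAM, 64-QAM
    mapped to 0, 1, 2, 4, 6 bits per symbol.
    """
    bits_per_level = np.array([0, 1, 2, 4, 6])
    n_groups = len(snr_db) // group_size
    groups = snr_db[:n_groups * group_size].reshape(n_groups, group_size)
    avg = groups.mean(axis=1)
    level = np.searchsorted(thresholds, avg)   # thresholds cleared per group
    return np.repeat(bits_per_level[level], group_size)
```

Because decisions are made once per group of M subcarriers rather than per subcarrier, the bit-allocation work drops by the factor 1/M noted in the abstract.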
A proposed study of multiple scattering through clouds up to 1 THz
NASA Technical Reports Server (NTRS)
Gerace, G. C.; Smith, E. K.
1992-01-01
A rigorous computation of the electromagnetic field scattered from an atmospheric liquid water cloud is proposed. The recent development of a fast recursive algorithm (Chew algorithm) for computing the fields scattered from numerous scatterers now makes a rigorous computation feasible. A method is presented for adapting this algorithm to a general case where there are an extremely large number of scatterers. It is also proposed to extend a new binary PAM channel coding technique (El-Khamy coding) to multiple levels with non-square pulse shapes. The Chew algorithm can be used to compute the transfer function of a cloud channel. Then the transfer function can be used to design an optimum El-Khamy code. In principle, these concepts can be applied directly to the realistic case of a time-varying cloud (adaptive channel coding and adaptive equalization). A brief review is included of some preliminary work on cloud dispersive effects on digital communication signals and on cloud liquid water spectra and correlations.
A noise resistant symmetric key cryptosystem based on S8 S-boxes and chaotic maps
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Anees, Amir; Aslam, Muhammad; Ahmed, Rehan; Siddiqui, Nasir
2018-04-01
In this manuscript, we propose an encryption algorithm to encrypt any digital data. The proposed algorithm is primarily based on substitution-permutation, in which the substitution process is performed by S8 substitution boxes. The proposed algorithm incorporates three different chaotic maps. We have analysed the behaviour of these chaotic maps for secure communication at length and, accordingly, applied the resulting chaotic sequences in the proposed encryption algorithm. The simulation and statistical results reveal that the proposed encryption scheme is secure against different attacks. Moreover, the encryption scheme can tolerate channel noise as well: if the encrypted data is corrupted by an unauthenticated user or by channel noise, the decryption can still be performed successfully with some distortion. The overall results confirm that the presented work has good cryptographic features, low computational complexity, and resistance to channel noise, which makes it suitable for low-profile mobile applications.
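The chaotic-keystream idea and its noise tolerance can be sketched with a toy logistic-map cipher; the map parameters are arbitrary assumptions, and the S8 S-box substitution stage of the actual scheme is omitted for brevity.

```python
import numpy as np

def logistic_keystream(n, x0=0.3141, r=3.9999):
    """Byte keystream from the logistic map x <- r*x*(1-x).

    A toy stand-in for the chaotic sequences in the paper.
    """
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def xor_cipher(data, key_x0):
    """Encrypt/decrypt by XORing with the chaotic keystream."""
    ks = logistic_keystream(len(data), x0=key_x0)
    return np.bitwise_xor(data, ks)

msg = np.frombuffer(b"channel-noise tolerant payload", dtype=np.uint8)
ct = xor_cipher(msg, 0.3141)
noisy = ct.copy()
noisy[5] ^= 0x04                       # single bit flipped by channel noise
recovered = xor_cipher(noisy, 0.3141)
```

Because each ciphertext byte is decrypted independently of its neighbours, a channel-noise bit flip corrupts only the affected byte, illustrating the noise-tolerance property claimed in the abstract.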
NASA Astrophysics Data System (ADS)
Khamukhin, A. A.
2017-02-01
Simple navigation algorithms are needed for small autonomous unmanned aerial vehicles (UAVs). These algorithms can be implemented in a small microprocessor with low power consumption, which helps to reduce the weight of the UAV's computing equipment and to increase the flight range. The proposed algorithm uses only the number of opaque channels (ommatidia in bees) through which a target can be seen while moving an observer from location 1 to location 2 toward the target. The distance estimate is given relative to the distance between locations 1 and 2. A simple scheme of an appositional compound eye is proposed for developing the calculation formula. The distance estimation error analysis shows that the error decreases with an increase in the total number of opaque channels, up to a certain limit. An acceptable error of about 2% is achieved with an angle of view from 3 to 10° when the total number of opaque channels is 21,600.
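The geometric intuition can be sketched with a small-angle model: a target of fixed but unknown size covers more channels as the observer approaches, and the change in channel count yields the relative range. This is an illustrative model under stated assumptions, not the paper's exact formula.

```python
import numpy as np

def relative_distance(n1, n2, baseline=1.0):
    """Estimate target range from opaque-channel (ommatidium) counts.

    Assumes the target's angular width is inversely proportional to
    range (small-angle approximation): if it covers n1 channels at
    location 1 and n2 > n1 channels after advancing the baseline
    distance toward it, then d1 = baseline * n2 / (n2 - n1).
    """
    if n2 <= n1:
        raise ValueError("target must cover more channels after approach")
    d1 = baseline * n2 / (n2 - n1)
    return d1, d1 - baseline           # ranges from locations 1 and 2

# A target 0.5 units wide seen through channels of 1 mrad angular pitch
pitch = 1e-3
d_true = 10.0
n1 = int(0.5 / d_true / pitch)          # channels covered at range 10
n2 = int(0.5 / (d_true - 1.0) / pitch)  # after moving 1 unit closer
d_est, _ = relative_distance(n1, n2)
```

The integer channel counts quantize the angular measurement, which is why the abstract's error analysis improves with the total number of channels up to a limit.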
Rifai Chai; Naik, Ganesh R; Sai Ho Ling; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T
2017-07-01
This paper presents a classification of driver fatigue with electroencephalography (EEG) channel selection analysis. The system employs independent component analysis (ICA) with scalp map back projection to select the dominant EEG channels. After channel selection, the features of the selected EEG channels were extracted based on power spectral density (PSD) and then classified using a Bayesian neural network. The results of the ICA decomposition with the back-projected scalp map and a threshold showed that the EEG channels can be reduced from 32 channels to 16 dominant channels involved in fatigue assessment: AF3, F3, FC1, FC5, T7, CP5, P3, O1, P4, P8, CP6, T8, FC2, F8, AF4, and FP2. The fatigue vs. alert classification with the selected 16 channels yielded a sensitivity of 76.8%, a specificity of 74.3%, and an accuracy of 75.5%. The classification results with the selected 16 channels are also comparable to those using the original 32 channels, so the 16-channel selection is preferable for the ergonomic improvement of an EEG-based fatigue classification system.
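The PSD feature-extraction step can be sketched with Welch's method; the band definitions and segment length are common illustrative choices, not necessarily the paper's settings.

```python
import numpy as np
from scipy.signal import welch

def band_powers(eeg, fs=256.0):
    """Per-channel band powers (theta, alpha, beta) from Welch PSDs.

    eeg has shape (n_channels, n_samples); returns an (n_channels, 3)
    feature matrix of the kind fed to a classifier.
    """
    bands = [(4.0, 8.0), (8.0, 13.0), (13.0, 30.0)]
    f, pxx = welch(eeg, fs=fs, nperseg=512, axis=-1)
    feats = np.empty((eeg.shape[0], len(bands)))
    for k, (lo, hi) in enumerate(bands):
        sel = (f >= lo) & (f < hi)
        feats[:, k] = pxx[:, sel].sum(axis=-1)   # integrated band power
    return feats

# Synthetic check: a 10 Hz (alpha-band) oscillation on channel 0 only
fs = 256.0
t = np.arange(0, 8.0, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = 0.1 * rng.standard_normal((2, t.size))
eeg[0] += np.sin(2 * np.pi * 10.0 * t)
feats = band_powers(eeg, fs)
```

Alpha-band power is a standard correlate of drowsiness, which is why PSD features of this kind are useful for fatigue vs. alert classification.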
Weather, Climate, and Society: New Demands on Science and Services
NASA Technical Reports Server (NTRS)
2010-01-01
A new algorithm has been constructed to estimate the path length of lightning channels for the purpose of improving the model predictions of lightning NOx in both regional air quality and global chemistry/climate models. This algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output to the plots of individual discharges whose lengths were computed by hand. Several thousands of lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons, and were analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all the flashes, the ground flashes, and the cloud flashes. Channel length distributions were also obtained for the different seasons.
An Augmented Reality Endoscope System for Ureter Position Detection.
Yu, Feng; Song, Enmin; Liu, Hong; Li, Yunlong; Zhu, Jun; Hung, Chih-Cheng
2018-06-25
Iatrogenic injury of the ureter in a clinical operation may cause serious complications and kidney damage. To avoid such a medical accident, it is necessary to provide ureter position information to the doctor. For the detection of the ureter position, a ureter position detection and display system with augmented reality is proposed to detect the ureter when it is covered by human tissue. There are two key issues to be considered in this new system. One is how to detect the covered ureter that cannot be captured by the electronic endoscope, and the other is how to display the ureter position with stable, high-quality images. Moreover, any processing delay in the system would disturb the surgery. Aided hardware detection and target detection algorithms are proposed in this system. To mark the ureter position, a surface-lighting plastic optical fiber (POF) carrying an encoded light-emitting diode (LED) light is used to indicate the ureter position. The monochrome channel filtering algorithm (MCFA) is proposed to locate the ureter region more precisely. The ureter position is extracted using the proposed automatic region growing algorithm (ARGA), which utilizes the statistical information of the monochrome channel for the selection of the growing seed point. In addition, according to the pulse signal of the encoded light, recognition of bright and dark frames based on the aided hardware (BDAH) is proposed to expedite the processing speed. Experimental results demonstrate that the proposed endoscope system can identify 92.04% of the ureter region on average.
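The region-growing idea behind ARGA can be sketched with a simplified seeded grower over a monochrome channel; the running-mean acceptance rule here is a generic choice, not the paper's exact statistical criterion.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.1):
    """Grow a region from a seed pixel over a monochrome channel.

    4-connected pixels are added while their intensity stays within
    tol of the running region mean -- a simplified version of
    statistics-driven automatic region growing.
    """
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(img[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    q.append((ny, nx))
    return mask
```

Starting from a seed inside a bright ureter-like region, the grower floods exactly the bright area and stops at the intensity boundary.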
NASA Astrophysics Data System (ADS)
Noh, Young-Chan; Sohn, Byung-Ju; Kim, Yoonjae; Joo, Sangwon; Bell, William; Saunders, Roger
2017-11-01
A new set of Infrared Atmospheric Sounding Interferometer (IASI) channels was re-selected from 314 EUMETSAT channels. In selecting channels, we calculated the impact of the individually added channel on the improvement in the analysis outputs from a one-dimensional variational analysis (1D-Var) for the Unified Model (UM) data assimilation system at the Met Office, using the channel score index (CSI) as a figure of merit. Then, 200 channels were selected in order by counting each individual channel's CSI contribution. Compared with the operationally used 183 channels for the UM at the Met Office, the new set shares 149 channels, while the other 51 channels are new. Also examined is the selection from the entropy reduction method with the same 1D-Var approach. Results suggest that channel selection can be made in a more objective fashion using the proposed CSI method. This is because the most important channels can be selected across the whole IASI observation spectrum. In the experimental trial runs using the UM global assimilation system, the new channels had an overall neutral impact in terms of improvement in forecasts, as compared with results from the operational channels. However, upper-tropospheric moist biases shown in the control run with operational channels were significantly reduced in the experimental trial with the newly selected channels. The reduction of moist biases was mainly due to the additional water vapor channels, which are sensitive to the upper-tropospheric water vapor.
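The greedy selection loop common to both the CSI and entropy-reduction approaches can be sketched in a linear-Gaussian toy setting: at each step, add the channel with the largest marginal information gain and update the analysis-error covariance. The Jacobians and error variances below are synthetic illustrations.

```python
import numpy as np

def select_channels(jacobian, b_cov, obs_var, n_select):
    """Greedy channel selection by entropy reduction.

    At each step, pick the channel whose assimilation most reduces
    0.5 * log det of the analysis-error covariance, starting from the
    background covariance b_cov.  jacobian[i] is channel i's sensitivity
    to the state vector; obs_var[i] its observation-error variance.
    """
    a = b_cov.copy()
    remaining = list(range(jacobian.shape[0]))
    chosen = []
    for _ in range(n_select):
        gains = []
        for i in remaining:
            h = jacobian[i]
            # entropy reduction of a scalar rank-one update:
            # 0.5 * log(1 + h A h^T / r)
            gains.append(0.5 * np.log1p(h @ a @ h / obs_var[i]))
        best = remaining[int(np.argmax(gains))]
        h = jacobian[best]
        ah = a @ h
        a = a - np.outer(ah, ah) / (obs_var[best] + h @ ah)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

A channel whose information duplicates an already-selected, more accurate channel contributes little once the covariance is updated, so the greedy loop skips it in favour of channels sensing unconstrained parts of the state.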
Systolic Signal Processor/High Frequency Direction Finding
1990-10-01
MUSIC) algorithm and the finite impulse response (FIR) filter onto the testbed hardware was supported by joint sponsorship of the block and major bid... computational throughput. The systolic implementations of a four-channel finite impulse response (FIR) filter and multiple signal classification (MUSIC... MUSIC) algorithm was mated to a bank of finite impulse response (FIR) filters and a four-channel data acquisition subsystem. A complete description
A New, More Physically Based Algorithm, for Retrieving Aerosol Properties over Land from MODIS
NASA Technical Reports Server (NTRS)
Levy, Robert C.; Kaufman, Yoram J.; Remer, Lorraine A.; Mattoo, Shana
2004-01-01
The MODerate resolution Imaging Spectroradiometer (MODIS) has been successfully retrieving aerosol properties, beginning in early 2000 from Terra and from mid-2002 from Aqua. Over land, the retrieval algorithm makes use of three MODIS channels, in the blue, red, and infrared wavelengths. As part of the validation exercises, retrieved spectral aerosol optical thickness (AOT) has been compared via scatterplots against spectral AOT measured by the global Aerosol Robotic NETwork (AERONET). On one hand, global and long-term validation looks promising, with two-thirds (average plus and minus one standard deviation) of all points falling within published expected error bars. On the other hand, regression of these points shows a positive y-offset and a slope less than 1.0. For individual regions, such as along the U.S. East Coast, the offset and slope are even worse. Here, we introduce an overhaul of the algorithm for retrieving aerosol properties over land. Some well-known weaknesses in the current aerosol retrieval from MODIS include: a) rigid assumptions about the underlying surface reflectance, b) limited aerosol models to choose from, c) simplified (scalar) radiative transfer (RT) calculations used to simulate satellite observations, and d) the assumption that aerosol is transparent in the infrared channel. The new algorithm attempts to address all four problems: a) it will include surface type information, instead of fixed ratios of the reflectance in the visible channels to the mid-IR reflectance; b) it will include aerosol optical properties updated to reflect the growing aerosol climatology retrieved from eight-plus years of AERONET operation; c) the effects of polarization will be included using vector RT calculations; and d) most importantly, the new algorithm does not assume that aerosol is transparent in the infrared channel. It will be an inversion of the reflectance observed in the three channels (blue, red, and infrared), rather than iterative single-channel retrievals.
Thus, this new formulation of the MODIS aerosol retrieval over land includes more physically based surface, aerosol and radiative transfer with fewer potentially erroneous assumptions.
Architecture and Implementation of OpenPET Firmware and Embedded Software
Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; Peng, Qiyu; Choong, Woon-Seng
2016-01-01
OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the flexibility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules, not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics: a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method is adopted for the platform's hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called a Support Board, where one Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from the Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or the runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules) to be processed mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration. PMID:27110034
Information theoretical assessment of visual communication with wavelet coding
NASA Astrophysics Data System (ADS)
Rahman, Zia-ur
1995-06-01
A visual communication channel can be characterized by the efficiency with which it conveys information and by the quality of the images restored from the transmitted data. Efficient data representation requires the use of the constraints of the visual communication channel. Our information-theoretic analysis combines the design of the wavelet compression algorithm with the design of the visual communication channel. Shannon's communication theory, Wiener's restoration filter, and the critical design factors of image gathering and display are combined to provide metrics for measuring the efficiency of data transmission and for quantitatively assessing the visual quality of the restored image. These metrics are: a) the mutual information (Eta) between the radiance field and the restored image, and b) the efficiency of the channel, which can be roughly measured as the ratio (Eta)/H, where H is the average number of bits being used to transmit the data. Huck, et al. (Journal of Visual Communication and Image Representation, Vol. 4, No. 2, 1993) have shown that channels designed to maximize (Eta) also maximize the quality of the restored image. Our assessment provides a framework for designing channels which provide the highest possible visual quality for a given amount of data under the critical design limitations of the image gathering and display devices. Results show that a trade-off exists between the maximum realizable information of the channel and its efficiency: an increase in one leads to a decrease in the other. The final selection of which of these quantities to maximize is, of course, application dependent.
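The two metrics above can be illustrated on a toy discrete channel; the binary symmetric channel below is an assumed example, not the paper's image-gathering model.

```python
import numpy as np

# Illustrative sketch: mutual information of a discrete channel and the
# efficiency ratio (Eta)/H discussed above, on an assumed binary symmetric
# channel rather than the paper's visual communication model.

def mutual_information(p_xy: np.ndarray) -> float:
    """I(X;Y) in bits from a joint probability table p_xy."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])).sum())

eps = 0.1                                   # crossover probability
p_xy = 0.5 * np.array([[1 - eps, eps],      # uniform input, BSC transitions
                       [eps, 1 - eps]])
eta = mutual_information(p_xy)              # ~0.53 bits
H = 1.0                                     # one bit transmitted per symbol
print(eta, eta / H)                         # efficiency = (Eta)/H
```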
An Adaptive Channel Access Method for Dynamic Super Dense Wireless Sensor Networks.
Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Zhang, Xuekun
2015-12-03
Super dense and distributed wireless sensor networks have become very popular with the development of small cell technology, the Internet of Things (IoT), Machine-to-Machine (M2M) communications, Vehicle-to-Vehicle (V2V) communications, and public safety networks. While densely deployed wireless networks provide one of the most important and sustainable solutions to improve the accuracy of sensing and spectral efficiency, a new channel access scheme needs to be designed to solve the channel congestion problem introduced by the high dynamics of competing nodes accessing the channel simultaneously. In this paper, we first analyzed the channel contention problem using a novel normalized channel contention analysis model which provides information on how to tune the contention window according to the state of channel contention. We then proposed an adaptive channel contention window tuning algorithm in which the contention window tuning rate is set dynamically based on the estimated channel contention level. Simulation results show that our proposed adaptive channel access algorithm based on fast contention window tuning can achieve more than 95% of the theoretical optimal throughput and a fairness index of 0.97, especially in dynamic and dense networks.
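The idea of tuning the contention window at a rate that depends on the estimated contention level can be sketched as below; the update rule, target level, and constants are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of adaptive contention-window tuning: the window grows when
# estimated contention exceeds a target and shrinks otherwise, with a tuning
# rate that scales with the gap. All constants are assumptions.

CW_MIN, CW_MAX = 16, 1024

def tune_cw(cw: int, contention_level: float, target: float = 0.5) -> int:
    gap = contention_level - target
    rate = 1.0 + abs(gap)                # tune faster when far from target
    cw = cw * rate if gap > 0 else cw / rate
    return int(min(max(cw, CW_MIN), CW_MAX))

cw = 64
for level in (0.9, 0.9, 0.2):            # two congested slots, then a quiet one
    cw = tune_cw(cw, level)
print(cw)                                # -> 95
```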
Efficient universal quantum channel simulation in IBM's cloud quantum computer
NASA Astrophysics Data System (ADS)
Wei, Shi-Jie; Xin, Tao; Long, Gui-Lu
2018-07-01
The study of quantum channels is an important field with a wide range of promised applications, because any physical process can be represented as a quantum channel that transforms an initial state into a final state. Inspired by the method of performing non-unitary operators by the linear combination of unitary operations, we proposed a quantum algorithm for the simulation of the universal single-qubit channel, described by a convex combination of "quasi-extreme" channels corresponding to four Kraus operators, which is scalable to arbitrary higher dimensions. We demonstrated the whole algorithm experimentally using the universal IBM cloud-based quantum computer and studied the properties of different qubit quantum channels. We illustrated the quantum capacity of the general qubit quantum channels, which quantifies the amount of quantum information that can be protected. The behavior of quantum capacity in different channels revealed which types of noise processes can support information transmission, and which types are too destructive to protect information. There was a general agreement between the theoretical predictions and the experiments, which strongly supports our method. By realizing the arbitrary qubit channel, this work provides a universal way to explore various properties of quantum channels and a novel prospect for quantum communication.
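A qubit channel's action on a density matrix can be sketched numerically from its Kraus operators; the amplitude-damping channel below is an assumed example, and the paper's "quasi-extreme" decomposition and linear-combination-of-unitaries circuit are not reproduced here.

```python
import numpy as np

# Minimal sketch of simulating a qubit channel by its Kraus operators.
# Amplitude damping is an assumed example channel.

def apply_channel(rho: np.ndarray, kraus: list) -> np.ndarray:
    """rho -> sum_k K rho K^dagger; Kraus operators satisfy sum K^dag K = I."""
    return sum(K @ rho @ K.conj().T for K in kraus)

gamma = 0.3                                        # damping strength
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

rho = np.array([[0, 0], [0, 1]], dtype=complex)    # excited state |1><1|
out = apply_channel(rho, [K0, K1])
print(out.real)        # population relaxes toward |0><0|
```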
Neural network cloud top pressure and height for MODIS
NASA Astrophysics Data System (ADS)
Håkansson, Nina; Adok, Claudia; Thoss, Anke; Scheirer, Ronald; Hörnquist, Sara
2018-06-01
Cloud top height retrieval from imager instruments is important for nowcasting and for satellite climate data records. A neural network approach for cloud top height retrieval from the imager instrument MODIS (Moderate Resolution Imaging Spectroradiometer) is presented. The neural networks are trained using cloud top layer pressure data from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) dataset. Results are compared with two operational reference algorithms for cloud top height: the MODIS Collection 6 Level 2 height product and the cloud top temperature and height algorithm in the 2014 version of the NWC SAF (EUMETSAT (European Organization for the Exploitation of Meteorological Satellites) Satellite Application Facility on Support to Nowcasting and Very Short Range Forecasting) PPS (Polar Platform System). All three techniques are evaluated using both CALIOP and CPR (Cloud Profiling Radar for CloudSat (CLOUD SATellite)) height. Instruments like AVHRR (Advanced Very High Resolution Radiometer) and VIIRS (Visible Infrared Imaging Radiometer Suite) contain fewer channels useful for cloud top height retrievals than MODIS; therefore, several different neural networks are investigated to test how infrared channel selection influences retrieval performance. A network with only the channels available for the AVHRR1 instrument is also trained and evaluated. To examine the contribution of different variables, networks with fewer variables are trained. It is shown that variables containing imager information for neighboring pixels are very important. The error distributions of the involved cloud top height algorithms are found to be non-Gaussian. Different descriptive statistical measures are presented, and it is exemplified that bias and SD (standard deviation) can be misleading for non-Gaussian distributions.
The median and mode are found to better describe the tendency of the error distributions, and IQR (interquartile range) and MAE (mean absolute error) are found to give the most useful information on the spread of the errors. For all descriptive statistics presented (MAE, IQR, RMSE (root mean square error), SD, mode, median, bias, and percentage of absolute errors above 0.25, 0.5, 1 and 2 km) the neural networks perform better than the reference algorithms, validated with both CALIOP and CPR (CloudSat). The neural networks using the brightness temperatures at 11 and 12 µm show at least 32% (or 623 m) lower MAE compared to the two operational reference algorithms when validating with CALIOP height. Validation with CPR (CloudSat) height gives at least a 25% (or 430 m) reduction in MAE.
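The point that bias and SD can mislead for non-Gaussian errors is easy to illustrate numerically; the skewed error distribution below is synthetic, not the paper's data.

```python
import numpy as np

# Synthetic illustration of the statistical point above: for a skewed
# (non-Gaussian) error distribution, bias and SD misrepresent the typical
# error, while median, IQR and MAE describe it better.

rng = np.random.default_rng(0)
errors = rng.lognormal(mean=0.0, sigma=1.0, size=100_000) - 1.0   # skewed errors

bias = errors.mean()
sd = errors.std()
median = np.median(errors)
iqr = np.percentile(errors, 75) - np.percentile(errors, 25)
mae = np.abs(errors).mean()

# The bulk of the errors sits near the median (~0), far from what
# bias +/- SD (~0.65 +/- 2.2) would suggest.
print(f"bias={bias:.2f} sd={sd:.2f} median={median:.2f} iqr={iqr:.2f} mae={mae:.2f}")
```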
Sodium and potassium competition in potassium-selective and non-selective channels
NASA Astrophysics Data System (ADS)
Sauer, David B.; Zeng, Weizhong; Canty, John; Lam, Yeeling; Jiang, Youxing
2013-11-01
Potassium channels selectively conduct K+, primarily to the exclusion of Na+, despite the fact that both ions can bind within the selectivity filter. Here we perform crystallographic titration and single-channel electrophysiology to examine the competition of Na+ and K+ binding within the filter of two NaK channel mutants; one is the potassium-selective NaK2K mutant and the other is the non-selective NaK2CNG, a CNG channel pore mimic. With high-resolution structures of these engineered NaK channel constructs, we explicitly describe the changes in K+ occupancy within the filter upon Na+ competition by anomalous diffraction. Our results demonstrate that the non-selective NaK2CNG still retains a K+-selective site at equilibrium, whereas the NaK2K channel filter maintains two high-affinity K+ sites. A double-barrier mechanism is proposed to explain K+ channel selectivity at low K+ concentrations.
An improved dehazing algorithm of aerial high-definition image
NASA Astrophysics Data System (ADS)
Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying
2016-01-01
For unmanned aerial vehicle (UAV) images, the sensor cannot capture high-quality images in fog and haze weather. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crude estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold value to divide the crude estimated transmission map into different areas and applies different guided filtering to the different areas to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the one based on the dark channel prior and guided filter. The average computation time of the new algorithm is around 40% of the original, and the detection ability of UAV images in fog and haze weather is improved effectively.
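The dark channel prior step this algorithm builds on can be sketched as follows; the patch size, omega factor, and the uniform atmospheric light are simplified assumptions, and the edge extraction and area-wise guided filtering of the paper are not reproduced.

```python
import numpy as np

# Sketch of the dark channel prior and the crude transmission estimate;
# constants are standard choices assumed for illustration.

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Per-pixel minimum over RGB, then a min-filter over a patch x patch window."""
    h, w, _ = img.shape
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img: np.ndarray, airlight: float, omega: float = 0.95) -> np.ndarray:
    """Crude transmission map t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / airlight)

# A uniformly dim, haze-free patch should yield a high transmission (~0.905)
t = estimate_transmission(np.full((8, 8, 3), 0.1), airlight=1.0)
print(t.mean())
```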
Zhang, Yong; Li, Yuan; Rong, Zhi-Guo
2010-06-01
A remote sensor's channel spectral response function (SRF) is one of the key factors influencing the inversion algorithms and accuracy of quantitative products and the retrieved geophysical characteristics. To assess the adjustments of FY-2E's split-window channel SRFs, detailed comparisons of the SRF differences between the corresponding FY-2E and FY-2C channels were carried out based on three data collections: the NOAA AVHRR corresponding channels' calibration look-up tables, field-measured water surface radiance and atmospheric profiles at Lake Qinghai, and radiance calculated from the Planck function over the full dynamic range of FY-2E/C. The results showed that the adjustments of FY-2E's split-window channel SRFs shift the spectral range and influence the inversion algorithms of some ground quantitative products. On the other hand, these adjustments of the FY-2E SRFs increase the brightness temperature differences between FY-2E's two split-window channels over the full dynamic range relative to FY-2C's. This would improve the inversion ability of FY-2E's split-window channels.
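Why an SRF shift changes a channel's measured radiance can be sketched with the Planck function; the Gaussian SRF shapes and the 0.1 µm shift below are idealized assumptions, not the real FY-2C/FY-2E response functions.

```python
import numpy as np

# Sketch: band radiance is the SRF-weighted average of the Planck function,
# so shifting the SRF shifts the radiance. SRF shapes are assumed Gaussians.

H_PLANCK, C_LIGHT, K_B = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl_m: np.ndarray, T: float) -> np.ndarray:
    """Spectral radiance B(lambda, T) in W m^-3 sr^-1."""
    return 2 * H_PLANCK * C_LIGHT**2 / wl_m**5 / np.expm1(
        H_PLANCK * C_LIGHT / (wl_m * K_B * T))

def band_radiance(center_um: float, width_um: float, T: float) -> float:
    wl = np.linspace(center_um - 3 * width_um, center_um + 3 * width_um, 501) * 1e-6
    srf = np.exp(-0.5 * ((wl * 1e6 - center_um) / width_um) ** 2)
    return float((srf * planck(wl, T)).sum() / srf.sum())   # uniform grid

# A hypothetical 0.1 um shift of an 11 um split-window channel at 300 K:
r_before = band_radiance(11.0, 0.5, T=300.0)
r_after = band_radiance(11.1, 0.5, T=300.0)
print(r_before, r_after)   # the shifted band sees a slightly lower radiance
```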
Numerical Simulation of 3-D Supersonic Viscous Flow in an Experimental MHD Channel
NASA Technical Reports Server (NTRS)
Kato, Hiromasa; Tannehill, John C.; Gupta, Sumeet; Mehta, Unmeel B.
2004-01-01
The 3-D supersonic viscous flow in an experimental MHD channel has been numerically simulated. The experimental MHD channel is currently in operation at NASA Ames Research Center. The channel contains a nozzle section, a center section, and an accelerator section where magnetic and electric fields can be imposed on the flow. In recent tests, velocity increases of up to 40% have been achieved in the accelerator section. The flow in the channel is numerically computed using a new 3-D parabolized Navier-Stokes (PNS) algorithm that has been developed to efficiently compute MHD flows in the low magnetic Reynolds number regime. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfield can be computed using multiple streamwise sweeps with an iterated PNS algorithm. The new algorithm has been used to compute two test cases that match the experimental conditions. In both cases, magnetic and electric fields are applied to the flow. The computed results are in good agreement with the available experimental data.
Centralized Routing and Scheduling Using Multi-Channel System Single Transceiver in 802.16d
NASA Astrophysics Data System (ADS)
Al-Hemyari, A.; Noordin, N. K.; Ng, Chee Kyun; Ismail, A.; Khatun, S.
This paper proposes a cross-layer optimized strategy that reduces the effect of interference from neighboring nodes within a mesh network. This cross-layer design relies on the routing information in the network layer and the scheduling table in the medium access control (MAC) layer. A proposed routing algorithm in the network layer is exploited to find the best route for all subscriber stations (SS). Also, a proposed centralized scheduling algorithm in the MAC layer is exploited to assign a time slot for each possible node transmission. The cross-layer optimized strategy uses multi-channel single-transceiver and single-channel single-transceiver systems for WiMAX mesh networks (WMNs). Each node in a WMN has a transceiver that can be tuned to any available channel to eliminate secondary interference. Among the considered parameters in the performance analysis are interference from the neighboring nodes, hop count to the base station (BS), number of children per node, slot reuse, load balancing, quality of service (QoS), and node identifier (ID). Results show that the proposed algorithms significantly improve the system performance in terms of length of scheduling, channel utilization ratio (CUR), system throughput, and average end-to-end transmission delay.
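Centralized slot assignment with slot reuse can be sketched greedily: give each link the earliest slot not used by an interfering neighbor. The chain topology and interference test below are toy assumptions, not the paper's scheduling algorithm.

```python
# Hedged sketch of centralized TDMA slot assignment with slot reuse.
# The interference model is an assumption for illustration.

def schedule(links, interferes):
    """links: iterable of link ids; interferes(a, b) -> True if a and b
    cannot share a slot. Returns {link: slot}."""
    slots = {}
    for link in links:
        used = {slots[other] for other in slots if interferes(link, other)}
        slot = 0
        while slot in used:               # earliest slot free of interference
            slot += 1
        slots[link] = slot
    return slots

# Chain topology: adjacent links interfere, distant links can reuse slots
sched = schedule([0, 1, 2, 3], lambda a, b: abs(a - b) == 1)
print(sched)   # -> {0: 0, 1: 1, 2: 0, 3: 1}: only two slots needed
```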
Accurate Sybil Attack Detection Based on Fine-Grained Physical Channel Information.
Wang, Chundong; Zhu, Likun; Gong, Liangyi; Zhao, Zhentang; Yang, Lei; Liu, Zheli; Cheng, Xiaochun
2018-03-15
With the development of the Internet of Things (IoT), wireless network security has attracted more and more attention. The Sybil attack is one of the well-known wireless attacks that can forge wireless devices to steal information from clients. These forged devices may constantly attack target access points to crush the wireless network. In this paper, we propose a novel Sybil attack detection method based on Channel State Information (CSI). This detection algorithm can tell whether static devices are Sybil attackers by combining a self-adaptive multiple signal classification algorithm with the Received Signal Strength Indicator (RSSI). Moreover, we develop a novel tracing scheme to cluster the channel characteristics of mobile devices and detect dynamic attackers that change their channel characteristics in an error area. Finally, we experiment on mobile and commercial WiFi devices. Our algorithm can effectively distinguish the Sybil devices. The experimental results show that our Sybil attack detection system achieves high accuracy for both static and dynamic scenarios. Therefore, combining the phase and similarity of channel features, the multi-dimensional analysis of CSI can effectively detect Sybil nodes and improve the security of wireless networks.
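The core intuition above is that two claimed identities with near-identical physical-channel fingerprints likely originate from one Sybil device. A toy version with cosine similarity can be sketched as below; the threshold, similarity measure, and synthetic "CSI" vectors are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy sketch: flag identity pairs whose channel fingerprints are almost
# identical. Threshold and data are assumptions for illustration.

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_sybils(csi_by_id: dict, threshold: float = 0.99):
    ids = list(csi_by_id)
    return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if cosine(csi_by_id[a], csi_by_id[b]) > threshold]

rng = np.random.default_rng(1)
real = rng.normal(size=30)                               # one physical channel
csi = {
    "device_A": real,
    "device_B": real + rng.normal(scale=0.01, size=30),  # forged identity
    "device_C": rng.normal(size=30),                     # independent device
}
print(find_sybils(csi))   # -> [('device_A', 'device_B')]
```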
NASA Technical Reports Server (NTRS)
Li, Rong-Rong; Kaufman, Yoram J.
2002-01-01
We have developed an algorithm to detect suspended sediments and shallow coastal waters using imaging data acquired with the Moderate Resolution Imaging SpectroRadiometer (MODIS). The MODIS instruments on board the NASA Terra and Aqua spacecraft are equipped with a set of narrow channels located in a wide 0.4 - 2.5 micron spectral range. These channels were designed primarily for remote sensing of the land surface and atmosphere. We have found that the set of land and cloud channels is also quite useful for remote sensing of bright coastal waters. We have developed an empirical algorithm, which uses the narrow MODIS channels in this wide spectral range, for identifying areas with suspended sediments in turbid waters and shallow waters with bottom reflections. In our algorithm, we take advantage of the strong water absorption at wavelengths longer than 1 micron, which prevents sunlight from reaching sediments in the water or a shallow ocean floor. MODIS data acquired over the east coast of China, west coast of Africa, Arabian Sea, Mississippi Delta, and west coast of Florida are used in this study.
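The spectral contrast that the algorithm exploits can be sketched as a simple per-pixel test; the channel choice and thresholds below are assumptions for illustration, not the published empirical algorithm.

```python
# Illustrative sketch: water absorbs strongly beyond ~1 micron, so a pixel
# that is bright in the visible but dark in the longer-wavelength channels
# is likely turbid or shallow water rather than bright land. Thresholds are
# assumed values, not the paper's.

def is_sediment_or_shallow(refl_vis: float, refl_swir: float,
                           vis_min: float = 0.05, swir_max: float = 0.03) -> bool:
    return refl_vis > vis_min and refl_swir < swir_max

print(is_sediment_or_shallow(0.12, 0.01))   # bright visible, dark SWIR -> True
print(is_sediment_or_shallow(0.12, 0.15))   # bright in both -> likely land, False
```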
Rainfall Estimates from the TMI and the SSM/I
NASA Technical Reports Server (NTRS)
Hong, Ye; Kummerow, Christian D.; Olson, William S.; Viltard, Nicolas
1999-01-01
The Tropical Rainfall Measuring Mission (TRMM), a joint Japan-U.S. Earth observing satellite, was successfully launched from Japan on November 27, 1997. The main purpose of TRMM is to measure rainfall quantitatively over the tropics for research on climate and weather. One of the three rainfall measuring instruments aboard TRMM is the high-resolution TRMM Microwave Imager (TMI). The TMI instrument is essentially a copy of the SSM/I with a dual-polarized pair of 10.7 GHz channels added to increase the dynamic range of rainfall estimates. In addition, the water vapor absorption channel is placed at 21.3 GHz in the TMI, as opposed to 22.235 GHz in the SSM/I, to avoid saturation in the tropics. This paper will present instantaneous rain rates estimated from coincident TMI and SSM/I observations. The algorithm for estimating instantaneous rainfall rates from both sensors is the Goddard Profiling algorithm (Gprof). The Gprof algorithm is a physically based, multichannel rainfall retrieval algorithm. The algorithm is very portable and can be used for various sensors with different channels and resolutions. A comparison of rain rates estimated from TMI and SSM/I over the same rain regions will be performed. The results from the comparison and insight into the retrieval algorithm will be given.
Image defog algorithm based on open close filter and gradient domain recursive bilateral filter
NASA Astrophysics Data System (ADS)
Liu, Daqian; Liu, Wanjun; Zhao, Qingguo; Fei, Bowen
2017-11-01
To solve the problems of fuzzy details, color distortion, and low brightness in images produced by the dark channel prior defog algorithm, an image defog algorithm based on an open-close filter and a gradient domain recursive bilateral filter, referred to as OCRBF, is put forward. The OCRBF algorithm first uses a weighted quadtree to obtain a more accurate global atmospheric value, then applies multiple-structure-element morphological open and close filtering to the minimum channel map to obtain a rough scattering map via the dark channel prior, uses a variogram to correct the transmittance map, applies the gradient domain recursive bilateral filter for smoothing, and finally recovers the image through the image degradation model, adjusting contrast to obtain a bright, clear, fog-free image. A large number of experimental results show that the proposed defog method removes fog well and recovers the color and definition of foggy images containing close-range content, perspective, and bright areas; compared with other image defog algorithms, it obtains clearer and more natural fog-free images with more visible details. Moreover, the relationship between the time complexity of the SIDA algorithm and the number of image pixels is a linear correlation.
NASA Astrophysics Data System (ADS)
Yuan, Chunhua; Wang, Jiang; Yi, Guosheng
2017-03-01
Estimation of ion channel parameters is crucial to the spike initiation of neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models. We therefore choose three parameters featuring the adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm still tends to fall into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with dynamic logistic chaotic mapping to adjust the inertia weights, effectively improving the global convergence ability of the algorithm. The accurately predicted firing trajectories of the rebuilt model using the estimated parameters prove that estimating only a few important ion channel parameters can establish the model well and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared to the improved PSO to verify that the algorithm proposed in this paper can avoid local optima and quickly converge to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
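A PSO with a chaos-modulated, concave-decreasing inertia weight can be sketched as below; the specific schedule, mixing rule, and the sphere test function are illustrative assumptions, not the paper's exact update rule or the neuron-model objective.

```python
import numpy as np

# Sketch of PSO with a logistic-map-perturbed, concave-decreasing inertia
# weight, in the spirit of the method above. All constants are assumptions.

def pso(f, dim=3, n=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    z = 0.7                                     # logistic-map state
    for t in range(iters):
        z = 4.0 * z * (1.0 - z)                 # chaotic perturbation in (0, 1)
        w = 0.9 - 0.5 * (t / iters) ** 2        # concave decreasing schedule
        w = w * (0.8 + 0.2 * z)                 # mix chaos into the weight
        r1, r2 = rng.random((2, n, dim))
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved] = x[improved]
        pval[improved] = val[improved]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

best, fval = pso(lambda p: float((p ** 2).sum()))   # minimize the sphere function
print(fval)   # small value near the global optimum 0
```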
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2011-01-01
The Goddard DISC has generated products derived from AIRS/AMSU-A observations, starting from September 2002 when the AIRS instrument became stable, using the AIRS Science Team Version-5 retrieval algorithm. The AIRS Science Team Version-6 retrieval algorithm will be finalized in September 2011. This paper describes some of the significant improvements contained in the Version-6 retrieval algorithm, compared to that used in Version-5, with an emphasis on the improvement of atmospheric temperature profiles, ocean and land surface skin temperatures, and ocean and land surface spectral emissivities. AIRS contains 2378 spectral channels covering portions of the spectral region 650 cm^-1 (15.38 micrometers) - 2665 cm^-1 (3.752 micrometers). These spectral regions contain significant absorption features from two CO2 absorption bands, the 15 micrometer (longwave) CO2 band and the 4.3 micrometer (shortwave) CO2 absorption band. There are also two atmospheric window regions, the 12 micrometer - 8 micrometer (longwave) window and the 4.17 micrometer - 3.75 micrometer (shortwave) window. Historically, determination of surface and atmospheric temperatures from satellite observations was performed using primarily observations in the longwave window and CO2 absorption regions. According to cloud clearing theory, more accurate soundings of both surface skin and atmospheric temperatures can be obtained under partial cloud cover conditions if one uses observations in longwave channels to determine coefficients which generate cloud-cleared radiances R^_i for all channels, and uses R^_i only from shortwave channels in the determination of surface and atmospheric temperatures. This procedure is now being used in the AIRS Version-6 retrieval algorithm. Results are presented for both daytime and nighttime conditions showing improved Version-6 surface and atmospheric soundings under partial cloud cover.
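The cloud-clearing idea referenced above can be sketched in its simplest two-field-of-view form: with cloud fractions a1 < a2 and a common clear-sky radiance, the extrapolation R^ = R1 + eta*(R1 - R2) with eta = a1/(a2 - a1) removes the cloud contribution. The values below are synthetic; the operational AIRS algorithm determines eta from the longwave channels rather than from known cloud fractions.

```python
import numpy as np

# Two-FOV cloud-clearing sketch with synthetic radiances (3 channels).
a1, a2 = 0.2, 0.5                        # cloud fractions in the two FOVs
r_clear = np.array([95.0, 80.0, 60.0])   # true clear-sky radiances
r_cloud = np.array([40.0, 35.0, 30.0])   # overcast radiances

r1 = (1 - a1) * r_clear + a1 * r_cloud   # observed FOV-1 radiances
r2 = (1 - a2) * r_clear + a2 * r_cloud   # observed FOV-2 radiances

eta = a1 / (a2 - a1)
r_hat = r1 + eta * (r1 - r2)             # cloud-cleared radiances
print(r_hat)                             # recovers r_clear exactly here
```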
Second-order Poisson Nernst-Planck solver for ion channel transport
Zheng, Qiong; Chen, Duan; Wei, Guo-Wei
2010-01-01
The Poisson Nernst-Planck (PNP) theory is a simplified continuum model for a wide variety of chemical, physical and biological applications. Its ability to provide quantitative explanation and increasingly qualitative prediction of experimental measurements has earned it much recognition in the research community. Numerous computational algorithms have been constructed for the solution of the PNP equations. However, in the realistic ion-channel context, no second-order convergent PNP algorithm has ever been reported in the literature, due to many numerical obstacles, including discontinuous coefficients, singular charges, geometric singularities, and nonlinear couplings. The present work introduces a number of numerical algorithms to overcome the above-mentioned numerical challenges and constructs the first second-order convergent PNP solver in the ion-channel context. First, a Dirichlet-to-Neumann mapping (DNM) algorithm is designed to alleviate the charge singularity due to the protein structure. Additionally, the matched interface and boundary (MIB) method is reformulated for solving the PNP equations. The MIB method systematically enforces the interface jump conditions and achieves second-order accuracy in the presence of complex geometry and geometric singularities of molecular surfaces. Moreover, two iterative schemes are utilized to deal with the coupled nonlinear equations. Furthermore, extensive and rigorous numerical validations are carried out over a number of geometries, including a sphere, two proteins and an ion channel, to examine the numerical accuracy and convergence order of the present numerical algorithms. Finally, application is considered to a real transmembrane protein, the Gramicidin A channel protein. The performance of the proposed numerical techniques is tested against a number of factors, including mesh sizes, diffusion coefficient profiles, iterative schemes, ion concentrations, and applied voltages.
Numerical predictions are compared with experimental measurements. PMID:21552336
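What "second-order convergence" means in practice can be demonstrated on a much simpler problem than the PNP system; the smooth 1D Poisson problem below is an assumed stand-in, with none of the paper's MIB/DNM machinery.

```python
import numpy as np

# Convergence-order demonstration: central differences for -u'' = pi^2 sin(pi x)
# on (0, 1) with u(0) = u(1) = 0; the exact solution is sin(pi x).

def solve_poisson(n: int) -> float:
    """Return the max error of the central-difference solution on n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    # Tridiagonal system A u = f with A = (1/h^2) * tridiag(-1, 2, -1)
    A = (np.diag(2 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1)) / h**2
    f = np.pi**2 * np.sin(np.pi * x)
    u = np.linalg.solve(A, f)
    return float(np.abs(u - np.sin(np.pi * x)).max())

e1, e2 = solve_poisson(40), solve_poisson(80)
print(e1 / e2)   # ~4: halving h quarters the error, i.e. second order
```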
Optimal Refueling Pattern Search for a CANDU Reactor Using a Genetic Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quang Binh, DO; Gyuhong, ROH; Hangbok, CHOI
2006-07-01
This paper presents the results from the application of genetic algorithms to a refueling optimization of a Canada deuterium uranium (CANDU) reactor. This work aims at making a mathematical model of the refueling optimization problem, including the objective function and constraints, and developing a method based on genetic algorithms to solve the problem. The model of the optimization problem and the proposed method comply with the key features of the refueling strategy of the CANDU reactor, which adopts an on-power refueling operation. In this study, a genetic algorithm combined with an elitism strategy was used to automatically search for the refueling patterns. The objective of the optimization was to maximize the discharge burn-up of the refueling bundles, minimize the maximum channel power, or minimize the maximum change in the zone controller unit (ZCU) water levels. A combination of these objectives was also investigated. The constraints include the discharge burn-up, maximum channel power, maximum bundle power, channel power peaking factor and the ZCU water level. A refueling pattern that represents the refueling rate and channels was coded by a one-dimensional binary chromosome, which is a string of binary numbers 0 and 1. A computer program was developed in FORTRAN 90 running on an HP 9000 workstation to conduct the search for the optimal refueling patterns for a CANDU reactor at the equilibrium state. The results showed that it was possible to apply genetic algorithms to automatically search for the refueling channels of the CANDU reactor. The optimal refueling patterns were compared with the solutions obtained from the AUTOREFUEL program and the results were consistent with each other. (authors)
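The search machinery described above, a genetic algorithm with elitism acting on a binary chromosome, can be sketched on a toy objective; the one-max fitness stands in for the real burn-up/channel-power objective, and all operator choices are assumptions.

```python
import random

# Toy elitist GA on a binary chromosome. Fitness (one-max) and operator
# parameters are illustrative assumptions, not the CANDU model.

def evolve(bits=32, pop_size=30, gens=60, elite=2, seed=7):
    random.seed(seed)
    fitness = lambda c: sum(c)                       # one-max toy objective
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        new_pop = [c[:] for c in pop[:elite]]        # elitism: keep the best
        while len(new_pop) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)      # truncation selection
            cut = random.randrange(1, bits)
            child = p1[:cut] + p2[cut:]              # one-point crossover
            i = random.randrange(bits)
            child[i] ^= random.random() < 0.1        # occasional bit flip
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(sum(best))   # close to the optimum of 32 set bits
```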
Online track detection in triggerless mode for INO
NASA Astrophysics Data System (ADS)
Jain, A.; Padmini, S.; Joseph, A. N.; Mahesh, P.; Preetha, N.; Behere, A.; Sikder, S. S.; Majumder, G.; Behera, S. P.
2018-03-01
The India-based Neutrino Observatory (INO) is a proposed particle physics research project to study atmospheric neutrinos. The INO Iron Calorimeter (ICAL) will consist of 28,800 detectors having 3.6 million electronic channels expected to activate at a 100 Hz singles rate, producing data at a rate of 3 GBps. The data collected contain a few real hits generated by muon tracks and the remaining noise-induced spurious hits. The estimated reduction factor after filtering out the data of interest from the generated data is of the order of 10^3. This makes trigger generation critical for efficient data collection and storage. A trigger is generated by detecting coincidence across multiple channels satisfying the trigger criteria, within a small window of 200 ns in the trigger region. As the probability of neutrino interaction is very low, the track detection algorithm has to be efficient and fast enough to process 5 × 10^6 event candidates per second without introducing significant dead time, so that not even a single neutrino event is missed. A hardware-based trigger system is presently proposed for online track detection, considering the stringent timing requirements. Though the trigger system can be designed with scalability, the many hardware devices and interconnections make it a complex and expensive solution with limited flexibility. A software-based track detection approach working on the hit information offers an elegant solution with the possibility of varying trigger criteria for selecting various potentially interesting physics events. An event selection approach for an alternative triggerless readout scheme has been developed. The algorithm is mathematically simple, robust and parallelizable. It has been validated by detecting simulated muon events for energies in the range of 1 GeV-10 GeV with 100% efficiency at a processing rate of 60 μs/event on a 16-core machine. The algorithm and the result of a proof-of-concept for its faster implementation over multiple cores are presented.
The paper also discusses harnessing the computing capabilities of a multi-core computing farm, thereby optimizing the number of nodes required for the proposed system.
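The coincidence test described above, flagging an event candidate when enough channels fire within a 200 ns window, can be sketched with a sliding window over hit timestamps; the hit data and the minimum-hit threshold are assumptions, and the real ICAL criteria also involve trigger-region geometry.

```python
# Sliding-window coincidence sketch: report the start time of each window
# whose hit count first reaches min_hits. Data and threshold are assumed.

def find_coincidences(hit_times_ns, window_ns=200, min_hits=3):
    hits = sorted(hit_times_ns)
    events, i = [], 0
    for j in range(len(hits)):
        while hits[j] - hits[i] > window_ns:   # slide window start forward
            i += 1
        if j - i + 1 == min_hits:              # threshold just reached
            events.append(hits[i])
    return events

# Isolated noise hits, plus one muon-like burst of three hits near t = 5000 ns
hits = [100, 1500, 5000, 5040, 5120, 9000]
print(find_coincidences(hits))   # -> [5000]
```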
Analysis of A Drug Target-based Classification System using Molecular Descriptors.
Lu, Jing; Zhang, Pin; Bi, Yi; Luo, Xiaomin
2016-01-01
Drug-target interaction is an important topic in drug discovery and drug repositioning. The KEGG database offers drug annotation and classification using a target-based classification system. In this study, we investigated five target-based classes: (I) G protein-coupled receptors; (II) Nuclear receptors; (III) Ion channels; (IV) Enzymes; (V) Pathogens, using molecular descriptors to represent each drug compound. Two popular feature selection methods, maximum relevance minimum redundancy (mRMR) and incremental feature selection (IFS), were adopted to extract the important descriptors. Meanwhile, an optimal prediction model based on the nearest neighbor algorithm was constructed, which achieved the best result in identifying drug target-based classes. Finally, some key descriptors were discussed to uncover their important roles in the identification of drug-target classes.
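The maximum relevance minimum redundancy step can be sketched as a greedy selection loop. In this sketch absolute Pearson correlation stands in for the mutual-information scores mRMR is usually built on, and the feature data are hypothetical:

```python
from math import sqrt

def pearson(x, y):
    # Pearson correlation of two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sqrt(sum((a - mx) ** 2 for a in x))
    vy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy) if vx and vy else 0.0

def mrmr(features, target, k):
    """Greedy max-relevance min-redundancy selection.

    features: dict name -> list of descriptor values per compound.
    target: list of class labels (numeric here for simplicity).
    Each round picks the feature maximizing
    relevance(target) - mean redundancy(already selected).
    """
    selected = []
    candidates = list(features)
    while len(selected) < k and candidates:
        def score(name):
            relevance = abs(pearson(features[name], target))
            redundancy = (sum(abs(pearson(features[name], features[s]))
                              for s in selected) / len(selected)) if selected else 0.0
            return relevance - redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```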
Spatial and Temporal Varying Thresholds for Cloud Detection in Satellite Imagery
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Haines, Stephanie
2007-01-01
A new cloud detection technique has been developed and applied to both geostationary and polar orbiting satellite imagery having channels in the thermal infrared and short wave infrared spectral regions. The bispectral composite threshold (BCT) technique uses only the 11 micron and 3.9 micron channels, and composite imagery generated from these channels, in a four-step cloud detection procedure to produce a binary cloud mask at single pixel resolution. A unique aspect of this algorithm is the use of 20-day composites of the 11 micron and the 11 - 3.9 micron channel difference imagery to represent spatially and temporally varying clear-sky thresholds for the bispectral cloud tests. The BCT cloud detection algorithm has been applied to GOES and MODIS data over the continental United States over the last three years with good success. The resulting products have been validated against "truth" datasets (generated by the manual determination of the sky conditions from available satellite imagery) for various seasons from the 2003-2005 period. The day and night algorithm has been shown to determine the correct sky conditions 80-90% of the time (on average) over land and ocean areas. Only a small variation in algorithm performance occurs between day and night, land and ocean, and between seasons. The algorithm performs least well during the winter season, with only 80% of the sky conditions determined correctly. The algorithm was found to under-determine clouds at night and during times of low sun angle (in geostationary satellite data) and tends to over-determine the presence of clouds during the day, particularly in the summertime. Since the spectral tests use only the short- and long-wave channels common to most multispectral scanners, the application of the BCT technique to a variety of satellite sensors, including SEVIRI, should be straightforward and produce similar performance results.
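A toy version of the bispectral threshold idea might look like the following. The margins and the per-pixel clear-sky references are illustrative stand-ins for the operational thresholds and 20-day composites, and the four-step procedure is collapsed to a single pass:

```python
def bct_cloud_mask(t11, t11_minus_t39, clear_t11, clear_diff,
                   t11_margin=4.0, diff_margin=2.0):
    """Toy bispectral cloud test (illustrative thresholds, not the BCT).

    t11: observed 11-um brightness temperatures (K), 2-D list.
    t11_minus_t39: observed 11-um minus 3.9-um channel difference (K).
    clear_t11, clear_diff: per-pixel clear-sky composite references.
    A pixel is flagged cloudy when it is colder than the clear-sky
    11-um reference by more than t11_margin, or when its channel
    difference departs from the clear-sky difference by more than
    diff_margin.
    """
    rows, cols = len(t11), len(t11[0])
    mask = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            cold = clear_t11[r][c] - t11[r][c] > t11_margin
            spectral = abs(t11_minus_t39[r][c] - clear_diff[r][c]) > diff_margin
            mask[r][c] = 1 if (cold or spectral) else 0
    return mask
```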
NASA Astrophysics Data System (ADS)
Arslan, Musa T.; Tofighi, Mohammad; Sevimli, Rasim A.; Çetin, Ahmet E.
2015-05-01
One of the main disadvantages of using commercial broadcasts in a Passive Bistatic Radar (PBR) system is the limited range resolution. Using multiple broadcast channels to improve the radar performance has been offered as a solution to this problem. However, detection performance then suffers from the side-lobes that the matched filter creates when multiple channels are used. In this article, we introduce a deconvolution algorithm to suppress these side-lobes. The two-dimensional matched filter output of a PBR is analyzed as a deconvolution problem. The deconvolution algorithm is based on making successive projections onto the hyperplanes representing the time delay of a target. The resulting iterative deconvolution algorithm is globally convergent because all constraint sets are closed and convex. Simulation results in an FM-based PBR system are presented.
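The successive-projection idea can be sketched with the classical orthogonal projection onto a hyperplane {y : a·y = b}. The hyperplanes below are arbitrary examples, not PBR delay constraints; since hyperplanes are closed and convex, cyclic projections converge to a point in their intersection when one exists:

```python
def project_onto_hyperplane(x, a, b):
    # Orthogonal projection of x onto the hyperplane {y : a . y = b}.
    dot = sum(ai * xi for ai, xi in zip(a, x))
    norm2 = sum(ai * ai for ai in a)
    scale = (b - dot) / norm2
    return [xi + scale * ai for ai, xi in zip(a, x)]

def pocs(hyperplanes, x0, iterations=50):
    """Cyclic projections onto convex sets (here, hyperplanes).

    hyperplanes: list of (a, b) pairs defining a . x = b.
    Repeatedly projecting onto each constraint in turn converges to
    a feasible point, which is the source of the global convergence
    claimed for the PBR deconvolution algorithm.
    """
    x = list(x0)
    for _ in range(iterations):
        for a, b in hyperplanes:
            x = project_onto_hyperplane(x, a, b)
    return x
```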
Estimating Soil Moisture Using PolSAR Data: A Machine Learning Approach
NASA Astrophysics Data System (ADS)
Khedri, E.; Hasanlou, M.; Tabatabaeenejad, A.
2017-09-01
Soil moisture is an important parameter that affects several environmental processes, with important functions in numerous sciences including agriculture, hydrology, aerology, flood prediction, and drought monitoring. However, field procedures for measuring moisture are not feasible over vast agricultural territories, owing to their high cost and the difficulty of sampling, as well as the spatial and local variability of soil moisture. Polarimetric synthetic aperture radar (PolSAR) imaging is a powerful tool for estimating soil moisture, providing a wide field of view and high spatial resolution. In this study, a support vector regression (SVR) model for estimating soil moisture is proposed, based on data obtained from AIRSAR in 2003 in the C, L, and P channels. Sequential forward selection (SFS) and sequential backward selection (SBS) are evaluated to select suitable features of the polarimetric image dataset for efficient modeling. We compare the results with in-situ data. The output results show that the SBS-SVR method yields higher modeling accuracy than the SFS-SVR model. Statistical parameters obtained from this method show an R2 of 97% and an RMSE lower than 0.00041 (m3/m3) for the P, L, and C channels, which provides better accuracy compared to other feature selection algorithms.
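Sequential forward selection can be sketched generically. The `score` callable is a stand-in for whatever figure of merit the study uses (e.g. cross-validated SVR accuracy on the chosen channels); the feature names below are illustrative:

```python
def sequential_forward_selection(candidates, score, k):
    """Greedy SFS: grow the feature subset one feature at a time,
    keeping the addition that maximizes score(subset).

    candidates: iterable of feature identifiers.
    score: callable taking a tuple of features and returning a number.
    Sequential backward selection is the mirror image: start from the
    full set and repeatedly drop the feature whose removal hurts least.
    """
    selected = []
    remaining = list(candidates)
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda f: score(tuple(selected + [f])))
        selected.append(best)
        remaining.remove(best)
    return selected
```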
Blind color isolation for color-channel-based fringe pattern profilometry using digital projection
NASA Astrophysics Data System (ADS)
Hu, Yingsong; Xi, Jiangtao; Chicharo, Joe; Yang, Zongkai
2007-08-01
We present an algorithm for estimating the color demixing matrix based on the color fringe patterns captured from the reference plane or the surface of the object. The advantage of this algorithm is that it is a blind approach to calculating the demixing matrix, in the sense that no extra images are required for color calibration before performing profile measurement. Simulation and experimental results demonstrate that the proposed algorithm can significantly reduce the influence of color cross talk and at the same time improve the measurement accuracy of color-channel-based phase-shifting profilometry.
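The demixing step itself can be sketched for two channels. Here the mixing matrix is taken as given for illustration, whereas the paper's contribution is estimating it blindly from the fringe patterns; the 2x2 case also simplifies the full RGB setting:

```python
def demix_2x2(mixing, pixels):
    """Undo channel cross talk c_obs = M @ c_true for two channels.

    mixing: 2x2 matrix M (row-major tuples) describing the cross talk.
    pixels: list of observed (ch1, ch2) intensity pairs.
    Applies D = M^{-1} to each pixel via the explicit 2x2 inverse.
    """
    (a, b), (c, d) = mixing
    det = a * d - b * c
    if det == 0:
        raise ValueError("mixing matrix is singular")
    inv = ((d / det, -b / det), (-c / det, a / det))
    return [(inv[0][0] * r + inv[0][1] * g,
             inv[1][0] * r + inv[1][1] * g) for r, g in pixels]
```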
Decoding communities in networks
NASA Astrophysics Data System (ADS)
Radicchi, Filippo
2018-02-01
According to a recent information-theoretical proposal, the problem of defining and identifying communities in networks can be interpreted as a classical communication task over a noisy channel: memberships of nodes are information bits erased by the channel, edges and nonedges in the network are parity bits introduced by the encoder but degraded through the channel, and a community identification algorithm is a decoder. The interpretation is perfectly equivalent to the one at the basis of well-known statistical inference algorithms for community detection. The only difference in the interpretation is that a noisy channel replaces a stochastic network model. However, the different perspective gives the opportunity to take advantage of the rich set of tools of coding theory to generate novel insights on the problem of community detection. In this paper, we illustrate two main applications of standard coding-theoretical methods to community detection. First, we leverage a state-of-the-art decoding technique to generate a family of quasioptimal community detection algorithms. Second and more important, we show that the Shannon's noisy-channel coding theorem can be invoked to establish a lower bound, here named as decodability bound, for the maximum amount of noise tolerable by an ideal decoder to achieve perfect detection of communities. When computed for well-established synthetic benchmarks, the decodability bound explains accurately the performance achieved by the best community detection algorithms existing on the market, telling us that only little room for their improvement is still potentially left.
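The decodability-bound argument rests on channel capacity: Shannon's theorem says reliable decoding is possible only while the code rate stays below capacity. A minimal sketch for a binary symmetric channel (an assumed simplification of the paper's erasure-plus-degradation channel) is:

```python
from math import log2

def binary_entropy(p):
    # H(p) in bits; defined as 0 at the endpoints.
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(flip_prob):
    """Capacity (bits per use) of a binary symmetric channel,
    C = 1 - H(p). Past the noise level where the rate of the
    community 'code' exceeds C, no decoder, i.e. no community
    detection algorithm, can recover the memberships perfectly.
    """
    return 1.0 - binary_entropy(flip_prob)
```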
New algorithms for microwave measurements of ocean winds
NASA Technical Reports Server (NTRS)
Wentz, F. J.; Peteherych, S.
1984-01-01
Improved second-generation wind algorithms are used to process the three-month SEASAT SMMR and SASS data sets. The new algorithms are derived without using in situ anemometer measurements. All known biases in the sensors' measurements are removed, and the algorithms' model functions are internally self-consistent. The computed SMMR and SASS winds are collocated and compared on a 150 km cell-by-cell basis, giving a total of 115,444 wind comparisons. The comparisons are done using three different sets of SMMR channels. When the 6.6H SMMR channel is used for wind retrieval, the SMMR and SASS winds agree to within 1.3 m/s over the SASS primary swath. At nadir, where the radar cross section is less sensitive to wind, the agreement degrades to 1.9 m/s. The agreement is very good for winds from 0 to 15 m/s. Above 15 m/s, the off-nadir SASS winds are consistently lower than the SMMR winds, while at nadir the high SASS winds are greater than SMMR's. When 10.7H is used for the SMMR wind channel, the SMMR/SASS wind comparisons are not quite as good. When the frequency of the wind channel is increased to 18 GHz, the SMMR/SASS agreement degrades substantially, to about 5 m/s.
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
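The SID idea of conditioning the noise model on the side information can be sketched with a Laplacian noise model (commonly assumed in DVC) and simple magnitude bins. The binning scheme below is an illustrative simplification, not the paper's online bit-plane-by-bit-plane refinement:

```python
def sid_laplacian_scales(side_info, residuals, bin_edges):
    """Estimate a Laplacian scale parameter per side-information bin.

    Classical SII modeling fits one scale b to all residuals; SID
    modeling conditions b on the side-information value, sketched here
    by bucketing samples with bin_edges (ascending thresholds).
    MLE for the Laplacian scale: b = mean(|residual|) within each bin.
    Returns one scale per bin, or None for empty bins.
    """
    bins = [[] for _ in range(len(bin_edges) + 1)]
    for s, r in zip(side_info, residuals):
        idx = sum(1 for e in bin_edges if s >= e)
        bins[idx].append(abs(r))
    return [sum(b) / len(b) if b else None for b in bins]
```

A decoder using these per-bin scales can weight its soft inputs by how reliable the side information is locally, which is the source of the SID coding gain.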
Meng, Jianjun; Edelman, Bradley J.; Olsoe, Jaron; Jacobs, Gabriel; Zhang, Shuying; Beyko, Angeliki; He, Bin
2018-01-01
Motor imagery-based brain-computer interface (BCI) using electroencephalography (EEG) has demonstrated promising applications by directly decoding users' movement-related mental intention. The selection of control signals, e.g., the channel configuration and decoding algorithm, plays a vital role in the online performance and progression of BCI control. While several offline analyses report the effect of these factors on BCI accuracy for a single session (performance increases asymptotically with the number of channels, saturates, and then decreases), no online study, to the best of our knowledge, has yet been performed to compare them within a single session or across training. The purpose of the current study is to assess, in a group of forty-five subjects, the effect of channel number and decoding method on the progression of BCI performance across multiple training sessions and the corresponding neurophysiological changes. The 45 subjects were divided into three groups using Laplacian filtering (LAP/S) with nine channels, Common Spatial Pattern (CSP/L) with 40 channels, and CSP (CSP/S) with nine channels for online decoding. At the first training session, subjects using CSP/L displayed no significant difference compared to CSP/S but a higher average BCI performance than those using LAP/S. Although the average performance using the LAP/S method was initially lower, LAP/S displayed improvement over the first three sessions, whereas the other two groups did not. Additionally, analysis of the recorded EEG during BCI control indicates that LAP/S produces control signals that are more strongly correlated with the target location, and a higher R-square value was shown at the fifth session. In the present study, we found that subjects' average online BCI performance using a large EEG montage does not show significantly better performance after the first session than a smaller montage comprised of a common subset of these electrodes. 
The LAP/S method with a small EEG montage allowed the subjects to improve their skills across sessions, but no improvement was shown for the CSP method. PMID:29681792
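The small-montage Laplacian (LAP/S) spatial filter can be sketched directly: each channel is re-referenced by subtracting the mean of its neighbors, emphasizing local activity. The channel names and neighbor layout below are illustrative, not the study's montage:

```python
def laplacian_filter(signals, neighbors):
    """Small surface-Laplacian spatial filter for EEG channels.

    signals: dict channel -> list of samples (same length per channel).
    neighbors: dict channel -> list of neighboring channel names.
    Each filtered channel is the raw channel minus the mean of its
    neighbors; channels without listed neighbors are passed through.
    """
    out = {}
    for ch, x in signals.items():
        nbrs = neighbors.get(ch, [])
        if not nbrs:
            out[ch] = list(x)
            continue
        out[ch] = [xi - sum(signals[n][i] for n in nbrs) / len(nbrs)
                   for i, xi in enumerate(x)]
    return out
```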
Tuning the ion selectivity of tetrameric cation channels by changing the number of ion binding sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derebe, Mehabaw G.; Sauer, David B.; Zeng, Weizhong
2015-11-30
Selective ion conduction across ion channel pores is central to cellular physiology. To understand the underlying principles of ion selectivity in tetrameric cation channels, we engineered a set of cation channel pores based on the nonselective NaK channel and determined their structures to high resolution. These structures showcase an ensemble of selectivity filters with a varying number of contiguous ion binding sites ranging from 2 to 4, with each individual site maintaining a geometry and ligand environment virtually identical to that of equivalent sites in K⁺ channel selectivity filters. Combined with single-channel electrophysiology, we show that only the channel with four ion binding sites is K⁺ selective, whereas those with two or three are nonselective and permeate Na⁺ and K⁺ equally well. These observations strongly suggest that the number of contiguous ion binding sites in a single file is the key determinant of the channel's selectivity properties and that the presence of four sites in K⁺ channels is essential for highly selective and efficient permeation of K⁺ ions.
Design and optimization of all-optical networks
NASA Astrophysics Data System (ADS)
Xiao, Gaoxi
1999-10-01
In this thesis, we present our research results on the design and optimization of all-optical networks. We divide our results into the following four parts: 1. In the first part, we consider broadcast-and-select networks. In our research, we propose an alternative and cheaper network configuration to hide the tuning time. In addition, we derive lower bounds on the optimal schedule lengths and prove that they are tighter than the best existing bounds. 2. In the second part, we consider all-optical wide area networks. We propose a set of algorithms for allocating a given number of wavelength converters (WCs) to the nodes. We adopt a simulation-based optimization approach, in which we collect utilization statistics of WCs from computer simulation and then perform optimization to allocate the WCs. Therefore, our algorithms are widely applicable and are not restricted to any particular model or assumption. We have conducted extensive computer simulation on regular and irregular networks under both uniform and non-uniform traffic. We see that our method can achieve nearly the same performance as full wavelength conversion using a much smaller number of WCs. Compared with the best existing method, the results show that our algorithms can significantly reduce (1) the overall blocking probability (i.e., better mean quality of service) and (2) the maximum of the blocking probabilities experienced at all the source nodes (i.e., better fairness). Equivalently, for a given performance requirement on blocking probability, our algorithms can significantly reduce the number of WCs required. 3. In the third part, we design and optimize the physical topology of all-optical wide area networks. We show that the design problem is NP-complete and we propose a heuristic algorithm called the two-stage cut saturation algorithm for this problem. 
Simulation results show that (1) the proposed algorithm can efficiently design networks with low cost and high utilization, and (2) if wavelength converters are available to support full wavelength conversion, the cost of the links can be significantly reduced. 4. In the fourth part, we consider all-optical wide area networks with multiple fibers per link. We design a node configuration for all-optical networks. We exploit the flexibility that, to establish a lightpath across a node, we can select any one of the available channels in the incoming link and any one of the available channels in the outgoing link. As a result, the proposed node configuration requires a small number of small optical switches while achieving nearly the same performance as the existing one, with no additional crosstalk other than the intrinsic crosstalk within each single-chip optical switch. (Abstract shortened by UMI.) *Originally published in DAI Vol. 60, No. 2. Reprinted here with corrected author name.
A hybrid frame concealment algorithm for H.264/AVC.
Yan, Bo; Gharavi, Hamid
2010-01-01
In packet-based video transmissions, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed to combat channel errors; however, most of the existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. In order to resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame, which provides more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.
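A heavily simplified, block-level sketch of motion-vector extrapolation is given below. The hybrid pixel/block logic that distinguishes HMVE is omitted, and the co-located fallback rule is an assumption for illustration:

```python
def extrapolate_motion_vectors(prev_mvs, block_size=16):
    """Toy block-level motion-vector extrapolation for a lost frame.

    prev_mvs: dict (bx, by) -> (dx, dy): motion vectors (in pixels) of
    the last correctly received frame, indexed by block coordinates.
    Each previous block is shifted along its own motion; a missing
    block inherits the vector of the block extrapolated onto it,
    falling back to the co-located vector when nothing lands there.
    """
    extrapolated = {}
    for (bx, by), (dx, dy) in prev_mvs.items():
        tx = bx + round(dx / block_size)
        ty = by + round(dy / block_size)
        extrapolated[(tx, ty)] = (dx, dy)
    # co-located fallback for blocks no extrapolated vector reached
    return {pos: extrapolated.get(pos, mv) for pos, mv in prev_mvs.items()}
```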
Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI
NASA Astrophysics Data System (ADS)
Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.
2015-09-01
In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift-based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm in minimizing the network power consumption for multicast Cloud-RAN.
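The group-sparsity-inducing norm at the heart of the first stage is easy to state: each group collects the beamforming coefficients of one RRH, and minimizing the weighted sum of group l2-norms drives whole groups (whole RRHs) to zero. The numbers below are illustrative:

```python
from math import sqrt

def weighted_mixed_l1_l2(groups, weights):
    """Weighted mixed l1/l2-norm: sum over groups g of w_g * ||v_g||_2.

    groups: list of per-RRH coefficient vectors v_g (real-valued here;
    complex beamformers would use magnitudes).
    weights: per-group weights w_g.
    An l1 norm across groups combined with an l2 norm within groups
    zeroes entire groups, i.e. switches whole RRHs off.
    """
    return sum(w * sqrt(sum(x * x for x in g))
               for g, w in zip(groups, weights))
```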
Zhang, Jia-Hua; Li, Xin; Yao, Feng-Mei; Li, Xian-Hua
2009-08-01
Land surface temperature (LST) is an important parameter in the study of the exchange of substance and energy between the land surface and the air in land surface physical processes at regional and global scales. Many applications of remotely sensed satellite data, such as monitoring of drought, high temperature, forest fire, earthquake, hydrology, and vegetation, must be provided with accurate, quantitative LST, and models of global circulation and regional climate also need LST as an input parameter. Therefore, the retrieval of LST using remote sensing technology has become one of the key tasks in quantitative remote sensing studies. Within the spectrum, the thermal infrared (TIR, 3-15 μm) and microwave (1 mm-1 m) bands are important for retrieval of LST. In the present paper, firstly, several methods for estimating LST on the basis of thermal infrared (TIR) remote sensing are synthetically reviewed: LST measured with a ground-based infrared thermometer; LST retrieval by the mono-window algorithm (MWA), single-channel algorithm (SCA), split-window techniques (SWT), and multi-channel algorithm (MCA); single-channel multi-angle and multi-channel multi-angle algorithms; and retrieval of land surface component temperature using thermal infrared remotely sensed satellite observations. Secondly, the status of research on land surface emissivity (ε) is presented. Thirdly, in order to retrieve LST under all weather conditions, microwave remotely sensed data have recently been developed as an alternative to thermal infrared data, and the LST retrieval method from passive microwave remotely sensed data is also introduced. Finally, the main merits and shortcomings of the different kinds of LST retrieval methods are discussed.
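The split-window technique (SWT) mentioned above has many operational variants; a generic sketch of its canonical form, with placeholder coefficients, is:

```python
def split_window_lst(t11, t12, a=2.5, b=1.0):
    """Generic split-window form: LST = T11 + a*(T11 - T12) + b.

    t11, t12: brightness temperatures (K) of the two thermal window
    channels (roughly 11 and 12 um). The differential atmospheric
    absorption between the two channels carries the water-vapour
    correction. In operational SWT variants the coefficients a and b
    depend on emissivity and column water vapour; the defaults here
    are placeholders for illustration only.
    """
    return t11 + a * (t11 - t12) + b
```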
Tang, Bo-Hui; Wu, Hua; Li, Zhao-Liang; Nerry, Françoise
2012-07-30
This work addressed the validation of the MODIS-derived bidirectional reflectivity retrieval algorithm in the mid-infrared (MIR) channel, proposed by Tang and Li [Int. J. Remote Sens. 29, 4907 (2008)], with ground-measured data collected during a field campaign that took place in June 2004 at the ONERA (Office National d'Etudes et de Recherches Aérospatiales) center of Fauga-Mauzac, on the PIRRENE (Programme Interdisciplinaire de Recherche sur la Radiométrie en Environnement Extérieur) experiment site [Opt. Express 15, 12464 (2007)]. The leaving-surface spectral radiances measured by a BOMEM (MR250 Series) Fourier transform interferometer were used to calculate the ground brightness temperatures through the combination of the inversion of the Planck function and the spectral response functions of MODIS channels 22 and 23, and then to estimate the ground brightness temperature without the contribution of the direct solar beam and the bidirectional reflectivity using Tang and Li's proposed algorithm. On the other hand, the simultaneously measured atmospheric profiles were used to obtain the atmospheric parameters and then to calculate the ground brightness temperature without the contribution of the direct solar beam, based on the atmospheric radiative transfer equation in the MIR region. Comparison of these two kinds of brightness temperature obtained by the two different methods indicated that the Root Mean Square Error (RMSE) between the brightness temperatures estimated respectively using Tang and Li's algorithm and the atmospheric radiative transfer equation is 1.94 K. In addition, comparison of the hemispherical-directional reflectances derived by Tang and Li's algorithm with those obtained from the field measurements showed an RMSE of 0.011, which indicates that Tang and Li's algorithm is feasible for retrieving the bidirectional reflectivity in the MIR channel from MODIS data.
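The Planck-function inversion used above to obtain brightness temperatures can be sketched at a single wavelength; the channel-integrated version additionally weights by the MODIS spectral response functions, which is omitted here:

```python
from math import log

# Planck radiation constants in wavelength (micrometre) units
C1 = 1.191042e8   # 2*h*c^2, in W um^4 m^-2 sr^-1
C2 = 1.4387752e4  # h*c/k_B, in um K

def brightness_temperature(radiance, wavelength_um):
    """Invert the Planck function: brightness temperature (K) from
    spectral radiance (W m^-2 sr^-1 um^-1) at a given wavelength,
    T = C2 / (lambda * ln(1 + C1 / (lambda^5 * L))).
    """
    return C2 / (wavelength_um *
                 log(1.0 + C1 / (wavelength_um ** 5 * radiance)))
```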
Han, Guangjie; Li, Shanshan; Zhu, Chunsheng; Jiang, Jinfang; Zhang, Wenbo
2017-02-08
Marine environmental monitoring provides crucial information and support for the exploitation, utilization, and protection of marine resources. With the rapid development of information technology, three-dimensional underwater acoustic sensor networks (3D UASNs) provide a novel strategy to acquire marine environment information conveniently, efficiently, and accurately. However, the propagation characteristics of the acoustic communication channel cause the probability of successful information delivery to decrease with distance. Therefore, we investigate two probabilistic neighborhood-based data collection algorithms for 3D UASNs which are based on a probabilistic acoustic communication model instead of the traditional deterministic one. An autonomous underwater vehicle (AUV) is employed to traverse along a designed path to collect data from neighborhoods. For 3D UASNs without prior deployment knowledge, partitioning the network into grids allows the AUV to visit the central location of each grid for data collection. For 3D UASNs in which the deployment knowledge is known in advance, the AUV only needs to visit several selected locations, determined by constructing a minimum probabilistic neighborhood covering set, to reduce data latency. In addition, by increasing the number of transmission rounds, our proposed algorithms can provide a tradeoff between data collection latency and information gain. These algorithms are compared with a basic nearest-neighbor heuristic algorithm via simulations. Simulation analyses show that our proposed algorithms can efficiently reduce the average data collection completion time, corresponding to a decrease in data latency.
Regalia, Giulia; Coelli, Stefania; Biffi, Emilia; Ferrigno, Giancarlo; Pedrocchi, Alessandra
2016-01-01
Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of each algorithm's performance on the properties of the neuronal signals at each channel, which requires data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting "building blocks" into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms to simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different spike sorting options towards more accurate off-line and on-line MEA data analysis.
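The first spike-sorting building block, spike detection by amplitude threshold with a robust noise estimate, can be sketched as follows. The threshold multiple and refractory period are illustrative defaults, not the framework's settings (the framework itself is Matlab-based; Python is used here only for the sketch):

```python
def detect_spikes(signal, threshold_sd=5.0, refractory=30):
    """Amplitude-threshold spike detection on one channel.

    The noise SD is estimated robustly from the median absolute
    value (sigma ~ median(|x|) / 0.6745), a common choice in MEA
    analysis because large spikes barely affect the median.
    Returns sample indices of detected spikes, enforcing a
    refractory gap (in samples) between detections.
    """
    sorted_abs = sorted(abs(x) for x in signal)
    n = len(sorted_abs)
    median_abs = (sorted_abs[n // 2] if n % 2 else
                  0.5 * (sorted_abs[n // 2 - 1] + sorted_abs[n // 2]))
    thr = threshold_sd * median_abs / 0.6745
    spikes, last = [], -refractory
    for i, x in enumerate(signal):
        if abs(x) > thr and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes
```

Detected snippets would then feed the feature extraction and clustering blocks the framework compares.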
Simulation of 3-D Nonequilibrium Seeded Air Flow in the NASA-Ames MHD Channel
NASA Technical Reports Server (NTRS)
Gupta, Sumeet; Tannehill, John C.; Mehta, Unmeel B.
2004-01-01
The 3-D nonequilibrium seeded air flow in the NASA-Ames experimental MHD channel has been numerically simulated. The channel contains a nozzle section, a center section, and an accelerator section where magnetic and electric fields can be imposed on the flow. In recent tests, velocity increases of up to 40% have been achieved in the accelerator section. The flow in the channel is numerically computed using a 3-D parabolized Navier-Stokes (PNS) algorithm that has been developed to efficiently compute MHD flows in the low magnetic Reynolds number regime. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. The algorithm has been extended in the present study to account for nonequilibrium seeded air flows. The electrical conductivity of the flow is determined using the program of Park. The new algorithm has been used to compute two test cases that match the experimental conditions. In both cases, magnetic and electric fields are applied to the seeded flow. The computed results are in good agreement with the experimental data.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Geogdzhayev, Igor V.; Cairns, Brian; Rossow, William B.; Lacis, Andrew A.
1999-01-01
This paper outlines the methodology of interpreting channel 1 and 2 AVHRR radiance data over the oceans and describes a detailed analysis of the sensitivity of monthly averages of retrieved aerosol parameters to the assumptions made in different retrieval algorithms. The analysis is based on using real AVHRR data and exploiting accurate numerical techniques for computing single and multiple scattering and spectral absorption of light in the vertically inhomogeneous atmosphere-ocean system. We show that two-channel algorithms can be expected to provide significantly more accurate and less biased retrievals of the aerosol optical thickness than one-channel algorithms and that imperfect cloud screening and calibration uncertainties are by far the largest sources of errors in the retrieved aerosol parameters. Both underestimating and overestimating aerosol absorption as well as the potentially strong variability of the real part of the aerosol refractive index may lead to regional and/or seasonal biases in optical thickness retrievals. The Angstrom exponent appears to be the most invariant aerosol size characteristic and should be retrieved along with optical thickness as the second aerosol parameter.
Performance of convolutional codes on fading channels typical of planetary entry missions
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.; Reale, T. J.
1974-01-01
The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift keyed modulation and Viterbi maximum likelihood decoding, and for longer constraint length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to modeling the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes the bit error probability performance was investigated as a function of E_b/N_0, parameterized by the fading channel parameters. For longer constraint length codes the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effect of simple block interleaving in combating the memory of the channel is explored using both analytical approaches and digital computer simulation.
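Block interleaving, as studied here against channel memory, can be sketched in a few lines: symbols are written row-by-row into an array and read out column-by-column, so a burst of consecutive channel errors is dispersed across distant codeword positions. The dimensions below are illustrative, not the paper's configuration.

```python
def block_interleave(symbols, rows, cols):
    """Write symbols row-by-row into a rows x cols array, read column-by-column.
    A burst of consecutive channel errors then lands `cols` symbols apart
    after deinterleaving, breaking up the channel's memory."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows, cols):
    """Inverse operation applied at the receiver before decoding."""
    assert len(symbols) == rows * cols
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

coded = list(range(12))                      # toy codeword stream
sent = block_interleave(coded, 3, 4)         # transmitted order
recovered = block_deinterleave(sent, 3, 4)   # original order restored
```

The deeper the interleaver (more rows), the longer the fade it can break up, at the cost of added latency and buffering.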
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted via a memoryless channel, together with an algorithm for the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which under certain convexity conditions on the performance of the channel-optimized quantizers yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for error-free channels.
Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error
Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong
2013-01-01
A novel wildfire segmentation algorithm is proposed with the help of sample-training-based 2D histogram θ-division and minimum error. Based on the minimum-error principle and the 2D color histogram, θ-division methods were presented recently, but the application of prior knowledge to them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. We then define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the most suitable channel is selected. To further improve accuracy, a combination approach is presented that couples θ-division with other segmentation methods such as GMM. Our approach is tested on real images, and the experiments demonstrate its effectiveness for wildfire segmentation. PMID:23878526
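The "sample training" step described here amounts to searching for the division parameter that minimizes empirical misclassification error on labeled pixels. A minimal 1-D analogue (a scalar threshold standing in for the 2D division angle θ; scores and labels are hypothetical):

```python
def train_division(scores, labels, candidates):
    """Grid-search a scalar division parameter on labeled training pixels,
    minimizing empirical misclassification error -- a 1-D analogue of
    choosing the optimal histogram-division angle from labeled samples."""
    def error(t):
        return sum((s >= t) != y for s, y in zip(scores, labels)) / len(labels)
    return min(candidates, key=error)

# Hypothetical per-pixel 'fire-likeness' scores with manual fire/non-fire labels
scores = [0.10, 0.20, 0.30, 0.70, 0.80, 0.90]
labels = [False, False, False, True, True, True]
best = train_division(scores, labels, [0.25, 0.50, 0.75])  # 0.50 separates cleanly
```

In the paper's 2D formulation the same principle applies, with the candidate set being division angles of the 2D color histogram rather than scalar thresholds.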
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Miller, Warner H.; Venbrux, Jack; Liu, Norley; Rice, Robert F.
1993-01-01
Data compression has been proposed for several flight missions as a means of either reducing on board mass data storage, increasing science data return through a bandwidth constrained channel, reducing TDRSS access time, or easing ground archival mass storage requirement. Several issues arise with the implementation of this technology. These include the requirement of a clean channel, onboard smoothing buffer, onboard processing hardware and on the algorithm itself, the adaptability to scene changes and maybe even versatility to the various mission types. This paper gives an overview of an ongoing effort being performed at Goddard Space Flight Center for implementing a lossless data compression scheme for space flight. We will provide analysis results on several data systems issues, the performance of the selected lossless compression scheme, the status of the hardware processor and current development plan.
NASA Astrophysics Data System (ADS)
Nguyen, T. K. T.; Navratilova, Z.; Cabral, H.; Wang, L.; Gielen, G.; Battaglia, F. P.; Bartic, C.
2014-08-01
Objective. Closed-loop operation of neuro-electronic systems is desirable for both scientific and clinical (neuroprosthesis) applications. Integrating optical stimulation with recording capability further enhances the selectivity of neural stimulation. We have developed a system enabling the local delivery of optical stimuli and simultaneous electrical measurement of neural activity in a closed-loop approach. Approach. The signal analysis is performed online through the implementation of a template matching algorithm. The system performance is demonstrated with recorded data and in awake rats. Main results. Specifically, neural activities are simultaneously recorded, detected, and classified online (through spike sorting) from 32 channels, and used to trigger a light-emitting diode (LED) light source via generated TTL signals. Significance. A total processing time of 8 ms is achieved, suitable for online optogenetic studies of brain mechanisms.
Fast convergent frequency-domain MIMO equalizer for few-mode fiber communication systems
NASA Astrophysics Data System (ADS)
He, Xuan; Weng, Yi; Wang, Junyi; Pan, Z.
2018-02-01
Space division multiplexing using few-mode fibers has been extensively explored to sustain continuous traffic growth. In few-mode fiber optical systems, both spatial and polarization modes are exploited to transmit parallel channels, thus increasing the overall capacity. However, signals on spatial channels inevitably suffer from intrinsic inter-modal coupling and large accumulated differential mode group delay (DMGD), which makes the spatial modes even harder to demultiplex. Many research articles have demonstrated that a frequency-domain adaptive multi-input multi-output (MIMO) equalizer can effectively compensate the DMGD and demultiplex the spatial channels with digital signal processing (DSP). However, the large accumulated DMGD usually requires a large number of training blocks for the initial convergence of adaptive MIMO equalizers, which decreases the overall system efficiency and can even degrade equalizer performance in fast-changing optical channels. The least mean square (LMS) algorithm is commonly used in MIMO equalization to dynamically demultiplex the spatial signals. We have proposed to use a signal power spectral density (PSD) dependent method and a noise PSD directed method to improve the convergence speed of the adaptive frequency-domain LMS algorithm. We also proposed a frequency-domain recursive least squares (RLS) algorithm to further increase the convergence speed of the MIMO equalizer at the cost of greater hardware complexity. In this paper, we compare the hardware complexity and convergence speed of the signal PSD dependent and noise PSD directed algorithms against the conventional frequency-domain LMS algorithm. In our numerical study of a three-mode 112 Gbit/s PDM-QPSK optical system with 3000 km transmission, the noise PSD directed and signal PSD dependent methods improved the convergence speed by 48.3% and 36.1%, respectively, at the cost of 17.2% and 10.7% higher hardware complexity.
We will also compare the frequency domain RLS algorithm against conventional frequency domain LMS algorithm. Our numerical study shows that, in a three-mode 224 Gbit/s PDM-16-QAM system with 3000 km transmission, the RLS algorithm could improve the convergence speed by 53.7% over conventional frequency domain LMS algorithm.
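The LMS update at the heart of the equalizers compared above can be illustrated with a generic time-domain, single-channel sketch (the paper's equalizer is frequency-domain and MIMO; the toy channel below, a pure one-sample delay of a random training sequence, is an assumption for illustration only).

```python
import random

def lms_equalize(received, desired, taps=4, mu=0.05):
    """Basic time-domain LMS adaptive filter (conceptual single-channel sketch).
    Update rule: w <- w + mu * e * x, with instantaneous error e = d - w.x.
    Returns the final tap weights and the absolute-error history."""
    w = [0.0] * taps
    errors = []
    for n in range(taps, len(received)):
        x = received[n - taps:n][::-1]               # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, x))     # equalizer output
        e = desired[n] - y                           # error vs training symbol
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
        errors.append(abs(e))
    return w, errors

# Toy training scenario: recover a one-sample delay of a random +/-1 sequence
random.seed(0)
tx = [random.choice([-1.0, 1.0]) for _ in range(500)]
d = [0.0] + tx[:-1]                 # desired output = delayed input
w, errs = lms_equalize(tx, d)       # w should converge toward [1, 0, 0, 0]
```

The convergence-speed improvements reported in the abstract amount to reaching this converged state with fewer training symbols, which matters when the optical channel changes quickly.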
Holland, Katherine D; Bouley, Thomas M; Horn, Paul S
2017-07-01
Variants in the neuronal voltage-gated sodium channel α-subunit genes SCN1A, SCN2A, and SCN8A are common in early onset epileptic encephalopathies and other autosomal dominant childhood epilepsy syndromes. However, in clinical practice, missense variants are often classified as variants of uncertain significance when heritability cannot be determined. Genetic testing reports often include results of computational tests to estimate pathogenicity and the frequency of that variant in population-based databases. The objective of this work was to enhance clinicians' understanding of results by (1) determining how effectively computational algorithms predict epileptogenicity of sodium channel (SCN) missense variants; (2) optimizing their predictive capabilities; and (3) determining if epilepsy-associated SCN variants are present in population-based databases. This will help clinicians better interpret indeterminate SCN test results in people with epilepsy. Pathogenic, likely pathogenic, and benign variants in SCNs were identified using databases of sodium channel variants. Benign variants were also identified from population-based databases. Eight algorithms commonly used to predict pathogenicity were compared. In addition, logistic regression was used to determine if a combination of algorithms could better predict pathogenicity. Based on American College of Medical Genetics criteria, 440 variants were classified as pathogenic or likely pathogenic and 84 were classified as benign or likely benign. Twenty-eight variants previously associated with epilepsy were present in population-based gene databases. The output provided by most computational algorithms had a high sensitivity but low specificity, with an accuracy of 0.52-0.77. Accuracy could be improved by adjusting the threshold for pathogenicity.
Using this adjustment, the Mendelian Clinically Applicable Pathogenicity (M-CAP) algorithm had an accuracy of 0.90 and a combination of algorithms increased the accuracy to 0.92. Potentially pathogenic variants are present in population-based sources. Most computational algorithms overestimate pathogenicity; however, a weighted combination of several algorithms increased classification accuracy to >0.90. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
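The threshold-adjustment effect reported above can be made concrete with a small sketch: at a tool's default cut-off a scorer may flag nearly everything as pathogenic (high sensitivity, poor specificity), and raising the threshold trades little sensitivity for a large specificity gain. The scores and labels below are hypothetical, not values from the cited variant databases.

```python
def confusion_stats(scores, labels, threshold):
    """Sensitivity, specificity, and accuracy of the rule
    'score >= threshold => pathogenic' on labeled variants."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and not y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(labels)
    return sensitivity, specificity, accuracy

# Hypothetical pathogenicity scores; True = known pathogenic, False = benign
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.55, 0.50, 0.40]
labels = [True, True, True, True, False, False, False, False]
default = confusion_stats(scores, labels, 0.50)   # high sensitivity, low specificity
adjusted = confusion_stats(scores, labels, 0.65)  # re-tuned cut-off
```

The paper's logistic-regression combination of several scorers generalizes this idea: instead of one score and one threshold, a weighted combination of scores is thresholded.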
Demodulation Algorithms for the OFDM Signals in the Time- and Frequency-Scattering Channels
NASA Astrophysics Data System (ADS)
Bochkov, G. N.; Gorokhov, K. V.; Kolobkov, A. V.
2016-06-01
We consider a method based on the generalized maximum-likelihood rule for solving the problem of reception of signals with orthogonal frequency division multiplexing of their harmonic components (OFDM signals) in time- and frequency-scattering channels. Coherent and incoherent demodulators that effectively exploit the time scattering caused by fast signal fading are developed. Using computer simulation, we performed a comparative analysis of the proposed algorithms and well-known signal-reception algorithms with equalizers. The proposed symbol-by-symbol detector with decision feedback and restriction of the number of searched variants is shown to have the best bit-error-rate performance. It is shown that under conditions of limited accuracy in estimating the communication-channel parameters, incoherent OFDM-signal detectors with differential phase-shift keying can ensure a better bit-error-rate performance than coherent OFDM-signal detectors with absolute phase-shift keying.
NASA Astrophysics Data System (ADS)
Xu, Ding; Li, Qun
2017-01-01
This paper addresses the power allocation problem for cognitive radio (CR) based on hybrid automatic repeat request (HARQ) with Chase combining (CC) in Nakagami-m slow-fading channels. We assume that, instead of perfect instantaneous channel state information (CSI), only statistical CSI is available at the secondary user (SU) transmitter. The aim is to minimize the SU outage probability under the primary user (PU) interference outage constraint. Using the Lagrange multiplier method, an iterative and recursive algorithm is derived to obtain the optimal power allocation for each transmission round. Extensive numerical results are presented to illustrate the performance of the proposed algorithm.
Robust vector quantization for noisy channels
NASA Technical Reports Server (NTRS)
Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.
1988-01-01
The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.
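Why the binary word assignment matters can be seen by computing the expected distortion of an assignment over a binary symmetric channel: a good assignment gives Hamming-adjacent words to codewords with similar reproduction values, so the most likely (single-bit) errors cause small reproduction errors. The tiny 2-bit codebook below is an illustrative assumption, not one of the paper's codebooks or algorithms.

```python
def expected_distortion(values, words, ber):
    """Mean squared reproduction error of a VQ index assignment over a binary
    symmetric channel with independent bit errors (equiprobable codewords).

    values[i]: reproduction value of codeword i; words[i]: its binary label."""
    b = len(words[0])
    total = 0.0
    for vi, wi in zip(values, words):
        for vj, wj in zip(values, words):
            d = sum(x != y for x, y in zip(wi, wj))   # Hamming distance
            p = ber**d * (1 - ber)**(b - d)           # P(receive wj | send wi)
            total += p * (vi - vj)**2
    return total / len(values)

values = [0.0, 1.0, 2.0, 3.0]                   # toy 2-bit scalar codebook
natural = [(0, 0), (0, 1), (1, 0), (1, 1)]      # similar values one bit apart
scrambled = [(0, 0), (1, 1), (0, 1), (1, 0)]    # deliberately poor assignment
```

Index-assignment algorithms like those in the paper search the space of such permutations for one with low expected distortion, at no cost in transmission rate.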
Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F
2011-04-01
To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self correction was developed. The information also allows simultaneous support for parallel imaging for multiple coil acquisitions. Without a separate field map acquisition, a phase estimate from each echo in multiple echo train was generated. When using a multiple channel coil, magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes, and in the case of multiple coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single channel, multiple channel, and undersampled data. Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve image quality of multiecho radial imaging, the important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.
2013-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
Effective pore size and radius of capture for K(+) ions in K-channels.
Moldenhauer, Hans; Díaz-Franulic, Ignacio; González-Nilo, Fernando; Naranjo, David
2016-02-02
Reconciling protein functional data with crystal structure is arduous because rare conformations or crystallization artifacts occur. Here we present a tool to validate the dimensions of open pore structures of potassium-selective ion channels. We used freely available algorithms to calculate the molecular contour of the pore to determine the effective internal pore radius (r(E)) in several K-channel crystal structures. r(E) was operationally defined as the radius of the biggest sphere able to enter the pore from the cytosolic side. We obtained consistent r(E) estimates for MthK and Kv1.2/2.1 structures, with r(E) = 5.3-5.9 Å and r(E) = 4.5-5.2 Å, respectively. We compared these structural estimates with functional assessments of the internal mouth radii of capture (r(C)) for two electrophysiological counterparts of the MthK and Kv1.2/2.1 structures: the large conductance calcium-activated K-channel (r(C) = 2.2 Å) and the Shaker Kv-channel (r(C) = 0.8 Å), respectively. Calculating the difference between r(E) and r(C) produced consistent radii of 3.1-3.7 Å and 3.6-4.4 Å for hydrated K(+) ions. These hydrated K(+) estimates harmonize with others obtained with diverse experimental and theoretical methods. Thus, these findings validate the MthK and Kv1.2/2.1 structures as templates for open BK and Kv-channels, respectively.
A high data rate universal lattice decoder on FPGA
NASA Astrophysics Data System (ADS)
Ma, Jing; Huang, Xinming; Kura, Swapna
2005-06-01
This paper presents the architecture design of a high data rate universal lattice decoder for MIMO channels on an FPGA platform. A Pohst-strategy-based lattice decoding algorithm is modified in this paper to reduce the complexity of the closest lattice point search. The data dependency of the improved algorithm is examined, and a parallel, pipelined architecture is developed with the iterative decoding function on the FPGA and the division-intensive channel matrix preprocessing on a DSP. Simulation results demonstrate that the improved lattice decoding algorithm provides a better bit error rate and fewer iterations compared with the original algorithm. The system prototype of the decoder shows that it supports data rates up to 7 Mbit/s on a Virtex2-1000 FPGA, which is about 8 times faster than the original algorithm on the FPGA platform and two orders of magnitude better than its implementation on a DSP platform.
Architecture and Implementation of OpenPET Firmware and Embedded Software
Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; ...
2016-01-11
OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the versatility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules, not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics: a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method is adopted for the hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called a Support Board, where one Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from 8 Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or the runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules) to be processed mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration.
Retrieving Volcanic SO2 from the 4-UV channels on DSCOVR/EPIC
NASA Astrophysics Data System (ADS)
Fisher, B. L.; Krotkov, N. A.; Carn, S. A.; Taylor, S.; Li, C.; Bhartia, P. K.; Huang, L. K.; Haffner, D. P.
2017-12-01
Since arriving at the L1 Lagrange point in June 2015, the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) has been collecting continuous full-disk images of the sunlit earth from a distance of 1.5 million km. EPIC is a 10-band spectroradiometer that has a field of view (FoV) at the earth's surface of about 25 km, providing a unique opportunity to observe the initial appearance and evolution of SO2 plumes from volcanic eruptions at about 90 minute temporal resolution. Our algorithm uses the 317.5, 325, 340 and 388 nm UV channels on EPIC to retrieve volcanic SO2, total column ozone, Lambertian equivalent reflectivity and its spectral dependence. The MS_SO2 algorithm has been successfully applied to data from legacy and current NASA missions (e.g., Nimbus7/TOMS, SNPP/OMPS, and Aura/OMI). The separation between ozone and SO2 is possible due to differences in the cross sections at the two shortest UV channels. The images for each spectral channel are not perfectly aligned due to the earth's rotation, geo-rectification, cloud noise, exposure time and spacecraft jitter. These issues introduce additional noise into a multi-channel inversion. In this presentation, we describe some modifications to the algorithm that attempt to account for these issues. By comparing the plume areas, mass tonnage and peak SO2 values with those from other low earth orbiting satellites, it is shown that the algorithm significantly improves the identification of the plume while eliminating false positives.
Aerosol Correction for Remotely Sensed Sea Surface Temperatures From the NOAA AVHRR: Phase II
NASA Astrophysics Data System (ADS)
Nalli, N. R.; Ignatov, A.
2002-05-01
For over two decades, the National Oceanic and Atmospheric Administration (NOAA) has produced global retrievals of sea surface temperature (SST) using infrared (IR) data from the Advanced Very High Resolution Radiometer (AVHRR). The standard multichannel retrieval algorithms are derived from regression analyses of AVHRR window channel brightness temperatures against in situ buoy measurements under non-cloudy conditions, thus providing a correction for IR attenuation due to molecular water vapor absorption. However, for atmospheric conditions with elevated aerosol levels (e.g., arising from dust, biomass burning and volcanic eruptions), such algorithms lead to significant negative biases in SST because of IR attenuation arising from aerosol absorption and scattering. This research presents the development of a 2nd-phase aerosol correction algorithm for daytime AVHRR SST. To accomplish this, a long-term (1990-1998), global AVHRR-buoy matchup database was created by merging the Pathfinder Atmospheres (PATMOS) and Oceans (PFMDB) data sets. The merged data are unique in that they include multi-year, global daytime estimates of aerosol optical depth (AOD) derived from AVHRR channels 1 and 2 (0.63 and 0.83 μm, respectively), along with an effective Angstrom exponent derived from the AOD retrievals (Ignatov and Nalli, 2002). Recent enhancements in the aerosol data constitute an improvement over the Phase I algorithm (Nalli and Stowe, 2002), which relied only on channel 1 AOD and the ratio of normalized reflectances from channels 1 and 2. The Angstrom exponent and channel 2 AOD provide important statistical information about the particle size distribution of the aerosol. The SST bias can be parametrically expressed as a function of the observed AVHRR channels 1 and 2 slant-path AOD, normalized reflectance ratio and the Angstrom exponent.
Based upon these empirical relationships, aerosol correction equations are then derived for the daytime multichannel and nonlinear SST (MCSST and NLSST) algorithms. Separate sets of coefficients are utilized for two aerosol modes, these being stratospheric/tropospheric (e.g., volcanic aerosol) and tropospheric (e.g., dust, smoke). The algorithms are subsequently applied to retrospective PATMOS data to demonstrate the potential for climate applications. The minimization of cold biases in the AVHRR SST, as demonstrated in this work, should improve its overall utility for the general user community.
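The Angstrom exponent used in this correction follows from the standard power-law model of spectral aerosol optical depth, τ(λ) ∝ λ^(-α). A minimal computation from the two AVHRR channel AODs (the 0.63 and 0.83 μm wavelengths are taken from the abstract; the τ values below are illustrative):

```python
import math

def angstrom_exponent(tau1, tau2, lam1=0.63, lam2=0.83):
    """Angstrom exponent from AODs at two wavelengths (microns), assuming the
    power law tau(lam) ~ lam**(-alpha):
        alpha = -ln(tau1 / tau2) / ln(lam1 / lam2)"""
    return -math.log(tau1 / tau2) / math.log(lam1 / lam2)

alpha = angstrom_exponent(0.20, 0.15)  # illustrative channel-1 and channel-2 AODs
```

Larger α indicates a steeper spectral slope and hence smaller particles, which is why α carries the particle-size information the abstract mentions.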
NASA Technical Reports Server (NTRS)
Wolf, Michael
2012-01-01
A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel and how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate from a single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of each channel's variance.
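The blending step described above, combining per-channel Gaussian pdfs into one, reduces to inverse-variance weighting. A minimal sketch (the seven means and variances below are hypothetical values, not SVS calibration data):

```python
def fuse_estimates(means, variances):
    """Combine independent per-channel Gaussian estimates into one Gaussian.

    Inverse-variance weighting: channels whose calibration data showed more
    spread contribute less. Returns (fused_mean, fused_variance)."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    mean = sum(w * m for w, m in zip(weights, means)) / wsum
    return mean, 1.0 / wsum

# Hypothetical mass estimates (grams) and calibration variances for 7 channels
means = [10.2, 9.8, 10.5, 10.0, 9.7, 10.3, 10.1]
variances = [0.10, 0.10, 0.40, 0.10, 0.40, 0.10, 0.10]
m, v = fuse_estimates(means, variances)
```

Note that the fused variance is always smaller than any single channel's variance, which is the quantitative form of the "certainty of the estimate" the abstract mentions.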
NASA Astrophysics Data System (ADS)
Ni, Y. Q.; Fan, K. Q.; Zheng, G.; Chan, T. H. T.; Ko, J. M.
2003-08-01
An automatic modal identification program is developed for continuous extraction of modal parameters of three cable-supported bridges in Hong Kong which are instrumented with a long-term monitoring system. The program employs the Complex Modal Indication Function (CMIF) algorithm to identify modal properties from continuous ambient vibration measurements in an on-line manner. By using the LabVIEW graphical programming language, the software realizes the algorithm in Virtual Instrument (VI) style. The applicability and implementation issues of the developed software are demonstrated by using one-year measurement data acquired from 67 channels of accelerometers deployed on the cable-stayed Ting Kau Bridge. With the continuously identified results, normal variability of modal vectors caused by varying environmental and operational conditions is observed. Such observation is very helpful for selection of appropriate measured modal vectors for structural health monitoring applications.
Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Neal, Maxwell; Cashmere, David J; Germain, Anne; Reifman, Jaques
2018-02-01
Electroencephalography (EEG) recordings during sleep are often contaminated by muscle and ocular artefacts, which can affect the results of spectral power analyses significantly. However, the extent to which these artefacts affect EEG spectral power across different sleep states has not been quantified explicitly. Consequently, the effectiveness of automated artefact-rejection algorithms in minimizing these effects has not been characterized fully. To address these issues, we analysed standard 10-channel EEG recordings from 20 subjects during one night of sleep. We compared their spectral power when the recordings were contaminated by artefacts and after we removed them by visual inspection or by using automated artefact-rejection algorithms. During both rapid eye movement (REM) and non-REM (NREM) sleep, muscle artefacts contaminated no more than 5% of the EEG data across all channels. However, they corrupted delta, beta and gamma power levels substantially by up to 126, 171 and 938%, respectively, relative to the power level computed from artefact-free data. Although ocular artefacts were infrequent during NREM sleep, they affected up to 16% of the frontal and temporal EEG channels during REM sleep, primarily corrupting delta power by up to 33%. For both REM and NREM sleep, the automated artefact-rejection algorithms matched power levels to within ~10% of the artefact-free power level for each EEG channel and frequency band. In summary, although muscle and ocular artefacts affect only a small fraction of EEG data, they affect EEG spectral power significantly. This suggests the importance of using artefact-rejection algorithms before analysing EEG data. © 2017 European Sleep Research Society.
Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.
Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C
2009-09-01
A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as a T1-weighted image, can be performed prior to segmentation, the results are usually limited by partial volume effects due to interpolation of the low-resolution images. To improve the quality of tumor segmentation in clinical applications, where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on a Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that SHE achieves more accurate tumor segmentation than conventional multi-channel segmentation algorithms.
The Error Structure of the SMAP Single and Dual Channel Soil Moisture Retrievals
NASA Astrophysics Data System (ADS)
Dong, Jianzhi; Crow, Wade T.; Bindlish, Rajat
2018-01-01
Knowledge of the temporal error structure for remotely sensed surface soil moisture retrievals can improve our ability to exploit them for hydrologic and climate studies. This study employs a triple collocation analysis to investigate both the total variance and temporal autocorrelation of errors in Soil Moisture Active and Passive (SMAP) products generated from two separate soil moisture retrieval algorithms, the vertically polarized brightness temperature-based single-channel algorithm (SCA-V, the current baseline SMAP algorithm) and the dual-channel algorithm (DCA). A key assumption made in SCA-V is that real-time vegetation opacity can be accurately captured using only a climatology for vegetation opacity. Results demonstrate that while SCA-V generally outperforms DCA, SCA-V can produce larger total errors when this assumption is significantly violated by interannual variability in vegetation health and biomass. Furthermore, larger autocorrelated errors in SCA-V retrievals are found in areas with relatively large vegetation opacity deviations from climatological expectations. This implies that a significant portion of the autocorrelated error in SCA-V is attributable to the violation of its vegetation opacity climatology assumption and suggests that utilizing a real (as opposed to climatological) vegetation opacity time series in the SCA-V algorithm would reduce the magnitude of autocorrelated soil moisture retrieval errors.
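The triple collocation analysis used above rests on a simple identity: for three collocated estimates of the same truth with mutually independent errors, the error variance of one product equals the covariance of its differences with the other two. A minimal sketch under those standard assumptions (calibrated products, independent zero-mean errors), not the SMAP processing code:

```python
# Toy triple collocation: err_var(x) is estimated as Cov(x - y, x - z),
# because cross terms involving the independent errors of y and z vanish
# in expectation. Synthetic data with known error variances for checking.

import random

def triple_collocation_error_var(x, y, z):
    """Estimate the error variance of x from three collocated series."""
    n = len(x)
    mxy = sum(a - b for a, b in zip(x, y)) / n   # mean of (x - y)
    mxz = sum(a - c for a, c in zip(x, z)) / n   # mean of (x - z)
    return sum(((a - b) - mxy) * ((a - c) - mxz)
               for a, b, c in zip(x, y, z)) / n

random.seed(0)
truth = [random.gauss(0.0, 1.0) for _ in range(200_000)]
x = [t + random.gauss(0.0, 0.1) for t in truth]   # true error variance 0.01
y = [t + random.gauss(0.0, 0.2) for t in truth]
z = [t + random.gauss(0.0, 0.3) for t in truth]
est = triple_collocation_error_var(x, y, z)       # should be close to 0.01
```

The same difference series can be autocorrelated to characterize temporal error structure, which is the quantity the study examines for SCA-V and DCA.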
FPGA implementation of low complexity LDPC iterative decoder
NASA Astrophysics Data System (ADS)
Verma, Shivani; Sharma, Sanjay
2016-07-01
Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-achieving property and excellent performance on noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents an implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on a Xilinx XC3D3400A device from the Spartan-3A DSP family.
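The check-node update is where min-sum simplifies BP: instead of the hyperbolic-tangent rule, the outgoing message on each edge takes the product of the signs of all other incoming LLRs and the minimum of their magnitudes. A minimal sketch of that update (illustrative only; the paper's hardware design adds scheduling, quantization, and normalization details not shown here):

```python
# Min-sum check-node update: for each edge i, combine all *other* incoming
# variable-to-check messages (LLRs) by sign product and minimum magnitude.

def min_sum_check_update(llrs):
    """Return the outgoing check-to-variable message for each incoming LLR."""
    out = []
    for i in range(len(llrs)):
        others = [llrs[j] for j in range(len(llrs)) if j != i]
        sign = 1.0
        for v in others:
            sign *= 1.0 if v >= 0 else -1.0
        out.append(sign * min(abs(v) for v in others))
    return out

# Three edges into one check node: each output excludes its own input.
msgs = min_sum_check_update([0.5, -1.2, 2.0])   # -> [-1.2, 0.5, -0.5]
```

Because only a sign product and two running minima per check node are needed, this rule maps to far less routing and arithmetic than full BP, which is the complexity reduction the article exploits.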
Viterbi equalization for long-distance, high-speed underwater laser communication
NASA Astrophysics Data System (ADS)
Hu, Siqi; Mi, Le; Zhou, Tianhua; Chen, Weibiao
2017-07-01
In long-distance, high-speed underwater laser communication, because of strong absorption and scattering, the laser pulse is stretched as the communication distance increases and the water clarity decreases. The maximum communication bandwidth is limited by laser-pulse stretching, and improving the communication rate increases the intersymbol interference (ISI). To reduce the effect of ISI, the Viterbi equalization (VE) algorithm is used to estimate the maximum-likelihood receive sequence. The Monte Carlo method is used to simulate the stretching of the received laser pulse and the maximum communication rate at a wavelength of 532 nm in Jerlov IB and Jerlov II water channels with communication distances of 80, 100, and 130 m, respectively. The high-data-rate communication performance of the VE and hard-decision algorithms is compared. The simulation results show that the VE algorithm can reduce the ISI by selecting the minimum-error path. The trade-off between high-data-rate communication performance and a minor bit-error-rate performance loss makes VE a promising option for long-distance, high-speed underwater laser communication systems.
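Maximum-likelihood sequence estimation over an ISI channel can be illustrated with a toy Viterbi equalizer. The two-tap channel, on-off signalling, and squared-error branch metric below are assumptions for the example, not the paper's simulated pulse-stretching channel:

```python
# Toy Viterbi equalizer for r[n] = h0*b[n] + h1*b[n-1], bits in {0, 1}.
# The trellis state is the previous bit; the survivor with the smallest
# accumulated squared error is the maximum-likelihood sequence.

def viterbi_equalize(received, h0=1.0, h1=0.5):
    """ML sequence estimate of on-off bits through a two-tap ISI channel."""
    INF = float("inf")
    cost = {0: 0.0, 1: INF}          # channel memory starts empty (prev bit = 0)
    paths = {0: [], 1: []}
    for r in received:
        new_cost = {0: INF, 1: INF}
        new_paths = {}
        for prev in (0, 1):
            for bit in (0, 1):
                expected = h0 * bit + h1 * prev
                c = cost[prev] + (r - expected) ** 2   # branch metric
                if c < new_cost[bit]:
                    new_cost[bit] = c
                    new_paths[bit] = paths[prev] + [bit]
        cost, paths = new_cost, new_paths
    best = min((0, 1), key=lambda s: cost[s])
    return paths[best]

bits = [1, 0, 1, 1, 0, 0, 1]
rx = [1.0 * b + 0.5 * p for b, p in zip(bits, [0] + bits[:-1])]  # noiseless channel
decoded = viterbi_equalize(rx)
```

With noise added to `rx`, the same search still returns the minimum-error path, which is the mechanism the VE algorithm uses to undo pulse-stretching ISI.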
NASA Astrophysics Data System (ADS)
Biswas, Rahul; Blackburn, Lindy; Cao, Junwei; Essick, Reed; Hodge, Kari Alison; Katsavounidis, Erotokritos; Kim, Kyungmin; Kim, Young-Min; Le Bigot, Eric-Olivier; Lee, Chang-Hwan; Oh, John J.; Oh, Sang Hoon; Son, Edwin J.; Tao, Ye; Vaulin, Ruslan; Wang, Xiaoge
2013-09-01
The sensitivity of searches for astrophysical transients in data from the Laser Interferometer Gravitational-wave Observatory (LIGO) is generally limited by the presence of transient, non-Gaussian noise artifacts, which occur at a high enough rate such that accidental coincidence across multiple detectors is non-negligible. These “glitches” can easily be mistaken for transient gravitational-wave signals, and their robust identification and removal will help any search for astrophysical gravitational waves. We apply machine-learning algorithms (MLAs) to the problem, using data from auxiliary channels within the LIGO detectors that monitor degrees of freedom unaffected by astrophysical signals. Noise sources may produce artifacts in these auxiliary channels as well as the gravitational-wave channel. The number of auxiliary-channel parameters describing these disturbances may also be extremely large; high dimensionality is an area where MLAs are particularly well suited. We demonstrate the feasibility and applicability of three different MLAs: artificial neural networks, support vector machines, and random forests. These classifiers identify and remove a substantial fraction of the glitches present in two different data sets: four weeks of LIGO’s fourth science run and one week of LIGO’s sixth science run. We observe that all three algorithms agree on which events are glitches to within 10% for the sixth-science-run data, and support this by showing that the different optimization criteria used by each classifier generate the same decision surface, based on a likelihood-ratio statistic. Furthermore, we find that all classifiers obtain similar performance to the benchmark algorithm, the ordered veto list, which is optimized to detect pairwise correlations between transients in LIGO auxiliary channels and glitches in the gravitational-wave data. 
This suggests that most of the useful information currently extracted from the auxiliary channels is already described by this model. Future performance gains are thus likely to involve additional sources of information, rather than improvements in the classification algorithms themselves. We discuss several plausible sources of such new information as well as the ways of propagating it through the classifiers into gravitational-wave searches.
Kaufman, I; Luchinsky, D G; Tindjong, R; McClintock, P V E; Eisenberg, R S
2013-11-01
We use Brownian dynamics (BD) simulations to study the ionic conduction and valence selectivity of a generic electrostatic model of a biological ion channel as functions of the fixed charge Q(f) at its selectivity filter. We are thus able to reconcile the discrete calcium conduction bands recently revealed in our BD simulations, M0 (Q(f)=1e), M1 (3e), M2 (5e), with a set of sodium conduction bands L0 (0.5e), L1 (1.5e), thereby obtaining a completed pattern of conduction and selectivity bands vs Q(f) for the sodium-calcium channel family. An increase of Q(f) leads to an increase of calcium selectivity: L0 (sodium-selective, nonblocking channel) → M0 (nonselective channel) → L1 (sodium-selective channel with divalent block) → M1 (calcium-selective channel exhibiting the anomalous mole fraction effect). We create a consistent identification scheme where the L0 band is putatively identified with the eukaryotic sodium channel. The scheme created is able to account for the experimentally observed mutation-induced transformations between nonselective channels, sodium-selective channels, and calcium-selective channels, which we interpret as transitions between different rows of the identification table. By considering the potential energy changes during permeation, we show explicitly that the multi-ion conduction bands of calcium and sodium channels arise as the result of resonant barrierless conduction. The pattern of periodic conduction bands is explained on the basis of sequential neutralization taking account of self-energy, as Q(f)(z,i)=ze(1/2+i), where i is the order of the band and z is the valence of the ion. Our results confirm the crucial influence of electrostatic interactions on conduction and on the Ca(2+)/Na(+) valence selectivity of calcium and sodium ion channels. The model and results could also be applicable to biomimetic nanopores with charged walls.
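The sequential-neutralization rule quoted above, Q(f)(z,i) = ze(1/2+i), can be checked directly against the reported band positions. A one-line sketch (charges expressed in units of the elementary charge e):

```python
# Band positions predicted by Qf(z, i) = z*e*(1/2 + i), in units of e:
# i is the band order, z the ion valence (1 for Na+, 2 for Ca2+).

def band_charge(z, i):
    """Fixed charge (in units of e) at which conduction band i appears."""
    return z * (0.5 + i)

sodium = [band_charge(1, i) for i in range(2)]    # L0, L1
calcium = [band_charge(2, i) for i in range(3)]   # M0, M1, M2
```

This reproduces the sodium bands L0 (0.5e), L1 (1.5e) and the calcium bands M0 (1e), M1 (3e), M2 (5e) stated in the abstract.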
Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang
2014-12-01
The appropriate algorithm for calibration set selection is one of the key technologies for a good NIR quantitative model. Different algorithms exist for calibration set selection, such as the Random Sampling (RS) algorithm, the Conventional Selection (CS) algorithm, the Kennard-Stone (KS) algorithm, and the Sample set Partitioning based on joint x-y distances (SPXY) algorithm. However, systematic comparisons among these algorithms are lacking. In the present paper, NIR quantitative models to determine the asiaticoside content in Centella total glucosides were established, for which 7 indexes were classified and selected, and the effects of the CS, KS and SPXY algorithms for calibration set selection on the accuracy and robustness of the NIR quantitative models were investigated. The accuracy indexes of NIR quantitative models with a calibration set selected by the SPXY algorithm differed significantly from those with a calibration set selected by the CS or KS algorithm, while the robustness indexes, such as RMSECV and |RMSEP-RMSEC|, showed no significant difference. Therefore, the SPXY algorithm for calibration set selection can improve the predictive accuracy of NIR quantitative models for determining the asiaticoside content in Centella total glucosides without significantly affecting the robustness of the models, which provides a reference for determining the appropriate calibration set selection algorithm when NIR quantitative models are established for solid systems of traditional Chinese medicine.
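Of the algorithms compared, Kennard-Stone is the easiest to state compactly: seed the calibration set with the two most distant samples, then repeatedly add the sample whose minimum distance to the already-selected set is largest. A compact pure-Python sketch of that criterion (illustrative, not the paper's implementation; SPXY extends the same idea with a joint x-y distance):

```python
# Kennard-Stone calibration set selection with Euclidean distances.

def kennard_stone(samples, k):
    """Return indices of k samples chosen by the Kennard-Stone criterion."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    n = len(samples)
    # seed with the most distant pair of samples
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: d2(samples[p[0]], samples[p[1]]))
    selected = [i0, j0]
    while len(selected) < k:
        rest = [i for i in range(n) if i not in selected]
        # maximin rule: farthest remaining sample from the selected set
        nxt = max(rest, key=lambda i: min(d2(samples[i], samples[s])
                                          for s in selected))
        selected.append(nxt)
    return selected

pts = [(0, 0), (10, 0), (0, 10), (5, 5), (1, 1)]
chosen = kennard_stone(pts, 3)   # picks the spread-out corner points
```

Because the rule uses only predictor-space (x) distances, it spreads the calibration set over the design space; SPXY's inclusion of the response (y) in the distance is what the paper credits for its accuracy gain.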
SST algorithm based on radiative transfer model
NASA Astrophysics Data System (ADS)
Mat Jafri, Mohd Z.; Abdullah, Khiruddin; Bahari, Alui
2001-03-01
An algorithm for measuring sea surface temperature (SST) without recourse to in-situ data for calibration is proposed. The algorithm, which is based on the infrared signal recorded by the satellite sensor, is composed of three terms: the surface emission, the up-welling radiance emitted by the atmosphere, and the down-welling atmospheric radiance reflected at the sea surface. This algorithm requires the transmittance values of the thermal bands. The angular dependence of the transmittance function was modeled using the MODTRAN code, with radiosonde data as input. The expression for transmittance as a function of zenith view angle was obtained for each channel through regression of the MODTRAN output. Ocean Color Temperature Scanner (OCTS) data from the Advanced Earth Observation Satellite (ADEOS) were used in this study. The study area covers the seas northwest of Peninsular Malaysia. In-situ data (ship-collected SST values) were used for verification of the results. Cloud-contaminated pixels were masked out using the standard procedures applied to Advanced Very High Resolution Radiometer (AVHRR) data. The cloud-free pixels at the in-situ sites were extracted for analysis, the OCTS data were substituted into the proposed algorithm, and the appropriate transmittance value for each channel was assigned in the calculation. Accuracy was assessed from the correlation and rms deviations between the computed and ship-collected values. The results were also compared with those from the OCTS multi-channel sea surface temperature algorithm; the comparison produced high correlation values, and the performance of the proposed algorithm is comparable with that of the established OCTS algorithm. The effect of emissivity on the retrieved SST values was also investigated, and an SST map was generated and contoured manually.
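The three-term radiance budget described above can be written in a generic form. This is a standard reference form for a thermal channel at zenith view angle θ, shown here under the usual assumptions (specular reflection, channel-averaged quantities); the paper's exact notation may differ:

```latex
L_{\mathrm{sensor}}(\theta) \;=\;
  \varepsilon\,\tau(\theta)\,B\!\left(T_s\right)
  \;+\; L_{\mathrm{atm}}^{\uparrow}(\theta)
  \;+\; (1-\varepsilon)\,\tau(\theta)\,L_{\mathrm{atm}}^{\downarrow}
```

Here B(T_s) is the Planck radiance at the sea surface temperature T_s, ε the sea-surface emissivity, τ(θ) the view-angle-dependent transmittance obtained from the MODTRAN regressions, and the last two terms are the up-welling atmospheric emission and the reflected down-welling emission. Inverting the first term for T_s, given τ(θ) and the atmospheric terms, yields the SST without in-situ calibration.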
The JET diagnostic fast central acquisition and trigger system (abstract)
NASA Astrophysics Data System (ADS)
Edwards, A. W.; Blackler, K.
1995-01-01
Most plasma physics diagnostics sample at a fixed frequency that is normally matched to available memory limits. This technique is not appropriate for long pulse machines such as JET where sampling frequencies of hundreds of kHz are required to diagnose very fast events. As a result of work using real-time event selection within the previous JET soft x-ray diagnostic, a single data acquisition and event triggering system for all suitable fast diagnostics, the fast central acquisition and trigger system (Fast CATS), has been developed for JET. The front-end analog-to-digital conversion (ADC) part samples all channels at 250 kHz, with a 100 kHz pass band and a stop band of 125 kHz. The back-end data collection system is based around Texas Instruments TMS320C40 microprocessors. Within this system, two levels of trigger algorithms are able to evaluate data. The first level typically analyzes data on a per diagnostic and individual channel basis. The second level looks at the data from one or more diagnostics in a window around the time of interest flagged by the first level system. Selection criteria defined by the diagnosticians are then imposed on the results from the second level to decide whether that data should be kept. The use of such a system involving intelligent real time trigger algorithms and fast data analysis will improve both the quantity and quality of JET diagnostic data, while providing valuable input to the design of data acquisition systems for very long pulse machines such as ITER. This paper will give an overview of the various elements of this new system. In addition, first results from this system following the restart of JET operation will be presented.
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2008-01-01
In this paper, an enhanced on-line diagnostic system which utilizes dual-channel sensor measurements is developed for the aircraft engine application. The enhanced system is composed of a nonlinear on-board engine model (NOBEM), the hybrid Kalman filter (HKF) algorithm, and fault detection and isolation (FDI) logic. The NOBEM provides the analytical third channel against which the dual-channel measurements are compared. The NOBEM is further utilized as part of the HKF algorithm which estimates measured engine parameters. Engine parameters obtained from the dual-channel measurements, the NOBEM, and the HKF are compared against each other. When the discrepancy among the signals exceeds a tolerance level, the FDI logic determines the cause of discrepancy. Through this approach, the enhanced system achieves the following objectives: 1) anomaly detection, 2) component fault detection, and 3) sensor fault detection and isolation. The performance of the enhanced system is evaluated in a simulation environment using faults in sensors and components, and it is compared to an existing baseline system.
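The three-way comparison at the heart of the FDI logic can be sketched as a simple voting rule: two hardware channels plus the analytical third channel from the on-board model. The thresholds, signatures, and return labels below are illustrative assumptions, not the paper's actual logic tables:

```python
# Minimal dual-channel + analytical-third-channel isolation sketch:
# if the two sensor channels agree, no sensor fault is declared; if they
# disagree, the channel farther from the model estimate is suspected.

def isolate_fault(ch_a, ch_b, model_est, tol):
    """Compare dual-channel readings against the analytical third channel."""
    if abs(ch_a - ch_b) <= tol:
        return "no-sensor-fault"        # channels agree with each other
    # channels disagree: trust the one closer to the model estimate
    if abs(ch_a - model_est) <= abs(ch_b - model_est):
        return "channel-B-fault"
    return "channel-A-fault"

ok = isolate_fault(100.0, 100.1, 100.05, tol=1.0)
suspect = isolate_fault(120.0, 100.0, 100.2, tol=1.0)
```

A persistent discrepancy of *both* channels against the model, by contrast, points to a component fault or model anomaly rather than a sensor fault, which is why the enhanced system compares all three signals rather than only the two hardware channels.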
MuLoG, or How to Apply Gaussian Denoisers to Multi-Channel SAR Speckle Reduction?
Deledalle, Charles-Alban; Denis, Loic; Tabti, Sonia; Tupin, Florence
2017-09-01
Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric, or tomographic modes, SAR images are multi-channel, and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of the SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing, with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), for embedding such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from ongoing progress in Gaussian denoising; and because several speckle reduction results are produced, method-specific artifacts can be identified and dismissed by comparing the results.
Fractional Poisson-Nernst-Planck Model for Ion Channels I: Basic Formulations and Algorithms.
Chen, Duan
2017-11-01
In this work, we propose a fractional Poisson-Nernst-Planck model to describe ion permeation in gated ion channels. Due to intrinsic conformational changes, crowdedness in narrow channel pores, and binding and trapping introduced by functioning units of channel proteins, ionic transport in the channel exhibits power-law-like anomalous diffusion dynamics. We start from a continuous-time random walk model for a single ion and use a long-tailed density distribution function for the particle jump waiting time to derive the fractional Fokker-Planck equation. This is then generalized to the macroscopic fractional Poisson-Nernst-Planck model for ionic concentrations. Necessary computational algorithms are designed to implement numerical simulations for the proposed model, and the dynamics of the gating current is investigated. Numerical simulations show that the fractional PNP model provides a qualitatively more reasonable match to the profile of gating currents from experimental observations. Meanwhile, the proposed model motivates new challenges in terms of mathematical modeling and computations.
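The CTRW-to-fractional-equation step mentioned above commonly yields the following reference form. For a heavy-tailed waiting-time density w(t) ~ t^{-(1+α)} with 0 < α < 1, the standard derivation (this is the generic textbook form, not necessarily the paper's exact notation) gives the time-fractional Fokker-Planck equation

```latex
\frac{\partial p(x,t)}{\partial t}
  \;=\; {}_{0}D_{t}^{1-\alpha}
  \left[
    \frac{\partial}{\partial x}\,\frac{V'(x)}{\eta_{\alpha}}
    \;+\; K_{\alpha}\,\frac{\partial^{2}}{\partial x^{2}}
  \right] p(x,t),
```

where ${}_{0}D_{t}^{1-\alpha}$ is the Riemann-Liouville fractional derivative, $V(x)$ the external potential, $\eta_{\alpha}$ a generalized friction coefficient, and $K_{\alpha}$ the generalized diffusion coefficient. Setting α = 1 recovers the ordinary Fokker-Planck equation, and replacing the drift-diffusion operator with the Nernst-Planck flux for each ionic species leads to the macroscopic fractional PNP system the paper simulates.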
Channel Deviation-Based Power Control in Body Area Networks.
Van, Son Dinh; Cotton, Simon L; Smith, David B
2018-05-01
Internet-enabled body area networks (BANs) will form a core part of future remote health monitoring and ambient assisted living technology. In BAN applications, due to the dynamic nature of human activity, the off-body BAN channel can be prone to deep fading caused by body shadowing and multipath fading. Using this knowledge, we present some novel, practical adaptive power control protocols based on the channel deviation that simultaneously prolong the lifetime of wearable devices and reduce outage probability. The proposed schemes are both flexible and relatively simple to implement on hardware platforms with constrained resources, making them inherently suitable for BAN applications. We present the key algorithm parameters used to respond dynamically to channel variation. This allows the algorithms to achieve better energy efficiency and signal reliability in everyday usage scenarios, such as those in which a person undertakes many different activities (e.g., sitting, walking, standing). We also profile their performance against traditional, optimal, and other existing schemes, demonstrating that not only is the outage probability reduced significantly, but the proposed algorithms also reduce the average transmit power compared to the competing schemes.
Assessment of the NPOESS/VIIRS Nighttime Infrared Cloud Optical Properties Algorithms
NASA Astrophysics Data System (ADS)
Wong, E.; Ou, S. C.
2008-12-01
In this paper we describe two NPOESS VIIRS IR algorithms used to retrieve microphysical properties of water and ice clouds during nighttime conditions. Both algorithms employ four VIIRS IR channels: M12 (3.7 μm), M14 (8.55 μm), M15 (10.7 μm) and M16 (12 μm). The physical basis of the two algorithms is similar: the Cloud Top Temperature (CTT) is derived from M14 and M16 for ice clouds, while the Cloud Optical Thickness (COT) and Cloud Effective Particle Size (CEPS) are derived from M12 and M15. The two algorithms differ in the radiative transfer parameterization equations used for ice and water clouds. Both the VIIRS nighttime IR algorithms and the CERES split-window method employ the 3.7 μm and 10.7 μm bands for cloud optical property retrievals, apparently based on similar physical principles but with different implementations. It is therefore reasonable to expect the VIIRS and CERES IR algorithms to have comparable performance and similar limitations. To demonstrate the VIIRS nighttime IR algorithm performance, we select a number of test cases using NASA MODIS L1b radiance products as proxy input data for VIIRS. The VIIRS-retrieved COT and CEPS are then compared to cloud products available from the MODIS, NASA CALIPSO, CloudSat and CERES sensors. For the MODIS product, the nighttime cloud emissivity serves as an indirect comparison to VIIRS COT. For the CALIPSO and CloudSat products, the layered COT is used for direct comparison. Finally, the CERES products provide direct comparisons with both COT and CEPS. This study can provide only a qualitative assessment of the VIIRS IR algorithms due to the large uncertainties in these cloud products.
Local SAR in Parallel Transmission Pulse Design
Lee, Joonsung; Gebhardt, Matthias; Wald, Lawrence L.; Adalsteinsson, Elfar
2011-01-01
The management of local and global power deposition in human subjects (Specific Absorption Rate, SAR) is a fundamental constraint to the application of parallel transmission (pTx) systems. Even though the pTx and single channel have to meet the same SAR requirements, the complex behavior of the spatial distribution of local SAR for transmission arrays poses problems that are not encountered in conventional single-channel systems and places additional requirements on pTx RF pulse design. We propose a pTx pulse design method which builds on recent work to capture the spatial distribution of local SAR in numerical tissue models in a compressed parameterization in order to incorporate local SAR constraints within computation times that accommodate pTx pulse design during an in vivo MRI scan. Additionally, the algorithm yields a Protocol-specific Ultimate Peak in Local SAR (PUPiL SAR), which is shown to bound the achievable peak local SAR for a given excitation profile fidelity. The performance of the approach was demonstrated using a numerical human head model and a 7T eight-channel transmit array. The method reduced peak local 10g SAR by 14–66% for slice-selective pTx excitations and 2D selective pTx excitations compared to a pTx pulse design constrained only by global SAR. The primary tradeoff incurred for reducing peak local SAR was an increase in global SAR, up to 34% for the evaluated examples, which is favorable in cases where local SAR constraints dominate the pulse applications. PMID:22083594
Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon
2012-01-01
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sadeghi, Saman; MacKay, William A.; van Dam, R. Michael; Thompson, Michael
2011-02-01
Real-time analysis of multi-channel spatio-temporal sensor data presents a considerable technical challenge for a number of applications. For example, in brain-computer interfaces, signal patterns originating on a time-dependent basis from an array of electrodes on the scalp (i.e. electroencephalography) must be analyzed in real time to recognize mental states and translate these to commands which control operations in a machine. In this paper we describe a new technique for recognition of spatio-temporal patterns based on performing online discrimination of time-resolved events through the use of correlation of phase dynamics between various channels in a multi-channel system. The algorithm extracts unique sensor signature patterns associated with each event during a training period and ranks importance of sensor pairs in order to distinguish between time-resolved stimuli to which the system may be exposed during real-time operation. We apply the algorithm to electroencephalographic signals obtained from subjects tested in the neurophysiology laboratories at the University of Toronto. The extension of this algorithm for rapid detection of patterns in other sensing applications, including chemical identification via chemical or bio-chemical sensor arrays, is also discussed.
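A common way to quantify the channel-pair phase correlation described above is the phase-locking value (PLV): the magnitude of the time-averaged unit phasor of the phase difference between two channels. The toy below uses synthetic phases (in practice the instantaneous phase would come from a Hilbert or wavelet transform of each channel); it is illustrative of the general technique, not the authors' algorithm:

```python
# Phase-locking value between two channels: 1 for a constant phase lag,
# near 0 when the phase difference is uniformly scattered.

import cmath
import math
import random

def phase_locking_value(phases_a, phases_b):
    """|mean of exp(i * (phi_a - phi_b))| over the recording."""
    n = len(phases_a)
    return abs(sum(cmath.exp(1j * (pa - pb))
                   for pa, pb in zip(phases_a, phases_b)) / n)

random.seed(1)
n = 2000
locked_a = [2 * math.pi * 0.01 * t for t in range(n)]          # common rhythm
locked_b = [p + 0.7 for p in locked_a]                          # constant lag
unlocked = [random.uniform(0, 2 * math.pi) for _ in range(n)]   # unrelated phases

plv_locked = phase_locking_value(locked_a, locked_b)   # 1.0: perfectly locked
plv_random = phase_locking_value(locked_a, unlocked)   # small: no locking
```

Computing this statistic for every channel pair, per event class, yields the kind of pairwise signature ranking the algorithm uses to discriminate time-resolved stimuli.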
Distributed Channel Allocation and Time Slot Optimization for Green Internet of Things.
Ding, Kaiqi; Zhao, Haitao; Hu, Xiping; Wei, Jibo
2017-10-28
In sustainable smart cities, power saving is a severe challenge in the energy-constrained Internet of Things (IoT). Efficient utilization of the limited non-overlapping channels and time resources is a promising way to reduce network interference and save energy. In this paper, we propose a joint channel allocation and time slot optimization solution for IoT. First, we propose a channel ranking algorithm which enables each node to rank its available channels based on the channel properties. Then, we propose a distributed channel allocation algorithm so that each node can choose a proper channel based on the channel ranking and its own residual energy. Finally, the sleeping duration and spectrum sensing duration are jointly optimized to maximize the normalized throughput while satisfying the energy consumption constraints. Unlike earlier approaches, our solution requires no central coordination or global information; each node operates on its own local information in a fully distributed manner. Theoretical analysis and extensive simulations validate that, when our solution is applied in an IoT network: (i) each node can be allocated a proper channel based on its residual energy, balancing network lifetime; (ii) the network rapidly converges to collision-free transmission through each node's learning ability during distributed channel allocation; and (iii) the network throughput is further improved via dynamic time slot optimization.
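A toy sketch of the two distributed steps described above: ranking available channels by their properties, then letting a node pick a channel according to its residual energy. The quality metrics and the energy-to-rank mapping are illustrative assumptions, not the paper's exact formulas:

```python
def rank_channels(channels):
    """Rank available channels by a simple quality score; the metrics
    (idle probability, interference level) stand in for the paper's
    channel properties."""
    return sorted(channels,
                  key=lambda c: c["idle_prob"] / (1.0 + c["interference"]),
                  reverse=True)

def choose_channel(channels, residual_energy, full_energy):
    """Energy-aware pick: a nearly depleted node settles for a lower-ranked
    channel, leaving the best channels to nodes with more energy (an
    assumed mapping, not the paper's allocation rule)."""
    ranked = rank_channels(channels)
    idx = int((1.0 - residual_energy / full_energy) * (len(ranked) - 1))
    return ranked[idx]

channels = [
    {"id": 1, "idle_prob": 0.9, "interference": 0.1},
    {"id": 2, "idle_prob": 0.5, "interference": 0.5},
    {"id": 3, "idle_prob": 0.7, "interference": 0.2},
]
best = choose_channel(channels, residual_energy=9.0, full_energy=10.0)
```

Each node can evaluate this locally, which is the point of the fully distributed design: no coordinator needs to see the whole channel table.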
A TDM link with channel coding and digital voice.
NASA Technical Reports Server (NTRS)
Jones, M. W.; Tu, K.; Harton, P. L.
1972-01-01
The features of a TDM (time-division multiplexed) link model are described. A PCM telemetry sequence was coded for error correction and multiplexed with a digitized voice channel. An all-digital implementation of a variable-slope delta modulation algorithm was used to digitize the voice channel. The results of extensive testing are reported. The measured coding gain and the system performance over a Gaussian channel are compared with theoretical predictions and computer simulations. Word intelligibility scores are reported as a measure of voice channel performance.
Han, Guangjie; Li, Shanshan; Zhu, Chunsheng; Jiang, Jinfang; Zhang, Wenbo
2017-01-01
Marine environmental monitoring provides crucial information and support for the exploitation, utilization, and protection of marine resources. With the rapid development of information technology, the development of three-dimensional underwater acoustic sensor networks (3D UASNs) provides a novel strategy to acquire marine environment information conveniently, efficiently and accurately. However, the specific propagation effects of the acoustic communication channel lead to a decreased probability of successful information delivery with increased distance. Therefore, we investigate two probabilistic neighborhood-based data collection algorithms for 3D UASNs which are based on a probabilistic acoustic communication model instead of the traditional deterministic acoustic communication model. An autonomous underwater vehicle (AUV) is employed to traverse along the designed path to collect data from neighborhoods. For 3D UASNs without prior deployment knowledge, partitioning the network into grids allows the AUV to visit the central location of each grid for data collection. For 3D UASNs in which the deployment knowledge is known in advance, the AUV only needs to visit several selected locations, determined by constructing a minimum probabilistic neighborhood covering set, to reduce data latency. Moreover, by increasing the number of transmission rounds, our proposed algorithms can provide a tradeoff between data collection latency and information gain. These algorithms are compared with the basic nearest-neighbor heuristic algorithm via simulations. Simulation analyses show that our proposed algorithms can efficiently reduce the average data collection completion time, corresponding to a decrease in data latency. PMID:28208735
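For the no-prior-knowledge case, the grid partition can be sketched as follows; the cell size and deployment bounds are arbitrary illustrations, and the real algorithms additionally plan the AUV's traversal path between the centers:

```python
import numpy as np

def grid_centers(bounds, cell):
    """Way-points for the AUV when deployment knowledge is unavailable:
    partition the 3-D deployment volume into cubic cells and return the
    center of each cell for the AUV to visit."""
    axes = [np.arange(lo + cell / 2.0, hi, cell) for lo, hi in bounds]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    return np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])

# A 100 m x 100 m x 50 m volume split into 50 m cells -> 2 x 2 x 1 centers.
pts = grid_centers([(0.0, 100.0), (0.0, 100.0), (0.0, 50.0)], 50.0)
```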
Retrieval of volcanic ash height from satellite-based infrared measurements
NASA Astrophysics Data System (ADS)
Zhu, Lin; Li, Jun; Zhao, Yingying; Gong, He; Li, Wenjie
2017-05-01
A new algorithm for retrieving volcanic ash cloud height from satellite-based measurements is presented. This algorithm, which was developed in preparation for China's next-generation meteorological satellite (FY-4), is based on volcanic ash microphysical property simulation and statistical optimal estimation theory. The MSG satellite's main payload, the 12-channel Spinning Enhanced Visible and Infrared Imager, was used as proxy data to test this new algorithm. A series of eruptions of Iceland's Eyjafjallajökull volcano during April to May 2010 and the Puyehue-Cordón Caulle volcanic complex eruption in the Chilean Andes on 16 June 2011 were selected as two typical cases for evaluating the algorithm under various meteorological backgrounds. Independent volcanic ash simulation training samples and satellite-based Cloud-Aerosol Lidar with Orthogonal Polarization data were used as validation data. It is demonstrated that the statistically based volcanic ash height algorithm is able to rapidly retrieve volcanic ash heights globally. The retrieved ash heights show comparable accuracy with both the independent training data and the lidar measurements, which is consistent with previous studies. However, under complicated backgrounds with multiple vertical layers, underlying stratus clouds tend to have detrimental effects on the final retrieval accuracy. This problem remains unresolved, as it does for many other previously published methods based on passive satellite sensors. Compared with previous studies, the FY-4 ash height algorithm is independent of simultaneous atmospheric profiles, providing a flexible way to estimate volcanic ash height using passive satellite infrared measurements.
Information theoretic analysis of edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2010-08-01
Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end information-theoretic system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. An edge detection algorithm is regarded as having high performance only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods; there has been no common tool for evaluating the performance of the different algorithms and for guiding the selection of the best algorithm for a given system or scene. Our information-theoretic assessment provides such a tool, allowing us to compare the different edge detection operators in a common environment.
Damrath, Martin; Korte, Sebastian; Hoeher, Peter Adam
2017-01-01
This paper introduces the equivalent discrete-time channel model (EDTCM) to the area of diffusion-based molecular communication (DBMC). Emphasis is on an absorbing receiver, which is based on the so-called first passage time concept. In the wireless communications community the EDTCM is well known. Therefore, it is anticipated that the EDTCM improves the accessibility of DBMC and supports the adaptation of classical wireless communication algorithms to the area of DBMC. Furthermore, the EDTCM has the capability to provide a remarkable reduction of computational complexity compared to random walk based DBMC simulators. Besides the exact EDTCM, three approximations thereof based on binomial, Gaussian, and Poisson approximation are proposed and analyzed in order to further reduce computational complexity. In addition, the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm is adapted to all four channel models. Numerical results show the performance of the exact EDTCM, illustrate the performance of the adapted BCJR algorithm, and demonstrate the accuracy of the approximations.
Message passing with queues and channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dozsa, Gabor J; Heidelberger, Philip; Kumar, Sameer
In an embodiment, a send thread receives an identifier that identifies a destination node and a pointer to data. The send thread creates a first send request in response to the receipt of the identifier and the data pointer. The send thread selects a selected channel from among a plurality of channels. The selected channel comprises a selected hand-off queue and an identification of a selected message unit. Each of the channels identifies a different message unit. The selected hand-off queue is randomly accessible. If the selected hand-off queue contains an available entry, the send thread adds the first send request to the selected hand-off queue. If the selected hand-off queue does not contain an available entry, the send thread removes a second send request from the selected hand-off queue and sends the second send request to the selected message unit.
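The send-path logic of this embodiment can be paraphrased in a few lines. The channel-selection policy and queue capacity are unspecified in the text, so a random choice and a small bounded deque are stand-ins, and the assumption that the new request is enqueued after draining an older one is ours:

```python
import random
from collections import deque

class Channel:
    def __init__(self, message_unit):
        self.hand_off_queue = deque(maxlen=4)  # bounded, randomly accessible
        self.message_unit = message_unit       # each channel has its own unit

def post_send(channels, dest, data_ptr, sent):
    """Build a send request, pick a channel, enqueue if there is room;
    otherwise drain one older request to the channel's message unit
    (here just recorded in `sent`) and then enqueue the new one."""
    request = (dest, data_ptr)
    ch = random.choice(channels)                   # selection policy unspecified
    if len(ch.hand_off_queue) < ch.hand_off_queue.maxlen:
        ch.hand_off_queue.append(request)          # available entry: just enqueue
    else:
        older = ch.hand_off_queue.popleft()        # queue full: hand one off
        sent.append((ch.message_unit, older))      # stand-in for the unit send
        ch.hand_off_queue.append(request)          # assumed follow-up step
```

Draining on a full queue keeps the send thread making progress without blocking, which is the apparent intent of the two-branch design.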
NASA Astrophysics Data System (ADS)
Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei
2018-04-01
We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models, and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To accurately identify the outer-layer resistivity, we propose an apparent resistivity correction algorithm based on a series relationship model of the layered resistance, in which the apparent resistivity and diffusion depth of the different time channels are approximately replaced by the related model parameters. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and that the corrected resistivities of the larger time channels follow the outer-layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with a small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.
NASA Astrophysics Data System (ADS)
Merk, D.; Zinner, T.
2013-02-01
In this paper a new detection scheme for convective initiation (CI) under day and night conditions is presented. The new algorithm combines the strengths of two existing methods for detecting CI with geostationary satellite data and uses the channels of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). For the new algorithm five infrared criteria from the Satellite Convection Analysis and Tracking algorithm (SATCAST) and one high-resolution visible channel (HRV) criterion from Cb-TRAM were adapted. This set of criteria aims to identify the typical development of quickly developing convective cells in an early stage. The different criteria include time trends of the 10.8 IR channel and IR channel differences, as well as their time trends. To provide the trend fields an optical-flow-based method is used, the pyramidal matching algorithm, which is part of Cb-TRAM. The new detection scheme is implemented in Cb-TRAM and is verified for seven days which comprise different weather situations in Central Europe. Contrasted with the original early-stage detection scheme of Cb-TRAM, skill scores are provided. From the comparison against detections of later thunderstorm stages, which are also provided by Cb-TRAM, a decrease in false prior warnings (false alarm ratio) from 91 to 81% is presented, an increase of the critical success index from 7.4 to 12.7%, and a decrease of the BIAS from 320 to 146% for normal scan mode. Similar trends are found for rapid scan mode. Most obvious is the decline of false alarms found for synoptic conditions with upper cold air masses triggering convection.
NASA Astrophysics Data System (ADS)
Merk, D.; Zinner, T.
2013-08-01
In this paper a new detection scheme for convective initiation (CI) under day and night conditions is presented. The new algorithm combines the strengths of two existing methods for detecting CI with geostationary satellite data. It uses the channels of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). For the new algorithm five infrared (IR) criteria from the Satellite Convection Analysis and Tracking algorithm (SATCAST) and one high-resolution visible channel (HRV) criteria from Cb-TRAM were adapted. This set of criteria aims to identify the typical development of quickly developing convective cells in an early stage. The different criteria include time trends of the 10.8 IR channel, and IR channel differences, as well as their time trends. To provide the trend fields an optical-flow-based method is used: the pyramidal matching algorithm, which is part of Cb-TRAM. The new detection scheme is implemented in Cb-TRAM, and is verified for seven days which comprise different weather situations in central Europe. Contrasted with the original early-stage detection scheme of Cb-TRAM, skill scores are provided. From the comparison against detections of later thunderstorm stages, which are also provided by Cb-TRAM, a decrease in false prior warnings (false alarm ratio) from 91 to 81% is presented, an increase of the critical success index from 7.4 to 12.7%, and a decrease of the BIAS from 320 to 146% for normal scan mode. Similar trends are found for rapid scan mode. Most obvious is the decline of false alarms found for the synoptic class "cold air" masses.
Ion-binding properties of a K+ channel selectivity filter in different conformations.
Liu, Shian; Focke, Paul J; Matulef, Kimberly; Bian, Xuelin; Moënne-Loccoz, Pierre; Valiyaveetil, Francis I; Lockless, Steve W
2015-12-08
K(+) channels are membrane proteins that selectively conduct K(+) ions across lipid bilayers. Many voltage-gated K(+) (KV) channels contain two gates, one at the bundle crossing on the intracellular side of the membrane and another in the selectivity filter. The gate at the bundle crossing is responsible for channel opening in response to a voltage stimulus, whereas the gate at the selectivity filter is responsible for C-type inactivation. Together, these regions determine when the channel conducts ions. The K(+) channel from Streptomyces lividans (KcsA) undergoes an inactivation process that is functionally similar to KV channels, which has led to its use as a practical system to study inactivation. Crystal structures of KcsA channels with an open intracellular gate revealed a selectivity filter in a constricted conformation similar to the structure observed in closed KcsA containing only Na(+) or low [K(+)]. However, recent work using a semisynthetic channel that is unable to adopt a constricted filter but inactivates like WT channels challenges this idea. In this study, we measured the equilibrium ion-binding properties of channels with conductive, inactivated, and constricted filters using isothermal titration calorimetry (ITC). EPR spectroscopy was used to determine the state of the intracellular gate of the channel, which we found can depend on the presence or absence of a lipid bilayer. Overall, we discovered that K(+) ion binding to channels with an inactivated or conductive selectivity filter is different from K(+) ion binding to channels with a constricted filter, suggesting that the structures of these channels are different.
Brankov, Jovan G
2013-10-21
The channelized Hotelling observer (CHO) has become a widely used approach for evaluating medical image quality, acting as a surrogate for human observers in early-stage research on assessment and optimization of imaging devices and algorithms. The CHO is typically used to measure lesion detectability. Its popularity stems from experiments showing that the CHO's detection performance can correlate well with that of human observers. In some cases, CHO performance overestimates human performance; to counteract this effect, an internal-noise model is introduced, which allows the CHO to be tuned to match human-observer performance. Typically, this tuning is achieved using example data obtained from human observers. We argue that this internal-noise tuning step is essentially a model training exercise; therefore, just as in supervised learning, it is essential to test the CHO with an internal-noise model on a set of data that is distinct from that used to tune (train) the model. Furthermore, we argue that, if the CHO is to provide useful insights about new imaging algorithms or devices, the test data should reflect such potential differences from the training data; it is not sufficient simply to use new noise realizations of the same imaging method. Motivated by these considerations, the novelty of this paper is the use of new model selection criteria to evaluate ten established internal-noise models, utilizing four different channel models, in a train-test approach. Though not the focus of the paper, a new internal-noise model is also proposed that outperformed the ten established models in the cases tested. The results, using cardiac perfusion SPECT data, show that the proposed train-test approach is necessary, as judged by the newly proposed model selection criteria, to avoid spurious conclusions. 
The results also demonstrate that, in some models, the optimal internal-noise parameter is very sensitive to the choice of training data; therefore, these models are prone to overfitting, and will not likely generalize well to new data. In addition, we present an alternative interpretation of the CHO as a penalized linear regression wherein the penalization term is defined by the internal-noise model.
Effective pore size and radius of capture for K+ ions in K-channels
Moldenhauer, Hans; Díaz-Franulic, Ignacio; González-Nilo, Fernando; Naranjo, David
2016-01-01
Reconciling protein functional data with crystal structure is arduous because rare conformations or crystallization artifacts occur. Here we present a tool to validate the dimensions of open pore structures of potassium-selective ion channels. We used freely available algorithms to calculate the molecular contour of the pore to determine the effective internal pore radius (rE) in several K-channel crystal structures. rE was operationally defined as the radius of the biggest sphere able to enter the pore from the cytosolic side. We obtained consistent rE estimates for MthK and Kv1.2/2.1 structures, with rE = 5.3–5.9 Å and rE = 4.5–5.2 Å, respectively. We compared these structural estimates with functional assessments of the internal mouth radii of capture (rC) for two electrophysiological counterparts of the MthK and Kv1.2/2.1 structures: the large-conductance calcium-activated K-channel (rC = 2.2 Å) and the Shaker Kv-channel (rC = 0.8 Å), respectively. Calculating the difference between rE and rC produced consistent size radii of 3.1–3.7 Å and 3.6–4.4 Å for hydrated K+ ions. These hydrated K+ estimates harmonize with others obtained with diverse experimental and theoretical methods. Thus, these findings validate MthK and the Kv1.2/2.1 structures as templates for open BK and Kv-channels, respectively. PMID:26831782
Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.
Tao, Liang; Kwan, Hon Keung
2012-07-01
Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
Spatial detection of tv channel logos as outliers from the content
NASA Astrophysics Data System (ADS)
Ekin, Ahmet; Braspenning, Ralph
2006-01-01
This paper proposes a purely image-based TV channel logo detection algorithm that can detect logos independently of their motion and transparency features. The proposed algorithm can robustly detect any type of logo, such as transparent and animated logos, without requiring any temporal constraints, whereas known methods have to wait for the occurrence of large motion in the scene and assume stationary logos. The algorithm models logo pixels as outliers from the actual scene content, which is represented by multiple 3-D histograms in the YCbCr space. We use four scene histograms corresponding to each of the four corners because the content characteristics change from one image corner to another. A further novelty of the proposed algorithm is that we define the image corners and the areas where we compute the scene histograms by a cinematic technique called the Golden Section Rule that is used by professionals. The robustness of the proposed algorithm is demonstrated over a dataset of representative TV content.
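The outlier-from-content idea can be sketched as follows: quantize a corner region into a 3-D color histogram and flag pixels whose colors are rare in that corner. This is a simplified single-corner illustration in RGB (the bin count and rarity threshold are arbitrary choices; the actual algorithm uses four golden-section corner regions and YCbCr histograms):

```python
import numpy as np

def logo_outlier_mask(corner, bins=8, threshold=0.01):
    """Flag pixels whose quantized color is rare in the corner's own 3-D
    color histogram -- the "logo pixels as outliers from content" idea."""
    q = (corner // (256 // bins)).reshape(-1, 3)      # quantize each channel
    hist = np.zeros((bins, bins, bins))
    np.add.at(hist, (q[:, 0], q[:, 1], q[:, 2]), 1)   # accumulate counts
    hist /= q.shape[0]                                # normalize to frequencies
    freq = hist[q[:, 0], q[:, 1], q[:, 2]]            # per-pixel color frequency
    return (freq < threshold).reshape(corner.shape[:2])

# A gray corner with a small bright "logo" patch in its top-left.
corner = np.full((32, 32, 3), 128, dtype=np.uint8)
corner[0:2, 0:2] = 255
mask = logo_outlier_mask(corner)
```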
Multichannel blind iterative image restoration.
Sroubek, Filip; Flusser, Jan
2003-01-01
Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately, in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for a multichannel framework; it determines the convolution masks perfectly in a noise-free environment if a channel disparity condition, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization together with a cell-centered finite difference discretization scheme is used in the algorithm and provides a unified approach to the solution of total variation or Mumford-Shah. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun.
Optimized design of embedded DSP system hardware supporting complex algorithms
NASA Astrophysics Data System (ADS)
Li, Yanhua; Wang, Xiangjun; Zhou, Xinling
2003-09-01
The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs the cost-effective DSP TMS320C6712 and a large FLASH memory, the system permits loading and performing complex algorithms with little algorithm optimization or code reduction. The CPLD provides flexible logic control for the whole DSP board, especially the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit transfers data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Owing to the characteristics described above, the hardware is a suitable platform for multi-channel data collection, image processing, and other signal processing tasks with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The results reveal that this hardware interfaces easily with a CMOS imager and is capable of carrying out complex biometric identification algorithms that require real-time processing.
FPGA implementation of image dehazing algorithm for real time applications
NASA Astrophysics Data System (ADS)
Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.
2017-09-01
Weather degradation such as haze, fog, mist, etc. severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomenon, which makes the problem non-trivial. Dehazing is an essential preprocessing stage in applications such as long-range imaging, border security, and intelligent transportation systems. However, these applications require low latency of the preprocessing block. In this work, the single-image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although the conventional single-image dark channel prior algorithm is computationally expensive, it yields impressive results. Moreover, a two-stage image dehazing architecture is introduced, wherein the dark channel and airlight are estimated in the first stage, and the transmission map and intensity restoration are computed in the subsequent stages. The algorithm is implemented using Xilinx Vivado software and validated on a Xilinx zc702 development board, which contains an Artix7-equivalent Field Programmable Gate Array (FPGA) and an ARM Cortex A9 dual-core processor. Additionally, a high-definition multimedia interface (HDMI) has been incorporated for video feed and display purposes. The results show that the dehazing algorithm attains 29 frames per second at an image resolution of 1920x1080, which is suitable for real-time applications. The design utilizes 9 18K_BRAM, 97 DSP_48, 6508 FFs and 8159 LUTs.
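The first stage of the pipeline, dark channel and airlight estimation, can be sketched in software as follows. This is a reference-style illustration of the standard dark channel prior, not the paper's FPGA architecture; the patch size and top-fraction are typical choices:

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel of an RGB image: per-pixel minimum across color
    channels, followed by a patchwise spatial minimum."""
    min_rgb = image.min(axis=2)
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_airlight(image, dark, top_frac=0.001):
    """Airlight estimate: mean color of the brightest dark-channel pixels."""
    n = max(1, int(dark.size * top_frac))
    idx = np.argsort(dark.ravel())[-n:]
    return image.reshape(-1, 3)[idx].mean(axis=0)
```

The patchwise minimum is the expensive part; a hardware pipeline would replace the nested loops with a sliding-window min filter, which is what makes a low-latency FPGA implementation attractive.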
Automatic burst detection for the EEG of the preterm infant.
Jennekens, Ward; Ruijs, Loes S; Lommen, Charlotte M L; Niemarkt, Hendrik J; Pasman, Jaco W; van Kranen-Mastenbroek, Vivianne H J M; Wijn, Pieter F F; van Pul, Carola; Andriessen, Peter
2011-10-01
To aid with prognosis and stratification of clinical treatment for preterm infants, a method for automated detection of bursts, interburst intervals (IBIs) and continuous patterns in the electroencephalogram (EEG) is developed. Results are evaluated for preterm infants with normal neurological follow-up at 2 years. The detection algorithm (MATLAB®) for bursts, IBIs and continuous patterns is based on selection by amplitude, time span, number of channels and number of active electrodes. Annotations of two neurophysiologists were used to determine threshold values. The training set consisted of EEG recordings of four preterm infants with postmenstrual age (PMA, gestational age + postnatal age) of 29-34 weeks. Optimal threshold values were based on overall highest sensitivity. For evaluation, both observers verified detections in an independent dataset of four EEG recordings with comparable PMA. Algorithm performance was assessed by calculation of sensitivity and positive predictive value. The results of algorithm evaluation are as follows: sensitivity values of 90% ± 6%, 80% ± 9% and 97% ± 5% for burst, IBI and continuous patterns, respectively. Corresponding positive predictive values were 88% ± 8%, 96% ± 3% and 85% ± 15%, respectively. In conclusion, the algorithm showed high sensitivity and positive predictive values for bursts, IBIs and continuous patterns in preterm EEG. Computer-assisted analysis of EEG may allow objective and reproducible analysis for clinical treatment.
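Sensitivity and positive predictive value against expert annotations can be computed as below; matching events by onset time within a fixed tolerance is an assumed simplification of the study's agreement criterion:

```python
def detection_metrics(detected, annotated, tolerance=0.5):
    """Sensitivity and positive predictive value for detected event onsets
    (in seconds) against expert-annotated onsets, counting a pair as a
    match when the onsets lie within `tolerance` of each other."""
    matched = {a for a in annotated
               if any(abs(d - a) <= tolerance for d in detected)}
    true_pos = sum(1 for d in detected
                   if any(abs(d - a) <= tolerance for a in annotated))
    sensitivity = len(matched) / len(annotated) if annotated else 0.0
    ppv = true_pos / len(detected) if detected else 0.0
    return sensitivity, ppv
```

Sensitivity measures the fraction of annotated events the algorithm found; PPV measures the fraction of detections that were real, the two quantities reported in the abstract.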
Variability of ICA decomposition may impact EEG signals when used to remove eyeblink artifacts
PONTIFEX, MATTHEW B.; GWIZDALA, KATHRYN L.; PARKS, ANDREW C.; BILLINGER, MARTIN; BRUNNER, CLEMENS
2017-01-01
Despite the growing use of independent component analysis (ICA) algorithms for isolating and removing eyeblink-related activity from EEG data, we have limited understanding of how variability associated with ICA uncertainty may be influencing the reconstructed EEG signal after removing the eyeblink artifact components. To characterize the magnitude of this ICA uncertainty and to understand the extent to which it may influence findings within ERP and EEG investigations, ICA decompositions of EEG data from 32 college-aged young adults were repeated 30 times for three popular ICA algorithms. Following each decomposition, eyeblink components were identified and removed. The remaining components were back-projected, and the resulting clean EEG data were further used to analyze ERPs. Findings revealed that ICA uncertainty results in variation in P3 amplitude as well as variation across all EEG sampling points, but differs across ICA algorithms as a function of the spatial location of the EEG channel. This investigation highlights the potential of ICA uncertainty to introduce additional sources of variance when the data are back-projected without artifact components. Careful selection of ICA algorithms and parameters can reduce the extent to which ICA uncertainty may introduce an additional source of variance within ERP/EEG studies. PMID:28026876
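The back-projection step, reconstructing clean EEG after zeroing the eyeblink components, can be illustrated with a synthetic example. Here the mixing matrix is known, so unmixing reduces to a pseudoinverse; in the study it would be estimated by an ICA algorithm such as Infomax or FastICA, which is exactly where the decomposition-to-decomposition variability enters:

```python
import numpy as np

def remove_components(sources, mixing, reject):
    """Zero out rejected (eyeblink) source components and back-project the
    rest to channel space: X_clean = S_kept @ A.T."""
    kept = sources.copy()
    kept[:, list(reject)] = 0.0
    return kept @ mixing.T

# Synthetic demo: a 10 Hz "brain" source plus a brief "blink" spike, mixed
# into 3 channels through a known matrix A (channels x sources).
t = np.linspace(0.0, 1.0, 500)
brain = np.sin(2 * np.pi * 10 * t)
blink = np.where((t > 0.5) & (t < 0.55), 5.0, 0.0)
S = np.column_stack([brain, blink])
A = np.array([[1.0, 0.8], [0.5, 1.2], [0.2, 0.3]])
X = S @ A.T                                        # simulated EEG channels
S_est = X @ np.linalg.pinv(A).T                    # recovered sources
X_clean = remove_components(S_est, A, reject={1})  # drop the blink source
```

With an estimated (rather than known) mixing matrix, S_est and A differ slightly from run to run, so X_clean inherits that variance, which is the effect the paper quantifies.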
Tuning the ion selectivity of two-pore channels
Guo, Jiangtao; Zeng, Weizhong; Jiang, Youxing
2017-01-01
Organellar two-pore channels (TPCs) contain two copies of a Shaker-like six-transmembrane (6-TM) domain in each subunit and are ubiquitously expressed in plants and animals. Interestingly, plant and animal TPCs share high sequence similarity in the filter region, yet exhibit drastically different ion selectivity. Plant TPC1 functions as a nonselective cation channel on the vacuole membrane, whereas mammalian TPC channels have been shown to be endo/lysosomal Na+-selective or Ca2+-release channels. In this study, we performed systematic characterization of the ion selectivity of TPC1 from Arabidopsis thaliana (AtTPC1) and compared its selectivity with the selectivity of human TPC2 (HsTPC2). We demonstrate that AtTPC1 is selective for Ca2+ over Na+, but nonselective among monovalent cations (Li+, Na+, and K+). Our results also confirm that HsTPC2 is a Na+-selective channel activated by phosphatidylinositol 3,5-bisphosphate. Guided by our recent structure of AtTPC1, we converted AtTPC1 to a Na+-selective channel by mimicking the selectivity filter of HsTPC2 and identified key residues in the TPC filters that differentiate the selectivity between AtTPC1 and HsTPC2. Furthermore, the structure of the Na+-selective AtTPC1 mutant elucidates the structural basis for Na+ selectivity in mammalian TPCs. PMID:28096396
A Cancer Gene Selection Algorithm Based on the K-S Test and CFS.
Su, Qiang; Wang, Yina; Jiang, Xiaobing; Chen, Fuxue; Lu, Wen-Cong
2017-01-01
To address the challenging problem of selecting distinguished genes from cancer gene expression datasets, this paper presents a gene subset selection algorithm based on the Kolmogorov-Smirnov (K-S) test and correlation-based feature selection (CFS) principles. The algorithm first selects distinguished genes using the K-S test and then uses CFS to select genes from that subset. We adopted support vector machines (SVM) as the classification tool and used accuracy as the criterion to evaluate the performance of the classifiers on the selected gene subsets. We compared the proposed gene subset selection algorithm with the K-S test, CFS, minimum-redundancy maximum-relevancy (mRMR), and ReliefF algorithms. Averaged experimental results on 5 gene expression datasets demonstrate that, in terms of accuracy, the new K-S and CFS-based algorithm outperforms the K-S test, CFS, mRMR, and ReliefF algorithms alone, showing it to be an effective and promising approach.
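The two-stage selection described above can be sketched in a few lines: rank genes by K-S statistic, then apply a CFS-style greedy merit search over the survivors. The synthetic dataset, candidate counts, and thresholds below are illustrative assumptions, not the authors' settings.

```python
# Toy sketch of K-S filtering followed by a CFS-like greedy merit search.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))          # 60 samples x 200 genes (synthetic)
y = np.array([0] * 30 + [1] * 30)       # two classes
X[y == 1, :5] += 1.5                    # make the first 5 genes informative

# Stage 1: rank genes by K-S statistic between the two class distributions.
ks = np.array([ks_2samp(X[y == 0, g], X[y == 1, g]).statistic
               for g in range(X.shape[1])])
stage1 = np.argsort(ks)[::-1][:20]      # keep top-20 candidates

# Stage 2 (CFS-like merit): prefer genes correlated with the class label
# but weakly correlated with already-selected genes.
def merit(subset):
    rcf = np.mean([abs(np.corrcoef(X[:, g], y)[0, 1]) for g in subset])
    if len(subset) == 1:
        rff = 1.0
    else:
        cc = [abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
              for i, a in enumerate(subset) for b in subset[i + 1:]]
        rff = np.mean(cc)
    k = len(subset)
    return k * rcf / np.sqrt(k + k * (k - 1) * rff)

selected = []
for _ in range(5):                      # greedy forward search over candidates
    best = max((g for g in stage1 if g not in selected),
               key=lambda g: merit(selected + [g]))
    selected.append(best)
print(sorted(selected))
```

A real implementation would cross-validate the candidate count and evaluate the final subset with an SVM, as in the paper.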
A Novel Two-Component Decomposition for Co-Polar Channels of GF-3 Quad-Pol Data
NASA Astrophysics Data System (ADS)
Kwok, E.; Li, C. H.; Zhao, Q. H.; Li, Y.
2018-04-01
Polarimetric target decomposition theory is the most dynamic and exploratory research area in the field of PolSAR. However, most target decomposition methods are based on fully polarized (quad-pol) data and seldom utilize dual-polar data. Given this, we propose a novel two-component decomposition method for the co-polar channels of GF-3 quad-pol data. This method decomposes the data into two scattering contributions, surface and double bounce, in the dual co-polar channels. To solve this underdetermined problem, a criterion for determining the model is proposed. The criterion, named the second-order averaged scattering angle, originates from the H/α decomposition, and we also put forward an alternative parameter for it. To validate the effectiveness of the proposed decomposition, Liaodong Bay is selected as the research area. The area, located in northeastern China, hosts various wetland resources and exhibits sea ice in winter. We use GF-3 quad-pol data as the study data; GF-3 is China's first C-band polarimetric synthetic aperture radar (PolSAR) satellite. The dependencies between the features of the proposed algorithm and comparison decompositions (Pauli decomposition, An&Yang decomposition, Yamaguchi S4R decomposition) were investigated in the study. Through several aspects of the experimental discussion, we draw the following conclusions: the proposed algorithm may be suitable for special scenes with low vegetation coverage or low vegetation in the non-growing season; the proposed decomposition features, using only co-polar data, are highly correlated with the corresponding comparison decomposition features derived from quad-polarization data. Moreover, they could become input for subsequent classification or parameter inversion.
flowVS: channel-specific variance stabilization in flow cytometry.
Azad, Ariful; Rajwa, Bartek; Pothen, Alex
2016-07-28
Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances. We present a variance-stabilization algorithm, called flowVS, that removes the mean-variance correlations from cell populations identified in each fluorescence channel. flowVS transforms each channel from all samples of a data set by the inverse hyperbolic sine (asinh) transformation. For each channel, the parameters of the transformation are optimally selected by Bartlett's likelihood-ratio test so that the populations attain homogeneous variances. The optimum parameters are then used to transform the corresponding channels in every sample. flowVS is therefore an explicit variance-stabilization method that stabilizes within-population variances in each channel by evaluating the homoskedasticity of clusters with a likelihood-ratio test. With two publicly available datasets, we show that flowVS removes the mean-variance dependence from raw FC data and makes the within-population variance relatively homogeneous. We demonstrate that alternative transformation techniques such as flowTrans, flowScape, logicle, and FCSTrans might not stabilize variance. Besides flow cytometry, flowVS can also be applied to stabilize variance in microarray data. With a publicly available data set we demonstrate that flowVS performs as well as the VSN software, a state-of-the-art approach developed for microarrays. 
The homogeneity of variance in cell populations across FC samples is desirable when extracting features uniformly and comparing cell populations with different levels of marker expressions. The newly developed flowVS algorithm solves the variance-stabilization problem in FC and microarrays by optimally transforming data with the help of Bartlett's likelihood-ratio test. On two publicly available FC datasets, flowVS stabilizes within-population variances more evenly than the available transformation and normalization techniques. flowVS-based variance stabilization can help in performing comparison and alignment of phenotypically identical cell populations across different samples. flowVS and the datasets used in this paper are publicly available in Bioconductor.
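The core flowVS idea above can be illustrated compactly: grid-search the asinh cofactor and keep the one for which Bartlett's test judges the population variances most homogeneous. The two synthetic "populations" and candidate cofactors below are assumptions for illustration, not the flowVS defaults.

```python
# Minimal sketch of asinh variance stabilization selected by Bartlett's test.
import numpy as np
from scipy.stats import bartlett

rng = np.random.default_rng(1)
# Two cell populations on one channel whose variance grows with the mean.
pop_lo = rng.normal(100, 20, 500)
pop_hi = rng.normal(1000, 200, 500)

def stabilized(c):
    return np.arcsinh(pop_lo / c), np.arcsinh(pop_hi / c)

# Grid-search the cofactor; keep the one with the largest Bartlett p-value
# (larger p-value = variances closer to homogeneous).
cands = [1, 5, 10, 50, 100, 500]
best_c = max(cands, key=lambda c: bartlett(*stabilized(c)).pvalue)

raw_ratio = np.var(pop_hi) / np.var(pop_lo)
lo_t, hi_t = stabilized(best_c)
stab_ratio = np.var(hi_t) / np.var(lo_t)
print(best_c, raw_ratio, stab_ratio)
```

After transformation the variance ratio between the two populations collapses toward 1, which is the property the downstream statistical comparisons rely on.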
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, W.; Yin, J.; Li, C.
This paper presents a novel front-end electronics design based on a front-end ASIC with post digital filtering and calibration dedicated to CZT detectors for PET imaging. A cascade amplifier based on a split-leg topology is selected to realize the charge-sensitive amplifier (CSA) for the sake of low noise performance and a simple power-supply scheme. The output of the CSA is connected to a variable-gain amplifier to generate signals compatible with the A/D conversion. A multi-channel single-slope ADC is designed to sample multiple points for digital filtering and shaping. The digital signal processing algorithms are implemented in an FPGA. To verify the proposed scheme, a front-end readout prototype ASIC was designed and implemented in a 0.35 μm CMOS process. In a single readout channel, a CSA, a VGA, a 10-bit ADC, and registers are integrated. Two dummy channels, bias circuits, and a time controller are also integrated. The die size is 2.0 mm x 2.1 mm. The input range of the ASIC is from 2000 e- to 100000 e-, which is suitable for the detection of X- and gamma rays from 11.2 keV to 550 keV. The nonlinearity of the output voltage is less than 1%. The gain of the readout channel is 40.2 V/pC. The static power dissipation is about 10 mW/channel. These test results show that the electrical performance of the ASIC can well satisfy PET imaging applications. (authors)
Zhang, Shen; Zheng, Yanchun; Wang, Daifa; Wang, Ling; Ma, Jianai; Zhang, Jing; Xu, Weihao; Li, Deyu; Zhang, Dan
2017-08-10
Motor imagery is one of the most investigated paradigms in the field of brain-computer interfaces (BCIs). The present study explored the feasibility of applying a common spatial pattern (CSP)-based algorithm for a functional near-infrared spectroscopy (fNIRS)-based motor imagery BCI. Ten participants performed kinesthetic imagery of their left- and right-hand movements while 20-channel fNIRS signals were recorded over the motor cortex. The CSP method was implemented to obtain the spatial filters specific for both imagery tasks. The mean, slope, and variance of the CSP filtered signals were taken as features for BCI classification. Results showed that the CSP-based algorithm outperformed two representative channel-wise methods for classifying the two imagery statuses using either data from all channels or averaged data from imagery responsive channels only (oxygenated hemoglobin: CSP-based: 75.3±13.1%; all-channel: 52.3±5.3%; averaged: 64.8±13.2%; deoxygenated hemoglobin: CSP-based: 72.3±13.0%; all-channel: 48.8±8.2%; averaged: 63.3±13.3%). Furthermore, the effectiveness of the CSP method was also observed for the motor execution data to a lesser extent. A partial correlation analysis revealed significant independent contributions from all three types of features, including the often-ignored variance feature. To our knowledge, this is the first study demonstrating the effectiveness of the CSP method for fNIRS-based motor imagery BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
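The CSP step described above reduces to a generalized eigendecomposition of the two class covariance matrices. The sketch below uses a toy 6-channel setup rather than the study's 20-channel fNIRS montage; channel counts, trial counts, and variance scales are all assumptions.

```python
# Compact common-spatial-pattern (CSP) sketch for two-class multichannel data.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n_ch, n_t = 6, 300

def trials(scale):                       # toy trials with class-specific
    out = []                             # variance on channels 0 and 1
    for _ in range(20):
        x = rng.normal(size=(n_ch, n_t))
        x[0] *= scale[0]
        x[1] *= scale[1]
        out.append(x)
    return out

class_a = trials((3.0, 1.0))             # class A: channel 0 strong
class_b = trials((1.0, 3.0))             # class B: channel 1 strong

def mean_cov(ts):
    return np.mean([np.cov(t) for t in ts], axis=0)

Ca, Cb = mean_cov(class_a), mean_cov(class_b)
# Solve Ca w = lambda (Ca + Cb) w; the extreme eigenvalues give the most
# discriminative spatial filters (vecs[:, 0] is the complementary filter).
vals, vecs = eigh(Ca, Ca + Cb)
w = vecs[:, -1]                          # filter maximizing class-A variance

# Log-variance of the CSP-filtered signal is the classic feature.
fa = [np.log(np.var(w @ t)) for t in class_a]
fb = [np.log(np.var(w @ t)) for t in class_b]
print(np.mean(fa) - np.mean(fb))
```

The study additionally used mean and slope features of the filtered fNIRS signals; the log-variance shown here is the textbook CSP feature.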
Leung, Vitus J [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM; Bender, Michael A [East Northport, NY; Bunde, David P [Urbana, IL
2009-07-21
In a multiple processor computing apparatus, directional routing restrictions and a logical channel construct permit fault tolerant, deadlock-free routing. Processor allocation can be performed by creating a linear ordering of the processors based on routing rules used for routing communications between the processors. The linear ordering can assume a loop configuration, and bin-packing is applied to this loop configuration. The interconnection of the processors can be conceptualized as a generally rectangular 3-dimensional grid, and the MC allocation algorithm is applied with respect to the 3-dimensional grid.
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
Expeditious reconciliation for practical quantum key distribution
NASA Astrophysics Data System (ADS)
Nakassis, Anastase; Bienfang, Joshua C.; Williams, Carl J.
2004-08-01
The paper proposes algorithmic and environmental modifications to the extant reconciliation algorithms within the BB84 protocol so as to speed up reconciliation and privacy amplification. These algorithms have been known to be a performance bottleneck [1] and can process data at rates that are six times slower than the quantum channel they serve [2]. As improvements in single-photon sources and detectors are expected to improve the quantum channel throughput by two or three orders of magnitude, it becomes imperative to improve the performance of the classical software. We developed a Cascade-like algorithm that relies on a symmetric formulation of the problem, error estimation through the segmentation process, outright elimination of segments with many errors, Forward Error Correction, recognition of the distinct data subpopulations that emerge as the algorithm runs, ability to operate on massive amounts of data (of the order of 1 Mbit), and a few other minor improvements. The data from the experimental algorithm we developed show that by operating on massive arrays of data we can improve software performance by better than three orders of magnitude while retaining nearly as many bits (typically more than 90%) as the algorithms that were designed for optimal bit retention.
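The primitive underlying Cascade-style reconciliation is simple: when a block's parity disagrees between the two parties, a binary search on sub-block parities locates one error in O(log n) parity exchanges. This toy single-error version illustrates the primitive only, not the authors' optimized, segment-eliminating implementation.

```python
# Binary parity bisection: locate a single bit error between two key strings.
import numpy as np

rng = np.random.default_rng(3)
alice = rng.integers(0, 2, 64)
bob = alice.copy()
bob[17] ^= 1                             # one channel-induced bit error

def parity(bits, lo, hi):
    return int(np.sum(bits[lo:hi]) % 2)

def binary_locate(a, b, lo, hi):
    """Find an index where a and b differ, assuming an odd error count."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(a, lo, mid) != parity(b, lo, mid):
            hi = mid
        else:
            lo = mid
    return lo

err = binary_locate(alice, bob, 0, len(alice))
bob[err] ^= 1                            # correct the located bit
print(err, np.array_equal(alice, bob))
```

In the full protocol the parity values, not the bits, are exchanged over the public channel, and the leaked parity bits are later discounted during privacy amplification.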
Color filter array design based on a human visual model
NASA Astrophysics Data System (ADS)
Parmar, Manu; Reeves, Stanley J.
2004-05-01
To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
NASA Astrophysics Data System (ADS)
Murray, J. E.; Brindley, H. E.; Bryant, R. G.; Russell, J. E.; Jenkins, K. F.; Washington, R.
2016-09-01
A method is described to significantly enhance the signature of dust events using observations from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI). The approach involves the derivation of a composite clear-sky signal for selected channels on an individual time step and pixel basis. These composite signals are subtracted from each observation in the relevant channels to enhance weak transient signals associated with either (a) low levels of dust emission or (b) dust emissions with high salt or low quartz content. Different channel combinations, of the differenced data from the steps above, are then rendered in false color imagery for the purpose of improved identification of dust source locations and activity. We have applied this clear-sky difference (CSD) algorithm over three (globally significant) source regions in southern Africa: the Makgadikgadi Basin, Etosha Pan, and the Namibian and western South African coast. Case study analyses indicate three notable advantages associated with the CSD approach over established image rendering methods: (i) an improved ability to detect dust plumes, (ii) the observation of source activation earlier in the diurnal cycle, and (iii) an improved ability to resolve and pinpoint dust plume source locations.
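The per-pixel, per-time-step compositing above can be sketched for a single pixel: build a clear-sky composite from an archive of the same time slot on previous days, subtract it, and a weak transient signature stands out. The archive length, the min-based composite statistic, and the synthetic diurnal signal are illustrative assumptions, not details from the paper.

```python
# One-pixel sketch of the clear-sky difference (CSD) enhancement.
import numpy as np

rng = np.random.default_rng(4)
days, slots = 14, 96                     # 14-day archive, 15-min time steps
base = 280 + 10 * np.sin(np.linspace(0, 2 * np.pi, slots))  # diurnal cycle
archive = base + rng.normal(0, 1.0, size=(days, slots))

today = base + rng.normal(0, 1.0, slots)
today[40:50] -= 8.0                      # a transient dust signature

composite = archive.min(axis=0)          # clear-sky proxy per time slot
csd = today - composite                  # difference enhances the transient
print(int(np.argmin(csd)))
```

The differencing removes the (much larger) diurnal and surface signal, which is why weak or short-lived emission events become visible in the rendered false-color imagery.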
Barbosa, Daniel J C; Ramos, Jaime; Lima, Carlos S
2008-01-01
Capsule endoscopy is an important tool to diagnose tumor lesions in the small bowel. Capsule endoscopic images possess vital information expressed by color and texture. This paper presents an approach based on the textural analysis of the different color channels, using the wavelet transform to select the bands with the most significant texture information. A new image is then synthesized from the selected wavelet bands through the inverse wavelet transform. The features of each image are based on second-order textural information, and they are used in a classification scheme using a multilayer perceptron neural network. The proposed methodology has been applied to real data taken from capsule endoscopic exams and reached 98.7% sensitivity and 96.6% specificity. These results support the feasibility of the proposed algorithm.
Non-Equilibrium Dynamics Contribute to Ion Selectivity in the KcsA Channel
Haas, Stephan; Farley, Robert A.
2014-01-01
The ability of biological ion channels to conduct selected ions across cell membranes is critical for the survival of both animal and bacterial cells. Numerous investigations of ion selectivity have been conducted over more than 50 years, yet the mechanisms whereby the channels select certain ions and reject others are not well understood. Here we report a new application of Jarzynski’s Equality to investigate the mechanism of ion selectivity using non-equilibrium molecular dynamics simulations of Na+ and K+ ions moving through the KcsA channel. The simulations show that the selectivity filter of KcsA adapts and responds to the presence of the ions with structural rearrangements that are different for Na+ and K+. These structural rearrangements facilitate entry of K+ ions into the selectivity filter and permeation through the channel, and rejection of Na+ ions. A mechanistic model of ion selectivity by this channel based on the results of the simulations relates the structural rearrangement of the selectivity filter to the differential dehydration of ions and multiple-ion occupancy and describes a mechanism to efficiently select and conduct K+. Estimates of the K+/Na+ selectivity ratio and steady state ion conductance for KcsA from the simulations are in good quantitative agreement with experimental measurements. This model also accurately describes experimental observations of channel block by cytoplasmic Na+ ions, the “punch through” relief of channel block by cytoplasmic positive voltages, and is consistent with the knock-on mechanism of ion permeation. PMID:24465882
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-01-01
Space-time coding (STC) is an important milestone in modern wireless communications. In this technique, multiple copies of the same signal are transmitted through different antennas (space) and different symbol periods (time), to improve the robustness of a wireless system by increasing its diversity gain. STCs are channel coding algorithms that can be readily implemented on a field programmable gate array (FPGA) device. This work provides some figures for the amount of required FPGA hardware resources, the speed at which the algorithms can operate, and the power consumption requirements of a space-time block code (STBC) encoder. Seven encoder very high-speed integrated circuit hardware description language (VHDL) designs have been coded, synthesised and tested. Each design realises a complex orthogonal space-time block code with a different transmission matrix. All VHDL designs are parameterisable in terms of sample precision. Precisions ranging from 4 bits to 32 bits have been synthesised. Alamouti's STBC encoder design [Alamouti, S.M. (1998), 'A Simple Transmit Diversity Technique for Wireless Communications', IEEE Journal on Selected Areas in Communications, 16(8):1451-1458.] proved to be the best trade-off, since it is on average 3.2 times smaller, 1.5 times faster and requires slightly less power than the next best trade-off in the comparison, which is a 3/4-rate full-diversity 3Tx-antenna STBC.
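The Alamouti encoder referenced above is compact enough to state directly: two symbols are sent over two antennas in two symbol periods using the orthogonal transmission matrix [[s1, s2], [-s2*, s1*]]. A plain NumPy sketch of the mapping (no channel model, software rather than VHDL):

```python
# Alamouti space-time block encoding of a complex symbol stream.
import numpy as np

def alamouti_encode(symbols):
    """Map a 1-D array of complex symbols (even length) to an array of
    shape (2, len(symbols)): row = antenna, column = symbol period."""
    s = np.asarray(symbols, dtype=complex).reshape(-1, 2)
    out = np.empty((2, s.size), dtype=complex)
    # Period 1: antenna 0 sends s1, antenna 1 sends s2.
    out[0, 0::2], out[1, 0::2] = s[:, 0], s[:, 1]
    # Period 2: antenna 0 sends -s2*, antenna 1 sends s1*.
    out[0, 1::2], out[1, 1::2] = -s[:, 1].conj(), s[:, 0].conj()
    return out

tx = alamouti_encode([1 + 1j, 1 - 1j])
print(tx)
```

The orthogonality of the two columns is what gives the code full diversity with a simple linear receiver, and it is also why the hardware encoder is so small relative to the higher-order STBCs in the comparison.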
Image demosaicing: a systematic survey
NASA Astrophysics Data System (ADS)
Li, Xin; Gunturk, Bahadir; Zhang, Lei
2008-01-01
Image demosaicing is a problem of interpolating full-resolution color images from so-called color-filter-array (CFA) samples. Among various CFA patterns, the Bayer pattern has been the most popular choice, and demosaicing of the Bayer pattern has attracted renewed interest in recent years partially due to the increased availability of source codes/executables in response to the principle of "reproducible research". In this article, we provide a systematic survey of over seventy published works in this field since 1999 (complementary to previous reviews [22, 67]). Our review attempts to address important issues in demosaicing and identify fundamental differences among competing approaches. Our findings suggest most existing works belong to the class of sequential demosaicing - i.e., the luminance channel is interpolated first and then chrominance channels are reconstructed based on recovered luminance information. We report our comparative study results with a collection of eleven competing algorithms whose source codes or executables are provided by the authors. Our comparison is performed on two data sets: Kodak PhotoCD (popular choice) and IMAX high-quality images (more challenging). While most existing demosaicing algorithms achieve good performance on the Kodak data set, their performance on the IMAX one (images with varying-hue and high-saturation edges) degrades significantly. Such observation suggests the importance of properly addressing the issue of mismatch between assumed model and observation data in demosaicing, which calls for further investigation on issues such as model validation, test data selection and performance evaluation.
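As a concrete baseline for the surveyed methods, the simplest (non-sequential) demosaicer interpolates each color plane of the Bayer mosaic independently with bilinear weights. This sketch assumes an RGGB layout and is the standard strawman the sequential algorithms improve on, not any of the eleven compared algorithms.

```python
# Bilinear demosaicing of an RGGB Bayer mosaic, one color plane at a time.
import numpy as np
from scipy.signal import convolve2d

def bayer_masks(h, w):                   # RGGB layout assumed
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True
    g = ~(r | b)
    return r, g, b

def bilinear_demosaic(cfa):
    h, w = cfa.shape
    k = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    out = np.zeros((h, w, 3))
    for i, m in enumerate(bayer_masks(h, w)):
        plane = np.where(m, cfa, 0.0)
        num = convolve2d(plane, k, mode="same")
        den = convolve2d(m.astype(float), k, mode="same")
        out[..., i] = num / den          # normalized neighbor average
    return out

# A flat gray scene reconstructs exactly flat - no color fringing.
flat = np.full((8, 8), 0.5)
rgb = bilinear_demosaic(flat)
print(float(rgb.min()), float(rgb.max()))
```

Bilinear interpolation ignores cross-channel correlation, which is precisely where it fails on high-saturation edges; the sequential luminance-first methods in the survey exist to exploit that correlation.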
Determination of cloud liquid water content using the SSM/I
NASA Technical Reports Server (NTRS)
Alishouse, John C.; Snider, Jack B.; Westwater, Ed R.; Swift, Calvin T.; Ruf, Christopher S.
1990-01-01
As part of a calibration/validation effort for the special sensor microwave/imager (SSM/I), coincident observations of SSM/I brightness temperatures and surface-based observations of cloud liquid water were obtained. These observations were used to validate initial algorithms and to derive an improved algorithm. The initial algorithms were divided into latitudinal-, seasonal-, and surface-type zones. It was found that these initial algorithms, which were of the D-matrix type, did not yield sufficiently accurate results. The surface-based measurements of channels were investigated; however, the 85V channel was excluded because of excessive noise. It was found that there is no significant correlation between the SSM/I brightness temperatures and the surface-based cloud liquid water determination when the background surface is land or snow. A high correlation was found between brightness temperatures and ground-based measurements over the ocean.
NASA Technical Reports Server (NTRS)
Comiso, Josefino C.; Parkinson, Claire L.
2007-01-01
We use two algorithms to process AMSR-E data in order to determine algorithm dependence, if any, on the estimates of sea ice concentration, ice extent and area, and trends and to evaluate how AMSR-E data compare with historical SSM/I data. The monthly ice concentrations derived from the two algorithms from AMSR-E data (the AMSR-E Bootstrap Algorithm, or ABA, and the enhanced NASA Team algorithm, or NT2) differ on average by about 1 to 3%, with data from the consolidated ice region being generally comparable for ABA and NT2 retrievals while data in the marginal ice zones and thin ice regions show higher values when the NT2 algorithm is used. The ice extents and areas derived separately from AMSR-E using these two algorithms are, however, in good agreement, with the differences (ABA-NT2) being about 6.6 x 10(exp 4) square kilometers on average for ice extents and -6.6 x 10(exp 4) square kilometers for ice area, which are small compared to mean seasonal values of 10.5 x 10(exp 6) and 9.8 x 10(exp 6) square kilometers for ice extent and area, respectively. Likewise, extents and areas derived from the same algorithm but from AMSR-E and SSM/I data are consistent but differ by about -24.4 x 10(exp 4) square kilometers and -13.9 x 10(exp 4) square kilometers, respectively. The discrepancies are larger with the estimates of extents than area mainly because of differences in channel selection and sensor resolutions. Trends in extent during the AMSR-E era were also estimated and results from all three data sets are shown to be in good agreement (within errors).
Determinations of cloud liquid water in the tropics from the SSM/I
NASA Technical Reports Server (NTRS)
Alishouse, John C.; Swift, Calvin; Ruf, Christopher; Snyder, Sheila; Vongsathorn, Jennifer
1989-01-01
Upward-looking microwave radiometric observations were used to validate the SSM/I determinations and as a basis for the determination of new coefficients. Because the initial four-channel algorithm for cloud liquid water proved insufficient, an improved algorithm was derived from the CORRAD (the University of Massachusetts autocorrelation radiometer) measurements of cloud liquid water and the matching SSM/I brightness temperatures using standard linear regression. The correlation coefficients for the possible four-channel combinations were computed, and the best and worst combinations were determined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, M V; Garanin, S G; Dolgopolov, Yu V
2014-11-30
A seven-channel fibre laser system operated by the master oscillator – multichannel power amplifier scheme is phase locked using a stochastic parallel gradient algorithm. The phase modulators on lithium niobate crystals are controlled by a multichannel electronic unit with a microcontroller processing signals in real time. Dynamic phase locking of the laser system with a bandwidth of 14 kHz is demonstrated; the phasing time is 3 – 4 ms. (fibre and integrated-optical structures)
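The stochastic parallel gradient algorithm named above can be simulated in a few lines: apply small random parallel phase perturbations to all channels at once, measure the change in a combined-beam metric, and step along the estimated gradient. Seven channels match the paper, but the gains, perturbation size, and iteration count below are arbitrary assumptions.

```python
# Toy SPGD phase-locking loop for a 7-channel coherent beam combiner.
import numpy as np

rng = np.random.default_rng(5)
n = 7
phases = rng.uniform(-np.pi, np.pi, n)    # unknown piston phase errors

def metric(ph):
    # Normalized on-axis intensity of n coherently combined unit beams.
    return np.abs(np.sum(np.exp(1j * ph))) ** 2 / n**2

gain, delta = 2.0, 0.1
for _ in range(3000):
    d = delta * rng.choice([-1.0, 1.0], n)        # parallel +/- perturbation
    dJ = metric(phases + d) - metric(phases - d)  # two-sided metric change
    phases += gain * dJ * d                        # SPGD ascent step

print(round(metric(phases), 3))
```

Because every channel is perturbed simultaneously, the per-iteration cost is independent of channel count, which is what makes the scheme practical for the real-time microcontroller loop described in the abstract.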
GPU Acceleration of DSP for Communication Receivers.
Gunther, Jake; Gunther, Hyrum; Moon, Todd
2017-09-01
Graphics processing unit (GPU) implementations of signal processing algorithms can outperform CPU-based implementations. This paper describes the GPU implementation of several algorithms encountered in a wide range of high-data-rate communication receivers, including filters, multirate filters, numerically controlled oscillators, and multi-stage digital down converters. These structures are tested by processing the 20 MHz wide FM radio band (88-108 MHz). Two receiver structures are explored: a single channel receiver and a filter bank channelizer. Both run in real time on an NVIDIA GeForce GTX 1080 graphics card.
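The digital down converter mentioned above is an NCO mix followed by low-pass filtering and decimation. A NumPy sketch of that chain (the tone frequency, sample rate, filter length, and decimation factor are toy assumptions, and NumPy stands in for the paper's CUDA kernels):

```python
# NCO mix -> windowed-sinc low-pass FIR -> decimate: a minimal DDC chain.
import numpy as np

fs = 1_000_000                          # 1 MHz input sample rate (toy)
n = 8192
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 123_000 * t)     # real input tone at 123 kHz

# NCO: a complex exponential mixes the band of interest to baseband.
lo = np.exp(-2j * np.pi * 123_000 * t)
baseband = x * lo                       # 0 Hz term plus an image at -246 kHz

# Low-pass FIR (windowed sinc, unity DC gain), then decimate by 16.
taps = 129
k = np.arange(taps) - taps // 2
cutoff = 20_000 / fs                    # normalized cutoff frequency
h = 2 * cutoff * np.sinc(2 * cutoff * k) * np.hamming(taps)
h /= h.sum()                            # normalize DC gain to 1
filtered = np.convolve(baseband, h, mode="same")
decimated = filtered[::16]              # output rate 62.5 kHz

print(abs(np.mean(decimated)))          # approx 0.5: the tone's amplitude
```

In a multi-stage converter this mix-filter-decimate block is cascaded, with each stage relaxing the filter requirements of the next; the GPU versions in the paper parallelize the FIR and mixing stages across samples.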
Design of a clinical notification system.
Wagner, M M; Tsui, F C; Pike, J; Pike, L
1999-01-01
We describe the requirements and design of an enterprise-wide notification system. From published descriptions of notification schemes, our own experience, and use cases provided by diverse users in our institution, we developed a set of functional requirements. The resulting design supports multiple communication channels, third party mappings (algorithms) from message to recipient and/or channel of delivery, and escalation algorithms. A requirement for multiple message formats is addressed by a document specification. We implemented this system in Java as a CORBA object. This paper describes the design and current implementation of our notification system.
Radioastronomic signal processing cores for the SKA radio telescope
NASA Astrophysics Data System (ADS)
Comorett, G.; Chiarucc, S.; Belli, C.
Modern radio telescopes require the processing of wideband signals, with sample rates from tens of MHz to tens of GHz, and are composed of hundreds up to a million individual antennas. Digital signal processing of these signals includes digital receivers (the digital equivalent of the heterodyne receiver), beamformers, channelizers, and spectrometers. FPGAs present the advantage of providing relatively low power consumption, relative to GPUs or dedicated computers, a wide signal data path, and high interconnectivity. Efficient algorithms have been developed for these applications. Here we review some of the signal processing cores developed for the SKA telescope. The LFAA beamformer/channelizer architecture is based on an oversampling channelizer, where the channelizer output sampling rate and channel spacing can be set independently. This is useful where an overlap between adjacent channels is required to provide a uniform spectral coverage. The architecture allows for an efficient and distributed channelization scheme, with a final resolution corresponding to a million spectral channels, minimum leakage and high out-of-band rejection. An optimized filter design procedure is used to provide an equiripple response with a very large number of spectral channels. A wideband digital receiver has been designed in order to select the processed bandwidth of the SKA Mid receiver. The receiver extracts a 2.5 MHz bandwidth from a 14 GHz input bandwidth. The design allows for non-integer ratios between the input and output sampling rates, with a resource usage comparable to that of a conventional decimating digital receiver. Finally, some considerations on quantization of radioastronomic signals are presented. Due to the stochastic nature of the signal, quantization using few data bits is possible. Good accuracy and dynamic range are possible even with 2-3 bits, but the nonlinearity in the correlation process must be corrected in post-processing. With at least 6 bits it is possible to have a very linear response of the instrument, with nonlinear terms below 80 dB, provided the signal amplitude is kept within bounds.
Progressive video coding for noisy channels
NASA Astrophysics Data System (ADS)
Kim, Beong-Jo; Xiong, Zixiang; Pearlman, William A.
1998-10-01
We extend the work of Sherwood and Zeger to progressive video coding for noisy channels. By utilizing a 3D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, we cascade the resulting 3D SPIHT video coder with a rate-compatible punctured convolutional channel coder for transmission of video over a binary symmetric channel. Progressive coding is achieved by increasing the target rate of the 3D embedded SPIHT video coder as the channel condition improves. The performance of our proposed coding system remains acceptable at low transmission rates and under poor channel conditions. Its low complexity makes it suitable for emerging applications such as video over wireless channels.
Multiple-access relaying with network coding: iterative network/channel decoding with imperfect CSI
NASA Astrophysics Data System (ADS)
Vu, Xuan-Thang; Renzo, Marco Di; Duhamel, Pierre
2013-12-01
In this paper, we study the performance of the four-node multiple-access relay channel with binary Network Coding (NC) in various Rayleigh fading scenarios. In particular, two relay protocols, decode-and-forward (DF) and demodulate-and-forward (DMF), are considered. In the first case, channel decoding is performed at the relay before NC and forwarding. In the second case, only demodulation is performed at the relay. The contributions of the paper are as follows: (1) two joint network/channel decoding (JNCD) algorithms, which take into account possible decoding error at the relay, are developed for both DF and DMF relay protocols; (2) both perfect channel state information (CSI) and imperfect CSI at receivers are studied. In addition, we propose a practical method to forward the relay's error characterization to the destination (quantization of the BER). This results in a fully practical scheme. (3) We show by simulation that the number of pilot symbols only affects the coding gain but not the diversity order, and that quantization accuracy affects both coding gain and diversity order. Moreover, when compared with recent results using the DMF protocol, our proposed DF protocol algorithm shows an improvement of 4 dB in fully interleaved Rayleigh fading channels and 0.7 dB in block Rayleigh fading channels.
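The binary network coding operation at the heart of the scheme above is just a bitwise XOR of the two source packets; a destination that already has one packet recovers the other from the relayed combination. A toy byte-level illustration (packet contents are arbitrary; no fading, modulation, or JNCD decoding is modeled):

```python
# XOR network coding at a relay: combine two packets, recover one from the
# other plus the combination.
import numpy as np

pkt_a = np.frombuffer(b"hello", dtype=np.uint8)
pkt_b = np.frombuffer(b"world", dtype=np.uint8)

relayed = pkt_a ^ pkt_b                  # network-coded packet from the relay

# Destination received pkt_a directly and the relayed packet: recover pkt_b.
recovered_b = pkt_a ^ relayed
print(recovered_b.tobytes())
```

The JNCD algorithms in the paper operate on soft values rather than hard bytes, so that decoding errors at the relay can be weighted by the forwarded BER characterization instead of silently corrupting the XOR.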
The analysis of polar clouds from AVHRR satellite data using pattern recognition techniques
NASA Technical Reports Server (NTRS)
Smith, William L.; Ebert, Elizabeth
1990-01-01
The cloud cover in a set of summertime and wintertime AVHRR data from the Arctic and Antarctic regions was analyzed using a pattern recognition algorithm. The data were collected by the NOAA-7 satellite on 6 to 13 Jan. and 1 to 7 Jul. 1984 between 60 deg and 90 deg north and south latitude in 5 spectral channels, at the Global Area Coverage (GAC) resolution of approximately 4 km. These data constituted the Polar Cloud Pilot Data Set, which was analyzed by a number of research groups as part of a polar cloud algorithm intercomparison study. This study was intended to determine whether the additional information contained in the AVHRR channels (beyond the standard visible and infrared bands on geostationary satellites) could be effectively utilized in cloud algorithms to resolve some of the cloud detection problems caused by low visible and thermal contrasts in the polar regions. The analysis described makes use of a pattern recognition algorithm which estimates the surface and cloud classification, cloud fraction, and surface and cloudy visible (channel 1) albedo and infrared (channel 4) brightness temperatures on a 2.5 x 2.5 deg latitude-longitude grid. In each grid box several spectral and textural features were computed from the calibrated pixel values in the multispectral imagery, then used to classify the region into one of eighteen surface and/or cloud types using the maximum likelihood decision rule. A slightly different version of the algorithm was used for each season and hemisphere because of differences in categories and because of the lack of visible imagery during winter. The classification of the scene is used to specify the optimal AVHRR channel for separating clear and cloudy pixels using a hybrid histogram-spatial coherence method. This method estimates values for cloud fraction, clear and cloudy albedos, and brightness temperatures in each grid box.
The choice of a class-dependent AVHRR channel allows for better separation of clear and cloudy pixels than does a global choice of a visible and/or infrared threshold. The classification also prevents erroneous estimates of large fractional cloudiness in areas of cloud-free snow and sea ice. The hybrid histogram-spatial coherence technique and the advantages of first classifying a scene in the polar regions are detailed. The complete Polar Cloud Pilot Data Set was analyzed and the results are presented and discussed.
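The maximum likelihood decision rule over spectral/textural features can be sketched as follows, assuming class-conditional Gaussians with diagonal covariance; the class names and statistics are invented for illustration:

```python
import math

# Illustrative sketch of the maximum likelihood decision rule used to assign
# a grid box to a surface/cloud class from its feature vector, assuming
# class-conditional Gaussians with diagonal covariance. The two classes and
# their (mean, variance) statistics below are made up.

def log_likelihood(x, mean, var):
    """Log-likelihood of feature vector x under an independent Gaussian model."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def classify(x, classes):
    """Return the class whose Gaussian model maximizes the log-likelihood."""
    return max(classes, key=lambda c: log_likelihood(x, *classes[c]))

classes = {
    # features: (channel-1 albedo, channel-4 brightness temperature in K)
    "sea_ice": ([0.7, 255.0], [0.01, 25.0]),
    "open_water_cloud": ([0.4, 265.0], [0.02, 16.0]),
}
label = classify([0.68, 257.0], classes)
```

The full algorithm uses eighteen classes and full covariance statistics estimated from training data, but the decision rule has this shape.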
Least squares restoration of multi-channel images
NASA Technical Reports Server (NTRS)
Chin, Roland T.; Galatsanos, Nikolas P.
1989-01-01
In this paper, a least squares filter for the restoration of multichannel imagery is presented. The restoration filter is based on a linear, space-invariant imaging model and makes use of an iterative matrix inversion algorithm. The restoration utilizes both within-channel (spatial) and cross-channel information as constraints. Experiments using color images (three-channel imagery with red, green, and blue components) were performed to evaluate the filter's performance and to compare it with other monochrome and multichannel filters.
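An iterative scheme of this flavor (a Landweber-style gradient iteration, which avoids explicit matrix inversion) can be sketched for a toy single-channel case; the paper's filter additionally incorporates cross-channel constraints:

```python
import numpy as np

# Sketch of iterative least squares restoration: minimize ||H f - y||^2 by
# the gradient iteration f += step * H^T (y - H f), avoiding an explicit
# matrix inversion. The 3x3 blur operator is a toy stand-in.

def restore(H, y, n_iter=500, step=None):
    """Iteratively solve the least squares problem min ||H f - y||^2."""
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2) ** 2  # safe step from spectral norm
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        f += step * H.T @ (y - H @ f)
    return f

H = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.3],
              [0.0, 0.3, 1.0]])   # toy space-invariant blur
f_true = np.array([1.0, 2.0, 3.0])
f_hat = restore(H, H @ f_true)    # noiseless observation, exact recovery
```

In the multichannel setting, H and the regularizing constraints would couple the red, green, and blue planes so that cross-channel correlations are exploited as well.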
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations, with and without side information, were performed for the concatenated coding system. Two concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error-correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
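The benefit of erasing unreliable bits for the outer RS code follows from the standard errors-and-erasures decodability condition, 2e + s <= d_min - 1; a back-of-envelope check (code parameters are a common example, not necessarily those of the paper):

```python
# Errors-and-erasures decoding bound for a Reed-Solomon code with minimum
# distance d_min: decoding succeeds when 2*errors + erasures <= d_min - 1,
# so each reliably flagged erasure costs half as much as an unflagged error.

def rs_decodable(d_min, n_errors, n_erasures):
    """True if an errors-and-erasures RS decoder can correct this pattern."""
    return 2 * n_errors + n_erasures <= d_min - 1

# Example with RS(255, 223), d_min = 33: flagging erasures lets the decoder
# handle patterns that would defeat errors-only decoding.
ok = rs_decodable(33, 10, 12)       # 2*10 + 12 = 32 <= 32: decodable
too_many = rs_decodable(33, 16, 2)  # 2*16 + 2 = 34 > 32: not decodable
```

This is why the reliability information from the modified Viterbi decoder, used to convert likely errors into erasures, raises the overall system reliability.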
Performance analysis of a finite radon transform in OFDM system under different channel models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawood, Sameer A.; Anuar, M. S.; Fayadh, Rashid A.
In this paper, a class of discrete Radon transforms, namely the Finite Radon Transform (FRAT), was proposed as a modulation technique in the realization of Orthogonal Frequency Division Multiplexing (OFDM). The proposed FRAT operates as a data mapper in the OFDM transceiver instead of the conventional phase shift mapping and quadrature amplitude mapping that are usually used with standard OFDM based on the Fast Fourier Transform (FFT), in a way that increases the orthogonality of the system. The Fourier domain approach was found to be the most suitable way of obtaining the forward and inverse FRAT, and this structure resulted in a more suitable realization of the conventional FFT-OFDM. It was shown that this application increases the orthogonality significantly, due to the use of the Inverse Fast Fourier Transform (IFFT) twice, namely in the data mapping and in the sub-carrier modulation, and due to the use of an efficient algorithm for determining the FRAT coefficients, called the optimal ordering method. The proposed approach was tested and compared with conventional OFDM for an additive white Gaussian noise (AWGN) channel, a flat fading channel, and a multi-path frequency selective fading channel. The obtained results showed that the proposed system improves the bit error rate (BER) performance by reducing inter-symbol interference (ISI) and inter-carrier interference (ICI), compared with the conventional OFDM system.
Performance analysis of a finite radon transform in OFDM system under different channel models
NASA Astrophysics Data System (ADS)
Dawood, Sameer A.; Malek, F.; Anuar, M. S.; Fayadh, Rashid A.; Abdullah, Farrah Salwani
2015-05-01
In this paper, a class of discrete Radon transforms, namely the Finite Radon Transform (FRAT), was proposed as a modulation technique in the realization of Orthogonal Frequency Division Multiplexing (OFDM). The proposed FRAT operates as a data mapper in the OFDM transceiver instead of the conventional phase shift mapping and quadrature amplitude mapping that are usually used with standard OFDM based on the Fast Fourier Transform (FFT), in a way that increases the orthogonality of the system. The Fourier domain approach was found to be the most suitable way of obtaining the forward and inverse FRAT, and this structure resulted in a more suitable realization of the conventional FFT-OFDM. It was shown that this application increases the orthogonality significantly, due to the use of the Inverse Fast Fourier Transform (IFFT) twice, namely in the data mapping and in the sub-carrier modulation, and due to the use of an efficient algorithm for determining the FRAT coefficients, called the optimal ordering method. The proposed approach was tested and compared with conventional OFDM for an additive white Gaussian noise (AWGN) channel, a flat fading channel, and a multi-path frequency selective fading channel. The obtained results showed that the proposed system improves the bit error rate (BER) performance by reducing inter-symbol interference (ISI) and inter-carrier interference (ICI), compared with the conventional OFDM system.
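For reference, the conventional FFT-based OFDM baseline that the FRAT mapper replaces can be sketched over an AWGN channel with a QPSK mapper; all parameters (subcarrier count, noise level) are illustrative:

```python
import numpy as np

# Toy baseline: one conventional FFT-based OFDM symbol over an AWGN channel
# with Gray-mapped QPSK. The paper replaces the QPSK/QAM data mapper with
# the FRAT; everything below is a standard reference sketch.

rng = np.random.default_rng(0)
n_sub = 64
bits = rng.integers(0, 2, size=(n_sub, 2))

# QPSK mapping: bit 0 -> +1, bit 1 -> -1 on each axis, unit average energy.
symbols = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

tx = np.fft.ifft(symbols) * np.sqrt(n_sub)            # sub-carrier modulation
noise = (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)) * 0.05
rx = np.fft.fft(tx + noise) / np.sqrt(n_sub)          # demodulation

bits_hat = np.stack([rx.real < 0, rx.imag < 0], axis=1).astype(int)
```

At this high SNR all bits are recovered; the paper's comparison adds flat and frequency-selective fading channels, where the FRAT mapper's extra orthogonality pays off.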
Variable Scheduling to Mitigate Channel Losses in Energy-Efficient Body Area Networks
Tselishchev, Yuriy; Boulis, Athanassios; Libman, Lavy
2012-01-01
We consider a typical body area network (BAN) setting in which sensor nodes send data to a common hub regularly on a TDMA basis, as defined by the emerging IEEE 802.15.6 BAN standard. To reduce transmission losses caused by the highly dynamic nature of the wireless channel around the human body, we explore variable TDMA scheduling techniques that allow the order of transmissions within each TDMA round to be decided on the fly, rather than being fixed in advance. Using a simple Markov model of the wireless links, we devise a number of scheduling algorithms that can be performed by the hub, which aim to maximize the expected number of successful transmissions in a TDMA round, and thereby significantly reduce transmission losses as compared with a static TDMA schedule. Importantly, these algorithms do not require a priori knowledge of the statistical properties of the wireless channels, and the reliability improvement is achieved entirely via shuffling the order of transmissions among devices, and does not involve any additional energy consumption (e.g., retransmissions). We evaluate these algorithms directly on an experimental set of traces obtained from devices strapped to human subjects performing regular daily activities, and confirm that the benefits of the proposed variable scheduling algorithms extend to this practical setup as well. PMID:23202183
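A minimal "most reliable first" variant of such variable scheduling can be sketched as follows; the per-node success probabilities stand in for estimates derived from the two-state Markov channel model:

```python
# Minimal sketch of a variable TDMA round: after each slot, the hub picks the
# not-yet-served node with the highest estimated success probability (as would
# be derived from a two-state good/bad Markov channel model). The node names
# and probabilities are illustrative.

def next_node(pending, p_success):
    """Pick the pending node most likely to succeed in the next slot."""
    return max(pending, key=lambda n: p_success[n])

p_success = {"ecg": 0.9, "accel": 0.4, "spo2": 0.7}
pending = set(p_success)
order = []
while pending:
    node = next_node(pending, p_success)
    order.append(node)
    pending.remove(node)
```

Because only the transmission order changes, no extra energy is spent; deferring a node whose channel is currently bad gives its channel time to recover, which is the intuition behind the expected-success maximization above.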
Novel ion channel targets in atrial fibrillation.
Hancox, Jules C; James, Andrew F; Marrion, Neil V; Zhang, Henggui; Thomas, Dierk
2016-08-01
Atrial fibrillation (AF) is the most common arrhythmia in humans. It is progressive and the development of electrical and structural remodeling makes early intervention desirable. Existing antiarrhythmic pharmacological approaches are not always effective and can produce unwanted side effects. Additional atrial-selective antiarrhythmic strategies are therefore desirable. Evidence for three novel ion channel atrial-selective therapeutic targets is evaluated: atrial-selective fast sodium channel current (INa) inhibition; small conductance calcium-activated potassium (SK) channels; and two-pore (K2P) potassium channels. Data from animal models support atrial-ventricular differences in INa kinetics and also suggest atrial-ventricular differences in sodium channel β subunit expression. Further work is required to determine whether intrinsic atrial-ventricular differences in human INa exist or whether functional differences occur due to distinct atrial and ventricular action and resting potentials. SK and K2P channels (particularly K2P 3.1) offer potentially attractive atrial-selective targets. Work is needed to identify the underlying basis of SK current that contributes to (patho)physiological atrial repolarization and settings in which SK inhibition is anti- versus pro-arrhythmic. Although K2P3.1 appears to be a promising target with comparatively selective drugs for experimental use, a lack of selective pharmacology hinders evaluation of other K2P channels as potential atrial-selective targets.
Rough sets and Laplacian score based cost-sensitive feature selection
Yu, Shenglong
2018-01-01
Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Most existing cost-sensitive feature selection algorithms are heuristic: they evaluate the importance of each feature individually and select features one by one, and thus do not consider the relationships among features. In this paper, we propose a new algorithm for minimal-cost feature selection, called rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and the Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes the relationships among features into consideration through the locality preservation of the Laplacian score. We select a feature subset with maximal feature importance and minimal cost, where the cost is given by three different distributions to simulate different applications. Unlike existing cost-sensitive feature selection algorithms, our algorithm selects a predetermined number of “good” features simultaneously. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum-cost subset. In addition, the results of our method are more promising than those of other cost-sensitive feature selection algorithms. PMID:29912884
Rough sets and Laplacian score based cost-sensitive feature selection.
Yu, Shenglong; Zhao, Hong
2018-01-01
Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Most existing cost-sensitive feature selection algorithms are heuristic: they evaluate the importance of each feature individually and select features one by one, and thus do not consider the relationships among features. In this paper, we propose a new algorithm for minimal-cost feature selection, called rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and the Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes the relationships among features into consideration through the locality preservation of the Laplacian score. We select a feature subset with maximal feature importance and minimal cost, where the cost is given by three different distributions to simulate different applications. Unlike existing cost-sensitive feature selection algorithms, our algorithm selects a predetermined number of "good" features simultaneously. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum-cost subset. In addition, the results of our method are more promising than those of other cost-sensitive feature selection algorithms.
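The Laplacian score component can be sketched as follows (a lower score indicates better locality preservation); the affinity graph and data are toy values, and the paper additionally combines this score with rough-set importance and feature cost:

```python
import numpy as np

# Sketch of the Laplacian score for feature importance: features that vary
# smoothly across neighboring samples (low score) preserve locality better.
# The 3-sample chain graph and the two features below are toy values.

def laplacian_score(X, W):
    """X: (n_samples, n_features); W: symmetric (n, n) affinity matrix."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    ones = np.ones(len(X))
    scores = []
    for f in X.T:
        f_c = f - (f @ D @ ones) / (ones @ D @ ones)  # degree-weighted de-meaning
        scores.append((f_c @ L @ f_c) / (f_c @ D @ f_c))
    return np.array(scores)

X = np.array([[1.0, 10.0],
              [1.1, -3.0],
              [0.9,  7.0]])
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)  # chain affinity graph
scores = laplacian_score(X, W)          # feature 0 is smooth, feature 1 noisy
```

Feature 0 changes little between linked samples and therefore receives the lower (better) score, which is the locality-preservation signal the selection algorithm exploits.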
Linear time-invariant controller design for two-channel decentralized control systems
NASA Technical Reports Server (NTRS)
Desoer, Charles A.; Gundes, A. Nazli
1987-01-01
This paper analyzes a linear time-invariant two-channel decentralized control system with a 2 x 2 strictly proper plant. It presents an algorithm for the algebraic design of a class of decentralized compensators which stabilize the given plant.
Plant Species Identification by Bi-channel Deep Convolutional Networks
NASA Astrophysics Data System (ADS)
He, Guiqing; Xia, Zhaoqiang; Zhang, Qiqi; Zhang, Haixi; Fan, Jianping
2018-04-01
Plant species identification has attracted much attention recently, as it has potential applications in environmental protection and human life. Although deep learning techniques can be applied directly to plant species identification, they still need to be tailored to this specific task to obtain state-of-the-art performance. In this paper, a bi-channel deep learning framework is developed for identifying plant species. In the framework, two different sub-networks are fine-tuned from their respective pretrained models, and a stacking layer is then used to fuse the outputs of the two sub-networks. We construct a plant dataset of the Orchidaceae family for algorithm evaluation. Our experimental results demonstrate that our bi-channel deep network achieves very competitive accuracy compared to existing deep learning algorithms.
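The stacking-style fusion of the two sub-networks can be sketched at the output level; the sub-network probability vectors below are mocked rather than produced by real CNNs, and the fixed weight stands in for what the stacking layer would learn:

```python
import numpy as np

# Hedged sketch of the bi-channel idea: each sub-network emits a class
# probability vector, and a stacking layer (modeled here as a weighted sum)
# fuses them into the final prediction. Both probability vectors are mocked.

def fuse(p_a, p_b, w=0.5):
    """Stacking-style fusion of two class probability vectors."""
    p = w * p_a + (1 - w) * p_b
    return p / p.sum()  # renormalize for safety

p_net1 = np.array([0.6, 0.3, 0.1])  # output of fine-tuned sub-network 1
p_net2 = np.array([0.2, 0.7, 0.1])  # output of fine-tuned sub-network 2
fused = fuse(p_net1, p_net2)
pred = int(np.argmax(fused))        # class index after fusion
```

In the paper the fusion weights are trained jointly with the stacking layer, so the combination can favor whichever sub-network is more reliable per class.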
Two-Channel Satellite Retrievals of Aerosol Properties: An Overview
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.
1999-01-01
In order to reduce current uncertainties in the evaluation of the direct and indirect effects of tropospheric aerosols on climate on the global scale, it has been suggested to apply multi-channel retrieval algorithms to the full period of existing satellite data. This talk will outline the methodology of interpreting two-channel satellite radiance data over the ocean and describe a detailed analysis of the sensitivity of retrieved aerosol parameters to the assumptions made in different retrieval algorithms. We will specifically address the calibration and cloud screening issues, consider the suitability of existing satellite data sets to detecting short- and long-term regional and global changes, compare preliminary results obtained by several research groups, and discuss the prospects of creating an advanced retroactive climatology of aerosol optical thickness and size over the oceans.
Multi-channel distributed coordinated function over single radio in wireless sensor networks.
Campbell, Carlene E-A; Loo, Kok-Keong Jonathan; Gemikonakli, Orhan; Khan, Shafiullah; Singh, Dhananjay
2011-01-01
Multi-channel assignment is becoming the solution of choice for improving single-radio performance in wireless networks: it allows different channels to be assigned to different nodes for real-time transmission. In this paper, we propose a new approach, Multi-channel Distributed Coordinated Function (MC-DCF), which takes advantage of multi-channel assignment. The backoff algorithm of the IEEE 802.11 distributed coordination function (DCF) is modified to invoke channel switching based on threshold criteria, in order to improve the overall throughput of wireless sensor networks (WSNs) over 802.11 networks. We present simulation experiments investigating the characteristics of multi-channel communication in wireless sensor networks on the NS2 platform. Nodes use only a single radio, which can operate on only one channel at any given time, and switch channels only after a specified threshold is reached. All nodes initiate constant bit rate (CBR) streams towards the receiving nodes. We study the impact of non-overlapping channels in the 2.4 GHz band on CBR streams, node density, source nodes sending data directly to the sink, and signal strength, by varying the distances between the sensor nodes and the operating frequencies of the radios with different data rates. We show that multi-channel enhancement using our proposed algorithm provides significant improvements in throughput, packet delivery ratio, and delay. This technique can be considered for future WSN use over 802.11 networks, especially as IEEE 802.11n becomes popular and may prevent 802.15.4 networks from operating effectively in the 2.4 GHz frequency band.
Multi-Channel Distributed Coordinated Function over Single Radio in Wireless Sensor Networks
Campbell, Carlene E.-A.; Loo, Kok-Keong (Jonathan); Gemikonakli, Orhan; Khan, Shafiullah; Singh, Dhananjay
2011-01-01
Multi-channel assignment is becoming the solution of choice for improving single-radio performance in wireless networks: it allows different channels to be assigned to different nodes for real-time transmission. In this paper, we propose a new approach, Multi-channel Distributed Coordinated Function (MC-DCF), which takes advantage of multi-channel assignment. The backoff algorithm of the IEEE 802.11 distributed coordination function (DCF) is modified to invoke channel switching based on threshold criteria, in order to improve the overall throughput of wireless sensor networks (WSNs) over 802.11 networks. We present simulation experiments investigating the characteristics of multi-channel communication in wireless sensor networks on the NS2 platform. Nodes use only a single radio, which can operate on only one channel at any given time, and switch channels only after a specified threshold is reached. All nodes initiate constant bit rate (CBR) streams towards the receiving nodes. We study the impact of non-overlapping channels in the 2.4 GHz band on CBR streams, node density, source nodes sending data directly to the sink, and signal strength, by varying the distances between the sensor nodes and the operating frequencies of the radios with different data rates. We show that multi-channel enhancement using our proposed algorithm provides significant improvements in throughput, packet delivery ratio, and delay. This technique can be considered for future WSN use over 802.11 networks, especially as IEEE 802.11n becomes popular and may prevent 802.15.4 networks from operating effectively in the 2.4 GHz frequency band. PMID:22346614
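The threshold-triggered channel switching at the heart of MC-DCF can be sketched as follows; the threshold value and the simple cycling order over the non-overlapping 2.4 GHz channels are illustrative simplifications of the modified backoff algorithm:

```python
# Simplified sketch of MC-DCF channel switching: a node stays on its channel
# until a contention metric (here, consecutive failed backoffs) crosses a
# threshold, then hops to the next non-overlapping 2.4 GHz channel. The
# threshold and round-robin hop order are illustrative.

NON_OVERLAPPING = [1, 6, 11]  # the classic non-overlapping 802.11 channels

def maybe_switch(channel, failed_backoffs, threshold=3):
    """Return (new_channel, new_failure_count) after one decision point."""
    if failed_backoffs >= threshold:
        i = NON_OVERLAPPING.index(channel)
        return NON_OVERLAPPING[(i + 1) % len(NON_OVERLAPPING)], 0
    return channel, failed_backoffs
```

A single radio can only be on one channel at a time, so the switch resets the failure count and contention resumes on the new channel.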
NASA Astrophysics Data System (ADS)
Sanò, Paolo; Casella, Daniele; Panegrossi, Giulia; Marra, Anna Cinzia; Dietrich, Stefano
2016-04-01
Spaceborne microwave cross-track scanning radiometers, originally developed for temperature and humidity sounding, have shown great capability to provide a significant contribution to precipitation monitoring, both in terms of measurement quality and spatial/temporal coverage. The Passive microwave Neural network Precipitation Retrieval (PNPR) algorithm for cross-track scanning radiometers, originally developed for the Advanced Microwave Sounding Unit/Microwave Humidity Sounder (AMSU-A/MHS) radiometers (on board the European MetOp and U.S. NOAA satellites), was recently redesigned to exploit the Advanced Technology Microwave Sounder (ATMS) on board the Suomi-NPP satellite and the future JPSS satellites. The PNPR algorithm is based on the Artificial Neural Network (ANN) approach. The main changes in PNPR-ATMS with respect to PNPR-AMSU/MHS are the design and implementation of a new ANN able to manage the information derived from the additional ATMS channels (relative to the AMSU-A/MHS radiometers) and a new screening procedure for non-precipitating pixels. In order to achieve maximum consistency of the retrieved surface precipitation, both PNPR algorithms are based on the same physical foundation. The PNPR is optimized for the European and African areas. The neural network was trained using a cloud-radiation database built upon 94 cloud-resolving simulations over Europe and the Mediterranean and over the African area, and radiative transfer model simulations of TB vectors consistent with the AMSU-A/MHS and ATMS channel frequencies, viewing angles, and view-angle dependent IFOV sizes along the scan projections. As opposed to other ANN precipitation retrieval algorithms, PNPR uses a unique ANN that retrieves the surface precipitation rate for all types of surface backgrounds represented in the training database, i.e., land (vegetated or arid), ocean, snow/ice or coast.
This approach prevents different precipitation estimates from being inconsistent with one another when an observed precipitation system extends over two or more types of surfaces. As input data, the PNPR algorithm incorporates the TBs from selected channels and various additional TB-derived variables. Ancillary geographical/geophysical inputs (i.e., latitude, terrain height, surface type, season) are also considered during the training phase. The PNPR algorithm outputs consist of both the surface precipitation rate (along with information on precipitation phase: liquid, mixed, solid) and a pixel-based quality index. We will illustrate the main features of the PNPR algorithm and will show results of a verification study over Europe and Africa. The study is based on the available ground-based radar and/or rain gauge network observations over the European area. In addition, results of the comparison with rainfall products available from the NASA/JAXA Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) (over the African area) and the Global Precipitation Measurement (GPM) Dual-frequency Precipitation Radar (DPR) will be shown. The analysis is built upon a two-year coincidence dataset of AMSU/MHS and ATMS observations with PR (2013-2014) and DPR (2014-2015). The PNPR is developed within the EUMETSAT H/SAF program (Satellite Application Facility for Operational Hydrology and Water Management), where it is used operationally towards the full exploitation of all microwave radiometers available in the GPM era. The algorithm will be tailored to the future European Microwave Sounder (MWS) on board the MetOp-Second Generation (MetOp-SG) satellites.
NASA Astrophysics Data System (ADS)
O'Shea, Daniel J.; Shenoy, Krishna V.
2018-04-01
Objective. Electrical stimulation is a widely used and effective tool in systems neuroscience, neural prosthetics, and clinical neurostimulation. However, electrical artifacts evoked by stimulation prevent the detection of spiking activity on nearby recording electrodes, which obscures the neural population response evoked by stimulation. We sought to develop a method to clean artifact-corrupted electrode signals recorded on multielectrode arrays in order to recover the underlying neural spiking activity. Approach. We created an algorithm that performs estimation and removal of array artifacts via sequential principal components regression (ERAASR). This approach leverages the similar structure of artifact transients, but not spiking activity, across simultaneously recorded channels on the array, across pulses within a train, and across trials. The ERAASR algorithm requires no special hardware, imposes no requirements on the shape of the artifact or the multielectrode array geometry, and comprises sequential application of straightforward linear methods with intuitive parameters. The approach should be readily applicable to most datasets where stimulation does not saturate the recording amplifier. Main results. The effectiveness of the algorithm is demonstrated in macaque dorsal premotor cortex using acute linear multielectrode array recordings and single electrode stimulation. Large electrical artifacts appeared on all channels during stimulation. After application of ERAASR, the cleaned signals were quiescent on channels with no spontaneous spiking activity, whereas spontaneously active channels exhibited evoked spikes which closely resembled spontaneously occurring spiking waveforms. Significance. We hope that enabling simultaneous electrical stimulation and multielectrode array recording will help elucidate the causal links between neural activity and cognition and facilitate naturalistic sensory prostheses.
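One pass of the across-channel step can be sketched on synthetic data: regress a target channel onto the top principal components of the other channels and keep the residual. The channel count, shared transient, noise level, and number of components are all illustrative, not the paper's settings:

```python
import numpy as np

# Toy sketch of one ERAASR-style across-channel pass: the stimulation artifact
# is shared (up to gain) across channels, so the top principal components of
# the *other* channels capture it; regressing the target channel onto those
# components and subtracting the fit leaves the channel-specific signal.

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
artifact = np.exp(-10 * t) * np.sin(60 * t)        # shared stimulation transient
gains = rng.uniform(0.5, 1.5, size=8)              # per-channel artifact gain
X = np.outer(gains, artifact) + 0.01 * rng.normal(size=(8, 200))

def clean_channel(X, ch, n_pc=2):
    """Regress channel `ch` onto top PCs of the other channels; return residual."""
    others = np.delete(X, ch, axis=0)
    centered = others - others.mean(axis=1, keepdims=True)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = np.column_stack([np.ones(X.shape[1]), Vt[:n_pc].T])  # intercept + PCs
    coef, *_ = np.linalg.lstsq(basis, X[ch], rcond=None)
    return X[ch] - basis @ coef

residual = clean_channel(X, 0)   # artifact largely removed; noise-scale remains
```

The full algorithm applies this idea sequentially across channels, pulses, and trials, which is what lets genuine spikes (not shared across channels) survive the subtraction.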
Improved Surface Parameter Retrievals using AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John
2008-01-01
The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007, generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Two very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; and 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions. In this methodology, longwave CO2 channel observations in the spectral region 700 cm^-1 to 750 cm^-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm^-1 to 2395 cm^-1 are used for temperature sounding purposes. This allows for accurate temperature soundings under more difficult cloud conditions. This paper further improves on the methodology used in Version 5 to derive surface skin temperature and surface spectral emissivity from AIRS/AMSU observations. Now, following the approach used to improve tropospheric temperature profiles, surface skin temperature is also derived using only shortwave window channels. This produces improved surface parameters, both day and night, compared to what was obtained in Version 5. These in turn result in improved boundary layer temperatures and retrieved total O3 burden.
NASA Astrophysics Data System (ADS)
Schmidl, Marius
2017-04-01
We present a comprehensive training data set covering a large range of atmospheric conditions, including disperse volcanic ash and desert dust layers. These data sets contain all information required for the development of volcanic ash detection algorithms based on artificial neural networks, which are urgently needed since volcanic ash in the airspace is a major concern of aviation safety authorities. Selected parts of the data are used to train the volcanic ash detection algorithm VADUGS. The data sets contain atmospheric and surface-related quantities as well as the corresponding simulated satellite data for the channels in the infrared spectral range of the SEVIRI instrument on board MSG-2. To obtain realistic results, ECMWF, IASI-based, and GEOS-Chem data are used to calculate all parameters describing the environment, whereas the software package libRadtran is used to perform radiative transfer simulations returning the brightness temperatures for each atmospheric state. As optical properties are a prerequisite for radiative simulations accounting for aerosol layers, the development also included the computation of optical properties for a set of different aerosol types from different sources. A description of the developed software and the methods used is given, together with an overview of the resulting data sets.
An energy-efficient rate adaptive media access protocol (RA-MAC) for long-lived sensor networks.
Hu, Wen; Chen, Quanjun; Corke, Peter; O'Rourke, Damien
2010-01-01
We introduce an energy-efficient Rate Adaptive Media Access Control (RA-MAC) algorithm for long-lived Wireless Sensor Networks (WSNs). Previous research shows that the dynamic and lossy nature of wireless communications is one of the major challenges to reliable data delivery in WSNs. RA-MAC achieves high link reliability in such situations by dynamically trading off data rate for channel gain. The extra gain that can be achieved reduces the packet loss rate, which contributes to reduced energy expenditure through a reduced number of retransmissions. We achieve this at the expense of raw bit rate, which generally far exceeds the application's link requirement. To minimize communication energy consumption, RA-MAC selects the optimal data rate based on the estimated link quality at each data rate and an analytical model of the energy consumption. Our model shows how the selected data rate depends on different channel conditions in order to minimize energy consumption. We have implemented RA-MAC in TinyOS for an off-the-shelf sensor platform (the TinyNode) on top of a state-of-the-art WSN Media Access Control protocol, SCP-MAC, and evaluated its performance by comparing our implementation with the original SCP-MAC using both simulation and experiment.
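The rate selection can be modeled as minimizing expected energy per delivered packet; the transmit power, packet size, and per-rate packet reception ratios (PRR) below are made up for illustration and are not RA-MAC's actual model:

```python
# Illustrative model of rate-adaptive MAC rate choice: pick the radio data
# rate that minimizes expected energy per delivered packet,
#   E = P_tx * L / (rate * PRR),
# where PRR is the estimated packet reception ratio at that rate (a low PRR
# implies retransmissions). All numbers are invented for illustration.

def best_rate(rates_prr, p_tx_watts=0.06, packet_bits=1024):
    """rates_prr: {bits_per_sec: estimated_PRR}; return the cheapest rate."""
    def energy(rate):
        return p_tx_watts * packet_bits / (rate * rates_prr[rate])
    return min(rates_prr, key=energy)

# Dropping from 250 kbps to 125 kbps wins here: the slower rate's much better
# PRR outweighs the longer on-air time.
rate = best_rate({250_000: 0.4, 125_000: 0.9, 25_000: 0.99})
```

This captures the trade-off the abstract describes: lower raw bit rate buys channel gain, fewer losses, and hence lower total energy.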
Estimating the beam attenuation coefficient in coastal waters from AVHRR imagery
NASA Astrophysics Data System (ADS)
Gould, Richard W.; Arnone, Robert A.
1997-09-01
This paper presents an algorithm to estimate particle beam attenuation at 660 nm (cp660) in coastal areas using the red and near-infrared channels of the NOAA AVHRR satellite sensor. In situ reflectance spectra and cp660 measurements were collected at 23 stations in Case I and II waters during an April 1993 cruise in the northern Gulf of Mexico. The reflectance spectra were weighted by the spectral response of the AVHRR sensor and integrated over the channel 1 waveband to estimate the atmospherically corrected signal recorded by the satellite. An empirical relationship between integrated reflectance and cp660 values was derived with a linear correlation coefficient of 0.88. Because the AVHRR sensor requires a strong channel 1 signal, the algorithm is applicable in highly turbid areas (cp660 > 1.5 m^-1) where scattering from suspended sediment strongly controls the shape and magnitude of the red (550-650 nm) reflectance spectrum. The algorithm was tested on a data set collected 2 years later in different coastal waters in the northern Gulf of Mexico, and satellite estimates of cp660 averaged within 37% of measured values. Application of the algorithm provides daily images of nearshore regions at 1 km resolution for evaluating processes affecting ocean color distribution patterns (tides, winds, currents, river discharge). Further validation and refinement of the algorithm are in progress to permit quantitative application in other coastal areas.
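The empirical step amounts to a simple linear regression between integrated channel-1 reflectance and measured cp660; the station values below are fabricated stand-ins for the cruise data:

```python
# Sketch of the empirical step: fit a linear relation between integrated
# AVHRR channel-1 reflectance and measured cp660 at the cruise stations,
# then invert it for satellite pixels. All station values are fabricated.

def fit_line(xs, ys):
    """Ordinary least squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

refl = [0.010, 0.020, 0.030, 0.040]   # integrated channel-1 reflectance
cp660 = [1.6, 3.1, 4.4, 6.0]          # measured beam attenuation (1/m)
slope, intercept = fit_line(refl, cp660)
cp_est = slope * 0.025 + intercept    # cp660 estimate for one satellite pixel
```

In operation, the same relation is applied per pixel to atmospherically corrected channel-1 imagery to map cp660 at 1 km resolution.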
NASA Astrophysics Data System (ADS)
Sun, Y. W.; Liu, C.; Xie, P. H.; Hartl, A.; Chan, K. L.; Tian, Y.; Wang, W.; Qin, M.; Liu, J. G.; Liu, W. Q.
2015-12-01
In this paper, we demonstrate accurate industrial SO2 emissions monitoring using a portable multi-channel gas analyzer with an optimized retrieval algorithm. The analyzer features a large dynamic measurement range and corrects for interference from other co-existing infrared absorbers, e.g., NO, CO, CO2, NO2, CH4, HC, N2O and H2O; limited range and cross-interference have been the major limitations of industrial SO2 emissions monitoring. The multi-channel gas analyzer measures 11 different wavelength channels simultaneously in order to correct several major problems of an infrared gas analyzer, including system drift, conflict of sensitivity, interference among different infrared absorbers, and limitation of measurement range. The optimized algorithm uses a third-order polynomial rather than a constant factor to quantify gas-to-gas interference. The measurement results show good performance in both the linear and nonlinear range, thereby solving the problem that conventional interference correction is restricted by the linearity of both the intended and interfering channels. This implies that the measurement range of the developed multi-channel analyzer can be extended to the nonlinear absorption region. The measurement range and accuracy were evaluated by laboratory calibration: excellent agreement was achieved, with a Pearson correlation coefficient (r2) of 0.99977 over a measurement range from ~5 ppmv to 10,000 ppmv and a measurement error <2%. The instrument was also deployed for field measurement of emissions from 3 different factories, whose emissions are characterized by different co-existing infrared absorbers covering a wide range of concentration levels. Our measurements show overall good agreement with commercial SO2 analyzers.
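The cubic interference correction can be illustrated schematically; the coefficients and readings below are placeholders, not the instrument's calibrated values:

```python
# Illustrative only: conventional correction subtracts a constant-factor
# interference term, whereas the optimized algorithm uses a third-order
# polynomial in the interfering channel's reading.
def corrected_so2(raw_so2, interferer, coeffs):
    """Subtract a cubic-polynomial interference estimate from the raw reading."""
    c0, c1, c2, c3 = coeffs
    interference = c0 + c1 * interferer + c2 * interferer ** 2 + c3 * interferer ** 3
    return raw_so2 - interference

# example: raw SO2 reading 100 ppmv, interfering-channel reading 2.0
value = corrected_so2(100.0, 2.0, (0.0, 1.0, 0.5, 0.1))
```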
System Design for Nano-Network Communications
NASA Astrophysics Data System (ADS)
ShahMohammadian, Hoda
The potential applications of nanotechnology in a wide range of areas necessitate nano-networking research. Nano-networking is a new type of networking which has emerged by applying nanotechnology to communication theory. Therefore, this dissertation presents a framework for physical layer communications in a nano-network and addresses some of the pressing unsolved challenges in designing a molecular communication system. The contribution of this dissertation is proposing well-justified models for signal propagation, noise sources, optimum receiver design and synchronization in molecular communication channels. The design of any communication system is primarily based on the signal propagation channel and noise models. Using Brownian motion and advection molecular statistics, separate signal propagation and noise models are presented for diffusion-based and flow-based molecular communication channels. It is shown that the corrupting noise of molecular channels is uncorrelated and non-stationary with a signal-dependent magnitude. The next key component of any communication system is the reception and detection process. This dissertation provides a detailed analysis of the effect of the ligand-receptor binding mechanism on the received signal, and develops the first optimal receiver design for molecular communications. The bit error rate performance of the proposed receiver is evaluated and the impact of medium motion on the receiver performance is investigated. Another important feature of any communication system is synchronization. In this dissertation, the first blind synchronization algorithm is presented for molecular communication channels. The proposed algorithm uses a non-decision-directed maximum likelihood criterion for estimating the channel delay. The Cramer-Rao lower bound is also derived, and the performance of the proposed synchronization algorithm is evaluated by investigating its mean square error.
Device Centric Throughput and QoS Optimization for IoTsin a Smart Building Using CRN-Techniques
Aslam, Saleem; Hasan, Najam Ul; Shahid, Adnan; Jang, Ju Wook; Lee, Kyung-Geun
2016-01-01
The Internet of Things (IoT) has gained incredible importance in the communication and networking industry due to its innovative solutions and advantages in diverse domains. The IoT network is a network of smart physical objects: devices, vehicles, buildings, etc. The IoT has a number of applications ranging from smart home and smart surveillance to smart healthcare systems. Since the IoT consists of various heterogeneous devices that exhibit different traffic patterns and expect different quality of service (QoS) in terms of data rate, bit error rate and channel stability index, in this paper we formulate an optimization problem to assign channels to heterogeneous IoT devices within a smart building for the provisioning of their desired QoS. To solve this problem, a novel particle swarm optimization-based algorithm is proposed. Exhaustive simulations are then carried out to evaluate the performance of the proposed algorithm. Simulation results demonstrate the supremacy of our proposed algorithm over the existing ones in terms of throughput, bit error rate and channel stability index. PMID:27782057
Study on additional carrier sensing for IEEE 802.15.4 wireless sensor networks.
Lee, Bih-Hwang; Lai, Ruei-Lung; Wu, Huai-Kuei; Wong, Chi-Ming
2010-01-01
Wireless sensor networks based on the IEEE 802.15.4 standard are able to achieve low-power transmissions in low-rate, short-distance wireless personal area networks (WPANs). Slotted carrier sense multiple access with collision avoidance (CSMA/CA) is used as the contention mechanism. Sensor nodes perform a backoff process as soon as clear channel assessment (CCA) detects a busy channel; in doing so they may neglect the implicit information of the failed CCA detection and cause redundant sensing. This blind backoff process in slotted CSMA/CA lowers channel utilization. This paper proposes an additional carrier sensing (ACS) algorithm based on IEEE 802.15.4 to enhance the carrier sensing mechanism of the original slotted CSMA/CA. An analytical Markov chain model is developed to evaluate the performance of the ACS algorithm. Both analytical and simulation results show that the proposed algorithm outperforms IEEE 802.15.4, significantly improving throughput, average medium access control (MAC) delay and the power consumption of CCA detection.
Wang, Zhirui; Xu, Jia; Huang, Zuzhen; Zhang, Xudong; Xia, Xiang-Gen; Long, Teng; Bao, Qian
2016-03-16
To detect and estimate ground slowly moving targets in airborne single-channel synthetic aperture radar (SAR), a road-aided ground moving target indication (GMTI) algorithm is proposed in this paper. First, the road area is extracted from a focused SAR image based on radar vision. Second, after stationary clutter suppression in the range-Doppler domain, a moving target is detected and located in the image domain via the watershed method. The target's position on the road as well as its radial velocity can be determined according to the target's offset distance and traffic rules. Furthermore, the target's azimuth velocity is estimated based on the road slope obtained via polynomial fitting. Compared with the traditional algorithms, the proposed method can effectively cope with slowly moving targets partly submerged in a stationary clutter spectrum. In addition, the proposed method can be easily extended to a multi-channel system to further improve the performance of clutter suppression and motion estimation. Finally, the results of numerical experiments are provided to demonstrate the effectiveness of the proposed algorithm.
Resiliency of the Multiscale Retinex Image Enhancement Algorithm
NASA Technical Reports Server (NTRS)
Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.
1998-01-01
The multiscale retinex with color restoration (MSRCR) continues to prove itself in extensive testing to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. However, issues remain with regard to the resiliency of the MSRCR to different image sources and arbitrary image manipulations that may have been applied prior to retinex processing. In this paper we define these areas of concern, provide experimental results, and examine the effects of commonly occurring image manipulations on retinex performance. In virtually all cases the MSRCR is highly resilient to the effects of both image source variations and commonly encountered prior image processing. Significant artifacts are primarily observed for the case of selective color channel clipping in large dark zones of an image. These issues are of concern in the processing of digital image archives and other applications where there is neither control over the image acquisition process nor knowledge of any processing done on the data beforehand.
Motion control of the rabbit ankle joint with a flat interface nerve electrode.
Park, Hyun-Joo; Durand, Dominique M
2015-12-01
A flat interface nerve electrode (FINE) has been shown to improve fascicular and subfascicular selectivity. A recently developed novel control algorithm for FINE was applied to motion control of the rabbit ankle. A 14-contact FINE was placed on the rabbit sciatic nerve (n = 8), and ankle joint motion was controlled for sinusoidal trajectories and filtered random trajectories. To this end, a real-time controller was implemented with a multiple-channel current stimulus isolator. The performance test results showed good tracking performance of rabbit ankle joint motion for filtered random trajectories and sinusoidal trajectories (0.5 Hz and 1.0 Hz) with <10% average root-mean-square (RMS) tracking error, whereas the average range of ankle joint motion was between -20.0 ± 9.3° and 18.1 ± 8.8°. The proposed control algorithm enables the use of a multiple-contact nerve electrode for motion trajectory tracking control of musculoskeletal systems. © 2015 Wiley Periodicals, Inc.
An Efficient, Highly Flexible Multi-Channel Digital Downconverter Architecture
NASA Technical Reports Server (NTRS)
Goodhart, Charles E.; Soriano, Melissa A.; Navarro, Robert; Trinh, Joseph T.; Sigman, Elliott H.
2013-01-01
In this innovation, a digital downconverter has been created that produces a large (16 or greater) number of output channels of smaller bandwidths. Additionally, this design has the flexibility to tune each channel independently to anywhere in the input bandwidth to cover a wide range of output bandwidths (from 32 MHz down to 1 kHz). Both the flexibility in channel frequency selection and the more than four orders of magnitude range in output bandwidths (decimation rates from 32 to 640,000) presented significant challenges to be solved. The solution involved breaking the digital downconversion process into a two-stage process. The first stage is a 2x oversampled filter bank that divides the whole input bandwidth, as a real input signal, into seven overlapping, contiguous channels represented with complex samples. Using the symmetry of the sine and cosine functions in a similar way to that of an FFT (fast Fourier transform), this downconversion is very efficient and gives seven channels fixed in frequency. An arbitrary number of smaller-bandwidth channels can be formed from second-stage downconverters placed after the first stage of downconversion. Because of the overlapping of the first stage, there is no gap in coverage of the entire input bandwidth. The input to any of the second-stage downconverting channels has a multiplexer that chooses one of the seven wideband channels from the first stage. These second-stage downconverters take up fewer resources because they operate at lower bandwidths than doing the entire downconversion process from the input bandwidth for each independent channel. These second-stage downconverters are each independent, with fine frequency control tuning, providing extreme flexibility in positioning the center frequency of a downconverted channel.
Finally, these second-stage downconverters have flexible decimation factors over four orders of magnitude. The algorithm was developed to run in an FPGA (field programmable gate array) at input data sampling rates of up to 1,280 MHz. The current implementation takes a 1,280-MHz real input and first breaks it up into seven 160-MHz complex channels, each spaced 80 MHz apart. The eighth channel at baseband was not required for this implementation, which led to further optimization. Afterwards, 16 second-stage narrowband channels with independently tunable center frequencies and bandwidth settings are implemented. A future implementation in a larger Xilinx FPGA will hold up to 32 independent second-stage channels.
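A drastically simplified software model of the two-stage structure (a coarse fixed first stage followed by an independently tunable second stage) is sketched below; the boxcar filter, tap count, and tuning frequencies are toy choices, not the FPGA design:

```python
# Toy two-stage digital downconverter: each stage mixes the input down
# by a normalized frequency, low-pass filters it crudely, and decimates.
import cmath
import math

def mix_and_decimate(x, f_norm, decim, taps=8):
    """Mix x down by normalized frequency f_norm, boxcar low-pass, decimate."""
    mixed = [s * cmath.exp(-2j * math.pi * f_norm * n) for n, s in enumerate(x)]
    lp = [sum(mixed[max(0, n - taps + 1):n + 1]) / taps for n in range(len(mixed))]
    return lp[::decim]

fs = 1280.0   # MHz, input sample rate as in the abstract
tone = 242.0  # MHz, a test tone somewhere in the input band
x = [math.cos(2 * math.pi * tone / fs * n) for n in range(4096)]

# coarse first stage: select a wideband channel centered at 240 MHz
stage1 = mix_and_decimate(x, 240.0 / fs, decim=8)
# fine second stage: tune out the residual 2 MHz offset at the new rate fs/8
stage2 = mix_and_decimate(stage1, 2.0 / (fs / 8), decim=4)
```

Each second-stage instance would carry its own tuning frequency and decimation factor, mirroring the independence of the real design.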
A Network Selection Algorithm Considering Power Consumption in Hybrid Wireless Networks
NASA Astrophysics Data System (ADS)
Joe, Inwhee; Kim, Won-Tae; Hong, Seokjoon
In this paper, we propose a novel network selection algorithm considering power consumption in hybrid wireless networks for vertical handover. CDMA, WiBro and WLAN networks are the candidate networks for this selection algorithm. The algorithm is composed of a power consumption prediction algorithm and a final network selection algorithm. The power consumption prediction algorithm estimates the expected lifetime of the mobile station based on the current battery level, traffic class and power consumption of each network interface card of the mobile station. If the expected lifetime of the mobile station in a certain network is not long enough compared to the handover delay, that network is removed from the candidate list, thereby preventing unnecessary handovers in the preprocessing procedure. The final network selection algorithm, in turn, consists of AHP (Analytic Hierarchy Process) and GRA (Grey Relational Analysis). The global factors of the network selection structure are QoS, cost and lifetime. If the user preference is lifetime, our selection algorithm selects the network that offers the longest service duration due to low power consumption. We also conduct simulations using the OPNET simulation tool. The simulation results show that the proposed algorithm provides longer lifetime in the hybrid wireless network environment.
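The two-step structure, lifetime-based pruning followed by ranking, can be sketched as follows; the candidate figures and the simple weighted score (standing in for the AHP/GRA machinery) are invented for illustration:

```python
# Toy two-step network selection: prune networks whose predicted
# lifetime is too short relative to the handover delay, then rank
# the survivors by a weighted score over QoS, cost, and lifetime.
def select_network(candidates, weights, handover_delay):
    # step 1: lifetime-based preprocessing
    viable = [c for c in candidates if c["lifetime"] > handover_delay]
    # step 2: weighted ranking (cost counts against the score)
    def score(c):
        return (weights["qos"] * c["qos"]
                - weights["cost"] * c["cost"]
                + weights["lifetime"] * c["lifetime"])
    return max(viable, key=score)["name"]

candidates = [
    {"name": "CDMA",  "qos": 0.6, "cost": 0.8, "lifetime": 50.0},
    {"name": "WiBro", "qos": 0.8, "cost": 0.5, "lifetime": 30.0},
    {"name": "WLAN",  "qos": 0.9, "cost": 0.2, "lifetime": 4.0},
]
weights = {"qos": 1.0, "cost": 1.0, "lifetime": 0.02}
best = select_network(candidates, weights, handover_delay=5.0)
```

With these numbers, WLAN is pruned in step 1 despite its high QoS, because its predicted lifetime is shorter than the handover delay.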
The electrostatics of VDAC: implications for selectivity and gating.
Choudhary, Om P; Ujwal, Rachna; Kowallis, William; Coalson, Rob; Abramson, Jeff; Grabe, Michael
2010-02-26
The voltage-dependent anion channel (VDAC) is the major pathway mediating the transfer of metabolites and ions across the mitochondrial outer membrane. Two hallmarks of the channel in the open state are high metabolite flux and anion selectivity, while the partially closed state blocks metabolites and is cation selective. Here we report the results from electrostatics calculations carried out on the recently determined high-resolution structure of murine VDAC1 (mVDAC1). Poisson-Boltzmann calculations show that the ion transfer free energy through the channel is favorable for anions, suggesting that mVDAC1 represents the open state. This claim is buttressed by Poisson-Nernst-Planck calculations that predict a high single-channel conductance indicative of the open state and an anion selectivity of 1.75, nearly a twofold selectivity for anions over cations. These calculations were repeated on mutant channels and gave selectivity changes in accord with experimental observations. We were then able to engineer an in silico mutant channel with three point mutations that converted mVDAC1 into a channel with a preference for cations. Finally, we investigated two proposals for how the channel gates between the open and the closed state. Both models involve the movement of the N-terminal helix, but neither motion produced the observed voltage sensitivity, nor did either model result in a cation-selective channel, which is observed experimentally. Thus, we were able to rule out certain models for channel gating, but the true motion has yet to be determined. Copyright (c) 2009. Elsevier Ltd. All rights reserved.
Local SAR in parallel transmission pulse design.
Lee, Joonsung; Gebhardt, Matthias; Wald, Lawrence L; Adalsteinsson, Elfar
2012-06-01
The management of local and global power deposition in human subjects (specific absorption rate, SAR) is a fundamental constraint to the application of parallel transmission (pTx) systems. Even though the pTx and single channel have to meet the same SAR requirements, the complex behavior of the spatial distribution of local SAR for transmission arrays poses problems that are not encountered in conventional single-channel systems and places additional requirements on pTx radio frequency pulse design. We propose a pTx pulse design method which builds on recent work to capture the spatial distribution of local SAR in numerical tissue models in a compressed parameterization in order to incorporate local SAR constraints within computation times that accommodate pTx pulse design during an in vivo magnetic resonance imaging scan. Additionally, the algorithm yields a protocol-specific ultimate peak in local SAR, which is shown to bound the achievable peak local SAR for a given excitation profile fidelity. The performance of the approach was demonstrated using a numerical human head model and a 7 Tesla eight-channel transmit array. The method reduced peak local 10 g SAR by 14-66% for slice-selective pTx excitations and 2D selective pTx excitations compared to a pTx pulse design constrained only by global SAR. The primary tradeoff incurred for reducing peak local SAR was an increase in global SAR, up to 34% for the evaluated examples, which is favorable in cases where local SAR constraints dominate the pulse applications. Copyright © 2011 Wiley Periodicals, Inc.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of the non-binary low-density parity-check (LDPC) hard decision decoding algorithm and to reduce decoding complexity, a sum-of-magnitude hard decision decoding algorithm based on loop update detection is proposed, supporting the reliability, stability and high transmission rates required by 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable nodes is considered and a loop update detection algorithm is introduced. Bits corresponding to the erroneous code word are flipped multiple times, searched in order of decreasing error probability, until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
Daytime Land Surface Temperature Extraction from MODIS Thermal Infrared Data under Cirrus Clouds
Fan, Xiwei; Tang, Bo-Hui; Wu, Hua; Yan, Guangjian; Li, Zhao-Liang
2015-01-01
Simulated data showed that cirrus clouds could lead to a maximum land surface temperature (LST) retrieval error of 11.0 K when using the generalized split-window (GSW) algorithm with a cirrus optical depth (COD) at 0.55 μm of 0.4 and in nadir view. A correction term linear in COD was added to the GSW algorithm to extend it to cirrus cloudy conditions. The COD was acquired from a look-up table of the isolated cirrus bidirectional reflectance at 0.55 μm. Additionally, the slope k of the linear function was expressed as a multiple linear model of the top-of-atmosphere brightness temperatures of MODIS channels 31–34 and of the difference between the split-window channel emissivities. The simulated data showed that the LST error could be reduced from 11.0 to 2.2 K. The sensitivity analysis indicated that the total error from all the uncertainties of the input parameters, the extension algorithm accuracy, and the GSW algorithm accuracy was less than 2.5 K in nadir view. Finally, Great Lakes surface water temperatures measured by buoys showed that the retrieval accuracy of the GSW algorithm was improved by at least 1.5 K under cirrus skies using the proposed extension algorithm. PMID:25928059
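The shape of the extension can be written schematically as a GSW estimate plus a COD-linear correction; all coefficients below are placeholders rather than the published regression values:

```python
# Schematic generalized split-window (GSW) LST estimate plus a cirrus
# correction term linear in cirrus optical depth (COD). Coefficients
# a0..a4 and the slope k would come from regression in the real method.
def lst_gsw(t31, t32, emis_mean, emis_diff, coeffs):
    """GSW-style combination of the two split-window brightness temperatures."""
    a0, a1, a2, a3, a4 = coeffs
    return (a0
            + a1 * (t31 + t32) / 2.0
            + a2 * (t31 - t32) / 2.0
            + a3 * (1.0 - emis_mean) / emis_mean
            + a4 * emis_diff / emis_mean ** 2)

def lst_cirrus_corrected(t31, t32, emis_mean, emis_diff, coeffs, k, cod):
    """Clear-sky GSW estimate plus a correction proportional to COD."""
    return lst_gsw(t31, t32, emis_mean, emis_diff, coeffs) + k * cod

# illustrative call with placeholder coefficients
demo = lst_cirrus_corrected(290.0, 288.0, 0.97, 0.01,
                            (1.0, 1.0, 0.5, 2.0, -1.0), k=5.5, cod=0.4)
```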
Research on Intelligent Control System of DC SQUID Magnetometer Parameters for Multi-channel System
NASA Astrophysics Data System (ADS)
Chen, Hua; Yang, Kang; Lu, Li; Kong, Xiangyan; Wang, Hai; Wu, Jun; Wang, Yongliang
2018-07-01
In a multi-channel SQUID measurement system, adjusting device parameters to the optimal condition for all channels is time-consuming. In this paper, an intelligent control system is presented that determines the optimal working point of the devices automatically and more efficiently than manual adjustment. An optimal working point searching algorithm is introduced as the core component of the control system. In this algorithm, the bias voltage V_bias is step-scanned to find the maximum of the peak-to-peak current I_pp of the SQUID magnetometer modulation curve, and this point is chosen as the optimal one. Using this control system, more than 30 weakly damped SQUID magnetometers with areas of 5 × 5 mm^2 or 10 × 10 mm^2 were adjusted, and a 36-channel magnetocardiography system worked well in a magnetically shielded room. The average white flux noise is 15 μΦ_0/Hz^{1/2}.
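The working-point search reduces to a one-dimensional scan; in this sketch the SQUID response is replaced by a made-up smooth function peaking near V_bias = 0.35, so the numbers are purely illustrative:

```python
# Step-scan the bias voltage, evaluate the peak-to-peak modulation
# current at each step, and keep the bias giving the largest I_pp.
import math

def peak_to_peak_current(v_bias):
    # hypothetical stand-in for the measured SQUID response,
    # maximized near v_bias = 0.35 (arbitrary units)
    return math.exp(-((v_bias - 0.35) ** 2) / 0.01)

def find_optimal_bias(v_start, v_stop, step):
    best_v, best_ipp = v_start, -1.0
    v = v_start
    while v <= v_stop + 1e-12:
        ipp = peak_to_peak_current(v)
        if ipp > best_ipp:
            best_v, best_ipp = v, ipp
        v += step
    return best_v

v_opt = find_optimal_bias(0.0, 1.0, 0.05)
```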
Status of the Suomi-NPP VIIRS Moisture Products
NASA Astrophysics Data System (ADS)
Borbas, E. E.; Li, Z.; Menzel, W. P.; Rada, M.
2017-12-01
The goal of the Suomi NPP VIIRS Moisture Project is to provide total precipitable water (TPW) properties from merged VIIRS infrared measurements and CrIS plus ATMS water vapor soundings to continue the depiction of global moisture at high spatial resolution started with MODIS. While MODIS has two water vapor channels within the 6.5 μm H2O absorption band and four channels within the 15 μm CO2 absorption band, VIIRS has no channels in either IR absorption band. The VIIRS/CrIS+ATMS TPW algorithm being developed at CIMSS is similar to the MOD07 synthetic regression algorithm. It uses the three VIIRS longwave IR window bands in a regression relation and adds the NUCAPS (CrIS+ATMS) water vapor product to compensate for the absence of VIIRS water vapor channels. This poster presents the methodology and evaluation of the S-NPP TPW Level 2 and 3 products against TPW data from ground-based and satellite-based measurements.
A Streaming PCA VLSI Chip for Neural Data Compression.
Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi
2017-12-01
Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis (PCA) algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 at the expense of as low as 1% reconstruction error and 144-nW/channel power consumption; for spikes, the achieved compression ratio is 25 with 8% reconstruction error and 3.05-μW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high channel-count recorder.
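The streaming-PCA idea can be illustrated with Oja's rule, which learns a principal direction from samples one at a time; this is a floating-point sketch of the concept, not the chip's fixed-point architecture:

```python
# Oja's rule: on each incoming multichannel sample, the projection y
# onto the current weight vector is the compressed value, and the
# weights drift toward the first principal component.
import random

def oja_update(w, x, lr=0.01):
    y = sum(wi * xi for wi, xi in zip(w, x))  # projection (compressed sample)
    # w <- w + lr * y * (x - y * w), then renormalize
    w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
    norm = sum(wi * wi for wi in w) ** 0.5
    return [wi / norm for wi in w], y

random.seed(0)
w = [1.0, 0.0, 0.0, 0.0]
for _ in range(2000):
    s = random.gauss(0.0, 1.0)           # shared latent signal
    x = [s, 0.9 * s, 0.8 * s, 0.7 * s]   # four correlated "channels"
    w, y = oja_update(w, x)
```

After training, `w` approximates the dominant direction, so a single projection per sample stands in for all four channels.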
Monte Carlo simulation of a noisy quantum channel with memory.
Akhalwaya, Ismail; Moodley, Mervlyn; Petruccione, Francesco
2015-10-01
The classical capacity of quantum channels is well understood for channels with uncorrelated noise. For the case of correlated noise, however, there are still open questions. We calculate the classical capacity of a forgetful channel constructed by Markov switching between two depolarizing channels. Techniques have previously been applied to approximate the output entropy of this channel and thus its capacity. In this paper, we use a Metropolis-Hastings Monte Carlo approach to numerically calculate the entropy. The algorithm is implemented in parallel and its performance is studied and optimized. The effects of memory on the capacity are explored and previous results are confirmed to higher precision.
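A generic Metropolis-Hastings loop of the kind used for such estimates is sketched below, applied to a toy 1-D Gaussian target rather than the channel-output distribution itself:

```python
# Random-walk Metropolis-Hastings: propose a local move, accept it with
# probability min(1, p(prop)/p(x)), and collect the resulting chain.
import math
import random

def metropolis(log_p, x0, steps, step_size, rng):
    x, samples = x0, []
    for _ in range(steps):
        prop = x + rng.uniform(-step_size, step_size)
        if math.log(rng.random()) < log_p(prop) - log_p(x):
            x = prop  # accept the proposal
        samples.append(x)
    return samples

rng = random.Random(42)
# toy target: standard normal, log p(x) = -x^2/2 up to a constant
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000, 1.0, rng)
mean = sum(samples) / len(samples)
```

In the paper's setting the chain would explore channel output strings, with entropy estimated from averages over such samples; the acceptance mechanics are the same.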
A support vector machine approach for classification of welding defects from ultrasonic signals
NASA Astrophysics Data System (ADS)
Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming
2014-07-01
Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energy of the wavelet coefficients at different frequency channels is used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
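The feature-construction step can be illustrated with a one-level Haar split standing in for the full wavelet packet tree; the energies of the two sub-bands form a (much shortened) feature vector:

```python
# One-level Haar decomposition of an echo signal into low/high
# "frequency channels"; the energy in each channel is one feature.
def haar_split(signal):
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 ** 0.5
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 ** 0.5
              for i in range(len(signal) // 2)]
    return approx, detail

def energy(xs):
    return sum(x * x for x in xs)

# toy echo: smooth at the start, oscillatory at the end
sig = [1.0, 1.0, 1.0, 1.0, 1.0, -1.0, 1.0, -1.0]
a, d = haar_split(sig)
features = [energy(a), energy(d)]
```

The orthonormal Haar split conserves total energy, so the feature vector partitions the signal's energy across sub-bands; the real method repeats the split recursively before feeding the energies to the SVM layer.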
Two-Step Fair Scheduling of Continuous Media Streams over Error-Prone Wireless Channels
NASA Astrophysics Data System (ADS)
Oh, Soohyun; Lee, Jin Wook; Park, Taejoon; Jo, Tae-Chang
In wireless cellular networks, streaming of continuous media (with strict QoS requirements) over wireless links is challenging due to their inherent unreliability, characterized by location-dependent, bursty errors. To address this challenge, we present a two-step scheduling algorithm for a base station to provide streaming of continuous media to wireless clients over error-prone wireless links. The proposed algorithm is capable of minimizing the packet loss rate of individual clients in the presence of error bursts, by transmitting packets in a round-robin manner and also adopting a mechanism for channel prediction and swapping.
NASA Astrophysics Data System (ADS)
Valderrama, Joaquin T.; de la Torre, Angel; Van Dun, Bram
2018-02-01
Objective. Artifact reduction in electroencephalogram (EEG) signals is usually necessary to carry out data analysis appropriately. Despite the large number of denoising techniques available with a multichannel setup, there is a lack of efficient algorithms that remove (not only detect) blink-artifacts from a single-channel EEG, which is of interest in many clinical and research applications. This paper describes and evaluates the iterative template matching and suppression (ITMS), a new method proposed for detecting and suppressing the artifact associated with blink activity in a single-channel EEG. Approach. The approach of ITMS consists of (a) an iterative process in which blink-events are detected and the blink-artifact waveform of the analyzed subject is estimated, (b) generation of a signal modeling the blink-artifact, and (c) suppression of this signal from the raw EEG. The performance of ITMS is compared with the multi-window summation of derivatives within a window (MSDW) technique using both synthesized and real EEG data. Main results. Results suggest that ITMS presents adequate performance in detecting and suppressing blink-artifacts from a single-channel EEG. When applied to the analysis of cortical auditory evoked potentials (CAEPs), ITMS provides a significant quality improvement in the resulting responses, i.e. in a cohort of 30 adults, the mean correlation coefficient improved from 0.37 to 0.65 when the blink-artifacts were detected and suppressed by ITMS. Significance. ITMS is an efficient solution to the problem of denoising blink-artifacts in single-channel EEG applications, both in clinical and research fields. The proposed ITMS algorithm is stable; automatic, since it does not require human intervention; minimally invasive, because the EEG segments not contaminated by blink-artifacts remain unaltered; and easy to implement, as can be observed in the Matlab script implementing the algorithm provided as supporting material.
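The suppression step (c) can be sketched as correlation-based event detection followed by scaled template subtraction; the detection rule, threshold, and template below are illustrative, not the ITMS estimator:

```python
# Slide a blink template along the single-channel EEG; where the
# least-squares scale of the template exceeds a threshold, subtract
# the scaled template, leaving uncontaminated segments untouched.
def subtract_template(eeg, template, threshold):
    out = list(eeg)
    m = len(template)
    tnorm = sum(t * t for t in template)
    i = 0
    while i <= len(eeg) - m:
        seg = eeg[i:i + m]
        scale = sum(s * t for s, t in zip(seg, template)) / tnorm
        if scale > threshold:          # blink-like event detected here
            for j in range(m):
                out[i + j] -= scale * template[j]
            i += m                     # skip past the suppressed event
        else:
            i += 1
    return out

# synthetic demo: a flat trace with one "blink" at sample 10
template = [0.0, 1.0, 2.0, 1.0, 0.0]
eeg = [0.0] * 30
for j, t in enumerate(template):
    eeg[10 + j] = 2.0 * t
clean = subtract_template(eeg, template, threshold=1.5)
```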
Zhao, Hongbo; Chen, Yuying; Feng, Wenquan; Zhuang, Chen
2018-05-25
Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference and the short time slot of each satellite, which brings difficulties to the acquisition stage. The inter-satellite link in both Global Positioning System (GPS) and BeiDou Navigation Satellite System (BDS) adopt the long code spread spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as extended replica folding acquisition search technique (XFAST) and direct average are largely restricted because of code Doppler and additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and dual-channel method have been proposed to achieve long code acquisition in low SNR and high dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased through finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method can improve acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than folding methods XFAST and DF-XFAST. 
Moreover, with the advantage of higher detection probability and lower false alarm probability, it has a lower mean acquisition time than traditional XFAST, DF-XFAST and zero-padding.
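The folding-plus-circular-FFT step at the heart of XFAST-style acquisition can be illustrated with a minimal, noiseless sketch (the code length, block length, and phase below are made-up toy values, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy long spreading code and an incoming block at an unknown code phase.
code = rng.choice([-1.0, 1.0], size=4096)
block_len = 512
true_phase = 1234
incoming = np.roll(code, -true_phase)[:block_len]

# XFAST-style folding: stack the local code into block_len-long segments and
# sum them, so one circular FFT correlation searches every segment at once.
n_folds = len(code) // block_len
folded = code[: n_folds * block_len].reshape(n_folds, block_len).sum(axis=0)

# Circular cross-correlation via FFT.
corr = np.fft.ifft(np.fft.fft(incoming).conj() * np.fft.fft(folded)).real
est_offset = int(np.argmax(np.abs(corr)))

# Folding only recovers the phase modulo block_len; resolving which fold the
# peak came from (and rejecting false peaks) is what DC-XFAST's two-channel,
# two-peak verification addresses.
assert est_offset == true_phase % block_len
```

The saving is that one length-512 FFT correlation replaces eight separate block correlations; the price, as the abstract notes, is fold ambiguity and SNR loss from the summed segments.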
Wittevrongel, Benjamin; Van Hulle, Marc M
2017-01-01
Brain-Computer Interfaces (BCIs) decode brain activity with the aim of establishing a direct communication channel with an external device. Although they have been hailed as a way to (re-)establish communication in persons suffering from severe motor and/or communication disabilities, only recently have BCI applications begun to challenge other assistive technologies. Owing to their considerably increased performance and the advent of affordable technological solutions, BCI technology is expected to trigger a paradigm shift not only in assistive technology but also in the way we interface with technology. However, the flipside of the quest for accuracy and speed is most evident in EEG-based visual BCI, where it has led to a gamut of increasingly complex classifiers, tailored to the needs of specific stimulation paradigms and use contexts. In this contribution, we argue that spatiotemporal beamforming can serve several synchronous visual BCI paradigms. We demonstrate this for three popular visual paradigms, even without attempting to optimize their electrode sets. For each selectable target, a spatiotemporal beamformer is applied to assess whether the corresponding signal-of-interest is present in the preprocessed multichannel EEG signals. The target with the highest beamformer output is then selected by the decoder (maximum selection). In addition to this simple selection rule, we also investigated whether interactions between beamformer outputs could be employed to increase accuracy by combining the outputs for all targets into a feature vector and applying three common classification algorithms. The results show that the accuracy of spatiotemporal beamforming with maximum selection is on par with that of the classification algorithms, and that interactions between beamformer outputs do not further improve that accuracy.
A Distributed Transmission Rate Adjustment Algorithm in Heterogeneous CSMA/CA Networks
Xie, Shuanglong; Low, Kay Soon; Gunawan, Erry
2015-01-01
Distributed transmission rate tuning is important for a wide variety of IEEE 802.15.4 network applications such as industrial network control systems. Such systems often require each node to sustain a certain throughput demand in order to guarantee system performance. It is thus essential to determine a proper transmission rate that can meet the application requirement and compensate for network imperfections (e.g., packet loss). Such tuning in a heterogeneous network is difficult due to the lack of modeling techniques that can deal with the heterogeneity of the network as well as with network traffic changes. In this paper, a distributed transmission rate tuning algorithm for a heterogeneous IEEE 802.15.4 CSMA/CA network is proposed. Each node uses the results of clear channel assessment (CCA) to estimate the busy channel probability. Then a mathematical framework is developed to estimate the ongoing heterogeneous traffic using the busy channel probability at runtime. Finally, a distributed algorithm is derived to tune the transmission rate of each node to accurately meet the throughput requirement. The algorithm does not require modifications to the IEEE 802.15.4 MAC layer, and it has been experimentally implemented and extensively tested using TelosB nodes with the TinyOS protocol stack. The results reveal that the algorithm is accurate and can satisfy the throughput demand. Compared with existing techniques, the algorithm is fully distributed and thus does not require any central coordination. With this property, it is able to adapt to traffic changes and re-adjust the transmission rate to the desired level, which cannot be achieved using traditional modeling techniques. PMID:25822140
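The idea of estimating channel load from CCA outcomes and compensating the attempt rate can be sketched very simply. The compensation model below (successful delivery rate = attempt rate times the probability of a clear channel) is a simplification assumed for illustration, not the paper's mathematical framework:

```python
import random

def estimate_busy_probability(cca_samples):
    """Fraction of clear channel assessments that found the channel busy
    (1 = busy, 0 = clear)."""
    return sum(cca_samples) / len(cca_samples)

def tune_rate(target_throughput, p_busy):
    """Toy compensation: inflate the attempt rate so that the expected
    delivered rate, attempts * P(clear), still meets the target."""
    return target_throughput / max(1.0 - p_busy, 1e-6)

random.seed(1)
p_true = 0.3                                  # hypothetical true channel load
samples = [1 if random.random() < p_true else 0 for _ in range(10_000)]
p_hat = estimate_busy_probability(samples)
rate = tune_rate(10.0, p_hat)                 # attempts/s to deliver 10 pkt/s
assert abs(p_hat - p_true) < 0.03
assert rate > 10.0
```

In the paper this estimation runs per node at runtime, which is what keeps the scheme fully distributed with no central coordinator.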
Optimization of polymer electrolyte membrane fuel cell flow channels using a genetic algorithm
NASA Astrophysics Data System (ADS)
Catlin, Glenn; Advani, Suresh G.; Prasad, Ajay K.
The design of the flow channels in PEM fuel cells directly impacts the transport of reactant gases to the electrodes and affects cell performance. This paper presents results from a study to optimize the geometry of the flow channels in a PEM fuel cell. The optimization process implements a genetic algorithm to rapidly converge on the channel geometry that provides the highest net power output from the cell. In addition, this work implements a method for the automatic generation of parameterized channel domains that are evaluated for performance using a commercial computational fluid dynamics package from ANSYS. The software package includes GAMBIT as the solid modeling and meshing software, the solver FLUENT, and a PEMFC Add-on Module capable of modeling the relevant physical and electrochemical mechanisms that describe PEM fuel cell operation. The result of the optimization process is a set of optimal channel geometry values for the single-serpentine channel configuration. The performance of the optimal geometry is contrasted with a sub-optimal one by comparing contour plots of current density, oxygen and hydrogen concentration. In addition, the role of convective bypass in bringing fresh reactant to the catalyst layer is examined in detail. The convergence to the optimal geometry is confirmed by a bracketing study which compares the performance of the best individual to those of its neighbors with adjacent parameter values.
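The optimization loop itself can be sketched independently of the CFD evaluation. In the toy version below, a smooth analytic surrogate stands in for the FLUENT evaluation, and the "optimum" geometry (0.8, 1.0) is an arbitrary assumption for illustration:

```python
import random

def net_power(width, depth):
    # Toy surrogate for the CFD evaluation of net cell power output;
    # peaked at a hypothetical optimum channel width/depth.
    return -((width - 0.8) ** 2 + (depth - 1.0) ** 2)

def genetic_optimize(fitness, bounds, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [tuple(rng.uniform(lo, hi) for lo, hi in bounds) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        survivors = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)         # crossover: blend parents
            child = tuple(
                min(max(0.5 * (pa + pb) + rng.gauss(0, 0.05), lo), hi)
                for pa, pb, (lo, hi) in zip(a, b, bounds)
            )
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda ind: fitness(*ind))

best = genetic_optimize(net_power, bounds=[(0.2, 2.0), (0.2, 2.0)])
assert abs(best[0] - 0.8) < 0.1 and abs(best[1] - 1.0) < 0.1
```

In the study each fitness evaluation is a full meshed CFD run, which is why rapid convergence of the genetic algorithm matters.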
Mechanism of activation at the selectivity filter of the KcsA K+ channel
Heer, Florian T; Posson, David J; Wojtas-Niziurski, Wojciech
2017-01-01
Potassium channels are opened by ligands and/or membrane potential. In voltage-gated K+ channels and the prokaryotic KcsA channel, conduction is believed to result from opening of an intracellular constriction that prevents ion entry into the pore. On the other hand, numerous ligand-gated K+ channels lack such a gate, suggesting that they may be activated by a change within the selectivity filter, a narrow region at the extracellular side of the pore. Using molecular dynamics simulations and electrophysiology measurements, we show that ligand-induced conformational changes in the KcsA channel remove steric restraints at the selectivity filter, thus resulting in structural fluctuations, reduced K+ affinity, and increased ion permeation. Such activation of the selectivity filter may be a universal gating mechanism within K+ channels. The occlusion of the pore at the level of the intracellular gate appears to be secondary. PMID:28994652
Alavizargar, Azadeh; Berti, Claudio; Ejtehadi, Mohammad Reza; Furini, Simone
2018-04-26
Calcium release-activated calcium (CRAC) channels open upon depletion of Ca2+ from the endoplasmic reticulum, and when open, they are permeable to a selective flux of calcium ions. The atomic structure of Orai, the pore domain of CRAC channels, from Drosophila melanogaster has revealed many details about conduction and selectivity in this family of ion channels. However, it is still unclear how residues on the third transmembrane helix can affect the conduction properties of the channel. Here, molecular dynamics and Brownian dynamics simulations were employed to analyze how a conserved glutamate residue on the third transmembrane helix (E262) contributes to selectivity. The comparison between the wild-type and mutated channels revealed a severe impact of the mutation on the hydration pattern of the pore domain and on the dynamics of residues K270, and Brownian dynamics simulations proved that the altered configuration of residues K270 in the mutated channel impairs selectivity for Ca2+ over Na+. The crevices of water molecules, revealed by molecular dynamics simulations, are perfectly located to contribute to the dynamics of the hydrophobic gate and the basic gate, suggesting a possible role in channel opening and in selectivity function.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2010-01-01
AIRS was launched on EOS Aqua on May 4, 2002 together with AMSU-A and HSB to form a next generation polar orbiting infrared and microwave atmosphere sounding system (Pagano et al 2003). The theoretical approaches used to analyze AIRS/AMSU/HSB data in the presence of clouds in the AIRS Science Team Version 3 at-launch algorithm, and in the Version 4 post-launch algorithm, have been published previously. Significant theoretical and practical improvements have been made in the analysis of AIRS/AMSU data since the Version 4 algorithm. Most of these have already been incorporated in the AIRS Science Team Version 5 algorithm (Susskind et al 2010), now being used operationally at the Goddard DISC. The AIRS Version 5 retrieval algorithm contains three significant improvements over Version 4. First, improved physics in Version 5 allowed for use of AIRS clear column radiances (R(sub i)) in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations were used primarily in the generation of clear column radiances (R(sub i)) for all channels. This new approach allowed for the generation of accurate Quality Controlled values of R(sub i) and T(p) under more stressing cloud conditions. Secondly, Version 5 contained a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 contained for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS Only sounding methodology was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail.
Susskind et al (2010) show that Version 5 AIRS Only soundings are only slightly degraded relative to the AIRS/AMSU soundings, even at large fractional cloud cover.
Ripple distribution for nonlinear fiber-optic channels.
Sorokina, Mariia; Sygletos, Stylianos; Turitsyn, Sergei
2017-02-06
We demonstrate data rates above the threshold imposed by nonlinearity on conventional optical signals by applying a novel probability distribution, which we call the ripple distribution, adapted to the properties of the fiber channel. Our results offer a new direction for signal coding, modulation, and practical algorithms for compensating nonlinear distortions.
Algorithms development for the GEM-based detection system
NASA Astrophysics Data System (ADS)
Czarski, T.; Chernyshova, M.; Malinowski, K.; Pozniak, K. T.; Kasprowicz, G.; Kolasinski, P.; Krawczyk, R.; Wojenski, A.; Zabolotny, W.
2016-09-01
The measurement system based on a GEM (Gas Electron Multiplier) detector is being developed for soft X-ray diagnostics of tokamak plasmas. The multi-channel setup is designed for estimating the energy and position distribution of an X-ray source. The central measurement issue is the identification of charge clusters and the estimation of their values and positions. A fast and accurate serial data-acquisition mode is applied for dynamic plasma diagnostics. The charge clusters are counted in a space determined by 2D position, charge value, and time intervals. Radiation source characteristics are presented as histograms for a selected range of positions, time intervals, and cluster charge values corresponding to the energy spectra.
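The cluster-counting step can be pictured as a 2D binning of identified clusters by position and charge. The data layout below (one row per cluster, made-up bin edges) is an assumption for illustration, not the instrument's actual format:

```python
import numpy as np

def cluster_histogram(clusters, x_bins, charge_bins):
    """Count charge clusters in (position, charge) space, as used to build
    energy/position spectra. clusters: rows of (x_position, charge, time)."""
    x, q = clusters[:, 0], clusters[:, 1]
    hist, _, _ = np.histogram2d(x, q, bins=[x_bins, charge_bins])
    return hist

# Three toy clusters: two nearby with similar charge, one elsewhere.
clusters = np.array([[0.11, 50.0, 0.0],
                     [0.13, 52.0, 1.0],
                     [0.85, 200.0, 2.0]])
hist = cluster_histogram(clusters,
                         x_bins=np.linspace(0, 1, 11),
                         charge_bins=np.linspace(0, 256, 9))
assert hist.sum() == 3        # every cluster is counted
assert hist[1, 1] == 2        # the two similar clusters share a bin
```

A time-interval axis would be added the same way for the dynamic (time-resolved) spectra described above.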
Algorithm and assessment work of active fire detection based on FengYun-3C/VIRR
NASA Astrophysics Data System (ADS)
Lin, Z.; Chen, F.
2017-12-01
The wildfire is one of the most destructive and uncontrollable disasters, causing huge environmental, ecological, and social damage. To better serve scientific research and practical fire management, an algorithm for active fire detection, with corresponding validation work, is introduced based on data from FengYun-3C/VIRR, an optical sensor onboard the Chinese polar-orbiting sun-synchronous meteorological satellite. While the main structure inherits the `contextual algorithm', some new concepts, including an `infrared channel slope', are introduced for better adaptation to different situations. The validation work contains three parts: 1) comparison with the current FengYun-3C fire product GFR; 2) comparison with MODIS fire products; 3) comparison with Landsat series data. Study areas are selected from different places all over the world from 2014 to 2016. The results show great improvement over the GFR product in both positioning accuracy and detection rate. In most study areas, the results match well with MODIS products and Landsat series data (with over 85% agreement) despite the differences in imaging time. However, detection rates and match degrees in Africa and South-east Asia are not satisfactory (around 70%), where numerous small fire events and the corresponding smoke may strongly affect the results of the algorithm. This is a direction for future research and one of the main improvements still to be achieved.
NASA Astrophysics Data System (ADS)
Kurnosov, R. Yu; Chernyshova, T. I.; Chernyshov, V. N.
2018-05-01
Algorithms are developed for improving the metrological reliability of the analogue blocks of measuring channels and information-measuring systems. The proposed algorithms ensure optimum values of the metrological reliability indices for a given analogue circuit-block design.
Estimation of Boreal Forest Biomass Using Spaceborne SAR Systems
NASA Technical Reports Server (NTRS)
Saatchi, Sassan; Moghaddam, Mahta
1995-01-01
In this paper, we report on the use of a semiempirical algorithm derived from a two-layer radar backscatter model for forest canopies. The model stratifies the forest canopy into crown and stem layers and separates the structural and biometric attributes of the canopy. The structural parameters are estimated by training the model with polarimetric SAR (synthetic aperture radar) data acquired over homogeneous stands with known above-ground biomass. Given the structural parameters, the semi-empirical algorithm has four remaining parameters (crown biomass, stem biomass, surface soil moisture, and surface rms height) that can be estimated from at least four independent SAR measurements. The algorithm has been used to generate biomass maps over entire images acquired by the JPL AIRSAR and SIR-C SAR systems. The semi-empirical algorithms are then modified for use with single-frequency radar systems such as ERS-1, JERS-1, and Radarsat. The accuracy of biomass estimation from single-channel radars is compared with the case in which the channels are used together in synergy or in a polarimetric system.
Blind equalization with criterion with memory nonlinearity
NASA Astrophysics Data System (ADS)
Chen, Yuanjie; Nikias, Chrysostomos L.; Proakis, John G.
1992-06-01
Blind equalization methods usually combat the linear distortion caused by a nonideal channel via a transversal filter, without resorting to a priori known training sequences. We introduce a new criterion with memory nonlinearity (CRIMNO) for the blind equalization problem. The basic idea of this criterion is to augment the Godard [or constant modulus algorithm (CMA)] cost function with additional terms that penalize the autocorrelations of the equalizer outputs. Several variations of the CRIMNO algorithm are derived, depending on (1) whether empirical averages or single-point estimates are used to approximate the expectations, (2) whether the recent or the delayed equalizer coefficients are used, and (3) whether the weights applied to the autocorrelation terms are fixed or allowed to adapt. Simulation experiments show that the CRIMNO algorithm, and especially its adaptive-weight version, exhibits faster convergence than the Godard (or CMA) algorithm. Extensions of the CRIMNO criterion to accommodate correlated inputs to the channel are also presented.
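The structure of the cost function can be sketched as follows. This is a minimal illustration of the criterion's form (dispersion constant, lag weights, and test signals are arbitrary choices, not the paper's settings): the Godard/CMA dispersion term plus weighted penalties on the empirical autocorrelation of the equalizer output at lags 1..M.

```python
import numpy as np

def crimno_cost(y, R2, weights):
    """CRIMNO-style cost: Godard/CMA dispersion term augmented with
    penalties on output autocorrelations. y: block of equalizer outputs,
    R2: dispersion constant, weights: penalty weight per lag."""
    godard = np.mean((np.abs(y) ** 2 - R2) ** 2)
    penalty = 0.0
    for m, wm in enumerate(weights, start=1):
        r = np.mean(y[m:] * np.conj(y[:-m]))   # empirical autocorrelation, lag m
        penalty += wm * np.abs(r) ** 2
    return godard + penalty

# A whitened constant-modulus sequence scores lower than a correlated one,
# which is exactly what drives the equalizer toward channel inversion.
rng = np.random.default_rng(0)
white = np.exp(1j * 2 * np.pi * rng.random(4000))   # i.i.d. unit-modulus symbols
corr = np.convolve(white, [1.0, 0.8], mode="same")  # ISI-corrupted outputs
assert crimno_cost(white, 1.0, [1.0, 1.0]) < crimno_cost(corr, 1.0, [1.0, 1.0])
```

The adaptive-weight variants mentioned in the abstract adjust the per-lag weights during adaptation rather than fixing them as here.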
Adaptive Wiener filter super-resolution of color filter array images.
Karch, Barry K; Hardie, Russell C
2013-08-12
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
Real-time motion-based H.263+ frame rate control
NASA Astrophysics Data System (ADS)
Song, Hwangjun; Kim, JongWon; Kuo, C.-C. Jay
1998-12-01
Most existing H.263+ rate control algorithms, e.g. the one adopted in the test model of the near-term (TMN8), focus on macroblock-layer rate control and low latency under the assumptions of a constant frame rate and a constant bit rate (CBR) channel. These algorithms do not accommodate transmission bandwidth fluctuation efficiently, and the resulting video quality can be degraded. In this work, we propose a new H.263+ rate control scheme that supports variable bit rate (VBR) channels through adjustment of the encoding frame rate and the quantization parameter. A fast algorithm for encoding frame rate control, based on the inherent motion information within a sliding window of the underlying video, is developed to efficiently pursue a good tradeoff between spatial and temporal quality. The proposed rate control algorithm also takes the time-varying bandwidth characteristic of the Internet into account and is able to accommodate changes accordingly. Experimental results are provided to demonstrate the superior performance of the proposed scheme.
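A toy version of such a controller can make the tradeoff concrete. The rate ladder, motion threshold, and bits-per-fps figure below are illustrative assumptions, not TMN8 or the paper's actual rules:

```python
def choose_frame_rate(motion_window, bandwidth_kbps,
                      rates=(7.5, 10.0, 15.0, 30.0),
                      motion_threshold=4.0, min_kbps_per_fps=2.0):
    """Toy motion-based frame-rate control: high recent motion favors
    temporal resolution (higher fps), subject to what the current channel
    bandwidth can sustain; low motion saves bits for spatial quality."""
    avg_motion = sum(motion_window) / len(motion_window)
    affordable = [r for r in rates if r * min_kbps_per_fps <= bandwidth_kbps]
    if not affordable:
        return rates[0]
    return affordable[-1] if avg_motion >= motion_threshold else affordable[0]

assert choose_frame_rate([8, 9, 7], bandwidth_kbps=64) == 30.0   # high motion
assert choose_frame_rate([1, 2, 1], bandwidth_kbps=64) == 7.5    # low motion
assert choose_frame_rate([8, 9, 7], bandwidth_kbps=25) == 10.0   # throttled
```

The quantization parameter would then be set per frame to spend the remaining bit budget, which is the second control knob the scheme adjusts.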
Retrieval of chlorophyll from remote-sensing reflectance in the china seas.
He, M X; Liu, Z S; Du, K P; Li, L P; Chen, R; Carder, K L; Lee, Z P
2000-05-20
The East China Sea is a typical case 2 water environment, where concentrations of phytoplankton pigments, suspended matter, and chromophoric dissolved organic matter (CDOM) are all higher than those in the open oceans, because of the discharge from the Yangtze River and the Yellow River. By using a hyperspectral semianalytical model, we simulated a set of remote-sensing reflectances for a variety of chlorophyll, suspended matter, and CDOM concentrations. From this simulated data set, a new algorithm for the retrieval of chlorophyll concentration from remote-sensing reflectance is proposed. For this method, we took into account the 682-nm spectral channel in addition to the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) channels. When this algorithm was applied to a field data set, the chlorophyll concentrations retrieved by the new algorithm were consistent with field measurements to within a small error of 18%, in contrast with the 147% error between the SeaWiFS ocean chlorophyll 2 algorithm and the in situ observations.
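Empirical retrievals of this kind typically map a reflectance band ratio to chlorophyll through a polynomial in log space. The sketch below shows only that generic form, with made-up coefficients; it is not the paper's algorithm, which additionally exploits the 682-nm channel:

```python
import math

def chl_band_ratio(rrs_ratio, coeffs=(0.2, -1.8, 1.0, -0.6, -1.1)):
    """Generic band-ratio retrieval: log10(chl) as a polynomial in the
    log10 of a blue/green remote-sensing reflectance ratio. Coefficients
    here are illustrative placeholders, not a tuned algorithm."""
    x = math.log10(rrs_ratio)
    log_chl = sum(c * x ** i for i, c in enumerate(coeffs))
    return 10.0 ** log_chl

# Higher blue/green reflectance ratio (clearer water) -> lower chlorophyll.
assert chl_band_ratio(2.0) < chl_band_ratio(1.0)
```

Band-ratio forms like this break down in case 2 waters, where suspended matter and CDOM also shape the reflectance, which is the motivation for the extra red channel above.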
A fuzzy reinforcement learning approach to power control in wireless transmitters.
Vengerov, David; Bambos, Nicholas; Berenji, Hamid R
2005-08-01
We address the issue of power-controlled shared channel access in wireless networks supporting packetized data traffic. We formulate this problem using the dynamic programming framework and present a new distributed fuzzy reinforcement learning algorithm (ACFRL-2) capable of adequately solving a class of problems to which the power control problem belongs. Our experimental results show that the algorithm converges almost deterministically to a neighborhood of optimal parameter values, as opposed to a very noisy stochastic convergence of earlier algorithms. The main tradeoff facing a transmitter is to balance its current power level with future backlog in the presence of stochastically changing interference. Simulation experiments demonstrate that the ACFRL-2 algorithm achieves significant performance gains over the standard power control approach used in CDMA2000. Such a large improvement is explained by the fact that ACFRL-2 allows transmitters to learn implicit coordination policies, which back off under stressful channel conditions as opposed to engaging in escalating "power wars."
Finnerty, Justin John; Peyser, Alexander; Carloni, Paolo
2015-01-01
Cation selective channels constitute the gate for ion currents through the cell membrane. Here we present an improved statistical mechanical model based on atomistic structural information, cation hydration state and without tuned parameters that reproduces the selectivity of biological Na+ and Ca2+ ion channels. The importance of the inclusion of step-wise cation hydration in these results confirms the essential role partial dehydration plays in the bacterial Na+ channels. The model, proven reliable against experimental data, could be straightforwardly used for designing Na+ and Ca2+ selective nanopores.
A generic EEG artifact removal algorithm based on the multi-channel Wiener filter
NASA Astrophysics Data System (ADS)
Somers, Ben; Francart, Tom; Bertrand, Alexander
2018-06-01
Objective. The electroencephalogram (EEG) is an essential neuro-monitoring tool for both clinical and research purposes, but is susceptible to a wide variety of undesired artifacts. Removal of these artifacts is often done using blind source separation techniques, relying on a purely data-driven transformation, which may sometimes fail to sufficiently isolate artifacts in only one or a few components. Furthermore, some algorithms perform well for specific artifacts, but not for others. In this paper, we aim to develop a generic EEG artifact removal algorithm, which allows the user to annotate a few artifact segments in the EEG recordings to inform the algorithm. Approach. We propose an algorithm based on the multi-channel Wiener filter (MWF), in which the artifact covariance matrix is replaced by a low-rank approximation based on the generalized eigenvalue decomposition. The algorithm is validated using both hybrid and real EEG data, and is compared to other algorithms frequently used for artifact removal. Main results. The MWF-based algorithm successfully removes a wide variety of artifacts with better performance than current state-of-the-art methods. Significance. Current EEG artifact removal techniques often have limited applicability due to their specificity to one kind of artifact, their complexity, or simply because they are too ‘blind’. This paper demonstrates a fast, robust and generic algorithm for removal of EEG artifacts of various types, i.e. those that were annotated as unwanted by the user.
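A compact sketch of the approach follows. It is simplified in one labeled respect: the low-rank artifact covariance here comes from an ordinary eigendecomposition of Ryy - Rvv rather than the paper's GEVD-based construction, and the EEG, artifact topography, and annotation mask are synthetic:

```python
import numpy as np

def mwf_artifact_removal(eeg, artifact_mask, rank=1):
    """MWF-style filter: estimate the artifact from annotated segments and
    subtract it. eeg: channels x samples; artifact_mask: True where the
    user annotated an artifact; rank: kept artifact components."""
    Ryy = np.cov(eeg[:, artifact_mask])        # covariance during artifacts
    Rvv = np.cov(eeg[:, ~artifact_mask])       # covariance of clean EEG
    Rdd = Ryy - Rvv                            # artifact covariance estimate
    vals, vecs = np.linalg.eigh(Rdd)
    keep = np.argsort(vals)[::-1][:rank]       # top positive components
    Rdd_lr = (vecs[:, keep] * np.clip(vals[keep], 0, None)) @ vecs[:, keep].T
    W = np.linalg.solve(Ryy, Rdd_lr)           # Wiener solution Ryy^-1 Rdd
    return eeg - W.T @ eeg                     # subtract artifact estimate

rng = np.random.default_rng(0)
n_ch, n = 4, 5000
clean = rng.standard_normal((n_ch, n))
mask = np.zeros(n, dtype=bool)
mask[1000:2000] = True                         # user-annotated artifact segment
topo = np.array([3.0, -2.0, 1.0, 0.5])         # hypothetical artifact topography
eeg = clean.copy()
eeg[:, mask] += np.outer(topo, rng.standard_normal(mask.sum()))
out = mwf_artifact_removal(eeg, mask, rank=1)
assert out[:, mask].var() < eeg[:, mask].var() * 0.5
```

The appeal of the annotation-driven formulation is visible even in this toy: the same code removes any artifact type for which a few example segments are marked.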
Operational Planning of Channel Airlift Missions Using Forecasted Demand
2013-03-01
tailored to the specific problem (Metaheuristics, 2005). As seen in the section Cargo Loading Algorithm, heuristic methods are often iterative... that are equivalent to the forecasted cargo amount. The simulated pallets are then used in a heuristic cargo loading algorithm. The loading... algorithm places cargo onto available aircraft (based on real schedules) given the date and the destination, and outputs statistics based on the aircraft ton
Adaptive Reception for Underwater Communications
2011-06-01
Experimental results prove the effectiveness of the receiver. SUBJECT TERMS: Underwater acoustic communications, adaptive algorithms, Kalman filter... the update algorithm design and the value of the spatial diversity are addressed. In this research, an adaptive multichannel equalizer made up of a... for the time-varying nature of the channel is to use an Adaptive Decision Feedback Equalizer based on either the RLS or LMS algorithm. Although this
Lossless compression algorithm for multispectral imagers
NASA Astrophysics Data System (ADS)
Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth
2008-08-01
Multispectral imaging is becoming an increasingly important tool for monitoring the earth and its environment from spaceborne and airborne platforms. Multispectral imaging data consist of visible and IR measurements from a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Research for NOAA NESDIS has been directed at determining, for satellite atmospheric Earth-science imager sensor data, what lossless compression ratio can be obtained, as well as the appropriate types of mathematics and approaches that can come close to this data's entropy level. Conventional lossless methods do not achieve the theoretical limits for lossless compression on imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite-based Earth science multispectral imagers such as GOES-R. The algorithm is based on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. In decompression, the algorithm uses a statistically computed lookup table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach which fundamentally differs from our prior work: instead of having a single predictor for each pair of bands, we introduce a piecewise spatially varying predictor which significantly improves the compression results. Our new algorithm also optimizes the sequence of channels used for prediction. Our results are evaluated by comparison with a state-of-the-art wavelet-based image compression scheme, JPEG2000. We present results on the 14-channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager.
We also show results of the algorithm on NOAA AVHRR data and on data from SEVIRI. The algorithm is designed to be adaptable to a wide range of multispectral imagers and should facilitate distribution of data globally. This compression research is managed by Roger Heymann, PE, of OSD NOAA NESDIS Engineering, in collaboration with the NOAA NESDIS STAR Research Office through Mitch Goldberg, Tim Schmit, and Walter Wolf.
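The channel-to-channel prediction idea can be sketched with a toy lookup-table predictor. The table construction below (mean target value per reference value, on synthetic correlated bands) is a simplification assumed for illustration, not the authors' statistically computed table:

```python
import numpy as np

def build_table(ref, target, levels=256):
    """Toy predictor table: the mean target-band value observed for each
    reference-band value."""
    table = np.zeros(levels, dtype=np.int64)
    for v in range(levels):
        sel = ref == v
        if sel.any():
            table[v] = int(round(target[sel].mean()))
    return table

rng = np.random.default_rng(0)
band1 = rng.integers(0, 256, size=(64, 64))                  # already-decoded band
band2 = np.clip(band1 + rng.integers(-5, 6, size=band1.shape), 0, 255)

table = build_table(band1.ravel(), band2.ravel())
prediction = table[band1]              # predict band2 from decoded band1
residual = band2 - prediction          # only this low-entropy residual is coded
decoded = prediction + residual        # decoder reconstructs exactly
assert np.array_equal(decoded, band2)  # lossless round trip
assert residual.std() < band2.std()    # residual is far more compressible
```

The paper's piecewise spatially varying predictor refines this by letting the mapping change across the image rather than using one table per band pair.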
Sodium channel selectivity and conduction: Prokaryotes have devised their own molecular strategy
Finol-Urdaneta, Rocio K.; Wang, Yibo; Al-Sabi, Ahmed; Zhao, Chunfeng
2014-01-01
Striking structural differences between voltage-gated sodium (Nav) channels from prokaryotes (homotetramers) and eukaryotes (asymmetric, four-domain proteins) suggest the likelihood of different molecular mechanisms for common functions. For these two channel families, our data show similar selectivity sequences among alkali cations (relative permeability, Pion/PNa) and asymmetric, bi-ionic reversal potentials when the Na/K gradient is reversed. We performed coordinated experimental and computational studies, respectively, on the prokaryotic Nav channels NaChBac and NavAb. NaChBac shows an “anomalous,” nonmonotonic mole-fraction dependence in the presence of certain sodium–potassium mixtures; to our knowledge, no comparable observation has been reported for eukaryotic Nav channels. NaChBac’s preferential selectivity for sodium is reduced either by partial titration of its highly charged selectivity filter, when extracellular pH is lowered from 7.4 to 5.8, or by perturbation—likely steric—associated with a nominally electro-neutral substitution in the selectivity filter (E191D). Although no single molecular feature or energetic parameter appears to dominate, our atomistic simulations, based on the published NavAb crystal structure, revealed factors that may contribute to the normally observed selectivity for Na over K. These include: (a) a thermodynamic penalty to exchange one K+ for one Na+ in the wild-type (WT) channel, increasing the relative likelihood of Na+ occupying the binding site; (b) a small tendency toward weaker ion binding to the selectivity filter in Na–K mixtures, consistent with the higher conductance observed with both sodium and potassium present; and (c) integrated 1-D potentials of mean force for sodium or potassium movement that show less separation for the less selective E/D mutant than for WT. 
Overall, tight binding of a single favored ion to the selectivity filter, together with crucial inter-ion interactions within the pore, suggests that prokaryotic Nav channels use a selective strategy more akin to those of eukaryotic calcium and potassium channels than that of eukaryotic Nav channels. PMID:24420772
Keivanian, Farshid; Mehrshad, Nasser; Bijari, Abolfazl
2016-01-01
The D flip-flop is a digital circuit that can be used as a timing element in many sophisticated circuits. Optimum performance with the lowest power consumption and an acceptable delay time is therefore a critical issue in electronic circuits. The layout of the newly proposed dual-edge-triggered static D flip-flop circuit is defined as a multi-objective optimization problem. For this, an optimum fuzzy inference system with fuzzy rules is proposed to enhance the performance and convergence of the non-dominated sorting Genetic Algorithm II (NSGA-II) by adaptive control of the exploration and exploitation parameters. Using the proposed Fuzzy NSGA-II algorithm, more nearly optimum values for the MOSFET channel widths and the power supply are discovered in the search space than with ordinary NSGA variants. Moreover, the design parameters (NMOS and PMOS channel widths and power supply voltage) and the performance parameters (average power consumption and propagation delay time) are linked; the required mathematical background is presented in this study. The optimum values for the design parameters of the MOSFET channel widths and power supply are discovered. Based on them, the power-delay product (PDP) is 6.32 pJ at a 125 MHz clock frequency, L = 0.18 µm, and T = 27 °C.
Automated Detection of Knickpoints and Knickzones Across Transient Landscapes
NASA Astrophysics Data System (ADS)
Gailleton, B.; Mudd, S. M.; Clubb, F. J.
2017-12-01
Mountainous regions are ubiquitously dissected by river channels, which transmit climate and tectonic signals to the rest of the landscape by adjusting their long profiles. Fluvial response to allogenic forcing is often expressed through the upstream propagation of steepened reaches, referred to as knickpoints or knickzones. The identification and analysis of these steepened reaches has numerous applications in geomorphology, such as modelling long-term landscape evolution, understanding controls on fluvial incision, and constraining tectonic uplift histories. Traditionally, the identification of knickpoints or knickzones from fluvial profiles requires manual selection or calibration. This process is both time-consuming and subjective, as different workers may select different steepened reaches within the profile. We propose an objective, statistically-based method to systematically pick knickpoints/knickzones on a landscape scale using an outlier-detection algorithm. Our method integrates river profiles normalised by drainage area (Chi, using the approach of Perron and Royden, 2013), then separates the chi-elevation plots into a series of transient segments using the method of Mudd et al. (2014). This method allows the systematic detection of knickpoints across a DEM, regardless of size, using a high-performance algorithm implemented in the open-source Edinburgh Land Surface Dynamics Topographic Tools (LSDTopoTools) software package. After initial knickpoint identification, outliers are selected using several sorting and binning methods based on the Median Absolute Deviation, to avoid the influence of sample size. We test our method on a series of DEMs and grid resolutions, and show that our method consistently identifies accurate knickpoint locations across each landscape tested.
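The two ingredients, the chi transform and MAD-based outlier selection, can be sketched as below (the concavity exponent, reference area, threshold k, and toy segment gradients are illustrative choices, not LSDTopoTools defaults):

```python
import numpy as np

def chi_transform(x, A, A0=1.0, theta=0.45):
    """chi(x) = integral of (A0/A)^theta along the channel (trapezoidal rule).
    x: distance upstream, A: drainage area at each point."""
    integrand = (A0 / A) ** theta
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)
    return np.concatenate(([0.0], np.cumsum(steps)))

def mad_outliers(values, k=3.0):
    """Flag values more than k scaled median absolute deviations from the
    median; k = 3 is a common, tunable robust threshold."""
    med = np.median(values)
    mad = 1.4826 * np.median(np.abs(values - med))
    return np.abs(values - med) > k * mad

# Toy chi-space segment gradients: the steepened reach is picked objectively
# as an outlier, with no manual selection of the profile.
ksn = np.array([1.0, 1.1, 0.9, 1.0, 5.0, 1.05, 0.95])
assert list(np.where(mad_outliers(ksn))[0]) == [4]
```

Because the median and MAD are robust statistics, one strongly steepened segment cannot drag the threshold upward and mask itself, which is what makes the selection reproducible across workers.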
Tixier, Eliott; Raphel, Fabien; Lombardi, Damiano; Gerbeau, Jean-Frédéric
2017-01-01
The Micro-Electrode Array (MEA) device enables high-throughput electrophysiology measurements that are less labor-intensive than patch-clamp based techniques. Combined with human-induced pluripotent stem cells cardiomyocytes (hiPSC-CM), it represents a new and promising paradigm for automated and accurate in vitro drug safety evaluation. In this article, the following question is addressed: which features of the MEA signals should be measured to better classify the effects of drugs? A framework for the classification of drugs using MEA measurements is proposed. The classification is based on the ion channels blockades induced by the drugs. It relies on an in silico electrophysiology model of the MEA, a feature selection algorithm and automatic classification tools. An in silico model of the MEA is developed and is used to generate synthetic measurements. An algorithm that extracts MEA measurements features designed to perform well in a classification context is described. These features are called composite biomarkers. A state-of-the-art machine learning program is used to carry out the classification of drugs using experimental MEA measurements. The experiments are carried out using five different drugs: mexiletine, flecainide, diltiazem, moxifloxacin, and dofetilide. We show that the composite biomarkers outperform the classical ones in different classification scenarios. We show that using both synthetic and experimental MEA measurements improves the robustness of the composite biomarkers and that the classification scores are increased.
NASA Astrophysics Data System (ADS)
Le Nir, Vincent; Moonen, Marc; Verlinden, Jan; Guenach, Mamoun
2009-02-01
Recently, the duality between Multiple Input Multiple Output (MIMO) Multiple Access Channels (MAC) and MIMO Broadcast Channels (BC) has been established under a total power constraint. The same set of rates for the MAC can be achieved in the BC by exploiting the MAC-BC duality formulas while preserving the total power constraint. In this paper, we describe the BC optimal power allocation applying this duality in a downstream x-Digital Subscriber Lines (xDSL) context under a total power constraint for all modems over all tones. Then, a new algorithm called BC-Optimal Spectrum Balancing (BC-OSB) is devised for a more realistic power allocation under per-modem total power constraints. The capacity region of the primal BC problem under per-modem total power constraints is found by the dual optimization problem for the BC under per-modem total power constraints, which can be rewritten as a dual optimization problem in the MAC by means of a precoder matrix based on the Lagrange multipliers. We show that the duality gap between the two problems is zero. The multi-user power allocation problem has been solved for interference channels and the MAC using the OSB algorithm. In this paper we solve the multi-user power allocation problem for the BC case using the OSB algorithm as well, and we derive a computationally efficient algorithm that will be referred to as BC-OSB. Simulation results are provided for two VDSL2 scenarios: the first with Differential-Mode (DM) transmission only, and the second with both DM and Phantom-Mode (PM) transmissions.
Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui
2015-10-30
Electroencephalography (EEG) based sleep staging is commonly used in clinical routine. Feature extraction and representation plays a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation, while collaborative representation (CR) is an effective data coding method usually used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which the JCR and joint sparse representation (JSR) algorithms first fuse and learn the feature representations from multi-channel EEG signals, respectively. Multi-view JCR and JSR features are then integrated, and sleep stages are recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively; with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%, respectively. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, and JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.
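The joint-CR and MK-ELM stages of the paper are not reproduced here, but the core coding step of collaborative representation has a closed form, alpha = (DᵀD + λI)⁻¹Dᵀx. A minimal pure-Python sketch for a hypothetical 2-atom dictionary (the 2x2 inverse is written out by hand; with λ = 0 it reduces to plain least squares, and λ > 0 adds the ridge term that makes the coding "collaborative"):

```python
def cr_code(D, x, lam=0.1):
    """Collaborative-representation coding for a 2-atom dictionary:
    alpha = (D^T D + lam*I)^{-1} D^T x via a closed-form 2x2 inverse.
    D is a list of sample rows, each with 2 atom entries."""
    g11 = sum(r[0] * r[0] for r in D) + lam      # Gram matrix + ridge
    g12 = sum(r[0] * r[1] for r in D)
    g22 = sum(r[1] * r[1] for r in D) + lam
    b1 = sum(r[0] * v for r, v in zip(D, x))     # D^T x
    b2 = sum(r[1] * v for r, v in zip(D, x))
    det = g11 * g22 - g12 * g12
    return ((g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det)

D = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3 samples, 2 dictionary atoms
x = [1.0, 0.0, 1.0]                        # equals the first atom exactly
print(cr_code(D, x, lam=0.0))              # (1.0, 0.0)
```

In the paper this coding is applied jointly across channels so that one code represents all views of the same epoch; the sketch above shows only the single-view arithmetic.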
A MULTIPLE GRID ALGORITHM FOR ONE-DIMENSIONAL TRANSIENT OPEN CHANNEL FLOWS. (R825200)
Numerical modeling of open channel flows with shocks using explicit finite difference schemes is constrained by the choice of time step, which is limited by the CFL stability criteria. To overcome this limitation, in this work we introduce the application of a multiple grid al...
Narayanan, Ram M; Pooler, Richard K; Martone, Anthony F; Gallagher, Kyle A; Sherbondy, Kelly D
2018-02-22
This paper describes a multichannel super-heterodyne signal analyzer, called the Spectrum Analysis Solution (SAS), which performs multi-purpose spectrum sensing to support spectrally adaptive and cognitive radar applications. The SAS operates from ultrahigh frequency (UHF) to the S-band and features a wideband channel with eight narrowband channels. The wideband channel acts as a monitoring channel that can be used to tune the instantaneous band of the narrowband channels to areas of interest in the spectrum. The data collected from the SAS has been utilized to develop spectrum sensing algorithms for the budding field of spectrum sharing (SS) radar. Bandwidth (BW), average total power, percent occupancy (PO), signal-to-interference-plus-noise ratio (SINR), and power spectral entropy (PSE) have been examined as metrics for the characterization of the spectrum. These metrics are utilized to determine a contiguous optimal sub-band (OSB) for a SS radar transmission in a given spectrum for different modalities. Three OSB algorithms are presented and evaluated: the spectrum sensing multi-objective (SS-MO), the spectrum sensing with brute force PSE (SS-BFE), and the spectrum sensing multi-objective with brute force PSE (SS-MO-BFE).
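Two of the spectrum metrics named above have compact definitions: percent occupancy counts the fraction of frequency bins whose power exceeds a detection threshold, and power spectral entropy treats the normalised PSD as a probability distribution. A hedged sketch (the threshold and PSD values are hypothetical, and the SAS pipeline itself is not reproduced):

```python
import math

def percent_occupancy(psd, threshold):
    """Percentage of frequency bins whose power exceeds the threshold."""
    return 100.0 * sum(p > threshold for p in psd) / len(psd)

def spectral_entropy(psd):
    """Power spectral entropy in bits: Shannon entropy of the PSD
    normalised to a probability distribution. Low entropy indicates
    power concentrated in a few bins (strong narrowband occupants)."""
    total = sum(psd)
    probs = [p / total for p in psd if p > 0]
    return -sum(p * math.log2(p) for p in probs)

psd = [0.1, 0.1, 8.0, 0.1, 0.1, 0.1]   # one strong interferer among 6 bins
print(percent_occupancy(psd, 1.0))      # one of six bins occupied (~16.7%)
print(spectral_entropy(psd))            # well below log2(6), the flat-PSD maximum
```

An OSB search would then score candidate contiguous sub-bands with metrics like these and pick the least-occupied one for the radar transmission.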
Lin, Yun; Wang, Chao; Wang, Jiaxing; Dou, Zheng
2016-10-12
Cognitive radio sensor networks are one kind of application in which cognitive techniques can be adopted, and they present many potential applications, challenges and future research directions. According to research surveys, dynamic spectrum access is an important and necessary technology for future cognitive sensor networks. Traditional methods of dynamic spectrum access are based on spectrum holes and have some drawbacks, such as low accessibility and high interruptibility, which negatively affect the transmission performance of the sensor networks. To address this problem, in this paper a new initialization mechanism is proposed to establish a communication link and set up a sensor network without adopting spectrum holes to convey control information. Specifically, first a transmission channel model for analyzing the maximum accessible capacity under three different policies in a fading environment is discussed. Second, a hybrid spectrum access algorithm based on a reinforcement learning model is proposed for the power allocation problem of both the transmission channel and the control channel. Finally, extensive simulations have been conducted, and the results show that the new algorithm provides a significant improvement in terms of the tradeoff between control channel reliability and transmission channel efficiency.
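The paper's reinforcement-learning formulation is not reproduced here, but the flavour of learning-based channel access can be sketched with a simple epsilon-greedy bandit over hypothetical per-channel success probabilities (all names and parameters below are illustrative, not from the paper):

```python
import random

def epsilon_greedy_access(success_prob, rounds=5000, eps=0.1, seed=1):
    """Bandit-style sketch of learning-based channel access: keep a
    running estimate of each channel's transmission success rate,
    mostly exploit the best one, occasionally explore at random."""
    random.seed(seed)
    n = len(success_prob)
    counts = [0] * n
    values = [0.0] * n                       # estimated success rates
    for _ in range(rounds):
        ch = (random.randrange(n) if random.random() < eps
              else values.index(max(values)))
        reward = 1.0 if random.random() < success_prob[ch] else 0.0
        counts[ch] += 1
        values[ch] += (reward - values[ch]) / counts[ch]  # running mean
    return values.index(max(values))

# Three hypothetical channels; the learner should settle on channel 1
print(epsilon_greedy_access([0.2, 0.8, 0.5]))
```

The actual algorithm in the paper additionally learns power levels for both the transmission and control channels; this sketch shows only the exploration/exploitation trade-off at the core of such methods.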
Pooler, Richard K.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.
2018-01-01
This paper describes a multichannel super-heterodyne signal analyzer, called the Spectrum Analysis Solution (SAS), which performs multi-purpose spectrum sensing to support spectrally adaptive and cognitive radar applications. The SAS operates from ultrahigh frequency (UHF) to the S-band and features a wideband channel with eight narrowband channels. The wideband channel acts as a monitoring channel that can be used to tune the instantaneous band of the narrowband channels to areas of interest in the spectrum. The data collected from the SAS has been utilized to develop spectrum sensing algorithms for the budding field of spectrum sharing (SS) radar. Bandwidth (BW), average total power, percent occupancy (PO), signal-to-interference-plus-noise ratio (SINR), and power spectral entropy (PSE) have been examined as metrics for the characterization of the spectrum. These metrics are utilized to determine a contiguous optimal sub-band (OSB) for a SS radar transmission in a given spectrum for different modalities. Three OSB algorithms are presented and evaluated: the spectrum sensing multi-objective (SS-MO), the spectrum sensing with brute force PSE (SS-BFE), and the spectrum sensing multi-objective with brute force PSE (SS-MO-BFE). PMID:29470448
Lin, Yun; Wang, Chao; Wang, Jiaxing; Dou, Zheng
2016-01-01
Cognitive radio sensor networks are one kind of application in which cognitive techniques can be adopted, and they present many potential applications, challenges and future research directions. According to research surveys, dynamic spectrum access is an important and necessary technology for future cognitive sensor networks. Traditional methods of dynamic spectrum access are based on spectrum holes and have some drawbacks, such as low accessibility and high interruptibility, which negatively affect the transmission performance of the sensor networks. To address this problem, in this paper a new initialization mechanism is proposed to establish a communication link and set up a sensor network without adopting spectrum holes to convey control information. Specifically, first a transmission channel model for analyzing the maximum accessible capacity under three different policies in a fading environment is discussed. Second, a hybrid spectrum access algorithm based on a reinforcement learning model is proposed for the power allocation problem of both the transmission channel and the control channel. Finally, extensive simulations have been conducted, and the results show that the new algorithm provides a significant improvement in terms of the tradeoff between control channel reliability and transmission channel efficiency. PMID:27754316
Robust Rate Maximization for Heterogeneous Wireless Networks under Channel Uncertainties
Xu, Yongjun; Hu, Yuan; Li, Guoquan
2018-01-01
Heterogeneous wireless networks are a promising technology for next-generation wireless communication networks; they have been shown to efficiently reduce the blind areas of mobile communication and improve network coverage compared with traditional wireless communication networks. In this paper, a robust power allocation problem for a two-tier heterogeneous wireless network is formulated based on orthogonal frequency-division multiplexing technology. Under imperfect channel state information (CSI), a robust sum-rate maximization problem is built that avoids severe cross-tier interference to the macrocell user while maintaining the minimum rate requirement of each femtocell user. To be practical, both the channel estimation errors from the femtocells to the macrocell and the link uncertainties of each femtocell user are considered simultaneously, expressed in terms of user outage probabilities. The optimization problem is analyzed under no CSI feedback, using a cumulative distribution function, and under partial CSI, with a Gaussian distribution of the channel estimation error. The robust optimization problem is converted into a convex optimization problem, which is solved using Lagrange dual theory and a subgradient algorithm. Simulation results demonstrate the effectiveness of the proposed algorithm and the impact of channel uncertainties on system performance. PMID:29466315
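The solution route named above (Lagrange dual theory plus a subgradient algorithm) can be sketched on a simplified, non-robust cousin of the problem: sum-rate maximisation under a single total power budget. For a fixed multiplier the inner problem has the classical water-filling solution, and the multiplier is then updated along the constraint violation. The gains and budget below are hypothetical:

```python
def subgradient_power_allocation(gains, p_total, iters=500, step=0.1):
    """Dual (sub)gradient sketch for max sum_i log(1 + p_i * h_i)
    subject to sum_i p_i <= p_total. For fixed multiplier lam the
    inner maximisation gives water-filling p_i = max(0, 1/lam - 1/h_i);
    lam then moves along the subgradient sum(p) - p_total."""
    lam = 1.0
    for _ in range(iters):
        p = [max(0.0, 1.0 / lam - 1.0 / h) for h in gains]
        lam = max(1e-9, lam + step * (sum(p) - p_total))  # subgradient step
    return p

p = subgradient_power_allocation([1.0, 0.5, 2.0], p_total=3.0)
print(round(sum(p), 2))  # 3.0: the power budget is met at convergence
```

Note how the worst channel (gain 0.5) ends up with the least power: the water level 1/lam settles where the budget constraint is tight. The paper's robust version adds outage-probability constraints, but the dual/subgradient machinery is the same.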
Quantum red-green-blue image steganography
NASA Astrophysics Data System (ADS)
Heidari, Shahrokh; Pourarian, Mohammad Rasoul; Gheibi, Reza; Naseri, Mosayeb; Houshmand, Monireh
One of the most important topics in the field of quantum information processing is quantum data hiding, including quantum steganography and quantum watermarking. This field provides an efficient tool for protecting any kind of digital data. In this paper, three quantum color-image steganography algorithms based on the Least Significant Bit (LSB) are investigated. The first algorithm employs only one of the image's channels to cover the secret data. The second procedure is based on an LSB XORing technique, and the last algorithm utilizes two channels of the color cover image for hiding the secret quantum data. The performances of the proposed schemes are analyzed using software simulations in the MATLAB environment. The analysis of PSNR, BER and histogram graphs indicates that the presented schemes exhibit acceptable performances, and theoretical analysis demonstrates that the network complexity of the approaches scales quadratically.
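The quantum circuits themselves are not reproduced here, but the classical LSB idea underlying the first scheme (hide one secret bit in the least significant bit of a single colour channel, changing each pixel value by at most 1) can be sketched as follows; the pixel values and secret bits are hypothetical:

```python
def embed_lsb(channel_bytes, bits):
    """Hide a bit string in the least significant bits of one colour channel."""
    out = list(channel_bytes)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # clear the LSB, then set it to the secret bit
    return out

def extract_lsb(channel_bytes, n):
    """Recover the first n hidden bits from the channel's LSBs."""
    return [b & 1 for b in channel_bytes[:n]]

cover = [200, 201, 202, 203]            # e.g. blue-channel pixel values
secret = [1, 0, 1, 1]
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, 4))            # [1, 0, 1, 1]
print(max(abs(a - b) for a, b in zip(cover, stego)))  # distortion is at most 1
```

The bounded per-pixel distortion is what keeps the PSNR high; the quantum versions in the paper perform the analogous bit manipulation on basis states of the quantum image representation.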
Pottosin, Igor; Dobrovinskaya, Oxana
2014-05-15
Both in vacuolar and plasma membranes, in addition to truly K(+)-selective channels there is a variety of non-selective channels, which conduct K(+) and other ions with little preference. Many non-selective channels in the plasma membrane are active at depolarized potentials, thus contributing to K(+) efflux rather than to K(+) uptake. They may play important roles in xylem loading or contribute to a K(+) leak induced by salt or oxidative stress. Here, three currents expressed in root cells are considered: a voltage-insensitive cation current, a non-selective outwardly rectifying current, and a low-selectivity conductance activated by reactive oxygen species. The latter two not only poorly discriminate between different cations (like K(+) vs. Na(+)), but also conduct anions. Such solute channels may mediate massive electroneutral transport of salts and might be involved in osmotic adjustment or the volume decrease associated with cell death. In the tonoplast, two major currents are mediated by SV (slow) and FV (fast) vacuolar channels, respectively, which are virtually impermeable to anions. SV channels conduct mono- and divalent cations indiscriminately and are activated by high cytosolic Ca(2+) and depolarized voltages. FV channels are inhibited by micromolar cytosolic Ca(2+), Mg(2+), and polyamines, and conduct a variety of monovalent cations, including K(+). Strikingly, both SV and FV channels sense the K(+) content of vacuoles, which modulates their voltage dependence, and in the case of SV also alleviates the channel's inhibition by luminal Ca(2+). Therefore, SV and FV channels may operate as K(+)-sensing valves, controlling K(+) distribution between the vacuole and the cytosol. Copyright © 2014 Elsevier GmbH. All rights reserved.
NASA Astrophysics Data System (ADS)
Szafranek, K.; Jakubiak, B.; Lech, R.; Tomczuk, M.
2012-04-01
PROZA (Operational decision-making based on atmospheric conditions) is a project co-financed by the European Union through the European Regional Development Fund. One of its tasks is to develop an operational forecast system intended to support different economic sectors, such as forestry or fruit farming, by reducing the risk of economic decisions through taking weather conditions into consideration. Within this study, a system for predicting sudden convective phenomena (storms or tornadoes) is being built. The authors' main purpose is to predict MCSs (Mesoscale Convective Systems) based on MSG (Meteosat Second Generation) real-time data. Several tests have been performed so far. Meteosat satellite images in selected spectral channels, collected over the Central European region for May and August 2010, were used to detect and track cloud systems related to MCSs. In the proposed tracking method, cloud objects are first defined using a temperature threshold, and the selected cells are then tracked using the principle of overlapping positions on consecutive images. The main benefit of using temperature thresholding to define cells is its simplicity. During the tracking process, the algorithm links each cell of the image at time t to the cell of the following image at time t+dt that corresponds to the same cloud system (the Morel-Senesi algorithm). Automated detection and elimination of some instabilities present in the tracking algorithm was developed. The poster presents an analysis of exemplary MCSs in the context of developing a near real-time prediction system.
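The linking step of the overlap-based tracking (match each cell at time t to the cell at t+dt sharing the largest pixel overlap) can be sketched as follows, with tiny hypothetical cells standing in for thresholded satellite imagery:

```python
def overlap_match(cells_t, cells_t1):
    """Morel-Senesi-style linking sketch: each cell at time t is linked
    to the cell at t+dt that shares the most pixels with it.
    Cells are given as sets of (row, col) pixel coordinates."""
    links = {}
    for name, pix in cells_t.items():
        best = max(cells_t1, key=lambda n: len(pix & cells_t1[n]), default=None)
        if best is not None and pix & cells_t1[best]:
            links[name] = best               # link only if overlap is non-empty
    return links

# Two cold-cloud cells detected by temperature thresholding at time t ...
cells_t  = {"A": {(0, 0), (0, 1), (1, 0)}, "B": {(5, 5), (5, 6)}}
# ... and their slightly shifted counterparts at t+dt
cells_t1 = {"C": {(0, 1), (1, 1), (1, 0)}, "D": {(6, 5), (6, 6), (5, 6)}}
print(overlap_match(cells_t, cells_t1))      # {'A': 'C', 'B': 'D'}
```

In practice the cells would come from connected-component labelling of the thresholded brightness-temperature field, and instabilities arise when cells split, merge, or lose overlap between frames, which is what the automated elimination step mentioned above addresses.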
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mo, C. D.
1978-01-01
An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels with various error statistics, ranging from purely random errors to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case except the 1% random-error channel, where the Viterbi decoder made one less bit decoding error.
Structural implications of weak Ca2+ block in Drosophila cyclic nucleotide–gated channels
Lam, Yee Ling; Zeng, Weizhong; Derebe, Mehabaw Getahun
2015-01-01
Calcium permeability and the concomitant calcium block of monovalent ion current (“Ca2+ block”) are properties of cyclic nucleotide–gated (CNG) channels fundamental to visual and olfactory signal transduction. Although most CNG channels bear a conserved glutamate residue crucial for Ca2+ block, the degree of block displayed by different CNG channels varies greatly. For instance, the Drosophila melanogaster CNG channel shows only weak Ca2+ block despite the presence of this glutamate. We previously constructed a series of chimeric channels in which we replaced the selectivity filter of the bacterial nonselective cation channel NaK with a set of CNG channel filter sequences and determined that the resulting NaK2CNG chimeras displayed the ion selectivity and Ca2+ block properties of the parent CNG channels. Here, we used the same strategy to determine the structural basis of the weak Ca2+ block observed in the Drosophila CNG channel. The selectivity filter of the Drosophila CNG channel is similar to that of most other CNG channels except that it has a threonine at residue 318 instead of a proline. We constructed a NaK chimera, which we called NaK2CNG-Dm, which contained the Drosophila selectivity filter sequence. The high resolution structure of NaK2CNG-Dm revealed a filter structure different from those of NaK and all other previously investigated NaK2CNG chimeric channels. Consistent with this structural difference, functional studies of the NaK2CNG-Dm chimeric channel demonstrated a loss of Ca2+ block compared with other NaK2CNG chimeras. Moreover, mutating the corresponding threonine (T318) to proline in Drosophila CNG channels increased Ca2+ block by 16 times. These results imply that a simple replacement of a threonine for a proline in Drosophila CNG channels has likely given rise to a distinct selectivity filter conformation that results in weak Ca2+ block. PMID:26283200
Structural implications of weak Ca2+ block in Drosophila cyclic nucleotide-gated channels.
Lam, Yee Ling; Zeng, Weizhong; Derebe, Mehabaw Getahun; Jiang, Youxing
2015-09-01
Calcium permeability and the concomitant calcium block of monovalent ion current ("Ca(2+) block") are properties of cyclic nucleotide-gated (CNG) channels fundamental to visual and olfactory signal transduction. Although most CNG channels bear a conserved glutamate residue crucial for Ca(2+) block, the degree of block displayed by different CNG channels varies greatly. For instance, the Drosophila melanogaster CNG channel shows only weak Ca(2+) block despite the presence of this glutamate. We previously constructed a series of chimeric channels in which we replaced the selectivity filter of the bacterial nonselective cation channel NaK with a set of CNG channel filter sequences and determined that the resulting NaK2CNG chimeras displayed the ion selectivity and Ca(2+) block properties of the parent CNG channels. Here, we used the same strategy to determine the structural basis of the weak Ca(2+) block observed in the Drosophila CNG channel. The selectivity filter of the Drosophila CNG channel is similar to that of most other CNG channels except that it has a threonine at residue 318 instead of a proline. We constructed a NaK chimera, which we called NaK2CNG-Dm, which contained the Drosophila selectivity filter sequence. The high resolution structure of NaK2CNG-Dm revealed a filter structure different from those of NaK and all other previously investigated NaK2CNG chimeric channels. Consistent with this structural difference, functional studies of the NaK2CNG-Dm chimeric channel demonstrated a loss of Ca(2+) block compared with other NaK2CNG chimeras. Moreover, mutating the corresponding threonine (T318) to proline in Drosophila CNG channels increased Ca(2+) block by 16 times. These results imply that a simple replacement of a threonine for a proline in Drosophila CNG channels has likely given rise to a distinct selectivity filter conformation that results in weak Ca(2+) block. © 2015 Lam et al.
A Feature and Algorithm Selection Method for Improving the Prediction of Protein Structural Class.
Ni, Qianwu; Chen, Lei
2017-01-01
Correct prediction of protein structural class is beneficial to investigation of protein functions, regulations and interactions. In recent years, several computational methods have been proposed in this regard. However, given the variety of available features, it remains a great challenge to select a proper classification algorithm and extract the essential features to participate in classification. In this study, a feature and algorithm selection method is presented for improving the accuracy of protein structural class prediction. Amino acid compositions and physiochemical features were adopted to represent features, and thirty-eight machine learning algorithms collected in Weka were employed. All features were first analyzed by a feature selection method, minimum redundancy maximum relevance (mRMR), producing a feature list. Then, several feature sets were constructed by adding features from the list one by one. For each feature set, the thirty-eight algorithms were executed on a dataset in which proteins were represented by the features in the set. The predicted classes yielded by these algorithms and the true class of each protein were collected to construct a dataset, which was analyzed by the mRMR method, yielding an algorithm list. Algorithms were then taken from this list one by one to build an ensemble prediction model. Finally, we selected the ensemble prediction model with the best performance as the optimal one. Experimental results indicate that the constructed model is much superior to models using a single algorithm and to models that adopt only the feature selection procedure or only the algorithm selection procedure. Both the feature selection and the algorithm selection procedures are genuinely helpful for building an ensemble prediction model with better performance. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
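The mRMR criterion used for both the feature list and the algorithm list trades relevance against redundancy, both measured by mutual information. A hedged pure-Python sketch on a toy discrete dataset (not the paper's Weka pipeline; feature names are hypothetical):

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Mutual information (bits) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mrmr(features, labels, k):
    """Greedy mRMR: at each step pick the feature maximising
    relevance I(f; labels) minus mean redundancy with features
    already chosen (the MID form of the criterion)."""
    chosen, remaining = [], list(features)
    while remaining and len(chosen) < k:
        def score(f):
            rel = mutual_info(features[f], labels)
            red = (sum(mutual_info(features[f], features[g]) for g in chosen)
                   / len(chosen)) if chosen else 0.0
            return rel - red
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

labels = [0, 1, 1, 1]                       # label = featA OR featB
features = {"featA": [0, 0, 1, 1],
            "copyA": [0, 0, 1, 1],           # redundant duplicate of featA
            "featB": [0, 1, 0, 1]}
print(mrmr(features, labels, 2))             # ['featA', 'featB']
```

The redundancy penalty is what rejects copyA in favour of featB even though both are equally relevant on their own, which is exactly the behaviour that makes mRMR-ranked lists useful for building diverse ensembles.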
Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S
2016-12-01
We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional-order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Choice: 36 band feature selection software with applications to multispectral pattern recognition
NASA Technical Reports Server (NTRS)
Jones, W. C.
1973-01-01
Feature selection software was developed at the Earth Resources Laboratory that is capable of inputting up to 36 channels and selecting channel subsets according to several criteria based on divergence. One of the criteria used is compatible with the table look-up classifier requirements. The software indicates which channel subset best separates (based on average divergence) each class from all other classes. The software employs an exhaustive search technique, and computer time is not prohibitive. A typical task to select the best 4 of 22 channels for 12 classes takes 9 minutes on a Univac 1108 computer.
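Choosing the best 4 of 22 channels means scoring all C(22,4) = 7315 subsets, which is why the exhaustive search stays tractable. A sketch of the subset enumeration, with a hypothetical stand-in score in place of the software's average divergence:

```python
from itertools import combinations

def best_subset(n_channels, k, score):
    """Exhaustively score every k-channel subset and keep the best one,
    as the Earth Resources Laboratory software does for 4 of 22 channels."""
    return max(combinations(range(n_channels), k), key=score)

# Hypothetical stand-in score: in the real software this is the average
# divergence between class-conditional distributions over the subset.
def toy_score(subset):
    weights = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7]
    return sum(weights[c] for c in subset)

print(best_subset(6, 3, toy_score))  # (1, 3, 5): the highest-weight channels
```

With a per-subset cost dominated by the divergence computation over 12 classes, 7315 evaluations in 9 minutes on 1970s hardware is consistent with the abstract's claim that the exhaustive search is affordable.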
Finnerty, Justin John
2015-01-01
Cation selective channels constitute the gate for ion currents through the cell membrane. Here we present an improved statistical mechanical model based on atomistic structural information, cation hydration state and without tuned parameters that reproduces the selectivity of biological Na+ and Ca2+ ion channels. The importance of the inclusion of step-wise cation hydration in these results confirms the essential role partial dehydration plays in the bacterial Na+ channels. The model, proven reliable against experimental data, could be straightforwardly used for designing Na+ and Ca2+ selective nanopores. PMID:26460827
McTwo: a two-step feature selection algorithm based on maximal information coefficient.
Ge, Ruiquan; Zhou, Manli; Luo, Youxi; Meng, Qinghan; Mai, Guoqin; Ma, Dongli; Wang, Guoqing; Zhou, Fengfeng
2016-03-23
High-throughput bio-OMIC technologies are producing high-dimension data from bio-samples at an ever increasing rate, whereas the training sample number in a traditional experiment remains small due to various difficulties. This "large p, small n" paradigm in the area of biomedical "big data" may be at least partly solved by feature selection algorithms, which select only features significantly associated with phenotypes. Feature selection is an NP-hard problem. Due to the exponentially increased time requirement for finding the globally optimal solution, all the existing feature selection algorithms employ heuristic rules to find locally optimal solutions, and their solutions achieve different performances on different datasets. This work describes a feature selection algorithm based on a recently published correlation measurement, Maximal Information Coefficient (MIC). The proposed algorithm, McTwo, aims to select features associated with phenotypes, independently of each other, and achieving high classification performance of the nearest neighbor algorithm. Based on the comparative study of 17 datasets, McTwo performs about as well as or better than existing algorithms, with significantly reduced numbers of selected features. The features selected by McTwo also appear to have particular biomedical relevance to the phenotypes from the literature. McTwo selects a feature subset with very good classification performance, as well as a small feature number. So McTwo may represent a complementary feature selection algorithm for the high-dimensional biomedical datasets.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-01
In order to improve the performance of the non-binary low-density parity-check (LDPC) hard decision decoding algorithm and to reduce decoding complexity, a sum-of-the-magnitude hard decision decoding algorithm based on loop update detection is proposed. This also helps ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm builds on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable nodes is considered and a loop update detection algorithm is introduced. The bits of the erroneous code word are flipped multiple times, searched in order of decreasing error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
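The non-binary algorithm itself is not reproduced here, but the underlying flip-the-least-reliable-bit idea can be sketched with a minimal binary hard-decision bit-flipping decoder on a (7,4) Hamming code (an illustrative stand-in, not the proposed scheme):

```python
def bit_flip_decode(H, word, max_iter=20):
    """Minimal hard-decision bit-flipping sketch (Gallager-style):
    repeatedly flip the bit involved in the most unsatisfied parity
    checks until all checks pass or the iteration budget runs out."""
    w = list(word)
    for _ in range(max_iter):
        unsat = [row for row in H if sum(w[i] for i in row) % 2]
        if not unsat:
            return w                                    # all checks satisfied
        counts = [sum(i in row for row in unsat) for i in range(len(w))]
        w[counts.index(max(counts))] ^= 1               # flip most-suspect bit
    return w

# (7,4) Hamming code: each row lists the bit positions of one parity check
H = [[0, 1, 2, 4], [0, 1, 3, 5], [0, 2, 3, 6]]
sent = [1, 0, 1, 1, 0, 0, 1]    # a valid codeword (all three checks even)
recv = sent[:]
recv[0] ^= 1                     # single bit error in position 0
print(bit_flip_decode(H, recv))  # recovers the sent word
```

The proposed algorithm refines this basic loop with channel soft information for the reliability metric, symbol (rather than bit) flipping over GF(q), and loop update detection to escape the oscillations that plain bit flipping can fall into.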
Zhao, Yu-Xiang; Chou, Chien-Hsing
2016-01-01
In this study, a new feature selection algorithm, the neighborhood-relationship feature selection (NRFS) algorithm, is proposed for identifying rat electroencephalogram signals and recognizing Chinese characters. In these two applications, dependent relationships exist between the feature vectors and their neighboring feature vectors, and the proposed NRFS algorithm was designed to exploit this property. Under the NRFS algorithm, unselected feature vectors have a high priority of being added into the feature subset if their neighboring feature vectors have been selected, and selected feature vectors have a high priority of being eliminated if their neighboring feature vectors are not selected. In the experiments conducted in this study, the NRFS algorithm was compared with two other feature selection algorithms. The experimental results indicate that the NRFS algorithm can extract the crucial frequency bands for identifying rat vigilance states and identify the crucial character regions for recognizing Chinese characters. PMID:27314346
An evolutionarily conserved gene family encodes proton-selective ion channels.
Tu, Yu-Hsiang; Cooper, Alexander J; Teng, Bochuan; Chang, Rui B; Artiga, Daniel J; Turner, Heather N; Mulhall, Eric M; Ye, Wenlei; Smith, Andrew D; Liman, Emily R
2018-03-02
Ion channels form the basis for cellular electrical signaling. Despite the scores of genetically identified ion channels selective for other monatomic ions, only one type of proton-selective ion channel has been found in eukaryotic cells. By comparative transcriptome analysis of mouse taste receptor cells, we identified Otopetrin1 (OTOP1), a protein required for development of gravity-sensing otoconia in the vestibular system, as forming a proton-selective ion channel. We found that murine OTOP1 is enriched in acid-detecting taste receptor cells and is required for their zinc-sensitive proton conductance. Two related murine genes, Otop2 and Otop3, and a Drosophila ortholog also encode proton channels. Evolutionary conservation of the gene family and its widespread tissue distribution suggest a broad role for proton channels in physiology and pathophysiology. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Robust Transceiver Design for Multiuser MIMO Downlink with Channel Uncertainties
NASA Astrophysics Data System (ADS)
Miao, Wei; Li, Yunzhou; Chen, Xiang; Zhou, Shidong; Wang, Jing
This letter addresses the problem of robust transceiver design for the multiuser multiple-input-multiple-output (MIMO) downlink where the channel state information at the base station (BS) is imperfect. A stochastic approach which minimizes the expectation of the total mean square error (MSE) of the downlink conditioned on the channel estimates under a total transmit power constraint is adopted. The iterative algorithm reported in [2] is improved to handle the proposed robust optimization problem. Simulation results show that our proposed robust scheme effectively reduces the performance loss due to channel uncertainties and outperforms existing methods, especially when the channel errors of the users are different.
Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo
2018-02-01
This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested on a low-power platform. The algorithms were tested on six public EEG databases, comparing favorably with the best compression rates reported to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 µA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughput in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate), the larger the benefits obtained from compression.
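The lossless/near-lossless guarantee described above is typically obtained with a predictive coder that quantizes residuals. The sketch below shows the idea with a trivial previous-sample predictor and a per-sample error bound `delta` (delta = 0 gives lossless operation); the paper's algorithms use more refined predictors and entropy coding, so this is only a conceptual illustration.

```python
def near_lossless_encode(samples, delta):
    """Sketch of near-lossless predictive coding: each sample is predicted by
    the previous *reconstructed* sample, and the residual is uniformly
    quantized so the per-sample reconstruction error never exceeds delta.
    With delta = 0 the coder is exactly lossless."""
    step = 2 * delta + 1
    residuals, recon, prev = [], [], 0
    for s in samples:
        r = s - prev
        # quantize the residual to the nearest multiple of step
        q = (r + delta) // step if r >= 0 else -((-r + delta) // step)
        residuals.append(q)            # these indices would be entropy-coded
        prev = prev + q * step         # decoder-side reconstruction
        recon.append(prev)
    return residuals, recon

eeg = [0, 3, 10, 9, 4, -2]             # toy integer EEG samples
res, rec = near_lossless_encode(eeg, delta=1)
res0, rec0 = near_lossless_encode(eeg, delta=0)   # lossless mode
```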
Noskov, Sergei Yu; Rostovtseva, Tatiana K; Bezrukov, Sergey M
2013-12-23
Voltage-dependent anion channel (VDAC), the major channel of the mitochondrial outer membrane, serves as a principal pathway for ATP, ADP, and other respiratory substrates across this membrane. Using umbrella-sampling simulations, we established the thermodynamic and kinetic components governing ATP transport across the VDAC1 channel. We found that there are several low-affinity binding sites for ATP along the translocation pathway and that the main barrier for ATP transport is located around the center of the channel and is formed predominantly by residues in the N-terminus. The binding affinity of ATP to an open channel was found to be in the millimolar to micromolar range. However, we show that this weak binding increases the ATP translocation probability by about 10-fold compared with the VDAC pore in which attractive interactions were artificially removed. Recently, it was found that free dimeric tubulin induces a highly efficient, reversible blockage of VDAC reconstituted into planar lipid membranes. It was proposed that, by blocking VDAC permeability for ATP/ADP and other mitochondrial respiratory substrates, tubulin controls mitochondrial respiration. Using the Rosetta protein-protein docking algorithm, we established a tentative structure of the VDAC-tubulin complex. An extensive set of equilibrium and nonequilibrium (under applied electric field) molecular dynamics (MD) simulations was used to establish the conductance of the open and blocked channel. It was found that the presence of the unstructured C-terminal tail of tubulin in the VDAC pore decreases its conductance by more than 40% and switches its selectivity from anionic to cationic. The subsequent 1D potential of mean force (PMF) computations for the VDAC-tubulin complex show that this blocked state renders ATP transport virtually impossible.
A number of residues pivotal for tubulin binding to the channel were identified, which helps to clarify the molecular details of the VDAC-tubulin interaction and provides new insight into the mechanism of the control of mitochondrial respiration by VDAC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syh, J; Syh, J; Patel, B
2015-06-15
Purpose: The multichannel cylindrical applicator is a distinctive modification of the traditional single-channel cylindrical applicator. The novel multichannel applicator has additional peripheral channels that provide more flexibility in both the treatment planning process and its outcomes. The goals of such a novel brachytherapy device are to protect adjacent organs at risk (OAR) by reducing their doses while maintaining target coverage with inverse plan optimization. Following comparison and analysis of results in more than forty patients who received HDR brachytherapy using the multichannel vaginal applicator, this procedure has been implemented in our institution. Methods: Multichannel planning was CT-image based. The CTV, a 5 mm vaginal cuff rind of prescribed length, was reconstructed along with the bladder and rectum. The D95 of the CTV was required to be at least 95% of the prescribed dose. The multichannel inverse plan optimization algorithm not only shapes the target dose cloud but also sets dose-avoidance objectives for the OARs. The doses D2cc, D5cc, and D5 and the volume V2Gy in the OARs were selected for comparison with single-channel results, in which the sole central channel is the only possibility. Results: The study demonstrates superior OAR dose reduction in the multichannel plan. The D2cc of the rectum and bladder were slightly lower for multichannel vs. single channel. The V2Gy of the rectum was 93.72% vs. 83.79% (p=0.007) for single channel vs. multichannel, respectively. The absolute reduction in mean D5 dose achieved by the multichannel applicator was 17 cGy (s.d.=6.4) in the bladder and 44 cGy (s.d.=15.2) in the rectum. Conclusion: The optimization solution in the multichannel plan was to maintain D95 CTV coverage while reducing the dose to the OARs. The dosimetric advantage of sparing critical organs by using a multichannel applicator in HDR brachytherapy of the vaginal cuff is promising, and the technique has been implemented clinically.
Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A
2015-06-01
Naturally inspired evolutionary algorithms have proven effective for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, the Genetic Bee Colony (GBC) algorithm, which combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to microarray gene expression profiles in order to select the most predictive and informative genes for cancer classification. Extensive experiments were conducted to test the classification accuracy of the proposed algorithm. Three binary microarray datasets are used: colon, leukemia, and lung. In addition, three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combinations of mRMR with GA (mRMR-GA) and with Particle Swarm Optimization (mRMR-PSO). In addition, we compared the GBC algorithm with other related algorithms recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, achieving the highest classification accuracy along with the lowest average number of selected genes. This demonstrates that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
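A toy sketch of the GA + ABC hybrid idea follows; the bitmask encoding, the crossover/replacement rule, the single-flip bee search, and the toy fitness function are all illustrative assumptions rather than the published GBC operators.

```python
import random

def gbc_sketch(n_genes, fitness, pop_size=10, iters=60, seed=1):
    """Toy sketch of a GA + ABC hybrid for gene-subset selection, in the
    spirit of the GBC algorithm described above (operators are illustrative).
    Individuals are bitmasks; the GA contributes crossover, the ABC
    contributes employed-bee local search."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(iters):
        # GA step: one-point crossover of two random parents replaces the
        # worst individual if the child is fitter
        p1, p2 = rng.sample(pop, 2)
        cut = rng.randrange(1, n_genes)
        child = p1[:cut] + p2[cut:]
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
        # ABC step: each "bee" tries flipping one random gene (local search)
        for i, ind in enumerate(pop):
            trial = ind[:]
            trial[rng.randrange(n_genes)] ^= 1
            if fitness(trial) > fitness(ind):
                pop[i] = trial
    return max(pop, key=fitness)

# toy fitness: reward selecting genes 0 and 3, penalize subset size
target = {0, 3}
fit = lambda m: sum(2 for i in target if m[i]) - 0.5 * sum(m)
best = gbc_sketch(6, fit)
```

In a real gene selection setting the fitness would be a cross-validated classifier accuracy penalized by subset size, which is far more expensive to evaluate than this toy objective.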
Cloud screening Coastal Zone Color Scanner images using channel 5
NASA Technical Reports Server (NTRS)
Eckstein, B. A.; Simpson, J. J.
1991-01-01
Clouds are removed from Coastal Zone Color Scanner (CZCS) data using channel 5. Instrumentation problems require pre-processing of channel 5 before an intelligent cloud-screening algorithm can be used. For example, at intervals of about 16 lines, the sensor records anomalously low radiances. Moreover, the calibration equation yields negative radiances when the sensor records zero counts, and pixels corrupted by electronic overshoot must also be excluded. The remaining pixels may then be used in conjunction with the procedure of Simpson and Humphrey to determine the CZCS cloud mask. These results plus in situ observations of phytoplankton pigment concentration show that pre-processing and proper cloud-screening of CZCS data are necessary for accurate satellite-derived pigment concentrations. This is especially true in the coastal margins, where pigment content is high and image distortion associated with electronic overshoot is also present. The pre-processing algorithm is critical to obtaining accurate global estimates of pigment from spacecraft data.
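The pre-processing steps can be sketched roughly as follows; the 0.5x-median line test stands in for the actual anomalous-line criterion, and the electronic-overshoot screening described above is omitted for brevity.

```python
import numpy as np

def czcs_channel5_mask(radiance):
    """Sketch of the channel-5 pre-processing described above (the line test
    is an illustrative stand-in for the actual criteria): mask the anomalously
    low scan lines that recur at ~16-line intervals and the negative radiances
    produced by calibrating zero-count pixels. Screening pixels corrupted by
    electronic overshoot would be a further step."""
    valid = np.ones(radiance.shape, dtype=bool)
    # flag whole scan lines whose mean radiance is anomalously low
    line_means = radiance.mean(axis=1)
    valid[line_means < 0.5 * np.median(line_means), :] = False
    # flag pixels whose calibrated radiance is negative (zero-count pixels)
    valid &= radiance >= 0
    return valid

img = np.full((32, 8), 10.0)   # synthetic channel-5 radiance image
img[15, :] = 0.1               # anomalously low scan line
img[3, 2] = -1.0               # negative calibrated radiance
mask = czcs_channel5_mask(img)
```

Only pixels with `mask == True` would then be passed to the cloud-screening procedure of Simpson and Humphrey.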
Zhang, Junwen; Yu, Jianjun; Chi, Nan; Chien, Hung-Chang
2014-08-25
We theoretically and experimentally investigate a time-domain digital pre-equalization (DPEQ) scheme for bandwidth-limited optical coherent communication systems, which is based on feedback of channel characteristics from the receiver-side blind adaptive equalizers, such as the least-mean-squares (LMS) algorithm and the constant- and multi-modulus algorithms (CMA, MMA). Based on the proposed DPEQ scheme, we theoretically and experimentally study its performance under various channel conditions and channel-estimation resolutions, such as filtering bandwidth, tap length, and OSNR. Using a high-speed 64-GSa/s DAC together with the proposed DPEQ technique, we successfully synthesized band-limited 40-Gbaud signals in the modulation formats of polarization-division multiplexed (PDM) quadrature phase shift keying (QPSK), 8-quadrature amplitude modulation (8-QAM), and 16-QAM, and significant improvements in both back-to-back and transmission BER performance are also demonstrated.
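The receiver-side adaptive equalizer whose converged coefficients a DPEQ scheme can feed back is classically the LMS recursion. The sketch below is a real-valued, training-aided toy; the paper's equalizers are complex-valued, blind (CMA/MMA), and polarization-aware.

```python
import numpy as np

def lms_equalize(received, desired, n_taps=7, mu=0.02):
    """Minimal real-valued LMS adaptive FIR equalizer: at each step the
    filter output is compared against the known training symbol and the
    taps are nudged along the stochastic gradient of the squared error."""
    w = np.zeros(n_taps)
    out = np.zeros(len(received))
    for n in range(n_taps - 1, len(received)):
        x = received[n - n_taps + 1:n + 1][::-1]   # rx[n], rx[n-1], ...
        out[n] = w @ x                             # equalizer output
        e = desired[n] - out[n]                    # error vs. training symbol
        w += mu * e * x                            # stochastic gradient step
    return w, out

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=4000)          # BPSK training sequence
rx = np.convolve(symbols, [1.0, 0.4])[:len(symbols)]  # two-tap ISI "band-limited" channel
w, out = lms_equalize(rx, symbols)
```

After convergence, `w` approximates the inverse of the channel, and it is this kind of estimate that a pre-equalizer can apply at the transmitter DAC instead.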
NASA Astrophysics Data System (ADS)
Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong
2018-01-01
In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. As a popular color model, the Hue-Saturation-Intensity (HSI) model is commonly used in image processing. A new single-channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, in which the color components of the original color image are converted to HSI and the logistic map is employed to diffuse the relationships among pixels in the color components. Subsequently, the quantum Fourier transform is exploited to accomplish the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single-channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
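The logistic-map diffusion stage is classical and easy to sketch. The XOR keystream construction below is an illustrative stand-in for the paper's HSI/QFT pipeline: it shows only how a chaotic, key-sensitive sequence can scramble pixel values reversibly.

```python
def logistic_sequence(x0, n, mu=3.99):
    """Chaotic logistic-map orbit x_{k+1} = mu * x_k * (1 - x_k); with mu
    near 4 and 0 < x0 < 1 the orbit stays in (0, 1) and is highly sensitive
    to the initial value x0 (the key)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        xs.append(x)
    return xs

def diffuse(pixels, key=0.3141592653589793):
    """Illustrative diffusion stage: XOR each 8-bit pixel with a keystream
    byte derived from the logistic map. XOR with the same keystream inverts
    the operation, so diffuse() is its own decryptor."""
    stream = [int(x * 256) for x in logistic_sequence(key, len(pixels))]
    return [p ^ s for p, s in zip(pixels, stream)]

plain = [12, 200, 7, 99, 99, 0]
cipher = diffuse(plain)
restored = diffuse(cipher)   # same key, XOR cancels
```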
NASA Astrophysics Data System (ADS)
Geneva, Nicholas; Wang, Lian-Ping
2015-11-01
In the past 25 years, the mesoscopic lattice Boltzmann method (LBM) has become an increasingly popular approach for simulating incompressible flows, including turbulent flows. While LBM solves more solution variables than the conventional CFD approach based on the macroscopic Navier-Stokes equation, it also offers opportunities for more efficient parallelization. In this talk we will describe several algorithms developed over the past ten-plus years that can represent the two core steps of LBM, collision and streaming, more effectively than standard approaches. The application of these algorithms spans LBM simulations ranging from basic channel flow to particle-laden flows. We will cover the essential details of implementing each algorithm for simple 2D flows, as well as the challenges one faces when using a given algorithm in more complex simulations. The key is to explore the best use of data structures and cache memory. Two basic data structures will be discussed, and the importance of effective data storage for maximizing a CPU's cache will be addressed. The performance of a 3D turbulent channel flow simulation using these different algorithms and data structures will be compared, along with important hardware-related issues.
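As a minimal illustration of the two core steps, the sketch below performs one stream-and-collide update of a D2Q9 BGK lattice in a structure-of-arrays layout `f[q, x, y]`. This is the textbook formulation on a periodic grid, not one of the optimized algorithms discussed in the talk.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (standard values)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def stream_collide(f, tau=0.8):
    """One LBM step on a periodic grid: streaming shifts each population
    along its lattice velocity; collision relaxes toward the local
    equilibrium (BGK). Structure-of-arrays layout f[q, x, y]."""
    for q in range(9):                        # streaming step
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    rho = f.sum(axis=0)                       # density moment
    u = np.einsum('qi,qxy->ixy', c, f) / rho  # velocity moment
    cu = np.einsum('qi,ixy->qxy', c, u)
    usq = (u ** 2).sum(axis=0)
    feq = w[:, None, None] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)
    return f + (feq - f) / tau                # BGK collision

f = np.ones((9, 16, 16)) * w[:, None, None]   # uniform fluid at rest
f2 = stream_collide(f.copy())                 # the rest state is a fixed point
```

The cache-friendly algorithms mentioned in the talk essentially reorganize the `roll`-style streaming (e.g., swap or shift schemes) so the same update touches memory far less.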
Hondorp, Darryl W; Bennion, David H; Roseman, Edward F; Holbrook, Christopher M; Boase, James C; Chiotti, Justin A; Thomas, Michael V; Wills, Todd C; Drouin, Richard G; Kessel, Steven T; Krueger, Charles C
2017-01-01
Channelization for navigation and flood control has altered the hydrology and bathymetry of many large rivers with unknown consequences for fish species that undergo riverine migrations. In this study, we investigated whether altered flow distributions and bathymetry associated with channelization attracted migrating Lake Sturgeon (Acipenser fulvescens) into commercial navigation channels, potentially increasing their exposure to ship strikes. To address this question, we quantified and compared Lake Sturgeon selection for navigation channels vs. alternative pathways in two multi-channel rivers differentially affected by channelization, but free of barriers to sturgeon movement. Acoustic telemetry was used to quantify Lake Sturgeon movements. Under the assumption that Lake Sturgeon navigate by following primary flow paths, acoustic-tagged Lake Sturgeon in the more-channelized lower Detroit River were expected to choose navigation channels over alternative pathways and to exhibit greater selection for navigation channels than conspecifics in the less-channelized lower St. Clair River. Consistent with these predictions, acoustic-tagged Lake Sturgeon in the more-channelized lower Detroit River selected the higher-flow and deeper navigation channels over alternative migration pathways, whereas in the less-channelized lower St. Clair River, individuals primarily used pathways alternative to navigation channels. Lake Sturgeon selection for navigation channels as migratory pathways also was significantly higher in the more-channelized lower Detroit River than in the less-channelized lower St. Clair River. We speculated that use of navigation channels over alternative pathways would increase the spatial overlap of commercial vessels and migrating Lake Sturgeon, potentially enhancing their vulnerability to ship strikes. 
Results of our study thus demonstrated an association between channelization and the path use of migrating Lake Sturgeon that could prove important for predicting sturgeon-vessel interactions in navigable rivers as well as for understanding how fish interact with their habitat in landscapes altered by human activity.
A high throughput architecture for a low complexity soft-output demapping algorithm
NASA Astrophysics Data System (ADS)
Ali, I.; Wasenmüller, U.; Wehn, N.
2015-11-01
Iterative channel decoders such as Turbo-Code and LDPC decoders show exceptional performance, and therefore they are part of many wireless communication receivers nowadays. These decoders require a soft input, i.e., the logarithmic likelihood ratio (LLR) of the received bits, with a typical quantization of 4 to 6 bits. For computing the LLR values from a received complex symbol, a soft demapper is employed in the receiver. The implementation cost of traditional soft-output demapping methods is relatively large in high-order modulation systems, and therefore low-complexity demapping algorithms are indispensable in low-power receivers. In the presence of multiple wireless communication standards, where each standard defines multiple modulation schemes, there is a need for an efficient demapper architecture covering all the flexibility requirements of these standards. Another challenge associated with hardware implementation of the demapper is achieving a very high throughput in doubly iterative systems, for instance, MIMO and code-aided synchronization. In this paper, we present a comprehensive communication and hardware performance evaluation of low-complexity soft-output demapping algorithms in order to select the best algorithm for implementation. The main goal of this work is to design a high-throughput, flexible, and area-efficient architecture. We describe architectures to execute the investigated algorithms and implement them on an FPGA device to evaluate their hardware performance. The work has resulted in a hardware architecture, based on the best low-complexity algorithm identified, delivering a high throughput of 166 Msymbols/second for Gray-mapped 16-QAM modulation on a Virtex-5. This efficient architecture occupies only 127 slice registers, 248 slice LUTs, and 2 DSP48Es.
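For reference, the standard baseline in this space is the max-log-MAP demapper, sketched below for Gray-mapped QPSK; the paper evaluates and implements further-reduced-complexity variants of this idea, so the code is only the generic starting point.

```python
import numpy as np

def maxlog_llr(y, constellation, bit_labels, noise_var):
    """Max-log-MAP soft demapper: for each bit position the LLR is
    approximated by the difference of squared distances from y to the
    nearest symbol labeled 1 and the nearest symbol labeled 0, scaled
    by the noise variance. Positive LLR favors bit 0 here."""
    d2 = np.abs(y - constellation) ** 2            # distance to every symbol
    llrs = []
    for b in range(len(bit_labels[0])):
        d0 = min(d2[i] for i, lab in enumerate(bit_labels) if lab[b] == 0)
        d1 = min(d2[i] for i, lab in enumerate(bit_labels) if lab[b] == 1)
        llrs.append((d1 - d0) / noise_var)
    return llrs

# Gray-mapped QPSK: symbol i carries the bit pair bit_labels[i]
constellation = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)
bit_labels = [(0, 0), (1, 0), (0, 1), (1, 1)]
llrs = maxlog_llr(0.9 + 0.8j, constellation, bit_labels, noise_var=0.5)
```

For 16-QAM and higher orders, the hardware-friendly algorithms the paper surveys replace the explicit distance minimizations with piecewise-linear functions of the received I/Q components.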
Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array
NASA Astrophysics Data System (ADS)
Mizuno, T.; LeCalvez, J.; Raymer, D.
2017-12-01
Application of distributed acoustic sensing (DAS) has been studied in several areas of seismology. One of these areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include a relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Given the large volume of data from distributed sensing, microseismic event detection and location using a source-scanning-type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for 3C geophones. Only a small number of high-SNR events will be detected across a large aperture encompassing the hybrid array; therefore, the aperture is to be optimized dynamically to eliminate noisy channels for the majority of events. For such a hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of event location and origin time. At each receiver, a time function of event arrival likelihood is inferred using an SNR function, and it is migrated in time and space to determine the hypocenter and origin time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is likely to be detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to a small aperture, a minimum aperture threshold is employed. The algorithm refines the location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset. The method of Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at the receivers.
Strain rate along the borehole axis is computed from particle velocity to serve as synthetic DAS microseismic data. The likelihood function formed from both DAS and geophone data behaves as expected, with the aperture dynamically selected depending on the SNR of the event. We conclude that this algorithm can be successfully applied to such hybrid arrays for monitoring microseismic activity. A study using a recently acquired dataset is planned.
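The dynamic aperture selection step can be sketched as a simple peak-SNR threshold with a minimum-aperture fallback; the threshold and minimum receiver count below are illustrative values, not the settings used in the revised CMM.

```python
def select_aperture(snr_by_receiver, snr_threshold=3.0, min_receivers=4):
    """Sketch of dynamic aperture selection: keep only receivers whose peak
    SNR exceeds a threshold, but fall back to the highest-SNR min_receivers
    channels so that an overly small aperture cannot produce a spurious
    location."""
    picked = [i for i, snr in enumerate(snr_by_receiver) if snr >= snr_threshold]
    if len(picked) < min_receivers:
        # fallback: take the best channels by SNR regardless of threshold
        ranked = sorted(range(len(snr_by_receiver)),
                        key=lambda i: snr_by_receiver[i], reverse=True)
        picked = sorted(ranked[:min_receivers])
    return picked

snrs = [0.5, 4.2, 3.1, 0.9, 6.0, 2.8, 3.3, 0.4]   # peak SNR per channel
aperture = select_aperture(snrs)                   # receivers used in CMM sum
weak = select_aperture([0.1] * 8)                  # triggers the fallback
```

Only the channels in `aperture` would then contribute arrival-likelihood traces to the coalescence (migration) sum.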
Spatial and Social Diffusion of Information and Influence: Models and Algorithms
ERIC Educational Resources Information Center
Doo, Myungcheol
2012-01-01
In this dissertation research, we argue that spatial alarms and activity-based social networks are two fundamentally new types of information and influence diffusion channels. Such new channels have the potential of enriching our professional experiences and our personal life quality in many unprecedented ways. First, we develop an activity driven…
Lysine and the Na+/K+ Selectivity in Mammalian Voltage-Gated Sodium Channels.
Li, Yang; Liu, Huihui; Xia, Mengdie; Gong, Haipeng
2016-01-01
Voltage-gated sodium (Nav) channels are critical in the generation and transmission of neuronal signals in mammals. The crystal structures of several prokaryotic Nav channels determined in recent years inspire the mechanistic studies on their selection upon the permeable cations (especially between Na+ and K+ ions), a property that is proposed to be mainly determined by residues in the selectivity filter. However, the mechanism of cation selection in mammalian Nav channels lacks direct explanation at atomic level due to the difference in amino acid sequences between mammalian and prokaryotic Nav homologues, especially at the constriction site where the DEKA motif has been identified to determine the Na+/K+ selectivity in mammalian Nav channels but is completely absent in the prokaryotic counterparts. Among the DEKA residues, Lys is of the most importance since its mutation to Arg abolishes the Na+/K+ selectivity. In this work, we modeled the pore domain of mammalian Nav channels by mutating the four residues at the constriction site of a prokaryotic Nav channel (NavRh) to DEKA, and then mechanistically investigated the contribution of Lys in cation selection using molecular dynamics simulations. The DERA mutant was generated as a comparison to understand the loss of ion selectivity caused by the K-to-R mutation. Simulations and free energy calculations on the mutants indicate that Lys facilitates Na+/K+ selection by electrostatically repelling the cation to a highly Na+-selective location sandwiched by the carboxylate groups of Asp and Glu at the constriction site. In contrast, the electrostatic repulsion is substantially weakened when Lys is mutated to Arg, because of two intrinsic properties of the Arg side chain: the planar geometric design and the sparse charge distribution of the guanidine group.
Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein
2016-01-01
Two new soft computing models, namely genetic programming (GP) and a genetic artificial algorithm (GAA) neural network (a combination of a modified genetic algorithm and artificial neural network methods), were developed in order to predict the percentage of shear force in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the effectiveness of the independent parameters in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program, determined to be the best model, and five equations obtained in prior research. The GP model, with the lowest error values (root mean square error (RMSE) of 0.0515), performed best among the equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.
NASA Astrophysics Data System (ADS)
Dhakshnamoorthy, Balasundaresan; Rohaim, Ahmed; Rui, Huan; Blachowicz, Lydia; Roux, Benoît
2016-09-01
The selectivity filter is an essential functional element of K+ channels that is highly conserved both in terms of its primary sequence and its three-dimensional structure. Here, we investigate the properties of an ion channel from the Gram-positive bacterium Tsukamurella paurometabola with a selectivity filter formed by an uncommon proline-rich sequence. Electrophysiological recordings show that it is a non-selective cation channel and that its activity depends on Ca2+ concentration. In the crystal structure, the selectivity filter adopts a novel conformation with Ca2+ ions bound within the filter near the pore helix where they are coordinated by backbone oxygen atoms, a recurrent motif found in multiple proteins. The binding of Ca2+ ion in the selectivity filter controls the widening of the pore as shown in crystal structures and in molecular dynamics simulations. The structural, functional and computational data provide a characterization of this calcium-gated cationic channel.
Fujiwara, Yuichiro; Arrigoni, Cristina; Domigan, Courtney; Ferrara, Giuseppina; Pantoja, Carlos; Thiel, Gerhard; Moroni, Anna; Minor, Daniel L.
2009-01-01
Background: Understanding the interactions between ion channels and blockers remains an important goal that has implications for delineating the basic mechanisms of ion channel function and for the discovery and development of ion channel directed drugs. Methodology/Principal Findings: We used genetic selection methods to probe the interaction of two ion channel blockers, barium and amantadine, with the miniature viral potassium channel Kcv. Selection for Kcv mutants that were resistant to either blocker identified a mutant bearing multiple changes that was resistant to both. Implementation of a PCR shuffling and backcrossing procedure uncovered that the blocker resistance could be attributed to a single change, T63S, at a position that is likely to form the binding site for the inner ion in the selectivity filter (site 4). A combination of electrophysiological and biochemical assays revealed a distinct difference in the ability of the mutant channel to interact with the blockers. Studies of the analogous mutation in the mammalian inward rectifier Kir2.1 show that the T→S mutation affects barium block as well as the stability of the conductive state. Comparison of the effects of similar barium resistant mutations in Kcv and Kir2.1 shows that neighboring amino acids in the Kcv selectivity filter affect blocker binding. Conclusions/Significance: The data support the idea that permeant ions have an integral role in stabilizing potassium channel structure, suggest that both barium and amantadine act at a similar site, and demonstrate how genetic selections can be used to map blocker binding sites and reveal mechanistic features. PMID:19834614
New capabilities for characterizing smoke and dust aerosol over land using MODIS
NASA Astrophysics Data System (ADS)
Levy, R. C.; Remer, L. A.
2006-12-01
Smoke and dust aerosol have different chemical, optical and physical properties and both types affect many processes within the climate system. As earth's surface and atmosphere are continuously altered by natural and anthropogenic processes, the emission and presumably the effects of these aerosols are also changing. Thus it is necessary to observe and characterize aerosols on a global and climatic scale. While MODIS has been reporting characteristics of smoke and dust aerosol over land and ocean since shortly after Terra launch, the uncertainties in the over-land retrieval have been larger than expected. To better characterize different aerosol types closer to their source regions with greater accuracy, we have developed a new operational algorithm for retrieving aerosol properties over dark land surfaces from MODIS-observed visible (VIS) and infrared (IR) reflectance. Like earlier versions, this algorithm estimates the total loading (aerosol optical depth-τ) and relative weighting of fine (non-dust) and coarse (dust) -dominated aerosol to the total τ (fine weighting-η) over dark land surfaces. However, the fundamental mathematics and major assumptions have been overhauled. The new algorithm performs simultaneous multi-channel inversion that includes information about coarse aerosol in the IR channels, while assuming a fine-tuned relationship between VIS and IR surface reflectances, that is itself a function of scattering angle and vegetation condition. Finally, the suite of expected aerosol optical models described by the lookup table have been revised to closer resemble the AERONET climatology, including for smoke and dust aerosol. Beginning in April 2006, this algorithm has been used for forward processing and backward re- processing of the entire MODIS dataset observed from both Terra and Aqua. "Collection 5" products were completed for Aqua reprocessing by July 2006 and should be complete for Terra by December 2006. 
In this study, we used the complete Aqua dataset (July 2002-Aug 2006) and two years of Terra data (2005-Aug 2006) to evaluate the products in regions known to be dominated by smoke and/or dust. We compared with sunphotometer data at selected AERONET sites and found improved τ retrievals, within the prescribed accuracy.
An algorithm to detect fire activity using Meteosat: fine tuning and quality assessment
NASA Astrophysics Data System (ADS)
Amraoui, M.; DaCamara, C. C.; Ermida, S. L.
2012-04-01
Hot spot detection by means of sensors on board geostationary satellites allows studying wildfire activity at hourly and even sub-hourly intervals, an advantage that cannot be met by polar orbiters. Since 1997, the Satellite Application Facility for Land Surface Analysis has been running an operational procedure that allows detecting active fires based on information from Meteosat-8/SEVIRI. This is the so-called Fire Detection and Monitoring (FD&M) product; the procedure takes advantage of the temporal resolution of SEVIRI (one image every 15 min) and relies on information from SEVIRI channels (namely 0.6, 0.8, 3.9, 10.8 and 12.0 μm) together with information on illumination angles. The method is based on heritage from contextual algorithms designed for polar, sun-synchronous instruments, namely NOAA/AVHRR and MODIS/Terra-Aqua. A potential fire pixel is compared with the neighboring ones, and the decision is made based on relative thresholds derived from the pixels in the neighborhood. Generally speaking, the observed fire incidence compares well against hot spots extracted from the global daily active fire product developed by the MODIS Fire Team. However, values of the probability of detection (POD) tend to be quite low, a result that may be partly explained by the finer resolution of MODIS. The aim of the present study is to make a systematic assessment of the impacts on POD and False Alarm Ratio (FAR) of the several parameters that are set in the algorithm. Such parameters range from the threshold values of brightness temperature in the IR3.9 and IR10.8 channels that are used to select potential fire pixels, up to the extent of the background grid and the thresholds used to statistically characterize the radiometric departures of a potential pixel from the respective background. The impact of different criteria to identify pixels contaminated by clouds, smoke and sun glint is also evaluated.
Finally, the advantages that may be brought to the algorithm by adding contextual tests in the time domain are discussed. The study lays the groundwork for the development of improved quality flags that will be integrated into the FD&M product in the near future.
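The contextual test described above can be sketched as follows. The absolute thresholds, window size, and number of standard deviations below are illustrative placeholders, not the operational FD&M values:

```python
import numpy as np

def detect_fires(t39, t108, abs_thresh=310.0, diff_thresh=8.0, win=7, k=3.0):
    """Contextual fire test: absolute thresholds select potential fire pixels,
    which are then compared with background statistics in a win x win window."""
    rows, cols = t39.shape
    diff = t39 - t108                      # brightness temperature difference
    fires = np.zeros_like(t39, dtype=bool)
    half = win // 2
    for i in range(rows):
        for j in range(cols):
            # Absolute tests: only hot pixels become candidates.
            if t39[i, j] < abs_thresh or diff[i, j] < diff_thresh:
                continue
            r0, r1 = max(0, i - half), min(rows, i + half + 1)
            c0, c1 = max(0, j - half), min(cols, j + half + 1)
            bg = diff[r0:r1, c0:c1].ravel()
            # Exclude the candidate itself from the background statistics.
            bg = np.delete(bg, (i - r0) * (c1 - c0) + (j - c0))
            # Relative test: candidate must stand out from its neighborhood.
            fires[i, j] = diff[i, j] > bg.mean() + k * bg.std()
    return fires
```

A production algorithm would additionally mask clouds, smoke, and sun glint before computing the background statistics, which is precisely the set of criteria whose impact the study evaluates.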
Initial steps of inactivation at the K+ channel selectivity filter
Thomson, Andrew S.; Heer, Florian T.; Smith, Frank J.; Hendron, Eunan; Bernèche, Simon; Rothberg, Brad S.
2014-01-01
K+ efflux through K+ channels can be controlled by C-type inactivation, which is thought to arise from a conformational change near the channel’s selectivity filter. Inactivation is modulated by ion binding near the selectivity filter; however, the molecular forces that initiate inactivation remain unclear. We probe these driving forces by electrophysiology and molecular simulation of MthK, a prototypical K+ channel. Either Mg2+ or Ca2+ can reduce K+ efflux through MthK channels. However, Ca2+, but not Mg2+, can enhance entry to the inactivated state. Molecular simulations illustrate that, in the MthK pore, Ca2+ ions can partially dehydrate, enabling selective accessibility of Ca2+ to a site at the entry to the selectivity filter. Ca2+ binding at the site interacts with K+ ions in the selectivity filter, facilitating a conformational change within the filter and subsequent inactivation. These results support an ionic mechanism that precedes changes in channel conformation to initiate inactivation. PMID:24733889
In-flight automatic detection of vigilance states using a single EEG channel.
Sauvet, F; Bougard, C; Coroenne, M; Lely, L; Van Beers, P; Elbaz, M; Guillard, M; Leger, D; Chennaoui, M
2014-12-01
Sleepiness and fatigue can reach particularly high levels during long-haul overnight flights. Under these conditions, voluntary or even involuntary sleep periods may occur, increasing the risk of accidents. The aim of this study was to assess the performance of an in-flight automatic detection system of low-vigilance states using a single electroencephalogram channel. Fourteen healthy pilots voluntarily wore a miniaturized brain electrical activity recording device during long-haul flights (10 ± 2.0 h, Atlantic 2 and Falcon 50 M, French naval aviation). No subject was disturbed by the equipment. Seven pilots experienced at least one period of voluntary (26.8 ± 8.0 min, n = 4) or involuntary sleep (N1 sleep stage, 26.6 ± 18.7 s, n = 7) during the flight. Automatic classification (wake/sleep) by the algorithm was made for 10-s epochs (O1-M2 or C3-M2 channel), based on comparison of means to detect changes in α, β, and θ relative power, the ratio [(α+θ)/β], or fuzzy logic fusion (α, β). Pertinence and prognostic value of the algorithm were determined using epoch-by-epoch comparison with visual scoring (two blinded readers, AASM rules). The best concordance between automatic detection and visual scoring was observed within the O1-M2 channel, using the ratio [(α+θ)/β] (98.3 ± 4.1% of good detection, K = 0.94 ± 0.07, with a 0.04 ± 0.04 false positive rate and a 0.87 ± 0.10 true positive rate). Our results confirm the efficiency of a miniaturized single electroencephalographic channel recording device, associated with an automatic detection algorithm, for detecting low-vigilance states during real flights.
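The [(α+θ)/β] ratio test on a 10-s epoch can be sketched as below. The band edges, the simple periodogram, and the decision threshold are illustrative assumptions, not the study's exact choices:

```python
import numpy as np

def band_power(freqs, psd, lo, hi):
    """Sum PSD bins over the band [lo, hi) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

def vigilance_ratio(epoch, fs=256.0):
    """Compute the (alpha+theta)/beta power ratio for one EEG epoch."""
    # Simple periodogram; a real system might use Welch averaging instead.
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    theta = band_power(freqs, psd, 4.0, 8.0)
    alpha = band_power(freqs, psd, 8.0, 13.0)
    beta = band_power(freqs, psd, 13.0, 30.0)
    return (alpha + theta) / beta

def classify_epoch(epoch, fs=256.0, threshold=2.0):
    """Label a 10-s epoch: slow-band dominance suggests reduced vigilance."""
    return "low-vigilance" if vigilance_ratio(epoch, fs) > threshold else "wake"
```

Drowsiness shifts EEG power toward the slower α and θ bands, so the ratio rises; the threshold would in practice be calibrated against visually scored epochs.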
NASA Astrophysics Data System (ADS)
Cai, Xiuhong; Li, Xiang; Qi, Hong; Wei, Fang; Chen, Jianyong; Shuai, Jianwei
2016-10-01
The gating properties of the inositol 1,4,5-trisphosphate (IP3) receptor (IP3R) are determined by the binding and unbinding of Ca2+ ions and IP3 messengers. Patch-clamp experiments have characterized the stationary properties of Xenopus oocyte type-1 IP3R (Oo-IP3R1), type-3 IP3R (Oo-IP3R3), and Spodoptera frugiperda IP3R (Sf-IP3R). In this paper, in order to provide insights into the relation between the observed gating characteristics and the gating parameters of different IP3Rs, we apply the immune algorithm to fit the parameters of a modified De Young-Keizer model. By comparing the fitted parameter distributions of the three IP3Rs, we suggest that the three types of IP3Rs have similar open sensitivity in response to IP3. The Oo-IP3R3 channel opens readily in response to low Ca2+ concentrations, while the Sf-IP3R channel is easily inhibited at high Ca2+ concentrations. We also show that the IP3 binding rate is not a sensitive parameter for the stationary gating dynamics of the three IP3Rs, but the inhibitory Ca2+ binding/unbinding rates are sensitive parameters for the gating dynamics of both the Oo-IP3R1 and Oo-IP3R3 channels. Such differences may be important in generating the spatially and temporally complex Ca2+ oscillations in cells. Our study also demonstrates that the immune algorithm can be applied to model parameter searching in biological systems.
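A minimal clonal-selection sketch conveys the immune-algorithm idea used here (affinity-proportional cloning with rank-dependent hypermutation). The mutation schedule and population sizes are arbitrary choices, and the toy curve-fitting objective merely stands in for the De Young-Keizer fitting error:

```python
import numpy as np

rng = np.random.default_rng(7)

def clonal_selection(objective, bounds, pop=20, gens=60, n_clones=5):
    """Immune (clonal selection) search: clone good candidates, hypermutate
    them inversely to their rank, and keep a clone only if it improves."""
    lo, hi = np.array(bounds).T
    P = lo + rng.random((pop, len(bounds))) * (hi - lo)
    for _ in range(gens):
        P = P[np.argsort([objective(p) for p in P])]     # best first
        survivors = [P[0]]                               # elitism
        for rank, p in enumerate(P[: pop // 2]):
            scale = 0.1 * (rank + 1) / pop               # better rank -> smaller mutations
            clones = np.clip(
                p + rng.normal(0.0, scale, (n_clones, len(bounds))) * (hi - lo),
                lo, hi)
            best = min(clones, key=objective)
            survivors.append(best if objective(best) < objective(p) else p)
        while len(survivors) < pop:                      # random newcomers for diversity
            survivors.append(lo + rng.random(len(bounds)) * (hi - lo))
        P = np.array(survivors)
    return min(P, key=objective)
```

In the paper's setting, `objective` would measure the mismatch between the model's stationary open probability and the patch-clamp data, and `bounds` would constrain the binding/unbinding rates.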
An affine projection algorithm using grouping selection of input vectors
NASA Astrophysics Data System (ADS)
Shin, JaeWook; Kong, NamWoong; Park, PooGyeon
2011-10-01
This paper presents an affine projection algorithm (APA) using grouping selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE) in the selection procedure. Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm achieves smaller steady-state estimation errors than the existing algorithms.
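For reference, the baseline APA update that the grouping and selection procedures modify can be sketched as below (the grouping/selection steps themselves are omitted; `order` recent input vectors are stacked and a regularized projection is solved at each step):

```python
import numpy as np

def apa_identify(x, d, taps=4, order=4, mu=0.5, delta=1e-4):
    """Affine projection adaptive filter for system identification.
    x: input signal, d: desired signal, order: number of stacked input vectors."""
    w = np.zeros(taps)
    for n in range(taps + order - 1, len(x)):
        # Stack the 'order' most recent input vectors as rows of A.
        A = np.array([x[n - k - taps + 1 : n - k + 1][::-1] for k in range(order)])
        dn = d[n - order + 1 : n + 1][::-1]
        e = dn - A @ w
        # Regularized projection update: w += mu * A^T (A A^T + delta I)^-1 e
        w += mu * A.T @ np.linalg.solve(A @ A.T + delta * np.eye(order), e)
    return w
```

The paper's contribution is to prune the rows of `A`: near-parallel input vectors (high normalized inner product) add little information but full computational cost, so dropping them reduces complexity and steady-state error.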
Viumdal, Håkon; Mylvaganam, Saba
2017-01-01
In oil and gas and geothermal installations, open channels, followed by sieves for the removal of drill cuttings, are used to monitor the quality and quantity of the drilling fluids. Drilling fluid flow rate is difficult to measure due to the varying flow conditions (e.g., wavy, turbulent and irregular) and the presence of drill cuttings and gas bubbles. Inclusion of a Venturi section in the open channel, with an array of ultrasonic level sensors at locations in the vicinity of and above the Venturi constriction, gives the varying levels of the drilling fluid in the channel. The time series of levels from this array of ultrasonic sensors are used to estimate the drilling fluid flow rate, which is compared with Coriolis meter measurements. Fuzzy logic, neural networks and support vector regression algorithms applied to the data from temporal and spatial ultrasonic level measurements of the drilling fluid in the open channel give estimates of its flow rate with sufficient reliability and repeatability and acceptable uncertainty, providing novel soft sensing of an important process variable. Simulations, cross-validations and experimental results show that feedforward neural networks with the Bayesian regularization learning algorithm provide the best flow rate estimates. Finally, the benefits of using this soft sensing technique combined with a Venturi constriction in open channels are discussed. PMID:29072595
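As background to the level-to-flow mapping the machine-learning models learn, the idealized critical-flow relation for a rectangular Venturi throat relates upstream head to flow rate. The discharge coefficient below is a stand-in value; real drilling-fluid behavior (non-Newtonian rheology, waves, cuttings) deviates from this ideal, which is exactly why the soft sensor is trained on data rather than on the formula:

```python
import math

def venturi_flow(head_m, throat_width_m, cd=0.95, g=9.81):
    """Idealized open-channel critical flow through a rectangular Venturi throat:
    Q = Cd * (2/3)^(3/2) * sqrt(g) * b * H^(3/2)  [m^3/s]."""
    return cd * (2.0 / 3.0) ** 1.5 * math.sqrt(g) * throat_width_m * head_m ** 1.5
```

The 3/2-power dependence on head is what makes the upstream level measurements informative about flow rate in the first place; the neural network then absorbs the fluid-specific corrections.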
Study on localization of epileptic focus based on causality analysis
NASA Astrophysics Data System (ADS)
Shan, Shaojie; Li, Hanjun; Tang, Xiaoying
2018-05-01
In this paper, we considered that the ECoG signal contains abundant pathological information that can be used to localize the epileptic focus 1-2 min before seizures. To validate this hypothesis, we cut the ECoG into three stages (before seizure, seizure, and after seizure) and applied the Granger causality, PSI causality, and transfer entropy algorithms to the ECoG at each stage to perform causality analysis of the data. The results show a significant difference in the causality values of the epileptic focus area between the before-seizure, seizure, and after-seizure stages. The causality value of each channel increases during the seizure. After the seizure, the causality between channels shows a downward trend, although the difference is not obvious. These causality differences provide a reliable technical method to assist the clinical diagnosis of the epileptic focus.
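A pairwise Granger-causality sketch in the spirit of this analysis is shown below (the AR model order and the log variance-ratio form are illustrative; the PSI and transfer-entropy measures would replace the inner model):

```python
import numpy as np

def residual_var(y, X):
    """Least-squares fit of y on X; return the residual variance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def granger_causality(x, y, order=2):
    """GC from x to y: log of the restricted/full residual-variance ratio.
    Positive values mean past x helps predict y beyond y's own past."""
    n = len(y)
    target = y[order:]
    ylags = np.column_stack([y[order - k : n - k] for k in range(1, order + 1)])
    xlags = np.column_stack([x[order - k : n - k] for k in range(1, order + 1)])
    var_restricted = residual_var(target, ylags)
    var_full = residual_var(target, np.hstack([ylags, xlags]))
    return np.log(var_restricted / var_full)
```

Evaluating this measure over all channel pairs in each stage yields the causality matrices whose stage-to-stage differences point toward the focus.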
Transmission over UWB channels with OFDM system using LDPC coding
NASA Astrophysics Data System (ADS)
Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech
2009-06-01
A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concerns Ultra-Wideband (UWB) transmission over Personal Area Networks (PANs), including the MB-OFDM specification of the physical layer. In the presented work, the OFDM transmission system was combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system were decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, was also used in the presented system. It is placed after the SPA (Sum-Product Algorithm) decoder and is conditionally turned on during the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, depending on the type of parity-check matrix: randomly generated and deterministically constructed, the latter optimized for a practical decoder architecture implemented in an FPGA device.
Wang, Yulin; Tian, Xuelong
2014-08-01
In order to improve the speech quality and auditory perceptiveness of electronic cochlear implants under strong background noise, a speech enhancement system for the electronic cochlear implant front-end was constructed. Taking a digital signal processor (DSP) as the core, the system combines the DSP's multi-channel buffered serial port (McBSP) data transmission channel with the extended audio interface chip TLV320AIC10, realizing high-speed speech signal acquisition and output. Meanwhile, because traditional speech enhancement methods suffer from poor adaptability, slow convergence and large steady-state error, the versiera function and the de-correlation principle were used to improve the existing adaptive filtering algorithm, which effectively enhanced the quality of voice communications. Test results verified the stability of the system and the de-noising performance of the algorithm, and showed that they can provide clearer speech signals for deaf or tinnitus patients.
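The versiera (witch of Agnesi) curve can be used to shape a variable step size for an adaptive filter. The paper's exact formula is not reproduced here, so the mapping below (large error gives a large step for fast convergence, small error gives a small step for low steady-state misadjustment) is one plausible sketch, not the authors' method:

```python
import numpy as np

def versiera_nlms(x, d, taps=4, mu_max=0.5, a=0.1, eps=1e-8):
    """Normalized LMS whose step size is shaped by a versiera-style curve:
    mu(e) = mu_max * e^2 / (e^2 + 4 a^2), i.e. one minus the normalized
    witch-of-Agnesi bell 4a^2/(e^2 + 4a^2), so the step shrinks smoothly
    as the error shrinks."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1 : n + 1][::-1]   # most recent sample first
        e = d[n] - w @ u
        mu = mu_max * e**2 / (e**2 + 4.0 * a**2)
        w += mu * e * u / (u @ u + eps)     # normalized update
    return w
```

The parameter `a` controls where the transition between the fast and cautious regimes occurs; tuning it trades convergence speed against steady-state error.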
Stephens, Robert F.; Guan, W.; Zhorov, Boris S.; Spafford, J. David
2015-01-01
How nature discriminates sodium from calcium ions in eukaryotic channels has been difficult to resolve because they contain four homologous, but markedly different repeat domains. We glean clues from analyzing the changing pore region in sodium, calcium and NALCN channels, from single-cell eukaryotes to mammals. Alternative splicing in invertebrate homologs provides insights into different structural features underlying calcium and sodium selectivity. NALCN generates alternative ion selectivity with splicing that changes the high field strength (HFS) site at the narrowest level of the hourglass-shaped pore where the selectivity filter is located. Alternative splicing creates NALCN isoforms, in which the HFS site has a ring of glutamates contributed by all four repeat domains (EEEE), or three glutamates and a lysine residue in the third (EEKE) or second (EKEE) position. Alternative splicing provides sodium and/or calcium selectivity in T-type channels with extracellular loops between S5 and P-helices (S5P) of different lengths that contain three or five cysteines. All eukaryotic channels have a set of eight core cysteines in extracellular regions, but the T-type channels have an infusion of 4–12 extra cysteines in extracellular regions. The pattern of conservation suggests a possible pairing of long loops in Domains I and III, which are bridged with core cysteines in NALCN, Cav, and Nav channels, and pairing of shorter loops in Domains II and IV in T-type channels through disulfide bonds involving T-type-specific cysteines. Extracellular turrets of increasing lengths in potassium channels (Kir2.2, hERG, and K2P1) contribute to a changing landscape above the pore selectivity filter that can limit drug access and serve as an ion pre-filter before ions reach the pore selectivity filter below. Pairing of extended loops likely contributes to the large extracellular appendage as seen in single particle electron cryo-microscopy images of the eel Nav1 channel. PMID:26042044
Anastasiadou, Maria N; Christodoulakis, Manolis; Papathanasiou, Eleftherios S; Papacostas, Savvas S; Mitsis, Georgios D
2017-09-01
This paper proposes supervised and unsupervised algorithms for automatic muscle artifact detection and removal from long-term EEG recordings, which combine canonical correlation analysis (CCA) and wavelets with random forests (RF). The proposed algorithms first perform CCA and continuous wavelet transform of the canonical components to generate a number of features which include component autocorrelation values and wavelet coefficient magnitude values. A subset of the most important features is subsequently selected using RF and labelled observations (supervised case) or synthetic data constructed from the original observations (unsupervised case). The proposed algorithms are evaluated using realistic simulation data as well as 30-min epochs of non-invasive EEG recordings obtained from ten patients with epilepsy. We assessed the performance of the proposed algorithms using classification performance and goodness-of-fit values for noisy and noise-free signal windows. In the simulation study, where the ground truth was known, the proposed algorithms yielded almost perfect performance. In the case of experimental data, where expert marking was performed, the results suggest that both the supervised and unsupervised algorithm versions were able to remove artifacts without affecting noise-free channels considerably, outperforming standard CCA, independent component analysis (ICA) and Lagged Auto-Mutual Information Clustering (LAMIC). The proposed algorithms achieved excellent performance for both simulation and experimental data. Importantly, for the first time to our knowledge, we were able to perform entirely unsupervised artifact removal, i.e. without using already marked noisy data segments, achieving performance that is comparable to the supervised case.
Overall, the results suggest that the proposed algorithms hold significant potential for improving EEG signal quality in research or clinical settings without the need for marking by expert neurophysiologists, EMG signal recording, or user visual inspection. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
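A bare-bones BSS-CCA step underlying such pipelines is sketched below (canonical correlation between the recording and a one-sample delayed copy of itself; the wavelet features and the random-forest selection stage are omitted). It relies on the usual assumption that muscle artifact is broadband and therefore has low lag-one autocorrelation:

```python
import numpy as np

def cca_denoise(X, n_remove=1):
    """BSS-CCA: canonical correlation between X(t) and X(t-1); the
    components with the lowest autocorrelation (muscle-like) are removed.
    X: channels x samples. Returns the cleaned, mean-centered signal
    (one sample shorter, aligned with X[:, 1:])."""
    A = X[:, 1:] - X[:, 1:].mean(axis=1, keepdims=True)
    B = X[:, :-1] - X[:, :-1].mean(axis=1, keepdims=True)
    # Whiten both views via SVD.
    Ua, sa, Vta = np.linalg.svd(A, full_matrices=False)
    Ub, sb, Vtb = np.linalg.svd(B, full_matrices=False)
    # SVD of the whitened cross-covariance gives the canonical directions.
    U, corr, Vt = np.linalg.svd(Vta @ Vtb.T)
    Wa = Ua @ np.diag(1.0 / sa) @ U   # demixing: X -> canonical components
    S = Wa.T @ A                       # rows sorted by decreasing autocorrelation
    S[-n_remove:, :] = 0.0             # drop the least-autocorrelated components
    return np.linalg.pinv(Wa.T) @ S    # remix the remaining components
```

In the full method, the decision of which components to drop is not a fixed count but is made by the random forest from autocorrelation and wavelet-magnitude features.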
Dong, Feihong; Li, Hongjun; Gong, Xiangwu; Liu, Quan; Wang, Jingchao
2015-01-01
A typical application scenario of remote wireless sensor networks (WSNs) is identified as an emergency scenario. One of the greatest design challenges for communications in emergency scenarios is energy-efficient transmission, due to scarce electrical energy in large-scale natural and man-made disasters. Integrated high altitude platform (HAP)/satellite networks are expected to optimally meet emergency communication requirements. In this paper, a novel integrated HAP/satellite (IHS) architecture is proposed, and three segments of the architecture are investigated in detail. The concept of link-state advertisement (LSA) is designed in a slow flat Rician fading channel. The LSA is received and processed by the terminal to estimate the link state information, which can significantly reduce the energy consumption at the terminal end. Furthermore, the transmission power requirements of the HAPs and terminals are derived using the gradient descent and differential equation methods. The energy consumption is modeled at both the source and system level. An innovative and adaptive algorithm is given for the energy-efficient path selection. The simulation results validate the effectiveness of the proposed adaptive algorithm. It is shown that the proposed adaptive algorithm can significantly improve energy efficiency when combined with the LSA and the energy consumption estimation. PMID:26404292
Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan
2015-01-01
Multi-carrier code division multiple access (MC-CDMA) is a promising multi-carrier modulation (MCM) technique for high-data-rate wireless communication over frequency-selective fading channels. The MC-CDMA system is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and inter-symbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness to multipath propagation and improved security with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of MC-CDMA systems. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the BER performance of MC-CDMA systems and overcoming MAI effects. In this paper, a low-complexity iterative soft sensitive bits algorithm (SBA) aided logarithmic maximum a posteriori (Log-MAP) turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low-complexity decoding by mitigating the detrimental effects of MAI. PMID:25714917
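The core numerical primitive of a Log-MAP decoder is the Jacobian logarithm (max-star), which replaces the MAP algorithm's sums of exponentials with a max plus a small correction term:

```python
import numpy as np

def max_star(a, b):
    """Jacobian logarithm: max*(a, b) = ln(e^a + e^b)
                                      = max(a, b) + ln(1 + e^(-|a - b|)).
    The cheaper Max-Log-MAP variant drops the log1p correction term
    and keeps only max(a, b), at a small performance cost."""
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))
```

Chaining `max_star` over branch metrics implements the forward/backward recursions of the Log-MAP component decoders inside the turbo MUD, entirely in the log domain to avoid numerical underflow.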
Optimal Signal Processing of Frequency-Stepped CW Radar Data
NASA Technical Reports Server (NTRS)
Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.
1995-01-01
An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the first two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
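The two-step procedure can be sketched as follows for the two-echo case. The frequency grid, delay grid, and exhaustive pair scan are illustrative choices; the paper's organized scan of the nonlinear objective is more refined:

```python
import numpy as np

def fit_amplitudes(freqs, s, delays):
    """Step 1: least-squares complex echo amplitudes for fixed time delays.
    Model: s(f_k) = sum_i a_i * exp(-j 2 pi f_k tau_i)."""
    E = np.exp(-2j * np.pi * np.outer(freqs, delays))
    a, *_ = np.linalg.lstsq(E, s, rcond=None)
    return a, np.linalg.norm(s - E @ a)

def estimate_two_echoes(freqs, s, delay_grid):
    """Step 2: scan delay pairs and keep the global residual minimum."""
    best_res, best_delays, best_amps = np.inf, None, None
    for i in range(len(delay_grid)):
        for j in range(i + 1, len(delay_grid)):
            amps, res = fit_amplitudes(freqs, s, [delay_grid[i], delay_grid[j]])
            if res < best_res:
                best_res = res
                best_delays = (delay_grid[i], delay_grid[j])
                best_amps = amps
    return best_delays, best_amps
```

Because the amplitudes enter the model linearly, each candidate delay pair is resolved by one cheap least-squares solve, leaving only the delays as the nonlinear search variables.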
Masoli, Stefano; Rizza, Martina F; Sgritta, Martina; Van Geit, Werner; Schürmann, Felix; D'Angelo, Egidio
2017-01-01
In realistic neuronal modeling, once the ionic channel complement has been defined, the maximum ionic conductance (Gi-max) values need to be tuned in order to match the firing pattern revealed by electrophysiological recordings. Recently, selection/mutation genetic algorithms have been proposed to efficiently and automatically tune these parameters. Nonetheless, since similar firing patterns can be achieved through different combinations of Gi-max values, it is not clear how well these algorithms approximate the corresponding properties of real cells. Here we have evaluated the issue by exploiting a unique opportunity offered by the cerebellar granule cell (GrC), which is electrotonically compact and has therefore allowed the direct experimental measurement of ionic currents. Previous models were constructed using empirical tuning of Gi-max values to match the original data set. Here, by using repetitive discharge patterns as a template, the optimization procedure yielded models that closely approximated the experimental Gi-max values. These models, in addition to repetitive firing, captured additional features, including inward rectification, near-threshold oscillations, and resonance, which were not used as features. Thus, parameter optimization using genetic algorithms provided an efficient modeling strategy for reconstructing the biophysical properties of neurons and for the subsequent reconstruction of large-scale neuronal network models.
A study of metaheuristic algorithms for high dimensional feature selection on microarray data
NASA Astrophysics Data System (ADS)
Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna
2017-11-01
Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms, increasing the potential for classification and diagnosis of many diseases at the gene expression level. However, numerous difficulties may affect the efficiency of machine learning algorithms, including the vast number of gene features comprised in the original data. Many of these features may be unrelated to the intended analysis. Therefore, feature selection needs to be performed during data pre-processing. Many feature selection algorithms have been developed and applied to microarray data, including metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection in microarray datasets. This study reveals that the algorithms yield interesting results with limited resources, thereby saving the computational expense of machine learning algorithms.
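As a concrete instance of a wrapper-style metaheuristic, a small genetic algorithm over feature bitmasks might look like the sketch below. The nearest-centroid fitness, uniform crossover, and bit-flip mutation are arbitrary illustrative choices, not those of any particular surveyed method:

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(mask, X, y, penalty=0.01):
    """Wrapper fitness: nearest-centroid training accuracy minus a size penalty
    that rewards small feature subsets (y must contain labels 0 and 1)."""
    if not mask.any():
        return -1.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y).mean() - penalty * mask.sum()

def ga_select(X, y, pop=20, gens=30, p_mut=0.05):
    """Genetic algorithm over boolean feature masks with elitism."""
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5
    population[0] = True                    # seed with the all-features mask
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        elite = population[np.argsort(scores)[::-1][: pop // 2]]
        # Uniform crossover between random elite parents, then bit-flip mutation.
        pairs = elite[rng.integers(0, len(elite), (pop - len(elite), 2))]
        coin = rng.random((pop - len(elite), n)) < 0.5
        children = np.where(coin, pairs[:, 0], pairs[:, 1])
        children ^= rng.random(children.shape) < p_mut
        population = np.vstack([elite, children])
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[np.argmax(scores)], scores.max()
```

On microarray data the mask length would be in the thousands, which is exactly where population-based search pays off relative to exhaustive subset enumeration.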