Array signal recovery algorithm for a single-RF-channel DBF array
NASA Astrophysics Data System (ADS)
Zhang, Duo; Wu, Wen; Fang, Da Gang
2016-12-01
An array signal recovery algorithm based on sparse signal reconstruction theory is proposed for a single-RF-channel digital beamforming (DBF) array. A single-RF-channel antenna array is a low-cost antenna array in which signals are obtained from all antenna elements by only one microwave digital receiver. The spatially parallel array signals are converted into time-sequence signals, which are then sampled by the system. The proposed algorithm uses these time-sequence samples to recover the original parallel array signals by exploiting the second-order sparse structure of the array signals. Additionally, an optimization method based on the artificial bee colony (ABC) algorithm is proposed to improve the reconstruction performance. Using the proposed algorithm, the motion compensation problem for the single-RF-channel DBF array can be solved effectively, and the angle and Doppler information for the target can be simultaneously estimated. The effectiveness of the proposed algorithms is demonstrated by the results of numerical simulations.
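The abstract does not spell out the reconstruction step, and the authors' method (second-order sparse structure plus ABC optimization) is more elaborate than this; as a generic illustration of the kind of sparse recovery such a single-channel sampling system relies on, here is a minimal orthogonal matching pursuit (OMP) sketch, where the sensing matrix `A` and sparsity `k` are assumptions:

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A @ x by orthogonal matching pursuit."""
    m, n = A.shape
    residual = y.astype(float).copy()
    support = []
    x_hat = np.zeros(n)
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat[:] = 0.0
        x_hat[support] = coef
        residual = y - A @ x_hat
    return x_hat
```

In the paper's setting, `y` would be the time-sequence samples from the single RF channel and `x` the original parallel array signals; the sketch above only shows the generic recovery principle.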
Li, Junfeng; Yang, Lin; Zhang, Jianping; Yan, Yonghong; Hu, Yi; Akagi, Masato; Loizou, Philipos C
2011-05-01
A large number of single-channel noise-reduction algorithms have been proposed, based largely on mathematical principles. Most of these algorithms, however, have been evaluated with English speech. Given the different perceptual cues used by native listeners of different languages, including tonal languages, it is of interest to examine whether there are any language effects when the same noise-reduction algorithm is used to process noisy speech in different languages. This study undertakes a comparative evaluation of various single-channel noise-reduction algorithms applied to noisy speech in three languages: Chinese, Japanese, and English. Clean speech signals (Chinese words and Japanese words) were first corrupted by three types of noise at two signal-to-noise ratios and then processed by five single-channel noise-reduction algorithms. The processed signals were finally presented to normal-hearing listeners for recognition. The intelligibility evaluation showed that the majority of noise-reduction algorithms did not improve speech intelligibility. Consistent with a previous study with the English language, the Wiener filtering algorithm produced small, but statistically significant, improvements in intelligibility for the car and white noise conditions. Significant differences between the performances of the noise-reduction algorithms across the three languages were observed.
Single-channel mixed signal blind source separation algorithm based on multiple ICA processing
NASA Astrophysics Data System (ADS)
Cheng, Xiefeng; Li, Ji
2017-01-01
Taking the separation of the fetal heart sound signal from the mixed signal acquired by an electronic stethoscope as the research background, this paper proposes a single-channel mixed-signal blind source separation algorithm based on multiple ICA processing. First, empirical mode decomposition (EMD) decomposes the single-channel mixed signal into multiple orthogonal signal components, which are then processed by ICA; the resulting independent signal components are called the independent sub-components of the mixed signal. Next, by combining these independent sub-components with the single-channel mixed signal, the single channel is expanded into multiple channels, which turns the under-determined blind source separation problem into a well-posed one. ICA processing of the expanded signals then yields an estimate of the source signal. Finally, if the separation is unsatisfactory, the previous separation result is combined with the single-channel mixed signal and the ICA processing is repeated until the desired estimate of the source signal is obtained. Simulation results show that the algorithm separates single-channel mixed physiological signals effectively.
Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio
2018-03-01
To provide a multi-stage model for calculating uncertainty in radiochromic film dosimetry with Monte Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 film were exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis, such as the standard deviation and bias, are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. In addition, another calibration film was read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis was carried out on the four images. The dose estimates of the single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented. With the aid of this model and the use of Monte Carlo techniques, the uncertainty of the dose estimates for single-channel and multichannel algorithms is estimated. The application of the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Centralized Routing and Scheduling Using Multi-Channel System Single Transceiver in 802.16d
NASA Astrophysics Data System (ADS)
Al-Hemyari, A.; Noordin, N. K.; Ng, Chee Kyun; Ismail, A.; Khatun, S.
This paper proposes a cross-layer optimized strategy that reduces the effect of interference from neighboring nodes within a mesh network. The cross-layer design relies on the routing information in the network layer and the scheduling table in the medium access control (MAC) layer. A proposed routing algorithm in the network layer is exploited to find the best route for all subscriber stations (SS), and a proposed centralized scheduling algorithm in the MAC layer is exploited to assign a time slot for each possible node transmission. The cross-layer optimized strategy uses multi-channel single-transceiver and single-channel single-transceiver systems for WiMAX mesh networks (WMNs). Each node in a WMN has a transceiver that can be tuned to any available channel to eliminate secondary interference. Among the parameters considered in the performance analysis are interference from neighboring nodes, hop count to the base station (BS), number of children per node, slot reuse, load balancing, quality of service (QoS), and node identifier (ID). Results show that the proposed algorithms significantly improve system performance in terms of length of scheduling, channel utilization ratio (CUR), system throughput, and average end-to-end transmission delay.
A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs.
Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan
2015-01-01
In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjustment of the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, a channel load forecasting algorithm based on a recursive Kalman filter that employs a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load is kept within a predefined range; therefore, channel congestion is prevented. Based on this method, the CLF-BTPC, a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey method involving the collection of floating car data along a major traffic road in Changchun City is employed. By comparing the forecast with the measured channel loads, the proposed KF-BCLF algorithm is shown to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network. PMID:26571042
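The abstract does not give the KF-BCLF equations; a minimal scalar Kalman update of the kind such a channel-load forecaster could build on is sketched below, assuming a random-walk state model with illustrative noise variances `Q` and `R` (the paper's actual filter also incorporates a multiple regression equation):

```python
def kalman_step(x, P, z, Q=1e-4, R=1e-2):
    """One predict/update cycle for a scalar random-walk channel-load model.

    x, P : previous state estimate and its variance
    z    : new channel-load measurement
    Q, R : process and measurement noise variances (assumed values)
    """
    # predict: a random walk carries the state over with added uncertainty
    x_pred, P_pred = x, P + Q
    # update: blend prediction and measurement by the Kalman gain
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

In a beacon-power controller, `x_new` would serve as the forecast of the next channel load, against which the transmission power is pre-adjusted.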
Automatic detection and classification of artifacts in single-channel EEG.
Olund, Thomas; Duun-Henriksen, Jonas; Kjaer, Troels W; Sorensen, Helge B D
2014-01-01
Ambulatory EEG monitoring can provide medical doctors with important diagnostic information without hospitalizing the patient. These recordings are, however, more exposed to noise and artifacts than clinically recorded EEG. An automatic artifact detection and classification algorithm for single-channel EEG is proposed to help identify these artifacts. Features are extracted from the EEG signal and its wavelet subbands. Subsequently, a selection algorithm is applied in order to identify the best discriminating features. A non-linear support vector machine is used to discriminate among the different artifact classes using the selected features. Single-channel (Fp1-F7) EEG recordings were obtained from experiments with 12 healthy subjects performing artifact-inducing movements. The dataset was used to construct and validate the model. Both subject-specific and generic implementations are investigated. The detection algorithm yields an average sensitivity and specificity above 95% for both the subject-specific and generic models. The classification algorithm shows a mean accuracy of 78% and 64% for the subject-specific and generic models, respectively. The classification model was additionally validated on a reference dataset, with similar results.
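The abstract does not list the selected features; as an illustration, two time-domain features commonly used for EEG artifact detection (line length and variance) can be computed per window as below, with the window segmentation and names being assumptions:

```python
import numpy as np

def window_features(x):
    """Simple time-domain features for one single-channel EEG window."""
    line_length = np.sum(np.abs(np.diff(x)))  # total point-to-point variation
    variance = np.var(x)
    return line_length, variance
```

High-amplitude movement artifacts inflate both features relative to background EEG, which is what lets a downstream classifier (an SVM in the paper) separate artifact windows from clean ones.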
Cross contrast multi-channel image registration using image synthesis for MR brain images.
Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L
2017-02-01
Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.
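The mono-modal similarity measures the framework enables (sum of squared differences and cross-correlation) reduce to a few lines each; a sketch, assuming both inputs are same-shape arrays:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: 0 for identical images."""
    return float(np.sum((a - b) ** 2))

def ncc(a, b):
    """Normalized cross-correlation: 1 for identical (non-constant) images,
    and invariant to affine intensity changes b = s*a + t with s > 0."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float(np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0)))
```

The point of the paper's synthesis step is precisely to make measures like these applicable: once a proxy image of the opposite modality exists, registration can optimize SSD or NCC instead of mutual information.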
Channel coding for underwater acoustic single-carrier CDMA communication system
NASA Astrophysics Data System (ADS)
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on the direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that UWA/SCCDMA based on RA, Turbo and LDPC coding performs well: the communication BER is below 10^-6 in an underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, which is about two orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.
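As a much simpler stand-in for the coding schemes studied, the following Monte-Carlo sketch shows the basic effect being measured: channel coding trades rate for a lower BER. It uses BPSK over an AWGN channel (not an underwater channel) and a rate-1/3 repetition code with majority voting; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, sigma = 20000, 0.7          # noise level chosen so raw errors occur
bits = rng.integers(0, 2, n_bits)
sym = 2.0 * bits - 1.0              # BPSK mapping: 0 -> -1, 1 -> +1

# uncoded: one noisy symbol per bit, hard decision at zero
rx = sym + sigma * rng.standard_normal(n_bits)
ber_uncoded = float(np.mean((rx > 0).astype(int) != bits))

# repetition-3 code: send each symbol three times, decide by majority vote
rx3 = np.repeat(sym, 3) + sigma * rng.standard_normal(3 * n_bits)
votes = (rx3 > 0).astype(int).reshape(n_bits, 3).sum(axis=1)
ber_coded = float(np.mean((votes >= 2).astype(int) != bits))
```

Repetition coding is far weaker than the RA, Turbo and LDPC codes in the paper, but the qualitative ordering (coded BER below uncoded BER at the same noise level) is the same phenomenon the simulations quantify.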
NASA Astrophysics Data System (ADS)
Valderrama, Joaquin T.; de la Torre, Angel; Van Dun, Bram
2018-02-01
Objective. Artifact reduction in electroencephalogram (EEG) signals is usually necessary to carry out data analysis appropriately. Despite the large number of denoising techniques available with a multichannel setup, there is a lack of efficient algorithms that remove (not only detect) blink-artifacts from a single-channel EEG, which is of interest in many clinical and research applications. This paper describes and evaluates the iterative template matching and suppression (ITMS), a new method proposed for detecting and suppressing the artifact associated with blink activity in a single-channel EEG. Approach. The approach of ITMS consists of (a) an iterative process in which blink-events are detected and the blink-artifact waveform of the analyzed subject is estimated, (b) generation of a signal modeling the blink-artifact, and (c) suppression of this signal from the raw EEG. The performance of ITMS is compared with the multi-window summation of derivatives within a window (MSDW) technique using both synthesized and real EEG data. Main results. Results suggest that ITMS presents an adequate performance in detecting and suppressing blink-artifacts from a single-channel EEG. When applied to the analysis of cortical auditory evoked potentials (CAEPs), ITMS provides a significant quality improvement in the resulting responses, i.e., in a cohort of 30 adults, the mean correlation coefficient improved from 0.37 to 0.65 when the blink-artifacts were detected and suppressed by ITMS. Significance. ITMS is an efficient solution to the problem of denoising blink-artifacts in single-channel EEG applications, both in clinical and research fields. The proposed ITMS algorithm is stable; automatic, since it does not require human intervention; low-invasive, because the EEG segments not contaminated by blink-artifacts remain unaltered; and easy to implement, as can be observed in the Matlab script implementing the algorithm, provided as supporting material.
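The full ITMS procedure (with its iteration) is in the paper's supporting Matlab script; the following is a heavily simplified numpy sketch of the core idea only: detect blink events, average the aligned epochs into a template, and subtract the template at each event. The threshold, window length, and peak criterion are all assumptions:

```python
import numpy as np

def suppress_blinks(eeg, threshold=3.0, half_win=25):
    """Estimate a blink template from detected peaks and subtract it."""
    x = eeg.copy()
    # crude event detection: samples above threshold that are local maxima
    peaks = [i for i in range(half_win, len(x) - half_win)
             if x[i] > threshold
             and x[i] == x[i - half_win:i + half_win + 1].max()]
    if not peaks:
        return x
    # the iterative-template idea collapsed to one pass: average aligned epochs
    template = np.mean([x[p - half_win:p + half_win + 1] for p in peaks],
                       axis=0)
    # suppress: remove the template at each detected event, leaving the
    # uncontaminated segments unaltered (the "low-invasive" property)
    for p in peaks:
        x[p - half_win:p + half_win + 1] -= template
    return x
```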
Multi-channel distributed coordinated function over single radio in wireless sensor networks.
Campbell, Carlene E-A; Loo, Kok-Keong Jonathan; Gemikonakli, Orhan; Khan, Shafiullah; Singh, Dhananjay
2011-01-01
Multi-channel assignments are becoming the solution of choice to improve performance of single-radio wireless networks. Multi-channel operation allows wireless networks to assign different channels to different nodes in real-time transmission. In this paper, we propose a new approach, Multi-channel Distributed Coordinated Function (MC-DCF), which takes advantage of multi-channel assignment. The backoff algorithm of the IEEE 802.11 distributed coordination function (DCF) was modified to invoke channel switching, based on threshold criteria, in order to improve the overall throughput for wireless sensor networks (WSNs) over 802.11 networks. We present simulation experiments that investigate the characteristics of multi-channel communication in wireless sensor networks using the NS2 platform. Nodes use only a single radio and perform channel switching only after a specified threshold is reached; a single radio can work on only one channel at any given time. All nodes initiate constant bit rate (CBR) streams towards the receiving nodes. In this work, we studied the impact of non-overlapping channels in the 2.4 GHz band on CBR streams, node density, source nodes sending data directly to the sink, and signal strength, by varying the distances between the sensor nodes and the operating frequencies of the radios with different data rates. We showed that multi-channel enhancement using our proposed algorithm provides significant improvement in terms of throughput, packet delivery ratio and delay. This technique can be considered for future WSN use in 802.11 networks, especially as IEEE 802.11n becomes popular, which may prevent 802.15.4 networks from operating effectively in the 2.4 GHz band. PMID:22346614
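The modified DCF backoff is specified in the paper itself; as a toy sketch of the threshold-triggered switching idea over the 2.4 GHz non-overlapping channels (1, 6, 11), something like the following could stand in, with the class, the failure-count metric, and the threshold all being assumptions rather than MC-DCF's actual criteria:

```python
NON_OVERLAPPING = (1, 6, 11)  # non-overlapping 802.11 channels at 2.4 GHz

class SingleRadioNode:
    """Single-radio node that hops channels when contention is persistent."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.channel_idx = 0
        self.failed_attempts = 0

    @property
    def channel(self):
        return NON_OVERLAPPING[self.channel_idx]

    def on_transmission(self, success):
        if success:
            self.failed_attempts = 0
            return
        self.failed_attempts += 1
        if self.failed_attempts >= self.threshold:
            # threshold reached: switch to the next non-overlapping channel
            self.channel_idx = (self.channel_idx + 1) % len(NON_OVERLAPPING)
            self.failed_attempts = 0
```

Because the radio is single, the switch is sequential in time, exactly the constraint the abstract describes: one channel at any given moment, with switching only after the threshold criterion fires.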
NASA Technical Reports Server (NTRS)
Wolf, Michael
2012-01-01
A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel and how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate for a single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of each channel's variance.
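The blending step described above is the standard inverse-variance-weighted combination of independent Gaussian estimates; in code form, a sketch (function and argument names are assumptions):

```python
def blend_estimates(means, variances):
    """Combine independent Gaussian estimates into a single Gaussian,
    weighting each channel's estimate by the inverse of its variance.

    Returns (blended_mean, blended_variance)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * m for w, m in zip(weights, means)) / total
    return mean, 1.0 / total
```

A low-variance (well-calibrated) channel dominates the blend, and the blended variance is always smaller than any individual channel's variance, which is what gives the combined estimate its certainty value.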
NASA Astrophysics Data System (ADS)
Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong
2018-01-01
In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. As a popular color model, the Hue-Saturation-Intensity (HSI) model is commonly used in image processing. A new single-channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, in which the color components of the original color image are converted to HSI and the logistic map is employed to diffuse the relationship of pixels in the color components. Subsequently, the quantum Fourier transform is exploited to fulfill the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single-channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
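The quantum Fourier transform stage cannot be reproduced classically here, but the logistic-map diffusion stage is a classical ingredient; an illustrative sketch that XORs a logistic-map keystream with pixel bytes follows, where the parameters `r` and `x0` play the role of the key and all names are assumptions (the paper's diffusion acts on HSI components, not raw bytes):

```python
def logistic_keystream(n, r=3.99, x0=0.6):
    """Generate n pseudo-random bytes from the logistic map x <- r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_diffuse(pixels, r=3.99, x0=0.6):
    """XOR diffusion; applying it twice with the same key restores the input."""
    ks = logistic_keystream(len(pixels), r, x0)
    return [p ^ k for p, k in zip(pixels, ks)]
```

Because XOR is its own inverse, decryption is the same operation with the same key, and the extreme sensitivity of the logistic map to `x0` is what makes a wrong key useless.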
Sniffer Channel Selection for Monitoring Wireless LANs
NASA Astrophysics Data System (ADS)
Song, Yuan; Chen, Xian; Kim, Yoo-Ah; Wang, Bing; Chen, Guanling
Wireless sniffers are often used to monitor APs in wireless LANs (WLANs) for network management, fault detection, traffic characterization, and optimizing deployment. It is cost effective to deploy single-radio sniffers that can monitor multiple nearby APs. However, since nearby APs often operate on orthogonal channels, a sniffer needs to switch among multiple channels to monitor its nearby APs. In this paper, we formulate and solve two optimization problems on sniffer channel selection. Both problems require that each AP be monitored by at least one sniffer. In addition, one optimization problem requires minimizing the maximum number of channels that a sniffer listens to, and the other requires minimizing the total number of channels that the sniffers listen to. We propose a novel LP-relaxation based algorithm, and two simple greedy heuristics for the above two optimization problems. Through simulation, we demonstrate that all the algorithms are effective in achieving their optimization goals, and the LP-based algorithm outperforms the greedy heuristics.
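The LP-relaxation algorithm is the paper's main contribution, but the greedy heuristic for minimizing the total number of (sniffer, channel) assignments is essentially greedy set cover and fits in a few lines; a sketch, with the data layout assumed:

```python
def greedy_sniffer_channels(coverage):
    """coverage: {(sniffer, channel): set of APs hearable on that channel}.

    Greedily pick (sniffer, channel) pairs until every AP is monitored;
    returns the list of chosen pairs."""
    uncovered = set().union(*coverage.values())
    picks = []
    while uncovered:
        # choose the pair that covers the most still-uncovered APs
        best = max(coverage, key=lambda sc: len(coverage[sc] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining APs are not hearable by any sniffer
        picks.append(best)
        uncovered -= coverage[best]
    return picks
```

This targets the "minimize the total number of channels the sniffers listen to" objective; the min-max variant (minimizing the largest per-sniffer channel count) needs a different selection rule.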
The Error Structure of the SMAP Single and Dual Channel Soil Moisture Retrievals
NASA Astrophysics Data System (ADS)
Dong, Jianzhi; Crow, Wade T.; Bindlish, Rajat
2018-01-01
Knowledge of the temporal error structure for remotely sensed surface soil moisture retrievals can improve our ability to exploit them for hydrologic and climate studies. This study employs a triple collocation analysis to investigate both the total variance and temporal autocorrelation of errors in Soil Moisture Active and Passive (SMAP) products generated from two separate soil moisture retrieval algorithms, the vertically polarized brightness temperature-based single-channel algorithm (SCA-V, the current baseline SMAP algorithm) and the dual-channel algorithm (DCA). A key assumption made in SCA-V is that real-time vegetation opacity can be accurately captured using only a climatology for vegetation opacity. Results demonstrate that while SCA-V generally outperforms DCA, SCA-V can produce larger total errors when this assumption is significantly violated by interannual variability in vegetation health and biomass. Furthermore, larger autocorrelated errors in SCA-V retrievals are found in areas with relatively large vegetation opacity deviations from climatological expectations. This implies that a significant portion of the autocorrelated error in SCA-V is attributable to the violation of its vegetation opacity climatology assumption and suggests that utilizing a real (as opposed to climatological) vegetation opacity time series in the SCA-V algorithm would reduce the magnitude of autocorrelated soil moisture retrieval errors.
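The total-error-variance part of a triple collocation analysis rests on a simple covariance identity: for three products observing the same signal with independent, zero-mean errors, the error variance of x is the expected product of its differences with the other two. A numerical sketch with synthetic data (all noise levels are assumptions, and the real SMAP analysis additionally handles rescaling and autocorrelation):

```python
import numpy as np

def tc_error_variance(x, y, z):
    """Estimate the error variance of x via triple collocation, assuming
    x, y, z observe the same signal with independent, zero-mean errors."""
    xa, ya, za = x - x.mean(), y - y.mean(), z - z.mean()
    return float(np.mean((xa - ya) * (xa - za)))

rng = np.random.default_rng(1)
truth = rng.standard_normal(200_000)
x = truth + 0.10 * rng.standard_normal(truth.size)  # e.g., SCA-V retrievals
y = truth + 0.20 * rng.standard_normal(truth.size)  # e.g., DCA retrievals
z = truth + 0.30 * rng.standard_normal(truth.size)  # independent third product
est = tc_error_variance(x, y, z)
```

Since the cross terms involving independent errors average to zero, `est` converges to the true error variance of `x` (here 0.10 squared = 0.01) as the sample grows.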
Single-cell copy number variation detection
2011-01-01
Detection of chromosomal aberrations from a single cell by array comparative genomic hybridization (single-cell array CGH), instead of from a population of cells, is an emerging technique. However, such detection is challenging because of the genome artifacts and the DNA amplification process inherent to the single cell approach. Current normalization algorithms result in inaccurate aberration detection for single-cell data. We propose a normalization method based on channel, genome composition and recurrent genome artifact corrections. We demonstrate that the proposed channel clone normalization significantly improves the copy number variation detection in both simulated and real single-cell array CGH data. PMID:21854607
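As a minimal illustration of the channel-correction idea only (not the authors' full channel, genome-composition and recurrent-artifact model), log-ratios can be median-centered per channel so that the unaberrated majority of clones sits at zero before aberration calling; names are assumptions:

```python
import numpy as np

def median_center(log_ratios):
    """Remove a per-channel additive bias by subtracting the median,
    so the unaberrated majority of clones sits at log-ratio 0."""
    lr = np.asarray(log_ratios, dtype=float)
    return lr - np.median(lr)
```

The median (rather than the mean) is used so that a minority of genuinely aberrated clones does not drag the baseline with it.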
Color enhancement and image defogging in HSI based on Retinex model
NASA Astrophysics Data System (ADS)
Gao, Han; Wei, Ping; Ke, Jun
2015-08-01
Retinex is a luminance perceptual algorithm based on color consistency. It has a good performance in color enhancement. But in some cases the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. In contrast to other Retinex algorithms, we implement the Retinex algorithms in HSI (Hue, Saturation, Intensity) color space, and use a parameter α to improve the quality of the image. Moreover, the algorithms presented in this paper have a good performance in image defogging. In contrast to traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed using a Gaussian center-surround image filter to get the light information, which should be removed from the intensity channel. After that, we subtract the light information from the intensity channel to obtain the reflection image, which includes only the attributes of the objects in the image. Using the reflection image and the parameter α, which is an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides a better performance on the color deviation problem and in image defogging, a visible improvement in image quality for human contrast perception is also observed.
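The SSR step on the intensity channel reduces to subtracting a log-blurred version of the channel from its log. A simplified one-dimensional sketch follows (the Gaussian width, the small offset, and the way α scales the result are assumptions; a real implementation filters the 2-D intensity image):

```python
import numpy as np

def ssr_1d(intensity, sigma=5.0, alpha=1.0):
    """Single-scale Retinex on a 1-D intensity signal:
    reflectance = alpha * (log I - log(Gaussian * I))."""
    t = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    # the center-surround estimate of the illumination
    illumination = np.convolve(intensity, kernel, mode="same")
    # subtracting log illumination leaves (a scaled log of) the reflectance
    return alpha * (np.log(intensity + 1e-6) - np.log(illumination + 1e-6))
```

A uniformly lit flat region yields reflectance near zero, while edges and fine detail (which the blur smooths away) survive, which is the mechanism behind both the enhancement and the defogging behavior.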
Maximum likelihood positioning algorithm for high-resolution PET scanners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gross-Weege, Nicolas, E-mail: nicolas.gross-weege@pmi.rwth-aachen.de, E-mail: schulz@pmi.rwth-aachen.de; Schug, David; Hallen, Patrick
2016-06-15
Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs with an analytical model, the PDFs of the proposed ML algorithm are generated from measured data assuming a single-gamma-interaction model. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II^D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithms for incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML algorithm is less prone to missing channel information. A likelihood filter visibly improved the image quality, i.e., the peak-to-valley ratio increased up to a factor of 3 for 2-mm-diameter phantom rods by rejecting 87% of the coincidences. A relative improvement of the energy resolution of up to 12.8% was also measured, rejecting 91% of the coincidences. Conclusions: The developed ML algorithm increases the sensitivity by correctly handling missing channel information without influencing energy resolution or image quality. Furthermore, the authors showed that energy resolution and image quality can be improved substantially by rejecting events that do not comply well with the single-gamma-interaction model, such as Compton-scattered events.
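The contrast between the two positioning approaches can be sketched in a few lines: COG takes the centroid of the measured light distribution, while the ML approach selects the crystal whose stored PDF best explains the measurement. The sketch below uses a Poisson-style log-likelihood and made-up PDFs, not the scanner's calibrated, data-derived ones:

```python
import numpy as np

def cog_position(light, positions):
    """Center of gravity of the measured light distribution."""
    light = np.asarray(light, float)
    return float(np.sum(light * positions) / np.sum(light))

def ml_crystal(light, pdfs):
    """Pick the crystal whose expected light PDF maximizes the
    Poisson-style log-likelihood sum_i n_i * log(p_i)."""
    light = np.asarray(light, float)
    ll = [np.sum(light * np.log(np.asarray(p) + 1e-12)) for p in pdfs]
    return int(np.argmax(ll))
```

One practical advantage the paper exploits falls out directly: if a channel is missing, the ML sum simply omits that term, whereas the COG centroid is biased by the absent light.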
Zhang, Jia-Hua; Li, Xin; Yao, Feng-Mei; Li, Xian-Hua
2009-08-01
Land surface temperature (LST) is an important parameter in the study of the exchange of matter and energy between the land surface and the atmosphere in land surface physical processes at regional and global scales. Many applications of remotely sensed satellite data, such as monitoring of drought, high temperature, forest fire, earthquakes, hydrology, and vegetation, require accurate, quantitative LST, and models of global circulation and regional climate also need LST as an input parameter. Therefore, the retrieval of LST using remote sensing technology has become one of the key tasks in quantitative remote sensing studies. Within the spectrum, the thermal infrared (TIR, 3-15 μm) and microwave (1 mm-1 m) bands are important for LST retrieval. In the present paper, firstly, several methods for estimating LST on the basis of thermal infrared (TIR) remote sensing are reviewed, i.e., LST measured with a ground-based infrared thermometer; LST retrieval with the mono-window algorithm (MWA), single-channel algorithm (SCA), split-window technique (SWT), multi-channel algorithm (MCA), single-channel multi-angle algorithm, and multi-channel multi-angle algorithm; and retrieval of land surface component temperatures from thermal infrared satellite observations. Secondly, the status of research on land surface emissivity (ε) is presented. Thirdly, because microwave remote sensing, unlike thermal infrared sensing, allows LST retrieval under all weather conditions, recently developed LST retrieval methods based on passive microwave remotely sensed data are also introduced. Finally, the main merits and shortcomings of the different kinds of LST retrieval methods are discussed.
Wang, Zhirui; Xu, Jia; Huang, Zuzhen; Zhang, Xudong; Xia, Xiang-Gen; Long, Teng; Bao, Qian
2016-03-16
To detect and estimate ground slowly moving targets in airborne single-channel synthetic aperture radar (SAR), a road-aided ground moving target indication (GMTI) algorithm is proposed in this paper. First, the road area is extracted from a focused SAR image based on radar vision. Second, after stationary clutter suppression in the range-Doppler domain, a moving target is detected and located in the image domain via the watershed method. The target's position on the road as well as its radial velocity can be determined according to the target's offset distance and traffic rules. Furthermore, the target's azimuth velocity is estimated based on the road slope obtained via polynomial fitting. Compared with the traditional algorithms, the proposed method can effectively cope with slowly moving targets partly submerged in a stationary clutter spectrum. In addition, the proposed method can be easily extended to a multi-channel system to further improve the performance of clutter suppression and motion estimation. Finally, the results of numerical experiments are provided to demonstrate the effectiveness of the proposed algorithm.
Estimation of Boreal Forest Biomass Using Spaceborne SAR Systems
NASA Technical Reports Server (NTRS)
Saatchi, Sassan; Moghaddam, Mahta
1995-01-01
In this paper, we report on the use of a semiempirical algorithm derived from a two-layer radar backscatter model for forest canopies. The model stratifies the forest canopy into crown and stem layers and separates the structural and biometric attributes of the canopy. The structural parameters are estimated by training the model with polarimetric SAR (synthetic aperture radar) data acquired over homogeneous stands with known above-ground biomass. Given the structural parameters, the semiempirical algorithm has four remaining parameters, crown biomass, stem biomass, surface soil moisture, and surface rms height, that can be estimated from at least four independent SAR measurements. The algorithm has been used to generate biomass maps over entire images acquired by the JPL AIRSAR and SIR-C SAR systems. The semiempirical algorithms are then modified for use with single-frequency radar systems such as ERS-1, JERS-1, and Radarsat. The accuracy of biomass estimation from single-channel radars is compared with the case when the channels are used together synergistically or in a polarimetric system.
Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold
NASA Astrophysics Data System (ADS)
Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph
2018-05-01
In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately by 2/π (−1.96 dB) compared to an ideal infinite-resolution converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
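The quoted 2/π (−1.96 dB) low-SNR penalty can be checked with a small Monte Carlo sketch: estimate a weak DC level from Gaussian-noise samples with an ideal estimator (the sample mean) and with a 1-bit estimator that inverts the probit of the fraction of positive samples. The variance ratio approaches π/2 at low SNR. All parameter values below are illustrative.

```python
import numpy as np
from statistics import NormalDist  # stdlib standard-normal inverse CDF

rng = np.random.default_rng(0)
theta, sigma, N, trials = 0.05, 1.0, 2000, 4000   # weak signal: low-SNR regime
inv_cdf = NormalDist().inv_cdf

full, onebit = [], []
for _ in range(trials):
    x = theta + sigma * rng.standard_normal(N)
    full.append(x.mean())                  # ideal infinite-resolution estimate
    p = (x > 0).mean()                     # 1-bit output: fraction of +1 decisions
    onebit.append(sigma * inv_cdf(p))      # invert P(x > 0) = Phi(theta / sigma)

ratio = np.var(onebit) / np.var(full)      # ≈ pi/2 ≈ 1.57 at low SNR
```

The reciprocal of this variance ratio is the 2/π efficiency factor cited in the abstract; note the sketch assumes the threshold is known (zero), which is exactly the assumption the paper relaxes.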
Real-time image dehazing using local adaptive neighborhoods and dark-channel-prior
NASA Astrophysics Data System (ADS)
Valderrama, Jesus A.; Díaz-Ramírez, Víctor H.; Kober, Vitaly; Hernandez, Enrique
2015-09-01
A real-time algorithm for single-image dehazing is presented. The algorithm is based on the calculation of local neighborhoods of a hazy image inside a moving window, where the neighborhoods are constructed by computing rank-order statistics. Next, the dark-channel-prior approach is applied to the local neighborhoods to estimate the transmission function of the scene. With the suggested approach there is no need to apply a refinement algorithm, such as soft matting, to the estimated transmission. To achieve high-rate signal processing, the proposed algorithm is implemented by exploiting massive parallelism on a graphics processing unit (GPU). Computer simulations are carried out to test the performance of the proposed algorithm in terms of dehazing efficiency and processing speed. These tests are performed using several synthetic and real images. The obtained results are analyzed and compared with those of existing dehazing algorithms.
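The dark-channel-prior transmission estimate this record builds on can be sketched as follows. This is the standard formulation (per-pixel RGB minimum followed by a local window minimum), with a naive minimum filter standing in for the paper's rank-order neighborhoods; `omega` and the patch size are conventional values, not the paper's.

```python
import numpy as np

def dark_channel(img, patch=7):
    """Per-pixel min over RGB, then a local minimum filter: the 'dark channel'."""
    m = img.min(axis=2)
    pad = patch // 2
    mp = np.pad(m, pad, mode='edge')
    out = np.empty_like(m)
    for i in range(m.shape[0]):
        for j in range(m.shape[1]):
            out[i, j] = mp[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, A, omega=0.95, patch=7):
    """t(x) = 1 - omega * dark_channel(I / A): the dark-channel-prior estimate."""
    return 1.0 - omega * dark_channel(img / A, patch)

hazy = np.full((10, 10, 3), 0.8)         # uniform hazy patch, atmospheric light A = 1
t = estimate_transmission(hazy, A=1.0)   # 1 - 0.95 * 0.8 = 0.24 everywhere
```

The double loop is the part that both papers replace with parallel or streaming hardware-friendly structures; the arithmetic per pixel is otherwise trivial.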
FPGA implementation of image dehazing algorithm for real time applications
NASA Astrophysics Data System (ADS)
Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.
2017-09-01
Weather degradation such as haze, fog, and mist severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomenon, which makes the problem nontrivial. Dehazing is an essential preprocessing stage in applications such as long-range imaging, border security, and intelligent transportation systems; however, these applications require low latency from the preprocessing block. In this work, the single-image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although the conventional single-image dark channel prior algorithm is computationally expensive, it yields impressive results. Moreover, a two-stage image dehazing architecture is introduced, wherein the dark channel and airlight are estimated in the first stage, whereas the transmission map and intensity restoration are computed in the second stage. The algorithm is implemented using Xilinx Vivado software and validated on a Xilinx ZC702 development board, which contains an Artix-7-equivalent field programmable gate array (FPGA) and an ARM Cortex-A9 dual-core processor. Additionally, a high-definition multimedia interface (HDMI) has been incorporated for video feed and display purposes. The results show that the dehazing algorithm attains 29 frames per second for an image resolution of 1920×1080, which is suitable for real-time applications. The design utilizes 9 18K BRAMs, 97 DSP48 slices, 6508 flip-flops, and 8159 LUTs.
Zhang, Guosong; Hovem, Jens M.; Dong, Hefeng
2012-01-01
Underwater communication channels are often complicated, and in particular multipath propagation may cause intersymbol interference (ISI). This paper addresses how to remove ISI, and evaluates the performance of three different receiver structures and their implementations. Using real data collected in a high-frequency (10–14 kHz) field experiment, the receiver structures are evaluated by off-line data processing. The three structures are multichannel decision feedback equalizer (DFE), passive time reversal receiver (passive-phase conjugation (PPC) with a single channel DFE), and the joint PPC with multichannel DFE. In sparse channels, dominant arrivals represent the channel information, and the matching pursuit (MP) algorithm which exploits the channel sparseness has been investigated for PPC processing. In the assessment, it is found that: (1) it is advantageous to obtain spatial gain using the adaptive multichannel combining scheme; and (2) the MP algorithm improves the performance of communications using PPC processing. PMID:22438755
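The matching pursuit (MP) step used for the PPC processing above can be sketched as a greedy search over delayed copies of a probe sequence: at each iteration the delay most correlated with the residual is picked and its contribution subtracted, which recovers the dominant arrivals of a sparse channel. The probe, delays, and tap values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
probe = rng.choice([-1.0, 1.0], size=64)                       # transmitted probe
D = np.stack([np.roll(probe, d) for d in range(20)], axis=1)   # delayed copies
y = 1.0 * np.roll(probe, 3) + 0.5 * np.roll(probe, 10)         # sparse 2-tap channel

def matching_pursuit(y, D, n_taps):
    """Greedy MP: repeatedly pick the dictionary column (delay) most
    correlated with the residual and subtract its contribution."""
    h = np.zeros(D.shape[1])
    r = y.astype(float).copy()
    for _ in range(n_taps):
        corr = D.T @ r
        k = int(np.argmax(np.abs(corr)))
        a = corr[k] / (D[:, k] @ D[:, k])
        h[k] += a
        r -= a * D[:, k]
    return h

h = matching_pursuit(y, D, n_taps=2)   # dominant arrivals at delays 3 and 10
```

Stopping after a few taps is what exploits the channel sparseness: the weak, noisy tail of the impulse response is never included in the PPC reference.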
Warren, Kristen M; Harvey, Joshua R; Chon, Ki H; Mendelson, Yitzhak
2016-03-07
Photoplethysmographic (PPG) waveforms are used to acquire pulse rate (PR) measurements from pulsatile arterial blood volume. PPG waveforms are highly susceptible to motion artifacts (MA), limiting the implementation of PR measurements in mobile physiological monitoring devices. Previous studies have shown that multichannel photoplethysmograms can successfully acquire diverse signal information during simple, repetitive motion, leading to differences in motion tolerance across channels. In this paper, we investigate the performance of a custom-built multichannel forehead-mounted photoplethysmographic sensor under a variety of intense motion artifacts. We introduce an advanced multichannel template-matching algorithm that chooses the channel with the least motion artifact to calculate PR for each time instant. We show that for a wide variety of random motion, channels respond differently to motion artifacts, and the multichannel estimate outperforms single-channel estimates in terms of motion tolerance, signal quality, and PR errors. We have acquired 31 data sets consisting of PPG waveforms corrupted by random motion and show that the accuracy of PR measurements achieved was increased by up to 2.7 bpm when the multichannel-switching algorithm was compared to individual channels. The percentage of PR measurements with error ≤ 5 bpm during motion increased by 18.9% when the multichannel switching algorithm was compared to the mean PR from all channels. Moreover, our algorithm enables automatic selection of the best signal fidelity channel at each time point among the multichannel PPG data.
In-flight automatic detection of vigilance states using a single EEG channel.
Sauvet, F; Bougard, C; Coroenne, M; Lely, L; Van Beers, P; Elbaz, M; Guillard, M; Leger, D; Chennaoui, M
2014-12-01
Sleepiness and fatigue can reach particularly high levels during long-haul overnight flights. Under these conditions, voluntary or even involuntary sleep periods may occur, increasing the risk of accidents. The aim of this study was to assess the performance of an in-flight automatic detection system for low-vigilance states using a single electroencephalogram channel. Fourteen healthy pilots voluntarily wore a miniaturized brain electrical activity recording device during long-haul flights (10 ± 2.0 h, Atlantic 2 and Falcon 50 M, French naval aviation). No subject was disturbed by the equipment. Seven pilots experienced at least one period of voluntary (26.8 ± 8.0 min, n = 4) or involuntary sleep (N1 sleep stage, 26.6 ± 18.7 s, n = 7) during the flight. Automatic classification (wake/sleep) by the algorithm was made for 10-s epochs (O1-M2 or C3-M2 channel), based on comparison of means to detect changes in α, β, and θ relative power, the ratio [(α+θ)/β], or fuzzy logic fusion (α, β). Pertinence and prognostic value of the algorithm were determined using epoch-by-epoch comparison with visual scoring (two blinded readers, AASM rules). The best concordance between automatic detection and visual scoring was observed for the O1-M2 channel, using the ratio [(α+θ)/β] (98.3 ± 4.1% correct detection, K = 0.94 ± 0.07, with a 0.04 ± 0.04 false positive rate and a 0.87 ± 0.10 true positive rate). Our results confirm the efficiency of a miniaturized single-channel electroencephalographic recording device, associated with an automatic detection algorithm, for detecting low-vigilance states during real flights.
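The [(α+θ)/β] feature used by the detection algorithm can be computed per 10-s epoch from a windowed periodogram; a rising ratio flags lowered vigilance. The band edges and the two synthetic test epochs below are conventional/illustrative, not taken from the study.

```python
import numpy as np

def band_ratio(epoch, fs):
    """(alpha + theta) / beta power ratio of one epoch via a Hann-windowed periodogram."""
    n = len(epoch)
    spec = np.abs(np.fft.rfft(epoch * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = lambda lo, hi: spec[(freqs >= lo) & (freqs < hi)].sum()
    return (band(8, 12) + band(4, 8)) / band(12, 30)   # (alpha + theta) / beta

fs = 100.0
t = np.arange(int(10 * fs)) / fs        # one 10-s epoch
# slow (theta/alpha) rhythms dominate when drowsy; beta dominates when alert
drowsy = np.sin(2*np.pi*6*t) + np.sin(2*np.pi*10*t) + 0.2*np.sin(2*np.pi*20*t)
alert = 0.2*np.sin(2*np.pi*6*t) + 0.2*np.sin(2*np.pi*10*t) + np.sin(2*np.pi*20*t)
```

The study's classifier then thresholds this ratio (via comparison of means) epoch by epoch; the spectral feature itself is this simple.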
GPU Acceleration of DSP for Communication Receivers.
Gunther, Jake; Gunther, Hyrum; Moon, Todd
2017-09-01
Graphics processing unit (GPU) implementations of signal processing algorithms can outperform CPU-based implementations. This paper describes the GPU implementation of several algorithms encountered in a wide range of high-data-rate communication receivers, including filters, multirate filters, numerically controlled oscillators, and multi-stage digital down converters. These structures are tested by processing the 20 MHz wide FM radio band (88-108 MHz). Two receiver structures are explored: a single-channel receiver and a filter bank channelizer. Both run in real time on an NVIDIA GeForce GTX 1080 graphics card.
A channel estimation scheme for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen
2017-08-01
To balance the performance of time-domain least squares (LS) channel estimation against its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) is presented. The approach transforms the MIMO-OFDM channel estimation problem into a set of simple single-input single-output OFDM (SISO-OFDM) channel estimation problems, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the proposed method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and is nearly optimal.
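The decoupling idea can be sketched for the noiseless case: with time-orthogonal pilots each Tx antenna transmits in turn, so every Tx-Rx pair reduces to a per-subcarrier scalar LS division and no matrix pseudo-inverse appears. Dimensions and the pilot design below are illustrative.

```python
import numpy as np

def ls_channel_estimate(Y, X):
    """Per-subcarrier LS estimate H_hat[k] = Y[k] / X[k] for one Tx-Rx pair."""
    return Y / X

rng = np.random.default_rng(1)
K = 64                                                   # subcarriers
# random 2x2 MIMO frequency response, one complex gain per subcarrier
H = (rng.standard_normal((2, 2, K)) + 1j * rng.standard_normal((2, 2, K))) / np.sqrt(2)
X = np.exp(1j * 2 * np.pi * rng.random(K))               # unit-modulus pilot symbols

H_hat = np.empty_like(H)
for tx in range(2):                                      # Tx antennas pilot in turn
    for rx in range(2):
        Y = H[rx, tx] * X                                # noiseless received pilot
        H_hat[rx, tx] = ls_channel_estimate(Y, X)        # scalar division per bin
```

With noise, the same per-bin estimates would typically be smoothed across subcarriers (e.g., by the LMMSE filtering the abstract mentions), but the MIMO-to-SISO decoupling is unchanged.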
Accurately tracking single-cell movement trajectories in microfluidic cell sorting devices.
Jeong, Jenny; Frohberg, Nicholas J; Zhou, Enlu; Sulchek, Todd; Qiu, Peng
2018-01-01
Microfluidics are routinely used to study cellular properties, including the efficient quantification of single-cell biomechanics and label-free cell sorting based on the biomechanical properties, such as elasticity, viscosity, stiffness, and adhesion. Both quantification and sorting applications require optimal design of the microfluidic devices and mathematical modeling of the interactions between cells, fluid, and the channel of the device. As a first step toward building such a mathematical model, we collected video recordings of cells moving through a ridged microfluidic channel designed to compress and redirect cells according to cell biomechanics. We developed an efficient algorithm that automatically and accurately tracked the cell trajectories in the recordings. We tested the algorithm on recordings of cells with different stiffness, and showed the correlation between cell stiffness and the tracked trajectories. Moreover, the tracking algorithm successfully picked up subtle differences of cell motion when passing through consecutive ridges. The algorithm for accurately tracking cell trajectories paves the way for future efforts of modeling the flow, forces, and dynamics of cell properties in microfluidics applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C; Adcock, A; Azevedo, S
2010-12-28
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
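A discrete analogue of the GCV-selected smoothing spline can be sketched with a Whittaker-style penalized least squares fit: a second-difference roughness penalty whose weight λ is chosen by minimizing the GCV score. This is a simplified stand-in for the Hutchinson-deHoog spline algorithm, not the authors' code; the λ grid and test signal are illustrative.

```python
import numpy as np

def whittaker_gcv(y, lambdas):
    """Penalized LS smoother with a second-difference penalty; the smoothing
    level is picked by minimizing GCV(lam) = n * RSS / (n - tr(A))^2."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)                # second-difference operator
    best_gcv, best_fit, best_lam = np.inf, None, None
    for lam in lambdas:
        A = np.linalg.inv(np.eye(n) + lam * D.T @ D)   # hat (smoother) matrix
        yhat = A @ y
        gcv = n * np.sum((y - yhat) ** 2) / (n - np.trace(A)) ** 2
        if gcv < best_gcv:
            best_gcv, best_fit, best_lam = gcv, yhat, lam
    return best_fit, best_lam

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
truth = np.sin(2 * np.pi * x)
y = truth + 0.3 * rng.standard_normal(100)             # noisy channel samples
yhat, lam = whittaker_gcv(y, [0.1, 1.0, 10.0, 100.0, 1000.0])
```

The trace of the hat matrix plays the role of effective degrees of freedom, which is exactly the quantity GCV trades off against the residual sum of squares.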
Comparing Binaural Pre-processing Strategies III
Warzybok, Anna; Ernst, Stephan M. A.
2015-01-01
A comprehensive evaluation of eight signal pre-processing strategies, including directional microphones, coherence filters, single-channel noise reduction, binaural beamformers, and their combinations, was undertaken with normal-hearing (NH) and hearing-impaired (HI) listeners. Speech reception thresholds (SRTs) were measured in three noise scenarios (multitalker babble, cafeteria noise, and single competing talker). Predictions of three common instrumental measures were compared with the general perceptual benefit caused by the algorithms. The individual SRTs measured without pre-processing and individual benefits were objectively estimated using the binaural speech intelligibility model. Ten listeners with NH and 12 HI listeners participated. The participants varied in age and pure-tone threshold levels. Although HI listeners required a better signal-to-noise ratio to obtain 50% intelligibility than listeners with NH, no differences in SRT benefit from the different algorithms were found between the two groups. With the exception of single-channel noise reduction, all algorithms showed an improvement in SRT of between 2.1 dB (in cafeteria noise) and 4.8 dB (in single competing talker condition). Model predictions with binaural speech intelligibility model explained 83% of the measured variance of the individual SRTs in the no pre-processing condition. Regarding the benefit from the algorithms, the instrumental measures were not able to predict the perceptual data in all tested noise conditions. The comparable benefit observed for both groups suggests a possible application of noise reduction schemes for listeners with different hearing status. Although the model can predict the individual SRTs without pre-processing, further development is necessary to predict the benefits obtained from the algorithms at an individual level. PMID:26721922
DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG.
Supratak, Akara; Dong, Hao; Wu, Chao; Guo, Yike
2017-11-01
This paper proposes a deep learning model, named DeepSleepNet, for automatic sleep stage scoring based on raw single-channel EEG. Most existing methods rely on hand-engineered features, which require prior knowledge of sleep analysis. Only a few of them encode into the extracted features the temporal information, such as transition rules, that is important for identifying the next sleep stage. In the proposed model, we utilize convolutional neural networks to extract time-invariant features, and bidirectional long short-term memory to learn transition rules among sleep stages automatically from EEG epochs. We implement a two-step training algorithm to train our model efficiently. We evaluated our model using different single-channel EEGs (F4-EOG (left), Fpz-Cz, and Pz-Oz) from two public sleep data sets that have different properties (e.g., sampling rate) and scoring standards (AASM and R&K). The results showed that our model achieved overall accuracy and macro F1-score (MASS: 86.2%/81.7; Sleep-EDF: 82.0%/76.9) similar to the state-of-the-art methods (MASS: 85.9%/80.5; Sleep-EDF: 78.9%/73.7) on both data sets. This demonstrated that, without changing the model architecture and the training algorithm, our model could automatically learn features for sleep stage scoring from different raw single-channel EEGs from different data sets without utilizing any hand-engineered features.
Efficient universal quantum channel simulation in IBM's cloud quantum computer
NASA Astrophysics Data System (ADS)
Wei, Shi-Jie; Xin, Tao; Long, Gui-Lu
2018-07-01
The study of quantum channels is an important field with a wide range of promised applications, because any physical process can be represented as a quantum channel that transforms an initial state into a final state. Inspired by the method of performing non-unitary operators by linear combinations of unitary operations, we proposed a quantum algorithm for the simulation of the universal single-qubit channel, described by a convex combination of "quasi-extreme" channels corresponding to four Kraus operators, which is scalable to arbitrary higher dimensions. We demonstrated the whole algorithm experimentally using the universal IBM cloud-based quantum computer and studied the properties of different qubit quantum channels. We illustrated the quantum capacity of general qubit quantum channels, which quantifies the amount of quantum information that can be protected. The behavior of quantum capacity in different channels revealed which types of noise processes can support information transmission and which types are too destructive to protect information. There was general agreement between the theoretical predictions and the experiments, which strongly supports our method. By realizing the arbitrary qubit channel, this work provides a universally accepted way to explore various properties of quantum channels and a novel prospect for quantum communication.
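The Kraus-operator action of a qubit channel, which the linear-combination-of-unitaries method implements on hardware, can be checked classically in a few lines. The depolarizing channel below is a standard four-Kraus-operator example, not necessarily one of the paper's "quasi-extreme" channels.

```python
import numpy as np

def apply_channel(rho, kraus):
    """rho' = sum_k K rho K^dagger: action of a channel given its Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def depolarizing_kraus(p):
    """Four Kraus operators of the qubit depolarizing channel with probability p."""
    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1, -1])
    return [np.sqrt(1 - 3 * p / 4) * I, np.sqrt(p / 4) * X,
            np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z]

rho = np.array([[1, 0], [0, 0]], dtype=complex)     # |0><0|
out = apply_channel(rho, depolarizing_kraus(1.0))   # fully depolarizing: -> I/2
```

Completeness of the Kraus set (Σ K†K = I) guarantees the output trace equals 1, which is the trace-preservation property the experiment must also respect.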
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Geogdzhayev, Igor V.; Cairns, Brian; Rossow, William B.; Lacis, Andrew A.
1999-01-01
This paper outlines the methodology for interpreting channel 1 and 2 AVHRR radiance data over the oceans and describes a detailed analysis of the sensitivity of monthly averages of retrieved aerosol parameters to the assumptions made in different retrieval algorithms. The analysis is based on real AVHRR data and accurate numerical techniques for computing single and multiple scattering and spectral absorption of light in the vertically inhomogeneous atmosphere-ocean system. We show that two-channel algorithms can be expected to provide significantly more accurate and less biased retrievals of the aerosol optical thickness than one-channel algorithms and that imperfect cloud screening and calibration uncertainties are by far the largest sources of errors in the retrieved aerosol parameters. Both underestimating and overestimating aerosol absorption, as well as the potentially strong variability of the real part of the aerosol refractive index, may lead to regional and/or seasonal biases in optical thickness retrievals. The Angstrom exponent appears to be the most invariant aerosol size characteristic and should be retrieved along with optical thickness as the second aerosol parameter.
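The Angstrom exponent mentioned above follows from two-channel optical thicknesses via a power-law fit, α = −ln(τ₁/τ₂)/ln(λ₁/λ₂). The wavelengths below approximate AVHRR channels 1 and 2; the optical thickness values are illustrative, not from the paper.

```python
import numpy as np

def angstrom_exponent(tau1, tau2, lam1, lam2):
    """alpha = -ln(tau1/tau2) / ln(lam1/lam2): power-law aerosol size parameter
    from optical thickness measured at two wavelengths."""
    return -np.log(tau1 / tau2) / np.log(lam1 / lam2)

# AVHRR channels 1 and 2 are near 0.63 and 0.83 micron; tau values illustrative
alpha = angstrom_exponent(0.20, 0.15, 0.63, 0.83)
```

Because α depends only on the ratio of the two optical thicknesses, calibration errors common to both channels partially cancel, one reason it is a comparatively invariant size characteristic.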
Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F
2011-04-01
To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self-correction was developed. The information also allows simultaneous support for parallel imaging with multiple-coil acquisitions. Without a separate field map acquisition, a phase estimate from each echo in a multiple-echo train was generated. When using a multiple-channel coil, magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes and, in the case of multiple-coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single-channel, multiple-channel, and undersampled data. Substantial image quality improvements were demonstrated: signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve the image quality of multiecho radial imaging, an important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.
Single-channel autocorrelation functions: the effects of time interval omission.
Ball, F G; Sansom, M S
1988-01-01
We present a general mathematical framework for analyzing the dynamic aspects of single channel kinetics incorporating time interval omission. An algorithm for computing model autocorrelation functions, incorporating time interval omission, is described. We show, under quite general conditions, that the form of these autocorrelations is identical to that which would be obtained if time interval omission was absent. We also show, again under quite general conditions, that zero correlations are necessarily a consequence of the underlying gating mechanism and not an artefact of time interval omission. The theory is illustrated by a numerical study of an allosteric model for the gating mechanism of the locust muscle glutamate receptor-channel. PMID:2455553
Noise Reduction with Microphone Arrays for Speaker Identification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, Z
Reducing acoustic noise in audio recordings is an ongoing problem that plagues many applications. This noise is hard to reduce because of interfering sources and the non-stationary behavior of the overall background noise. Many single-channel noise reduction algorithms exist but are limited: the more the noise is reduced, the more the signal of interest is distorted, because the signal and noise overlap in frequency. Acoustic background noise causes particular problems in the area of speaker identification: recording a speaker in the presence of acoustic noise ultimately limits the performance and confidence of speaker identification algorithms. In situations where it is impossible to control the environment where the speech sample is taken, noise reduction filtering algorithms need to be developed to clean the recorded speech of background noise. Because single-channel noise reduction algorithms would distort the speech signal, the overall challenge of this project was to see if spatial information provided by microphone arrays could be exploited to aid in speaker identification. The goals are: (1) test the feasibility of using microphone arrays to reduce background noise in speech recordings; (2) characterize and compare different multichannel noise reduction algorithms; (3) provide recommendations for using these multichannel algorithms; and (4) ultimately answer the question: can the use of microphone arrays aid in speaker identification?
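A minimal multichannel baseline that exploits the spatial information discussed here is the delay-and-sum beamformer: align each microphone by its steering delay and average, so the speech adds coherently while spatially uncorrelated noise averages down by roughly 1/M in power. The array geometry (integer-sample steering delays) and the sinusoidal stand-in for speech below are illustrative.

```python
import numpy as np

def delay_and_sum(channels, delays_samples):
    """Delay-and-sum beamformer: advance each microphone signal by its
    steering delay (in samples) and average across the array."""
    aligned = [np.roll(ch, -d) for ch, d in zip(channels, delays_samples)]
    return np.mean(aligned, axis=0)

rng = np.random.default_rng(0)
fs, n, M = 8000, 8000, 8
t = np.arange(n) / fs
s = np.sin(2 * np.pi * 100 * t)          # stand-in for the speech signal
# each mic sees the source delayed by k samples plus independent noise
channels = [np.roll(s, k) + 0.5 * rng.standard_normal(n) for k in range(M)]
y = delay_and_sum(channels, delays_samples=list(range(M)))
```

Unlike single-channel spectral methods, this suppresses noise without touching the speech spectrum, which is why spatial filtering is attractive before speaker identification.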
Fractional Poisson-Nernst-Planck Model for Ion Channels I: Basic Formulations and Algorithms.
Chen, Duan
2017-11-01
In this work, we propose a fractional Poisson-Nernst-Planck model to describe ion permeation in gated ion channels. Due to intrinsic conformational changes, crowdedness in narrow channel pores, and binding and trapping introduced by functioning units of channel proteins, ionic transport in the channel exhibits power-law-like anomalous diffusion dynamics. We start from a continuous-time random walk model for a single ion and use a long-tailed density distribution function for the particle jump waiting time to derive the fractional Fokker-Planck equation. This is then generalized to the macroscopic fractional Poisson-Nernst-Planck model for ionic concentrations. The necessary computational algorithms are designed to implement numerical simulations for the proposed model, and the dynamics of the gating current is investigated. Numerical simulations show that the fractional PNP model provides a qualitatively more reasonable match to the profile of gating currents from experimental observations. Meanwhile, the proposed model motivates new challenges in terms of mathematical modeling and computation.
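A plausible form of the macroscopic model described above, with a time-fractional derivative of order 0 < α ≤ 1 (α = 1 recovers the classical PNP system) coupled to the Poisson equation; the symbols (cᵢ, Dᵢ, zᵢ, φ, ε) are the usual PNP quantities and are assumed here, not copied from the paper:

```latex
% Time-fractional Nernst-Planck equation for the concentration c_i of species i
% (0 < \alpha \le 1; \alpha = 1 recovers the classical PNP model),
% coupled to the Poisson equation for the electrostatic potential \phi.
\frac{\partial^{\alpha} c_i}{\partial t^{\alpha}}
  = \nabla \cdot D_i \left( \nabla c_i
      + \frac{z_i e}{k_B T}\, c_i \nabla \phi \right),
\qquad
-\nabla \cdot \left( \epsilon \nabla \phi \right)
  = \sum_i z_i e\, c_i + \rho_{\mathrm{fixed}}
```

The fractional time derivative is what encodes the long-tailed waiting-time distribution of the underlying continuous-time random walk.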
NASA Technical Reports Server (NTRS)
Stowe, Larry L.; Ignatov, Alexander M.; Singh, Ramdas R.
1997-01-01
A revised (phase 2) single-channel algorithm for aerosol optical thickness (τ^A_SAT) retrieval over oceans from radiances in channel 1 (0.63 microns) of the Advanced Very High Resolution Radiometer (AVHRR) has been implemented at the National Oceanic and Atmospheric Administration's National Environmental Satellite Data and Information Service for the NOAA 14 satellite launched December 30, 1994. It is based on careful validation of its operational predecessor (phase 1 algorithm), implemented for NOAA 14 in 1989. Both algorithms scale the upward satellite radiances in cloud-free conditions to aerosol optical thickness using an updated radiative transfer model of the ocean and atmosphere. Application of the phase 2 algorithm to three matchup Sun-photometer and satellite data sets, one with NOAA 9 in 1988 and two with NOAA 11 in 1989 and 1991, respectively, shows that the systematic error is less than 10%, with a random error of σ_τ ≈ 0.04. First results of τ^A_SAT retrievals from NOAA 14 using the phase 2 algorithm, and from checking its internal consistency, are presented. A potential two-channel (phase 3) algorithm for the retrieval of an aerosol size parameter, such as the Junge size distribution exponent, by adding either channel 2 (0.83 microns) from the current AVHRR instrument or a 1.6-micron channel to be available on the Tropical Rainfall Measuring Mission and the NOAA-KLM satellites by 1997, is under investigation. The possibility of using this additional information to retrieve a more accurate estimate of aerosol optical thickness is being explored.
Multichannel blind iterative image restoration.
Sroubek, Filip; Flusser, Jan
2003-01-01
Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately, in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for the multichannel framework; it determines the convolution masks perfectly in a noise-free environment if a channel disparity condition, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization, together with a cell-centered finite difference discretization scheme, is used in the algorithm and provides a unified approach to the solution of the total variation and Mumford-Shah formulations. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate the capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun.
Automatic sleep stage classification using two facial electrodes.
Virkkala, Jussi; Velin, Riitta; Himanen, Sari-Leena; Värri, Alpo; Müller, Kiti; Hasan, Joel
2008-01-01
Standard sleep stage classification is based on visual analysis of central EEG, EOG, and EMG signals. Automatic analysis with a reduced number of sensors has been studied as an easy alternative to the standard. In this study, a single-channel electro-oculography (EOG) algorithm was developed to separate wakefulness, SREM, light sleep (S1, S2), and slow wave sleep (S3, S4). The algorithm was developed and tested with 296 subjects. Additional validation was performed on 16 subjects using a lightweight single-channel Alive Monitor. In the validation study, subjects attached the disposable EOG electrodes themselves at home. In separating the four stages, the total agreement (and Cohen's kappa) was 74% (0.59) in the training data set, 73% (0.59) in the testing data set, and 74% (0.59) in the validation data set. Self-applicable electro-oculography with only two facial electrodes was found to provide reasonable sleep stage information.
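The agreement figures quoted above (total agreement, with Cohen's kappa in parentheses) are computed from a stage-by-stage confusion matrix. The matrix below is hypothetical, chosen only to show the arithmetic, not taken from the study.

```python
import numpy as np

def agreement_and_kappa(confusion):
    """Total agreement and Cohen's kappa from a square confusion matrix
    (rows: reference stages, columns: automatic stages)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    p_obs = np.trace(c) / n                          # total agreement
    p_exp = (c.sum(axis=1) @ c.sum(axis=0)) / n**2   # chance agreement
    return p_obs, (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical 4-stage confusion matrix (wake, SREM, light sleep, SWS)
conf = [[50, 5, 5, 0],
        [4, 40, 6, 0],
        [6, 5, 60, 9],
        [0, 0, 10, 40]]
p, k = agreement_and_kappa(conf)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside the raw percentage.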
NASA Technical Reports Server (NTRS)
Ellsworth, Joel C.
2017-01-01
During flight-testing of the National Aeronautics and Space Administration (NASA) Gulfstream III (G-III) airplane (Gulfstream Aerospace Corporation, Savannah, Georgia) SubsoniC Research Aircraft Testbed (SCRAT) between March 2013 and April 2015, it became evident that the sensor array used for stagnation point detection was not functioning as expected. The stagnation point detection system is a self-calibrating hot-film array; although the calibration was unknown and varied between flights, the channel with the lowest power consumption was expected to correspond to the point of least surface shear. While individual channels showed the expected behavior for hot-film sensors, more often than not the lowest power consumption occurred at a single sensor (despite in-flight maneuvering) located far from the expected stagnation point. An algorithm was developed to process the available system output and determine the stagnation point location. After multiple updates and refinements, the final algorithm was not sensitive to the failure of a single sensor in the array, although adjacent failures beneath the stagnation point still crippled it.
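A neighbourhood-median score is one simple way to make a minimum-power search insensitive to a single failed channel, in the spirit of (though not identical to) the algorithm described; the sensor powers below are invented for illustration.

```python
import numpy as np

def stagnation_index(power, window=3):
    """Hypothetical failure-tolerant minimum search: score each sensor by the
    median power over a small neighbourhood, so one stuck-low channel cannot
    win on its own; the stagnation point is the lowest-scoring sensor."""
    p = np.asarray(power, dtype=float)
    scores = np.array([np.median(p[max(0, i - window // 2):i + window // 2 + 1])
                       for i in range(p.size)])
    return int(np.argmin(scores))

# Physical minimum near sensor 5; sensor 12 is stuck low (failed).
power = [5, 4, 3, 2.5, 2.0, 1.0, 1.5, 2, 3, 4, 5, 6, 0.01, 7, 8]
raw_idx = int(np.argmin(power))       # 12: the failed sensor wins
robust_idx = stagnation_index(power)  # 5: the physical minimum
```

Two adjacent failures would defeat a 3-wide median, mirroring the weakness noted in the abstract.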
Lee, Kwang Jin; Lee, Boreom
2016-01-01
Fetal heart rate (FHR) is an important determinant of fetal health. Cardiotocography (CTG) is widely used for measuring the FHR in the clinical field. However, fetal movement and blood flow through the maternal blood vessels can critically influence Doppler ultrasound signals. Moreover, CTG is not suitable for long-term monitoring. Therefore, researchers have been developing algorithms to estimate the FHR using electrocardiograms (ECGs) from the abdomen of pregnant women. However, separating the weak fetal ECG signal from the abdominal ECG signal is a challenging problem. In this paper, we propose a method for estimating the FHR using sequential total variation denoising and compare its performance with that of other single-channel fetal ECG extraction methods via simulation using the Fetal ECG Synthetic Database (FECGSYNDB). Moreover, we used real data from PhysioNet fetal ECG databases for the evaluation of the algorithm performance. The R-peak detection rate is calculated to evaluate the performance of our algorithm. Our approach could not only separate the fetal ECG signals from the abdominal ECG signals but also accurately estimate the FHR. PMID:27376296
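The sequential scheme itself is not specified in the abstract; as a self-contained illustration of the total variation prior it builds on, the sketch below solves the classic 1-D TV denoising problem by projected gradient ascent on its dual (a Chambolle-style scheme). The weight, step size, and test signal are arbitrary choices, not the paper's.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, n_iter=1000):
    """1-D total variation denoising,
        min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|,
    by projected gradient ascent on the dual; the step 0.25 <= 1/||D D^T||
    guarantees convergence (D is the forward-difference operator)."""
    p = np.zeros(y.size - 1)                 # dual variable, one per jump
    for _ in range(n_iter):
        x = y.copy()                         # primal estimate x = y - D^T p
        x[:-1] += p
        x[1:] -= p
        p = np.clip(p + 0.25 * np.diff(x), -lam, lam)
    x = y.copy()                             # final primal estimate
    x[:-1] += p
    x[1:] -= p
    return x

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0, -0.5, 0.5], 50)   # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(clean.size)
denoised = tv_denoise_1d(noisy)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
```

TV's preference for piecewise-constant solutions is what lets it suppress broadband noise while keeping sharp transitions such as QRS complexes.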
A multichannel block-matching denoising algorithm for spectral photon-counting CT images.
Harrison, Adam P; Xu, Ziyue; Pourmorteza, Amir; Bluemke, David A; Mollura, Daniel J
2017-06-01
We present a denoising algorithm designed for a whole-body prototype photon-counting computed tomography (PCCT) scanner with up to 4 energy thresholds and associated energy-binned images. Spectral PCCT images can exhibit low signal-to-noise ratios (SNRs) due to the limited photon counts in each simultaneously acquired energy bin. To help address this, our denoising method exploits the correlation and exact alignment between energy bins, adapting the highly effective block-matching 3D (BM3D) denoising algorithm for PCCT. The original single-channel BM3D algorithm operates patch-by-patch. For each small patch in the image, a patch grouping action collects similar patches from the rest of the image, which are then collaboratively filtered together. The resulting performance hinges on accurate patch grouping. Our improved multi-channel version, called BM3D_PCCT, incorporates two improvements. First, BM3D_PCCT uses a more accurate shared patch grouping based on the image reconstructed from photons detected in all 4 energy bins. Second, BM3D_PCCT performs a cross-channel decorrelation, adding a further dimension to the collaborative filtering process. These two improvements produce a more effective algorithm for PCCT denoising. Preliminary results compare BM3D_PCCT against BM3D_Naive, which denoises each energy bin independently. Experiments use a three-contrast PCCT image of a canine abdomen. Within five regions of interest, selected from paraspinal muscle, liver, and visceral fat, BM3D_PCCT reduces the noise standard deviation by 65.0%, compared to 40.4% for BM3D_Naive. Attenuation values of the contrast agents in calibration vials also cluster much tighter to their respective lines of best fit. Mean angular differences (in degrees) for the original, BM3D_Naive, and BM3D_PCCT images, respectively, were 15.61, 7.34, and 4.45 (iodine); 12.17, 7.17, and 4.39 (gadolinium); and 12.86, 6.33, and 3.96 (bismuth).
We outline a multi-channel denoising algorithm tailored for spectral PCCT images, demonstrating improved performance over an independent, yet state-of-the-art, single-channel approach. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
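The shared patch grouping at the heart of BM3D_PCCT can be mimicked in toy form: match patches on the composite of all energy bins, then filter every bin with the same groups. Plain group averaging below stands in for BM3D's collaborative transform-domain filtering, so this is a sketch of the grouping idea only, with invented image sizes and parameters.

```python
import numpy as np

def shared_group_denoise(channels, patch=4, k=8, stride=4):
    """Toy shared patch grouping: patches are matched on the composite
    (all bins summed) image, and each energy bin is then filtered with the
    SAME groups.  Group averaging is a crude stand-in for BM3D's
    collaborative filtering."""
    comp = sum(channels)                 # composite image guides the matching
    H, W = comp.shape
    out = [np.zeros_like(c) for c in channels]
    weight = np.zeros((H, W))
    pos = [(i, j) for i in range(0, H - patch + 1, stride)
                  for j in range(0, W - patch + 1, stride)]
    ref = np.array([comp[i:i+patch, j:j+patch].ravel() for i, j in pos])
    for n, (i, j) in enumerate(pos):
        d = np.sum((ref - ref[n]) ** 2, axis=1)   # match on the composite
        grp = np.argsort(d)[:k]                   # k most similar patches
        for c, o in zip(channels, out):
            # average the same group in THIS energy bin (shared grouping)
            avg = np.mean([c[pos[g][0]:pos[g][0]+patch,
                             pos[g][1]:pos[g][1]+patch] for g in grp], axis=0)
            o[i:i+patch, j:j+patch] += avg
        weight[i:i+patch, j:j+patch] += 1
    return [o / weight for o in out]

# Two "energy bins" sharing the same structure, with independent noise
base = np.zeros((32, 32)); base[:, 16:] = 1.0
truth = [base, 0.5 * base]
rng = np.random.default_rng(3)
bins = [t + 0.3 * rng.standard_normal((32, 32)) for t in truth]
den = shared_group_denoise(bins)
mse_noisy = [float(np.mean((b - t) ** 2)) for b, t in zip(bins, truth)]
mse_den = [float(np.mean((d - t) ** 2)) for d, t in zip(den, truth)]
```

Matching on the composite is what makes the grouping robust when any single bin is photon-starved.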
Nonlinear Algorithms for Channel Equalization and MAP Symbol Detection.
NASA Astrophysics Data System (ADS)
Giridhar, K.
The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered through the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery are required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI.
Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates, and a metric pruning technique are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, and then tracking of the fading parameters is performed during the data cycle using the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator, and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.
25 Tb/s transmission over 5,530 km using 16QAM at 5.2 b/s/Hz spectral efficiency.
Cai, J-X; Batshon, H G; Zhang, H; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Sinkin, O; Pilipetskii, A; Mohs, G; Bergano, Neal S
2013-01-28
We transmit 250x100G PDM RZ-16QAM channels with 5.2 b/s/Hz spectral efficiency over 5,530 km using single-stage C-band EDFAs equalized to 40 nm. We use single parity check coded modulation, and all channels are decoded with no errors after iterative decoding between a MAP decoder and an LDPC-based FEC algorithm. We also observe that the optimum power spectral density is nearly independent of SE, signal baud rate, or modulation format in a dispersion-uncompensated system.
Subspace techniques to remove artifacts from EEG: a quantitative analysis.
Teixeira, A R; Tome, A M; Lang, E W; Martins da Silva, A
2008-01-01
In this work we discuss and apply projective subspace techniques to both multichannel and single-channel recordings. The single-channel approach is based on singular spectrum analysis (SSA), and the multichannel approach uses the extended infomax algorithm implemented in the open-source toolbox EEGLAB. Both approaches are evaluated using artificial mixtures of a set of selected EEG signals. The latter were selected visually to contain as the dominant activity one of the characteristic bands of an electroencephalogram (EEG). The evaluation is performed in both the time and frequency domains, using correlation coefficients and the coherence function, respectively.
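A minimal single-channel SSA projection, in the spirit of the approach described (the embedding length, rank, and test signal here are arbitrary choices, not the paper's settings):

```python
import numpy as np

def ssa_denoise(x, L=30, rank=2):
    """Singular spectrum analysis: embed the signal into a trajectory
    (Hankel) matrix, project onto the leading singular subspace, and
    reconstruct by anti-diagonal averaging."""
    N = x.size
    K = N - L + 1
    X = np.column_stack([x[i:i+L] for i in range(K)])   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]           # rank-r approximation
    # Hankelize: average over anti-diagonals to get back a 1-D signal
    y = np.zeros(N)
    cnt = np.zeros(N)
    for i in range(L):
        y[i:i+K] += Xr[i]
        cnt[i:i+K] += 1
    return y / cnt

rng = np.random.default_rng(2)
t = np.arange(400)
clean = np.sin(2 * np.pi * 0.05 * t)         # a single "EEG band" stand-in
noisy = clean + 0.5 * rng.standard_normal(t.size)
recovered = ssa_denoise(noisy, L=40, rank=2)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_rec = float(np.mean((recovered - clean) ** 2))
```

A rank of 2 suffices here because a single sinusoid occupies two singular components; artifact removal amounts to choosing which components to keep or discard.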
Optimizing ion channel models using a parallel genetic algorithm on graphical processors.
Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon
2012-01-01
We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
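The cost function such a stochastic search minimizes can be sketched with a toy single-gate kinetic scheme. The rate law and parameters below are placeholders, not the paper's channel model, and the ODE is solved analytically rather than on a GPU.

```python
import numpy as np

def gate_current(rates, v_steps, t, g_max=1.0, e_rev=0.0):
    """Integrate a single-gate scheme dm/dt = a(V)(1-m) - b(V)m for each
    voltage-clamp step and return the simulated currents.  The exponential
    voltage dependence of the rates is an assumed toy form."""
    a0, b0 = rates
    currents = []
    for v in v_steps:
        a, b = a0 * np.exp(v / 25.0), b0 * np.exp(-v / 25.0)
        m_inf, tau = a / (a + b), 1.0 / (a + b)
        m = m_inf * (1.0 - np.exp(-t / tau))   # analytic solution, m(0) = 0
        currents.append(g_max * m * (v - e_rev))
    return np.array(currents)

def fitness(rates, v_steps, t, data):
    """Sum of squared errors over all voltage-clamp protocols: the quantity a
    stochastic search (e.g. a genetic algorithm) evaluates, in parallel, for
    each candidate parameter set."""
    return float(np.sum((gate_current(rates, v_steps, t) - data) ** 2))

t = np.linspace(0.0, 50.0, 100)
v_steps = [-20.0, 0.0, 20.0]
data = gate_current((0.1, 0.1), v_steps, t)   # synthetic "recordings"
```

On a GPU, the win described in the abstract comes from evaluating this fitness for many candidates at once while keeping the ODE state in device memory.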
Data Processing for a High Resolution Preclinical PET Detector Based on Philips DPC Digital SiPMs
NASA Astrophysics Data System (ADS)
Schug, David; Wehner, Jakob; Goldschmidt, Benjamin; Lerche, Christoph; Dueppenbecker, Peter Michael; Hallen, Patrick; Weissler, Bjoern; Gebhardt, Pierre; Kiessling, Fabian; Schulz, Volkmar
2015-06-01
In positron emission tomography (PET) systems, light sharing techniques are commonly used to read out scintillator arrays consisting of scintillation elements that are smaller than the optical sensors. The scintillating element is then identified by evaluating the signal heights in the readout channels using statistical algorithms, the center of gravity (COG) algorithm being the simplest and most widely used one. We propose a COG algorithm with a fixed number of input channels in order to guarantee a stable calculation of the position. The algorithm is implemented and tested with the raw detector data obtained with the Hyperion-II D preclinical PET insert, which uses Philips Digital Photon Counting's (PDPC) digital SiPMs. The gamma detectors use LYSO scintillator arrays with 30 × 30 crystals of 1 × 1 × 12 mm³ in size coupled to 4 × 4 PDPC DPC 3200-22 sensors (DPCs) via a 2-mm-thick light guide. These self-triggering sensors are made up of 2 × 2 pixels, resulting in a total of 64 readout channels. We restrict the COG calculation to a main pixel, which captures most of the scintillation light from a crystal, and its (direct and diagonal) neighboring pixels, and reject single events in which this data is not fully available. This results in stable COG positions for a crystal element and enables high spatial image resolution. Due to the sensor layout, for some crystals it is very likely that a single diagonal neighbor pixel is missing as a result of the low light level on the corresponding DPC. This leads to a loss of sensitivity if these events are rejected. An enhancement of the COG algorithm is proposed which handles the potentially missing pixel separately, both for the crystal identification and for the energy calculation. Using this advancement, we show that the sensitivity of the Hyperion-II D insert using the described scintillator configuration can be improved by 20-100% for practically useful readout thresholds of a single DPC pixel ranging from 17-52 photons.
Furthermore, we show that for all readout thresholds the energy resolution of the scanner is superior, by 0-1.6% (relative difference), when singles with a single missing pixel are accepted and correctly handled, compared to the COG method that accepts only singles with all neighbors present. The presented methods are not only applicable to gamma detectors employing DPC sensors, but can be generalized to other similarly structured, self-triggering detectors using light sharing techniques as well.
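The missing-neighbour handling can be sketched as a COG over a fixed 3 × 3 pixel neighbourhood in which exactly one absent reading is tolerated. The counts below are invented, and the real algorithm also treats the energy calculation separately.

```python
import numpy as np

def cog_position(pixels, counts):
    """Center-of-gravity positioning over a fixed neighbourhood: the main
    pixel plus its direct and diagonal neighbours.  A single missing
    (non-triggered) reading, encoded as NaN, is tolerated instead of
    rejecting the event; more than one missing reading rejects it."""
    pixels = np.asarray(pixels, dtype=float)
    counts = np.asarray(counts, dtype=float)
    missing = np.isnan(counts)
    if missing.sum() > 1:
        return None                      # reject: more than one pixel missing
    good = ~missing
    w = counts[good]
    return tuple(w @ pixels[good] / w.sum())

# 3x3 neighbourhood centred on the main pixel, symmetric light distribution
grid = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
full = [5, 20, 5, 20, 100, 20, 5, 20, 5]
one_missing = full.copy(); one_missing[0] = np.nan
two_missing = full.copy(); two_missing[0] = two_missing[8] = np.nan
pos_full = cog_position(grid, full)
pos_one = cog_position(grid, one_missing)
pos_two = cog_position(grid, two_missing)
```

Accepting the one-missing-pixel case is what recovers the 20-100% sensitivity the abstract reports, at the cost of a slightly biased position for those events.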
NASA Astrophysics Data System (ADS)
KIM, M.; Kim, J.
2016-12-01
Numerous efforts to retrieve aerosol optical properties (AOPs) from satellite measurements have accumulated over recent decades, resulting in several qualified datasets that can be used to analyze the spatiotemporal characteristics of AOPs. However, limited instrument lifetimes restrict the temporal window over which long-term AOP variations can be analyzed. From this point of view, a single-channel algorithm, which uses a single visible channel to retrieve aerosol optical depth (AOD), has the advantage of extending the time domain of the analysis. The Korean geostationary earth orbit (GEO) satellite, the Communication, Ocean and Meteorological Satellite (COMS), carries the single-channel Meteorological Imager (MI), which can also be utilized for the retrieval of AOPs. Since GEO satellite measurements allow continuous monitoring of AOPs over Northeast Asia, MI observations can be used to analyze the spatiotemporal characteristics of the aerosol. In this study, we investigate the trend of AOD and discuss the impact of long-range aerosol transport on its temporal variation. Since 2010, when COMS was launched, AODs over Northeast China and the Yellow Sea region show decreases of 3.02% and 2.74% per year, respectively, which are significant trends despite the short 5-year period. The decreasing behavior appears to be associated with the recently decreasing frequency of dust events over the region, whereas other Northeast Asian regions show no clear temporal change. The accuracy of the retrieved AOD contributes to the uncertainty of this trend analysis: according to the error analysis, cloud contamination and errors in bright surface reflectance limit the accuracy of the AOD. Improvements to the cloud masking process and the surface reflectance estimation in the developed single-channel MI algorithm are therefore required in future work.
Expeditious reconciliation for practical quantum key distribution
NASA Astrophysics Data System (ADS)
Nakassis, Anastase; Bienfang, Joshua C.; Williams, Carl J.
2004-08-01
The paper proposes algorithmic and environmental modifications to the extant reconciliation algorithms within the BB84 protocol so as to speed up reconciliation and privacy amplification. These algorithms have been known to be a performance bottleneck [1] and can process data at rates that are six times slower than the quantum channel they serve [2]. As improvements in single-photon sources and detectors are expected to improve the quantum channel throughput by two or three orders of magnitude, it becomes imperative to improve the performance of the classical software. We developed a Cascade-like algorithm that relies on a symmetric formulation of the problem, error estimation through the segmentation process, outright elimination of segments with many errors, forward error correction, recognition of the distinct data subpopulations that emerge as the algorithm runs, the ability to operate on massive amounts of data (of the order of 1 Mbit), and a few other minor improvements. The data from the experimental algorithm we developed show that by operating on massive arrays of data we can improve software performance by better than three orders of magnitude while retaining nearly as many bits (typically more than 90%) as the algorithms that were designed for optimal bit retention.
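For context, the bisection primitive underlying Cascade-style reconciliation looks like this; it is a textbook sketch, not the authors' optimized variant.

```python
def parity(bits, lo, hi):
    """Parity of bits[lo:hi].  In a real protocol this single bit is what
    gets exchanged over the public classical channel."""
    return sum(bits[lo:hi]) % 2

def binary_locate(alice, bob, lo, hi):
    """Cascade's BINARY primitive: given a block whose overall parities
    differ (an odd number of errors, here exactly one), bisect until the
    offending bit position is found."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice, lo, mid) != parity(bob, lo, mid):
            hi = mid          # the error is in the left half
        else:
            lo = mid          # the error is in the right half
    return lo

# toy example: one flipped bit in a 16-bit block
alice = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
bob = alice.copy()
bob[9] ^= 1
pos = binary_locate(alice, bob, 0, len(alice))
bob[pos] ^= 1                 # correct the located bit
```

Each exchanged parity leaks one bit of information, which is why reconciliation cost feeds directly into the privacy amplification budget.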
Chang, S; Wong, K W; Zhang, W; Zhang, Y
1999-08-10
An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.
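The biasing trick, shifting the bipolar weights to a nonnegative matrix and compensating with a threshold channel, can be written out directly. This is a minimal numerical sketch with assumed patterns; the optical hardware is of course not modeled.

```python
import numpy as np

def hebbian_weights(patterns):
    """Bipolar (+1/-1) Hebbian interconnection weight matrix for a Hopfield
    associative memory, with zero diagonal."""
    P = np.asarray(patterns, dtype=float)   # shape (n_patterns, n_neurons)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, n_iter=10):
    """Synchronous Hopfield update.  The weights are biased by the most
    negative entry to make them nonnegative (as an intensity-only optical
    system requires); a threshold channel carrying bias * sum(s) restores
    the bipolar weighted summation."""
    bias = -W.min()
    W_pos = W + bias                        # nonnegative weight matrix
    s = probe.astype(float)
    for _ in range(n_iter):
        s = np.sign(W_pos @ s - bias * s.sum())
    return s

# two orthogonal stored patterns, then recall from a corrupted probe
p0 = np.array([1.0] * 16 + [-1.0] * 16)
p1 = np.array([1.0, -1.0] * 16)
W = hebbian_weights([p0, p1])
probe = p0.copy()
probe[:3] *= -1                             # flip three bits
recalled = recall(W, probe)
```

Since W_pos @ s equals W @ s plus bias times the sum of the inputs, subtracting the threshold channel's output reproduces the bipolar summation exactly.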
NASA Astrophysics Data System (ADS)
Hortos, William S.
1997-04-01
The use of artificial neural networks (NNs) to address the channel assignment problem (CAP) for cellular time-division multiple access and code-division multiple access networks has previously been investigated by this author and many others. The investigations to date have been based on a hexagonal cell structure established by omnidirectional antennas at the base stations. No account was taken of the use of spatial isolation enabled by directional antennas to reduce interference between mobiles. Any reduction in interference translates into increased capacity and consequently alters the performance of the NNs. Previous studies have sought to improve the performance of Hopfield-Tank network algorithms and self-organizing feature map algorithms applied primarily to static channel assignment (SCA) for cellular networks that handle uniformly distributed, stationary traffic in each cell for a single type of service. The resulting algorithms minimize energy functions representing interference constraints and ad hoc conditions that promote convergence to optimal solutions. While the structures of the derived neural network algorithms (NNAs) offer the potential advantages of inherent parallelism and adaptability to changing system conditions, this potential has yet to be fulfilled for the CAP in emerging mobile networks. Next-generation communication infrastructures must accommodate dynamic operating conditions. Macrocell topologies are being refined to microcells and picocells that can be dynamically sectored by adaptively controlled, directional antennas and programmable transceivers. These networks must support the time-varying demands for personal communication services (PCS) that simultaneously carry voice, data and video and, thus, require new dynamic channel assignment (DCA) algorithms.
This paper examines the impact of dynamic cell sectoring and geometric conditioning on NNAs developed for SCA in omnicell networks with stationary traffic to improve the metrics of convergence rate and call blocking. Genetic algorithms (GAs) are also considered in PCS networks as a means to overcome the known weakness of Hopfield NNAs in determining global minima. The resulting GAs for DCA in PCS networks are compared to improved DCA algorithms based on Hopfield NNs for stationary cellular networks. Algorithm performance is compared on the basis of rate of convergence, blocking probability, analytic complexity, and parametric sensitivity to transient traffic demands and channel interference.
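An evolutionary search for channel assignment can be sketched on a toy instance. The cell geometry, separation matrix, demand, and search parameters below are invented; real DCA fitness functions also account for traffic dynamics and blocking.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical instance: 4 cells, 6 channels, per-cell demand, and a
# compatibility matrix giving the minimum channel separation between cells
# (2 on the diagonal = co-site constraint, 1 = adjacent cells, 0 = no constraint).
N_CELLS, N_CH = 4, 6
demand = [2, 1, 2, 1]
sep = np.array([[2, 1, 0, 0],
                [1, 2, 1, 0],
                [0, 1, 2, 1],
                [0, 0, 1, 2]])

def violations(assign):
    """Count pairwise channel-separation violations; assign[c] lists the
    channels given to cell c.  This is the GA's fitness (lower is better)."""
    calls = [(c, ch) for c in range(N_CELLS) for ch in assign[c]]
    return sum(1 for a in range(len(calls)) for b in range(a + 1, len(calls))
               if abs(calls[a][1] - calls[b][1]) < sep[calls[a][0], calls[b][0]])

def random_assign():
    return [list(rng.choice(N_CH, size=d, replace=False)) for d in demand]

def mutate(assign):
    child = [chs.copy() for chs in assign]
    c = rng.integers(N_CELLS)
    child[c][rng.integers(len(child[c]))] = int(rng.integers(N_CH))
    return child

# (mu + lambda)-style evolutionary search minimizing interference violations
pop = [random_assign() for _ in range(30)]
for _ in range(200):
    pop.sort(key=violations)
    pop = pop[:10] + [mutate(pop[rng.integers(10)]) for _ in range(20)]
best = min(pop, key=violations)
```

Unlike a Hopfield energy descent, the population search has no gradient to get trapped in, which is the motivation the paper gives for considering GAs.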
Blind equalization with a criterion with memory nonlinearity (CRIMNO)
NASA Astrophysics Data System (ADS)
Chen, Yuanjie; Nikias, Chrysostomos L.; Proakis, John G.
1992-06-01
Blind equalization methods usually combat the linear distortion caused by a nonideal channel via a transversal filter, without resorting to the a priori known training sequences. We introduce a new criterion with memory nonlinearity (CRIMNO) for the blind equalization problem. The basic idea of this criterion is to augment the Godard [or constant modulus algorithm (CMA)] cost function with additional terms that penalize the autocorrelations of the equalizer outputs. Several variations of the CRIMNO algorithms are derived, with the variations dependent on (1) whether the empirical averages or the single point estimates are used to approximate the expectations, (2) whether the recent or the delayed equalizer coefficients are used, and (3) whether the weights applied to the autocorrelation terms are fixed or are allowed to adapt. Simulation experiments show that the CRIMNO algorithm, and especially its adaptive weight version, exhibits faster convergence speed than the Godard (or CMA) algorithm. Extensions of the CRIMNO criterion to accommodate the case of correlated inputs to the channel are also presented.
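A real-valued stochastic-gradient rendition of the CRIMNO idea, Godard's dispersion cost plus running-average autocorrelation penalties, is sketched below. The tap count, step sizes, penalty weight, and test channel are arbitrary choices, and the adaptive-weight variant is not shown.

```python
import numpy as np

def crimno_equalize(x, n_taps=11, mu=1e-3, R2=1.0, lam=0.5, M=2, beta=0.99):
    """CRIMNO-style blind equalizer for a real (e.g. PAM) signal: the Godard
    cost E[(y^2 - R2)^2] augmented with penalties lam * r_m^2 on the output
    autocorrelations r_m = E[y_n * y_{n-m}], m = 1..M, estimated here with
    running (empirical) averages."""
    w = np.zeros(n_taps)
    w[n_taps // 2] = 1.0                       # center-spike initialization
    yh = np.zeros(M + 1)                       # recent outputs y_n .. y_{n-M}
    r = np.zeros(M)                            # running autocorrelation estimates
    out = np.zeros(x.size)
    for n in range(n_taps, x.size):
        u = x[n - n_taps + 1:n + 1][::-1]      # regressor, newest sample first
        y = w @ u
        yh = np.roll(yh, 1); yh[0] = y
        g = (y * y - R2) * y * u               # Godard (CMA) gradient term
        for m in range(1, M + 1):
            r[m-1] = beta * r[m-1] + (1 - beta) * y * yh[m]
            g += lam * r[m-1] * yh[m] * u      # memory-nonlinearity penalty term
        w -= mu * g
        out[n] = y
    return w, out

rng = np.random.default_rng(7)
s = rng.choice([-1.0, 1.0], size=20000)        # BPSK-like source
x = np.convolve(s, [1.0, 0.4])[:s.size]        # mild ISI channel
w, y = crimno_equalize(x)
disp_in = float(np.mean((x ** 2 - 1.0) ** 2))          # dispersion before
disp_out = float(np.mean((y[-2000:] ** 2 - 1.0) ** 2)) # dispersion after convergence
```

The autocorrelation terms penalize temporal correlation that a badly converged equalizer leaves in its output, which is the source of the faster convergence the abstract reports.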
Adaptive Wiener filter super-resolution of color filter array images.
Karch, Barry K; Hardie, Russell C
2013-08-12
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address the fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation-based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing, and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
Spatial and Temporal Varying Thresholds for Cloud Detection in Satellite Imagery
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Haines, Stephanie
2007-01-01
A new cloud detection technique has been developed and applied to both geostationary and polar orbiting satellite imagery having channels in the thermal infrared and shortwave infrared spectral regions. The bispectral composite threshold (BCT) technique uses only the 11 micron and 3.9 micron channels, and composite imagery generated from these channels, in a four-step cloud detection procedure to produce a binary cloud mask at single-pixel resolution. A unique aspect of this algorithm is the use of 20-day composites of the 11 micron and the 11 - 3.9 micron channel difference imagery to represent spatially and temporally varying clear-sky thresholds for the bispectral cloud tests. The BCT cloud detection algorithm has been applied to GOES and MODIS data over the continental United States over the last three years with good success. The resulting products have been validated against "truth" datasets (generated by manual determination of the sky conditions from available satellite imagery) for various seasons from the 2003-2005 period. The day-and-night algorithm has been shown to determine the correct sky conditions 80-90% of the time (on average) over land and ocean areas. Only a small variation in algorithm performance occurs between day and night, between land and ocean, and between seasons. The algorithm performs least well during the winter season, with only 80% of the sky conditions determined correctly. The algorithm was found to under-determine clouds at night and during times of low sun angle (in geostationary satellite data) and tends to over-determine the presence of clouds during the day, particularly in summertime. Since the spectral tests use only the short- and long-wave channels common to most multispectral scanners, the application of the BCT technique to a variety of satellite sensors, including SEVIRI, should be straightforward and produce similar performance results.
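The full four-step procedure is not reproduced in the abstract, but the core bispectral test against composite clear-sky fields can be sketched as follows; the threshold offsets here are illustrative placeholders, not the operational values.

```python
import numpy as np

def bct_cloud_mask(bt11, btdiff, clear11, clear_diff, d11=4.0, ddiff=2.0):
    """Bispectral test sketch: flag a pixel cloudy if its 11-micron
    brightness temperature is sufficiently colder than the 20-day clear-sky
    composite, or if its 11 - 3.9 micron difference departs sufficiently
    from the composite difference.  Offsets d11/ddiff are illustrative."""
    cold = np.asarray(bt11) < np.asarray(clear11) - d11
    dev = np.abs(np.asarray(btdiff) - np.asarray(clear_diff)) > ddiff
    return cold | dev                       # binary mask, per pixel

clear11 = np.array([288.0, 288.0, 288.0])   # 20-day composite, 11 um (K)
clear_diff = np.array([1.0, 1.0, 1.0])      # composite 11 - 3.9 um difference
bt11 = np.array([287.0, 270.0, 287.0])      # middle pixel much colder than clear sky
btdiff = np.array([1.5, 1.0, 6.0])          # last pixel has a large difference anomaly
mask = bct_cloud_mask(bt11, btdiff, clear11, clear_diff)
```

Using composites rather than fixed global thresholds is what lets the same two tests adapt to surface type, season, and time of day.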
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syh, J; Syh, J; Patel, B
2014-06-15
Purpose: The multichannel cylindrical vaginal applicator is a variation of the traditional single-channel cylindrical vaginal applicator. The multichannel applicator has additional peripheral channels that provide more flexibility in the planning process. The dosimetric advantage is a reduced dose to adjacent organs at risk (OARs), such as the bladder and rectum, while maintaining target coverage through dose optimization over the additional channels. Methods: Vaginal HDR brachytherapy plans are all CT based. CT images were acquired at 2-mm slice thickness to preserve the integrity of the cylinder contouring. The CTV, a 5-mm rind over the prescribed treatment length, was reconstructed from a 5-mm expansion of the inserted cylinder. The goal was 95% of the CTV covered by 95% of the prescribed dose in both single-channel planning (SCP) and multichannel planning (MCP) before proceeding with any further optimization for dose reduction to critical structures, with emphasis on D2cc and V2Gy. Results: This study demonstrated that a noticeable dose reduction to OARs was apparent in multichannel plans. The D2cc of the rectum and bladder showed reduced doses for multichannel versus single channel. The V2Gy of the rectum was 93.72% and 83.79% (p=0.007) for single channel and multichannel, respectively (Figure 1 and Table 1). Assuring adequate target coverage while reducing the dose to the OARs without compromise is the main goal of using the multichannel vaginal applicator in HDR brachytherapy. Conclusion: Multichannel plans were optimized using the anatomy-based inverse optimization algorithm of inverse planning simulated annealing. The optimization goal of the algorithm was to improve the clinical target volume dose coverage while reducing the dose to critical organs such as the bladder, rectum, and bowel. The comparison between SCP and MCP demonstrated that MCP is superior to SCP, where the dwell positions were based on a geometric array only.
We conclude that MCP is preferable and provides certain features superior to SCP.
A New, More Physically Based Algorithm for Retrieving Aerosol Properties over Land from MODIS
NASA Technical Reports Server (NTRS)
Levy, Robert C.; Kaufman, Yoram J.; Remer, Lorraine A.; Mattoo, Shana
2004-01-01
The MODerate resolution Imaging Spectroradiometer (MODIS) has been successfully retrieving aerosol properties, beginning in early 2000 from Terra and in mid-2002 from Aqua. Over land, the retrieval algorithm makes use of three MODIS channels, in the blue, red, and infrared wavelengths. As part of the validation exercises, retrieved spectral aerosol optical thickness (AOT) has been compared via scatterplots against spectral AOT measured by the global Aerosol Robotic NETwork (AERONET). On one hand, global and long-term validation looks promising, with two-thirds (average plus and minus one standard deviation) of all points falling between published expected error bars. On the other hand, regression of these points shows a positive y-offset and a slope less than 1.0. For individual regions, such as along the U.S. East Coast, the offset and slope are even worse. Here, we introduce an overhaul of the algorithm for retrieving aerosol properties over land. Some well-known weaknesses in the current aerosol retrieval from MODIS include: a) rigid assumptions about the underlying surface reflectance, b) a limited set of aerosol models to choose from, c) simplified (scalar) radiative transfer (RT) calculations used to simulate satellite observations, and d) the assumption that aerosol is transparent in the infrared channel. The new algorithm attempts to address all four problems: a) The new algorithm will include surface type information, instead of fixed ratios of the reflectance in the visible channels to the mid-IR reflectance. b) It will include updated aerosol optical properties that reflect the growing record of aerosol retrievals from eight-plus years of AERONET operation. c) The effects of polarization will be included by using vector RT calculations. d) Most importantly, the new algorithm does not assume that aerosol is transparent in the infrared channel. It will be an inversion of reflectance observed in the three channels (blue, red, and infrared), rather than iterative single-channel retrievals.
Thus, this new formulation of the MODIS aerosol retrieval over land includes more physically based surface, aerosol and radiative transfer with fewer potentially erroneous assumptions.
NASA Technical Reports Server (NTRS)
Kim, M.; Kim, J.; Jeong, U.; Kim, W.; Hong, H.; Holben, B.; Eck, T. F.; Lim, J.; Song, C.; Lee, S.;
2016-01-01
An aerosol model optimized for northeast Asia is updated with the inversion data from the Distributed Regional Aerosol Gridded Observation Networks (DRAGON)-northeast (NE) Asia campaign, which was conducted during spring, from March to May 2012. This updated aerosol model was then applied to a single visible channel algorithm to retrieve aerosol optical depth (AOD) from the Meteorological Imager (MI) on board the geostationary meteorological satellite, the Communication, Ocean, and Meteorological Satellite (COMS). The model plays an important role in retrieving accurate AOD from a single visible channel measurement. For the single-channel retrieval, sensitivity tests showed that a perturbation of 4 % (0.926 +/- 0.04) in the assumed single scattering albedo (SSA) can result in AOD retrieval errors of over 20 %. Since the measured reflectance at the top of the atmosphere depends on both AOD and SSA, an overestimation of the assumed SSA in the aerosol model leads to an underestimation of AOD. Based on the AErosol RObotic NETwork (AERONET) inversion data sets obtained over East Asia before 2011, seasonally analyzed aerosol optical properties (AOPs) were categorized by SSAs at 675 nm of 0.92 +/- 0.035 for spring (March, April, and May). After the DRAGON-NE Asia campaign in 2012, the SSA during spring showed a slight increase to 0.93 +/- 0.035. In terms of the volume size distribution, the mode radius of coarse particles increased from 2.08 +/- 0.40 to 2.14 +/- 0.40. While the original aerosol model consists of volume size distributions and refractive indices obtained before 2011, the new model is constructed using the full data set including the DRAGON-NE Asia campaign. The large volume of high-spatial-resolution data from this intensive campaign can be used to improve the representative aerosol model for East Asia.
Accordingly, the new AOD data sets retrieved from a single-channel algorithm, which uses a precalculated look-up table (LUT) with the new aerosol model, show an improved correlation with the measured AOD during the DRAGON-NE Asia campaign. The correlation between the new AOD and AERONET value shows a regression slope of 1.00, while the comparison of the original AOD data retrieved using the original aerosol model shows a slope of 1.08. The change of y-offset is not significant, and the correlation coefficients for the comparisons of the original and new AOD are 0.87 and 0.85, respectively. The tendency of the original aerosol model to overestimate the retrieved AOD is significantly improved by using the SSA values in addition to size distribution and refractive index obtained using the new model.
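The SSA sensitivity described above can be sketched with a toy single-channel LUT inversion. The forward model below is an invented monotonic function standing in for real radiative-transfer output; only the mechanism (interpolating a precalculated reflectance-vs-AOD table, with SSA fixed by the aerosol model) mirrors the retrieval:

```python
import numpy as np

aod_grid = np.linspace(0.0, 3.0, 301)

def toy_toa_reflectance(aod, ssa):
    """Toy forward model standing in for radiative transfer: the scene
    brightens with larger AOD and larger single scattering albedo (SSA)."""
    return 0.05 + 0.12 * ssa * (1.0 - np.exp(-aod))

def retrieve_aod(measured_refl, assumed_ssa):
    """Invert a precalculated reflectance-vs-AOD LUT by interpolation."""
    lut = toy_toa_reflectance(aod_grid, assumed_ssa)
    return np.interp(measured_refl, lut, aod_grid)
```

With this toy model, a scene simulated at SSA = 0.90 and AOD = 1.0 retrieves an AOD below 1.0 when the assumed SSA is raised to 0.94, reproducing the overestimated-SSA, underestimated-AOD behaviour noted in the abstract.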
Digital transceiver design for two-way AF-MIMO relay systems with imperfect CSI
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Chou, Yu-Fei; Chen, Kui-He
2013-09-01
In this paper, a combined optimization of the terminal precoders/equalizers and the single-relay precoder is proposed for an amplify-and-forward (AF) multiple-input multiple-output (MIMO) two-way single-relay system with correlated channel uncertainties. Both the terminal transceivers and the relay precoding matrix are designed based on the minimum mean square error (MMSE) criterion when the terminals are unable to completely cancel self-interference due to imperfect correlated channel state information (CSI). This robust joint optimization problem of beamforming and precoding matrices under power constraints is neither concave nor convex, so a nonlinear matrix-form conjugate gradient (MCG) algorithm is applied to explore locally optimal solutions. Simulation results show that the robust transceiver design is able to effectively overcome the bit-error-rate (BER) loss due to the inclusion of correlated channel uncertainties and residual self-interference.
Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation.
Baumgärtel, Regina M; Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M A; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias
2015-12-30
In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. © The Author(s) 2015.
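Of the instrumental measures above, the intelligibility-weighted signal-to-noise ratio is the simplest to state: per-band SNRs are averaged with band-importance weights. A minimal sketch with made-up weights; real band-importance functions come from the Speech Intelligibility Index standard (ANSI S3.5):

```python
import numpy as np

def intelligibility_weighted_snr(clean_bands, noise_bands, weights):
    """Weighted mean of per-band SNRs (dB); `clean_bands` and `noise_bands`
    are per-frequency-band signal and noise powers, `weights` are
    band-importance weights (here illustrative, not standardized values)."""
    snr_db = 10.0 * np.log10(clean_bands / noise_bands)
    snr_db = np.clip(snr_db, -15.0, 15.0)    # usual limiting range
    return np.sum(weights * snr_db) / np.sum(weights)
```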
Comparing Binaural Pre-processing Strategies I
Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M. A.; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias
2015-01-01
In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. PMID:26721920
Validation of TOMS Aerosol Products using AERONET Observations
NASA Technical Reports Server (NTRS)
Bhartia, P. K.; Torres, O.; Sinyuk, A.; Holben, B.
2002-01-01
The Total Ozone Mapping Spectrometer (TOMS) aerosol algorithm uses measurements of radiances at two near-UV channels in the range 331-380 nm to derive aerosol optical depth and single scattering albedo. Because of the low near-UV surface albedo of all terrestrial surfaces (between 0.02 and 0.08), the TOMS algorithm has the capability of retrieving aerosol properties over both the oceans and the continents. The Aerosol Robotic Network (AERONET) routinely derives spectral aerosol optical depth and single scattering albedo at a large number of sites around the globe. We have performed comparisons of both aerosol optical depth and single scattering albedo derived from TOMS and AERONET. In general, the TOMS aerosol products agree well with the ground-based observations. Results of this validation will be discussed.
NASA Astrophysics Data System (ADS)
Kwok, Ngaiming; Shi, Haiyan; Peng, Yeping; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Rahman, Md Arifur
2018-04-01
Restoring images captured under low illumination is an essential front-end process for most image-based applications. The center-surround Retinex algorithm has been a popular approach to improving image brightness. However, this algorithm in its basic form is known to produce color degradations. To mitigate this problem, here the Single-Scale Retinex algorithm is modified into an edge extractor, while illumination is recovered through a non-linear intensity mapping stage. The derived edges are then integrated with the mapped image to produce the enhanced output. Furthermore, to reduce color distortion, the process is conducted in the magnitude-sorted domain instead of the conventional Red-Green-Blue (RGB) color channels. Experimental results have shown that improvements in mean brightness, colorfulness, saturation, and information content can be obtained.
Optimization of polymer electrolyte membrane fuel cell flow channels using a genetic algorithm
NASA Astrophysics Data System (ADS)
Catlin, Glenn; Advani, Suresh G.; Prasad, Ajay K.
The design of the flow channels in PEM fuel cells directly impacts the transport of reactant gases to the electrodes and affects cell performance. This paper presents results from a study to optimize the geometry of the flow channels in a PEM fuel cell. The optimization process implements a genetic algorithm to rapidly converge on the channel geometry that provides the highest net power output from the cell. In addition, this work implements a method for the automatic generation of parameterized channel domains that are evaluated for performance using a commercial computational fluid dynamics package from ANSYS. The software package includes GAMBIT as the solid modeling and meshing software, the solver FLUENT, and a PEMFC Add-on Module capable of modeling the relevant physical and electrochemical mechanisms that describe PEM fuel cell operation. The result of the optimization process is a set of optimal channel geometry values for the single-serpentine channel configuration. The performance of the optimal geometry is contrasted with a sub-optimal one by comparing contour plots of current density and oxygen and hydrogen concentrations. In addition, the role of convective bypass in bringing fresh reactant to the catalyst layer is examined in detail. The convergence to the optimal geometry is confirmed by a bracketing study which compares the performance of the best individual to those of its neighbors with adjacent parameter values.
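The optimization loop itself is independent of the CFD evaluation. A minimal genetic-algorithm sketch with a made-up analytic stand-in for the net-power objective (in the paper each evaluation is a full FLUENT simulation); selection, crossover, and mutation operate on a two-parameter channel geometry:

```python
import random

def net_power(width, depth):
    """Toy stand-in for the CFD-evaluated objective: peak at an invented
    'optimal' channel width/depth of (1.0, 0.8)."""
    return -(width - 1.0) ** 2 - (depth - 0.8) ** 2

def genetic_algorithm(pop_size=30, generations=60, mut=0.1, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.2, 2.0), rng.uniform(0.2, 2.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: net_power(*g), reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = 0.5 * (a[0] + b[0]) + rng.gauss(0, mut)   # crossover + mutation
            d = 0.5 * (a[1] + b[1]) + rng.gauss(0, mut)
            children.append((w, d))
        pop = parents + children
    return max(pop, key=lambda g: net_power(*g))
```

Because the top half of each generation survives unchanged, the best geometry found is never lost, and the population drifts toward the objective's optimum.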
Zunder, Eli R.; Finck, Rachel; Behbehani, Gregory K.; Amir, El-ad D.; Krishnaswamy, Smita; Gonzalez, Veronica D.; Lorang, Cynthia G.; Bjornson, Zach; Spitzer, Matthew H.; Bodenmiller, Bernd; Fantl, Wendy J.; Pe’er, Dana; Nolan, Garry P.
2015-01-01
Mass-tag cell barcoding (MCB) labels individual cell samples with unique combinatorial barcodes, after which they are pooled for processing and measurement as a single multiplexed sample. The MCB method eliminates variability between samples in antibody staining and instrument sensitivity, reduces antibody consumption, and shortens instrument measurement time. Here, we present an optimized MCB protocol with several improvements over previously described methods. The use of palladium-based labeling reagents expands the number of measurement channels available for mass cytometry and reduces interference with lanthanide-based antibody measurement. An error-detecting combinatorial barcoding scheme allows cell doublets to be identified and removed from the analysis. A debarcoding algorithm that is single cell-based rather than population-based improves the accuracy and efficiency of sample deconvolution. This debarcoding algorithm has been packaged into software that allows rapid and unbiased sample deconvolution. The MCB procedure takes 3–4 h, not including sample acquisition time of ~1 h per million cells. PMID:25612231
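A single-cell debarcoding step of the kind described, per-cell assignment with rejection of doublets and ambiguous events, can be caricatured as follows. The k-of-n scheme, threshold, and separation rule here are invented for illustration; the published algorithm's actual criteria differ:

```python
import numpy as np

def debarcode(cell_intensities, valid_codes, min_separation=0.3):
    """Toy single-cell debarcoding: treat the k highest barcode-channel
    intensities as 'on', require the on/off separation to exceed a threshold,
    and keep the cell only if its binary code is a valid barcode. Doublets and
    ambiguous cells fail one of the two checks and return None."""
    k = int(valid_codes[0].sum())                 # e.g. a 2-of-4 scheme
    order = np.argsort(cell_intensities)[::-1]    # channels, brightest first
    code = np.zeros(len(cell_intensities), dtype=int)
    code[order[:k]] = 1
    separation = cell_intensities[order[k - 1]] - cell_intensities[order[k]]
    if separation < min_separation or not any(
        np.array_equal(code, c) for c in valid_codes
    ):
        return None                               # unassigned
    return code
```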
Real-time single image dehazing based on dark channel prior theory and guided filtering
NASA Astrophysics Data System (ADS)
Zhang, Zan
2017-10-01
Images and videos captured outdoors on foggy days are seriously degraded. To restore images degraded by fog and to overcome the problem of residual fog at edges in traditional dark channel prior algorithms, we propose a new dehazing method. We first locate the fog region in the dark channel map using a quadtree search to obtain an estimate of the transmittance. Then we treat the gray-scale image after guided filtering as the atmospheric light map and remove haze based on it. Box filtering and image down-sampling are also used to improve processing speed. Finally, the atmospheric light scattering model is used to restore the image. Extensive experiments show that the algorithm is effective, efficient, and has a wide range of applications.
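For reference, the standard dark channel prior pipeline that this method builds on (He et al.) can be sketched as below; the paper's quadtree search and guided-filter refinement are not reproduced, and the patch size and omega are the commonly used defaults rather than the authors' settings:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, then over a local patch (the prior)."""
    return minimum_filter(img.min(axis=2), size=patch)

def transmission(img, atmos, patch=15, omega=0.95):
    """Estimated transmission t(x) = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / atmos, patch)

def dehaze(img, atmos, t, t0=0.1):
    """Invert the scattering model I = J*t + A*(1-t):
    J = (I - A) / max(t, t0) + A, with t floored to avoid amplifying noise."""
    return (img - atmos) / np.maximum(t, t0)[..., None] + atmos
```

On a haze-free scene the dark channel is near zero, the estimated transmission is near one, and the inversion returns the input essentially unchanged.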
Development of Single-Channel Hybrid BCI System Using Motor Imagery and SSVEP.
Ko, Li-Wei; Ranga, S S K; Komarov, Oleksii; Chen, Chung-Chiang
2017-01-01
Numerous EEG-based brain-computer interface (BCI) systems that are being developed focus on novel feature extraction algorithms, classification methods, and combinations of existing approaches to create hybrid BCIs. Several recent studies demonstrated various advantages of hybrid BCI systems in terms of improved accuracy or the number of commands available to the user. Still, BCI systems are far from ready for daily use. Achieving high performance with fewer channels is one of the challenging issues that persist, especially in hybrid BCI systems, where multiple channels are necessary to record information from two or more EEG signal components. Therefore, this work proposes a single-channel (C3 or C4) hybrid BCI system that combines motor imagery (MI) and steady-state visually evoked potential (SSVEP) approaches. This study demonstrates that besides MI features, SSVEP features can also be captured from the C3 or C4 channel. The results show that due to the rich feature information (MI and SSVEP) at these channels, the proposed hybrid BCI system outperforms both MI- and SSVEP-based systems, with an average classification accuracy of 85.6 ± 7.7% in a two-class task.
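A toy illustration of why one central channel can serve both paradigms: motor imagery shows up as mu-band (8-12 Hz) power changes, while the SSVEP appears as narrow-band power at the stimulus frequency, so both features can be read from a single channel's spectrum. The frequencies and bands here are generic textbook values, not the study's settings:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of one EEG channel within [f_lo, f_hi] Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].mean()

def hybrid_features(signal, fs, ssvep_hz):
    """Toy single-channel hybrid feature vector: mu-band power (motor
    imagery cue) plus narrow-band power at the SSVEP stimulus frequency."""
    return [band_power(signal, fs, 8.0, 12.0),
            band_power(signal, fs, ssvep_hz - 0.5, ssvep_hz + 0.5)]
```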
Monitoring Churn in Wireless Networks
NASA Astrophysics Data System (ADS)
Holzer, Stephan; Pignolet, Yvonne Anne; Smula, Jasmin; Wattenhofer, Roger
Wireless networks often experience a significant amount of churn, the arrival and departure of nodes. In this paper we propose a distributed algorithm for single-hop networks that detects churn and is resilient to a worst-case adversary. The nodes of the network are notified about changes quickly, in asymptotically optimal time up to an additive logarithmic overhead. We establish a trade-off between saving energy and minimizing the delay until notification for single- and multi-channel networks.
Independent EEG Sources Are Dipolar
Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott
2012-01-01
Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308
NASA Astrophysics Data System (ADS)
O'Shea, Daniel J.; Shenoy, Krishna V.
2018-04-01
Objective. Electrical stimulation is a widely used and effective tool in systems neuroscience, neural prosthetics, and clinical neurostimulation. However, electrical artifacts evoked by stimulation prevent the detection of spiking activity on nearby recording electrodes, which obscures the neural population response evoked by stimulation. We sought to develop a method to clean artifact-corrupted electrode signals recorded on multielectrode arrays in order to recover the underlying neural spiking activity. Approach. We created an algorithm that performs estimation and removal of array artifacts via sequential principal components regression (ERAASR). This approach leverages the similar structure of artifact transients, but not spiking activity, across simultaneously recorded channels on the array, across pulses within a train, and across trials. The ERAASR algorithm requires no special hardware, imposes no requirements on the shape of the artifact or the multielectrode array geometry, and comprises sequential application of straightforward linear methods with intuitive parameters. The approach should be readily applicable to most datasets where stimulation does not saturate the recording amplifier. Main results. The effectiveness of the algorithm is demonstrated in macaque dorsal premotor cortex using acute linear multielectrode array recordings and single electrode stimulation. Large electrical artifacts appeared on all channels during stimulation. After application of ERAASR, the cleaned signals were quiescent on channels with no spontaneous spiking activity, whereas spontaneously active channels exhibited evoked spikes which closely resembled spontaneously occurring spiking waveforms. Significance. We hope that enabling simultaneous electrical stimulation and multielectrode array recording will help elucidate the causal links between neural activity and cognition and facilitate naturalistic sensory prostheses.
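The core idea, artifact structure shared across channels captured by principal components and regressed out of each channel, can be sketched in a single stage (ERAASR itself applies this sequentially across channels, pulses, and trials, with details not reproduced here):

```python
import numpy as np

def remove_shared_artifact(X, n_pcs=2):
    """Toy one-stage sketch of ERAASR-style cleaning. X is (time, channels).
    For each channel, fit it on the top principal components of the *other*
    channels (which capture the shared artifact transient, not the channel's
    own spikes) and subtract the fitted artifact."""
    T, C = X.shape
    cleaned = np.empty_like(X)
    for c in range(C):
        others = np.delete(X, c, axis=1)
        # PCs of the other channels: dominant shared (artifact) structure
        U, s, Vt = np.linalg.svd(others - others.mean(axis=0),
                                 full_matrices=False)
        design = np.column_stack([np.ones(T), U[:, :n_pcs]])  # intercept + PCs
        beta, *_ = np.linalg.lstsq(design, X[:, c], rcond=None)
        cleaned[:, c] = X[:, c] - design @ beta
    return cleaned
```

Because each channel is excluded from its own artifact estimate, activity unique to that channel (e.g. its spikes) is not regressed away.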
NASA Technical Reports Server (NTRS)
Meyer, Kerry; Yang, Yuekui; Platnick, Steven
2016-01-01
This paper presents an investigation of the expected uncertainties of a single channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud-temperature-threshold-based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODIS daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single channel COT retrieval is feasible for EPIC. For ice clouds, single channel retrieval errors are minimal (less than 2 percent) due to the particle-size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10 percent, although for thin clouds (COT less than 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.
Meyer, Kerry; Yang, Yuekui; Platnick, Steven
2018-01-01
This paper presents an investigation of the expected uncertainties of a single channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud temperature threshold based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODIS daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single channel COT retrieval is feasible for EPIC. For ice clouds, single channel retrieval errors are minimal (< 2%) due to the particle size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10%, although for thin clouds (COT < 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study. PMID:29619116
Meyer, Kerry; Yang, Yuekui; Platnick, Steven
2016-01-01
This paper presents an investigation of the expected uncertainties of a single channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud temperature threshold based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODIS daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single channel COT retrieval is feasible for EPIC. For ice clouds, single channel retrieval errors are minimal (< 2%) due to the particle size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10%, although for thin clouds (COT < 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.
NASA Astrophysics Data System (ADS)
Meyer, Kerry; Yang, Yuekui; Platnick, Steven
2016-04-01
This paper presents an investigation of the expected uncertainties of a single-channel cloud optical thickness (COT) retrieval technique, as well as a simple cloud-temperature-threshold-based thermodynamic phase approach, in support of the Deep Space Climate Observatory (DSCOVR) mission. DSCOVR cloud products will be derived from Earth Polychromatic Imaging Camera (EPIC) observations in the ultraviolet and visible spectra. Since EPIC is not equipped with a spectral channel in the shortwave or mid-wave infrared that is sensitive to cloud effective radius (CER), COT will be inferred from a single visible channel with the assumption of appropriate CER values for liquid and ice phase clouds. One month of Aqua MODerate-resolution Imaging Spectroradiometer (MODIS) daytime granules from April 2005 is selected for investigating cloud phase sensitivity, and a subset of these granules that has similar EPIC Sun-view geometry is selected for investigating COT uncertainties. EPIC COT retrievals are simulated with the same algorithm as the operational MODIS cloud products (MOD06), except using fixed phase-dependent CER values. Uncertainty estimates are derived by comparing the single-channel COT retrievals with the baseline bi-spectral MODIS retrievals. Results show that a single-channel COT retrieval is feasible for EPIC. For ice clouds, single-channel retrieval errors are minimal (< 2 %) due to the particle size insensitivity of the assumed ice crystal (i.e., severely roughened aggregate of hexagonal columns) scattering properties at visible wavelengths, while for liquid clouds the error is mostly limited to within 10 %, although for thin clouds (COT < 2) the error can be higher. Potential uncertainties in EPIC cloud masking and cloud temperature retrievals are not considered in this study.
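The two pieces described above, a cloud-temperature-threshold phase decision followed by a single-visible-channel LUT inversion with fixed phase-dependent assumptions, can be sketched as below. The forward curves are invented monotonic stand-ins for radiative-transfer LUTs, and the 233 K threshold is a generic homogeneous-freezing value, not necessarily the study's:

```python
import numpy as np

# Hypothetical reflectance-vs-COT look-up tables, one per assumed phase/CER;
# a real LUT would come from radiative-transfer calculations.
cot_grid = np.linspace(0.0, 100.0, 51)
refl_liquid = 1.0 - np.exp(-0.08 * cot_grid)   # toy forward models
refl_ice = 1.0 - np.exp(-0.06 * cot_grid)

def retrieve_cot(reflectance, cloud_top_temp_k, phase_threshold_k=233.0):
    """Pick the phase LUT by a simple temperature threshold, then invert the
    single visible-channel reflectance to COT by interpolation."""
    lut = refl_ice if cloud_top_temp_k < phase_threshold_k else refl_liquid
    return np.interp(reflectance, lut, cot_grid)
```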
RAC-multi: reader anti-collision algorithm for multichannel mobile RFID networks.
Shin, Kwangcheol; Song, Wonil
2010-01-01
At present, RFID is installed on mobile devices such as mobile phones or PDAs and provides a means to obtain information about objects equipped with an RFID tag over multi-channeled telecommunication networks. To use mobile RFIDs, reader collision problems should be addressed, given that readers are continuously moving. Moreover, in a multichannel environment for mobile RFIDs, interference between adjacent channels should be considered. This work first defines a new concept of a reader collision problem between adjacent channels and then suggests a novel reader anti-collision algorithm for RFID readers that use multiple channels. To avoid interference with adjacent channels, the suggested algorithm separates the data channels into odd- and even-numbered channels and allocates odd-numbered channels to readers first. It also sets an unused channel between the control channel and the data channels to ensure that control messages and the signals of adjacent channels experience no interference. Experimental results show that the suggested algorithm achieves throughput improvements ranging from 29% to 46% in tag identification compared to the GENTLE reader anti-collision algorithm for multichannel RFID networks.
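The channel layout described, a control channel, a guard channel, and data channels split into odd and even groups with odd-numbered channels allocated first, can be caricatured as a small allocation routine. This is a reading of the abstract for illustration, not the paper's actual protocol logic:

```python
def allocate_channels(n_channels, readers):
    """Toy sketch of the RAC-multi layout: channel 0 is the control channel,
    channel 1 is left unused as a guard band, and the remaining data channels
    are handed out odd-numbered first, so early readers never sit on adjacent
    channels and never border the control channel."""
    data = list(range(2, n_channels))
    odd = [c for c in data if c % 2 == 1]
    even = [c for c in data if c % 2 == 0]
    order = odd + even
    return {reader: order[i % len(order)] for i, reader in enumerate(readers)}
```

With 8 channels and three readers, the readers land on channels 3, 5, and 7: pairwise non-adjacent, and separated from control channel 0 by the unused channel 1.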
RAC-Multi: Reader Anti-Collision Algorithm for Multichannel Mobile RFID Networks
Shin, Kwangcheol; Song, Wonil
2010-01-01
At present, RFID is installed on mobile devices such as mobile phones or PDAs and provides a means to obtain information about objects equipped with an RFID tag over multi-channeled telecommunication networks. To use mobile RFIDs, reader collision problems should be addressed, given that readers are continuously moving. Moreover, in a multichannel environment for mobile RFIDs, interference between adjacent channels should be considered. This work first defines a new concept of a reader collision problem between adjacent channels and then suggests a novel reader anti-collision algorithm for RFID readers that use multiple channels. To avoid interference with adjacent channels, the suggested algorithm separates the data channels into odd- and even-numbered channels and allocates odd-numbered channels to readers first. It also sets an unused channel between the control channel and the data channels to ensure that control messages and the signals of adjacent channels experience no interference. Experimental results show that the suggested algorithm achieves throughput improvements ranging from 29% to 46% in tag identification compared to the GENTLE reader anti-collision algorithm for multichannel RFID networks. PMID:22315528
Software algorithm and hardware design for real-time implementation of new spectral estimator
2014-01-01
Background Real-time spectral analyzers can be difficult to implement for PC computer-based systems because of the potential for high computational cost and algorithm complexity. In this work a new spectral estimator (NSE) is developed for real-time analysis, and compared with the discrete Fourier transform (DFT). Method Clinical data in the form of 216 fractionated atrial electrogram sequences were used as inputs. The sample rate for acquisition was 977 Hz, or approximately 1 millisecond between digital samples. Real-time NSE power spectra were generated for 16,384 consecutive data points. The same data sequences were used for spectral calculation using a radix-2 implementation of the DFT. The NSE algorithm was also developed for implementation as a real-time spectral analyzer electronic circuit board. Results The average interval for a single real-time spectral calculation in software was 3.29 μs for NSE versus 504.5 μs for DFT. Thus for real-time spectral analysis, the NSE algorithm is approximately 150× faster than the DFT. Over a 1 millisecond sampling period, the NSE algorithm had the capability to spectrally analyze a maximum of 303 data channels, while the DFT algorithm could only analyze a single channel. Moreover, for the 8 second sequences, the NSE spectral resolution in the 3-12 Hz range was 0.037 Hz while the DFT spectral resolution was only 0.122 Hz. The NSE was also found to be implementable as a standalone spectral analyzer board using approximately 26 integrated circuits at a cost of approximately $500. The software files used for analysis are included as a supplement; please see Additional files 1 and 2. Conclusions The NSE real-time algorithm has low computational cost and complexity, and is implementable in both software and hardware for 1 millisecond updates of multichannel spectra. The algorithm may be helpful to guide radiofrequency catheter ablation in real time. PMID:24886214
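The NSE's internals are not given in the abstract, but the headline point, that a recursive per-sample estimator can cost O(1) per channel while a block DFT costs O(N log N), can be illustrated with the classic sliding-DFT recurrence for a single frequency bin. This is an analogy for the cost argument, not the authors' NSE:

```python
import cmath

class SlidingDFTBin:
    """O(1)-per-sample recursive update of one DFT bin over a length-n window,
    instead of recomputing a full transform every sample."""
    def __init__(self, n, k):
        self.n, self.k = n, k
        self.twiddle = cmath.exp(2j * cmath.pi * k / n)
        self.window = [0.0] * n     # circular buffer of the last n samples
        self.pos = 0
        self.value = 0.0 + 0.0j

    def push(self, x):
        oldest = self.window[self.pos]
        self.window[self.pos] = x
        self.pos = (self.pos + 1) % self.n
        # Classic sliding-DFT recurrence: X_new = (X_old + x - oldest) * W
        self.value = (self.value + x - oldest) * self.twiddle
        return self.value
```

Feeding a cosine at exactly bin k yields a bin magnitude of n/2 once the window fills, matching the block DFT of the same window.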
Brownian Dynamics simulations of model colloids in channel geometries and external fields
NASA Astrophysics Data System (ADS)
Siems, Ullrich; Nielaba, Peter
2018-04-01
We review the results of Brownian Dynamics simulations of colloidal particles in external fields confined in channels. Super-paramagnetic Brownian particles are well-suited two-dimensional model systems for a variety of problems on different length scales, ranging from pedestrians walking through a bottleneck to ions passing through ion channels in living cells. In such systems, confinement into channels can have a great influence on the diffusion and transport properties. In particular, we discuss the crossover from single-file diffusion in a narrow channel to diffusion in the extended two-dimensional system. To this end, a new algorithm for computing the mean square displacement (MSD) on logarithmic time scales is presented. In a second study, interacting colloidal particles are dragged over a washboard potential while additionally confined in a two-dimensional micro-channel. In this system, kink and anti-kink solitons determine the depinning of the particles from the periodic potential.
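A straightforward way to evaluate the MSD at logarithmically spaced lag times is sketched below. This post-processing version stores the full trajectory, unlike an on-the-fly algorithm of the kind the abstract describes, and all parameter choices are illustrative:

```python
import numpy as np

def msd_log(traj, n_lags=20):
    """Mean square displacement at log-spaced lag times.

    traj: array of shape (n_steps, n_dim) -- one particle trajectory.
    Returns (lags, msd). Sketch only: the paper's algorithm computes the
    MSD on logarithmic time scales on the fly, which this version does not.
    """
    n_steps = len(traj)
    # unique integer lags, logarithmically spaced from 1 to n_steps - 1
    lags = np.unique(np.logspace(0, np.log10(n_steps - 1), n_lags).astype(int))
    msd = np.empty(len(lags))
    for i, lag in enumerate(lags):
        disp = traj[lag:] - traj[:-lag]          # all displacements at this lag
        msd[i] = np.mean(np.sum(disp**2, axis=1))
    return lags, msd
```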
NASA Astrophysics Data System (ADS)
Chang, Huan; Yin, Xiao-li; Cui, Xiao-zhou; Zhang, Zhi-chao; Ma, Jian-xin; Wu, Guo-hua; Zhang, Li-jia; Xin, Xiang-jun
2017-12-01
Practical orbital angular momentum (OAM)-based free-space optical (FSO) communications commonly experience serious performance degradation and crosstalk due to atmospheric turbulence. In this paper, we propose a wave-front sensorless adaptive optics (WSAO) system with a modified Gerchberg-Saxton (GS)-based phase retrieval algorithm to correct distorted OAM beams. We use the spatial phase perturbation (SPP) GS algorithm with a distorted probe Gaussian beam as the only input. The principle and parameter selections of the algorithm are analyzed, and the performance of the algorithm is discussed. The simulation results show that the proposed adaptive optics (AO) system can significantly compensate for distorted OAM beams in single-channel or multiplexed OAM systems, which provides new insights into adaptive correction systems using OAM beams.
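For orientation, the base (unmodified) Gerchberg-Saxton iteration between two FFT-related planes can be sketched as follows; the paper's SPP-GS variant with a distorted probe Gaussian beam goes beyond this textbook loop:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=200, seed=0):
    """Classic GS phase retrieval: alternate between two planes linked by an
    FFT, imposing the known amplitude in each plane while keeping the phase.

    This is the textbook baseline only, not the paper's SPP-GS algorithm.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, source_amp.shape)   # random initial phase
    field = source_amp * np.exp(1j * phase)
    for _ in range(n_iter):
        far = np.fft.fft2(field)                          # propagate to target plane
        far = target_amp * np.exp(1j * np.angle(far))     # impose target amplitude
        near = np.fft.ifft2(far)                          # propagate back
        field = source_amp * np.exp(1j * np.angle(near))  # impose source amplitude
    return np.angle(field)  # recovered source-plane phase estimate
```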
Evaluation of stochastic differential equation approximation of ion channel gating models.
Bruce, Ian C
2009-04-01
Fox and Lu derived an algorithm based on stochastic differential equations for approximating the kinetics of ion channel gating that is simpler and faster than "exact" algorithms for simulating Markov process models of channel gating. However, the approximation may not be sufficiently accurate to predict statistics of action potential generation in some cases. The objective of this study was to develop a framework for analyzing the inaccuracies and determining their origin. Simulations of a patch of membrane with voltage-gated sodium and potassium channels were performed using an exact algorithm for the kinetics of channel gating and the approximate algorithm of Fox & Lu. The Fox & Lu algorithm assumes that channel gating particle dynamics have a stochastic term that is uncorrelated, zero-mean Gaussian noise, whereas the results of this study demonstrate that in many cases the stochastic term in the Fox & Lu algorithm should be correlated and non-Gaussian noise with a non-zero mean. The results indicate that: (i) the source of the inaccuracy is that the Fox & Lu algorithm does not adequately describe the combined behavior of the multiple activation particles in each sodium and potassium channel, and (ii) the accuracy does not improve with increasing numbers of channels.
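The Fox-Lu approximation discussed above replaces Markov channel gating with a Langevin equation driven by uncorrelated zero-mean Gaussian noise. A minimal voltage-clamp sketch for a single potassium gating variable (illustrative rates and parameters, not the study's membrane model):

```python
import numpy as np

def fox_lu_n_gate(alpha, beta, n0, n_channels, dt, n_steps, seed=0):
    """Fox-Lu style Langevin approximation for one gating variable.

    Each K+ channel opens via 4 independent n-particles; Fox & Lu model the
    fraction n of open particles with an SDE whose stochastic term is
    uncorrelated zero-mean Gaussian noise -- the very assumption the study
    above shows can be inaccurate. Rates are held fixed (voltage clamp).
    """
    rng = np.random.default_rng(seed)
    n = n0
    open_prob = np.empty(n_steps)
    for t in range(n_steps):
        drift = alpha * (1.0 - n) - beta * n
        # Fox & Lu noise variance scales inversely with channel count
        var = (alpha * (1.0 - n) + beta * n) / n_channels
        n += drift * dt + np.sqrt(max(var, 0.0) * dt) * rng.standard_normal()
        n = min(max(n, 0.0), 1.0)        # keep the fraction in [0, 1]
        open_prob[t] = n ** 4            # open probability of a 4-particle channel
    return open_prob
```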
ID card number detection algorithm based on convolutional neural network
NASA Astrophysics Data System (ADS)
Zhu, Jian; Ma, Hanjie; Feng, Jie; Dai, Leiyan
2018-04-01
In this paper, a new detection algorithm based on a convolutional neural network (CNN) is presented to enable fast and convenient extraction of ID-card information in multiple scenarios. The algorithm runs on a mobile device with the Android operating system to locate and extract the ID number. It exploits the characteristic color distribution of the ID card to select an appropriate channel component; applies threshold segmentation, noise processing and morphological processing to binarize the image; corrects tilted images using rotation and the projection method; and finally extracts single characters by the projection method and recognizes them with the CNN. Tests show that processing a single ID-number image, from extraction to recognition, takes about 80 ms with an accuracy of about 99%, so the method can be applied in practical production and living environments.
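The projection step used for single-character extraction can be sketched as follows (a minimal vertical-projection splitter; thresholding, denoising and rotation correction are assumed to have been applied upstream, and parameters are illustrative):

```python
import numpy as np

def split_digits(binary_img, min_width=2):
    """Split a binarized ID-number strip into single characters.

    Vertical projection: columns with no foreground pixels separate digits.
    binary_img: 2-D array, nonzero = ink. Returns a list of (start, end)
    column ranges, one per character candidate.
    """
    profile = (binary_img != 0).sum(axis=0)   # ink pixels per column
    spans, start = [], None
    for col, count in enumerate(profile):
        if count > 0 and start is None:
            start = col                        # character begins
        elif count == 0 and start is not None:
            if col - start >= min_width:       # ignore speckle-thin runs
                spans.append((start, col))
            start = None
    if start is not None and len(profile) - start >= min_width:
        spans.append((start, len(profile)))    # character touching right edge
    return spans
```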
Using the time shift in single pushbroom datatakes to detect ships and their heading
NASA Astrophysics Data System (ADS)
Willburger, Katharina A. M.; Schwenk, Kurt
2017-10-01
The detection of ships from remote sensing data has become an essential task for maritime security. The variety of application scenarios includes piracy, illegal fishery, ocean dumping and ships carrying refugees. While techniques using data from SAR sensors for ship detection are widely established, little literature discusses algorithms based on imagery from optical camera systems. A ship detection algorithm for optical pushbroom data has been developed. It takes advantage of the special detector assembly of most of those scanners, which allows, in addition to the detection of a ship, the calculation of its heading from a single acquisition. The proposed algorithm for the detection of moving ships was developed with RapidEye imagery. The algorithm consists of three main steps: the creation of a land-water mask, object extraction and the deeper examination of each single object. The latter step is built up from several spectral and geometric filters, making heavy use of the inter-channel displacement typical for pushbroom sensors with multiple CCD lines, finally yielding a set of ships and their directions of movement. The working principle of time-shifted pushbroom sensors and the developed algorithm are explained in detail. Furthermore, we present our first results and give an outlook on future improvements.
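Once the same ship is located in two time-shifted channels, its heading and speed follow from simple geometry. A hedged sketch with illustrative values (the actual inter-channel lag depends on the sensor's CCD-line layout):

```python
import math

def heading_from_displacement(dx_m, dy_m, dt_s):
    """Heading and speed of a moving ship from its apparent displacement
    between two time-shifted pushbroom channels.

    dx_m, dy_m: east/north displacement (metres) between the two channel
    images; dt_s: inter-channel time lag (seconds). Illustrative only.
    Returns (heading_deg, speed_mps), heading measured clockwise from north.
    """
    speed = math.hypot(dx_m, dy_m) / dt_s            # metres per second
    heading = math.degrees(math.atan2(dx_m, dy_m))   # 0 deg = north
    return heading % 360.0, speed
```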
Adaptive recurrence quantum entanglement distillation for two-Kraus-operator channels
NASA Astrophysics Data System (ADS)
Ruan, Liangzhong; Dai, Wenhan; Win, Moe Z.
2018-05-01
Quantum entanglement serves as a valuable resource for many important quantum operations. A pair of entangled qubits can be shared between two agents by first preparing a maximally entangled qubit pair at one agent, and then sending one of the qubits to the other agent through a quantum channel. In this process, the deterioration of entanglement is inevitable since the noise inherent in the channel contaminates the qubit. To address this challenge, various quantum entanglement distillation (QED) algorithms have been developed. Among them, recurrence algorithms have advantages in terms of implementability and robustness. However, the efficiency of recurrence QED algorithms has not been investigated thoroughly in the literature. This paper puts forth two recurrence QED algorithms that adapt to the quantum channel to tackle the efficiency issue. The proposed algorithms have guaranteed convergence for quantum channels with two Kraus operators, which include phase-damping and amplitude-damping channels. Analytical results show that the convergence speed of these algorithms is improved from linear to quadratic and one of the algorithms achieves the optimal speed. Numerical results confirm that the proposed algorithms significantly improve the efficiency of QED.
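As a baseline for the recurrence QED family discussed above, the fidelity map of one standard (non-adaptive) BBPSSW recurrence step on Werner states can be written down directly; the paper's adaptive algorithms differ from this textbook map:

```python
def bbpssw_step(f):
    """One round of the textbook BBPSSW recurrence protocol on Werner
    states with fidelity f. Two pairs are consumed; on success one pair
    of higher fidelity survives. Returns (f_next, p_success).

    This is the standard baseline only, not the adaptive algorithms above.
    """
    a, b = f, (1.0 - f) / 3.0
    # success probability and post-selected fidelity of the kept pair
    p = (a + b) ** 2 + (2.0 * b) ** 2
    f_next = (a * a + b * b) / p
    return f_next, p
```

For any f > 1/2 the map increases fidelity, which is why iterating the step distills entanglement; f = 1/2 and f = 1 are fixed points.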
Balachandar, Arjun; Prescott, Steven A
2018-05-01
Distinct spiking patterns may arise from qualitative differences in ion channel expression (i.e. when different neurons express distinct ion channels) and/or when quantitative differences in expression levels qualitatively alter the spike generation process. We hypothesized that spiking patterns in neurons of the superficial dorsal horn (SDH) of spinal cord reflect both mechanisms. We reproduced SDH neuron spiking patterns by varying densities of Kv1- and A-type potassium conductances. Plotting the spiking patterns that emerge from different density combinations revealed spiking-pattern regions separated by boundaries (bifurcations). This map suggests that certain spiking pattern combinations occur when the distribution of potassium channel densities straddles boundaries, whereas other spiking patterns reflect distinct patterns of ion channel expression. The former mechanism may explain why certain spiking patterns co-occur in genetically identified neuron types. We also present algorithms to predict spiking pattern proportions from ion channel density distributions, and vice versa. Neurons are often classified by spiking pattern. Yet, some neurons exhibit distinct patterns under subtly different test conditions, which suggests that they operate near an abrupt transition, or bifurcation. A set of such neurons may exhibit heterogeneous spiking patterns not because of qualitative differences in which ion channels they express, but rather because quantitative differences in expression levels cause neurons to operate on opposite sides of a bifurcation. Neurons in the spinal dorsal horn, for example, respond to somatic current injection with patterns that include tonic, single, gap, delayed and reluctant spiking. It is unclear whether these patterns reflect five cell populations (defined by distinct ion channel expression patterns), heterogeneity within a single population, or some combination thereof.
We reproduced all five spiking patterns in a computational model by varying the densities of a low-threshold (Kv1-type) potassium conductance and an inactivating (A-type) potassium conductance, and found that single, gap, delayed and reluctant spiking arise when the joint probability distribution of those channel densities spans two intersecting bifurcations that divide the parameter space into quadrants, each associated with a different spiking pattern. Tonic spiking likely arises from a separate distribution of potassium channel densities. These results argue in favour of two cell populations, one characterized by tonic spiking and the other by heterogeneous spiking patterns. We present algorithms to predict spiking pattern proportions based on ion channel density distributions and, conversely, to estimate ion channel density distributions based on spiking pattern proportions. The implications for classifying cells based on spiking pattern are discussed. © 2018 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
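The idea of predicting spiking-pattern proportions from a channel-density distribution can be illustrated with a Monte Carlo sketch, assuming (as a strong simplification) two straight bifurcation boundaries that split the density plane into quadrants; the boundary positions and quadrant-to-pattern labels here are hypothetical placeholders, not the paper's fitted values:

```python
import numpy as np

def spiking_pattern_proportions(mean, cov, g1_star, ga_star, n=100_000, seed=0):
    """Predict pattern proportions from a joint Gaussian distribution of
    two potassium conductance densities (gKv1, gA), assuming straight
    bifurcation boundaries at g1_star and ga_star. Hypothetical labels.
    """
    rng = np.random.default_rng(seed)
    g = rng.multivariate_normal(mean, cov, size=n)   # sample density pairs
    hi1, hia = g[:, 0] > g1_star, g[:, 1] > ga_star  # which side of each boundary
    return {
        "single":    np.mean(hi1 & hia),
        "gap":       np.mean(hi1 & ~hia),
        "delayed":   np.mean(~hi1 & hia),
        "reluctant": np.mean(~hi1 & ~hia),
    }
```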
Cardiac sodium channel Markov model with temperature dependence and recovery from inactivation.
Irvine, L A; Jafri, M S; Winslow, R L
1999-01-01
A Markov model of the cardiac sodium channel is presented. The model is similar to the CA1 hippocampal neuron sodium channel model developed by Kuo and Bean (1994. Neuron. 12:819-829) with the following modifications: 1) an additional open state is added; 2) open-inactivated transitions are made voltage-dependent; and 3) channel rate constants are exponential functions of enthalpy, entropy, and voltage and have explicit temperature dependence. Model parameters are determined using a simulated annealing algorithm to minimize the error between model responses and various experimental data sets. The model reproduces a wide range of experimental data including ionic currents, gating currents, tail currents, steady-state inactivation, recovery from inactivation, and open time distributions over a temperature range of 10 degrees C to 25 degrees C. The model also predicts measures of single channel activity such as first latency, probability of a null sweep, and probability of reopening. PMID:10096885
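A generic simulated-annealing minimizer of the kind used for the parameter fit above can be sketched as follows (the real cost function compares model responses with the experimental data sets; here it is whatever error function the caller supplies, and all tuning constants are illustrative):

```python
import math
import random

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.995, n_iter=5000, seed=0):
    """Minimize cost(x) over a real-valued parameter vector by simulated
    annealing: random Gaussian moves, always accept improvements, accept
    uphill moves with Boltzmann probability at a geometrically cooled
    temperature. Sketch only, not the paper's exact schedule.
    """
    rng = random.Random(seed)
    x, fx, t = list(x0), cost(x0), t0
    best_x, best_f = list(x), fx
    for _ in range(n_iter):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = list(x), fx
        t *= cooling   # geometric cooling schedule
    return best_x, best_f
```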
2014-01-01
We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of convergence speed and steady-state error by incorporating a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware variants in terms of both convergence speed and steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
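The zero attractor at the heart of SL0-APA is the gradient of the smooth l0 approximation. A sketch of that term alone (parameter names and values are illustrative, not the paper's):

```python
import numpy as np

def sl0_zero_attractor(w, rho, sigma):
    """Zero-attraction term added to the tap-weight update in SL0-APA.

    SL0 approximates the l0 norm by sum(1 - exp(-w^2 / (2 sigma^2))); the
    term below is -rho times its gradient. It pulls small taps toward zero
    while leaving large taps nearly untouched, promoting channel sparsity.
    """
    return -rho * (w / sigma**2) * np.exp(-w**2 / (2.0 * sigma**2))
```

In the full algorithm this term is simply added to the standard APA weight update on every iteration.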
NASA Astrophysics Data System (ADS)
Silva, João Carlos; Souto, Nuno; Cercas, Francisco; Dinis, Rui
An MMSE (Minimum Mean Square Error) DS-CDMA (Direct Sequence-Code Division Multiple Access) receiver coupled with a low-complexity iterative interference suppression algorithm was devised for a MIMO/BLAST (Multiple Input, Multiple Output / Bell Laboratories Layered Space Time) system in order to improve system performance, considering frequency-selective fading channels. The scheme is compared against the simple MMSE receiver, for both QPSK and 16QAM modulations, under SISO (Single Input, Single Output) and MIMO systems, the latter with 2Tx by 2Rx and 4Tx by 4Rx antennas (MIMO order 2 and 4, respectively). To assess its performance in an existing system, the uncoded UMTS HSDPA (High Speed Downlink Packet Access) standard was considered.
An optimized algorithm for multiscale wideband deconvolution of radio astronomical images
NASA Astrophysics Data System (ADS)
Offringa, A. R.; Smirnov, O.
2017-10-01
We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique, and uses a convex optimization technique with isotropic undecimated wavelets as dictionary. On simple well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
Routing channels in VLSI layout
NASA Astrophysics Data System (ADS)
Cai, Hong
A number of algorithms for the automatic routing of interconnections in Very Large Scale Integration (VLSI) building-block layouts are presented. Algorithms for the topological definition of channels, the global routing and the geometrical definition of channels are presented. In contrast to traditional approaches the definition and ordering of the channels is done after the global routing. This approach has the advantage that global routing information can be taken into account to select the optimal channel structure. A polynomial algorithm for the channel definition and ordering problem is presented. The existence of a conflict-free channel structure is guaranteed by enforcing a sliceable placement. Algorithms for finding the shortest connection path are described. A separate algorithm is developed for the power net routing, because the two power nets must be planarly routed with variable wire width. An integrated placement and routing system for generating building-block layout is briefly described. Some experimental results and design experiences in using the system are also presented. Very good results are obtained.
Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications
NASA Astrophysics Data System (ADS)
Qian, Xuewen; Deng, Honggui; He, Hailang
2017-10-01
Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and heavily deteriorate communication performance. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in terms of both bit error ratio (BER) and frequency response estimation.
NASA Astrophysics Data System (ADS)
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
We propose a globally convergent baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The unimodal cost function relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severely frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy. However, comparable performance is obtained for constant modulus signals.
Mitigation of intra-channel nonlinearities using a frequency-domain Volterra series equalizer.
Guiomar, Fernando P; Reis, Jacklyn D; Teixeira, António L; Pinto, Armando N
2012-01-16
We address the issue of intra-channel nonlinear compensation using a Volterra series nonlinear equalizer based on an analytical closed-form solution for the 3rd-order Volterra kernel in the frequency domain. The performance of the method is investigated through numerical simulations for a single-channel optical system using a 20 Gbaud NRZ-QPSK test signal propagated over 1600 km of both standard single-mode fiber and non-zero dispersion-shifted fiber. We carry out performance and computational effort comparisons with the well-known backward propagation split-step Fourier (BP-SSF) method. The alias-free frequency-domain implementation of the Volterra series nonlinear equalizer makes it an attractive approach for working at low sampling rates, enabling it to surpass the maximum performance of BP-SSF at 2× oversampling. Linear and nonlinear equalization can be treated independently, providing more flexibility to the equalization subsystem. The parallel structure of the algorithm is also a key advantage in terms of real-time implementation.
An enhanced multi-channel bacterial foraging optimization algorithm for MIMO communication system
NASA Astrophysics Data System (ADS)
Palanimuthu, Senthilkumar Jayalakshmi; Muthial, Chandrasekaran
2017-04-01
Channel estimation and optimisation are the main challenging tasks in Multiple Input Multiple Output (MIMO) wireless communication systems. In this work, a Multi-Channel Bacterial Foraging Optimization Algorithm approach is proposed for antenna selection in a transmission area. The main advantage of this method is that it effectively reduces the loss of bandwidth during data transmission. Here, we consider channel estimation and optimisation for improving the transmission speed and reducing unused bandwidth. Initially, the message is given to the input of the communication system. Then, symbol mapping is performed to convert the message into signals, which are encoded using a space-time encoding technique. The single signal is divided into multiple signals that are given to the input of the space-time precoder, and multiplexing is applied for transmission channel estimation. In this paper, the Rayleigh channel, a Gaussian-distribution-type channel, is selected based on the bandwidth range. Then demultiplexing, the reverse function of multiplexing, splits the combined signal arriving from the medium back into the original information signals. Furthermore, the long-term evolution technique is used for scheduling time to channels during transmission, and a hidden Markov model is employed to predict the channel state information. Finally, the signals are decoded and the reconstructed signal is obtained after the scheduling process. The experimental results evaluate the performance of the proposed MIMO communication system in terms of bit error rate, mean squared error, average throughput, outage capacity and signal-to-interference-plus-noise ratio.
NASA Astrophysics Data System (ADS)
Makrakis, Dimitrios; Mathiopoulos, P. Takis
A maximum likelihood sequential decoder for the reception of digitally modulated signals with single or multiamplitude constellations transmitted over a multiplicative, nonselective fading channel is derived. It is shown that its structure consists of a combination of envelope, multiple differential, and coherent detectors. The outputs of each of these detectors are jointly processed by means of an algorithm. This algorithm is presented in a recursive form. The derivation of the new receiver is general enough to accommodate uncoded as well as coded (e.g., trellis-coded) schemes. Performance evaluation results for a reduced-complexity trellis-coded QPSK system have demonstrated that the proposed receiver dramatically reduces the error floors caused by fading. At Eb/N0 = 20 dB the new receiver structure results in bit-error-rate reductions of more than three orders of magnitude compared to a conventional Viterbi receiver, while being reasonably simple to implement.
Liu, Chen-Yi; Goertzen, Andrew L
2013-07-21
An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
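The iterative position-weighted centre-of-gravity scheme can be sketched as follows (a minimal version assuming a flat list of channel positions and signals; the Gaussian weighting width sigma is the quantity the paper optimizes per scintillator configuration):

```python
import numpy as np

def iterative_gaussian_cog(xs, ys, signals, sigma, n_iter=20):
    """Iterative position-weighted centre of gravity for a SiPM array.

    Start from the plain centre of gravity, then repeatedly re-weight each
    channel's signal by a Gaussian centred on the current position estimate,
    which suppresses the bias from light spread into distant channels.
    """
    xs, ys, s = map(np.asarray, (xs, ys, signals))
    x = np.sum(xs * s) / np.sum(s)          # standard CoG as initial guess
    y = np.sum(ys * s) / np.sum(s)
    for _ in range(n_iter):
        w = s * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
        x = np.sum(xs * w) / np.sum(w)      # Gaussian-weighted centroid update
        y = np.sum(ys * w) / np.sum(w)
    return x, y
```

The paper reports convergence in under 20 iterations, which motivates the fixed iteration count here.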
NASA Astrophysics Data System (ADS)
Cominelli, A.; Acconcia, G.; Caldi, F.; Peronio, P.; Ghioni, M.; Rech, I.
2018-02-01
Time-Correlated Single Photon Counting (TCSPC) is a powerful tool that permits recording extremely fast optical signals with a precision down to a few picoseconds. On the other hand, it is recognized as a relatively slow technique, especially when a large time-resolved image is acquired by exploiting a single acquisition channel and a scanning system. During the last years, much effort has been made towards the parallelization of many acquisition and conversion chains. In particular, the exploitation of Single-Photon Avalanche Diodes in standard CMOS technology has paved the way to the integration of thousands of independent channels on the same chip. Unfortunately, the presence of a large number of detectors can give rise to a huge rate of events, which can easily lead to the saturation of the transfer rate toward the elaboration unit. As a result, a smart readout approach is needed to guarantee an efficient exploitation of the limited transfer bandwidth. We recently introduced a novel readout architecture, aimed at maximizing the counting efficiency of the system in typical TCSPC measurements. It features a limited number of high-performance converters, which are shared by a much larger array, while a smart routing logic provides dynamic multiplexing between the two parts. Here we propose a novel routing algorithm, which exploits standard digital gates distributed among a large 32x32 array to ensure a dynamic connection between detectors and external time-measurement circuits.
Computer Aided Synthesis or Measurement Schemes for Telemetry applications
1997-09-02
5.2.5. Frame structure generation The algorithm generating the frame structure should take as inputs the sampling frequency requirements of the channels...these channels into the frame structure. Generally there can be a lot of ways to divide channels among groups. The algorithm implemented in...groups) first. The algorithm uses the function "try_permutation" recursively to distribute channels among the groups, and the function "try_subtable
EDMC: An enhanced distributed multi-channel anti-collision algorithm for RFID reader system
NASA Astrophysics Data System (ADS)
Zhang, YuJing; Cui, Yinghua
2017-05-01
In this paper, we propose an enhanced distributed multi-channel reader anti-collision algorithm for RFID environments, based on the distributed multi-channel reader anti-collision algorithm for RFID environments (DiMCA). We propose a monitoring method to decide whether a reader has received the latest control message after it selects the data channel. The simulation results show that it reduces interrogation delay.
Joint digital signal processing for superchannel coherent optical communication systems.
Liu, Cheng; Pan, Jie; Detwiler, Thomas; Stark, Andrew; Hsueh, Yu-Ting; Chang, Gee-Kung; Ralph, Stephen E
2013-04-08
Ultra-high-speed optical communication systems which can support ≥ 1 Tb/s per channel transmission will soon be required to meet the increasing capacity demand. However, 1 Tb/s over a single carrier requires a high-level modulation format (e.g. 1024QAM), a high baud rate, or both. Alternatively, grouping a number of tightly spaced "sub-carriers" to form a terabit superchannel increases channel capacity while minimizing the need for high-level modulation formats and high baud rates, which may allow existing formats, baud rates and components to be exploited. In ideal Nyquist-WDM superchannel systems, optical subcarriers with rectangular spectra are tightly packed at a channel spacing equal to the baud rate, thus achieving the Nyquist bandwidth limit. However, in practical Nyquist-WDM systems, precise electrical or optical control of channel spectra is required to avoid strong inter-channel interference (ICI). Here, we propose and demonstrate a new "super receiver" architecture for practical Nyquist-WDM systems, which jointly detects and demodulates multiple channels simultaneously and mitigates the penalties associated with the limitations of generating ideal Nyquist-WDM spectra. Our receiver-side solution relaxes the filter requirements imposed on the transmitter. Two joint DSP algorithms are developed for linear ICI cancellation and joint carrier-phase recovery. Improved system performance is observed with both experimental and simulation data. Performance analysis under different system configurations is conducted to demonstrate the feasibility and robustness of the proposed joint DSP algorithms.
NASA Astrophysics Data System (ADS)
Sokolov, M. A.
This handbook treats the design and analysis of pulsed radar receivers, with emphasis on elements (especially IC elements) that implement optimal and suboptimal algorithms. The design methodology is developed from the viewpoint of statistical communications theory. Particular consideration is given to the synthesis of single-channel and multichannel detectors, the design of analog and digital signal-processing devices, and the analysis of IF amplifiers.
Evaluation of the operational SAR-based Baltic Sea ice concentration products
NASA Astrophysics Data System (ADS)
Karvonen, Juha
Sea ice concentration is an important ice parameter both for weather and climate modeling and for sea ice navigation. We have developed a fully automated algorithm for sea ice concentration retrieval using dual-polarized ScanSAR wide mode RADARSAT-2 data. RADARSAT-2 is a C-band SAR instrument enabling dual-polarized acquisition in ScanSAR mode. The swath width for the RADARSAT-2 ScanSAR mode is about 500 km, making it very suitable for operational sea ice monitoring. The polarization combination used in our concentration estimation is HH/HV. The SAR data are first preprocessed; the preprocessing consists of geo-rectification to the Mercator projection, incidence angle correction for both polarization channels, and SAR mosaicking. After preprocessing, a segmentation is performed on the SAR mosaics, and some single-channel and dual-channel features are computed for each SAR segment. Finally, the SAR concentration is estimated based on these segment-wise features. The algorithm is similar to that introduced in Karvonen 2014. The ice concentration is computed daily using a daily RADARSAT-2 SAR mosaic as input, and it thus gives the concentration estimate at each Baltic Sea location based on the most recent SAR data at that location. The algorithm has been run in an operational test mode since January 2014. We present an evaluation of the SAR-based concentration estimates for the Baltic ice season 2014 by comparing the SAR results with gridded Finnish Ice Service ice charts and with ice concentration estimates from a radiometer algorithm (AMSR-2 Bootstrap algorithm results). References: J. Karvonen, Baltic Sea Ice Concentration Estimation Based on C-Band Dual-Polarized SAR Data, IEEE Transactions on Geoscience and Remote Sensing, in press, DOI: 10.1109/TGRS.2013.2290331, 2014.
Evaluation of Dynamic Channel and Power Assignment for Cognitive Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syed A. Ahmad; Umesh Shukla; Ryan E. Irwin
2011-03-01
In this paper, we develop a unifying optimization formulation to describe the Dynamic Channel and Power Assignment (DCPA) problem and an evaluation method for comparing DCPA algorithms. DCPA refers to the allocation of transmit power and frequency channels to links in a cognitive network so as to maximize the total number of feasible links while minimizing the aggregate transmit power. We apply our evaluation method to five algorithms representative of DCPA approaches in the literature. This comparison illustrates the tradeoffs between control modes (centralized versus distributed) and channel/power assignment techniques. We estimate the complexity of each algorithm. Through simulations, we evaluate the effectiveness of the algorithms in achieving feasible link allocations in the network, as well as their power efficiency. Our results indicate that, when few channels are available, the effectiveness of all algorithms is comparable and thus the one with the smallest complexity should be selected. The Least Interfering Channel and Iterative Power Assignment (LICIPA) algorithm does not require cross-link gain information, has the overall lowest run time, and the highest feasibility ratio of all the distributed algorithms; however, this comes at a cost of higher average power per link.
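The flavor of a least-interfering-channel plus minimal-power assignment can be sketched as follows. This toy function (name and parameters invented for illustration) is not the paper's LICIPA: it uses channel load as a stand-in for interference and sets power ignoring cross-link interference:

```python
def dcpa_greedy(link_gains, num_channels, noise=1e-9, snr_target=10.0, p_max=1.0):
    """Toy sketch of least-interfering-channel + minimal-power assignment.

    Each link picks the least-loaded channel, then transmits with the
    minimum power meeting its SNR target (capped at p_max); links that
    cannot meet the target are marked infeasible.
    """
    load = [0] * num_channels
    assignment = []
    for g in link_gains:
        ch = min(range(num_channels), key=lambda c: load[c])  # least "interfered"
        p = snr_target * noise / g        # minimal power, ignoring interference
        feasible = p <= p_max
        if feasible:
            load[ch] += 1
        assignment.append((ch, min(p, p_max), feasible))
    return assignment
```

The real tradeoff studied in the paper appears even in this sketch: minimizing per-link power and maximizing the number of feasible links pull the assignment in different directions once channels become congested.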
NASA Astrophysics Data System (ADS)
Kim, Mijin; Kim, Jhoon; Yoon, Jongmin; Chung, Chu-Yong; Chung, Sung-Rae
2017-04-01
In 2010, the Korean geostationary earth orbit (GEO) satellite, the Communication, Ocean, and Meteorological Satellite (COMS), was launched, including the Meteorological Imager (MI). The MI measures atmospheric conditions over Northeast Asia (NEA) using a single visible channel centered at 0.675 μm and four IR channels at 3.75, 6.75, 10.8, and 12.0 μm. The visible measurement can also be utilized for the retrieval of aerosol optical properties (AOPs). Since GEO satellite measurements have an advantage for continuous monitoring of AOPs, we can analyze the spatiotemporal variation of aerosol using the MI observations over NEA. We therefore developed an algorithm to retrieve aerosol optical depth (AOD) from the MI visible observations, named the MI Yonsei Aerosol Retrieval Algorithm (YAER). In this study, we investigated the accuracy of MI YAER AOD by comparing the values with the long-term products of AERONET sun photometers. The results showed that the MI AODs were significantly overestimated relative to the AERONET values over bright surfaces in low-AOD cases. Because the MI visible channel is centered in the red spectral range, the contribution of the aerosol signal to the measured reflectance is relatively lower than the surface contribution. Therefore, the AOD error in low-AOD cases over bright surfaces can be a fundamental limitation of the algorithm. Meanwhile, the assumption of a background aerosol optical depth (BAOD) could also contribute to the retrieval uncertainty. To estimate the surface reflectance while accounting for polluted air conditions over the NEA, we estimated the BAOD pixel by pixel from the MODIS dark target (DT) aerosol products. Satellite-based AOD retrieval, however, largely depends on the accuracy of the surface reflectance estimation, especially in low-AOD cases, and thus the BAOD could inherit the uncertainty in surface reflectance estimation of the satellite-based retrieval.
Therefore, we re-estimated the BAOD using ground-based sun-photometer measurements and investigated the effects of the BAOD assumption. The satellite-based BAOD was significantly higher than the ground-based value over urban areas, which resulted in the underestimation of surface reflectance and the overestimation of AOD. The error analysis of the MI AOD also clearly showed sensitivity to cloud contamination. Therefore, improvements to the cloud masking process in the developed single-channel MI algorithm, as well as modification of the surface reflectance estimation, will be required in future work.
Rehan, Waqas; Fischer, Stefan; Rehan, Maaz
2016-09-12
Wireless sensor networks (WSNs) have become more and more diversified and are today able to support high data rate applications, such as multimedia. In this case, per-packet channel handshaking/switching may induce additional overheads, such as energy consumption, delays and, therefore, data loss. One solution is to perform stream-based channel allocation, where channel handshaking is performed once before transmitting the whole data stream. Deciding on stream-based channel allocation is more critical in the case of multichannel WSNs, where channels of different quality/stability are available and the wish for high performance requires sensor nodes to switch to the best among the available channels. In this work, we focus on devising mechanisms that perform channel quality/stability estimation in order to improve the accommodation of stream-based communication in multichannel wireless sensor networks. For channel quality assessment, we have formulated a composite metric, which we call channel rank measurement (CRM), that can demarcate channels into good, intermediate and bad quality on the basis of the standard deviation of the received signal strength indicator (RSSI) and the average of the link quality indicator (LQI) of the received packets. CRM is then used to generate a data set for training a supervised machine learning-based algorithm (which we call the Normal Equation based Channel quality prediction (NEC) algorithm) in such a way that it may perform instantaneous channel rank estimation of any channel. Subsequently, two robust extensions of the NEC algorithm are proposed (which we call the Normal Equation based Weighted Moving Average Channel quality prediction (NEWMAC) algorithm and the Normal Equation based Aggregate Maturity Criteria with Beta Tracking based Channel weight prediction (NEAMCBTC) algorithm), which can perform channel quality estimation on the basis of both current and past values of channel rank estimation.
In the end, simulations are performed in MATLAB, and the results show that the extended version of the NEAMCBTC algorithm (Ext-NEAMCBTC) outperforms the compared techniques in terms of channel quality and stability assessment. It also minimizes channel switching overheads (in terms of switching delays and energy consumption) for accommodating stream-based communication in multichannel WSNs.
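Two ingredients of the approach above can be sketched briefly, under stated assumptions: a composite channel-rank score built from the RSSI standard deviation and the mean LQI (the exact weighting of the paper's CRM metric is not reproduced), and the closed-form least-squares fit that the "normal equation" in the NEC algorithm's name refers to. Both function names are invented for illustration:

```python
import numpy as np

def crm_score(rssi_dbm, lqi):
    """Illustrative composite channel rank: high mean LQI is good,
    high RSSI variability (instability) is bad. Not the paper's exact CRM."""
    return np.mean(lqi) - np.std(rssi_dbm)

def fit_normal_equation(X, y):
    """Closed-form least squares, theta = (X^T X)^{-1} X^T y,
    solved here via np.linalg.solve for numerical stability."""
    Xb = np.column_stack([np.ones(len(X)), X])  # prepend bias term
    return np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
```

Once trained on (feature, CRM) pairs, the fitted coefficients give an instantaneous channel-rank prediction as a single dot product, which is what makes the approach cheap enough for per-stream channel selection.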
Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel
Akbari, Mohsen; Manesh, Mohsen Riahi
2014-01-01
In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using maximal ratio combining (MRC), provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which degrades system performance. In this paper, an imperialist competitive algorithm (ICA) is proposed and compared with two other evolutionary algorithms, namely, particle swarm optimization (PSO) and the genetic algorithm (GA), for diversity combining of signals travelling across imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way as to maximize the SNR and minimize the bit error rate (BER). The results indicate that the proposed method eliminates the need for channel estimation and can outperform conventional diversity combining methods. PMID:25045725
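The idea of evolving combiner weights toward maximum output SNR can be sketched with a minimal (1+1) evolution strategy, a deliberately simple stand-in for the paper's ICA/PSO/GA optimizers, not their implementation. Noise variance, step size, and iteration count are illustrative:

```python
import numpy as np

def combiner_snr(w, h, noise_var):
    """Output SNR of a linear combiner with weights w over channel h."""
    s = np.vdot(w, h)                        # combined signal amplitude
    return (abs(s) ** 2) / (noise_var * np.vdot(w, w).real)

def evolve_weights(h, noise_var=0.1, iters=2000, seed=0):
    """(1+1) evolution strategy: keep a candidate weight vector and
    accept random perturbations only when they increase the SNR."""
    rng = np.random.default_rng(seed)
    n = len(h)
    w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    best = combiner_snr(w, h, noise_var)
    for _ in range(iters):
        cand = w + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        f = combiner_snr(cand, h, noise_var)
        if f > best:
            w, best = cand, f
    return w, best
```

By Cauchy-Schwarz the attainable SNR is bounded by ||h||^2 / noise_var (the MRC optimum), so the evolved weights can be checked against that ceiling without ever estimating the channel inside the optimizer, which is the point the paper makes.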
NASA Astrophysics Data System (ADS)
Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.
2015-09-01
We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syh, J; Syh, J; Patel, B
2015-06-15
Purpose: The multichannel cylindrical applicator is a distinctive modification of the traditional single-channel cylindrical applicator. The novel multichannel applicator has additional peripheral channels that provide more flexibility in both the treatment planning process and outcomes. The goals of this novel brachytherapy device are to reduce doses to adjacent organs at risk (OARs) while maintaining target coverage with inverse plan optimization. Through a series of comparisons and analyses of results in more than forty patients who received HDR brachytherapy using a multichannel vaginal applicator, this procedure has been implemented in our institution. Methods: Multichannel planning was CT image based. The CTV, a 5 mm vaginal cuff rind of prescribed length, was reconstructed, as were the bladder and rectum. At least 95% of the prescribed dose was required for the D95 of CTV coverage. The multichannel inverse plan optimization algorithm not only shapes the target dose cloud but also sets dose avoidance for the OARs. The doses D2cc, D5cc and D5 and the volume V2Gy in the OARs were selected for comparison with single-channel results, in which the sole central channel is the only possibility. Results: The study demonstrates superior OAR dose reduction in the multichannel plans. The D2cc of the rectum and bladder were slightly lower for multichannel vs. single channel. The V2Gy of the rectum was 93.72% vs. 83.79% (p=0.007) for single channel vs. multichannel, respectively. The absolute mean dose reduction in D5 achieved by the multichannel applicator was 17 cGy (s.d.=6.4) in the bladder and 44 cGy (s.d.=15.2) in the rectum. Conclusion: The optimization solution in the multichannel applicator maintains D95 CTV coverage while reducing the dose to the OARs. The dosimetric advantage in sparing critical organs by using a multichannel applicator in HDR brachytherapy treatment of the vaginal cuff is promising, and the approach has been implemented clinically.
Research on multi-source image fusion technology in haze environment
NASA Astrophysics Data System (ADS)
Ma, GuoDong; Piao, Yan; Li, Bing
2017-11-01
In a haze environment, the visible image collected by a single sensor can express the details of the shape, color and texture of the target very well, but because of the haze, its sharpness is low and parts of the target subject are lost. Because of its expression of thermal radiation and strong penetration ability, an infrared image collected by a single sensor can clearly express the target subject, but it loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, an improved Dark Channel Prior algorithm is used to preprocess the visible haze image. Secondly, an improved SURF algorithm is used to register the infrared image and the haze-free visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of the visible target and highlight the occluded infrared target for target recognition.
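The Dark Channel Prior step can be sketched in its standard form (He et al.); the paper uses an improved variant whose modifications are not reproduced here. The patch size, omega, and t0 values below are the usual illustrative defaults:

```python
import numpy as np

def dark_channel(img, patch=7):
    """Per-pixel minimum over the color channels and a local patch."""
    mins = img.min(axis=2)
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1):
    """Minimal dark-channel-prior dehazing sketch.

    Estimates atmospheric light A from the brightest dark-channel pixels,
    a transmission map t, and inverts the haze model I = J*t + A*(1-t).
    """
    dark = dark_channel(img)
    top = dark.ravel().argsort()[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[top].max()                  # atmospheric light
    t = np.clip(1.0 - omega * dark_channel(img / A), t0, 1.0)
    return (img - A) / t[..., None] + A
```

On a uniform fully hazy patch the output simply equals the estimated atmospheric light, a useful sanity check before fusing the dehazed visible image with the infrared one.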
Estimation of saturated pixel values in digital color imaging
Zhang, Xuemei; Brainard, David H.
2007-01-01
Pixel saturation, where the incident light at a pixel causes one of the color channels of the camera sensor to respond at its maximum value, can produce undesirable artifacts in digital color images. We present a Bayesian algorithm that estimates what the saturated channel's value would have been in the absence of saturation. The algorithm uses the non-saturated responses from the other color channels, together with a multivariate Normal prior that captures the correlation in response across color channels. The appropriate parameters for the prior may be estimated directly from the image data, since most image pixels are not saturated. Given the prior, the responses of the non-saturated channels, and the fact that the true response of the saturated channel is known to be greater than the saturation level, the algorithm returns the optimal expected mean square estimate for the true response. Extensions of the algorithm to the case where more than one channel is saturated are also discussed. Both simulations and examples with real images are presented to show that the algorithm is effective. PMID:15603065
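The core computation described above can be sketched as follows: condition the multivariate-normal prior on the unsaturated channels, then take the mean of the resulting one-dimensional normal truncated below at the saturation level. This is a sketch of the idea, not the authors' exact algorithm, and the prior parameters in the test are invented:

```python
import math
import numpy as np

def estimate_saturated(x_obs, sat_idx, mu, cov, sat_level):
    """Expected value of a saturated channel given the other channels.

    Conditions the multivariate-normal prior (mu, cov) on the
    unsaturated channels, then returns the mean of the conditional
    normal truncated below at sat_level.
    """
    idx = [i for i in range(len(mu)) if i != sat_idx]
    S_oo = cov[np.ix_(idx, idx)]
    S_so = cov[sat_idx, idx]
    # conditional normal p(x_sat | x_obs): mean m, variance v
    k = np.linalg.solve(S_oo, S_so)
    m = mu[sat_idx] + k @ (x_obs[idx] - mu[idx])
    v = cov[sat_idx, sat_idx] - k @ S_so
    # mean of a normal truncated below at sat_level:
    #   m + sqrt(v) * phi(a) / (1 - Phi(a)), with a = (sat_level - m)/sqrt(v)
    s = math.sqrt(v)
    a = (sat_level - m) / s
    phi = math.exp(-0.5 * a * a) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(a / math.sqrt(2)))
    return m + s * phi / (1 - Phi)
```

Because the distribution is truncated below at the saturation level, the estimate always exceeds that level, consistent with the constraint the paper exploits.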
Particle identification algorithms for the PANDA Endcap Disc DIRC
NASA Astrophysics Data System (ADS)
Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.
2017-12-01
The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods, which will be applied in offline analysis and online event filtering. This paper evaluates the resulting PID performance using Monte-Carlo simulations to study basic single-track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested with a Virtex-4 FPGA card and optimized with respect to the resulting constraints.
NASA Astrophysics Data System (ADS)
Han, Minah; Jang, Hanjoo; Baek, Jongduk
2018-03-01
We investigate lesion detectability and its trends for different noise structures in single-slice and multislice CBCT images with anatomical background noise. Anatomical background noise is modeled using a power-law spectrum of breast anatomy. A spherical signal with a 2 mm diameter is used to model a lesion. CT projection data are acquired by forward projection and reconstructed by the Feldkamp-Davis-Kress algorithm. To generate different noise structures, two types of reconstruction filters (Hanning and Ram-Lak weighted ramp filters) are used in the reconstruction, and the transverse and longitudinal planes of the reconstructed volume are used for detectability evaluation. To evaluate single-slice images, the central slice, which contains the maximum signal energy, is used. To evaluate multislice images, the central nine slices are used. Detectability is evaluated using human and model observer studies. For the model observer, a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels is used. For all noise structures, detectability by a human observer is higher for multislice images than single-slice images, and the degree of detectability increase in multislice images depends on the noise structure. Variation in detectability for different noise structures is reduced in multislice images, but detectability trends are not much different between single-slice and multislice images. The CHO with D-DOG channels predicts detectability by a human observer well for both single-slice and multislice images.
A Comparative Study of Co-Channel Interference Suppression Techniques
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Satorius, Ed; Paparisto, Gent; Polydoros, Andreas
1997-01-01
We describe three methods of combating co-channel interference (CCI): a cross-coupled phase-locked loop (CCPLL), a phase-tracking circuit (PTC), and joint Viterbi estimation based on the maximum likelihood principle. In the case of co-channel FM-modulated voice signals, the CCPLL and PTC methods typically outperform the maximum likelihood estimators when the modulation parameters are dissimilar. However, as the modulation parameters become identical, joint Viterbi estimation provides a more robust estimate of the co-channel signals and does not suffer as much from the "signal switching" that especially plagues the CCPLL approach. Good performance for the PTC requires both dissimilar modulation parameters and a priori knowledge of the co-channel signal amplitudes. The CCPLL and joint Viterbi estimators, on the other hand, incorporate accurate amplitude estimates. In addition, application of the joint Viterbi algorithm to demodulating co-channel digital (BPSK) signals in a multipath environment is also discussed. It is shown in this case that if the interference is sufficiently small, a single trellis model is most effective in demodulating the co-channel signals.
Lossless compression algorithm for multispectral imagers
NASA Astrophysics Data System (ADS)
Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth
2008-08-01
Multispectral imaging is becoming an increasingly important tool for monitoring the earth and its environment from spaceborne and airborne platforms. Multispectral imaging data consist of visible and IR measurements from a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Research for NOAA NESDIS has been directed at determining, for the characteristics of satellite atmospheric Earth science imager sensor data, what lossless compression ratio can be obtained, as well as the appropriate types of mathematics and approaches that can approach this data's entropy level. Conventional lossless compression algorithms do not achieve the theoretical limits for lossless compression on imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite-based Earth science multispectral imagers such as GOES-R. The algorithm is based on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. In decompression, the algorithm uses a statistically computed look-up table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach which fundamentally differs from our prior work. In this new approach, instead of having a single predictor for each pair of bands, we introduce a piecewise spatially varying predictor which significantly improves the compression results. Our new algorithm also now optimizes the sequence of channels used for prediction. Our results are evaluated by comparison with a state-of-the-art wavelet-based image compression scheme, JPEG2000. We present results on the 14-channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager.
We also show results of the algorithm on NOAA AVHRR data and data from SEVIRI. The algorithm is designed to be adaptable to a wide range of multispectral imagers and should facilitate global distribution of data. This compression research is managed by Roger Heymann, PE of OSD NOAA NESDIS Engineering, in collaboration with the NOAA NESDIS STAR Research Office through Mitch Goldberg, Tim Schmit, Walter Wolf.
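The spectral-prediction idea, predicting one channel from another and storing only the residual, can be sketched with a simple global linear predictor. The paper's look-up-table and piecewise spatially varying predictors are more elaborate; this is only the underlying principle:

```python
import numpy as np

def predict_channel(prev, cur):
    """Fit cur ~ a*prev + b by least squares; return coefficients and
    the residual that a lossless coder would entropy-encode."""
    A = np.column_stack([prev.ravel(), np.ones(prev.size)])
    coef, *_ = np.linalg.lstsq(A, cur.ravel(), rcond=None)
    residual = cur.ravel() - A @ coef
    return coef, residual.reshape(cur.shape)

def reconstruct(prev, coef, residual):
    """Decoder side: rebuild the channel from the reference channel,
    the prediction coefficients and the stored residual."""
    return coef[0] * prev + coef[1] + residual
```

When adjacent bands are strongly correlated the residual carries far less energy (hence entropy) than the raw channel, which is where the compression gain comes from; the round trip through predict/reconstruct is exact.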
A New Cell-Centered Implicit Numerical Scheme for Ions in the 2-D Axisymmetric Code Hall2de
NASA Technical Reports Server (NTRS)
Lopez Ortega, Alejandro; Mikellides, Ioannis G.
2014-01-01
We present a new algorithm in the Hall2De code to simulate the ion hydrodynamics in the acceleration channel and near plume regions of Hall-effect thrusters. This implementation constitutes an upgrade of the capabilities built in the Hall2De code. The equations of mass conservation and momentum for unmagnetized ions are solved using a conservative, finite-volume, cell-centered scheme on a magnetic-field-aligned grid. Major computational savings are achieved by making use of an implicit predictor/multi-corrector algorithm for time evolution. Inaccuracies in the prediction of the motion of low-energy ions in the near plume in hydrodynamics approaches are addressed by implementing a multi-fluid algorithm that tracks ions of different energies separately. A wide range of comparisons with measurements are performed to validate the new ion algorithms. Several numerical experiments with the location and value of the anomalous collision frequency are also presented. Differences in the plasma properties in the near-plume between the single fluid and multi-fluid approaches are discussed. We complete our validation by comparing predicted erosion rates at the channel walls of the thruster with measurements. Erosion rates predicted by the plasma properties obtained from simulations replicate accurately measured rates of erosion within the uncertainty range of the sputtering models employed.
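The implicit predictor/multi-corrector time evolution mentioned above can be illustrated on a scalar ODE: an explicit predictor followed by a fixed number of corrector passes approximates the implicit (backward-Euler) update by fixed-point iteration, avoiding a full nonlinear solve per step. This is a generic sketch, not the Hall2De discretization:

```python
def step_predictor_corrector(y, dt, f, correctors=2):
    """One backward-Euler-like step y_new = y + dt*f(y_new), solved by
    an explicit predictor plus a fixed number of corrector passes."""
    y_new = y + dt * f(y)               # explicit predictor
    for _ in range(correctors):
        y_new = y + dt * f(y_new)       # corrector: fixed-point iteration
    return y_new
```

For linear decay f(y) = -y with dt = 0.1, two corrector passes already land within about 1e-4 of the exact implicit solution y/(1 + dt), which is why a small fixed corrector count buys most of the stability benefit cheaply.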
Hybrid stochastic and deterministic simulations of calcium blips.
Rüdiger, S; Shuai, J W; Huisinga, W; Nagaiah, C; Warnecke, G; Parker, I; Falcke, M
2007-09-15
Intracellular calcium release is a prime example for the role of stochastic effects in cellular systems. Recent models consist of deterministic reaction-diffusion equations coupled to stochastic transitions of calcium channels. The resulting dynamics is of multiple time and spatial scales, which complicates far-reaching computer simulations. In this article, we introduce a novel hybrid scheme that is especially tailored to accurately trace events with essential stochastic variations, while deterministic concentration variables are efficiently and accurately traced at the same time. We use finite elements to efficiently resolve the extreme spatial gradients of concentration variables close to a channel. We describe the algorithmic approach and we demonstrate its efficiency compared to conventional methods. Our single-channel model matches experimental data and results in intriguing dynamics if calcium is used as charge carrier. Random openings of the channel accumulate in bursts of calcium blips that may be central for the understanding of cellular calcium dynamics.
Multiple-component Decomposition from Millimeter Single-channel Data
NASA Astrophysics Data System (ADS)
Rodríguez-Montoya, Iván; Sánchez-Argüelles, David; Aretxaga, Itziar; Bertone, Emanuele; Chávez-Dagostino, Miguel; Hughes, David H.; Montaña, Alfredo; Wilson, Grant W.; Zeballos, Milagros
2018-03-01
We present an implementation of a blind source separation algorithm to remove foregrounds from millimeter surveys made by single-channel instruments. In order to make such a decomposition possible over single-wavelength data, we generate levels of artificial redundancy, then perform a blind decomposition, calibrate the resulting maps, and lastly measure physical information. We simulate the reduction pipeline using mock data: atmospheric fluctuations, extended astrophysical foregrounds, and point-like sources; we then apply the same methodology to the Astronomical Thermal Emission Camera (AzTEC)/ASTE survey of the Great Observatories Origins Deep Survey–South (GOODS-S). In both applications, our technique robustly decomposes redundant maps into their underlying components, reducing flux bias, improving signal-to-noise ratio, and minimizing information loss. In particular, GOODS-S is decomposed into four independent physical components: one of them is the already-known map of point sources, two are atmospheric and systematic foregrounds, and the fourth component is an extended emission that can be interpreted as the confusion background of faint sources.
Performance evaluation of an automated single-channel sleep–wake detection algorithm
Kaplan, Richard F; Wang, Ying; Loparo, Kenneth A; Kelly, Monica R; Bootzin, Richard R
2014-01-01
Background A need exists, from both a clinical and a research standpoint, for objective sleep measurement systems that are both easy to use and can accurately assess sleep and wake. This study evaluates the output of an automated sleep–wake detection algorithm (Z-ALG) used in the Zmachine (a portable, single-channel, electroencephalographic [EEG] acquisition and analysis system) against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods Overnight laboratory PSG studies from 99 subjects (52 females/47 males, 18–60 years, median age 32.7 years), including both normal sleepers and those with a variety of sleep disorders, were assessed. PSG data obtained from the differential mastoids (A1–A2) were assessed by Z-ALG, which determines sleep versus wake every 30 seconds using low-frequency, intermediate-frequency, high-frequency, and time-domain EEG features. PSG data were independently scored by two to four certified PSG technologists, using standard Rechtschaffen and Kales guidelines, and these score files were combined on an epoch-by-epoch basis, using a majority voting rule, to generate a single score file per subject to compare against the Z-ALG output. Both epoch-by-epoch and standard sleep indices (eg, total sleep time, sleep efficiency, latency to persistent sleep, and wake after sleep onset) were compared between the Z-ALG output and the technologist consensus score files. Results Overall, the sensitivity and specificity for detecting sleep using the Z-ALG as compared to the technologist consensus are 95.5% and 92.5%, respectively, across all subjects, and the positive predictive value and the negative predictive value for detecting sleep are 98.0% and 84.2%, respectively. Overall κ agreement is 0.85 (approaching the level of agreement observed among sleep technologists). These results persist when the sleep disorder subgroups are analyzed separately.
Conclusion This study demonstrates that the Z-ALG automated sleep–wake detection algorithm, using the single A1–A2 EEG channel, has a level of accuracy that is similar to PSG technologists in the scoring of sleep and wake, thereby making it suitable for a variety of in-home monitoring applications, such as in conjunction with the Zmachine system. PMID:25342922
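The epoch-by-epoch statistics reported above (sensitivity, specificity, PPV, NPV, and Cohen's kappa) can be computed from binary sleep/wake scores as follows; these are the standard definitions of each index, not code from the Z-ALG system:

```python
def epoch_metrics(reference, predicted):
    """Epoch-by-epoch agreement indices for binary sleep(1)/wake(0)
    scores: sensitivity, specificity, PPV, NPV, and Cohen's kappa."""
    tp = sum(r == 1 and p == 1 for r, p in zip(reference, predicted))
    tn = sum(r == 0 and p == 0 for r, p in zip(reference, predicted))
    fp = sum(r == 0 and p == 1 for r, p in zip(reference, predicted))
    fn = sum(r == 1 and p == 0 for r, p in zip(reference, predicted))
    n = tp + tn + fp + fn
    po = (tp + tn) / n                                   # observed agreement
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2  # chance agreement
    return {
        'sensitivity': tp / (tp + fn),
        'specificity': tn / (tn + fp),
        'ppv': tp / (tp + fp),
        'npv': tn / (tn + fn),
        'kappa': (po - pe) / (1 - pe),
    }
```

Kappa corrects raw agreement for chance, which is why the paper reports it alongside sensitivity/specificity: with sleep epochs heavily outnumbering wake epochs, raw agreement alone would overstate performance.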
Multiuser TOA Estimation Algorithm in DS-CDMA Sparse Channel for Radiolocation
NASA Astrophysics Data System (ADS)
Kim, Sunwoo
This letter considers multiuser time delay estimation in a sparse channel environment for radiolocation. The generalized successive interference cancellation (GSIC) algorithm is used to eliminate the multiple access interference (MAI). To adapt GSIC to sparse channels, the alternating maximization (AM) algorithm is considered, and the continuous time delay of each path is estimated without requiring a priori known data sequences.
Rabelo, Gustavo Davi; Beletti, Marcelo Emílio; Dechichi, Paula
2010-10-01
The aim of this study was to evaluate the effects of radiotherapy on the cortical bone channel network. Fourteen rabbits were divided into two groups, and the test group received a single dose of 15 Gy cobalt-60 radiation in the tibia, bilaterally. The animals were sacrificed and a segment of tibia was removed and histologically processed. Histological images were taken and their bone channels were segmented and labeled as regions of interest (ROI). Images were analyzed with algorithms developed in the SCILAB mathematical environment, yielding the percentage of bone matrix, ROI areas, ROI perimeters, their standard deviations, and lacunarity. The osteocytes and empty lacunae were also counted. Data were evaluated using the Kolmogorov-Smirnov, Mann-Whitney, and Student's t tests (P < 0.05). Significant differences in bone matrix percentage, area and perimeter of the channels, their respective standard deviations, and lacunarity were found between groups. In conclusion, radiotherapy causes a reduction of bone matrix and modifies the morphology of the bone channel network. © 2010 Wiley-Liss, Inc.
Kim, Dongwook; Seong, Kiwoong; Kim, Myoungnam; Cho, Jinho; Lee, Jyunghyun
2014-01-01
In this paper, a digital audio processing chip which uses a wide dynamic range compression (WDRC) algorithm is designed and implemented for an implantable hearing aid system. The designed chip operates at a single voltage of 3.3 V and drives 16-bit parallel input and output at a 32 kHz sample rate. The designed chip has a 1-channel, 3-band WDRC composed of an FIR filter bank, a level detector, and a compression part. To verify the performance of the designed chip, we measured the frequency separation of the bands and the compression gain control reflecting the hearing threshold level.
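The static input/output behavior of one WDRC band can be sketched as a gain curve: constant gain below the compression threshold, and gain shrinking at a (1 - 1/ratio) dB-per-dB rate above it. The threshold, ratio, and maximum gain below are illustrative, not the chip's actual parameters:

```python
def wdrc_gain_db(level_db, threshold_db=50.0, ratio=3.0, max_gain_db=30.0):
    """Static wide-dynamic-range-compression gain curve for one band.

    Below the threshold the band applies full (linear) gain; above it,
    output level grows only 1 dB per `ratio` dB of input, so the gain
    is reduced accordingly.
    """
    if level_db <= threshold_db:
        return max_gain_db
    return max_gain_db - (level_db - threshold_db) * (1.0 - 1.0 / ratio)
```

In a multiband design like the 3-band chip described here, each band's level detector drives its own copy of such a curve, so soft sounds in one band can be amplified while loud sounds in another are compressed.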
NASA Technical Reports Server (NTRS)
Mccall, D. L.
1984-01-01
The results of a simulation study to define the functional characteristics of an airborne and a ground reference GPS receiver for use in a Differential GPS system are documented. The operation of a variety of receiver types (sequential single-channel, continuous multi-channel, etc.) is evaluated for a typical civil helicopter mission scenario. The math model of each receiver type incorporated representative system errors, including intentional degradation. The results include a discussion of relative receiver performance, the spatial correlative properties of individual range error sources, and the navigation algorithm used to smooth the position data.
Two algorithms for neural-network design and training with application to channel equalization.
Sweatman, C Z; Mulgrew, B; Gibson, G J
1998-01-01
We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.
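The error-correction idea behind PLSA can be illustrated with a plain perceptron used to equalize a dispersive binary channel; this is a toy sketch under assumed channel taps, noise level, and feature window, not the LPSA/PLSA slab algorithms of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# BPSK symbols passed through a short dispersive FIR channel plus noise
n = 2000
symbols = rng.choice([-1.0, 1.0], size=n)
channel = np.array([0.9, 0.4])                 # assumed channel taps
received = np.convolve(symbols, channel)[:n] + 0.05 * rng.standard_normal(n)

# Tapped-delay-line features: current and previous received samples
X = np.stack([received[1:], received[:-1]], axis=1)
y = symbols[1:]

# Perceptron error-correction training: update only on misclassified samples
w, b = np.zeros(2), 0.0
for _ in range(5):
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:
            w += yi * xi
            b += yi

accuracy = np.mean(np.sign(X @ w + b) == y)
print(accuracy > 0.95)
```

A single perceptron suffices here because the assumed channel is mildly dispersive; the paper's slab algorithms construct full MLPs for channels whose decision regions are not linearly separable.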
Hamada, Yuki; O'Connor, Ben L.; Orr, Andrew B.; ...
2016-03-26
Understanding the spatial patterns of ephemeral streams is crucial for understanding how hydrologic processes influence the abundance and distribution of wildlife habitats in desert regions. Available methods for mapping ephemeral streams at the watershed scale typically underestimate the size of channel networks. Although remote sensing is an effective means of collecting data and obtaining information on large, inaccessible areas, conventional techniques for extracting channel features are not sufficient in regions that have small topographic gradients and subtle target-background spectral contrast. Using very high resolution multispectral imagery, we developed a new algorithm that applies landscape information to map ephemeral channels in desert regions of the Southwestern United States where utility-scale solar energy development is occurring. Knowledge about landscape features and structures was integrated into the algorithm through a series of spectral transformation and spatial statistical operations. The algorithm extracted ephemeral stream channels at a local scale, identifying approximately 900% more ephemeral streams than the U.S. Geological Survey's National Hydrography Dataset. The accuracy of the algorithm in detecting channel areas was as high as 92%, and its accuracy in delineating channel center lines was 91%, when compared to a subset of channel networks digitized from the very high resolution imagery. Although the algorithm captured stream channels in desert landscapes across various channel sizes and forms, it often underestimated stream headwaters and channels obscured by bright soils and sparse vegetation. While further improvement is warranted, the algorithm provides an effective means of obtaining detailed information about ephemeral streams, and it could make a significant contribution toward improving the hydrological modelling of desert environments.
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter that controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods can suffer estimation performance loss because an invariable step size cannot balance these three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated for MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained, and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics.
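The combination of a variable step size with a sparsity penalty can be sketched in a few lines. This is a generic illustration, not the authors' exact algorithms: the VSS rule, penalty weight, and channel below are assumed values, and the sparse penalty shown is a simple zero-attracting (l1-sign) term.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed sparse channel: 16 taps, only 3 nonzero
h = np.zeros(16)
h[[2, 7, 12]] = [1.0, -0.5, 0.3]

N = 5000
x = rng.standard_normal(N)
d = np.convolve(x, h)[:N] + 0.01 * rng.standard_normal(N)

w = np.zeros(16)
mu, mu_min, mu_max = 0.1, 0.01, 1.0   # variable step size and its clip range
alpha, gamma = 0.97, 0.01             # assumed VSS adaptation constants
rho, eps = 1e-5, 1e-8                 # zero-attracting penalty weight, regularizer

for n in range(16, N):
    xv = x[n-15:n+1][::-1]            # regressor [x_n, x_{n-1}, ..., x_{n-15}]
    e = d[n] - w @ xv                 # instantaneous estimation error
    w += (mu / (eps + xv @ xv)) * e * xv - rho * np.sign(w)  # sparse NLMS update
    mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max) # VSS: grow on error

print(np.max(np.abs(w - h)) < 0.05)   # estimate close to the true sparse channel
```

The step size grows while the error is large (fast convergence) and decays toward its floor at steady state (low misadjustment), which is the balance an invariable step size cannot achieve.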
NASA Astrophysics Data System (ADS)
Wahl, Michael; Rahn, Hans-Jürgen; Gregor, Ingo; Erdmann, Rainer; Enderlein, Jörg
2007-03-01
Time-correlated single photon counting is a powerful method for sensitive time-resolved fluorescence measurements down to the single molecule level. The method is based on the precisely timed registration of single photons of a fluorescence signal. Historically, its primary goal was the determination of fluorescence lifetimes upon optical excitation by a short light pulse. This goal is still important today and therefore has a strong influence on instrument design. However, modifications and extensions of the early designs allow for the recovery of much more information from the detected photons and enable entirely new applications. Here, we present a new instrument that captures single photon events on multiple synchronized channels with picosecond resolution and over virtually unlimited time spans. This is achieved by means of crystal-locked time digitizers with high resolution and very short dead time. Subsequent event processing in programmable logic permits classical histogramming as well as time tagging of individual photons and their streaming to the host computer. Through the latter, any algorithms and methods for the analysis of fluorescence dynamics can be implemented either in real time or offline. Instrument test results from single molecule applications will be presented.
NASA Astrophysics Data System (ADS)
Mesbah, Mostefa; Balakrishnan, Malarvili; Colditz, Paul B.; Boashash, Boualem
2012-12-01
This article proposes a new method for newborn seizure detection that uses information extracted from both multi-channel electroencephalogram (EEG) and a single channel electrocardiogram (ECG). The aim of the study is to assess whether additional information extracted from ECG can improve the performance of seizure detectors based solely on EEG. Two different approaches were used to combine this extracted information. The first approach, known as feature fusion, involves combining features extracted from EEG and heart rate variability (HRV) into a single feature vector prior to feeding it to a classifier. The second approach, called classifier or decision fusion, is achieved by combining the independent decisions of the EEG and the HRV-based classifiers. Tested on recordings obtained from eight newborns with identified EEG seizures, the proposed neonatal seizure detection algorithms achieved 95.20% sensitivity and 88.60% specificity for the feature fusion case and 95.20% sensitivity and 94.30% specificity for the classifier fusion case. These results are considerably better than those involving classifiers using EEG only (80.90%, 86.50%) or HRV only (85.70%, 84.60%).
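The two fusion strategies can be sketched as toy functions (with assumed fusion weights and feature values, not the paper's classifiers): feature fusion concatenates the modality features before classification, while decision fusion combines the independent classifier outputs.

```python
import numpy as np

def feature_fusion(eeg_feat, hrv_feat):
    """Concatenate modality features into one vector fed to a single classifier."""
    return np.concatenate([eeg_feat, hrv_feat])

def decision_fusion(p_eeg, p_hrv, w_eeg=0.6, w_hrv=0.4):
    """Combine independent classifier seizure probabilities by a weighted sum."""
    return w_eeg * p_eeg + w_hrv * p_hrv

eeg_feat = np.array([0.8, 1.2, 0.3])   # assumed EEG features
hrv_feat = np.array([0.5, 0.9])        # assumed HRV features
fused = feature_fusion(eeg_feat, hrv_feat)
print(len(fused))                       # 5: one joint vector for one classifier

p = decision_fusion(0.9, 0.6)           # independent seizure probabilities
print(p > 0.5)                          # True: fused decision flags a seizure
```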
Meng, Jianjun; Edelman, Bradley J.; Olsoe, Jaron; Jacobs, Gabriel; Zhang, Shuying; Beyko, Angeliki; He, Bin
2018-01-01
Motor imagery-based brain-computer interface (BCI) using electroencephalography (EEG) has demonstrated promising applications by directly decoding users' movement-related mental intention. The selection of control signals, e.g., the channel configuration and decoding algorithm, plays a vital role in the online performance and progression of BCI control. While several offline analyses report the effect of these factors on BCI accuracy for a single session (performance increases asymptotically with the number of channels, saturates, and then decreases), no online study, to the best of our knowledge, has compared these factors within a single session or across training. The purpose of the current study is to assess, in a group of forty-five subjects, the effect of channel number and decoding method on the progression of BCI performance across multiple training sessions and the corresponding neurophysiological changes. The 45 subjects were divided into three groups using Laplacian filtering (LAP/S) with nine channels, Common Spatial Pattern (CSP/L) with 40 channels, and CSP (CSP/S) with nine channels for online decoding. At the first training session, subjects using CSP/L displayed no significant difference compared to CSP/S but a higher average BCI performance than those using LAP/S. Although the average performance using the LAP/S method was initially lower, LAP/S displayed improvement over the first three sessions, whereas the other two groups did not. Additionally, analysis of the recorded EEG during BCI control indicates that LAP/S produces control signals that are more strongly correlated with the target location, with a higher R-square value at the fifth session. In the present study, we found that subjects' average online BCI performance using a large EEG montage does not show significantly better performance after the first session than a smaller montage comprised of a common subset of these electrodes. The LAP/S method with a small EEG montage allowed the subjects to improve their skills across sessions, but no improvement was shown for the CSP method. PMID:29681792
A multi-channel biomimetic neuroprosthesis to support treadmill gait training in stroke patients.
Chia, Noelia; Ambrosini, Emilia; Baccinelli, Walter; Nardone, Antonio; Monticone, Marco; Ferrigno, Giancarlo; Pedrocchi, Alessandra; Ferrante, Simona
2015-01-01
This study presents an innovative multi-channel neuroprosthesis that induces a biomimetic activation of the main lower-limb muscles during treadmill gait training, to be used in the rehabilitation of stroke patients. The electrostimulation strategy replicates the physiological muscle synergies used by healthy subjects walking on a treadmill at their self-selected speed. This strategy is mapped to the current gait sub-phases, which are identified in real time by a custom algorithm. The algorithm divides the gait cycle into six sub-phases, based on two inertial sensors placed laterally on the shanks. The pre-defined stimulation profiles are therefore expanded or stretched to match the actual gait pattern of each subject. A preliminary experimental protocol, involving 10 healthy volunteers, was carried out to extract the muscle synergies and validate the gait-detection algorithm, which were afterwards used in the development of the neuroprosthesis. The feasibility of the neuroprosthesis was tested on one healthy subject, who simulated different gait patterns, and on a chronic stroke patient. The results showed the correct functioning of the system. A pilot study of the neurorehabilitation treatment for stroke patients is currently being carried out.
Bouridane, Ahmed; Ling, Bingo Wing-Kuen
2018-01-01
This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a family of cost functions parametrized by a single parameter β. The Itakura–Saito divergence, Kullback–Leibler divergence and least squares distance are special cases corresponding to β=0, 1, 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β that includes fractional values. It describes a majorization–minimization (MM) procedure leading to the development of a fast multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into a two-dimensional deconvolution of factor matrices that represent the spectral dictionary and temporal codes. The deconvolution process has been optimized to yield sparse temporal codes by maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient with the proposed algorithm, which subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy. PMID:29702629
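A compact sketch of MM-derived multiplicative updates for plain NMF under a fractional β-divergence follows (random toy data; the β value is an assumed example, the exponent rule is the standard MM choice, and the deconvolutive/sparse extensions of the paper are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

def beta_div(V, Vhat, beta):
    """Beta-divergence for beta not in {0, 1}; fractional values allowed."""
    return np.sum((V**beta + (beta - 1) * Vhat**beta
                   - beta * V * Vhat**(beta - 1)) / (beta * (beta - 1)))

V = rng.random((8, 10)) + 0.1          # nonnegative data matrix
W = rng.random((8, 3)) + 0.1           # spectral dictionary (3 components)
H = rng.random((3, 10)) + 0.1          # temporal codes
beta = 0.5                             # assumed fractional beta
gamma = 1.0 / (2.0 - beta)             # MM exponent for beta < 1 (monotone descent)

d0 = beta_div(V, W @ H, beta)
for _ in range(50):
    WH = W @ H
    H *= ((W.T @ (WH**(beta - 2) * V)) / (W.T @ WH**(beta - 1))) ** gamma
    WH = W @ H
    W *= (((WH**(beta - 2) * V) @ H.T) / (WH**(beta - 1) @ H.T)) ** gamma
d1 = beta_div(V, W @ H, beta)
print(d1 < d0)                         # cost is non-increasing under MM updates
```

The exponent gamma is what makes the multiplicative update a true MM step for fractional β; for β between 1 and 2 it reduces to 1 and the familiar KL/least-squares updates are recovered.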
Estevez, Claudio; Kailas, Aravind
2012-01-01
Millimeter-wave technology shows high potential for future wireless personal area networks, reaching transmission rates over 1 Gbps with simple modulation techniques. Current specifications divide the spectrum into easily separable spectrum ranges. These low requirements open a research area in time and space multiplexing techniques for millimeter waves. In this work, a process-stacking multiplexing access algorithm is designed for single-channel operation. The concept is intuitive, but its implementation is not trivial. The key to stacking single-channel events is to operate while simultaneously obtaining and handling a posteriori time-frame information of scheduled events. This information is used to shift a global time pointer that the wireless access point manages and uses to synchronize all serviced nodes. The performance of the proposed multiplexing access technique is lower bounded by that of legacy TDMA and can significantly improve the effective throughput. The work is validated by simulation results.
A formally verified algorithm for interactive consistency under a hybrid fault model
NASA Technical Reports Server (NTRS)
Lincoln, Patrick; Rushby, John
1993-01-01
Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguishes three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands $a$ asymmetric, $s$ symmetric, and $b$ benign faults simultaneously, using $m+1$ rounds, provided $n > 2a + 2s + b + m$ and $m \geq a$. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
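The resilience condition can be captured in a tiny helper (a sketch of the stated bound only, not the algorithm or its PVS verification):

```python
def tolerates(n, m, a, s, b):
    """True if n replicated channels running m+1 rounds withstand a asymmetric,
    s symmetric, and b benign faults: n > 2a + 2s + b + m and m >= a."""
    return n > 2 * a + 2 * s + b + m and m >= a

# With s = b = 0 and m = a, the condition reduces to the classical n > 3a
print(tolerates(4, 1, 1, 0, 0))   # True: 4 channels, 2 rounds, 1 Byzantine fault
print(tolerates(3, 1, 1, 0, 0))   # False: 3 channels cannot mask 1 Byzantine fault
print(tolerates(4, 0, 0, 1, 1))   # True: 1 round handles 1 symmetric + 1 benign
```

Note how the hybrid model pays less for the milder fault modes: a symmetric fault costs two channels, a benign fault only one, whereas an asymmetric fault costs two channels plus an extra round.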
Liu, Weisong; Huang, Zhitao; Wang, Xiang; Sun, Weichao
2017-01-01
In a cognitive radio sensor network (CRSN), wideband spectrum sensing devices that aim to exploit temporarily vacant spectrum intervals as soon as possible are of great importance. However, the challenge of increasingly high signal frequencies and wide bandwidths requires an extremely high sampling rate, which may exceed the front-end bandwidth of today's best analog-to-digital converters (ADCs). Recently, the newly proposed architecture called the modulated wideband converter (MWC) has emerged as an attractive analog compressed sensing technique that can greatly reduce the sampling rate. However, the MWC has high hardware complexity owing to its parallel channel structure, especially when the number of signals increases. In this paper, we propose a single-channel modulated wideband converter (SCMWC) scheme for spectrum sensing of band-limited wide-sense stationary (WSS) signals. With one antenna or sensor, this scheme saves not only sampling rate but also hardware complexity. We then present a new SCMWC-based single-node CR prototype system, on which the spectrum sensing algorithm was tested. Experiments on our hardware prototype show that the proposed architecture leads to successful spectrum sensing, with a total sampling rate, as well as hardware size, of only one MWC channel. PMID:28471410
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John M.; Iredell, Lena; Keita, Fricky
2009-01-01
This paper describes the AIRS Science Team Version 5 retrieval algorithm in terms of its three most significant improvements over the methodology used in the AIRS Science Team Version 4 retrieval algorithm. First, improved physics in Version 5 allows for the use of AIRS clear column radiances in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of clear column radiances R(sub i) for all channels. This new approach allows for the generation of more accurate values of R(sub i) and T(p) under most cloud conditions. Secondly, Version 5 contains a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 contains, for the first time, an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS-only sounding methodology, referred to as AIRS Version 5 AO, was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Results are shown comparing the relative performance of AIRS Version 4, Version 5, and Version 5 AO for a single day, January 25, 2003. The Goddard DISC is now generating and distributing products derived using the AIRS Science Team Version 5 retrieval algorithm. This paper also describes the Quality Control flags contained in the DISC AIRS/AMSU retrieval products and their intended use for scientific research purposes.
DSP-Based dual-polarity mass spectrum pattern recognition for bio-detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riot, V; Coffee, K; Gard, E
2006-04-21
The Bio-Aerosol Mass Spectrometry (BAMS) instrument analyzes single aerosol particles using a dual-polarity time-of-flight mass spectrometer, simultaneously recording spectra of thirty to a hundred thousand points for each polarity. We describe here a real-time pattern recognition algorithm developed at Lawrence Livermore National Laboratory that has been implemented on a nine-Digital-Signal-Processor (DSP) system from Signatec Incorporated. The algorithm first preprocesses the raw time-of-flight data independently through an adaptive baseline removal routine. The next step is a polarity-dependent calibration to a mass-to-charge representation, reducing the data to about five hundred to a thousand channels per polarity. The last step is identification, using a pattern recognition algorithm based on a library of known particle signatures, including threat agents and background particles. The identification step integrates the two polarities for a final determination using a score-based rule tree. This algorithm, operating on multiple channels per polarity and multiple polarities, is well suited to parallel real-time processing. It has been implemented on the PMP8A from Signatec Incorporated, a computer-based board that can interface directly to the two one-Giga-Sample digitizers (PDA1000 from Signatec Incorporated) used to record the two polarities of time-of-flight data. By using optimized data separation, pipelining, and parallel processing across the nine DSPs, it is possible to achieve a processing speed of up to a thousand particles per second while maintaining the recognition rate observed in a non-real-time implementation. This embedded system has allowed the BAMS technology to improve its throughput, and therefore its sensitivity, while maintaining a large dynamic range (number of channels and two polarities) and thus the system's specificity for bio-detection.
A fast-initializing digital equalizer with on-line tracking for data communications
NASA Technical Reports Server (NTRS)
Houts, R. C.; Barksdale, W. J.
1974-01-01
A theory is developed for a digital equalizer for use in reducing intersymbol interference (ISI) on high-speed data communications channels. The equalizer is initialized with a single isolated transmitter pulse, provided the signal-to-noise ratio (SNR) is not unusually low, and then switches to a decision-directed, on-line mode of operation that allows tracking of channel variations. Conditions for optimal tap-gain settings are obtained first for a transversal equalizer structure by using a mean squared error (MSE) criterion, a first-order gradient algorithm to determine the adjustable equalizer tap gains, and a sequence of isolated initializing pulses. Since the rate of tap-gain convergence depends on the eigenvalues of a channel output correlation matrix, convergence can be improved by making a linear transformation on the channel output to obtain a new correlation matrix.
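The MSE-criterion, first-order gradient tap-gain update for a transversal equalizer can be sketched as follows; the channel taps, step size, and equalizer delay are assumed values for illustration, and the training sequence stands in for the paper's isolated initializing pulses:

```python
import numpy as np

rng = np.random.default_rng(2)

# Dispersive channel introducing intersymbol interference (assumed taps)
channel = np.array([1.0, 0.5, 0.2])
n = 4000
s = rng.choice([-1.0, 1.0], size=n)
r = np.convolve(s, channel)[:n] + 0.02 * rng.standard_normal(n)

taps, delay = 7, 3     # transversal equalizer length and decision delay
w = np.zeros(taps)
mu = 0.01              # first-order gradient step size

for k in range(taps, n):
    xv = r[k-taps+1:k+1][::-1]        # tapped-delay-line contents
    e = s[k - delay] - w @ xv         # error under the MSE criterion
    w += mu * e * xv                  # gradient step on the tap gains

# residual mean squared error over the final 500 symbols
errs = [s[k - delay] - w @ r[k-taps+1:k+1][::-1] for k in range(n - 500, n)]
print(np.mean(np.square(errs)) < 0.05)
```

The convergence rate of this loop is governed by the eigenvalue spread of the correlation matrix of the vectors `xv`, which is exactly why the paper's linear transformation of the channel output helps.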
Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network
Lin, Kai; Wang, Di; Hu, Long
2016-01-01
With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods. PMID:27376302
Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.
2012-01-01
Atmospheric turbulence produces high-frequency accelerations in aircraft, typically greater than the response to pilot input. Motion-system-equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. To date, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. As with previous turbulence algorithms, the output of the channel augments only the vertical degree of freedom of motion. The algorithm employs a parallel aircraft model and an optional high-bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and the aeroelastic model on pilot control input. Results indicate that pilot input is better correlated with the aircraft response when the augmented channel is in place.
Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun
2018-01-01
Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study proposes a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength, and the parameters of a recognition algorithm. We formulated the design problem as an optimisation problem and solved it with an experiment-based hierarchical algorithm. Evaluation experiments using translucent plastic objects showed that the proposed system yielded an effective solution, with a wide FOV, recognition of all objects, and maximal positional and angular errors of 0.32 mm and 0.4°, when all RGB (red, green and blue) channels were used for illumination and the R channel image for recognition. Although using all RGB illumination with a grey-scale image also allowed recognition of all objects, only a narrow FOV was selected. Moreover, full recognition was not achieved using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination, and parameters of the recognition algorithm, and that tuning all RGB illumination is desirable even when single-channel or grey-scale images are used for recognition. PMID:29786665
Infrared traffic image enhancement algorithm based on dark channel prior and gamma correction
NASA Astrophysics Data System (ADS)
Zheng, Lintao; Shi, Hengliang; Gu, Ming
2017-07-01
Infrared traffic images acquired by intelligent traffic surveillance equipment suffer from low contrast, weak perceptual hierarchy, and a blurred visual effect. Infrared traffic image enhancement is therefore an indispensable step in nearly all infrared-imaging-based traffic engineering applications. In this paper, we propose an infrared traffic image enhancement algorithm based on the dark channel prior and gamma correction. The dark channel prior, well known as an image dehazing method, is here applied to infrared image enhancement for the first time. In the proposed algorithm, the original degraded infrared traffic image is first transformed with the dark channel prior to obtain an initial enhanced result. Because this initial result has low brightness, a further adjustment based on the gamma curve is applied. Comprehensive validation experiments reveal that the proposed algorithm outperforms current state-of-the-art algorithms.
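Though the paper gives no code, the two building blocks it names, the dark channel and the gamma curve, are simple to prototype. The sketch below is a generic NumPy illustration under our own assumptions (patch size, gamma value, function names), not the authors' implementation:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels followed by a local minimum
    filter over a patch x patch window (edge-padded)."""
    mins = img.min(axis=2)                    # channel-wise minimum
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def gamma_correct(img, gamma=0.6):
    """Brighten (gamma < 1) or darken (gamma > 1) an image in [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma
```

A grey-scale infrared frame can be stacked into three identical channels before computing the dark channel.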
Simulation of polymer translocation through protein channels
Muthukumar, M.; Kong, C. Y.
2006-01-01
A modeling algorithm is presented to compute simultaneously polymer conformations and ionic current, as single polymer molecules undergo translocation through protein channels. The method is based on a combination of Langevin dynamics for coarse-grained models of polymers and the Poisson–Nernst–Planck formalism for ionic current. For the illustrative example of ssDNA passing through the α-hemolysin pore, vivid details of conformational fluctuations of the polymer inside the vestibule and β-barrel compartments of the protein pore, and their consequent effects on the translocation time and extent of blocked ionic current are presented. In addition to yielding insights into several experimentally reported puzzles, our simulations offer experimental strategies to sequence polymers more efficiently.
Ion flux through membrane channels--an enhanced algorithm for the Poisson-Nernst-Planck model.
Dyrka, Witold; Augousti, Andy T; Kotulska, Malgorzata
2008-09-01
A novel algorithmic scheme for the numerical solution of the 3D Poisson-Nernst-Planck model is proposed. The algorithmic improvements are universal and independent of the detailed physical model. They include three major steps: an adjustable gradient-based step value, an adjustable relaxation coefficient, and an optimized segmentation of the modeled space. The enhanced algorithm significantly accelerates the computation and reduces its demands. The theoretical model was tested on a regular artificial channel and validated on a real protein channel, alpha-hemolysin, proving its efficiency.
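The role of an adjustable relaxation coefficient is easiest to see in a plain successive over-relaxation (SOR) solver for the Poisson part of the model. The sketch below is a generic 2-D textbook iteration, not the authors' 3D PNP scheme; `omega` plays the role of the relaxation coefficient:

```python
import numpy as np

def poisson_sor(f, h=1.0, omega=1.8, tol=1e-10, max_iter=10000):
    """Solve laplacian(u) = f on a square grid with u = 0 on the boundary
    by successive over-relaxation (5-point stencil, grid spacing h)."""
    u = np.zeros_like(f, dtype=float)
    n, m = f.shape
    for _ in range(max_iter):
        max_delta = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                # Gauss-Seidel value, then over-relax toward it by omega
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                             + u[i, j - 1] + u[i, j + 1]
                             - h * h * f[i, j])
                delta = omega * (gs - u[i, j])
                u[i, j] += delta
                max_delta = max(max_delta, abs(delta))
        if max_delta < tol:   # converged
            break
    return u
```

Raising `omega` toward 2 accelerates convergence on smooth problems; an adaptive scheme would tune it from the observed residual decay.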
Diversity combining in laser Doppler vibrometry for improved signal reliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dräbenstedt, Alexander
2014-05-27
Because of the speckle nature of the light reflected from rough surfaces, the signal quality of a vibrometer suffers from varying signal power. Deep signal outages manifest themselves as noise bursts and spikes in the demodulated velocity signal. Here we show that the signal quality of a single-point vibrometer can be substantially improved by diversity reception. This concept is widely used in RF communication and can be transferred to optical interferometry. When two statistically independent measurement channels are available which measure the same motion on the same spot, the probability that both channels see a signal drop-out at the same time is very low. We built a prototype instrument that uses polarization diversity to constitute two independent reception channels that are separately demodulated into velocity signals. Send and receive beams pass through different parts of the aperture so that the beams can be spatially separated. The two velocity channels are mixed into one more reliable signal by a PC program in real time with the help of the signal power information. An algorithm has been developed that ensures a mixing of two or more channels with minimum resulting variance. The combination algorithm also delivers an equivalent signal power for the combined signal. The combined signal lacks the vast majority of spikes that are present in the raw signals, and it extracts the true vibration information present in both channels. A statistical analysis shows that the probability of deep signal outages is greatly decreased; a 60-fold improvement can be shown. The reduction of spikes and noise bursts also reduces the noise in the spectral analysis of vibrations. Over certain frequency bands, a reduction of the noise density by a factor above 10 can be shown.
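A minimum-variance combination weighted by signal power can be sketched as follows. We assume, as an illustrative simplification not spelled out in the abstract, that each channel's demodulation noise variance is inversely proportional to its instantaneous signal power, so inverse-variance weights reduce to normalized power weights and the equivalent power of the mix is the sum of the channel powers:

```python
import numpy as np

def combine_channels(signals, powers, eps=1e-12):
    """Minimum-variance mix of channels that measure the same motion.

    signals, powers: arrays of shape (n_channels, n_samples).
    Weights w_k are proportional to power_k (inverse-variance weighting
    under the stated noise assumption) and normalised to sum to 1.
    """
    w = powers / (powers.sum(axis=0, keepdims=True) + eps)
    combined = (w * signals).sum(axis=0)
    equivalent_power = powers.sum(axis=0)   # SNRs add for independent channels
    return combined, equivalent_power
```

A channel in drop-out (near-zero power) thus contributes almost nothing, which is how the spikes in the raw signals are suppressed.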
A source number estimation method for single optical fiber sensor
NASA Astrophysics Data System (ADS)
Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu
2015-10-01
The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing. Realizing blind source separation (BSS) from the data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods is degraded by inaccurate source number estimation. Many excellent algorithms have been proposed to estimate the source number in array signal processing with multiple sensors, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for data received by a single optical fiber sensor. By a delay process, the single-sensor data are converted to a multi-dimensional form and the data covariance matrix is constructed; the estimation algorithms used in array signal processing can then be utilized. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number from the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although its performance is poor at low SNR, is able to accurately estimate the number of sources with colored noise. The experiments also show that the proposed method can be applied to estimate the source number from single-sensor received data.
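The eigenvalue-based estimation step can be illustrated with the standard MDL criterion of Wax and Kailath, applied to the sorted eigenvalues of the (smoothed) covariance matrix; this generic sketch uses our own function name and is not the paper's exact formulation:

```python
import numpy as np

def mdl_source_count(eigvals, n_snapshots):
    """Estimate the number of sources from the descending eigenvalues of
    the data covariance matrix via the MDL criterion: the k minimising
    -log-likelihood + penalty, where the likelihood compares geometric
    and arithmetic means of the presumed noise eigenvalues."""
    eigvals = np.asarray(eigvals, dtype=float)
    p = len(eigvals)
    n = n_snapshots
    costs = []
    for k in range(p):
        tail = eigvals[k:]                       # presumed noise eigenvalues
        m = p - k
        geo = np.exp(np.mean(np.log(tail)))      # geometric mean
        arith = np.mean(tail)                    # arithmetic mean
        loglik = n * m * np.log(geo / arith)
        penalty = 0.5 * k * (2 * p - k) * np.log(n)
        costs.append(-loglik + penalty)
    return int(np.argmin(costs))
```

When the noise eigenvalues cluster tightly (as after the smoothing step described above), geometric and arithmetic means agree and the penalty selects the correct k.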
NASA Technical Reports Server (NTRS)
Petty, G. W.
1994-01-01
Microwave rain rate retrieval algorithms have most often been formulated in terms of the raw brightness temperatures observed by one or more channels of a satellite radiometer. Taken individually, single-channel brightness temperatures generally represent a near-arbitrary combination of positive contributions due to liquid water emission and negative contributions due to scattering by ice and/or visibility of the radiometrically cold ocean surface. Unfortunately, for a given rain rate, emission by liquid water below the freezing level and scattering by ice particles above the freezing level are rather loosely coupled in both a physical and statistical sense. Furthermore, microwave brightness temperatures may vary significantly (approx. 30-70 K) in response to geophysical parameters other than liquid water and precipitation. Because of these complications, physical algorithms which attempt to directly invert observed brightness temperatures have typically relied on the iterative adjustment of detailed micro-physical profiles or cloud models, guided by explicit forward microwave radiative transfer calculations. In support of an effort to develop a significantly simpler and more efficient inversion-type rain rate algorithm, the physical information content of two linear transformations of single-frequency, dual-polarization brightness temperatures is studied: the normalized polarization difference P of Petty and Katsaros (1990, 1992), which is intended as a measure of footprint-averaged rain cloud transmittance for a given frequency; and a scattering index S (similar to the polarization corrected temperature of Spencer et al., 1989) which is sensitive almost exclusively to ice.
A reverse Monte Carlo radiative transfer model is used to elucidate the qualitative response of these physically distinct single-frequency indices to idealized 3-dimensional rain clouds and to demonstrate their advantages over raw brightness temperatures, both as stand-alone indices of precipitation activity and as primary variables in physical, multichannel rain rate retrieval schemes. As a byproduct of the present analysis, it is shown that conventional plane-parallel analyses of the well-known footprint-filling problem for emission-based algorithms may in some cases give seriously misleading results.
NASA Astrophysics Data System (ADS)
Rousseau, Yannick Y.; Van de Wiel, Marco J.; Biron, Pascale M.
2017-10-01
Meandering river channels are often associated with cohesive banks. Yet only a few river modelling packages include geotechnical and plant effects. Existing packages are solely compatible with single-threaded channels, require a specific mesh structure, derive lateral migration rates from hydraulic properties, determine stability based on friction angle, rely on nonphysical assumptions to describe cutoffs, or exclude floodplain processes and vegetation. In this paper, we evaluate the accuracy of a new geotechnical module that was developed and coupled with Telemac-Mascaret to address these limitations. Innovatively, the newly developed module relies on a fully configurable, universal genetic algorithm with tournament selection that permits it (1) to assess geotechnical stability along potentially unstable slope profiles intersecting liquid-solid boundaries, and (2) to predict the shape and extent of slump blocks while considering mechanical plant effects, bank hydrology, and the hydrostatic pressure caused by flow. The profiles of unstable banks are altered while ensuring mass conservation. Importantly, the new stability module is independent of mesh structure and can operate efficiently along multithreaded channels, cutoffs, and islands. Data collected along a 1.5-km-long reach of the semialluvial Medway Creek, Canada, over a period of 3.5 years are used to evaluate the capacity of the coupled model to accurately predict bank retreat in meandering river channels and to evaluate the extent to which the new model can be applied to a natural river reach located in a complex environment. Our results indicate that key geotechnical parameters can indeed be adjusted to fit observations, even with a minimal calibration effort, and that the model correctly identifies the location of the most severely eroded bank regions. 
The combined use of genetic and spatial analysis algorithms, in particular for the evaluation of geotechnical stability independently of the hydrodynamic mesh, permits the consideration of biophysical conditions for an extended river reach with complex bank geometries, with only a minor increase in run time. Further improvements with respect to plant representation could assist scientists in better understanding channel-floodplain interactions and in evaluating channel designs in river management projects.
NASA Astrophysics Data System (ADS)
Lv, ZhuoKai; Yang, Tiejun; Zhu, Chunhua
2018-03-01
By utilizing compressive sensing (CS), channel estimation methods can reduce the number of pilots and improve spectral efficiency. In this correspondence, channel estimation and pilot design are explored with the help of block-structured CS in massive MIMO systems. A pilot design scheme based on stochastic search is proposed to minimize the block coherence of the aggregate system matrix. Moreover, a block sparsity adaptive matching pursuit (BSAMP) algorithm under the common sparsity model is proposed so that the channel can be estimated precisely. Simulation results show that the proposed pilot design with superimposed pilots and the BSAMP algorithm provide better channel estimation than existing methods.
Quantifying short-lived events in multistate ionic current measurements.
Balijepalli, Arvind; Ettedgui, Jessica; Cornio, Andrew T; Robertson, Joseph W F; Cheung, Kin P; Kasianowicz, John J; Vaz, Canute
2014-02-25
We developed a generalized technique to characterize polymer-nanopore interactions via single channel ionic current measurements. Physical interactions between analytes, such as DNA, proteins, or synthetic polymers, and a nanopore cause multiple discrete states in the current. We modeled the transitions of the current to individual states with an equivalent electrical circuit, which allowed us to describe the system response. This enabled the estimation of short-lived states that are presently not characterized by existing analysis techniques. Our approach considerably improves the range and resolution of single-molecule characterization with nanopores. For example, we characterized the residence times of synthetic polymers that are three times shorter than those estimated with existing algorithms. Because the molecule's residence time follows an exponential distribution, we recover nearly 20-fold more events per unit time that can be used for analysis. Furthermore, the measurement range was extended from 11 monomers to as few as 8. Finally, we applied this technique to recover a known sequence of single-stranded DNA from previously published ion channel recordings, identifying discrete current states with subpicoampere resolution.
A high-throughput two channel discrete wavelet transform architecture for the JPEG2000 standard
NASA Astrophysics Data System (ADS)
Badakhshannoory, Hossein; Hashemi, Mahmoud R.; Aminlou, Alireza; Fatemi, Omid
2005-07-01
The Discrete Wavelet Transform (DWT) is increasingly prominent in image and video compression standards, as indicated by its use in JPEG2000. The lifting scheme is an alternative DWT implementation that has lower computational complexity and reduced resource requirements. The JPEG2000 standard introduces two lifting-scheme-based filter banks: the 5/3 and the 9/7. In this paper, a high-throughput, two-channel DWT architecture for both of the JPEG2000 DWT filters is presented. The proposed pipelined architecture has two separate input channels that process the incoming samples simultaneously with minimum memory requirements for each channel. The architecture has been implemented in VHDL and synthesized on a Xilinx Virtex2 XCV1000. The proposed architecture applies the DWT to a 2K by 1K image at 33 fps with a 75 MHz clock frequency. This performance is achieved with 70% fewer resources than two independent single-channel modules. The high throughput and reduced resource requirement make this architecture a proper choice for real-time applications such as Digital Cinema.
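For reference, the reversible 5/3 filter bank reduces to two integer lifting steps, a predict and an update. The software sketch below (our own symmetric edge handling, even-length input assumed) illustrates the arithmetic that such hardware pipelines implement:

```python
def dwt53_forward(x):
    """One level of the JPEG2000 reversible 5/3 lifting transform.
    x: list of ints with even length. Returns (approx, detail)."""
    n = len(x)
    # Predict: d[i] = x[2i+1] - floor((x[2i] + x[2i+2]) / 2)
    d = []
    for i in range(n // 2):
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]  # mirror edge
        d.append(x[2 * i + 1] - (left + right) // 2)
    # Update: s[i] = x[2i] + floor((d[i-1] + d[i] + 2) / 4)
    s = []
    for i in range(n // 2):
        dl = d[i - 1] if i > 0 else d[i]                     # mirror edge
        s.append(x[2 * i] + (dl + d[i] + 2) // 4)
    return s, d

def dwt53_inverse(s, d):
    """Exact integer inverse of dwt53_forward (perfect reconstruction)."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):          # undo the update step
        dl = d[i - 1] if i > 0 else d[i]
        x[2 * i] = s[i] - (dl + d[i] + 2) // 4
    for i in range(len(d)):          # undo the predict step
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        x[2 * i + 1] = d[i] + (left + right) // 2
    return x
```

Because both steps are integer-exact, the inverse reverses them bit for bit, which is what makes the 5/3 filter the reversible (lossless) path in JPEG2000.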
Computer simulator for a mobile telephone system
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1981-01-01
A software simulator was developed to assist NASA in the design of the land mobile satellite service. Structured programming techniques were used: the algorithm was developed in an ALGOL-like pseudo-language and then encoded in FORTRAN IV. The basic input to the system is a sine wave signal, although future plans call for actual sampled voice as the input signal. The simulator is capable of studying all possible combinations of types and modes of calls through the use of five communication scenarios: single-hop systems; double-hop, single-gateway systems; double-hop, double-gateway systems; mobile-to-wireline systems; and wireline-to-mobile systems. The transmitter, fading channel, and interference source simulation are also discussed.
Channel estimation based on quantized MMP for FDD massive MIMO downlink
NASA Astrophysics Data System (ADS)
Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie
2016-10-01
In this paper, we consider channel estimation for massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods, including the LS, LMMSE, CoSaMP and conventional MMP estimators.
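The greedy matching-pursuit family that MMP and CoSaMP belong to is easiest to illustrate with plain orthogonal matching pursuit (OMP); the sketch below is a generic textbook version, not the paper's quantized MMP:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y ~= A @ x.

    At each step, pick the column most correlated with the residual,
    then re-fit all selected coefficients by least squares.
    """
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

In the channel-estimation setting, `A` is the pilot measurement matrix and `x` holds the sparse multipath gains; MMP generalises this by tracking several candidate supports in parallel.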
NASA Astrophysics Data System (ADS)
Yan, H.; Zheng, M. J.; Zhu, D. Y.; Wang, H. T.; Chang, W. S.
2015-07-01
When using the clutter suppression interferometry (CSI) algorithm to perform signal processing in a three-channel wide-area surveillance radar system, the primary concern is to effectively suppress the ground clutter. However, a portion of the moving target's energy is also lost in the process of channel cancellation, which is often neglected in conventional applications. In this paper, we first investigate the two-dimensional (radial velocity and squint angle) residual amplitude of moving targets after channel cancellation with the CSI algorithm. Then, a new approach is proposed to increase the two-dimensional detection probability of moving targets by retaining the maximum value of the three channel cancellation results in a non-uniformly spaced channel system. In addition, a theoretical expression for the false alarm probability with the proposed approach is derived. Compared with conventional approaches in a uniformly spaced channel system, simulation results validate the effectiveness of the proposed approach. To our knowledge, this is the first time that the two-dimensional detection probability of the CSI algorithm has been studied.
Improved Detection of Vowel Envelope Frequency Following Responses Using Hotelling's T2 Analysis.
Vanheusden, Frederique J; Bell, Steven L; Chesnaye, Michael A; Simpson, David M
2018-05-11
Objective detection of brainstem responses to natural speech stimuli is an important tool for the evaluation of hearing aid fitting, especially in people who may not be able to respond reliably in behavioral tests. Of particular interest is the envelope frequency following response (eFFR), which refers to the EEG response at the stimulus' fundamental frequency (and its harmonics), and here in particular the response to natural spoken vowel sounds. This article introduces the frequency-domain Hotelling's T2 (HT2) method for eFFR detection. This method was compared, in terms of sensitivity in detecting eFFRs at the fundamental frequency (HT2_F0), to two single-channel frequency-domain methods (the F test on Fourier analyzer (FA) amplitude spectra [FA-F-Test] and magnitude-squared coherence [MSC]) in detecting envelope following responses to natural vowel stimuli in simulated data and in EEG data from normal-hearing subjects. Sensitivity was assessed based on the number of detections and the time needed to detect a response at a false-positive rate of 5%. The study also explored whether a single-channel, multifrequency HT2 (HT2_3F) and a multichannel, multifrequency HT2 (HT2_MC) could further improve response detection. Four repeated words were presented sequentially at 70 dB SPL (LAeq) through ER-2 insert earphones. The stimuli consisted of a prolonged vowel in a /hVd/ structure (where V represents different vowel sounds). Each stimulus was presented over 440 sweeps (220 condensation and 220 rarefaction). EEG data were collected from 12 normal-hearing adult participants. After preprocessing and artifact removal, eFFR detection was compared between the algorithms. For the simulation study, simulated EEG signals were generated by adding random noise at multiple signal-to-noise ratios (SNRs; 0 to −60 dB) to the auditory stimuli, as well as to a single sinusoid at the fluctuating and flattened fundamental frequency (f0).
For each SNR, 1000 sets of 440 simulated epochs were generated. Performance of the algorithms was assessed based on the number of sets for which a response could be detected at each SNR. In the simulation studies, HT2_3F significantly outperformed the other algorithms when detecting a vowel stimulus in noise. For simulations containing responses at only a single frequency, HT2_3F performed worse than the other approaches applied in this study, as the additional frequencies included do not contain additional information. For recorded EEG data, HT2_MC showed a significantly higher response detection rate than MSC and the FA-F-Test. Both HT2_MC and HT2_F0 also showed a significant reduction in detection time compared with the FA-F-Test algorithm. Comparisons between different electrode locations confirmed a higher number of detections for electrodes close to Cz than for more peripheral locations. The HT2 method is more sensitive than the FA-F-Test and MSC in detecting responses to complex stimuli because it allows detection at multiple frequencies (HT2_3F) and across multiple EEG channels (HT2_MC) simultaneously. This effect was shown in simulation studies for HT2_3F and in EEG data for the HT2_MC algorithm. The spread in detection time across subjects is also lower for the HT2 algorithm, with a decision on the presence of an eFFR possible within 5 min.
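A single-channel, multifrequency HT2 statistic can be sketched as a one-sample Hotelling's T2 test on the sine and cosine projections of each epoch at the chosen frequencies. This is a generic textbook construction with our own interface, not the authors' exact implementation:

```python
import numpy as np

def hotelling_t2(epochs, freqs, fs):
    """One-sample Hotelling's T2 on Fourier components of EEG epochs.

    epochs: array (n_epochs, n_samples); freqs: list of frequencies in Hz
    to pool (e.g. [f0] for HT2_F0 or [f0, 2*f0, 3*f0] for a 3F variant);
    fs: sampling rate. Returns the T2 statistic (larger means stronger
    evidence that a phase-locked response is present).
    """
    n_epochs, n_samples = epochs.shape
    t = np.arange(n_samples) / fs
    feats = []
    for f in freqs:
        feats.append(epochs @ np.cos(2 * np.pi * f * t))  # real part
        feats.append(epochs @ np.sin(2 * np.pi * f * t))  # imaginary part
    X = np.column_stack(feats)          # (n_epochs, 2 * len(freqs))
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    return float(n_epochs * mean @ np.linalg.solve(cov, mean))
```

Under the null (noise only) the statistic follows a scaled F distribution, which is how a 5% false-positive threshold would be set; the multichannel variant simply stacks projections from several electrodes into X.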
IDMA-Based MAC Protocol for Satellite Networks with Consideration on Channel Quality
2014-01-01
In order to overcome the shortcomings of existing medium access control (MAC) protocols based on TDMA or CDMA in satellite networks, the interleave division multiple access (IDMA) technique is introduced into satellite communication networks. A novel wide-band IDMA MAC protocol based on channel quality is therefore proposed in this paper, consisting of a dynamic power allocation algorithm, a rate adaptation algorithm, and a call admission control (CAC) scheme. First, the power allocation algorithm, combining IDMA SINR-evolution and channel quality prediction, is developed to guarantee high power efficiency even in poor channel conditions. Second, an effective rate adaptation algorithm, based on accurate per-timeslot channel information and on rate degradation, is realized. Moreover, based on channel quality prediction, a CAC scheme combining the new power allocation algorithm, rate scheduling, and buffering strategies is proposed for the emerging IDMA systems; it can support a variety of traffic types and offer quality of service (QoS) guarantees corresponding to different priority levels. Simulation results show that the new wide-band IDMA MAC protocol can make accurate estimates of the available resource considering the effect of multiuser detection (MUD) and the QoS requirements of multimedia traffic, leading to low outage probability as well as high overall system throughput.
A comparison of waveform processing algorithms for single-wavelength LiDAR bathymetry
NASA Astrophysics Data System (ADS)
Wang, Chisheng; Li, Qingquan; Liu, Yanxiong; Wu, Guofeng; Liu, Peng; Ding, Xiaoli
2015-03-01
Due to their low cost and lightweight units, single-wavelength LiDAR bathymetric systems are an ideal option for shallow-water (<12 m) bathymetry. However, one disadvantage of such systems is the lack of near-infrared and Raman channels, which makes extracting the water surface difficult. The choice of a suitable waveform processing method is therefore extremely important to guarantee the accuracy of the bathymetric retrieval. In this paper, we test six algorithms for single-wavelength bathymetric waveform processing: peak detection (PD), the average square difference function (ASDF), Gaussian decomposition (GD), quadrilateral fitting (QF), Richardson-Lucy deconvolution (RLD), and Wiener filter deconvolution (WD). Most of these algorithms have previously been applied only to topographic LiDAR waveforms captured over land. A simulated dataset and an Optech Aquarius dataset were used to assess the algorithms, with the focus on their capability to extract the depth and the bottom response. The influences of a number of water and equipment parameters were also investigated using a Monte Carlo method. The results showed that the RLD method had superior performance in terms of a high detection rate and low errors in the retrieved depth and magnitude. The attenuation coefficient, noise level, water depth, and bottom reflectance had significant influences on the measurement error of the retrieved depth, while the effects of scan angle and water surface roughness were less obvious.
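The RLD step can be sketched for a 1-D return waveform with the classic multiplicative Richardson-Lucy iteration; the system response (`psf`), iteration count and edge handling below are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """1-D Richardson-Lucy deconvolution of a LiDAR return waveform.

    observed and psf are nonnegative 1-D arrays; psf is normalised to
    sum to 1. Each iteration multiplies the estimate by the back-projected
    ratio of observed to re-blurred data, sharpening the surface and
    bottom returns that overlap in shallow water.
    """
    psf = psf / psf.sum()
    psf_flip = psf[::-1]                       # adjoint of the blur
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate
```

More iterations sharpen the peaks further but amplify noise, which is consistent with the sensitivity to noise level reported above.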
NASA Astrophysics Data System (ADS)
Liu, Xuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya
2018-01-01
Traditional clock recovery schemes achieve timing adjustment by digital interpolation, thus recovering the sampling sequence. Building on this, an improved clock recovery architecture with joint channel equalization for coherent optical communication systems is presented in this paper; the loop differs from traditional clock recovery. To reduce the interpolation error caused by distortion in the frequency domain of the interpolator and to suppress the spectral mirroring generated by the sampling rate change, the proposed algorithm, jointly with equalization, improves the original interpolator in the loop, adds adaptive filtering, and applies error compensation to the original signals according to the equalized pre-filtering signals; the signals are then adaptively interpolated through the feedback loop. Furthermore, the phase-splitting timing recovery algorithm is adopted in this paper. The timing error is calculated according to the improved algorithm when there is no transition between adjacent symbols, making the calculated timing error more accurate. Meanwhile, a carrier coarse synchronization module is placed before timing recovery to eliminate larger frequency offset interference, which effectively adjusts the sampling clock phase. Simulation results show that the timing error is greatly reduced after the loop is changed. Based on the phase-splitting algorithm, the BER and MSE are better than those of the unmodified architecture. In the fiber channel, using the MQAM modulation format, after 100 km transmission over single-mode fiber, and especially when the roll-off factor (ROF) tends to 0, the algorithm shows better clock performance under different ROFs. When SNR values are less than 8, the BER reaches the 10^-2 to 10^-1 range. The proposed timing recovery is thus more suitable for situations with low SNR values.
NASA Astrophysics Data System (ADS)
Heilman, Jesse Alan
The search for the production of four top quarks decaying in the dileptonic channel in proton-proton collisions at the LHC is presented. The analysis utilises the data recorded by the CMS experiment at √s = 13 TeV in 2015, corresponding to an integrated luminosity of 2.6 inverse femtobarns. A boosted decision tree algorithm is used to select signal and suppress background events. Upper limits on dileptonic four-top-quark production of 14.9 times the predicted standard model cross section (observed) and 22.3 (+16.2/−8.4) times the predicted standard model cross section (expected) are calculated at the 95% confidence level. A combination is then performed with a parallel analysis of the single-lepton channel to extend the reach of the search.
Call sign intelligibility improvement using a spatial auditory display
NASA Technical Reports Server (NTRS)
Begault, Durand R.
1994-01-01
A spatial auditory display was designed for separating the multiple communication channels usually heard over one ear to different virtual auditory positions. The single 19-inch rack-mount device utilizes digital filtering algorithms to separate up to four communication channels. The filters use four different binaural transfer functions, synthesized from actual outer-ear measurements, to impose localization cues on the incoming sound. Hardware design features include 'fail-safe' operation in the case of power loss, and microphone/headset interfaces to the mobile launch communication system in use at KSC. An experiment designed to verify the intelligibility advantage of the display used 130 different call signs taken from the communications protocol used at NASA KSC. A 6 to 7 dB intelligibility advantage was found when multiple channels were spatially displayed, compared to monaural listening. The findings suggest that the use of a spatial auditory display could enhance both occupational and operational safety and efficiency of NASA operations.
NASA Astrophysics Data System (ADS)
Saito, Takahiro; Takahashi, Hiromi; Komatsu, Takashi
2006-02-01
The Retinex theory, first proposed by Land, deals with the separation of irradiance from reflectance in an observed image. The separation problem is ill-posed, and Land and others proposed various Retinex separation algorithms. Recently, Kimmel and others proposed a variational framework that unifies previous Retinex algorithms, such as the Poisson-equation-type Retinex algorithms developed by Horn and others, and presented a Retinex separation algorithm with the time evolution of a linear diffusion process. However, Kimmel's separation algorithm cannot achieve physically rational separation if the true irradiance varies among color channels. To cope with this problem, we introduce a nonlinear diffusion process into the time evolution. Moreover, for the extension to color images, we present two approaches to treating the color channels: an independent approach that treats each color channel separately, and a collective approach that treats all color channels collectively. The latter approach outperforms the former. Furthermore, we apply our separation algorithm to high-quality chroma keying, in which, before a foreground frame and a background frame are combined into an output image, the color of each pixel in the foreground frame is spatially adaptively corrected through transformation of the separated irradiance. Experiments demonstrate the superiority of our separation algorithm over Kimmel's.
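The abstract does not give the authors' diffusivity, but the flavor of replacing a linear diffusion time evolution with a nonlinear one can be sketched with a Perona-Malik-type step, named plainly here as a substitute illustration (our own edge-stopping function and parameters):

```python
import numpy as np

def nonlinear_diffusion_step(u, dt=0.1, kappa=0.1):
    """One explicit time-evolution step of a Perona-Malik-type nonlinear
    diffusion on a 2-D array, with replicate (Neumann) boundaries.
    Smooth regions diffuse freely; strong edges (large differences
    relative to kappa) are preserved."""
    p = np.pad(u, 1, mode="edge")
    north = p[:-2, 1:-1] - u          # differences to the four neighbours
    south = p[2:, 1:-1] - u
    west = p[1:-1, :-2] - u
    east = p[1:-1, 2:] - u
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)   # edge-stopping function
    return u + dt * (g(north) * north + g(south) * south
                     + g(west) * west + g(east) * east)
```

In a linear diffusion, g would be identically 1; making g fall off at large gradients is what lets an irradiance estimate smooth within regions while respecting channel-dependent edges.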
Jia, Zhensheng; Chien, Hung-Chang; Cai, Yi; Yu, Jianjun; Zhang, Chengliang; Li, Junjie; Ma, Yiran; Shang, Dongdong; Zhang, Qi; Shi, Sheping; Wang, Huitao
2015-02-09
We experimentally demonstrate a quad-carrier 1-Tb/s solution with a 37.5-GBaud PM-16QAM signal over a 37.5-GHz optical grid at 6.7 b/s/Hz net spectral efficiency. Digital Nyquist pulse shaping at the transmitter and post-equalization at the receiver are employed to mitigate the joint impairments of inter-symbol interference (ISI) and inter-channel interference (ICI). The post-equalization algorithms consist of a one-sample-per-symbol decision-directed least mean square (DD-LMS) adaptive filter, a digital post filter, and maximum likelihood sequence estimation (MLSE), with a positive iterative process among them. By combining these algorithms, an OSNR (0.1 nm) improvement of as much as 4 dB at the SD-FEC limit (Q² = 6.25, corresponding to BER = 2.0e-2) is obtained compared with no post-equalization, and transmission over an 820-km EDFA-only standard single-mode fiber (SSMF) link is achieved for two 1.2-Tb/s signals with an averaged Q² factor larger than 6.5 dB for all sub-channels. Additionally, 50-GBaud 16QAM operating at 1.28 samples/symbol in a DAC is also investigated, and successful transmission over a 410-km SSMF link is achieved on a 62.5-GHz optical grid.
Belief propagation decoding of quantum channels by passing quantum messages
NASA Astrophysics Data System (ADS)
Renes, Joseph M.
2017-07-01
The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.
1D-VAR Retrieval Using Superchannels
NASA Technical Reports Server (NTRS)
Liu, Xu; Zhou, Daniel; Larar, Allen; Smith, William L.; Schluessel, Peter; Mango, Stephen; SaintGermain, Karen
2008-01-01
Since modern ultra-spectral remote sensors have thousands of channels, it is difficult to include all of them in a 1D-var retrieval system. We describe a physical inversion algorithm that includes all available channels for atmospheric temperature, moisture, cloud, and surface parameter retrievals. Both the forward model and the inversion algorithm compress the channel radiances into superchannels. These superchannels are obtained by projecting the radiance spectra onto a set of pre-calculated eigenvectors. The forward model provides both the superchannel properties and the Jacobians directly in EOF space. For ultra-spectral sensors such as the Infrared Atmospheric Sounding Interferometer (IASI) and the NPOESS Airborne Sounder Testbed Interferometer (NAST), a compression ratio of more than 80 can be achieved, leading to a significant reduction in the computation involved in the inversion process. Results of applying the algorithm to real IASI and NAST data will be shown.
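The superchannel compression described above (projecting radiance spectra onto a set of pre-calculated eigenvectors) can be sketched in a few lines of NumPy. The sizes and data below are synthetic stand-ins, not actual IASI radiances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 500 spectra x 8461 channels (IASI-like size).
n_train, n_chan, n_eof = 500, 8461, 100
# Low-rank synthetic radiances: a few spectral modes plus small noise.
modes = rng.standard_normal((20, n_chan))
train = rng.standard_normal((n_train, 20)) @ modes \
        + 0.01 * rng.standard_normal((n_train, n_chan))

# Pre-calculated eigenvectors (EOFs) from the training radiances.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
eofs = vt[:n_eof]                             # (n_eof, n_chan)

# Compress a new spectrum into superchannels, then reconstruct it.
spectrum = rng.standard_normal(20) @ modes
super_channels = eofs @ (spectrum - mean)     # n_eof values instead of n_chan
reconstructed = mean + eofs.T @ super_channels

compression_ratio = n_chan / n_eof
```

With 100 eigenvectors retained for 8461 channels, the compression ratio is about 85, in line with the factor of more than 80 quoted above.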
Experiences with serial and parallel algorithms for channel routing using simulated annealing
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1988-01-01
Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology that allows the solution process to back out of local minima that may be entered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing can be found. The algorithm presented imposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of transformations uses a number of heuristics while retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
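A minimal sketch of the annealing loop described above, with overlapping nets permitted but penalized through the cost function. The instance, penalty, and cooling schedule are illustrative, not taken from the paper:

```python
import math, random

random.seed(1)

# Toy channel-routing instance: each net spans [left, right] along the channel;
# the transformation set allows moving any net to any track, with overlaps
# permitted but costed.
nets = [(0, 4), (4, 8), (0, 5), (5, 9), (0, 9)]
n_tracks = 3

def cost(assign):
    """Total horizontal overlap between nets that share a track."""
    c = 0
    for i in range(len(nets)):
        for j in range(i + 1, len(nets)):
            if assign[i] == assign[j]:
                c += max(0, min(nets[i][1], nets[j][1]) - max(nets[i][0], nets[j][0]))
    return c

assign = [random.randrange(n_tracks) for _ in nets]
initial_cost = cost(assign)
best, best_cost = list(assign), initial_cost
temp, cooling = 5.0, 0.98
for _ in range(3000):
    cand = list(assign)
    cand[random.randrange(len(nets))] = random.randrange(n_tracks)
    delta = cost(cand) - cost(assign)
    # Accept downhill moves always; accept uphill moves with probability
    # exp(-delta / T), letting the search back out of local minima.
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        assign = cand
        if cost(assign) < best_cost:
            best, best_cost = list(assign), cost(assign)
    temp *= cooling
```

As the temperature decays, uphill acceptance vanishes and the search settles into a low-overlap assignment.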
Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods
NASA Astrophysics Data System (ADS)
Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong
2008-12-01
Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm in the low SNR condition. Simulation results show the superior performance of our proposed methods.
HF band filter bank multi-carrier spread spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laraway, Stephen Andrew; Moradi, Hussein; Farhang-Boroujeny, Behrouz
This paper describes modifications to the filter bank multicarrier spread spectrum (FB-MC-SS) system presented in [1] and [2] to enable transmission of this waveform in the HF skywave channel. FB-MC-SS is well suited to the HF channel because it performs well in channels with frequency-selective fading and interference. This paper describes new algorithms for packet detection, timing recovery, and equalization that are suitable for the HF channel. An algorithm for optimizing the peak-to-average power ratio (PAPR) of the FB-MC-SS waveform is also presented; applying this algorithm yields a waveform with low PAPR. Simulation results using a wideband HF channel model demonstrate the robustness of this system over a wide range of delay and Doppler spreads.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, W.; Yin, J.; Li, C.
This paper presents a novel front-end electronics design based on a front-end ASIC with post digital filtering and calibration, dedicated to CZT detectors for PET imaging. A cascade amplifier based on a split-leg topology is selected to realize the charge-sensitive amplifier (CSA) for low-noise performance and a simple power-supply scheme. The output of the CSA is connected to a variable-gain amplifier (VGA) to generate signals compatible with the A/D conversion. A multi-channel single-slope ADC is designed to sample multiple points for digital filtering and shaping. The digital signal processing algorithms are implemented in an FPGA. To verify the proposed scheme, a front-end readout prototype ASIC was designed and implemented in a 0.35 μm CMOS process. A single readout channel integrates a CSA, a VGA, a 10-bit ADC, and registers. Two dummy channels, bias circuits, and a time controller are also integrated. The die size is 2.0 mm × 2.1 mm. The input range of the ASIC is from 2000 e⁻ to 100000 e⁻, which is suitable for the detection of X- and gamma rays from 11.2 keV to 550 keV. The nonlinearity of the output voltage is less than 1%. The gain of the readout channel is 40.2 V/pC. The static power dissipation is about 10 mW/channel. These test results show that the electrical performance of the ASIC can well satisfy PET imaging applications. (authors)
A natural-color mapping for single-band night-time image based on FPGA
NASA Astrophysics Data System (ADS)
Wang, Yilun; Qian, Yunsheng
2018-01-01
A natural-color mapping method for single-band night-time images based on FPGA transfers the color of a reference image to the single-band night-time image, producing results consistent with human visual habits that can help observers identify targets. This paper introduces the processing of the natural-color mapping algorithm based on FPGA. First, the image is transformed by histogram equalization, and the intensity and standard-deviation features of the reference image are stored in SRAM. Then, the intensity and standard-deviation features of the real-time digital images are calculated by the FPGA. Finally, the FPGA completes the color mapping by matching pixels between the images using the features in the luminance channel.
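The statistics-based mapping step can be illustrated with a simplified global (non-FPGA) sketch: histogram-equalize the night-time image, then impose the reference image's per-channel mean and standard deviation. The paper matches local features in hardware; this global version only conveys the idea, and all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical inputs: a single-band night-time image and a color reference.
night = rng.uniform(0, 1, (64, 64))            # single-band intensity
reference = rng.uniform(0, 1, (64, 64, 3))     # natural-color reference image

# Histogram-equalize the night image (the first processing step above).
hist, bins = np.histogram(night, bins=256, range=(0.0, 1.0))
cdf = hist.cumsum() / night.size
night_eq = np.interp(night, bins[:-1], cdf)

# Impose the reference image's per-channel mean and standard deviation on the
# equalized luminance (a global simplification of the feature-matching step).
out = np.empty(night.shape + (3,))
for c in range(3):
    out[..., c] = (night_eq - night_eq.mean()) / night_eq.std() \
                  * reference[..., c].std() + reference[..., c].mean()
out = np.clip(out, 0.0, 1.0)
```

By construction, each output channel inherits the reference channel's first- and second-order statistics, which is what makes the colorized result look natural.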
Active noise control in a duct to cancel broadband noise
NASA Astrophysics Data System (ADS)
Chen, Kuan-Chun; Chang, Cheng-Yuan; Kuo, Sen M.
2017-09-01
This paper presents the cancellation of duct noise using active noise control (ANC) techniques. We use a single-channel feedforward algorithm with feedback neutralization to realize ANC. Several kinds of duct noise, including tonal noise, swept tones, and white noise, were investigated. Experimental results show that the proposed ANC system cancels these noises in a PVC duct very well. The noise reduction for white noise can be up to 20 dB.
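A single-channel feedforward ANC loop of this kind is commonly realized with the FxLMS algorithm. The sketch below simulates cancelling a 200 Hz duct tone; the path models, filter length, and step size are illustrative assumptions, and feedback neutralization is omitted by assuming a perfect secondary-path estimate:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 8000, 8000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 200 * t) + 0.1 * rng.standard_normal(n)  # reference noise

P = np.array([0.0, 0.9, 0.4, 0.1])   # hypothetical primary path (noise -> error mic)
S = np.array([0.0, 0.7, 0.3])        # hypothetical secondary path (speaker -> error mic)
S_hat = S.copy()                     # assume a perfect secondary-path estimate

L, mu = 16, 0.01
w = np.zeros(L)                      # adaptive control filter
xbuf = np.zeros(L)                   # reference history
fxbuf = np.zeros(L)                  # filtered-reference history
ybuf = np.zeros(len(S))              # anti-noise history through the secondary path
d = np.convolve(x, P)[:n]            # primary noise at the error microphone
err = np.zeros(n)

for i in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                     # anti-noise output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    err[i] = d[i] + S @ ybuf         # residual at the error microphone
    fx = S_hat @ xbuf[:len(S_hat)]   # reference filtered through S-hat
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
    w -= mu * err[i] * fxbuf         # FxLMS update

# Residual power in the final second vs. the uncontrolled noise power.
reduction_db = 10 * np.log10(np.mean(d[-2000:] ** 2) / np.mean(err[-2000:] ** 2))
```

After the filter converges, the dominant tone is largely cancelled and the residual is set by the broadband component of the reference.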
Dai, Shengfa; Wei, Qingguo
2017-01-01
The common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, the use of a large number of channels makes common spatial patterns prone to over-fitting and makes the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of the whole channel set to save computational time and improve classification accuracy. In this paper, a novel method named the backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial patterns. Each individual in the population is an N-dimensional vector, with each component representing one channel. A population of binary codes is generated randomly in the beginning, and channels are then selected according to the evolution of these codes. The number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with far fewer channels compared to standard common spatial patterns with all channels.
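The objective function described above, a weighted combination of classification error rate and relative channel count evaluated on binary codes, might be sketched as follows; the weights, channel count, and the stand-in error function are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)
n_channels = 22                        # e.g. a typical EEG montage size

def objective(code, error_rate, k1=0.9, k2=0.1):
    """Fitness used to evolve channel subsets: a weighted combination of the
    classification error rate and the relative number of chosen channels
    (k1 and k2 are illustrative weights, not the paper's values)."""
    n_selected = int(code.sum())
    if n_selected == 0:
        return np.inf                  # an empty subset is invalid
    return k1 * error_rate + k2 * n_selected / n_channels

# Each individual is a binary code; the 1's mark the selected channels.
population = (rng.random((10, n_channels)) > 0.5).astype(int)

# Stand-in for the CSP+classifier evaluation: a dummy error rate that grows
# when any of the hypothetical informative channels (3, 7, 11) is dropped.
informative = np.zeros(n_channels, dtype=int)
informative[[3, 7, 11]] = 1
def dummy_error(code):
    missed = int(np.sum(informative * (1 - code)))
    return 0.1 + 0.1 * missed

scores = [objective(ind, dummy_error(ind)) for ind in population]
best_individual = population[int(np.argmin(scores))]
```

Under this fitness, the code selecting exactly the informative channels scores at least as well as any random individual, which is the pressure that drives the evolutionary search toward small, accurate subsets.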
Harlander, Niklas; Rosenkranz, Tobias; Hohmann, Volker
2012-08-01
Single-channel noise reduction has been well investigated and seems to have reached its limits in terms of speech intelligibility improvement; however, the quality of such schemes can still be advanced. This study tests to what extent novel model-based processing schemes might improve performance, in particular for non-stationary noise conditions. Two prototype model-based algorithms, one speech-model-based and one auditory-model-based, were compared to a state-of-the-art non-parametric minimum statistics algorithm. A speech intelligibility test, preference rating, and listening effort scaling were performed. Additionally, three objective quality measures for the signal, background, and overall distortions were applied. For a better comparison of the algorithms, particular attention was given to the use of a similar Wiener-based gain rule. The perceptual investigation was performed with fourteen hearing-impaired subjects. The results revealed that the non-parametric algorithm and the auditory-model-based algorithm did not affect speech intelligibility, whereas the speech-model-based algorithm slightly decreased intelligibility. In terms of subjective quality, both model-based algorithms performed better than the unprocessed condition and the reference, in particular for highly non-stationary noise environments. The data support the hypothesis that model-based algorithms are promising for improving performance in non-stationary noise conditions.
Pinning impulsive control algorithms for complex network
NASA Astrophysics Data System (ADS)
Sun, Wen; Lü, Jinhu; Chen, Shihua; Yu, Xinghuo
2014-03-01
In this paper, we further investigate the synchronization of complex dynamical networks via pinning control, in which a selection of nodes is controlled at discrete times. Different from most existing work, the pinning control algorithms utilize only impulsive signals at discrete time instants, which may greatly improve communication channel efficiency and reduce control cost. Two classes of algorithms are designed, one for strongly connected complex networks and another for non-strongly connected complex networks. It is suggested that in a strongly connected network with suitable coupling strength, a single controller at any one of the network's nodes can always pin the network to its homogeneous solution. In the non-strongly connected case, the location and minimum number of nodes needed to pin the network are determined by the Frobenius normal form of the coupling matrix. In addition, the coupling matrix is not necessarily symmetric or irreducible. Illustrative examples are then given to validate the proposed pinning impulsive control algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papantoni-Kazakos, P.; Paterakis, M.
1988-07-01
For many communication applications with time constraints (e.g., transmission of packetized voice messages), a critical performance measure is the percentage of messages transmitted within a given amount of time after their generation at the transmitting station. This report presents a random-access algorithm (RAA) suitable for time-constrained applications. Performance analysis demonstrates that significant message-delay improvement is attained at the expense of minimal traffic loss. The case of noisy channels is also considered; the noise appears as erroneously observed channel feedback. Error sensitivity analysis shows that the proposed random-access algorithm is insensitive to feedback channel errors. Window random-access algorithms are considered next. These algorithms constitute an important subclass of multiple-access algorithms (MAAs); they are distributive, and they attain high throughput and low delays by controlling the number of simultaneously transmitting users.
Rowland, Joel C.; Shelef, Eitan; Pope, Paul A.; ...
2016-07-15
Remotely sensed imagery of rivers has long served as a means of characterizing channel properties and detecting planview change. In the last decade, the dramatic increase in the availability of satellite imagery and processing tools has created the potential to greatly expand the spatial and temporal scale of our understanding of river morphology and dynamics. To date, the majority of GIS and automated analyses of planview change in rivers from remotely sensed data have been developed for single-threaded meandering river systems. These methods have limited applicability to many of the Earth's rivers, which have complex multi-channel planforms. Here we present the methodology of a set of analysis algorithms collectively called Spatially Continuous Riverbank Erosion and Accretion Measurements (SCREAM). SCREAM analyzes planview river metrics regardless of river morphology. These algorithms quantify both the erosion and accretion rates of riverbanks from binary masks of channels generated from imagery acquired at two time periods. Additionally, the program quantifies the area of change between river channels and the surrounding floodplain and the area of islands lost or formed between these two time periods. To examine variations in erosion rates in relation to local channel attributes and to make rate comparisons between river systems of varying sizes, the program determines channel widths and bank curvature at every bank pixel. SCREAM was developed and tested on rivers with diverse and complex planform morphologies in imagery acquired from a range of observational platforms with varying spatial resolutions. Validation and verification of SCREAM-generated metrics against manual measurements show no significant measurement errors in the determination of channel width, erosion, and bank aspects.
SCREAM has the potential to provide data both for the quantitative examination of the controls on erosion rates and for the comparison of these rates across river systems ranging broadly in size and planform morphology.
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.; Leban, F.; Mashiko, S.
1986-01-01
Single-channel pilot manual control output in closed-loop tracking tasks is modeled in terms of linear discrete transfer functions that are parsimonious and guaranteed stable. The transfer functions are found by applying a modified superposition time-series generation technique. A Levinson-Durbin algorithm is used to determine the filter that prewhitens the input, and a projective (least squares) fit of pulse response estimates is used to guarantee identified model stability. Results from two case studies are compared to previous findings, where the source data are relatively short records, approximately 25 seconds long. Time delay effects and pilot seasonalities are discussed and analyzed. It is concluded that single-channel time-series controller modeling is feasible on short records, and that it is important for the analyst to determine a criterion for best time-domain fit that allows association of model parameter values, such as pure time delay, with actual physical and physiological constraints. The purpose of the modeling is thus paramount.
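The Levinson-Durbin recursion used here for the prewhitening filter solves the Yule-Walker equations from the signal's autocorrelation. A sketch on a synthetic AR(2) signal, with coefficients chosen arbitrarily for illustration:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Yule-Walker equations for AR predictor coefficients via the
    Levinson-Durbin recursion; returns (a, prediction_error_variance)."""
    a = np.zeros(order)
    e = r[0]
    for k in range(order):
        acc = r[k + 1] - a[:k] @ r[k:0:-1]     # reflection numerator
        kappa = acc / e
        a_new = a.copy()
        a_new[k] = kappa
        a_new[:k] = a[:k] - kappa * a[:k][::-1]
        a, e = a_new, e * (1 - kappa ** 2)
    return a, e

# Synthetic AR(2) signal: x[n] = 0.75 x[n-1] - 0.5 x[n-2] + w[n]
rng = np.random.default_rng(5)
n = 20000
w = rng.standard_normal(n)
x = np.zeros(n)
for i in range(2, n):
    x[i] = 0.75 * x[i - 1] - 0.5 * x[i - 2] + w[i]

r = np.array([x[: n - k] @ x[k:] / n for k in range(3)])  # biased autocorrelation
a, e = levinson_durbin(r, 2)
# The prewhitening filter is [1, -a[0], -a[1]]; its residual should be near-white.
resid = x[2:] - a[0] * x[1:-1] - a[1] * x[:-2]
```

On this signal the recursion recovers coefficients close to (0.75, -0.5), and the residual's lag-1 correlation is near zero, which is exactly the whiteness property the identification step relies on.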
Active vibration control of a full scale aircraft wing using a reconfigurable controller
NASA Astrophysics Data System (ADS)
Prakash, Shashikala; Renjith Kumar, T. G.; Raja, S.; Dwarakanathan, D.; Subramani, H.; Karthikeyan, C.
2016-01-01
This work highlights the design of a reconfigurable Active Vibration Control (AVC) system for aircraft structures using adaptive techniques. The AVC system, with multichannel capability, is realized using the Filtered-X Least Mean Square (FxLMS) algorithm on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) platform in Very High Speed Integrated Circuit Hardware Description Language (VHDL). The HDL design is based on a Finite State Machine (FSM) model with floating-point Intellectual Property (IP) cores for the arithmetic operations. The use of an FPGA makes it possible to modify the system parameters even at runtime, depending on changes in the user's requirements. The locations of the control actuators are optimized based on a dynamic modal strain approach using a genetic algorithm (GA). The developed system has been successfully deployed for AVC testing of the full-scale wing of an all-composite two-seater transport aircraft. Several closed-loop configurations, including single-channel and multi-channel control, have been tested. The experimental results from the studies presented here are very encouraging and demonstrate the usefulness of the system's reconfigurability for real-time applications.
NASA Technical Reports Server (NTRS)
Peterson, Harold; Koshak, William J.
2009-01-01
An algorithm has been developed to estimate the altitude distribution of one-meter lightning channel segments. The algorithm is required as part of a broader objective of improving the lightning NOx emission inventories of both regional air quality and global chemistry/climate models. The algorithm was tested on and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). The accuracy of the algorithm was characterized by comparing algorithm output with plots of individual discharges whose lengths were computed by hand; VHF source amplitude thresholding and smoothing were applied to optimize the results. Several thousand lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons and analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all flashes, for ground flashes, and for cloud flashes. One-meter channel segment altitude distributions were also obtained for the different seasons.
A hierarchical exact accelerated stochastic simulation algorithm
NASA Astrophysics Data System (ADS)
Orendorff, David; Mjolsness, Eric
2012-12-01
A new algorithm, "HiER-leap" (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled "blocks" and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms.
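For context, the baseline stochastic simulation algorithm (Gillespie's direct method) that ER-leap and HiER-leap accelerate can be sketched on a two-channel birth-death system; the rates and times here are illustrative:

```python
import random

random.seed(6)

def gillespie(x0, rates, t_end):
    """Direct-method SSA for a birth-death system with two reaction channels:
    production at rate k1 and degradation at rate k2 * x. This per-event
    simulation is the baseline that leaping methods speed up."""
    k1, k2 = rates
    t, x = 0.0, x0
    while True:
        a1, a2 = k1, k2 * x            # channel propensities
        a0 = a1 + a2
        t += random.expovariate(a0)    # exponential time to the next reaction
        if t >= t_end:
            return x
        if random.random() * a0 < a1:  # pick a channel proportional to propensity
            x += 1
        else:
            x -= 1

# The stationary mean of this system is k1 / k2 = 50.
samples = [gillespie(0, (10.0, 0.2), 100.0) for _ in range(200)]
mean_x = sum(samples) / len(samples)
```

Every reaction event costs one loop iteration here, which is why methods that leap over many events per step (while preserving exactness, as HiER-leap does) pay off on systems with many reaction channels.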
A new algorithm for ECG interference removal from single channel EMG recording.
Yazdani, Shayan; Azghani, Mahmood Reza; Sedaaghi, Mohammad Hossein
2017-09-01
This paper presents a new method to remove electrocardiogram (ECG) interference from the electromyogram (EMG). This interference occurs during EMG acquisition from trunk muscles. The proposed algorithm employs the progressive image denoising (PID) algorithm and ensemble empirical mode decomposition (EEMD) to remove this type of interference. PID is a very recent method used for denoising digital images corrupted with white Gaussian noise; it detects white Gaussian noise by deterministic annealing. To the best of our knowledge, PID has never been used before for EMG and ECG separation or in other 1-D signal denoising applications. We use it based on the fact that the amplitude of the EMG signal can be modeled as white Gaussian noise shaped by a filter with time-variant properties. The proposed algorithm has been compared to other well-known methods such as HPF, EEMD-ICA, Wavelet-ICA, and PID. The results show that the proposed algorithm outperforms the others on the basis of the three evaluation criteria used in this paper: normalized mean square error, signal-to-noise ratio, and Pearson correlation.
Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography
Hládek, Ľuboš; Porr, Bernd; Brimijoin, W Owen
2018-01-01
The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The estimated eye positions of the new algorithm were still noisy; however, the performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for light-weight and portable horizontal eye gaze angle estimation suitable for a broad range of applications, for instance, steering the directivity of hearing-aid microphones in the direction of the user's eye gaze. PMID:29304120
Algorithmic complexity of quantum capacity
NASA Astrophysics Data System (ADS)
Oskouei, Samad Khabbazi; Mancini, Stefano
2018-04-01
We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.
Coordinated Beamforming for MISO Interference Channel: Complexity Analysis and Efficient Algorithms
2010-01-01
The cyclic coordinate descent algorithm is also known as the nonlinear Gauss-Seidel iteration [32]. It can be shown that the BB gradient projection direction is always a descent direction; the R-linear convergence of the BB method has been established. Convergence (to a KKT solution) of the inexact pricing algorithm for the MISO interference channel is of particular interest.
Combined Dust Detection Algorithm by Using MODIS Infrared Channels over East Asia
NASA Technical Reports Server (NTRS)
Park, Sang Seo; Kim, Jhoon; Lee, Jaehwa; Lee, Sukjo; Kim, Jeong Soo; Chang, Lim Seok; Ou, Steve
2014-01-01
A new dust detection algorithm is developed by combining the results of multiple dust detection methods using IR channels onboard the MODerate resolution Imaging Spectroradiometer (MODIS). The Brightness Temperature Difference (BTD) between two wavelength channels has been widely used in previous dust detection methods. However, BTD methods have limitations in identifying the offset values of the BTD needed to discriminate clear-sky areas. The current algorithm overcomes the disadvantages of previous dust detection methods by considering the Brightness Temperature Ratio (BTR) values of the dual wavelength channels with a 30-day composite, the optical properties of the dust particles, the variability of surface properties, and cloud contamination. As a result, the current algorithm shows improvements in detecting dust-loaded regions over land during daytime. Finally, the confidence index of the current dust algorithm is given for 10 × 10 pixel blocks of the MODIS observations. From January to June 2006, the results of the current algorithm are within 64 to 81% of those found using the fine mode fraction (FMF) and aerosol index (AI) from MODIS and the Ozone Monitoring Instrument (OMI). The agreement between the results of the current algorithm and the OMI AI over non-polluted land ranges from 60 to 67%, to avoid errors due to anthropogenic aerosol. In addition, the developed algorithm shows statistically significant results at four AErosol RObotic NETwork (AERONET) sites in East Asia.
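The core BTD test can be sketched as follows. The brightness temperatures, thresholds, and the exact form of the BTR criterion here are illustrative assumptions rather than the paper's tuned values:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical brightness temperatures (K) for MODIS-like 11 um and 12 um
# channels. Mineral dust typically drives BT11 - BT12 negative, while clear
# sky keeps it positive; the scene and offsets below are synthetic.
shape = (50, 50)
bt11 = 285.0 + 5.0 * rng.standard_normal(shape)
bt12 = bt11 - 1.0 + 0.5 * rng.standard_normal(shape)   # clear-sky-like: BTD > 0
dust_region = (slice(10, 30), slice(10, 30))
bt12[dust_region] = bt11[dust_region] + 1.5            # dust-loaded: BTD < 0

btd = bt11 - bt12          # brightness temperature difference
btr = bt11 / bt12          # brightness temperature ratio, as in the combined test
dust_mask = (btd < -0.5) & (btr < 1.0)

fraction_flagged = dust_mask[dust_region].mean()       # hit rate inside the plume
outside = dust_mask.copy()
outside[dust_region] = False
false_rate = outside.mean()                            # false alarms elsewhere
```

The combined algorithm described above layers additional tests (30-day composites, surface variability, cloud screening) on this basic per-pixel discrimination to handle the offset ambiguity of BTD alone.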
Ghaderi, Parviz; Marateb, Hamid R
2017-07-01
The aim of this study was to reconstruct low-quality high-density surface EMG (HDsEMG) signals, recorded with 2-D electrode arrays, using image inpainting and surface reconstruction methods. It is common for some fraction of the electrodes to provide low-quality signals. We used a variety of image inpainting methods based on partial differential equations (PDEs), along with surface reconstruction methods, to reconstruct the time-averaged or instantaneous muscle activity maps of the outlier channels. Two novel reconstruction algorithms were also proposed. HDsEMG signals were recorded from the biceps femoris and brachial biceps muscles during low-to-moderate-level isometric contractions, and some of the channels (5-25%) were randomly marked as outliers. The root-mean-square error (RMSE) between the original and reconstructed maps was then calculated. Overall, the proposed Poisson and wave PDE methods outperformed the other methods (average RMSE 8.7 ± 6.1 μVrms and 7.5 ± 5.9 μVrms) for the time-averaged single-differential and monopolar map reconstruction, respectively. Biharmonic spline, the discrete cosine transform, and the Poisson PDE outperformed the other methods for the instantaneous map reconstruction. The running times of the proposed Poisson and wave PDE methods, implemented using a vectorization package, were 4.6 ± 5.7 ms and 0.6 ± 0.5 ms, respectively, for each signal epoch or time sample in each channel. The proposed reconstruction algorithms could be promising new tools for reconstructing muscle activity maps in real-time applications, and proper reconstruction methods could recover the information of low-quality channels in HDsEMG recordings.
Shared Memory Parallelization of an Implicit ADI-type CFD Code
NASA Technical Reports Server (NTRS)
Hauser, Th.; Huang, P. G.
1999-01-01
A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressing cache-based computer architectures are described, and performance measurements for the single- and multiprocessor implementations are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared-memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a friction Reynolds number Re_τ = 180 has shown good agreement with existing data.
A motion-classification strategy based on sEMG-EEG signal combination for upper-limb amputees.
Li, Xiangxin; Samuel, Oluwarotimi Williams; Zhang, Xu; Wang, Hui; Fang, Peng; Li, Guanglin
2017-01-07
Most of the modern motorized prostheses are controlled with the surface electromyography (sEMG) recorded on the residual muscles of amputated limbs. However, the residual muscles are usually limited, especially after above-elbow amputations, which would not provide enough sEMG for the control of prostheses with multiple degrees of freedom. Signal fusion is a possible approach to solve the problem of insufficient control commands, where some non-EMG signals are combined with sEMG signals to provide sufficient information for motion intension decoding. In this study, a motion-classification method that combines sEMG and electroencephalography (EEG) signals were proposed and investigated, in order to improve the control performance of upper-limb prostheses. Four transhumeral amputees without any form of neurological disease were recruited in the experiments. Five motion classes including hand-open, hand-close, wrist-pronation, wrist-supination, and no-movement were specified. During the motion performances, sEMG and EEG signals were simultaneously acquired from the skin surface and scalp of the amputees, respectively. The two types of signals were independently preprocessed and then combined as a parallel control input. Four time-domain features were extracted and fed into a classifier trained by the Linear Discriminant Analysis (LDA) algorithm for motion recognition. In addition, channel selections were performed by using the Sequential Forward Selection (SFS) algorithm to optimize the performance of the proposed method. The classification performance achieved by the fusion of sEMG and EEG signals was significantly better than that obtained by single signal source of either sEMG or EEG. An increment of more than 14% in classification accuracy was achieved when using a combination of 32-channel sEMG and 64-channel EEG. 
Furthermore, based on the SFS algorithm, two optimized electrode arrangements (10-channel sEMG + 10-channel EEG, 10-channel sEMG + 20-channel EEG) were obtained with classification accuracies of 84.2% and 87.0%, respectively, which were about 7.2% and 10% higher than the accuracy achieved using only the 32-channel sEMG input. This study demonstrated the feasibility of fusing sEMG and EEG signals towards improving motion classification accuracy for above-elbow amputees, which might enhance the control performance of multifunctional myoelectric prostheses in clinical applications. The study was approved by the ethics committee of the Institutional Review Board of the Shenzhen Institutes of Advanced Technology, and the reference number is SIAT-IRB-150515-H0077.
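The four time-domain features used for the LDA classifier are not named in the abstract; a common choice in the myoelectric-control literature is mean absolute value (MAV), waveform length (WL), zero crossings (ZC), and slope-sign changes (SSC). A minimal sketch of such a feature extractor, under that assumption (not necessarily the authors' exact feature set):

```python
def td_features(x, eps=0.01):
    """Four time-domain features commonly used in myoelectric control:
    mean absolute value (MAV), waveform length (WL),
    zero crossings (ZC) and slope-sign changes (SSC).
    eps is a noise threshold for the ZC/SSC counts."""
    n = len(x)
    mav = sum(abs(v) for v in x) / n
    wl = sum(abs(x[i + 1] - x[i]) for i in range(n - 1))
    zc = sum(1 for i in range(n - 1)
             if x[i] * x[i + 1] < 0 and abs(x[i] - x[i + 1]) > eps)
    ssc = sum(1 for i in range(1, n - 1)
              if (x[i] - x[i - 1]) * (x[i] - x[i + 1]) > 0
              and (abs(x[i] - x[i - 1]) > eps or abs(x[i] - x[i + 1]) > eps))
    return [mav, wl, zc, ssc]
```

These four numbers per analysis window (per channel) would then form the input vector for LDA training.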
Detection of single and multilayer clouds in an artificial neural network approach
NASA Astrophysics Data System (ADS)
Sun-Mack, Sunny; Minnis, Patrick; Smith, William L.; Hong, Gang; Chen, Yan
2017-10-01
Determining whether a scene observed with a satellite imager is composed of a thin cirrus over a water cloud or thick cirrus contiguous with underlying layers of ice and water clouds is often difficult because of similarities in the observed radiance values. In this paper an artificial neural network (ANN) algorithm, employing several Aqua MODIS infrared channels and the retrieved total cloud visible optical depth, is trained to detect multilayer ice-over-water cloud systems as identified by matched April 2009 CloudSat and CALIPSO (CC) data. The CC lidar and radar profiles provide the vertical structure that serves as output truth for a multilayer ANN, or MLANN, algorithm. Applying the trained MLANN to independent July 2008 MODIS data resulted in a combined ML and single layer hit rate of 75% (72%) for nonpolar regions during the day (night). The results are comparable to or more accurate than currently available methods. Areas of improvement are identified and will be addressed in future versions of the MLANN.
Super-resolution for imagery from integrated microgrid polarimeters.
Hardie, Russell C; LeMaster, Daniel A; Ratliff, Bradley M
2011-07-04
Imagery from microgrid polarimeters is obtained by using a mosaic of pixel-wise micropolarizers on a focal plane array (FPA). Each distinct polarization image is obtained by subsampling the full FPA image. Thus, the effective pixel pitch for each polarization channel is increased and the sampling frequency is decreased. As a result, aliasing artifacts from such undersampling can corrupt the true polarization content of the scene. Here we present the first multi-channel multi-frame super-resolution (SR) algorithms designed specifically for the problem of image restoration in microgrid polarization imagers. These SR algorithms can be used to address aliasing and other degradations, without sacrificing field of view or compromising optical resolution with an anti-aliasing filter. The new SR methods are designed to exploit correlation between the polarimetric channels. One of the new SR algorithms uses a form of regularized least squares and has an iterative solution. The other is based on the faster adaptive Wiener filter SR method. We demonstrate that the new multi-channel SR algorithms are capable of providing significant enhancement of polarimetric imagery and that they outperform their independent channel counterparts.
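The regularized least-squares SR formulation is not spelled out in the abstract; as an illustration only, a generic Tikhonov-regularized least-squares solve of the form x = argmin ||Ax - y||^2 + lam ||x||^2 (the paper's actual multi-channel regularizer and iterative solver will differ) looks like:

```python
import numpy as np

def regularized_ls(A, y, lam=0.1):
    """Tikhonov-regularized least-squares estimate
    x = argmin ||A x - y||^2 + lam ||x||^2,
    solved in closed form via the normal equations.
    A models blur + subsampling; y stacks the observed frames."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

In a microgrid SR setting, A would encode the optics and the per-channel subsampling pattern, and cross-channel correlation would enter through the regularization term.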
Rainfall Estimation over the Nile Basin using an Adapted Version of the SCaMPR Algorithm
NASA Astrophysics Data System (ADS)
Habib, E. H.; Kuligowski, R. J.; Elshamy, M. E.; Ali, M. A.; Haile, A.; Amin, D.; Eldin, A.
2011-12-01
Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt, since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). This study reports on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application over the Nile Basin. The algorithm uses a set of rainfall predictors from multi-spectral infrared (IR) cloud-top observations and self-calibrates them to a set of predictands from microwave (MW) rain-rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels recently available to NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as SSM/I, SSMIS, AMSU, AMSR-E, and TMI. The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration with continuous updates of coefficients as new MW rain rates arrive, and calibration using static coefficients derived from IR-MW data from past observations.
We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms, such as the Tropical Rainfall Measuring Mission (TRMM) 3B42 product and the National Oceanic and Atmospheric Administration Climate Prediction Center (NOAA-CPC) CMORPH product. The algorithm has several potential future applications, such as improving the accuracy of hydrologic forecasting models over the Nile Basin, and using the enhanced rainfall datasets and better-calibrated hydrologic models to assess the impacts of climate change on the region's water availability.
Lok, U-Wai; Li, Pai-Chi
2016-03-01
Graphics processing unit (GPU)-based software beamforming has advantages over hardware-based beamforming of easier programmability and a faster design cycle, since complicated imaging algorithms can be efficiently programmed and modified. However, the need for a high data rate when transferring ultrasound radio-frequency (RF) data from the hardware front end to the software back end limits the real-time performance. Data compression methods can be applied to the hardware front end to mitigate the data transfer issue. Nevertheless, most decompression processes cannot be performed efficiently on a GPU, thus becoming another bottleneck of the real-time imaging. Moreover, lossless (or nearly lossless) compression is desirable to avoid image quality degradation. In a previous study, we proposed a real-time lossless compression-decompression algorithm and demonstrated that it can reduce the overall processing time because the reduction in data transfer time is greater than the computation time required for compression/decompression. This paper analyzes the lossless compression method in order to understand the factors limiting the compression efficiency. Based on the analytical results, a nearly lossless compression is proposed to further enhance the compression efficiency. The proposed method comprises a transformation coding method involving modified lossless compression that aims at suppressing amplitude data. The simulation results indicate that the compression ratio (CR) of the proposed approach can be enhanced from nearly 1.8 to 2.5, thus allowing a higher data acquisition rate at the front end. The spatial and contrast resolutions with and without compression were almost identical, and the process of decompressing the data of a single frame on a GPU took only several milliseconds. Moreover, the proposed method has been implemented in a 64-channel system that we built in-house to demonstrate the feasibility of the proposed algorithm in a real system. 
It was found that channel data from a 64-channel system can be transferred using the standard USB 3.0 interface in most practical imaging applications.
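The abstract does not detail the lossless coding stage; first-difference (delta) coding is a typical building block for compressing locally correlated RF channel data, and illustrates the general idea that transforming samples into smaller-magnitude values raises the achievable compression ratio. A generic round-trip sketch (not the authors' algorithm):

```python
def delta_encode(samples):
    """First-difference (delta) encoding: RF channel data is locally
    correlated, so differences need fewer bits than raw samples."""
    out = [samples[0]]
    out += [samples[i] - samples[i - 1] for i in range(1, len(samples))]
    return out

def delta_decode(deltas):
    """Exact inverse of delta_encode (lossless round trip)."""
    x = [deltas[0]]
    for d in deltas[1:]:
        x.append(x[-1] + d)
    return x
```

An entropy coder applied to the small deltas, rather than the raw samples, is what yields the reduction in transfer bandwidth.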
Demonstration of a small programmable quantum computer with atomic qubits.
Debnath, S; Linke, N M; Figgatt, C; Landsman, K A; Wright, K; Monroe, C
2016-08-04
Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.
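The Bernstein-Vazirani algorithm mentioned above can be simulated classically with a statevector. The sketch below recovers a hidden bit string s from a single application of the phase oracle f(x) = s.x mod 2: Hadamards on all qubits, one oracle call, Hadamards again, then a measurement that yields s deterministically.

```python
import numpy as np

def hadamard_all(state, n):
    """Apply a Hadamard to every qubit (fast Walsh-Hadamard transform)."""
    state = state.copy()
    for q in range(n):
        step = 1 << q
        for i in range(0, 1 << n, step << 1):
            for j in range(i, i + step):
                a, b = state[j], state[j + step]
                state[j] = (a + b) / np.sqrt(2)
                state[j + step] = (a - b) / np.sqrt(2)
    return state

def bernstein_vazirani(s_bits):
    """Statevector simulation of Bernstein-Vazirani: one oracle call
    reveals the hidden string s, where f(x) = s.x mod 2."""
    n = len(s_bits)
    s = int("".join(map(str, s_bits)), 2)
    state = np.zeros(1 << n)
    state[0] = 1.0
    state = hadamard_all(state, n)          # uniform superposition
    for x in range(1 << n):                 # phase oracle: (-1)^(s.x)
        if bin(x & s).count("1") % 2:
            state[x] = -state[x]
    state = hadamard_all(state, n)          # interference maps back to |s>
    return [int(b) for b in format(int(np.argmax(np.abs(state))), f"0{n}b")]
```

A classical algorithm needs n oracle queries to learn an n-bit string; the quantum version needs one, which is why it serves as a standard benchmark on small devices.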
NASA Astrophysics Data System (ADS)
Guegan, Loic; Murad, Nour Mohammad; Bonhommeau, Sylvain
2018-03-01
This paper deals with the modeling of the over-sea radio channel, with the aim of localizing sea turtles off the coast of Reunion Island and around Europa Island in the Mozambique Channel. To model this radio channel, a framework measurement protocol is proposed. The measured over-sea channel is integrated into the localization algorithm to estimate the turtle trajectory using a Power of Arrival (PoA) technique, with GPS localization as a reference. Moreover, cross-correlation is used to characterize the over-sea propagation channel. First measurements of the radio channel on the Reunion Island coast, combined with the PoA algorithm, show an error of 18 m for 45% of the approximated points.
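A PoA technique estimates range from received power. A minimal log-distance path-loss inversion is sketched below; the reference power and path-loss exponent are assumed illustrative values, and in practice would be fit to the measured over-sea channel:

```python
import math

def poa_distance(rssi_dbm, p0_dbm=-40.0, n=2.0, d0=1.0):
    """Invert the log-distance path-loss model
    RSSI = p0 - 10 n log10(d / d0) to estimate range d (meters).
    p0_dbm: received power at reference distance d0; n: path-loss
    exponent (both assumed here, to be calibrated from measurements)."""
    return d0 * 10 ** ((p0_dbm - rssi_dbm) / (10 * n))
```

Ranges from several receivers obtained this way can then be combined (e.g., by least squares) into a position estimate to compare against GPS.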
Smartphone-Based Indoor Localization with Bluetooth Low Energy Beacons
Zhuang, Yuan; Yang, Jun; Li, You; Qi, Longning; El-Sheimy, Naser
2016-01-01
Indoor wireless localization using Bluetooth Low Energy (BLE) beacons has attracted considerable attention after the release of the BLE protocol. In this paper, we propose an algorithm that uses the combination of channel-separate polynomial regression model (PRM), channel-separate fingerprinting (FP), outlier detection and extended Kalman filtering (EKF) for smartphone-based indoor localization with BLE beacons. The proposed algorithm uses FP and PRM to estimate the target's location and the distances between the target and BLE beacons, respectively. We compare the performance of distance estimation using a separate PRM for each of the three advertisement channels (the separate strategy) with that of an aggregate PRM generated by combining information from all channels (the aggregate strategy). The FP-based location estimation results of the separate and aggregate strategies are also compared. It was found that the separate strategy provides higher accuracy; thus, it is preferable to adopt PRM and FP for each BLE advertisement channel separately. Furthermore, to enhance the robustness of the algorithm, a two-level outlier detection mechanism is designed. Distance and location estimates obtained from PRM and FP are passed to the first outlier detection stage to generate improved distance estimates for the EKF. After the EKF process, a second outlier detection algorithm based on statistical testing is performed to remove the remaining outliers. The proposed algorithm was evaluated in various field experiments. Results show that the proposed algorithm achieved an accuracy of <2.56 m at 90% of the time with dense deployment of BLE beacons (1 beacon per 9 m), which is 35.82% better than the <3.99 m of the Propagation Model (PM) + EKF algorithm and 15.77% more accurate than the <3.04 m of the FP + EKF algorithm.
With sparse deployment (1 beacon per 18 m), the proposed algorithm achieves an accuracy of <3.88 m at 90% of the time, which is 49.58% more accurate than the <8.00 m of the PM + EKF algorithm and 21.41% better than the <4.94 m of the FP + EKF algorithm. Therefore, the proposed algorithm is especially useful for improving localization accuracy in environments with sparse beacon deployment. PMID:27128917
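A channel-separate PRM of the kind described can be sketched as a polynomial fit of RSSI against log-distance, one fit per advertisement channel, inverted numerically to turn a measured RSSI into a distance estimate. This is illustrative only; the paper's model order and inversion method may differ:

```python
import numpy as np

def fit_prm(log_d, rssi, order=3):
    """Fit one polynomial regression model (PRM) per BLE advertisement
    channel: RSSI as a polynomial in log10(distance)."""
    return np.polyfit(log_d, rssi, order)

def prm_distance(coeffs, rssi):
    """Invert the PRM numerically: choose the log-distance whose
    predicted RSSI is closest to the measured value."""
    grid = np.linspace(-1, 2, 301)            # 0.1 m .. 100 m
    pred = np.polyval(coeffs, grid)
    return 10 ** grid[np.argmin(np.abs(pred - rssi))]
```

Per-channel distance estimates produced this way would be the measurements fed into the EKF after the first outlier-detection stage.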
NASA Astrophysics Data System (ADS)
Zeng, Rongping; Badano, Aldo; Myers, Kyle J.
2017-04-01
We showed in our earlier work that the choice of reconstruction methods does not affect the optimization of DBT acquisition parameters (angular span and number of views) using simulated breast phantom images in detecting lesions with a channelized Hotelling observer (CHO). In this work we investigate whether the model-observer based conclusion is valid when using humans to interpret images. We used previously generated DBT breast phantom images and recruited human readers to find the optimal geometry settings associated with two reconstruction algorithms, filtered back projection (FBP) and simultaneous algebraic reconstruction technique (SART). The human reader results show that image quality trends as a function of the acquisition parameters are consistent between FBP and SART reconstructions. The consistent trends confirm that the optimization of DBT system geometry is insensitive to the choice of reconstruction algorithm. The results also show that humans perform better in SART reconstructed images than in FBP reconstructed images. In addition, we applied CHOs with three commonly used channel models, Laguerre-Gauss (LG) channels, square (SQR) channels and sparse difference-of-Gaussian (sDOG) channels. We found that LG channels predict human performance trends better than SQR and sDOG channel models for the task of detecting lesions in tomosynthesis backgrounds. Overall, this work confirms that the choice of reconstruction algorithm is not critical for optimizing DBT system acquisition parameters.
On base station cooperation using statistical CSI in jointly correlated MIMO downlink channels
NASA Astrophysics Data System (ADS)
Zhang, Jun; Jiang, Bin; Jin, Shi; Gao, Xiqi; Wong, Kai-Kit
2012-12-01
This article studies the transmission of a single cell-edge user's signal using statistical channel state information at cooperative base stations (BSs) with a general jointly correlated multiple-input multiple-output (MIMO) channel model. We first present an optimal scheme to maximize the ergodic sum capacity with per-BS power constraints, revealing that the transmitted signals of all BSs are mutually independent and the optimum transmit directions for each BS align with the eigenvectors of the BS's own transmit correlation matrix of the channel. Then, we employ matrix permanents to derive a closed-form tight upper bound for the ergodic sum capacity. Based on these results, we develop a low-complexity power allocation solution using convex optimization techniques and a simple iterative water-filling algorithm (IWFA) for power allocation. Finally, we derive a necessary and sufficient condition for which a beamforming approach achieves capacity for all BSs. Simulation results demonstrate that the upper bound of ergodic sum capacity is tight and the proposed cooperative transmission scheme increases the downlink system sum capacity considerably.
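The iterative water-filling algorithm builds on the classic single-link water-filling solution p_k = max(0, mu - 1/g_k), where g_k are effective channel gains and the water level mu is set so the powers meet the power constraint. A sketch of that inner solution (the per-BS iteration of the paper is not reproduced here):

```python
import numpy as np

def waterfill(gains, p_total):
    """Water-filling power allocation over parallel channels:
    p_k = max(0, mu - 1/g_k), with mu chosen so sum(p) = p_total.
    Found by deactivating the weakest channels until all active
    powers are nonnegative."""
    g = np.asarray(gains, float)
    order = np.argsort(-g)                  # strongest channel first
    gs = g[order]
    for k in range(len(gs), 0, -1):
        mu = (p_total + np.sum(1.0 / gs[:k])) / k
        p = mu - 1.0 / gs[:k]
        if p.min() >= 0:
            out = np.zeros(len(g))
            out[order[:k]] = p
            return out
    return np.zeros(len(g))
```

With a per-BS power constraint, each BS would run this allocation over the eigenvalues of its own transmit correlation matrix, iterating until the sum-capacity bound converges.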
Reduced-rank technique for joint channel estimation in TD-SCDMA systems
NASA Astrophysics Data System (ADS)
Kamil Marzook, Ali; Ismail, Alyani; Mohd Ali, Borhanuddin; Sali, Adawati; Khatun, Sabira
2013-02-01
In time division-synchronous code division multiple access systems, increasing the system capacity by inserting the largest possible number of users in one time slot (TS) requires additional estimation processes to estimate the joint channel matrix for the whole system. The increase in the number of channel parameters due to the increase in the number of users in one TS directly affects the precision of the estimator. This article presents a novel low-complexity channel estimation method that relies on reducing the rank order of the total channel matrix H. The proposed method exploits the rank deficiency of H to reduce the number of parameters that characterise this matrix. The adopted reduced-rank technique is based on the truncated singular value decomposition algorithm. The algorithms for reduced-rank joint channel estimation (JCE) are derived and compared against traditional full-rank JCEs: least squares (LS, or Steiner) and enhanced (LS or MMSE) algorithms. Simulation results for the normalised mean square error showed the superiority of the reduced-rank estimators. In addition, the channel impulse responses found by the reduced-rank estimator for all active users offer considerable performance improvement over the conventional estimator along the channel window length.
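The truncated-SVD reduction keeps only the r dominant singular values of the joint channel matrix, so the estimator works with far fewer parameters. A minimal sketch:

```python
import numpy as np

def truncated_svd(H, r):
    """Rank-r approximation of the joint channel matrix H via
    truncated SVD: keep only the r largest singular values."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```

By the Eckart-Young theorem this is the best rank-r approximation of H in the least-squares sense, which is what justifies discarding the small singular values that mostly carry noise.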
Hue-preserving and saturation-improved color histogram equalization algorithm.
Song, Ki Sun; Kang, Hee; Kang, Moon Gi
2016-06-01
In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, but it sometimes produces undesirable results due to its block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method within the local HE method to avoid these artifacts while enhancing both global and local contrast. There are two common ways to apply a CE algorithm to color images: processing the luminance channel only, or processing each color channel independently. However, these approaches can produce excessive or reduced saturation and color degradation. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of the ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving hue, and produces better performance than existing methods in terms of objective evaluation metrics.
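The idea of preserving the ratios between channels can be illustrated by equalizing the intensity channel and then scaling R, G, and B by the same per-pixel gain, which leaves inter-channel ratios (and hence hue) unchanged wherever no clipping occurs. This is a simplified sketch, not the proposed channel-adaptive method:

```python
import numpy as np

def hue_preserving_he(rgb):
    """Equalize the mean-intensity channel, then scale R, G, B by the
    same per-pixel ratio so channel ratios (hue) are preserved.
    rgb: float array in [0, 1] with shape (H, W, 3)."""
    intensity = rgb.mean(axis=2)
    levels = np.round(intensity * 255).astype(int)
    hist = np.bincount(levels.ravel(), minlength=256)
    cdf = np.cumsum(hist) / levels.size        # histogram equalization map
    eq = cdf[levels]
    ratio = eq / np.maximum(intensity, 1e-6)   # common gain per pixel
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```

The clipping step is exactly where the naive approach loses hue, which motivates the paper's channel-adaptive treatment.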
Modulation frequency discrimination with single and multiple channels in cochlear implant users
Galvin, John J.; Oba, Sandy; Başkent, Deniz; Fu, Qian-Jie
2015-01-01
Temporal envelope cues convey important speech information for cochlear implant (CI) users. Many studies have explored CI users’ single-channel temporal envelope processing. However, in clinical CI speech processors, temporal envelope information is processed by multiple channels. Previous studies have shown that amplitude modulation frequency discrimination (AMFD) thresholds are better when temporal envelopes are delivered to multiple rather than single channels. In clinical fitting, current levels on single channels must often be reduced to accommodate multi-channel loudness summation. As such, it is unclear whether the multi-channel advantage in AMFD observed in previous studies was due to coherent envelope information distributed across the cochlea or to greater loudness associated with multi-channel stimulation. In this study, single- and multi-channel AMFD thresholds were measured in CI users. Multi-channel component electrodes were either widely or narrowly spaced to vary the degree of overlap between neural populations. The reference amplitude modulation (AM) frequency was 100 Hz, and coherent modulation was applied to all channels. In Experiment 1, single- and multi-channel AMFD thresholds were measured at similar loudness. In this case, current levels on component channels were higher for single- than for multi-channel AM stimuli, and the modulation depth was approximately 100% of the perceptual dynamic range (i.e., between threshold and maximum acceptable loudness). Results showed no significant difference in AMFD thresholds between similarly loud single- and multi-channel modulated stimuli. In Experiment 2, single- and multi-channel AMFD thresholds were compared at substantially different loudness. 
In this case, current levels on component channels were the same for single- and multi-channel stimuli (“summation-adjusted” current levels) and the same range of modulation (in dB) was applied to the component channels for both single- and multi-channel testing. With the summation-adjusted current levels, loudness was lower with single than with multiple channels, and the AM depth resulted in substantial stimulation below single-channel audibility, thereby reducing the perceptual range of AM. Results showed that AMFD thresholds were significantly better with multiple channels than with any of the single component channels. There was no significant effect of the distribution of electrodes on multi-channel AMFD thresholds. The results suggest that increased loudness due to multi-channel summation may contribute to the multi-channel advantage in AMFD, and that overall loudness may matter more than the distribution of envelope information in the cochlea. PMID:25746914
Orientation independence of single-vacancy and single-ion permeability ratios.
McGill, P; Schumaker, M F
1995-01-01
Single-vacancy models have been proposed as open channel permeation mechanisms for K+ channels. Single-ion models have been used to describe permeation through Na+ channels. This paper demonstrates that these models have a distinctive symmetry property. Their permeability ratios, measured under biionic conditions, are independent of channel orientation when the reversal potential is zero. This symmetry is a property of general m-site single-vacancy channels, m-site shaking-stack channels, as well as m-site single-ion channels. An experimental finding that the permeability ratios of a channel did not have this symmetry would provide evidence that a single-vacancy or single-ion model is an incorrect or incomplete description of permeation. PMID:7669913
NASA Astrophysics Data System (ADS)
Ling, Jun
Achieving reliable underwater acoustic communications (UAC) has long been recognized as a challenging problem owing to the scarce bandwidth available and the reverberant spread in both time and frequency domains. To pursue high data rates, we consider a multi-input multi-output (MIMO) UAC system, and our focus is placed on two main issues regarding a MIMO UAC system: (1) channel estimation, which involves the design of the training sequences and the development of a reliable channel estimation algorithm, and (2) symbol detection, which requires interference cancelation schemes due to simultaneous transmission from multiple transducers. To enhance channel estimation performance, we present a cyclic approach for designing training sequences with good auto- and cross-correlation properties, and a channel estimation algorithm called the iterative adaptive approach (IAA). Sparse channel estimates can be obtained by combining IAA with the Bayesian information criterion (BIC). Moreover, we present sparse learning via iterative minimization (SLIM) and demonstrate that SLIM gives similar performance to IAA but at a much lower computational cost. Furthermore, an extension of the SLIM algorithm is introduced to estimate the sparse and frequency modulated acoustic channels. The extended algorithm is referred to as generalization of SLIM (GoSLIM). Regarding symbol detection, a linear minimum mean-squared error based detection scheme, called RELAX-BLAST, which is a combination of vertical Bell Labs layered space-time (V-BLAST) algorithm and the cyclic principle of the RELAX algorithm, is presented and it is shown that RELAX-BLAST outperforms V-BLAST. We show that RELAX-BLAST can be implemented efficiently by making use of the conjugate gradient method and diagonalization properties of circulant matrices. This fast implementation approach requires only simple fast Fourier transform operations and facilitates parallel implementations. 
The effectiveness of the proposed MIMO schemes is verified by both computer simulations and experimental results obtained by analyzing the measurements acquired in multiple in-water experiments.
Wanting Wang; John J. Qu; Xianjun Hao; Yongqiang Liu; William T. Sommers
2006-01-01
Traditional fire detection algorithms mainly rely on hot spot detection using thermal infrared (TIR) channels with fixed or contextual thresholds. Three solar reflectance channels (0.65 μm, 0.86 μm, and 2.1 μm) were recently adopted into the MODIS version 4 contextual algorithm to improve the active fire detection. In the southeastern United...
NASA Astrophysics Data System (ADS)
Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena
2017-02-01
In real life, multi-objective engineering design problems are tough and time-consuming optimization problems because of their high degree of nonlinearity, complexity, and inhomogeneity. Nature-inspired multi-objective optimization algorithms are becoming popular for solving such problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization algorithms. Simulations show that the proposed parallel hybrid multi-objective Bat algorithm is more efficient than the original multi-objective Bat algorithm and other existing algorithms at generating OGRs for optical WDM systems. For generating OGRs, PHMOBA has a higher convergence and success rate than the original MOBA. The efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, versus 85% for the original MOBA. Finally, implications for further research are also discussed.
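A Golomb ruler is a set of integer marks whose pairwise differences are all distinct; used as WDM channel positions, this keeps FWM mixing products from landing on active channels. Checking the defining property is straightforward:

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """A Golomb ruler has all pairwise mark differences distinct.
    An optimal Golomb ruler (OGR) is additionally the shortest ruler
    for its number of marks."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))
```

For example, [0, 1, 4, 6] is the well-known optimal 4-mark ruler, while [0, 1, 2, 4] repeats the difference 1 and therefore fails. A check like this would serve as the feasibility test inside any metaheuristic search for OGRs.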
Efficient source separation algorithms for acoustic fall detection using a microsoft kinect.
Li, Yun; Ho, K C; Popescu, Mihail
2014-03-01
Falls have become a common health problem among older adults. In a previous study, we proposed an acoustic fall detection system (acoustic FADE) that employed a microphone array and beamforming to provide automatic fall detection. However, the previous acoustic FADE had difficulty detecting the fall signal in environments where interference comes from the fall direction, the number of interferences exceeds FADE's ability to handle them, or a fall is occluded. To address these issues, in this paper we propose two blind source separation (BSS) methods for extracting the fall signal from the interferences to improve the fall classification task. We first propose single-channel BSS using nonnegative matrix factorization (NMF) to automatically decompose the mixture into a linear combination of several basis components. Based on the distinct patterns of the bases of falls, we identify them efficiently and then reconstruct the interference-free fall signal. Next, we extend the single-channel BSS to the multichannel case through a joint NMF over all channels, followed by a delay-and-sum beamformer for additional ambient noise reduction. In our experiments, we used the Microsoft Kinect to collect the acoustic data in real-home environments. The results show that in environments with high interference and background noise levels, the fall detection performance is significantly improved using the proposed BSS approaches.
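The single-channel BSS stage decomposes the mixture with NMF. The classic Lee-Seung multiplicative updates for V ≈ WH with the Euclidean objective are one standard way to compute such a factorization (a generic sketch, not necessarily the authors' exact variant); V would typically be a magnitude spectrogram of the acoustic mixture:

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H (Euclidean cost).
    V must be nonnegative, e.g. a magnitude spectrogram; r is the
    number of basis components."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update bases
    return W, H
```

Columns of W whose spectral pattern matches fall events would then be kept, and the fall signal reconstructed from those components alone.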
Phylogenetic profiles reveal structural/functional determinants of TRPC3 signal-sensing antennae
Ko, Kyung Dae; Bhardwaj, Gaurav; Hong, Yoojin; Chang, Gue Su; Kiselyov, Kirill
2009-01-01
Biochemical assessment of channel structure/function is incredibly challenging. Developing computational tools that provide these data would enable translational research, accelerating mechanistic experimentation for the bench scientist studying ion channels. Starting with the premise that protein sequence encodes information about structure, function and evolution (SF&E), we developed a unified framework for inferring SF&E from sequence information using a knowledge-based approach. The Gestalt Domain Detection Algorithm-Basic Local Alignment Tool (GDDA-BLAST) provides phylogenetic profiles that can model, ab initio, SF&E relationships of biological sequences at the whole protein, single domain and single-amino acid level.1,2 In our recent paper,4 we have applied GDDA-BLAST analysis to study canonical TRP (TRPC) channels1 and empirically validated predicted lipid-binding and trafficking activities contained within the TRPC3 TRP_2 domain of unknown function. Overall, our in silico, in vitro, and in vivo experiments support a model in which TRPC3 has signal-sensing antennae which are adorned with lipid-binding, trafficking and calmodulin regulatory domains. In this Addendum, we correlate our functional domain analysis with the cryo-EM structure of TRPC3.3 In addition, we synthesize recent studies with our new findings to provide a refined model on the mechanism(s) of TRPC3 activation/deactivation. PMID:19704910
Joint channel estimation and multi-user detection for multipath fading channels in DS-CDMA systems
NASA Astrophysics Data System (ADS)
Wu, Sau-Hsuan; Kuo, C.-C. Jay
2002-11-01
The technique of joint blind channel estimation and multiple-access interference (MAI) suppression for an asynchronous code-division multiple-access (CDMA) system is investigated in this research. To identify and track dispersive time-varying fading channels, and to avoid the phase ambiguity that comes with second-order-statistics approaches, a sliding-window scheme using the expectation-maximization (EM) algorithm is proposed. The complexity of joint channel equalization and symbol detection for all users increases exponentially with system loading and channel memory. The situation is exacerbated if strong inter-symbol interference (ISI) exists. To reduce the complexity and the number of samples required for channel estimation, a blind multiuser detector is developed. Together with multi-stage interference cancellation using the soft outputs provided by this detector, our algorithm can track fading channels with no phase ambiguity, even when channel gains attenuate close to zero.
Simulation of single-molecule trapping in a nanochannel
Robinson, William Neil; Davis, Lloyd M.
2010-01-01
The detection and trapping of single fluorescent molecules in solution within a nanochannel is studied using numerical simulations. As optical forces are insufficient for trapping molecules much smaller than the optical wavelength, a means for sensing a molecule’s position along the nanochannel and adjusting electrokinetic motion to compensate diffusion is assessed. Fluorescence excitation is provided by two adjacently focused laser beams containing temporally interleaved laser pulses. Photon detection is time-gated, and the displacement of the molecule from the middle of the two foci alters the count rates collected in the two detection channels. An algorithm for feedback control of the electrokinetic motion in response to the timing of photons, to reposition the molecule back toward the middle for trapping and to rapidly reload the trap after a molecule photobleaches or escapes, is evaluated. While accommodating the limited electrokinetic speed and the finite latency of feedback imposed by experimental hardware, the algorithm is shown to be effective for trapping fast-diffusing single-chromophore molecules within a micron-sized confocal region. Studies show that there is an optimum laser power for which loss of molecules from the trap due to either photobleaching or shot-noise fluctuations is minimized. PMID:20799801
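The feedback idea, sensing a displacement and applying a compensating electrokinetic drift, can be caricatured in one dimension. The sketch below, with hypothetical names and arbitrary parameters, compares a freely diffusing particle with one under noisy proportional position feedback; it ignores the photon-timing estimator and the hardware latency that the paper actually models.

```python
import numpy as np

def trap_simulation(n_steps=20000, dt=1e-4, D=1.0, gain=50.0, seed=1):
    """Crude 1D caricature of electrokinetic feedback trapping:
    a molecule diffuses with coefficient D; a feedback drift
    proportional to its (noisily) sensed position pushes it back
    toward the trap centre. Returns the trajectories with and
    without feedback for the same diffusion kicks."""
    rng = np.random.default_rng(seed)
    kicks = rng.normal(0.0, np.sqrt(2 * D * dt), n_steps)  # Brownian steps
    sense_noise = rng.normal(0.0, 0.05, n_steps)            # position-sensing error
    x_free, x_fb = 0.0, 0.0
    free, fb = [], []
    for k in range(n_steps):
        x_free += kicks[k]
        x_fb += kicks[k] - gain * (x_fb + sense_noise[k]) * dt
        free.append(x_free)
        fb.append(x_fb)
    return np.array(free), np.array(fb)
```

Even this toy model reproduces the qualitative result: the feedback trajectory stays confined near the trap centre while the free trajectory wanders diffusively.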
Compact, cost-effective and field-portable microscope prototype based on MISHELF microscopy
NASA Astrophysics Data System (ADS)
Sanz, Martín; Picazo-Bueno, José Ángel; Granero, Luis; García, Javier; Micó, Vicente
2017-02-01
We report on a reduced-cost, portable and compact prototype design of a lensless holographic microscope with an illumination/detection scheme based on wavelength multiplexing, working with single-hologram acquisition and using a fast-convergence algorithm for image processing. Altogether, MISHELF (Multi-Illumination Single-Holographic-Exposure Lensless Fresnel) microscopy allows the recording of three Fresnel-domain diffraction patterns in a single camera snapshot obtained by illuminating the sample with three coherent wavelengths at once. Previous implementations have proposed an illumination/detection procedure based on a tuned configuration (illumination wavelengths centered at the maximum sensitivity of the camera detection channels), but here we report on a detuned (non-centered) scheme resulting in prototype miniaturization and cost reduction. Thus, MISHELF microscopy in combination with a novel and fast iterative algorithm allows high-resolution (μm range), phase-retrieved (twin-image-eliminated) quantitative phase imaging of dynamic events (video-rate recording speed). The performance of this microscope prototype is validated through experiments using both amplitude (USAF resolution test) and complex (live swine sperm cells and flowing microbeads) samples. The proposed method thus provides an alternative instrument that improves some capabilities of existing lensless microscopes.
Compact, cost-effective and field-portable microscope prototype based on MISHELF microscopy
Sanz, Martín; Picazo-Bueno, José Ángel; Granero, Luis; García, Javier; Micó, Vicente
2017-01-01
We report on a reduced-cost, portable and compact prototype design of a lensless holographic microscope with an illumination/detection scheme based on wavelength multiplexing, working with single-hologram acquisition and using a fast-convergence algorithm for image processing. Altogether, MISHELF (Multi-Illumination Single-Holographic-Exposure Lensless Fresnel) microscopy allows the recording of three Fresnel-domain diffraction patterns in a single camera snapshot obtained by illuminating the sample with three coherent wavelengths at once. Previous implementations have proposed an illumination/detection procedure based on a tuned configuration (illumination wavelengths centered at the maximum sensitivity of the camera detection channels), but here we report on a detuned (non-centered) scheme resulting in prototype miniaturization and cost reduction. Thus, MISHELF microscopy in combination with a novel and fast iterative algorithm allows high-resolution (μm range), phase-retrieved (twin-image-eliminated) quantitative phase imaging of dynamic events (video-rate recording speed). The performance of this microscope prototype is validated through experiments using both amplitude (USAF resolution test) and complex (live swine sperm cells and flowing microbeads) samples. The proposed method thus provides an alternative instrument that improves some capabilities of existing lensless microscopes. PMID:28233829
A review of channel selection algorithms for EEG signal processing
NASA Astrophysics Data System (ADS)
Alotaiby, Turky; El-Samie, Fathi E. Abd; Alshebeili, Saleh A.; Ahmad, Ishtiaq
2015-12-01
Digital processing of electroencephalography (EEG) signals is now popularly used in a wide variety of applications such as seizure detection/prediction, motor imagery classification, mental task classification, emotion classification, sleep state classification, and diagnosis of drug effects. With the large number of EEG channels acquired, it has become apparent that efficient channel selection algorithms are needed, with varying importance from one application to another. The main purpose of the channel selection process is threefold: (i) to reduce the computational complexity of any processing task performed on EEG signals by selecting the relevant channels and hence extracting the features of major importance, (ii) to reduce the amount of overfitting that may arise from the utilization of unnecessary channels, for the purpose of improving performance, and (iii) to reduce the setup time in some applications. Signal processing tools such as time-domain analysis, power spectral estimation, and the wavelet transform have been used for feature extraction, and hence for channel selection, in most channel selection algorithms. In addition, different evaluation approaches such as filtering, wrapper, embedded, hybrid, and human-based techniques have been widely used to evaluate the selected subsets of channels. In this paper, we survey recent developments in the field of EEG channel selection methods along with their applications, and classify these methods according to the evaluation approach.
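As a concrete example of the filtering evaluation approach mentioned above, the sketch below ranks channels of a two-class data set by their Fisher score and keeps the top k. The function names are hypothetical, and this is only one of the many selection criteria the survey covers.

```python
import numpy as np

def fisher_scores(X, y):
    """Filter-style channel ranking: per-channel Fisher score for a
    two-class problem. X has shape (trials, channels), y in {0, 1}.
    A channel scores highly when its class means are far apart
    relative to its within-class variance."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(0) - X1.mean(0)) ** 2
    den = X0.var(0) + X1.var(0) + 1e-12
    return num / den

def select_channels(X, y, k):
    """Keep the k channels with the highest Fisher score."""
    return np.argsort(fisher_scores(X, y))[::-1][:k]
```

Being a filter method, this ranks channels independently of any classifier; wrapper and embedded approaches would instead score candidate subsets through the downstream model.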
Improvement in detection of small wildfires
NASA Astrophysics Data System (ADS)
Sleigh, William J.
1991-12-01
Detecting and imaging small wildfires with an Airborne Scanner is done against generally high background levels. The Airborne Scanner System used is a two-channel thermal IR scanner, with one channel selected for imaging the terrain and the other channel sensitive to hotter targets. If a relationship can be determined between the two channels that quantifies the background signal for hotter targets, then an algorithm can be determined that removes the background signal in that channel leaving only the fire signal. The relationship can be determined anywhere between various points in the signal processing of the radiometric data from the radiometric input to the quantized output of the system. As long as only linear operations are performed on the signal, the relationship will only depend on the system gain and offsets within the range of interest. The algorithm can be implemented either by using a look-up table or performing the calculation in the system computer. The current presentation will describe the algorithm, its derivation, and its implementation in the Firefly Wildfire Detection System by means of an off-the-shelf commercial scanner. Improvement over the previous algorithm used and the margin gained for improving the imaging of the terrain will be demonstrated.
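Because only linear operations act on the signal, the inter-channel relationship reduces to a gain and an offset, so the background in the hot-target channel can be predicted from the imaging channel by a straight-line fit and subtracted. A minimal sketch of that idea follows; the function names are hypothetical and this is not the Firefly system's code.

```python
import numpy as np

def fit_background_relation(imaging, hot):
    """Fit the hot-channel background as a linear function (gain and
    offset) of the imaging channel, using pixels assumed fire-free."""
    gain, offset = np.polyfit(imaging, hot, 1)
    return gain, offset

def remove_background(imaging, hot, gain, offset):
    """Subtract the predicted background from the hot channel,
    leaving only the fire signal (plus residual noise)."""
    return hot - (gain * imaging + offset)
```

In practice the fitted relation could equally be stored as a look-up table, as the abstract notes, rather than evaluated in the system computer.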
Improvement in detection of small wildfires
NASA Technical Reports Server (NTRS)
Sleigh, William J.
1991-01-01
Detecting and imaging small wildfires with an Airborne Scanner is done against generally high background levels. The Airborne Scanner System used is a two-channel thermal IR scanner, with one channel selected for imaging the terrain and the other channel sensitive to hotter targets. If a relationship can be determined between the two channels that quantifies the background signal for hotter targets, then an algorithm can be determined that removes the background signal in that channel leaving only the fire signal. The relationship can be determined anywhere between various points in the signal processing of the radiometric data from the radiometric input to the quantized output of the system. As long as only linear operations are performed on the signal, the relationship will only depend on the system gain and offsets within the range of interest. The algorithm can be implemented either by using a look-up table or performing the calculation in the system computer. The current presentation will describe the algorithm, its derivation, and its implementation in the Firefly Wildfire Detection System by means of an off-the-shelf commercial scanner. Improvement over the previous algorithm used and the margin gained for improving the imaging of the terrain will be demonstrated.
NASA Astrophysics Data System (ADS)
Xi, Songnan; Zoltowski, Michael D.
2008-04-01
Multiuser multiple-input multiple-output (MIMO) systems are considered in this paper. We continue our research on uplink transmit beamforming design for multiple users under the assumption that the full multiuser channel state information, i.e., the collection of the channel state information between each of the users and the base station, is known not only to the receiver but also to all the transmitters. We propose an algorithm for designing optimal beamforming weights in terms of maximizing the signal-to-interference-plus-noise ratio (SINR). Through statistical modeling, we decouple the original mathematically intractable optimization problem and achieve a closed-form solution. As in our previous work, the minimum mean-squared error (MMSE) receiver with successive interference cancellation (SIC) is adopted for multiuser detection. The proposed scheme is compared with an existing jointly optimized transceiver design, referred to as the joint transceiver in this paper, and with our previously proposed eigen-beamforming algorithm. Simulation results demonstrate that our algorithm, with a much smaller computational burden, accomplishes almost the same performance as the joint transceiver for spatially independent MIMO channels and even better performance for spatially correlated MIMO channels. It also consistently outperforms our previously proposed eigen-beamforming algorithm.
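The paper's particular closed form is not given in the abstract, but the textbook closed form for SINR-maximizing beamforming against a known interference-plus-noise covariance is w proportional to R_in^{-1} h. A small NumPy sketch of that classical result, offered only as background rather than as the authors' algorithm:

```python
import numpy as np

def max_sinr_weights(h, R_in):
    """Classical max-SINR (MVDR-like) beamforming weights:
    w proportional to R_in^{-1} h, where h is the desired channel
    vector and R_in the interference-plus-noise covariance."""
    w = np.linalg.solve(R_in, h)
    return w / np.linalg.norm(w)

def sinr(w, h, R_in):
    """Output SINR of weight vector w for channel h and covariance R_in."""
    return float(np.abs(w.conj() @ h) ** 2 / np.real(w.conj() @ R_in @ w))
```

By construction no other weight vector achieves a higher SINR for the same h and R_in, which is why it serves as a natural reference point for beamformer designs like the one in the abstract.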
Performance comparison of extracellular spike sorting algorithms for single-channel recordings.
Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert
2012-01-30
Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely-used publicly-available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better (p<0.01) with optimized parameters than with the default ones. WaveClus was the most accurate spike sorting algorithm, receiving the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms. In terms of accuracy, OSort performed significantly less well (p<0.01) than WaveClus for signals with a noise level in the range 0.15-0.30. KlustaKwik achieved similar scores to WaveClus for signals with low noise level 0.00-0.15 and was worse otherwise. In conclusion, none of the three compared algorithms was optimal in general. The accuracy of the algorithms depended on proper choice of the algorithm parameters and also on specific properties of the examined signal.
Digitally Controlled Slot Coupled Patch Array
NASA Technical Reports Server (NTRS)
D'Arista, Thomas; Pauly, Jerry
2010-01-01
A four-element array conformed to a singly curved conducting surface has been demonstrated to provide a 2 dB axial ratio over a 14 percent bandwidth, while maintaining a VSWR (voltage standing wave ratio) of 2:1 and a gain of 13 dBiC. The array is digitally controlled and can be scanned with the LMS adaptive algorithm using the power spectrum as the objective, as well as the direction of arrival (DoA) of the beam to set the amplitude of the power spectrum. The total height of the array above the conducting surface is 1.5 inches (3.8 cm). A uniquely configured microstrip-coupled aperture over a conducting surface produced supergain characteristics, achieving 12.5 dBiC across the 2-to-2.13-GHz and 2.2-to-2.3-GHz frequency bands. This design is optimized to retain VSWR and axial ratio across the band as well. The four elements are uniquely configured with respect to one another for performance enhancement, and the appropriate phase excitation to each element for scan can be found either by analytical beam synthesis using the genetic algorithm with the measured or simulated far-field radiation pattern, or by an adaptive algorithm implemented with the digitized signal. The commercially available tuners and field-programmable gate array (FPGA) boards utilized required precise phase-coherent configuration control, and with custom code developed by Nokomis, Inc., were shown to be fully functional in a two-channel configuration controlled by FPGA boards. A four-channel tuner configuration and oscilloscope configuration were also demonstrated, although algorithm post-processing was required.
Zhang, Xiong; Zhao, Yacong; Zhang, Yu; Zhong, Xuefei; Fan, Zhaowen
2018-01-01
The novel human-computer interface (HCI) using bioelectrical signals as input is a valuable tool to improve the lives of people with disabilities. In this paper, surface electromyography (sEMG) signals induced by four classes of wrist movements were acquired from four sites on the lower arm with our designed system. Forty-two features were extracted from the time, frequency and time-frequency domains. Optimal channels were determined from the single-channel classification performance rank. The optimal-feature selection was according to a modified entropy criteria (EC) and Fisher discrimination (FD) criteria. The feature selection results were evaluated by four different classifiers, and compared with other conventional feature subsets. In online tests, the wearable system acquired real-time sEMG signals. The selected features and trained classifier model were used to control a telecar through four different paradigms in a designed environment with simple obstacles. Performance was evaluated based on travel time (TT) and recognition rate (RR). The results of hardware evaluation verified the feasibility of our acquisition systems, and ensured signal quality. Single-channel analysis results indicated that the channel located on the extensor carpi ulnaris (ECU) performed best, with mean classification accuracy of 97.45% for all movement pairs. Channels placed on the ECU and the extensor carpi radialis (ECR) were selected according to the accuracy rank. Experimental results showed that the proposed FD method was better than other feature selection methods and single-type features. The combination of FD and random forest (RF) performed best in offline analysis, with 96.77% multi-class RR. Online results illustrated that the state-machine paradigm with a 125 ms window had the highest maneuverability and was closest to real-life control. Subjects could accomplish online sessions by three sEMG-based paradigms, with average times of 46.02, 49.06 and 48.08 s, respectively.
These experiments validate the feasibility of proposed real-time wearable HCI system and algorithms, providing a potential assistive device interface for persons with disabilities. PMID:29543737
Designing spin-channel geometries for entanglement distribution
NASA Astrophysics Data System (ADS)
Levi, E. K.; Kirton, P. G.; Lovett, B. W.
2016-09-01
We investigate different geometries of spin-1/2 nitrogen impurity channels for distributing entanglement between pairs of remote nitrogen vacancy centers (NVs) in diamond. To go beyond the system size limits imposed by directly solving the master equation, we implement a matrix product operator method to describe the open system dynamics. In so doing, we provide an early demonstration of how the time-evolving block decimation algorithm can be used for answering a problem related to a real physical system that could not be accessed by other methods. For a fixed NV separation there is an interplay between incoherent impurity spin decay and coherent entanglement transfer: Long-transfer-time, few-spin systems experience strong dephasing that can be overcome by increasing the number of spins in the channel. We examine how missing spins and disorder in the coupling strengths affect the dynamics, finding that in some regimes a spin ladder is a more effective conduit for information than a single-spin chain.
On Modeling and Analysis of MIMO Wireless Mesh Networks with Triangular Overlay Topology
Cao, Zhanmao; Wu, Chase Q.; Zhang, Yuanping; ...
2015-01-01
Multiple input multiple output (MIMO) wireless mesh networks (WMNs) aim to provide the last-mile broadband wireless access to the Internet. Along with the algorithmic development for WMNs, some fundamental mathematical problems also emerge in various aspects such as routing, scheduling, and channel assignment, all of which require an effective mathematical model and rigorous analysis of network properties. In this paper, we propose to employ Cartesian product of graphs (CPG) as a multichannel modeling approach and explore a set of unique properties of triangular WMNs. In each layer of CPG with a single channel, we design a node coordinate scheme that retains the symmetric property of triangular meshes and develop a function for the assignment of node identity numbers based on their coordinates. We also derive a necessary-sufficient condition for interference-free links and combinatorial formulas to determine the number of the shortest paths for channel realization in triangular WMNs.
Thermosolutal convection and macrosegregation in dendritic alloys
NASA Technical Reports Server (NTRS)
Poirier, David R.; Heinrich, J. C.
1993-01-01
A mathematical model of solidification that simulates the formation of channel segregates, or freckles, is presented. The model simulates the entire solidification process, from the initial melt through to the solidified casting, and the resulting segregation is predicted. Emphasis is given to the initial transient, when the dendritic zone begins to develop and the conditions for the possible nucleation of channels are established. The mechanisms that lead to the creation and eventual growth or termination of channels are explained in detail and illustrated by several numerical examples. A finite element model is used for the simulations. It uses a single system of equations to deal with the all-liquid region, the dendritic region, and the all-solid region. The dendritic region is treated as an anisotropic porous medium. The algorithm uses the bilinear isoparametric element, with a penalty function approximation and a Petrov-Galerkin formulation. The major task was to develop the solidification model. In addition, other tasks that were performed in conjunction with the modeling of dendritic solidification are briefly described.
Fast converging minimum probability of error neural network receivers for DS-CDMA communications.
Matyjas, John D; Psaromiligkos, Ioannis N; Batalama, Stella N; Medley, Michael J
2004-03-01
We consider a multilayer perceptron neural network (NN) receiver architecture for the recovery of the information bits of a direct-sequence code-division-multiple-access (DS-CDMA) user. We develop a fast converging adaptive training algorithm that minimizes the bit-error rate (BER) at the output of the receiver. The adaptive algorithm has three key features: i) it incorporates the BER, i.e., the ultimate performance evaluation measure, directly into the learning process, ii) it utilizes constraints that are derived from the properties of the optimum single-user decision boundary for additive white Gaussian noise (AWGN) multiple-access channels, and iii) it embeds importance sampling (IS) principles directly into the receiver optimization process. Simulation studies illustrate the BER performance of the proposed scheme.
NASA Astrophysics Data System (ADS)
Testorf, M. E.; Jobst, B. C.; Kleen, J. K.; Titiz, A.; Guillory, S.; Scott, R.; Bujarski, K. A.; Roberts, D. W.; Holmes, G. L.; Lenck-Santini, P.-P.
2012-10-01
Time-frequency transforms are used to identify events in clinical EEG data. Data are recorded as part of a study for correlating the performance of human subjects during a memory task with pathological events in the EEG, called spikes. The spectrogram and the scalogram are reviewed as tools for evaluating spike activity. A statistical evaluation of the continuous wavelet transform across trials is used to quantify phase-locking events. For simultaneously improving the time and frequency resolution, and for representing the EEG of several channels or trials in a single time-frequency plane, a multichannel matching pursuit algorithm is used. Fundamental properties of the algorithm are discussed as well as preliminary results, which were obtained with clinical EEG data.
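Matching pursuit itself is compact to state: greedily pick the dictionary atom most correlated with the residual and subtract its projection. A single-channel sketch with unit-norm atoms follows (the paper uses a multichannel variant over EEG channels or trials, which this does not reproduce):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit. dictionary has shape
    (n_total_atoms, len(signal)) with unit-norm rows; at each step
    the atom most correlated with the residual is selected and its
    projection removed. Returns the (index, coefficient) pairs and
    the final residual."""
    residual = signal.astype(float).copy()
    atoms = []
    for _ in range(n_atoms):
        corr = dictionary @ residual
        k = int(np.argmax(np.abs(corr)))
        coef = corr[k]            # projection onto a unit-norm atom
        residual -= coef * dictionary[k]
        atoms.append((k, coef))
    return atoms, residual
```

With a Gabor dictionary instead of the orthonormal basis used in the test below, the selected atoms give the time-frequency decomposition referred to in the abstract.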
Somers, Ben; Bertrand, Alexander
2016-12-01
Chronic, 24/7 EEG monitoring requires the use of highly miniaturized EEG modules, which only measure a few EEG channels over a small area. For improved spatial coverage, a wireless EEG sensor network (WESN) can be deployed, consisting of multiple EEG modules, which interact through short-distance wireless communication. In this paper, we aim to remove eye blink artifacts in each EEG channel of a WESN by optimally exploiting the correlation between EEG signals from different modules, under stringent communication bandwidth constraints. We apply a distributed canonical correlation analysis (CCA-)based algorithm, in which each module only transmits an optimal linear combination of its local EEG channels to the other modules. The method is validated on both synthetic and real EEG data sets, with emulated wireless transmissions. While strongly reducing the amount of data that is shared between nodes, we demonstrate that the algorithm achieves the same eye blink artifact removal performance as the equivalent centralized CCA algorithm, which is at least as good as other state-of-the-art multi-channel algorithms that require a transmission of all channels. Due to their potential for extreme miniaturization, WESNs are viewed as an enabling technology for chronic EEG monitoring. However, multi-channel analysis is hampered in WESNs due to the high energy cost for wireless communication. This paper shows that multi-channel eye blink artifact removal is possible with a significantly reduced wireless communication between EEG modules.
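In the limit where all channels are available at one node, the distributed algorithm reduces to ordinary two-view CCA. A NumPy sketch of plain CCA via whitening and an SVD is given below as that centralized baseline; it is not the bandwidth-constrained distributed version the paper proposes.

```python
import numpy as np

def cca(X, Y, reg=1e-6):
    """Centralized canonical correlation analysis between two
    multichannel recordings X, Y of shape (samples, channels).
    Returns the canonical correlations and projection matrices."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # One valid whitening: if inv(C) = L L^T then L^T C L = I.
    Wx = np.linalg.cholesky(np.linalg.inv(Cxx))
    Wy = np.linalg.cholesky(np.linalg.inv(Cyy))
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    return s, Wx @ U, Wy @ Vt.T
```

For artifact removal, the highly correlated leading canonical components (which capture the eye blink common to both views) would be projected out before reconstructing the EEG.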
NASA Astrophysics Data System (ADS)
Somers, Ben; Bertrand, Alexander
2016-12-01
Objective. Chronic, 24/7 EEG monitoring requires the use of highly miniaturized EEG modules, which only measure a few EEG channels over a small area. For improved spatial coverage, a wireless EEG sensor network (WESN) can be deployed, consisting of multiple EEG modules, which interact through short-distance wireless communication. In this paper, we aim to remove eye blink artifacts in each EEG channel of a WESN by optimally exploiting the correlation between EEG signals from different modules, under stringent communication bandwidth constraints. Approach. We apply a distributed canonical correlation analysis (CCA-)based algorithm, in which each module only transmits an optimal linear combination of its local EEG channels to the other modules. The method is validated on both synthetic and real EEG data sets, with emulated wireless transmissions. Main results. While strongly reducing the amount of data that is shared between nodes, we demonstrate that the algorithm achieves the same eye blink artifact removal performance as the equivalent centralized CCA algorithm, which is at least as good as other state-of-the-art multi-channel algorithms that require a transmission of all channels. Significance. Due to their potential for extreme miniaturization, WESNs are viewed as an enabling technology for chronic EEG monitoring. However, multi-channel analysis is hampered in WESNs due to the high energy cost for wireless communication. This paper shows that multi-channel eye blink artifact removal is possible with a significantly reduced wireless communication between EEG modules.
Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network.
Choi, Sangil; Park, Jong Hyuk
2016-12-02
Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM.
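The core greedy idea, assigning each tree node the channel that minimizes the accumulated interference factors with already-assigned nodes, can be sketched compactly. The code below is a simplified stand-in for MICA with hypothetical data structures (edges listed parent-first, a dict of pairwise interference factors), not the published algorithm.

```python
def assign_channels(tree_edges, n_channels, interference):
    """Greedily assign channels to multicast-tree nodes.

    tree_edges: list of (parent, child) pairs, listed parent-first so
    each child's parent is assigned before the child is visited.
    interference: dict mapping a node pair (u, v) to its interference
    factor; missing pairs are treated as non-interfering.
    Each child receives the channel minimizing total interference
    with the nodes already assigned."""
    channel = {tree_edges[0][0]: 0}  # put the root on channel 0
    for parent, child in tree_edges:
        best_c, best_cost = 0, float("inf")
        for c in range(n_channels):
            cost = sum(
                f for (u, v), f in interference.items()
                if child in (u, v)
                and channel.get(v if u == child else u) == c
            )
            if cost < best_cost:
                best_c, best_cost = c, cost
        channel[child] = best_c
    return channel
```

On a three-node chain where adjacent nodes interfere strongly, the greedy pass alternates channels, which is the qualitative behavior the interference-factor model is meant to produce.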
Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network
Choi, Sangil; Park, Jong Hyuk
2016-01-01
Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM. PMID:27918438
A Degree Distribution Optimization Algorithm for Image Transmission
NASA Astrophysics Data System (ADS)
Jiang, Wei; Yang, Junjie
2016-09-01
Luby Transform (LT) codes are the first practical implementation of digital fountain codes. The coding behavior of an LT code is mainly decided by its degree distribution, which determines the relationship between source data and codewords. Two degree distributions were suggested by Luby. They work well in typical situations but not optimally in the case of a finite number of encoding symbols. In this work, a degree distribution optimization algorithm is proposed to explore the potential of LT codes. First, a selection scheme for sparse degrees of LT codes is introduced. Then the probability distribution over the selected degrees is optimized. In image transmission, the bit stream is sensitive to channel noise, and even a single bit error may cause loss of synchronization between the encoder and the decoder. The proposed algorithm is therefore designed for the image transmission setting. Moreover, optimal class partitioning is studied for image transmission with unequal error protection. The experimental results are quite promising: compared with an LT code with the robust soliton distribution, the proposed algorithm clearly improves the final quality of the recovered images at the same overhead.
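The robust soliton distribution that the proposed algorithm is compared against has a standard closed form: Luby's ideal soliton rho(d) plus a spike term tau(d), renormalized. A direct Python transcription, with the conventional parameter names c and delta:

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Luby's robust soliton degree distribution over degrees 1..k.
    Returns a list p where p[d-1] is the probability of degree d."""
    S = c * math.log(k / delta) * math.sqrt(k)
    # Ideal soliton: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d >= 2.
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Spike term tau, concentrated below and at d = k/S.
    tau = []
    for d in range(1, k + 1):
        if d < int(k / S):
            tau.append(S / (k * d))
        elif d == int(k / S):
            tau.append(S * math.log(S / delta) / k)
        else:
            tau.append(0.0)
    Z = sum(rho) + sum(tau)
    return [(r + t) / Z for r, t in zip(rho, tau)]
```

Degree 2 carries the largest probability mass, which is what drives the ripple-based decoding process that the abstract's optimization targets for the finite-length case.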
RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that achievable with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) scheme based on the MMSE criterion. With MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and good BER performance is achieved. In this paper, we propose a channel estimation scheme for DS-CDMA with FDE using a one-tap recursive least squares (RLS) algorithm, in which the forgetting factor is adapted to the changing channel conditions by the least mean square (LMS) algorithm. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
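A scalar (one-tap) RLS update with a fixed forgetting factor can be sketched as below; the paper's LMS adaptation of the forgetting factor is omitted, and the variable names and initialization are our assumptions.

```python
def rls_one_tap(pilots, received, lam=0.95, delta=100.0):
    """One-tap RLS channel estimate.
    pilots: known pilot chips p[n]; received: observations r[n] = h*p[n] + noise.
    lam: forgetting factor (fixed here); delta: initial inverse-correlation value."""
    h, P = 0.0 + 0.0j, delta
    for p, r in zip(pilots, received):
        g = P * p.conjugate() / (lam + abs(p) ** 2 * P)  # RLS gain
        h = h + g * (r - h * p)                          # update estimate
        P = (P - g * p * P) / lam                        # update inverse correlation
    return h
```

On a noiseless, static channel the estimate converges to the true tap within a few pilot symbols; a smaller lam tracks faster channel variation at the cost of noisier estimates, which is the tradeoff the adaptive forgetting factor addresses.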
NASA Astrophysics Data System (ADS)
Schämann, M.; Bücker, M.; Hessel, S.; Langmann, U.
2008-05-01
High data rates combined with high mobility represent a challenge for the design of cellular devices. Advanced algorithms are required, which result in higher complexity, more chip area, and increased power consumption; this, however, contrasts with the limited power supply of mobile devices. This presentation discusses an HSDPA receiver that has been optimized for power consumption at the algorithmic and architectural levels. On the algorithmic level, the Rake combiner, Prefilter-Rake equalizer, and MMSE equalizer are compared regarding their BER performance. Both equalizer approaches provide a significant performance increase for high data rates compared to the Rake combiner, which is commonly used for lower data rates. For both equalizer approaches, several adaptive algorithms are available which differ in complexity and convergence properties. To identify the algorithm that achieves the required performance with the lowest power consumption, the algorithms have been investigated using SystemC models with respect to their performance and arithmetic complexity. Additionally, for the Prefilter-Rake equalizer, the power estimates of a modified Griffith (LMS) and a Levinson (RLS) algorithm have been compared with the tool ORINOCO supplied by ChipVision. The accuracy of this tool has been verified with a scalable architecture of the UMTS channel estimation, described in both SystemC and VHDL, targeting a 130 nm CMOS standard cell library. An architecture combining all three approaches with an adaptive control unit is presented. The control unit monitors the current condition of the propagation channel and adjusts receiver parameters, such as filter size and oversampling ratio, to minimize power consumption while maintaining the required performance.
The optimization strategies result in a reduction of the number of arithmetic operations of up to 70% for single components, which leads to an estimated power reduction of up to 40% while the BER performance is unaffected. This work utilizes SystemC and ORINOCO for a first estimation of power consumption at an early step of the design flow, so that algorithms can be compared in different operating modes, including the effects of control units. Here, an algorithm with higher peak complexity and power consumption but greater flexibility showed lower consumption in normal operating modes than the algorithm optimized for peak performance.
Electronics and triggering challenges for the CMS High Granularity Calorimeter
NASA Astrophysics Data System (ADS)
Lobanov, A.
2018-02-01
The High Granularity Calorimeter (HGCAL), presently being designed by the CMS collaboration to replace the CMS endcap calorimeters for the High Luminosity phase of LHC, will feature six million channels distributed over 52 longitudinal layers. The requirements for the front-end electronics are extremely challenging, including high dynamic range (0.2 fC-10 pC), low noise (~2000 e- to be able to calibrate on single minimum ionising particles throughout the detector lifetime) and low power consumption (~20 mW/channel), as well as the need to select and transmit trigger information with a high granularity. Exploiting the intrinsic precision-timing capabilities of silicon sensors also requires careful design of the front-end electronics as well as the whole system, particularly clock distribution. The harsh radiation environment and requirement to keep the whole detector as dense as possible will require novel solutions to the on-detector electronics layout. Processing the data from the HGCAL imposes equally large challenges on the off-detector electronics, both for the hardware and incorporated algorithms. We present an overview of the complete electronics architecture, as well as the performance of prototype components and algorithms.
NASA Technical Reports Server (NTRS)
Wennersten, Miriam; Banes, Vince; Boegner, Greg; Clagnett, Charles; Dougherty, Lamar; Edwards, Bernard; Roman, Joe; Bauer, Frank H. (Technical Monitor)
2001-01-01
NASA Goddard Space Flight Center has built an open-architecture, 24-channel spaceflight Global Positioning System (GPS) receiver. The CompactPCI PiVoT GPS receiver card is based on the Mitel/GEC Plessey Builder 2 board. PiVoT uses two Plessey 2021 correlators to allow tracking of up to 24 separate GPS SVs on unique channels. Its four front ends can support four independent antennas, making it a useful card for hosting GPS attitude determination algorithms. It has been built using space-quality, radiation-tolerant parts. The PiVoT card works at a lower signal-to-noise ratio than the original Builder 2 board and hosts an improved clock oscillator. The PiVoT software is based on the original Plessey Builder 2 software ported to the Linux operating system. The software is POSIX-compliant and can be easily converted to other POSIX operating systems; it is open source to anyone with a licensing agreement with Plessey. Additional tasks can be added to the software to support GPS science experiments or attitude determination algorithms. The next-generation PiVoT receiver will be a single radiation-hardened CompactPCI card containing the microprocessor and the GPS receiver, optimized for use above the GPS constellation.
NASA Astrophysics Data System (ADS)
Clergeau, Jean-François; Ferraton, Matthieu; Guérard, Bruno; Khaplanov, Anton; Piscitelli, Francesco; Platz, Martin; Rigal, Jean-Marie; Van Esch, Patrick; Daullé, Thibault
2017-01-01
1D or 2D neutron position-sensitive detectors with individual wire or strip readout using discriminators have the advantage of being able to treat several neutron impacts partially overlapping in time, hence reducing global dead time. A single neutron impact usually gives rise to several discriminator signals. In this paper, we introduce an information-theoretical definition of image resolution. Two point-like spots of neutron impacts with a given distance between them act as a source of information (each neutron hit belongs to one spot or the other), and the detector plus signal treatment is regarded as an imperfect communication channel that transmits this information. The maximal mutual information obtained from this channel as a function of the distance between the spots allows one to define a calibration-independent measure of position resolution. We then apply this measure to quantify the position resolution of different algorithms, implementable in firmware, that treat these individual discriminator signals. The method is then applied to different detectors existing at the ILL. Center-of-gravity methods usually improve the position resolution over best-wire algorithms, which are the standard way of treating these signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perea, Philip Michael
I have performed a search for t-channel single top quark production in p p̄ collisions at √s = 1.96 TeV on a 366 pb⁻¹ dataset collected with the D0 detector from 2002-2005. The analysis is restricted to the leptonic decay of the W boson from the top quark to an electron or muon, tq b̄ → ℓ ν_ℓ b q b̄ (ℓ = e, μ). A powerful b-quark tagging algorithm derived from neural networks is used to identify b jets and significantly reduce background. I further use neural networks to discriminate signal from background, and apply a binned likelihood calculation to the neural-network output distributions to derive the final limits. No direct observation of single top quark production has been made, and I report expected/measured 95% confidence level limits of 3.5/8.0 pb.
A Flexible Annular-Array Imaging Platform for Micro-Ultrasound
Qiu, Weibao; Yu, Yanyan; Chabok, Hamid Reza; Liu, Cheng; Tsang, Fu Keung; Zhou, Qifa; Shung, K. Kirk; Zheng, Hairong; Sun, Lei
2013-01-01
Micro-ultrasound is an invaluable imaging tool for many clinical and preclinical applications requiring high resolution (approximately several tens of micrometers). Imaging systems for micro-ultrasound, including single-element and linear-array systems, have been developed extensively in recent years. Single-element systems are cheaper, but linear-array systems give much better image quality at higher expense. Annular-array-based systems provide a third alternative, striking a balance between image quality and expense. This paper presents the development of a novel programmable, real-time annular-array imaging platform for micro-ultrasound. It supports multi-channel dynamic beamforming techniques for large-depth-of-field imaging. The major image processing algorithms were implemented in a novel field-programmable gate array design for high speed and flexibility. Real-time imaging was achieved by fast processing algorithms and a high-speed data transfer interface. The platform uses a printed-circuit-board scheme incorporating state-of-the-art electronics for compactness and cost effectiveness. Extensive tests, including hardware, algorithm, wire phantom, and tissue-mimicking phantom measurements, were conducted to demonstrate the good performance of the platform. The calculated contrast-to-noise ratio (CNR) of the tissue phantom measurements was higher than 1.2 over the 3.8 to 8.7 mm imaging depth range. The platform supported more than 25 images per second for real-time image acquisition, and the depth of field showed about a 2.5-fold improvement compared to single-element transducer imaging.
NASA Astrophysics Data System (ADS)
Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad
2017-12-01
This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC) under imperfect channel state information (CSI). Two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has M and N antennas, respectively, and the IC operates in time-division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of the signal-to-interference-plus-noise ratio (SINR). The second algorithm, on the other hand, tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. A Taylor expansion is exploited to approximate the effect of CSI imperfection on the mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate the sum-rate performance of the proposed algorithms and the advantage of incorporating variance minimization into the transceiver design.
Moraes, Carolina Borsoi; Yang, Gyongseon; Kang, Myungjoo; Freitas-Junior, Lucio H.; Hansen, Michael A. E.
2014-01-01
We present a customized high-content (image-based) and high-throughput screening algorithm for the quantification of Trypanosoma cruzi infection in host cells. Based solely on DNA staining and single-channel images, the algorithm precisely segments and identifies the nuclei and cytoplasm of mammalian host cells as well as the intracellular parasites infecting the cells. The algorithm outputs statistical parameters including the total number of cells, the number of infected cells, the total number of parasites per image, the average number of parasites per infected cell, and the infection ratio (defined as the number of infected cells divided by the total number of cells). Accurate and precise estimation of these parameters allows quantification of both compound activity against the parasites and compound cytotoxicity, eliminating the need for an additional toxicity assay and thereby reducing screening costs significantly. We validate the performance of the algorithm using two known drugs against T. cruzi, benznidazole and nifurtimox, and we have checked the performance of the cell detection against manual inspection of the images. Finally, from the titration of the two compounds, we confirm that the algorithm provides the expected half-maximal effective concentration (EC50) of the anti-T. cruzi activity.
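The statistical parameters listed above are straightforward to compute once cells and parasites have been segmented. A minimal sketch, assuming a per-cell parasite count as input (the image segmentation itself is the hard part and is not shown):

```python
def infection_stats(cells):
    """cells: list of per-cell parasite counts (0 = uninfected).
    Returns the summary statistics described in the abstract."""
    total = len(cells)
    infected = [c for c in cells if c > 0]
    return {
        "total_cells": total,
        "infected_cells": len(infected),
        "total_parasites": sum(cells),
        "parasites_per_infected_cell":
            sum(infected) / len(infected) if infected else 0.0,
        "infection_ratio": len(infected) / total if total else 0.0,
    }
```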
A three-dimensional spectral algorithm for simulations of transition and turbulence
NASA Technical Reports Server (NTRS)
Zang, T. A.; Hussaini, M. Y.
1985-01-01
A spectral algorithm for simulating three-dimensional, incompressible, parallel shear flows is described. It applies to the channel, to the parallel boundary layer, and to other shear flows with one wall-bounded and two periodic directions. Representative applications to the channel and to the heated boundary layer are presented.
AMSR2 Soil Moisture Product Validation
NASA Technical Reports Server (NTRS)
Bindlish, R.; Jackson, T.; Cosh, M.; Koike, T.; Fuiji, X.; de Jeu, R.; Chan, S.; Asanuma, J.; Berg, A.; Bosch, D.;
2017-01-01
The Advanced Microwave Scanning Radiometer 2 (AMSR2) is part of the Global Change Observation Mission-Water (GCOM-W) mission. AMSR2 fills the void left by the loss of the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) after almost 10 years. Both missions provide brightness temperature observations that are used to retrieve soil moisture. Merging AMSR-E and AMSR2 will help build a consistent long-term dataset, but before tackling that integration it is necessary to conduct a thorough validation and assessment of the AMSR2 soil moisture products. This study focuses on validation of the AMSR2 soil moisture products by comparison with in situ reference data from a set of core validation sites. Three products that rely on different algorithms were evaluated: the JAXA Soil Moisture Algorithm (JAXA), the Land Parameter Retrieval Model (LPRM), and the Single Channel Algorithm (SCA). Results indicate that, overall, the SCA has the best performance based on the metrics considered.
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2014-11-01
A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud masks were designed to be numerically efficient and suited for the processing of large amounts of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2015-04-01
A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud detection schemes were designed to be numerically efficient and suited for the processing of large amounts of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
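The naive Bayesian classification referred to in the two abstracts above combines per-feature likelihoods under an independence assumption. A minimal sketch, with per-feature histogram lookups standing in for the multidimensional histograms described in the abstracts; the 1e-9 floor for empty bins is our assumption, not the authors' smoothing scheme.

```python
import math

def naive_bayes_cloudy(features, hist_cloudy, hist_clear, prior_cloudy=0.5):
    """Posterior probability of 'cloudy' for one pixel.
    features: list of binned feature values; hist_cloudy/hist_clear: one
    dict per feature mapping bin -> likelihood (assumed pre-normalized)."""
    log_c = math.log(prior_cloudy)
    log_s = math.log(1.0 - prior_cloudy)
    for i, x in enumerate(features):
        log_c += math.log(hist_cloudy[i].get(x, 1e-9))  # floor for empty bins
        log_s += math.log(hist_clear[i].get(x, 1e-9))
    pc, ps = math.exp(log_c), math.exp(log_s)
    return pc / (pc + ps)
```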
NASA Astrophysics Data System (ADS)
Li, Xinhua; Song, Zhenyu; Zhan, Yongjie; Wu, Qiongzhi
2009-12-01
Since the system capacity is severely limited, reducing the multiple access interference (MAI) is necessary in the multiuser direct-sequence code division multiple access (DS-CDMA) system used in the data-transfer link of telecommunication terminals. In this paper, after reviewing various multiuser detection schemes, we adopt an adaptive multistage parallel interference cancellation (PIC) structure in the demodulator, based on the least mean square (LMS) algorithm, to eliminate the MAI. Neither a training sequence nor a pilot signal is needed in the proposed scheme, and its implementation complexity can be greatly reduced by an approximate LMS algorithm. The algorithm and its FPGA implementation are then derived. Simulation results show that the proposed adaptive PIC outperforms some existing interference cancellation methods in AWGN channels. The hardware setup of the multiuser demodulator is described, and experimental results based on it demonstrate large performance gains over the conventional single-user demodulator.
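One stage of an adaptive parallel interference cancellation scheme can be sketched as follows; the signal model, the weight update, and all names here are illustrative assumptions rather than the paper's exact algorithm.

```python
def lms_pic_stage(matched_outputs, cross_corr, weights, mu):
    """One adaptive PIC stage for BPSK users.
    matched_outputs: matched-filter outputs y_k (bit + MAI + noise);
    cross_corr[k][j]: correlation between user k's and j's spreading codes
    (assumed known); weights: per-user cancellation weights, adapted by a
    gradient (LMS-style) step on the output power."""
    decisions = [1.0 if y >= 0 else -1.0 for y in matched_outputs]
    cleaned = []
    for k, y in enumerate(matched_outputs):
        # Estimated MAI seen by user k from all other users' tentative bits.
        mai = sum(cross_corr[k][j] * decisions[j]
                  for j in range(len(matched_outputs)) if j != k)
        z = y - weights[k] * mai          # cancel estimated interference
        weights[k] += mu * z * mai        # LMS-style weight update (assumed)
        cleaned.append(z)
    return cleaned, weights
```

For two users with code correlation 0.3 and correct tentative decisions, one stage removes the cross-term exactly, which is the idealized behavior a multistage PIC refines under noise.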
Wearable EEG via lossless compression.
Dufort, Guillermo; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo
2016-08-01
This work presents a wearable multi-channel EEG recording system featuring a lossless compression algorithm. The algorithm, based on an algorithm previously reported by the authors, exploits the temporal correlation between samples at different sampling times and the spatial correlation between different electrodes across the scalp. The low-power platform is able to compress, by a factor between 2.3 and 3.6, up to 300 sps from 64 channels with a power consumption of 176 μW/ch. The performance of the algorithm compares favorably with the best compression rates reported to date in the literature.
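The temporal-correlation part of such a lossless scheme can be illustrated with simple first-order predictive coding (the residuals would then be entropy-coded); this is only a sketch of the general idea, not the authors' algorithm, and the spatial (cross-electrode) prediction is omitted.

```python
def predict_residuals(samples):
    """Encode each sample as its difference from the previous one.
    Correlated samples yield small residuals, which entropy coding exploits."""
    prev, res = 0, []
    for s in samples:
        res.append(s - prev)
        prev = s
    return res

def reconstruct(residuals):
    """Exact inverse of predict_residuals, hence lossless."""
    out, prev = [], 0
    for r in residuals:
        prev += r
        out.append(prev)
    return out
```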
Based on the CSI regional segmentation indoor localization algorithm
NASA Astrophysics Data System (ADS)
Zeng, Xi; Lin, Wei; Lan, Jingwei
2017-08-01
To address the high cost and low accuracy of indoor positioning, a region-segmentation method based on Channel State Information (CSI) is proposed. Because CSI is stable and robust against multipath effects, it is used to segment the localization area: the method acquires the CSI of different links to pinpoint the region in which the target is located. The method can thereby improve positioning accuracy while reducing the cost of the fingerprint-based localization algorithm.
On Channel-Discontinuity-Constraint Routing in Wireless Networks
Sankararaman, Swaminathan; Efrat, Alon; Ramasubramanian, Srinivasan; Agarwal, Pankaj K.
2011-01-01
Multi-channel wireless networks are increasingly deployed as infrastructure networks, e.g. in metro areas. Network nodes frequently employ directional antennas to improve spatial throughput. In such networks, between two nodes, it is of interest to compute a path with a channel assignment for the links such that the path and link bandwidths are the same. This is achieved when any two consecutive links are assigned different channels, a requirement termed the "Channel-Discontinuity-Constraint" (CDC). CDC-paths are also useful in TDMA systems, where consecutive links are preferably assigned different time-slots. In the first part of this paper, we develop a t-spanner for CDC-paths using spatial properties: a sub-network containing O(n/θ) links, for any θ > 0, such that the cost of CDC-paths increases by at most a factor t = (1 − 2 sin(θ/2))⁻². We propose a novel distributed algorithm to compute the spanner using an expected number of O(n log n) fixed-size messages. In the second part, we present a distributed algorithm to find minimum-cost CDC-paths between two nodes using O(n²) fixed-size messages, by developing an extension of Edmonds' algorithm for minimum-cost perfect matching. In a centralized implementation, our algorithm runs in O(n²) time, improving on the previous best algorithm, which requires O(n³) running time. Moreover, this running time improves to O(n/θ) when used in conjunction with the spanner developed.
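The spanner's stretch factor quoted above can be evaluated directly; note that the bound is meaningful only while 2 sin(θ/2) < 1, i.e. θ < π/3.

```python
import math

def spanner_stretch(theta):
    """Stretch factor t = (1 - 2*sin(theta/2))**-2 from the paper's bound.
    Valid for 0 <= theta < pi/3, where the base stays positive."""
    return (1.0 - 2.0 * math.sin(theta / 2.0)) ** -2
```

As expected, the stretch approaches 1 as θ → 0 (denser spanner, tighter paths) and grows as θ increases.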
NASA Technical Reports Server (NTRS)
Yuan, Lu; LeBlanc, James
1998-01-01
This thesis investigates the effects of the high-power amplifier (HPA) and the filters over a satellite or telemetry channel. The Volterra series expression is presented for the nonlinear channel with memory, and the algorithm is based on the finite-state machine model. A RAM-based algorithm operating on the receiver side, the Pre-cursor Enhanced RAM-FSE Canceler (PERC), is developed. A high-order modulation scheme, 16-QAM, is used for simulation; the results show that PERC provides an efficient and reliable method to transmit data over the bandlimited nonlinear channel. The contribution of the PERC algorithm is that it includes both pre-cursors and post-cursors as the RAM address lines and suggests a new way to make decisions on the pre-addresses. Compared with the RAM-DFE structure, which only includes post-addresses, the BER versus Eb/N0 performance of PERC is substantially enhanced. Experiments are performed for PERC algorithms with different parameters on AWGN channels, and the results are compared and analyzed. The investigation includes software simulation and hardware verification. Hardware was set up to collect actual TWT data, and simulations on both the software-generated data and the real-world data were performed, with practical limitations considered for the hardware-collected data. The simulation results verified the reliability of the PERC algorithm. This work was conducted at NMSU in the Center for Space Telemetering and Telecommunications Systems in the Klipsch School of Electrical and Computer Engineering.
A parallel-pipelined architecture for a multi carrier demodulator
NASA Astrophysics Data System (ADS)
Kwatra, S. C.; Jamali, M. M.; Eugene, Linus P.
1991-03-01
Analog devices have traditionally been used for processing information on board satellites. Presently, digital devices are used because they are economical and flexible compared with their analog counterparts. Several schemes of digital transmission can be used depending on the data rate requirements of the user. An economical transmission scheme for small earth stations uses single channel per carrier/frequency division multiple access (SCPC/FDMA) on the uplink and time division multiplexing (TDM) on the downlink. This is a typical communication service offered to low data rate users in the commercial mass market; these channels usually carry either voice or data. An efficient digital demodulator architecture is provided for a large number of low data rate users. A demodulator primarily consists of carrier, clock, and data recovery modules. The design uses principles of parallel processing, pipelining, and time sharing to process large numbers of voice or data channels, and it maintains the optimum throughput derived from the designed architecture and from the use of high-speed components. The design is optimized for reduced power and area requirements, which is essential for satellite applications, and is also flexible in processing groups with varying numbers of channels. The algorithms used are verified with a computer-aided software engineering (CASE) tool called the Block Oriented System Simulator. The data flow, control circuitry, and interface of the hardware design are simulated in the C language. In addition, a multiprocessor approach is provided to map, model, and simulate the demodulation algorithms, mainly from a speed viewpoint; a hypercube-based architecture implementation is provided for such a scheme of operation. The hypercube structure and the demodulation models on hypercubes are simulated in Ada.
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Jamali, M. M.; Eugene, Linus P.
1991-01-01
Analog devices have traditionally been used for processing information on board satellites. Presently, digital devices are used because they are economical and flexible compared with their analog counterparts. Several schemes of digital transmission can be used depending on the data rate requirements of the user. An economical transmission scheme for small earth stations uses single channel per carrier/frequency division multiple access (SCPC/FDMA) on the uplink and time division multiplexing (TDM) on the downlink. This is a typical communication service offered to low data rate users in the commercial mass market; these channels usually carry either voice or data. An efficient digital demodulator architecture is provided for a large number of low data rate users. A demodulator primarily consists of carrier, clock, and data recovery modules. The design uses principles of parallel processing, pipelining, and time sharing to process large numbers of voice or data channels, and it maintains the optimum throughput derived from the designed architecture and from the use of high-speed components. The design is optimized for reduced power and area requirements, which is essential for satellite applications, and is also flexible in processing groups with varying numbers of channels. The algorithms used are verified with a computer-aided software engineering (CASE) tool called the Block Oriented System Simulator. The data flow, control circuitry, and interface of the hardware design are simulated in the C language. In addition, a multiprocessor approach is provided to map, model, and simulate the demodulation algorithms, mainly from a speed viewpoint; a hypercube-based architecture implementation is provided for such a scheme of operation. The hypercube structure and the demodulation models on hypercubes are simulated in Ada.
Progress towards NASA MODIS and Suomi NPP Cloud Property Data Record Continuity
NASA Astrophysics Data System (ADS)
Platnick, S.; Meyer, K.; Holz, R.; Ackerman, S. A.; Heidinger, A.; Wind, G.; Platnick, S. E.; Wang, C.; Marchant, B.; Frey, R.
2017-12-01
The Suomi NPP VIIRS imager provides an opportunity to extend the 17+ year EOS MODIS climate data record into the next-generation operational era. Similar to MODIS, VIIRS provides visible through IR observations at moderate spatial resolution with a 1330 LT equatorial crossing, consistent with MODIS on the Aqua platform. However, unlike MODIS, VIIRS lacks key water vapor and CO2 absorption channels used for high-cloud detection and cloud-top property retrievals. In addition, there is a significant mismatch in the spectral location of the 2.2 μm shortwave-infrared channels used for cloud optical/microphysical retrievals and cloud thermodynamic phase. Given these instrument differences between MODIS EOS and VIIRS S-NPP/JPSS, a merged MODIS-VIIRS cloud record to serve the science community in the coming decades requires different algorithm approaches than those used for MODIS alone. This new approach includes two parallel efforts: (1) imager-only algorithms using only the spectral channels common to VIIRS and MODIS (i.e., eliminating use of the MODIS CO2 and NIR/IR water vapor channels); since the algorithms are run with similar spectral observations, they provide a basis for establishing a continuous cloud data record across the two imagers; and (2) merged imager and sounder measurements (i.e., MODIS-AIRS, VIIRS-CrIS) in lieu of the higher-spatial-resolution MODIS absorption channels absent on VIIRS. The MODIS-VIIRS continuity algorithm for cloud optical property retrievals leverages heritage algorithms that produce the existing MODIS cloud mask (MOD35), the optical and microphysical properties product (MOD06), and the NOAA AWG Cloud Height Algorithm (ACHA). We discuss our progress towards merging the MODIS observational record with VIIRS in order to generate cloud optical property climate data record continuity across the observing systems.
In addition, we summarize efforts to reconcile apparent radiometric biases between analogous imager channels, a critical consideration for obtaining inter-sensor climate data record continuity.
Utilization of all Spectral Channels of IASI for the Retrieval of the Atmospheric State
NASA Astrophysics Data System (ADS)
Del Bianco, S.; Cortesi, U.; Carli, B.
2010-12-01
The retrieval of atmospheric state parameters from broadband measurements acquired by high-spectral-resolution sensors, such as the Infrared Atmospheric Sounding Interferometer (IASI) onboard the Meteorological Operational (MetOp) platform, generally requires dealing with a prohibitively large number of spectral elements available from a single observation (8461 samples in the case of IASI, covering the 645-2760 cm-1 range with a resolution of 0.5 cm-1 and a spectral sampling of 0.25 cm-1). Most inversion algorithms developed for both operational and scientific analysis of IASI spectra perform a reduction of the data, typically based on channel selection, super-channel clustering, or Principal Component Analysis (PCA) techniques, in order to handle the high dimensionality of the problem. Accordingly, simultaneous processing of all IASI channels has received relatively little attention. Here we prove the feasibility of a retrieval approach exploiting all spectral channels of IASI to extract information on water vapor, temperature, and ozone profiles. This multi-target retrieval removes the systematic errors due to interfering parameters and makes channel selection no longer necessary. The challenging computation is made possible by the use of a coarse spectral grid for the forward model calculation and by the abatement of the associated modeling errors through the use of a variance-covariance matrix of the residuals that takes into account all the forward model errors.
Implementation of Complex Signal Processing Algorithms for Position-Sensitive Microcalorimeters
NASA Technical Reports Server (NTRS)
Smith, Stephen J.
2008-01-01
We have recently reported on a theoretical digital signal-processing algorithm for improved energy and position resolution in position-sensitive, transition-edge sensor (PoST) X-ray detectors [Smith et al., Nucl. Instr. and Meth. A 556 (2006) 237]. PoSTs consist of one or more transition-edge sensors (TESs) on a large continuous or pixellated X-ray absorber and are under development as an alternative to arrays of single-pixel TESs. PoSTs provide a means to increase the field of view for the fewest number of read-out channels. In this contribution we extend the theoretical correlated energy-position optimal filter (CEPOF) algorithm (originally developed for 2-TES continuous-absorber PoSTs) to investigate its practical implementation on multi-pixel single-TES PoSTs, or Hydras. We use numerically simulated data for a nine-absorber device, including realistic detector noise, to demonstrate an iterative scheme that converges on the correct photon absorption position and energy without any a priori assumptions. The position sensitivity of the CEPOF implemented on simulated data agrees very well with the theoretically predicted resolution. We discuss practical issues such as the impact of the random arrival phase of the measured data on the performance of the CEPOF. The CEPOF algorithm demonstrates that full-width-at-half-maximum energy resolution of < 8 eV, coupled with position sensitivity down to a few 100 eV, should be achievable for a fully optimized device.
Large-Scale Multiantenna Multisine Wireless Power Transfer
NASA Astrophysics Data System (ADS)
Huang, Yang; Clerckx, Bruno
2017-11-01
Wireless Power Transfer (WPT) is expected to be a technology reshaping the landscape of low-power applications such as the Internet of Things and Radio Frequency Identification (RFID) networks. Although there has been some progress towards multi-antenna multi-sine WPT design, the large-scale design of WPT, reminiscent of massive MIMO in communications, remains an open challenge. In this paper, we derive efficient multiuser algorithms based on a generalizable optimization framework, in order to design transmit sinewaves that maximize the weighted-sum/minimum rectenna output DC voltage. The study highlights the significant effect of the nonlinearity introduced by the rectification process on the design of waveforms in multiuser systems. Interestingly, in the single-user case, the optimal spatial domain beamforming, obtained prior to the frequency domain power allocation optimization, turns out to be Maximum Ratio Transmission (MRT). In contrast, in the general weighted-sum criterion maximization problem, the spatial domain beamforming optimization and the frequency domain power allocation optimization are coupled. Assuming channel hardening, low-complexity algorithms are proposed based on asymptotic analysis to maximize the two criteria. The structure of the asymptotically optimal spatial domain precoder can be found prior to the optimization. The performance of the proposed algorithms is evaluated. Numerical results confirm the inefficiency of the linear model-based design for the single- and multi-user scenarios. It is also shown that, as nonlinear model-based designs, the proposed algorithms can benefit from an increasing number of sinewaves.
Progress towards MODIS and VIIRS Cloud Optical Property Data Record Continuity
NASA Astrophysics Data System (ADS)
Meyer, K.; Platnick, S. E.; Wind, G.; Amarasinghe, N.; Holz, R.; Ackerman, S. A.; Heidinger, A. K.
2016-12-01
The launch of Suomi NPP in the fall of 2011 began the next generation of U.S. operational polar orbiting Earth observations, and its VIIRS imager provides an opportunity to extend the 15+ year climate data record of MODIS EOS. Similar to MODIS, VIIRS provides visible through IR observations at moderate spatial resolution with a 1330 LT equatorial crossing consistent with the MODIS on the Aqua platform. However, unlike MODIS, VIIRS lacks key water vapor and CO2 absorbing channels used for high cloud detection and cloud-top property retrievals, and there is a significant change in the spectral location of the 2.1μm shortwave-infrared channel used for cloud optical/microphysical retrievals and cloud thermodynamic phase. Given these instrument differences between MODIS EOS and VIIRS S-NPP/JPSS, we discuss our progress towards merging the MODIS observational record with VIIRS in order to generate cloud optical property climate data record continuity across the observing systems. The MODIS-VIIRS continuity algorithm for cloud optical property retrievals leverages heritage algorithms that produce the existing MODIS cloud optical and microphysical properties product (MOD06); the NOAA AWG/CLAVR-x cloud-top property algorithm and a common MODIS-VIIRS cloud mask feed into the optical property algorithm. To account for the different channel sets of MODIS and VIIRS, each algorithm nominally uses a subset of channels common to both imagers. Data granule and aggregated examples for the current version of the continuity algorithm (MODAWG) will be shown. In addition, efforts to reconcile apparent radiometric biases between analogous channels of the two imagers, a critical consideration for obtaining inter-sensor climate data record continuity, will be discussed.
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Zhu, Ye; Wang, Chunhui; Yu, Xiaosong; Liu, Chuan; Liu, Binglin; Zhang, Jie
2017-07-01
With the capacity increase enabled in optical networks by spatial division multiplexing (SDM) technology, spatial division multiplexing elastic optical networks (SDM-EONs) have attracted much attention from both academia and industry. The super-channel is an important type of service provisioning in SDM-EONs. This paper focuses on the issue of super-channel construction in SDM-EONs. A mixed super-channel oriented routing, spectrum and core assignment (MS-RSCA) algorithm is proposed for SDM-EONs that takes inter-core crosstalk into account. Simulation results show that MS-RSCA can improve spectrum resource utilization and reduce blocking probability significantly compared with the baseline RSCA algorithms.
NASA Astrophysics Data System (ADS)
Kim, Hyo-Su; Kim, Dong-Hoi
The dynamic channel allocation (DCA) scheme in multi-cell systems causes a serious inter-cell interference (ICI) problem for some existing calls when channels for new calls are allocated. Such a problem can be addressed by an advanced centralized DCA design that is able to minimize ICI. Thus, in this paper, a centralized DCA is developed for the downlink of multi-cell orthogonal frequency division multiple access (OFDMA) systems with full spectral reuse. In practice, however, the search space of channel assignment for a centralized DCA scheme in multi-cell systems grows exponentially with the number of required calls, channels, and cells; finding an optimum channel allocation is an NP-hard problem and is currently intractable. In this paper, we propose an ant colony optimization (ACO) based DCA scheme using a low-complexity ACO algorithm, a kind of heuristic algorithm, in order to solve the aforementioned problem. Simulation results demonstrate significant improvements over the existing schemes in terms of the grade of service (GoS) performance and the forced termination probability of existing calls, without degrading the average throughput of the system.
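As a rough illustration of the heuristic at work, the sketch below applies a bare-bones ant colony search to a toy channel-assignment problem. The pheromone update rule, the pairwise `interference` cost model, and all parameter values are illustrative assumptions, not the paper's scheme.

```python
import random

def aco_channel_allocation(num_calls, num_channels, interference, n_ants=20,
                           n_iters=50, evaporation=0.1, seed=0):
    """Toy ant colony search for a channel assignment minimizing total
    inter-cell interference. interference[i][j] is the cost incurred when
    calls i and j share a channel (hypothetical cost model)."""
    rng = random.Random(seed)
    # pheromone[call][channel]: desirability of assigning channel to call
    pheromone = [[1.0] * num_channels for _ in range(num_calls)]
    best, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            # each ant builds a full assignment, sampling by pheromone weight
            assign = [rng.choices(range(num_channels), pheromone[c])[0]
                      for c in range(num_calls)]
            cost = sum(interference[i][j]
                       for i in range(num_calls) for j in range(i + 1, num_calls)
                       if assign[i] == assign[j])
            if cost < best_cost:
                best, best_cost = assign, cost
            # deposit pheromone, rewarding low-cost assignments more
            for c, ch in enumerate(assign):
                pheromone[c][ch] += 1.0 / (1.0 + cost)
        # evaporation keeps the search from freezing on early choices
        for c in range(num_calls):
            for ch in range(num_channels):
                pheromone[c][ch] *= (1.0 - evaporation)
    return best, best_cost
```

For two calls that interfere only when co-channel, the search quickly settles on an assignment using distinct channels.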
NASA Astrophysics Data System (ADS)
Chatterjee, R. S.; Singh, Narendra; Thapa, Shailaja; Sharma, Dravneeta; Kumar, Dheeraj
2017-06-01
The present study proposes land surface temperature (LST) retrieval from satellite-based thermal IR data by a single-channel radiative transfer algorithm, using atmospheric correction parameters derived from satellite-based and in-situ data and land surface emissivity (LSE) derived by a hybrid LSE model. Specifically, atmospheric transmittance (τ) was derived from Terra MODIS spectral radiance in atmospheric window and absorption bands, whereas the atmospheric path radiance and sky radiance were estimated using satellite- and ground-based in-situ solar radiation, geographic location, and observation conditions. The hybrid LSE model, which is coupled with ground-based emissivity measurements, is more versatile than previous LSE models and yields improved emissivity values through a knowledge-based approach. It uses NDVI-based and NDVI Threshold Method (NDVITHM) based algorithms together with field-measured emissivity values, and is applicable to dense vegetation cover, mixed vegetation cover, and bare earth, including coal-mining-related land surface classes. The study was conducted in a coalfield of India that has been badly affected by coal fires for decades. In such a coalfield, LST provides a precise temperature difference between thermally anomalous coal fire pixels and background pixels, facilitating coal fire detection and monitoring. The derived LST products were compared with radiant temperature images across some of the prominent coal fire locations in the study area, both graphically and by standard dispersion coefficients such as the coefficient of variation, the coefficient of quartile deviation, the coefficient of quartile deviation for the 3rd quartile vs. maximum temperature, and the coefficient of mean deviation (about the median), indicating a significant increase in the temperature difference among the pixels. The average temperature slope between adjacent pixels, which increases the potential of distinguishing coal fire pixels from background pixels, is significantly larger in the derived LST products than in the corresponding radiant temperature images.
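The single-channel radiative transfer inversion described above can be sketched as follows. The Planck constants are standard; the band wavelength and the input values in the example are hypothetical, not those of the study.

```python
import math

# Planck's law constants in wavelength form (radiance in W m^-2 sr^-1 um^-1)
C1 = 1.19104e8   # 2*h*c^2, scaled for wavelength in micrometres
C2 = 1.43877e4   # h*c/k_B, in um*K

def planck_radiance(T, wav_um):
    """Spectral radiance B(T) at wavelength wav_um (micrometres)."""
    return C1 / (wav_um ** 5 * (math.exp(C2 / (wav_um * T)) - 1.0))

def inverse_planck(L, wav_um):
    """Brightness temperature from radiance (inverse of planck_radiance)."""
    return C2 / (wav_um * math.log(C1 / (wav_um ** 5 * L) + 1.0))

def lst_single_channel(L_sensor, tau, L_up, L_down, emissivity, wav_um=11.0):
    """Single-channel radiative transfer inversion:
        L_sensor = tau * (eps * B(T) + (1 - eps) * L_down) + L_up
    Solve for the surface-leaving Planck radiance, then invert for T."""
    B_T = ((L_sensor - L_up) / tau - (1.0 - emissivity) * L_down) / emissivity
    return inverse_planck(B_T, wav_um)
```

A forward-then-inverse round trip with assumed atmospheric parameters recovers the input surface temperature exactly.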
Online track detection in triggerless mode for INO
NASA Astrophysics Data System (ADS)
Jain, A.; Padmini, S.; Joseph, A. N.; Mahesh, P.; Preetha, N.; Behere, A.; Sikder, S. S.; Majumder, G.; Behera, S. P.
2018-03-01
The India-based Neutrino Observatory (INO) is a proposed particle physics research project to study atmospheric neutrinos. The INO Iron Calorimeter (ICAL) will consist of 28,800 detectors with 3.6 million electronic channels, expected to fire at a 100 Hz singles rate and produce data at 3 GB/s. The collected data contain a few real hits generated by muon tracks, the remainder being noise-induced spurious hits. The estimated reduction factor after filtering out the data of interest is of the order of 10^3, which makes trigger generation critical for efficient data collection and storage. A trigger is generated by detecting a coincidence across multiple channels satisfying the trigger criteria within a small window of 200 ns in the trigger region. As the probability of neutrino interaction is very low, the track detection algorithm has to be efficient and fast enough to process 5 × 10^6 event candidates/s without introducing significant dead time, so that not even a single neutrino event is missed. A hardware-based trigger system is presently proposed for online track detection, given the stringent timing requirements. Although such a trigger system can be designed to scale, the many hardware devices and interconnections make it a complex and expensive solution with limited flexibility. A software-based track detection approach working on the hit information offers an elegant alternative, with the possibility of varying the trigger criteria to select various potentially interesting physics events. An event selection approach for such a triggerless readout scheme has been developed. The algorithm is mathematically simple, robust, and parallelizable. It has been validated by detecting simulated muon events in the 1-10 GeV energy range with 100% efficiency at a processing rate of 60 μs/event on a 16-core machine. The algorithm and the result of a proof-of-concept for its faster implementation over multiple cores are presented. The paper also discusses harnessing the computing capabilities of a multi-core computing farm, thereby optimizing the number of nodes required for the proposed system.
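A minimal software sketch of the coincidence criterion (counting hits inside a 200 ns window) might look like the following; the flat timestamp format and the three-hit threshold are assumptions for illustration, not the ICAL trigger logic.

```python
def find_coincidences(hit_times_ns, window_ns=200, min_hits=3):
    """Scan time-ordered hit timestamps and report every group of at least
    min_hits hits falling inside a window_ns coincidence window.
    A toy stand-in for the trigger criterion described above."""
    hits = sorted(hit_times_ns)
    events, start = [], 0
    for end in range(len(hits)):
        # shrink the window from the left until it spans <= window_ns
        while hits[end] - hits[start] > window_ns:
            start += 1
        if end - start + 1 >= min_hits:
            events.append(hits[start:end + 1])
    return events
```

The two-pointer scan is O(n) in the number of hits, which matters at the quoted event-candidate rates.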
Orio, Patricio; Soudry, Daniel
2012-01-01
Background: The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the nonlinear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods do not lead to the same results as MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because the MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded results similar to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady-state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions: We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable, allowing an easy, transparent and efficient DA implementation that avoids unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Moreover, this DA method was considerably faster than MC methods, except when short time steps or low channel numbers were used. PMID:22629320
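To make the DA idea concrete, the sketch below integrates, by the Euler-Maruyama method, a diffusion approximation for the simplest possible case, a two-state (closed/open) channel population. This is an illustrative special case under assumed rate constants, not the paper's general derivation for arbitrary kinetic schemes.

```python
import math
import random

def simulate_gating_da(alpha, beta, n_channels, dt=1e-5, steps=20000, seed=1):
    """Euler-Maruyama integration of a diffusion approximation for a
    two-state (closed <-> open) channel population. x is the open fraction;
    the drift is the deterministic kinetics and the noise amplitude scales
    as 1/sqrt(N), vanishing in the large-population limit."""
    rng = random.Random(seed)
    x = alpha / (alpha + beta)  # start at the deterministic steady state
    trace = []
    for _ in range(steps):
        drift = alpha * (1.0 - x) - beta * x
        diff = math.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / n_channels)
        x += drift * dt + diff * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = min(max(x, 0.0), 1.0)  # keep the open fraction in [0, 1]
        trace.append(x)
    return trace
```

The trace fluctuates around the steady-state open fraction alpha/(alpha+beta), with fluctuation size shrinking as the channel count grows.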
Multi-feature classifiers for burst detection in single EEG channels from preterm infants
NASA Astrophysics Data System (ADS)
Navarro, X.; Porée, F.; Kuchenbuch, M.; Chavez, M.; Beuchée, Alain; Carrault, G.
2017-08-01
Objective. The study of electroencephalographic (EEG) bursts in preterm infants provides valuable information about maturation or prognostication after perinatal asphyxia. Over the last two decades, a number of works have proposed algorithms to automatically detect EEG bursts in preterm infants, but they were designed for populations under 35 weeks of postmenstrual age (PMA). However, as brain activity evolves rapidly during postnatal life, these solutions might under-perform with increasing PMA. In this work we focused on preterm infants reaching term ages (PMA ⩾36 weeks) using multi-feature classification on a single EEG channel. Approach. Five EEG burst detectors relying on different machine learning approaches were compared: logistic regression (LR), linear discriminant analysis (LDA), k-nearest neighbors (kNN), support vector machines (SVM) and thresholding (Th). Classifiers were trained on visually labeled EEG recordings from 14 very preterm infants (born after 28 weeks of gestation) with 36-41 weeks PMA. Main results. The best-performing classifiers reached about 95% accuracy (kNN, SVM and LR), whereas Th obtained 84%. In terms of agreement with human labeling, LR provided the highest scores (Cohen’s kappa = 0.71) using only three EEG features. Applying this classifier to an unlabeled database of 21 infants ⩾36 weeks PMA, we found that long EEG bursts and short inter-burst periods are characteristic of infants with the highest PMA and weights. Significance. In view of these results, LR-based burst detection could be a suitable tool to study maturation in monitoring or portable devices using a single EEG channel.
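As an illustration of the LR approach, a minimal logistic-regression classifier trained by stochastic gradient descent is sketched below. The features, labels, and hyperparameters are placeholders, not the EEG features or training setup of the study.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=500):
    """Stochastic-gradient-descent logistic regression on a small feature
    matrix X (list of feature lists) with binary labels y. A stand-in for
    the few-feature LR burst classifier described above."""
    n_feat = len(X[0])
    w, b = [0.0] * n_feat, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted burst probability
            err = p - yi                      # gradient of the log loss
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, x):
    """Classify a feature vector: 1 (burst) if the logit is positive."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1 if z > 0 else 0
```

On linearly separable toy data (a single amplitude-like feature), the fitted boundary separates the two classes.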
Simultaneous optical and electrical recording of a single ion-channel.
Ide, Toru; Takeuchi, Yuko; Aoki, Takaaki; Yanagida, Toshio
2002-10-01
In recent years, the single-molecule imaging technique has proven to be a valuable tool in solving many basic problems in biophysics. The technique of measuring single-molecule function was initially developed to study the electrophysiological properties of channel proteins. However, the technology to visualize single channels at work has not received as much attention. In this study, we have, for the first time, simultaneously measured the optical and electrical properties of single-channel proteins. The large-conductance calcium-activated potassium channel (BK channel), labeled with fluorescent dye molecules, was incorporated into a planar bilayer membrane, and the fluorescent image was captured with a total internal reflection fluorescence microscope simultaneously with single-channel current recording. This innovative technology will greatly advance the study of channel proteins as well as of signal transduction processes that involve ion permeation.
Evaluation of Long-term Aerosol Data Records from SeaWiFS over Land and Ocean
NASA Astrophysics Data System (ADS)
Bettenhausen, C.; Hsu, C.; Jeong, M.; Huang, J.
2010-12-01
Deserts around the globe produce mineral dust aerosols that may then be transported over cities, across continents, or even oceans. These aerosols affect the Earth’s energy balance through direct and indirect interactions with incoming solar radiation. They also have a biogeochemical effect as they deliver scarce nutrients to remote ecosystems. Large dust storms regularly disrupt air traffic and are a general nuisance to those living in transport regions. In the past, measuring dust aerosols has been incomplete at best. Satellite retrieval algorithms were limited to oceans or vegetated surfaces and typically neglected desert regions due to their high surface reflectivity in the mid-visible and near-infrared wavelengths, which have been typically used for aerosol retrievals. The Deep Blue aerosol retrieval algorithm was developed to resolve these shortcomings by utilizing the blue channels from instruments such as the Sea-Viewing Wide-Field-of-View Sensor (SeaWiFS) and the Moderate Resolution Imaging Spectroradiometer (MODIS) to infer aerosol properties over these highly reflective surfaces. The surface reflectivity of desert regions is much lower in the blue channels and thus it is easier to separate the aerosol and surface signals than at the longer wavelengths used in other algorithms. More recently, the Deep Blue algorithm has been expanded to retrieve over vegetated surfaces and oceans as well. A single algorithm can now follow dust from source to sink. In this work, we introduce the SeaWiFS instrument and the Deep Blue aerosol retrieval algorithm. We have produced global aerosol data records over land and ocean from 1997 through 2009 using the Deep Blue algorithm and SeaWiFS data. We describe these data records and validate them with data from the Aerosol Robotic Network (AERONET). We also show the relative performance compared to the current MODIS Deep Blue operational aerosol data in desert regions. 
These results are encouraging, and the dataset will be useful for future studies of the effects of dust aerosols on global processes and long-term aerosol trends, and for quantifying dust emissions, transport, and inter-annual variability.
A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers
NASA Astrophysics Data System (ADS)
Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair
We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high-order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint Angle and Delay Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.
Liu, Tao; Djordjevic, Ivan B
2014-12-29
In this paper, we first describe an optimal signal constellation design algorithm suitable for coherent optical channels dominated by linear phase noise. We then modify this algorithm for channels dominated by nonlinear phase noise. In the optimization procedure, the proposed algorithm uses the cumulative log-likelihood function instead of the Euclidean distance. Further, an LDPC-coded modulation scheme is proposed for use in combination with the signal constellations obtained by the proposed algorithm. Monte Carlo simulations indicate that the LDPC-coded modulation schemes employing the new constellation sets, obtained by our signal constellation design algorithm, significantly outperform the corresponding QAM constellations in terms of transmission distance and have better nonlinearity tolerance.
NASA Astrophysics Data System (ADS)
Tao, R.; Ma, Y.; Si, L.; Dong, X.; Zhou, P.; Liu, Z.
2011-11-01
We present a theoretical and experimental study of a target-in-the-loop (TIL) high-power adaptive phase-locked fiber laser array. The system configuration of the TIL adaptive phase-locked fiber laser array is introduced, and the fundamental theory for TIL based on the single-dithering technique is derived for the first time. Two 10-W-level high-power fiber amplifiers are set up, and adaptive phase locking of the two fiber amplifiers is accomplished successfully by implementing a single-dithering algorithm on a signal processor. The experimental results demonstrate that the optical phase noise of each beam channel can be effectively compensated by the TIL adaptive optics system under high-power operation, and the fringe contrast on a remotely located extended target is improved from 12% to 74% for the two 10-W-level fiber amplifiers.
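The single-dithering principle can be sketched as a gradient-estimation control loop: dither the control phase, observe the detected intensity, and step toward the maximum. The two-beam interference model, gains, and iteration count below are illustrative assumptions, not the experimental system.

```python
import math

def phase_lock_single_dither(get_intensity, set_phase, dither=0.1,
                             gain=0.5, iterations=200):
    """Single-dithering phase control loop (sketch). get_intensity returns
    the photodetector metric; set_phase applies the control phase to one
    channel. The loop estimates the local gradient with a +/- dither and
    steps the phase toward higher combined intensity."""
    phase = 0.0
    for _ in range(iterations):
        set_phase(phase + dither)
        i_plus = get_intensity()
        set_phase(phase - dither)
        i_minus = get_intensity()
        phase += gain * (i_plus - i_minus)  # gradient ascent on intensity
    set_phase(phase)
    return phase
```

Against a simulated two-beam fringe with an unknown phase offset, the loop converges to the offset and the detected intensity approaches its maximum.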
NASA Astrophysics Data System (ADS)
Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar
2011-12-01
This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics. Accordingly, an optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms, the filter coefficients are partially updated, which reduces the computational complexity. In the VSS-SR-APA, an optimal selection of input regressors is performed during adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
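For reference, the fixed-step NLMS baseline that these VSS variants build on can be sketched as follows. The VSS algorithms replace the scalar step `mu` with a step-size vector optimized from the MSD; this sketch is the plain textbook algorithm, not the paper's method.

```python
def nlms_identify(x, d, n_taps, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter for system identification:
    adapt FIR weights w so that the filter output tracks the desired
    signal d driven by input x. Returns the final weight vector."""
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]         # regressor, most recent first
        y = sum(wi * ui for wi, ui in zip(w, u))  # filter output
        e = d[n] - y                              # a priori error
        norm = sum(ui * ui for ui in u) + eps     # regressor energy
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
    return w
```

In a noiseless system identification scenario the weights converge to the true impulse response.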
A Stereo Dual-Channel Dynamic Programming Algorithm for UAV Image Stitching
Chen, Ruizhi; Zhang, Weilong; Li, Deren; Liao, Xuan; Zhang, Peng
2017-01-01
Dislocation is one of the major challenges in unmanned aerial vehicle (UAV) image stitching. In this paper, we propose a new algorithm for seamlessly stitching UAV images based on a dynamic programming approach. Our solution consists of two steps: Firstly, an image matching algorithm is used to correct the images so that they are in the same coordinate system. Secondly, a new dynamic programming algorithm is developed based on the concept of a stereo dual-channel energy accumulation. A new energy aggregation and traversal strategy is adopted in our solution, which can find a more optimal seam line for image stitching. Our algorithm overcomes the theoretical limitation of the classical Duplaquet algorithm. Experiments show that the algorithm can effectively solve the dislocation problem in UAV image stitching, especially for the cases in dense urban areas. Our solution is also direction-independent, which has better adaptability and robustness for stitching images. PMID:28885547
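The classical DP seam search that the proposed method extends can be sketched as follows. This is the single-energy-map baseline in the spirit of the Duplaquet approach, not the stereo dual-channel energy accumulation itself; the energy map in the example is synthetic.

```python
def optimal_seam(energy):
    """Dynamic-programming seam search on an energy map (rows x cols):
    find the top-to-bottom 8-connected path with the lowest accumulated
    energy, returning one column index per row."""
    rows, cols = len(energy), len(energy[0])
    acc = [energy[0][:]]  # accumulated energy, row by row
    for r in range(1, rows):
        prev = acc[-1]
        row = []
        for c in range(cols):
            # a seam may continue from the cell above or its diagonal neighbors
            neighbors = prev[max(c - 1, 0):min(c + 2, cols)]
            row.append(energy[r][c] + min(neighbors))
        acc.append(row)
    # backtrack from the minimum accumulated energy in the last row
    c = min(range(cols), key=lambda j: acc[-1][j])
    seam = [c]
    for r in range(rows - 1, 0, -1):
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        c = min(range(lo, hi), key=lambda j: acc[r - 1][j])
        seam.append(c)
    return seam[::-1]
```

A low-energy column in the map attracts the seam, which is the mechanism used to route the stitch line away from dislocated image content.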
Multi-carrier Communications over Time-varying Acoustic Channels
NASA Astrophysics Data System (ADS)
Aval, Yashar M.
Acoustic communication is an enabling technology for many autonomous undersea systems, such as those used for ocean monitoring, the offshore oil and gas industry, aquaculture, or port security. There are three main challenges in achieving reliable high-rate underwater communication: the bandwidth of acoustic channels is extremely limited, the propagation delays are long, and the Doppler distortions are more pronounced than those found in wireless radio channels. In this dissertation we focus on assessing the fundamental limitations of acoustic communication, and on designing efficient signal processing methods that can overcome these limitations. We address the fundamental question of acoustic channel capacity (achievable rate) for single-input multi-output (SIMO) acoustic channels using a per-path Rician fading model, focusing on two scenarios: narrowband channels, where the channel statistics can be approximated as frequency-independent, and wideband channels, where the nominal path loss is frequency-dependent. In each scenario, we compare several candidate power allocation techniques, and show that assigning uniform power across all frequencies in the first scenario, and uniform power across a selected frequency band in the second, are the best practical choices in most cases, because the long propagation delay renders the feedback information outdated for power allocation based on the estimated channel response. We quantify our results using the channel information extracted from the 2010 Mobile Acoustic Communications Experiment (MACE'10). Next, we focus on achieving reliable high-rate communication over underwater acoustic channels. Specifically, we investigate orthogonal frequency division multiplexing (OFDM) as the state-of-the-art technique for dealing with frequency-selective multipath channels, and propose a class of methods that compensate for the time variation of the underwater acoustic channel.
These methods are based on multiple-FFT demodulation, and are implemented as partial (P), shaped (S), fractional (F), and Taylor series expansion (T) FFT demodulation. They replace the conventional FFT demodulation with a few FFTs and a combiner. The input to each FFT is a specific transformation of the input signal (P,S,F,T), while the combiner performs weighted summation of the FFT outputs. We design an adaptive algorithm of stochastic gradient type to learn the combiner weights for coherent and differentially coherent detection. The algorithm is cast into the framework of multiple receiving elements to take advantage of spatial diversity. Synthetic data, as well as experimental data from the MACE'10 experiment are used to demonstrate the performance of the proposed methods, showing significant improvement over conventional detection techniques with or without inter-carrier interference equalization (5 dB--7 dB on average over multiple hours), as well as improved bandwidth efficiency.
Spatio-temporal colour correction of strongly degraded movies
NASA Astrophysics Data System (ADS)
Islam, A. B. M. Tariqul; Farup, Ivar
2011-01-01
The archives of motion pictures represent an important part of our precious cultural heritage. Unfortunately, these cinematographic collections are vulnerable to distortions such as colour fading, which is beyond the capability of the photochemical restoration process. Spatial colour algorithms such as Retinex and ACE provide a helpful tool for restoring strongly degraded colour films, but there are challenges associated with these algorithms. We present an automatic colour correction technique for digital colour restoration of strongly degraded movie material. The method is based upon the existing STRESS algorithm. In order to cope with the problem of highly correlated colour channels, we implemented a preprocessing step in which saturation enhancement is performed in a PCA space. Spatial colour algorithms tend to emphasize all details in the images, including dust and scratches. Surprisingly, we found that the presence of these defects does not affect the behaviour of the colour correction algorithm. Although the STRESS algorithm is in itself more efficient than traditional spatial colour algorithms, it is still computationally expensive. To speed it up further, we went beyond the spatial domain of the frames and extended the algorithm to the temporal domain. This way, we were able to achieve an 80 percent reduction in computational time compared to processing every single frame individually. We performed two user experiments and found that the visual quality of the resulting frames was significantly better than with existing methods. Thus, our method outperforms the existing ones in terms of both visual quality and computational efficiency.
NASA Astrophysics Data System (ADS)
Surti, S.; Karp, J. S.
2018-03-01
The advent of silicon photomultipliers (SiPMs) has introduced the possibility of increased detector performance in commercial whole-body PET scanners. The primary advantage of these photodetectors is the ability to couple a single SiPM channel directly to a single pixel of PET scintillator that is typically 4 mm wide (one-to-one coupled detector design). We performed simulation studies to evaluate the impact of three different event positioning algorithms in such detectors: (i) weighted energy centroid positioning (Anger logic), (ii) identifying the crystal with the maximum energy deposition (1st max crystal), and (iii) identifying the crystal with the second-highest energy deposition (2nd max crystal). Detector simulations performed with LSO crystals indicate reduced positioning errors when using the 2nd max crystal positioning algorithm. These studies cover crystal cross-sections varying from 1 × 1 mm^2 to 4 × 4 mm^2 and crystal thicknesses of 1 cm to 3 cm. System simulations were performed for a whole-body PET scanner (85 cm ring diameter) with a long axial FOV (70 cm) and show an improvement in reconstructed spatial resolution for a point source when using the 2nd max crystal positioning algorithm. Finally, we observe a 30-40% gain in contrast recovery coefficient values for 1 and 0.5 cm diameter spheres when using the 2nd max crystal positioning algorithm compared to the 1st max crystal positioning algorithm. These results show that there is an advantage to implementing the 2nd max crystal positioning algorithm in a new generation of PET scanners using a one-to-one coupled detector design with lutetium-based crystals, including LSO, LYSO, or scintillators of similar density and effective atomic number.
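The three positioning rules compared above are simple to state in code; the per-event energy list and crystal coordinates below are a hypothetical one-dimensional example, not the simulated detector geometry.

```python
def position_anger(energies, coords):
    """Weighted energy centroid (Anger logic) over crystal coordinates."""
    total = sum(energies)
    return sum(e, ) if False else sum(e * x for e, x in zip(energies, coords)) / total

def position_first_max(energies):
    """'1st max crystal' rule: index of the highest energy deposition."""
    return max(range(len(energies)), key=lambda i: energies[i])

def position_second_max(energies):
    """'2nd max crystal' rule: index of the second-highest deposition."""
    ranked = sorted(range(len(energies)), key=lambda i: energies[i], reverse=True)
    return ranked[1]
```

For an event depositing energies 0.1, 0.6, and 0.3 in three adjacent crystals, the three rules place the event at the centroid (between crystals), at the brightest crystal, and at the second-brightest crystal, respectively.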
NASA Astrophysics Data System (ADS)
Choi, Wonjoon; Yoon, Myungchul; Roh, Byeong-Hee
Eavesdropping on backward channels in RFID environments may cause severe privacy problems because it exposes personal information related to the tags that each person carries. However, most existing RFID tag security schemes focus on forward channel protection. In this paper, we propose a simple but effective method, based on a randomized tree-walking algorithm, to solve the backward channel eavesdropping problem and to secure tag ID information and privacy in RFID-based applications. To show the efficiency of the proposed scheme, we derive two performance models for the cases where CRC is and is not used. It is shown that the proposed method can lower the probability of eavesdropping on backward channels to nearly zero.
Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors
NASA Technical Reports Server (NTRS)
Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.
2009-01-01
A novel radiative transfer model and a physical inversion algorithm based on principal component analysis are presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension, making both the forward model and the inversion algorithm more efficient.
NASA Astrophysics Data System (ADS)
Meng, Luming; Sheong, Fu Kit; Zeng, Xiangze; Zhu, Lizhe; Huang, Xuhui
2017-07-01
Constructing Markov state models from large-scale molecular dynamics simulation trajectories is a promising approach to dissect the kinetic mechanisms of complex chemical and biological processes. Combined with transition path theory, Markov state models can be applied to identify all pathways connecting any conformational states of interest. However, the identified pathways can be too complex to comprehend, especially for multi-body processes where numerous parallel pathways with comparable flux probability often coexist. Here, we have developed a path lumping method to group these parallel pathways into metastable path channels for analysis. We define the similarity between two pathways as the intercrossing flux between them and then apply the spectral clustering algorithm to lump these pathways into groups. We demonstrate the power of our method by applying it to two systems: a 2D-potential consisting of four metastable energy channels and the hydrophobic collapse process of two hydrophobic molecules. In both cases, our algorithm successfully reveals the metastable path channels. We expect this path lumping algorithm to be a promising tool for revealing unprecedented insights into the kinetic mechanisms of complex multi-body processes.
Automatic channel trimming for control systems: A concept
NASA Technical Reports Server (NTRS)
Vandervoort, R. J.; Sykes, H. A.
1977-01-01
Set of bias signals added to channel inputs automatically normalizes differences between channels. Algorithm and second feedback loop compute trim biases. Concept could be applied to regulators and multichannel servosystems for remote manipulators in undersea mining.
Hu, Guohong; Wang, Hui-Yun; Greenawalt, Danielle M.; Azaro, Marco A.; Luo, Minjie; Tereshchenko, Irina V.; Cui, Xiangfeng; Yang, Qifeng; Gao, Richeng; Shen, Li; Li, Honghua
2006-01-01
Microarray-based analysis of single nucleotide polymorphisms (SNPs) has many applications in large-scale genetic studies. To minimize the influence of experimental variation, microarray data usually need to be processed in several respects, including background subtraction, normalization and low-signal filtering, before genotype determination. Although many sophisticated algorithms exist for these purposes, biases are still present. In the present paper, new algorithms for SNP microarray data analysis, and the software AccuTyping developed from them, are described. The algorithms take advantage of the large number of SNPs included in each assay and the fact that the top and bottom 20% of SNPs can be safely treated as homozygous after sorting by the ratio of their signal intensities. These SNPs are then used as controls for color-channel normalization and background subtraction. Genotype calls are made based on the logarithms of the signal intensity ratios using two cutoff values, which were determined after training the program with a dataset of ∼160 000 genotypes and validated by non-microarray methods. AccuTyping was used to determine >300 000 genotypes of DNA and sperm samples. The accuracy was shown to be >99%. AccuTyping can be downloaded from . PMID:16982644
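The two-cutoff genotype-calling rule and the top/bottom-20% control selection described above might be sketched as follows; the cutoff of 0.6, the function names and the toy ratios are illustrative assumptions, not AccuTyping's trained values.

```python
import math

def call_genotypes(ratios, cutoff=0.6):
    """Toy two-cutoff genotype caller on log signal-intensity ratios.

    `ratios` maps SNP id -> (channel-A / channel-B intensity), assumed
    already background-subtracted and normalized. A SNP whose log-ratio
    exceeds +cutoff is called AA, below -cutoff BB, otherwise AB.
    """
    calls = {}
    for snp, r in ratios.items():
        lr = math.log10(r)
        if lr > cutoff:
            calls[snp] = "AA"
        elif lr < -cutoff:
            calls[snp] = "BB"
        else:
            calls[snp] = "AB"
    return calls

def homozygous_controls(ratios, frac=0.2):
    """Top and bottom `frac` of SNPs after sorting by ratio, treated as
    homozygous controls for channel normalization (as in the abstract)."""
    ordered = sorted(ratios, key=ratios.get)
    n = max(1, int(len(ordered) * frac))
    return ordered[:n], ordered[-n:]
```

The control-selection step works because, with many SNPs per assay, the extremes of the ratio distribution are overwhelmingly homozygous regardless of dye or channel bias.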
Mahmoudzadeh, Batoul; Liu, Longcheng; Moreno, Luis; Neretnieks, Ivars
2014-08-01
A model is developed to describe solute transport and retention in fractured rocks. It accounts for advection along the fracture, molecular diffusion from the fracture into a rock matrix composed of several geological layers, adsorption on the fracture surface, adsorption in the rock matrix layers, and radioactive decay chains. The analytical solution, obtained for the Laplace-transformed concentration at the outlet of the flowing channel, can conveniently be transformed back to the time domain using the de Hoog algorithm. This allows one to readily include it in a fracture network model or a channel network model to predict nuclide transport through channels in heterogeneous fractured media consisting of an arbitrary number of rock units with piecewise constant properties. More importantly, the simulations made in this study indicate that it is necessary to account for decay chains, and for a rock matrix comprising at least two different geological layers where justified, in safety and performance assessments of repositories for spent nuclear fuel.
Active phase locking of thirty fiber channels using multilevel phase dithering method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhimeng; Luo, Yongquan, E-mail: yongquan-l@sina.com; Liu, Cangli
2016-03-15
An active phase locking of a large-scale fiber array with thirty channels has been demonstrated experimentally. In the experiment, a first group of thirty phase controllers is used to compensate the phase noise between the elements, and a second group of thirty phase modulators is used to impose additional phase disturbances that mimic the phase noise in high-power fiber amplifiers. A multi-level phase dithering algorithm using dual-level rectangular-wave phase modulation and time division multiplexing can achieve the same phase control as single-/multi-frequency dithering techniques, but without a coherent demodulation circuit. The phase locking efficiency of the 30 fiber channels reaches about 98.68%, 97.82%, and 96.50% with no additional phase distortion, modulated phase distortion I (±1 rad), and phase distortion II (±2 rad), corresponding to phase errors of λ/54, λ/43, and λ/34 rms. The contrast of the coherently combined beam profile is about 89%. Experimental results reveal that the multi-level phase dithering technique has great potential for scaling to a large number of laser beams.
A digital audio/video interleaving system. [for Shuttle Orbiter
NASA Technical Reports Server (NTRS)
Richards, R. W.
1978-01-01
A method of interleaving an audio signal with its associated video signal for simultaneous transmission or recording, and the subsequent separation of the two signals, is described. Comparisons are made between the new audio signal interleaving system and the Skylab PAM audio/video interleaving system, pointing out improvements gained by using the digital audio/video interleaving system. It was found that the digital technique is the simplest, most effective and most reliable method for interleaving audio and/or other types of data into the video signal for the Shuttle Orbiter application. Details are given of the design of a multiplexer capable of accommodating two basic data channels, each consisting of a single 31.5-kb/s digital bit stream. An adaptive slope delta modulation system is introduced to digitize audio signals, producing high immunity of word intelligibility to channel errors, primarily due to the robust nature of the delta-modulation algorithm.
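An adaptive-slope delta modulator of the kind mentioned above can be sketched as follows; the step-adaptation rule (multiply the step when successive bits agree, divide when they alternate) and all constants are illustrative assumptions, not the Orbiter design.

```python
def adm_encode(samples, step0=1.0, k=1.5, step_min=0.5, step_max=16.0):
    """Adaptive-slope delta modulation: 1 bit per sample. The step size
    grows when successive bits agree (fighting slope overload) and
    shrinks when they alternate (reducing granular noise)."""
    bits, est, step, prev = [], 0.0, step0, None
    for x in samples:
        b = 1 if x >= est else 0
        if prev is not None:
            step = min(step * k, step_max) if b == prev else max(step / k, step_min)
        est += step if b else -step   # tracked estimate of the input
        bits.append(b)
        prev = b
    return bits

def adm_decode(bits, step0=1.0, k=1.5, step_min=0.5, step_max=16.0):
    """Mirror of the encoder: replays the same step adaptation from the
    bit stream alone, so no side information is needed."""
    out, est, step, prev = [], 0.0, step0, None
    for b in bits:
        if prev is not None:
            step = min(step * k, step_max) if b == prev else max(step / k, step_min)
        est += step if b else -step
        out.append(est)
        prev = b
    return out
```

The robustness noted in the abstract comes from this structure: a single flipped bit perturbs the decoder's estimate and step briefly, rather than corrupting a whole sample word as in PCM.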
Measuring the global distribution of intense convection over land with passive microwave radiometry
NASA Technical Reports Server (NTRS)
Spencer, R. W.; Santek, D. A.
1985-01-01
The global distribution of intense convective activity over land is shown to be measurable with satellite passive-microwave methods through a comparison of an empirical rain rate algorithm with a climatology of thunderstorm days for the months of June-August. With the 18 and 37 GHz channels of the Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR), the strong volume scattering effects of precipitation can be measured. Even though a single frequency (37 GHz) is responsive to the scattering signature, two frequencies are needed to remove most of the effect that variations in thermometric temperature and soil moisture have on the brightness temperatures. Because snow cover is also a volume scatterer of microwave energy at these wavelengths, a discrimination procedure involving four of the SMMR channels is employed to separate the rain and snow classes, based upon their differences in average thermometric temperature.
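The two-channel idea above (37 GHz responds strongly to volume scattering, while the 18 GHz channel helps remove surface temperature and soil-moisture effects) can be caricatured as a brightness-temperature difference test; the 5 K threshold and function name are assumptions for illustration, not values from the SMMR algorithm.

```python
def convective_rain_flag(tb18, tb37, threshold=5.0):
    """Toy two-channel scattering test: precipitation scatters 37 GHz
    radiation more strongly than 18 GHz, depressing Tb37 well below
    Tb18, whereas emissivity and surface-temperature variations move the
    two channels roughly together. Inputs in kelvin."""
    return (tb18 - tb37) > threshold
```

A real retrieval would add the snow/rain discrimination step described in the abstract, since dry snow depresses Tb37 in the same way.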
The utilization of Nimbus-7 SMMR measurements to delineate rainfall over land
NASA Technical Reports Server (NTRS)
Rogers, E.; Siddalingaiah, H.
1982-01-01
Based on previous theoretical calculations, an empirical statistical approach using satellite multifrequency dual-polarized passive microwave data to detect rainfall areas over land was initiated. The addition of information from a lower-frequency channel (18.0 or 10.7 GHz) was shown to improve the discrimination of rain from wet ground achieved by using a single-frequency dual-polarized (37 GHz) channel alone. The algorithm was developed and independently tested using data from the Nimbus-7 Scanning Multichannel Microwave Radiometer. Horizontally and vertically polarized brightness temperature pairs at 37, 18, and 10.7 GHz were sampled for raining areas over land (determined from ground-based radar), wet ground areas (adjacent to and upwind from rain areas determined from radar), and dry land regions (areas where rain had not fallen in a 24-h period) over the central and eastern United States. Surface thermodynamic temperatures were both above and below 15 deg C.
Multi-LED parallel transmission for long distance underwater VLC system with one SPAD receiver
NASA Astrophysics Data System (ADS)
Wang, Chao; Yu, Hong-Yi; Zhu, Yi-Jun; Wang, Tao; Ji, Ya-Wei
2018-03-01
In this paper, a multiple light emitting diode (LED) chip parallel transmission (Multi-LED-PT) scheme for an underwater visible light communication system with one photon-counting single photon avalanche diode (SPAD) receiver is proposed. As a lamp always consists of multiple LED chips, the data rate can be improved by driving these chips in parallel using the interleaver-division-multiplexing technique. For each chip, on-off-keying modulation is used to reduce the influence of clipping. A serial successive interference cancellation detection algorithm, based on an ideal Poisson photon-counting channel model for the SPAD, is then proposed. Finally, compared to a SPAD-based direct-current-biased optical orthogonal frequency division multiplexing system, the proposed Multi-LED-PT system improves the error-rate performance and anti-nonlinearity performance significantly under the combined effects of absorption, scattering and weak turbulence-induced channel fading.
NASA Astrophysics Data System (ADS)
Wibisana, H.; Zainab, S.; Dara K., A.
2018-01-01
Chlorophyll-a is one of the parameters used to detect the presence of fish populations, as well as one of the parameters describing water quality. Chlorophyll concentration has been investigated extensively, including chlorophyll-a mapping using remote sensing satellites. Mapping of chlorophyll concentration is used to obtain an optimal picture of the condition of waters that are often used as fishing areas by fishermen. Remote sensing is a technological breakthrough for broadly monitoring the condition of waters, and to obtain a complete picture of aquatic conditions an algorithm is used that can estimate the concentration of chlorophyll at points scattered across the capture-fisheries research area. Remote sensing algorithms have been widely used by researchers to detect chlorophyll content, where the channels suited to mapping chlorophyll concentration from Landsat 8 images are bands 4, 3 and 2. With multiple channels from Landsat-8 satellite imagery used for chlorophyll detection, an optimal algorithm can be formulated to obtain the best estimate of chlorophyll-a concentration in the research area. From this analysis, a suitable algorithm for the conditions at the coast of Pasuruan can be identified: the green channel gives a good correlation of R² = 0.853 with the algorithm Chlorophyll-a (mg/m³) = 0.093(R(-0)Red − 3.7049). From this result it can be concluded that the green channel correlates well with the concentration of chlorophyll scattered along the coast of Pasuruan.
NASA Astrophysics Data System (ADS)
Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming; Reed, Stephen; Dong, Peng; Downer, Michael C.
2010-11-01
We demonstrate a prototype Frequency Domain Streak Camera (FDSC) that can capture the picosecond time evolution of the plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index "bubble" in fused silica glass, supplementing a conventional Frequency Domain Holography (FDH) probe-reference pair that co-propagates with the "bubble". Frequency Domain Tomography (FDT) generalizes the FDSC by probing the "bubble" from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (temporal and angular multiplexing) improve data storage and processing capability, demonstrating a compact FDT system with a single spectrometer.
Enhanced performance of visible light communication employing 512-QAM N-SC-FDE and DD-LMS.
Wang, Yuanquan; Huang, Xingxing; Zhang, Junwen; Wang, Yiguang; Chi, Nan
2014-06-30
In this paper, a novel hybrid time-frequency adaptive equalization algorithm based on a combination of frequency domain equalization (FDE) and decision-directed least mean square (DD-LMS) is proposed and experimentally demonstrated in a Nyquist single-carrier visible light communication (VLC) system. Adopting this scheme, together with 512-ary quadrature amplitude modulation (512-QAM) and wavelength division multiplexing (WDM), an aggregate data rate of 4.22 Gb/s is successfully achieved employing a single commercially available red-green-blue (RGB) light emitting diode (LED) with low bandwidth. The measured Q-factors for the 3 wavelength channels are all above the Q-limit. To the best of our knowledge, this is the highest data rate ever achieved employing a commercially available RGB-LED.
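The decision-directed LMS half of the hybrid equalizer might be sketched as below. For brevity the slicer is QPSK rather than 512-QAM, the "channel" is a bare 0.8 gain, and the step size mu, the tap count and the data generation are all assumptions; the point is only the tap update w += mu * e * conj(x) with the error taken against the nearest decision.

```python
import random

def dd_lms_equalize(received, mu=0.05, ntaps=5):
    """Decision-directed LMS FIR equalizer (QPSK slicer for brevity)."""
    def decide(y):  # nearest QPSK constellation point
        return complex(1 if y.real >= 0 else -1, 1 if y.imag >= 0 else -1)
    w = [0j] * ntaps
    w[0] = 1 + 0j                      # centre-spike initialisation
    buf = [0j] * ntaps                 # delay line of received samples
    out = []
    for x in received:
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))
        e = decide(y) - y              # error vs the sliced decision
        w = [wi + mu * e * xi.conjugate() for wi, xi in zip(w, buf)]
        out.append(y)
    return out

# Toy run: random QPSK symbols through a flat 0.8-gain channel.
random.seed(1)
symbols = [complex(random.choice((-1, 1)), random.choice((-1, 1)))
           for _ in range(400)]
received = [0.8 * s for s in symbols]
out = dd_lms_equalize(received)
```

Because the slicer supplies the reference, no training sequence is needed once decisions are reliable, which is why DD-LMS is attractive as a tracking stage after the FDE block in the paper's hybrid scheme.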
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Formaggio, A. R.; Dossantos, J. R.; Dias, L. A. V.
1984-01-01
An automatic pre-processing technique called Principal Components (PRINCO) was evaluated for analyzing LANDSAT digitized data on land use and vegetation cover in the Brazilian cerrados. The chosen pilot area, 223/67 of MSS/LANDSAT 3, was classified on a GE Image-100 system through a maximum-likelihood algorithm (MAXVER). The same procedure was applied to the PRINCO-treated image. PRINCO consists of a linear transformation performed on the original bands in order to eliminate the information redundancy of the LANDSAT channels. After PRINCO, only two channels were used, thus reducing computational effort. The grey levels of the original channels and the PRINCO channels for the five identified classes (grassland, "cerrado", burned areas, anthropic areas, and gallery forest) were obtained through the MAXVER algorithm, which also provided the average performance for both cases. To evaluate the results, the Jeffreys-Matusita distance (JM-distance) between classes was computed. The classification matrix obtained through MAXVER after PRINCO pre-processing showed approximately the same average performance in class separability.
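The PRINCO idea, a linear transformation that decorrelates the original bands so a couple of transformed channels carry most of the information, rests on the principal components of the band covariance matrix. A minimal pure-Python sketch for the leading component (via power iteration, with made-up pixel values) is:

```python
def principal_component(pixels):
    """First principal component of multiband pixels by power iteration.

    `pixels` is a list of equal-length band vectors, one per pixel.
    Returns the unit eigenvector of the band covariance matrix with the
    largest eigenvalue, i.e. the direction of the first PRINCO channel.
    """
    nb = len(pixels[0])
    n = len(pixels)
    means = [sum(p[j] for p in pixels) / n for j in range(nb)]
    # band-by-band covariance matrix
    cov = [[sum((p[i] - means[i]) * (p[j] - means[j]) for p in pixels) / n
            for j in range(nb)] for i in range(nb)]
    v = [1.0] * nb
    for _ in range(200):               # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(nb)) for i in range(nb)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

For two strongly correlated bands the leading component comes out near (1, 1)/√2, which is exactly the redundancy-removal effect the abstract describes: one transformed channel captures what the two raw bands share.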
A temperature and pressure controlled calibration system for pressure sensors
NASA Technical Reports Server (NTRS)
Chapman, John J.; Kahng, Seun K.
1989-01-01
A data acquisition and experiment control system capable of simulating temperatures from -184 to +220 C and pressures either absolute or differential from 0 to 344.74 kPa is developed to characterize silicon pressure sensor response to temperature and pressure. System software is described that includes sensor data acquisition, algorithms for numerically derived thermal offset and sensitivity correction, and operation of the environmental chamber and pressure standard. This system is shown to be capable of computer interfaced cryogenic testing to within 1 C and 34.47 Pa of single channel or multiplexed arrays of silicon pressure sensors.
Extraction of tidal channel networks from airborne scanning laser altimetry
NASA Astrophysics Data System (ADS)
Mason, David C.; Scott, Tania R.; Wang, Hai-Jing
Tidal channel networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. This paper describes a semi-automatic technique developed to extract networks from high-resolution LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties then a high-level processing stage improves the network using domain knowledge. The approach adopted at low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher level processing includes a channel repair mechanism. The algorithm may be extended to extract networks from aerial photographs as well as LiDAR data. Its performance is illustrated using LiDAR data of two study sites, the River Ems, Germany and the Venice Lagoon. For the River Ems data, the error of omission for the automatic channel extractor is 26%, partly because numerous small channels are lost because they fall below the edge threshold, though these are less than 10 cm deep and unlikely to be hydraulically significant. The error of commission is lower, at 11%. For the Venice Lagoon data, the error of omission is 14%, but the error of commission is 42%, due partly to the difficulty of interpreting channels in these natural scenes. 
As a benchmark, previous work has shown that an algorithm of this type, specifically designed for extracting tidal networks from LiDAR data, achieves substantially better results than standard algorithms for drainage network extraction from Digital Terrain Models.
An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm
Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En
2015-01-01
A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction. PMID:26287193
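The NEO detector named above is a three-sample operator, ψ[n] = x[n]² − x[n−1]·x[n+1], which is large only when the signal is simultaneously high in amplitude and frequency, as it is at a spike. A minimal software sketch (the threshold here is hand-picked; hardware implementations typically scale a running mean of the NEO output instead) is:

```python
def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].
    Defined for n = 1 .. len(x) - 2."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def detect_spikes(x, threshold):
    """Indices into x where the NEO output exceeds the threshold."""
    return [n + 1 for n, p in enumerate(neo(x)) if p > threshold]
```

Its cheapness (one multiply-subtract beyond squaring, no filter state) is what makes it attractive for a shared detection core across many channels, as in the architecture above.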
Analysing the Effects of Different Land Cover Types on Land Surface Temperature Using Satellite Data
NASA Astrophysics Data System (ADS)
Şekertekin, A.; Kutoglu, Ş. H.; Kaya, S.; Marangoz, A. M.
2015-12-01
Monitoring Land Surface Temperature (LST) via remote sensing images is one of the most important contributions to climatology. LST is an important parameter governing the energy balance on the Earth, and it also helps us understand the behavior of urban heat islands. There are many algorithms for obtaining LST by remote sensing techniques; the most commonly used are the split-window algorithm, the temperature/emissivity separation method, the mono-window algorithm and the single-channel method. In this research, the mono-window algorithm was applied to a Landsat 5 TM image acquired on 28.08.2011, together with meteorological data such as humidity and air temperature. Moreover, high-resolution Geoeye-1 and Worldview-2 images, acquired on 29.08.2011 and 12.07.2013 respectively, were used to investigate the relationships between LST and land cover type. As a result of the analyses, areas with vegetation cover have approximately 5 °C lower temperatures than the city center and arid land, and LST values vary by about 10 °C within the city center because of different surface properties such as reinforced-concrete construction, green zones and sandbank. The temperature around some places in the thermal power plant region (ÇATES and ZETES) in Çatalağzı is about 5 °C higher than the city center. Sandbank and agricultural areas have the highest temperatures due to the land cover structure.
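The mono-window algorithm used above needs only three parameters: surface emissivity, atmospheric transmittance, and an effective mean atmospheric temperature. A hedged sketch follows; the linearisation constants a and b are the values commonly quoted for Landsat TM band 6 (after Qin et al.), and should be verified against the original paper before any real use.

```python
def mono_window_lst(t_sensor, emissivity, transmittance, t_air_eff,
                    a=-67.355351, b=0.458606):
    """Mono-window land surface temperature sketch for Landsat TM band 6.

    t_sensor   : at-sensor brightness temperature (K)
    emissivity : surface emissivity in the thermal band
    transmittance : atmospheric transmittance in the thermal band
    t_air_eff  : effective mean atmospheric temperature (K)
    a, b       : Planck-function linearisation constants (assumed values)
    """
    c = emissivity * transmittance
    d = (1.0 - transmittance) * (1.0 + (1.0 - emissivity) * transmittance)
    return (a * (1.0 - c - d)
            + (b * (1.0 - c - d) + c + d) * t_sensor
            - d * t_air_eff) / c
```

As a sanity check, with a perfect blackbody surface and a transparent atmosphere (emissivity = transmittance = 1) the formula collapses to the at-sensor brightness temperature, as it should.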
Regalia, Giulia; Coelli, Stefania; Biffi, Emilia; Ferrigno, Giancarlo; Pedrocchi, Alessandra
2016-01-01
Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of algorithms' performances on the neuronal signals properties at each channel, which require data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting "building blocks" into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms on simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has been proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis.
NASA Technical Reports Server (NTRS)
Zhou, Xiaoming (Inventor); Baras, John S. (Inventor)
2010-01-01
The present invention relates to an improved communications protocol which increases the efficiency of transmission in return channels on a multi-channel slotted Aloha system by incorporating advanced error correction algorithms, selective retransmission protocols, and the use of reserved channels to satisfy retransmission requests.
Silva, Adão; Gameiro, Atílio
2014-01-01
We present in this work a low-complexity algorithm to solve the sum rate maximization problem in multiuser MIMO broadcast channels with downlink beamforming. Our approach decouples the user selection problem from the resource allocation problem, and its main goal is to create a set of quasi-orthogonal users. The proposed algorithm exploits physical metrics of the wireless channels that can be computed easily, in such a way that the null-space projection power can be approximated efficiently. Based on the derived metrics, we present a mathematical model that describes the dynamics of the user selection process and turns the user selection problem into an integer linear program. Numerical results show that our approach is highly efficient at forming groups of quasi-orthogonal users when compared to previously proposed algorithms in the literature. Our user selection algorithm achieves a large portion (90%) of the optimum user selection sum rate for a moderate number of active users. PMID:24574928
A proposed study of multiple scattering through clouds up to 1 THz
NASA Technical Reports Server (NTRS)
Gerace, G. C.; Smith, E. K.
1992-01-01
A rigorous computation of the electromagnetic field scattered from an atmospheric liquid water cloud is proposed. The recent development of a fast recursive algorithm (Chew algorithm) for computing the fields scattered from numerous scatterers now makes a rigorous computation feasible. A method is presented for adapting this algorithm to a general case where there are an extremely large number of scatterers. It is also proposed to extend a new binary PAM channel coding technique (El-Khamy coding) to multiple levels with non-square pulse shapes. The Chew algorithm can be used to compute the transfer function of a cloud channel. Then the transfer function can be used to design an optimum El-Khamy code. In principle, these concepts can be applied directly to the realistic case of a time-varying cloud (adaptive channel coding and adaptive equalization). A brief review is included of some preliminary work on cloud dispersive effects on digital communication signals and on cloud liquid water spectra and correlations.
A noise resistant symmetric key cryptosystem based on S8 S-boxes and chaotic maps
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Anees, Amir; Aslam, Muhammad; Ahmed, Rehan; Siddiqui, Nasir
2018-04-01
In this manuscript, we propose an encryption algorithm to encrypt any digital data. The proposed algorithm is primarily based on substitution-permutation, in which the substitution process is performed by S8 substitution boxes. The proposed algorithm incorporates three different chaotic maps. We have analysed the behaviour of chaos for secure communication at length and, accordingly, applied those chaotic sequences in the proposed encryption algorithm. The simulation and statistical results revealed that the proposed encryption scheme is secure against different attacks. Moreover, the encryption scheme can tolerate channel noise as well: if the encrypted data is corrupted by an unauthenticated user or by channel noise, decryption can still be completed successfully, with some distortion. The overall results confirmed that the presented work has good cryptographic features, low computational complexity and resistance to channel noise, which makes it suitable for low-profile mobile applications.
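The substitution-plus-chaos construction can be illustrated with a toy stream cipher: each byte passes through a substitution box and is then XORed with a logistic-map keystream. The affine S-box, the map parameters and the byte quantisation here are all illustrative assumptions; the paper's S8 S-boxes and three chaotic maps are not reproduced. This is a teaching sketch, not a secure cipher.

```python
def logistic_keystream(x0, r, n):
    """Logistic-map chaotic sequence quantised to bytes (illustrative)."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) % 256)
    return ks

# Toy 256-entry bijective substitution box (affine; gcd(7, 256) = 1).
SBOX = [(i * 7 + 3) % 256 for i in range(256)]
INV_SBOX = [0] * 256
for i, s in enumerate(SBOX):
    INV_SBOX[s] = i

def encrypt(data, x0=0.3, r=3.99):
    ks = logistic_keystream(x0, r, len(data))
    return bytes(SBOX[b] ^ k for b, k in zip(data, ks))

def decrypt(ct, x0=0.3, r=3.99):
    ks = logistic_keystream(x0, r, len(ct))
    return bytes(INV_SBOX[c ^ k] for c, k in zip(ct, ks))
```

Note how the noise tolerance claimed in the abstract falls out of the structure: each ciphertext byte is decrypted independently, so a corrupted byte distorts only its own position instead of garbling the rest of the message.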
NASA Astrophysics Data System (ADS)
Khamukhin, A. A.
2017-02-01
Simple navigation algorithms are needed for small autonomous unmanned aerial vehicles (UAVs). Such algorithms can be implemented in a small microprocessor with low power consumption, which helps to reduce the weight of the UAV's computing equipment and to increase the flight range. The proposed algorithm uses only the number of opaque channels (ommatidia in bees) through which a target can be seen as an observer moves from location 1 to location 2 toward the target. The distance estimate is given relative to the distance between locations 1 and 2. A simple scheme of an appositional compound eye is proposed to develop the calculation formula. The distance estimation error analysis shows that the error decreases with an increase in the total number of opaque channels, up to a certain limit. An acceptable error of about 2% is achieved with an angle of view from 3 to 10° when the total number of opaque channels is 21600.
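The geometry behind such a channel-count scheme can be sketched under a small-angle assumption: if the channel count is proportional to the target's angular size, then counts n1 and n2 at the two locations give the distance in baseline units. This derivation is my own illustrative reconstruction, not the formula from the paper.

```python
def relative_distance(n1, n2):
    """Distance from location 1 to the target, in units of the baseline
    travelled toward it from location 1 to location 2.

    Small-angle sketch: angular size (hence channel count) scales as
    1/distance, so n1/n2 = d2/d1 with d2 = d1 - 1 baseline, giving
    d1 = n2 / (n2 - n1).
    """
    if n2 <= n1:
        raise ValueError("target must subtend more channels at location 2")
    return n2 / (n2 - n1)
```

For example, if the target grows from 9 to 10 channels over one baseline, it started 10 baselines away; a growth from 5 to 10 channels means it started only 2 baselines away.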
NASA Technical Reports Server (NTRS)
Wennersten, Miriam Dvorak; Banes, Anthony Vince; Boegner, Gregory J.; Dougherty, Lamar; Edwards, Bernard L.; Roman, Joseph; Bauer, Frank H. (Technical Monitor)
2001-01-01
NASA Goddard Space Flight Center has built an open-architecture, 24-channel space flight GPS receiver. The CompactPCI PiVoT GPS receiver card is based on the Mitel/GEC Plessey Builder-2 board. PiVoT uses two Plessey 2021 correlators to allow tracking of up to 24 separate GPS SVs on unique channels. Its four front ends can support four independent antennas, making it a useful card for hosting GPS attitude determination algorithms. It has been built using space-quality, radiation-tolerant parts. The PiVoT card will track a weaker signal than the original Builder-2 board, and it also hosts an improved clock oscillator. The PiVoT software is based on the original Plessey Builder-2 software ported to the Linux operating system. The software is POSIX compliant and can easily be converted to other POSIX operating systems; it is open source to anyone with a licensing agreement with Plessey. Additional tasks can be added to the software to support GPS science experiments or attitude determination algorithms. The next-generation PiVoT receiver will be a single radiation-hardened CompactPCI card containing the microprocessor and the GPS receiver, optimized for use above the GPS constellation. PiVoT flew successfully on a balloon in July 2001, its first non-simulated flight.
Weather, Climate, and Society: New Demands on Science and Services
NASA Technical Reports Server (NTRS)
2010-01-01
A new algorithm has been constructed to estimate the path length of lightning channels for the purpose of improving model predictions of lightning NOx in both regional air quality and global chemistry/climate models. The algorithm was tested and applied to VHF signals detected by the North Alabama Lightning Mapping Array (NALMA). Its accuracy was characterized by comparing algorithm output to plots of individual discharges whose lengths were computed by hand. Several thousand lightning flashes within 120 km of the NALMA network centroid were gathered from all four seasons and analyzed by the algorithm. The mean, standard deviation, and median statistics were obtained for all flashes, the ground flashes, and the cloud flashes. Channel length distributions were also obtained for the different seasons.
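As a rough illustration of path-length estimation from mapped VHF sources (not the NALMA algorithm itself, whose details the abstract does not give), one can sum the distances between time-consecutive source locations while discarding implausibly large jumps:

```python
import math

def channel_length(sources, max_step=500.0):
    """Toy estimate of lightning-channel path length: sum the distances
    between time-consecutive VHF source locations (x, y, z in meters),
    skipping jumps larger than max_step as likely noise.  Illustrative
    only -- not the algorithm described in the abstract."""
    total = 0.0
    for p, q in zip(sources, sources[1:]):
        step = math.dist(p, q)
        if step <= max_step:
            total += step
    return total

# straight 1-km channel sampled every 100 m, plus one spurious far source
pts = [(i * 100.0, 0.0, 0.0) for i in range(11)]
clean = channel_length(pts)
with_outlier = channel_length(pts + [(50000.0, 0.0, 0.0)])
```

The outlier rejection threshold is an assumption; a real implementation would tie it to the mapping array's location accuracy.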
Systolic Signal Processor/High Frequency Direction Finding
1990-10-01
MUSIC) algorithm and the finite impulse response (FIR) filter onto the testbed hardware was supported by joint sponsorship of the block and major bid... computational throughput. The systolic implementations of a four-channel finite impulse response (FIR) filter and multiple signal classification (MUSIC)... The multiple signal classification (MUSIC) algorithm was mated to a bank of finite impulse response (FIR) filters and a four-channel data acquisition subsystem. A complete description
An Adaptive Channel Access Method for Dynamic Super Dense Wireless Sensor Networks.
Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Zhang, Xuekun
2015-12-03
Super-dense and distributed wireless sensor networks have become very popular with the development of small cell technology, the Internet of Things (IoT), Machine-to-Machine (M2M) communications, Vehicle-to-Vehicle (V2V) communications and public safety networks. While densely deployed wireless networks provide one of the most important and sustainable solutions for improving sensing accuracy and spectral efficiency, a new channel access scheme needs to be designed to solve the channel congestion problem introduced by the high dynamics of competing nodes accessing the channel simultaneously. In this paper, we first analyze the channel contention problem using a novel normalized channel contention analysis model, which provides information on how to tune the contention window according to the state of channel contention. We then propose an adaptive channel contention window tuning algorithm in which the tuning rate is set dynamically based on the estimated channel contention level. Simulation results show that our adaptive channel access algorithm based on fast contention window tuning can achieve more than 95% of the theoretical optimal throughput and a fairness index of 0.97, especially in dynamic and dense networks.
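A minimal sketch of contention-window tuning in which the adjustment rate scales with the estimated contention level, as the abstract describes; the channel-busy fraction used as the contention proxy and all constants are assumptions, not values from the paper.

```python
def tune_cw(cw, busy_frac, target=0.5, cw_min=16, cw_max=1024):
    """Adaptive contention-window tuning sketch: the tuning rate grows with
    the gap between the measured channel-busy fraction (a stand-in for the
    contention level) and its target.  Constants are illustrative."""
    gap = busy_frac - target
    rate = 1.0 + abs(gap)              # tune faster when contention is far off
    cw = cw * rate if gap > 0 else cw / rate
    return max(cw_min, min(cw_max, cw))

# congested channel: the window grows; idle channel: it shrinks toward cw_min
grown = tune_cw(64, busy_frac=0.9)
shrunk = tune_cw(64, busy_frac=0.1)
```

Each node would run this once per observation interval, replacing the fixed binary-exponential-backoff doubling with a rate matched to how far contention is from its target.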
Experimental Research on Boundary Shear Stress in Typical Meandering Channel
NASA Astrophysics Data System (ADS)
Chen, Kai-hua; Xia, Yun-feng; Zhang, Shi-zhao; Wen, Yun-cheng; Xu, Hua
2018-06-01
A novel instrument, the Micro-Electro-Mechanical System (MEMS) flexible hot-film shear stress sensor, was used to study the boundary shear stress distribution in a generalized natural meandering open channel; the mean sidewall shear stress distribution along the meandering channel and the lateral boundary shear stress distribution in typical cross-sections were analysed. Based on these measurements, a semi-empirical, semi-theoretical approach for computing the boundary shear stress was derived that includes the effects of secondary flow, the sidewall roughness factor, eddy viscosity and the additional Reynolds stress; more importantly, for the first time, it incorporates the effects of the cross-section central angle and the Reynolds number into the expressions. A comparison with previous research shows that the semi-empirical, semi-theoretical algorithm predicts the boundary shear stress distribution precisely. Finally, a single-factor analysis was conducted on the relationship between the average sidewall shear stress on the convex and concave banks and the flow rate, water depth, slope ratio, and cross-section central angle of the open channel bend. The functional relationship with each of these factors was established, and the distance from the location of the extreme sidewall shear stress to the bottom of the channel was then deduced based on statistical theory.
NASA Astrophysics Data System (ADS)
Scarino, B. R.; Minnis, P.; Yost, C. R.; Chee, T.; Palikonda, R.
2015-12-01
Single-channel algorithms for satellite thermal-infrared- (TIR-) derived land and sea surface skin temperature (LST and SST) are advantageous in that they can be easily applied to a variety of satellite sensors. They can also accommodate decade-spanning instrument series, particularly for periods when split-window capabilities are not available. However, the benefit of one unified retrieval methodology for all sensors comes at the cost of critical sensitivity to surface emissivity (ɛs) and atmospheric transmittance estimation. It has been demonstrated that as little as 0.01 variance in ɛs can amount to more than a 0.5-K adjustment in retrieved LST values. Atmospheric transmittance requires calculations that employ vertical profiles of temperature and humidity from numerical weather prediction (NWP) models. Selection of a given NWP model can significantly affect LST and SST agreement relative to their respective validation sources. Thus, it is necessary to understand the accuracies of the retrievals for various NWP models to ensure the best LST/SST retrievals. The sensitivities of the single-channel retrievals to surface emittance and NWP profiles are investigated using NASA Langley historic land and ocean clear-sky skin temperature (Ts) values derived from high-resolution 11-μm TIR brightness temperature measured from geostationary satellites (GEOSat) and Advanced Very High Resolution Radiometers (AVHRR). It is shown that mean GEOSat-derived, anisotropy-corrected LST can vary by up to ±0.8 K depending on whether CERES or MODIS ɛs sources are used. Furthermore, the use of either NOAA Global Forecast System (GFS) or NASA Goddard Modern-Era Retrospective Analysis for Research and Applications (MERRA) for the radiative transfer model initial atmospheric state can account for more than 0.5-K variation in mean Ts. 
The results are compared to measurements from the Surface Radiation Budget Network (SURFRAD), an Atmospheric Radiation Measurement (ARM) Program ground station, and NOAA ESRL high-resolution Optimum Interpolation SST (OISST). Precise understanding of the influence these auxiliary inputs have on final satellite-based Ts retrievals may help guide refinement in ɛs characterization and NWP development, e.g., future Goddard Earth Observing System Data Assimilation System versions.
An improved dehazing algorithm of aerial high-definition image
NASA Astrophysics Data System (ADS)
Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying
2016-01-01
For unmanned aerial vehicle (UAV) images, the sensor cannot acquire high-quality images in fog and haze. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crudely estimated transmission map and expands them. Then, according to the expanded edges, the algorithm sets a threshold value to divide the crudely estimated transmission map into different areas and applies different guided filters to the different areas to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filter, while its average computation time is around 40% of the latter's, and the detection ability of UAV images in fog and haze weather is effectively improved.
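For reference, the dark channel prior step that both the original and improved algorithms start from can be sketched as follows; this is a pure-Python toy with a small patch size, and the variable names are illustrative:

```python
def dark_channel(img, patch=3):
    """Per-pixel minimum over RGB and over a patch x patch neighborhood.
    img is an H x W list of (r, g, b) tuples with values in [0, 1]."""
    h, w, r = len(img), len(img[0]), patch // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(
                min(img[yy][xx])
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1))
            )
    return out

def transmission(img, airlight, omega=0.95, patch=3):
    """Crude transmission estimate t = 1 - omega * dark_channel(img / A),
    where A is the global atmospheric light (per RGB channel)."""
    norm = [[tuple(c / a for c, a in zip(px, airlight)) for px in row] for row in img]
    return [[1.0 - omega * v for v in row] for row in dark_channel(norm, patch)]

# a uniformly hazy patch: transmission should be 1 - 0.95 * 0.8 = 0.24 everywhere
hazy = [[(0.8, 0.8, 0.8)] * 4 for _ in range(4)]
t = transmission(hazy, airlight=(1.0, 1.0, 1.0))
```

The improved algorithm above then refines this crude map with edge-guided, area-dependent guided filtering rather than a single global filter.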
Zhang, Yong; Li, Yuan; Rong, Zhi-Guo
2010-06-01
A remote sensor's channel spectral response function (SRF) is one of the key factors influencing the inversion algorithms and accuracy of quantitative products and their geophysical characteristics. To assess the adjustments of FY-2E's split window channels' SRFs, detailed comparisons of the SRF differences between the corresponding FY-2E and FY-2C channels were carried out based on three data collections: the calibration look-up tables of the corresponding NOAA AVHRR channels, field-measured water surface radiance and atmospheric profiles at Lake Qinghai, and radiances calculated from the Planck function over the full dynamic range of FY-2E/C. The results showed that the adjustments of FY-2E's split window channels' SRFs shift the spectral range and influence the inversion algorithms of some ground quantitative products. On the other hand, these adjustments increase the brightness temperature differences between FY-2E's two split window channels over the full dynamic range relative to FY-2C's, which improves the inversion ability of FY-2E's split window channels.
Optimal Control of a Surge-Mode WEC in Random Waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertok, Allan; Ceberio, Olivier; Staby, Bill
2016-08-30
The objective of this project was to develop one or more real-time feedback and feed-forward model predictive control (MPC) algorithms for an Oscillating Surge Wave Converter (OSWC) developed by RME, called SurgeWEC™, that leverage recent innovations in wave energy converter (WEC) control theory to maximize power production in random wave environments. The control algorithms synthesized innovations in dynamic programming and nonlinear wave dynamics, combining localized sensor measurements (e.g., position and velocity of the WEC power take-off (PTO)) with predictive wave forecasting data from anticipatory wave sensors. The result was an advanced control system that fuses feedback and feed-forward data from an array of sensor channels, comprising both localized and deployed sensors, into a single decision process that optimally compensates for uncertainties in the system dynamics, wave forecasts, and sensor measurement errors.
NASA Technical Reports Server (NTRS)
Doxley, Charles A.
2016-01-01
In the current world of applications that use reconfigurable technology implemented on field programmable gate arrays (FPGAs), there is a need for flexible architectures that can grow as systems evolve. A project has limited resources and a fixed set of requirements that development efforts are tasked to meet. Designers must develop robust solutions that practically meet current customer demands and also have the ability to grow for future performance. This paper describes the development of a high-speed serial data streaming algorithm that allows transmission of multiple data channels over a single serial link. The technique has the ability to change to meet new applications developed for future design considerations. This approach uses the Xilinx Serial RapidIO LogiCORE solution to implement a flexible infrastructure that meets the current project requirements with the ability to adapt to future system designs.
Numerical Simulation of 3-D Supersonic Viscous Flow in an Experimental MHD Channel
NASA Technical Reports Server (NTRS)
Kato, Hiromasa; Tannehill, John C.; Gupta, Sumeet; Mehta, Unmeel B.
2004-01-01
The 3-D supersonic viscous flow in an experimental MHD channel has been numerically simulated. The experimental MHD channel is currently in operation at NASA Ames Research Center. The channel contains a nozzle section, a center section, and an accelerator section where magnetic and electric fields can be imposed on the flow. In recent tests, velocity increases of up to 40% have been achieved in the accelerator section. The flow in the channel is numerically computed using a new 3-D parabolized Navier-Stokes (PNS) algorithm that has been developed to efficiently compute MHD flows in the low magnetic Reynolds number regime. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. To account for upstream (elliptic) effects, the flowfield can be computed using multiple streamwise sweeps with an iterated PNS algorithm. The new algorithm has been used to compute two test cases that match the experimental conditions. In both cases, magnetic and electric fields are applied to the flow. The computed results are in good agreement with the available experimental data.
Accurate Sybil Attack Detection Based on Fine-Grained Physical Channel Information.
Wang, Chundong; Zhu, Likun; Gong, Liangyi; Zhao, Zhentang; Yang, Lei; Liu, Zheli; Cheng, Xiaochun
2018-03-15
With the development of the Internet of Things (IoT), wireless network security has received more and more attention. The Sybil attack is one of the well-known wireless attacks, in which forged wireless devices are used to steal information from clients. These forged devices may constantly attack target access points to crash the wireless network. In this paper, we propose a novel Sybil attack detection method based on Channel State Information (CSI). The detection algorithm can tell whether static devices are Sybil attackers by combining a self-adaptive multiple signal classification algorithm with the Received Signal Strength Indicator (RSSI). Moreover, we develop a novel tracing scheme that clusters the channel characteristics of mobile devices and detects dynamic attackers that change their channel characteristics within an error area. Finally, we experiment on mobile and commercial WiFi devices. Our algorithm can effectively distinguish Sybil devices. The experimental results show that our Sybil attack detection system achieves high accuracy for both static and dynamic scenarios. Therefore, by combining the phase and similarity of channel features, the multi-dimensional analysis of CSI can effectively detect Sybil nodes and improve the security of wireless networks.
NASA Technical Reports Server (NTRS)
Li, Rong-Rong; Kaufman, Yoram J.
2002-01-01
We have developed an algorithm to detect suspended sediments and shallow coastal waters using imaging data acquired with the Moderate Resolution Imaging SpectroRadiometer (MODIS). The MODIS instruments on board the NASA Terra and Aqua Spacecrafts are equipped with one set of narrow channels located in a wide 0.4 - 2.5 micron spectral range. These channels were designed primarily for remote sensing of the land surface and atmosphere. We have found that the set of land and cloud channels are also quite useful for remote sensing of the bright coastal waters. We have developed an empirical algorithm, which uses the narrow MODIS channels in this wide spectral range, for identifying areas with suspended sediments in turbid waters and shallow waters with bottom reflections. In our algorithm, we take advantage of the strong water absorption at wavelengths longer than 1 micron that does not allow illumination of sediments in the water or a shallow ocean floor. MODIS data acquired over the east coast of China, west coast of Africa, Arabian Sea, Mississippi Delta, and west coast of Florida are used in this study.
NASA Astrophysics Data System (ADS)
Foronda, Augusto; Ohta, Chikara; Tamaki, Hisashi
Dirty paper coding (DPC) is a strategy that achieves the capacity region of multiple-input multiple-output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One alternative, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum-rate capacity as DPC with an exhaustive search over the entire user set. Suboptimal user group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance throughput and fairness among users, respectively. However, they are not throughput optimal, and fairness and throughput decrease when user queue lengths differ owing to differing channel quality. Therefore, we propose two different scheduling algorithms: a throughput-optimal scheduling algorithm (ZFBF-TO) and a reduced-complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy, and at every time slot the scheduling algorithms select users based on user channel quality, user queue length and orthogonality among users. Moreover, the proposed algorithms produce the rate allocation and power allocation for the selected users based on a modified water-filling method. We analyze the schedulers' complexity, and numerical results show that ZFBF-RC provides throughput and fairness improvements compared to the ZFBF-SUS and PF-ZFBF scheduling algorithms.
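The power-allocation step can be illustrated with classical water-filling; the paper uses a modified variant, so this is only the textbook form, with a bisection search for the water level:

```python
def water_filling(gains, total_power, iters=100):
    """Classical water-filling: allocate p_i = max(0, mu - 1/g_i) so that
    the allocations sum to total_power; the water level mu is found by
    bisection (gains are effective channel power gains)."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        alloc = [max(0.0, mu - 1.0 / g) for g in gains]
        if sum(alloc) > total_power:
            hi = mu          # water level too high
        else:
            lo = mu          # water level too low (or exact)
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - 1.0 / g) for g in gains]

# two sub-channels: the stronger one receives more power
alloc = water_filling([2.0, 1.0], total_power=1.0)
```

For gains 2.0 and 1.0 with unit total power, both sub-channels stay active and the split works out to 0.75 and 0.25.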
Rainfall Estimates from the TMI and the SSM/I
NASA Technical Reports Server (NTRS)
Hong, Ye; Kummerow, Christian D.; Olson, William S.; Viltard, Nicolas
1999-01-01
The Tropical Rainfall Measuring Mission (TRMM), a joint Japan-U.S. Earth observing satellite, was successfully launched from Japan on November 27, 1997. The main purpose of TRMM is to measure rainfall quantitatively over the tropics for climate and weather research. One of the three rainfall measuring instruments aboard TRMM is the high-resolution TRMM Microwave Imager (TMI). The TMI instrument is essentially a copy of the SSM/I with a dual-polarized pair of 10.7 GHz channels added to increase the dynamic range of rainfall estimates. In addition, the TMI water vapor absorption channel is placed at 21.3 GHz, as opposed to 22.235 GHz in the SSM/I, to avoid saturation in the tropics. This paper presents instantaneous rain rates estimated from coincident TMI and SSM/I observations. The algorithm for estimating instantaneous rainfall rates from both sensors is the Goddard Profiling (Gprof) algorithm, a physically based, multichannel rainfall retrieval algorithm. The algorithm is very portable and can be used for various sensors with different channels and resolutions. Rain rates estimated from TMI and SSM/I over the same rain regions will be compared, and the results of the comparison and insights into the retrieval algorithm will be given.
Image defog algorithm based on open close filter and gradient domain recursive bilateral filter
NASA Astrophysics Data System (ADS)
Liu, Daqian; Liu, Wanjun; Zhao, Qingguo; Fei, Bowen
2017-11-01
To address the fuzzy details, color distortion, and low brightness of images produced by the dark channel prior defog algorithm, an image defog algorithm based on an open-close filter and a gradient domain recursive bilateral filter, referred to as OCRBF, is proposed. The OCRBF algorithm first uses a weighted quadtree to obtain a more accurate global atmospheric light value, then applies a multiple-structure-element morphological open-close filter to the minimum channel map to obtain a rough transmission map via the dark channel prior, corrects the transmission map using a variogram, and smooths it with a gradient domain recursive bilateral filter; finally, it recovers the image through the image degradation model and adjusts the contrast to obtain a bright, clear, fog-free image. A large number of experimental results show that the proposed defog method removes fog well and recovers the color and definition of foggy images containing close-range objects, wide perspectives, and bright areas; compared with other image defog algorithms, it obtains clearer and more natural fog-free images with more visible details. Moreover, the time complexity of the algorithm is linear in the number of image pixels.
NASA Astrophysics Data System (ADS)
Yuan, Chunhua; Wang, Jiang; Yi, Guosheng
2017-03-01
Estimation of ion channel parameters is crucial to the spike initiation of neurons. Biophysical neuron models have numerous ion channel parameters, but only a few of them play key roles in the firing patterns of the models, so we choose three parameters featuring the adaptation in the Ermentrout neuron model to be estimated. However, the traditional particle swarm optimization (PSO) algorithm is prone to falling into local optima and exhibits premature convergence on some problems. In this paper, we propose an improved method that mixes a concave function with a dynamic logistic chaotic mapping to adjust the inertia weights according to the fitness value, effectively improving the global convergence ability of the algorithm. The rebuilt model's accurate prediction of the firing trajectories using the estimated parameters proves that estimating only a few important ion channel parameters can establish the model well and that the proposed algorithm is effective. Estimations using two classic PSO algorithms are also compared with the improved PSO to verify that the proposed algorithm can avoid local optima and quickly converge to the optimal value. The results provide important theoretical foundations for building biologically realistic neuron models.
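A simplified stand-in for the proposed method: basic PSO whose inertia weight is driven by a logistic chaotic map. The paper additionally mixes in a concave function and fits neuron-model parameters; here the coefficients are generic choices and the objective is a toy sphere function.

```python
import random

def pso_chaotic(f, dim, n_particles=20, iters=300, seed=1):
    """Basic PSO; the inertia weight w is driven by a logistic chaotic map."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    z = 0.7                                   # chaotic state (avoid fixed points)
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)               # logistic map update
        w = 0.4 + 0.5 * z                     # inertia weight wanders in [0.4, 0.9]
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# usage: minimize the sphere function in 2-D
best, best_f = pso_chaotic(lambda x: sum(v * v for v in x), dim=2)
```

The chaotic drive keeps the inertia weight from settling, which is the mechanism the paper relies on to escape local optima.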
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2011-01-01
The Goddard DISC has generated products derived from AIRS/AMSU-A observations, starting from September 2002 when the AIRS instrument became stable, using the AIRS Science Team Version-5 retrieval algorithm. The AIRS Science Team Version-6 retrieval algorithm will be finalized in September 2011. This paper describes some of the significant improvements contained in the Version-6 retrieval algorithm, compared to that used in Version-5, with an emphasis on the improvement of atmospheric temperature profiles, ocean and land surface skin temperatures, and ocean and land surface spectral emissivities. AIRS contains 2378 spectral channels covering portions of the spectral region 650 cm⁻¹ (15.38 μm) to 2665 cm⁻¹ (3.752 μm). These spectral regions contain significant absorption features from two CO2 absorption bands: the 15 μm (longwave) CO2 band and the 4.3 μm (shortwave) CO2 band. There are also two atmospheric window regions: the 12-8 μm (longwave) window and the 4.17-3.75 μm (shortwave) window. Historically, determination of surface and atmospheric temperatures from satellite observations was performed primarily using observations in the longwave window and CO2 absorption regions. According to cloud clearing theory, more accurate soundings of both surface skin and atmospheric temperatures can be obtained under partial cloud cover if observations in longwave channels are used to determine coefficients that generate cloud-cleared radiances R̂ᵢ for all channels, and only R̂ᵢ from shortwave channels are used in the determination of surface and atmospheric temperatures. This procedure is now being used in the AIRS Version-6 retrieval algorithm. Results are presented for both daytime and nighttime conditions showing improved Version-6 surface and atmospheric soundings under partial cloud cover.
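The cloud-clearing step can be illustrated with the standard two-field-of-view extrapolation, in which channel radiances from two adjacent scenes with different cloud fractions are combined with a single coefficient η. This is the textbook form, not the Version-6 implementation, and the synthetic radiances below are illustrative:

```python
def cloud_cleared_radiance(r1, r2, eta):
    """Two-FOV cloud clearing: R_hat_i = R1_i + eta * (R1_i - R2_i), where
    R1 and R2 are channel radiances from two adjacent fields of view with
    cloud fractions f1 < f2; in the ideal case eta = f1 / (f2 - f1) and
    the extrapolation recovers the clear-sky radiance exactly."""
    return [a + eta * (a - b) for a, b in zip(r1, r2)]

# synthetic two-channel scene: clear and cloudy radiances mixed by cloud fraction
r_clear, r_cloud = [100.0, 80.0], [40.0, 30.0]
f1, f2 = 0.2, 0.5
r1 = [(1 - f1) * c + f1 * d for c, d in zip(r_clear, r_cloud)]
r2 = [(1 - f2) * c + f2 * d for c, d in zip(r_clear, r_cloud)]
recovered = cloud_cleared_radiance(r1, r2, eta=f1 / (f2 - f1))
```

In practice η is determined from the longwave channels, after which the same coefficient cloud-clears every channel, including the shortwave ones used for the temperature retrieval.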
Second-order Poisson Nernst-Planck solver for ion channel transport
Zheng, Qiong; Chen, Duan; Wei, Guo-Wei
2010-01-01
The Poisson Nernst-Planck (PNP) theory is a simplified continuum model for a wide variety of chemical, physical and biological applications. Its ability to provide quantitative explanation and increasingly qualitative predictions of experimental measurements has earned it much recognition in the research community. Numerous computational algorithms have been constructed for the solution of the PNP equations. However, in the realistic ion-channel context, no second-order convergent PNP algorithm has ever been reported in the literature, due to many numerical obstacles, including discontinuous coefficients, singular charges, geometric singularities, and nonlinear couplings. The present work introduces a number of numerical algorithms to overcome these challenges and constructs the first second-order convergent PNP solver in the ion-channel context. First, a Dirichlet-to-Neumann mapping (DNM) algorithm is designed to alleviate the charge singularity due to the protein structure. Additionally, the matched interface and boundary (MIB) method is reformulated for solving the PNP equations. The MIB method systematically enforces the interface jump conditions and achieves second-order accuracy in the presence of complex geometry and geometric singularities of molecular surfaces. Moreover, two iterative schemes are utilized to deal with the coupled nonlinear equations. Furthermore, extensive and rigorous numerical validations are carried out over a number of geometries, including a sphere, two proteins and an ion channel, to examine the numerical accuracy and convergence order of the present numerical algorithms. Finally, application is considered to a real transmembrane protein, the Gramicidin A channel protein. The performance of the proposed numerical techniques is tested against a number of factors, including mesh sizes, diffusion coefficient profiles, iterative schemes, ion concentrations, and applied voltages.
Numerical predictions are compared with experimental measurements. PMID:21552336
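The meaning of "second-order convergent" can be illustrated on a far simpler problem than the coupled PNP system: a central-difference solve of a 1-D Poisson equation, whose maximum error should drop by a factor of about four when the mesh spacing is halved. This is a minimal sketch, not the paper's MIB/DNM machinery:

```python
import math

def solve_poisson_1d(f, a, b, ua, ub, n):
    """Solve -u'' = f on [a, b] with u(a) = ua, u(b) = ub on n interior
    points using the standard second-order central difference and the
    Thomas algorithm for the (-1, 2, -1) tridiagonal system."""
    h = (b - a) / (n + 1)
    rhs = [f(a + (i + 1) * h) * h * h for i in range(n)]
    rhs[0] += ua                      # boundary values move to the RHS
    rhs[-1] += ub
    c, d = [0.0] * n, [0.0] * n       # forward sweep of the Thomas algorithm
    c[0], d[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n):
        beta = 2.0 + c[i - 1]
        c[i] = -1.0 / beta
        d[i] = (rhs[i] + d[i - 1]) / beta
    u = [0.0] * n                     # back substitution
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u

def max_error(n):
    """Error against u(x) = sin(pi x) for -u'' = pi^2 sin(pi x), u(0) = u(1) = 0."""
    f = lambda x: math.pi ** 2 * math.sin(math.pi * x)
    u = solve_poisson_1d(f, 0.0, 1.0, 0.0, 0.0, n)
    h = 1.0 / (n + 1)
    return max(abs(ui - math.sin(math.pi * (i + 1) * h)) for i, ui in enumerate(u))

e_coarse, e_fine = max_error(31), max_error(63)  # halving h should quarter the error
```

The observed error ratio near 4 is the numerical signature of second-order convergence; the paper establishes the same order for the full PNP system despite interfaces and singular charges.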
Optimal Refueling Pattern Search for a CANDU Reactor Using a Genetic Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quang Binh, DO; Gyuhong, ROH; Hangbok, CHOI
2006-07-01
This paper presents the results of applying genetic algorithms to the refueling optimization of a Canada deuterium uranium (CANDU) reactor. This work aims at building a mathematical model of the refueling optimization problem, including the objective function and constraints, and developing a method based on genetic algorithms to solve the problem. The model of the optimization problem and the proposed method comply with the key features of the refueling strategy of the CANDU reactor, which adopts on-power refueling. In this study, a genetic algorithm combined with an elitism strategy was used to automatically search for the refueling patterns. The objective of the optimization was to maximize the discharge burn-up of the refueling bundles, minimize the maximum channel power, or minimize the maximum change in the zone controller unit (ZCU) water levels. A combination of these objectives was also investigated. The constraints include the discharge burn-up, maximum channel power, maximum bundle power, channel power peaking factor and the ZCU water level. A refueling pattern that represents the refueling rate and channels was coded as a one-dimensional binary chromosome, a string of the binary digits 0 and 1. A computer program was developed in FORTRAN 90, running on an HP 9000 workstation, to search for the optimal refueling patterns of a CANDU reactor at the equilibrium state. The results showed that it is possible to apply genetic algorithms to automatically search for the refueling channels of the CANDU reactor. The optimal refueling patterns were compared with the solutions obtained from the AUTOREFUEL program, and the results were consistent with each other. (authors)
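The binary encoding and elitism described above can be sketched with a minimal genetic algorithm on one-dimensional binary chromosomes; the toy bit-counting objective stands in for the reactor-physics evaluation, and the operators and rates are generic choices, not the paper's:

```python
import random

def genetic_search(fitness, length, pop_size=30, gens=60, p_mut=0.02, seed=7):
    """Minimal GA with single-point crossover, tournament selection,
    per-bit mutation, and elitism on binary chromosomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        ranked = sorted(pop, key=fitness, reverse=True)
        nxt = [ranked[0][:]]                      # elitism: best survives unchanged
        while len(nxt) < pop_size:
            p1 = max(rng.sample(ranked, 3), key=fitness)   # tournament of 3
            p2 = max(rng.sample(ranked, 3), key=fitness)
            cut = rng.randrange(1, length)                 # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy objective standing in for the reactor model: maximize the number of 1-bits
best = genetic_search(sum, length=20)
```

In the paper's setting, `fitness` would evaluate a decoded refueling pattern against the burn-up, channel-power and ZCU-level objectives and constraints.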
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krzyżanowska, A.; Deptuch, G. W.; Maj, P.
This paper presents the detailed characterization of a single photon counting chip, named CHASE Jr., built in a 40-nm CMOS process and operating with synchrotron radiation. The chip utilizes an on-chip implementation of the C8P1 algorithm. The algorithm eliminates charge-sharing-related uncertainties, namely the dependence of the number of registered photons on the discriminator's threshold, set for monochromatic irradiation, and errors in the assignment of an event to a certain pixel. The article presents a short description of the algorithm as well as the architecture of the CHASE Jr. chip. The analog and digital functionalities allowing for proper operation of the C8P1 algorithm are described, namely, offset correction for two discriminators independently, two-stage gain correction, and different operation modes of the digital blocks. The results of tests of the C8P1 operation are presented for the chip bump-bonded to a silicon sensor and exposed to a 3.5-μm-wide pencil beam of 8-keV synchrotron photons. The sensitivity of the algorithm's performance to the chip settings, as well as to the uniformity of parameters of the analog front-end blocks, was studied. The presented results prove that the C8P1 algorithm enables counting all photons hitting the detector in between readout channels and retrieving the actual photon energy.
Low complexity adaptive equalizers for underwater acoustic communications
NASA Astrophysics Data System (ADS)
Soflaei, Masoumeh; Azmi, Paeiz
2014-08-01
Interference due to scattering from the surface and reflection from the bottom is one of the most important obstacles to reliable communication in shallow water channels. One of the best ways to address this problem is to use adaptive equalizers, whose performance depends strongly on the convergence rate and misadjustment error of the underlying adaptive algorithms. In this paper, the affine projection algorithm (APA), selective regressor APA (SR-APA), the family of selective partial update (SPU) algorithms, the family of set-membership (SM) algorithms, and the selective partial update selective regressor APA (SPU-SR-APA) are compared with conventional algorithms such as least mean squares (LMS) in underwater acoustic communications. We apply experimental data from the Strait of Hormuz to demonstrate the efficiency of the proposed methods over a shallow water channel. We observe that the steady-state mean square error (MSE) of the SR-APA, SPU-APA, SPU-normalized least mean square (SPU-NLMS), SPU-SR-APA, SM-APA and SM-NLMS algorithms decreases in comparison with the LMS algorithm. These algorithms also converge faster than the LMS-type algorithm.
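For reference, the conventional LMS baseline and its normalized variant (NLMS), against which the algorithms above are compared, can be sketched as a toy equalizer; the channel, noise level and step size are illustrative, not taken from the Strait of Hormuz experiments:

```python
import numpy as np

def adapt(x, d, taps=8, mu=0.5, normalized=True, eps=1e-6):
    """(N)LMS adaptive equalizer: x is the received signal, d the desired one."""
    w = np.zeros(taps)
    err = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]        # tap-delay line (newest first)
        e = d[n] - w @ u                       # a-priori error
        step = mu / (eps + u @ u) if normalized else mu
        w += step * e * u                      # stochastic-gradient update
        err[n] = e
    return w, err

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=4000)                 # BPSK symbols
channel = np.array([1.0, 0.5, 0.2])                    # toy multipath channel
x = np.convolve(s, channel)[:len(s)] + 0.05 * rng.standard_normal(len(s))
w, e = adapt(x, s)                                     # NLMS run
print(np.mean(e[-500:] ** 2))                          # steady-state MSE
```

The APA/SR-APA/SPU/SM variants discussed in the paper modify the update step (multiple past regressors, partial coefficient updates, or update-only-on-error-bound-violation) to trade convergence rate against complexity and misadjustment.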
NASA Astrophysics Data System (ADS)
Arslan, Musa T.; Tofighi, Mohammad; Sevimli, Rasim A.; Çetin, Ahmet E.
2015-05-01
One of the main disadvantages of using commercial broadcasts in a Passive Bistatic Radar (PBR) system is the limited range resolution. Using multiple broadcast channels to improve radar performance has been offered as a solution to this problem; however, detection performance then suffers from the side-lobes that the matched filter creates when multiple channels are used. In this article, we introduce a deconvolution algorithm to suppress these side-lobes. The two-dimensional matched filter output of a PBR is analyzed as a deconvolution problem. The deconvolution algorithm is based on making successive projections onto the hyperplanes representing the time delay of a target. The resulting iterative deconvolution algorithm is globally convergent because all constraint sets are closed and convex. Simulation results for an FM-based PBR system are presented.
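Successive orthogonal projection onto hyperplanes defined by linear measurements is the classical Kaczmarz iteration; the following generic sketch illustrates the idea on a toy 1-D deblurring problem (the kernel and scene are hypothetical stand-ins, not the paper's 2-D FM ambiguity function):

```python
import numpy as np

def kaczmarz(H, y, sweeps=500):
    """Cyclic orthogonal projections onto the hyperplanes {x : H[i] @ x = y[i]}
    (the Kaczmarz iteration); each set is closed and convex, so the scheme
    converges for a consistent system."""
    x = np.zeros(H.shape[1])
    for _ in range(sweeps):
        for i in range(H.shape[0]):
            h = H[i]
            x += (y[i] - h @ x) / (h @ h) * h   # project onto hyperplane i
    return x

# Toy 1-D range profile: two point targets blurred by a hypothetical
# side-lobe kernel.
n = 20
kernel = np.exp(-0.7 * np.arange(5))
H = np.column_stack([np.convolve(np.eye(n)[:, k], kernel)[:n] for k in range(n)])
x_true = np.zeros(n)
x_true[[4, 13]] = [1.0, 0.6]
x_hat = kaczmarz(H, H @ x_true)
print(np.max(np.abs(x_hat - x_true)))
```

The projections recover the underlying point targets from the blurred matched-filter output, which is the mechanism by which the paper's algorithm suppresses side-lobes.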
Electroweak production of the top quark in the Run II of the D0 experiment (in French)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clement, Benoit
The work exposed in this thesis deals with the search for electroweak production of the top quark (single top) in proton-antiproton collisions at √s = 1.96 TeV. This production mode has not yet been observed. The analyzed data were collected during Run II of the D0 experiment at the Fermilab Tevatron collider and correspond to an integrated luminosity of 370 pb⁻¹. In the Standard Model, the decay of a top quark always produces a high-momentum bottom quark; therefore bottom quark jet identification plays a major role in this analysis. The large lifetime of b hadrons, and the subsequent large impact parameters of charged particle tracks relative to the interaction vertex, are used to tag bottom quark jets. The impact parameters of tracks attached to a jet are converted into the probability for the jet to originate from the primary vertex. This algorithm has a 45% tagging efficiency for a 0.5% mistag rate. Two processes (s and t channels) dominate single top production, with slightly different final states. The searched-for signature consists of 2 to 4 jets with at least one bottom quark jet, one charged lepton (electron or muon) and missing energy accounting for a neutrino. This final state is background dominated, and multivariate techniques are needed to separate the signal from the two main backgrounds: associated production of a W boson and jets, and top quark pair production. The achieved sensitivity is not enough for observation, and we computed upper limits at the 95% confidence level of 5 pb (s-channel) and 4.3 pb (t-channel) on the single top production cross-sections.
NASA Astrophysics Data System (ADS)
Cui, Chenxuan
When a cognitive radio (CR) operates, it starts by sensing the spectrum and looking for idle bandwidth. There are several methods by which a CR can decide whether a channel is occupied or idle, for example, energy detection, cyclostationary detection, and matched filtering detection [1]. Among them, the most common method is the energy detection scheme because of its algorithmic and implementation simplicity [2]. There are two major sensing approaches: the first is to sense a single channel slot with varying bandwidth, whereas the second is to sense multiple channels, each with the same bandwidth. After the sensing period, samples are compared with a preset detection threshold and a decision is made on whether the primary user (PU) is transmitting. The sensing and decision results can be erroneous; for example, false alarm errors and misdetection errors may occur. In order to better control the error probabilities and improve CR network performance (i.e., energy efficiency), we introduce cooperative sensing, in which several CRs within a certain range detect and make decisions on channel availability together. The decisions are transmitted to and analyzed by a data fusion center (DFC), which makes a final decision on channel availability. After the final decision has been made, the DFC sends it back to the CRs to tell them to stay idle or to start transmitting data to a secondary receiver (SR) within a preset transmission time. After the transmission, a new cycle starts again with sensing. This thesis report is organized as follows: Chapter II reviews some of the papers on optimizing CR energy efficiency. In Chapter III, we study how to achieve maximal energy efficiency when the CR senses a single channel with varying bandwidth under a constraint on the misdetection threshold in order to protect the PU; furthermore, a case study is given and we calculate the energy efficiency.
In Chapter IV, we study how to achieve maximal energy efficiency when the CR senses multiple channels, each with the same bandwidth; here too, we preset a misdetection threshold and calculate the energy efficiency. A comparison of the two sensing methods is given at the end of the chapter. Finally, Chapter V concludes this thesis.
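The per-radio energy decision and the DFC fusion step described above can be sketched as follows; the sample count, SNR, threshold and 3-out-of-5 voting rule are illustrative values, not those used in the thesis:

```python
import numpy as np

def energy_decision(samples, threshold):
    """Single-CR energy detector: declare the PU present when the average
    sample energy exceeds the preset detection threshold."""
    return np.mean(samples ** 2) > threshold

# Hypothetical scenario: 5 cooperating CRs, each taking N samples while the
# PU is actually transmitting; every CR sees independent unit-variance noise.
rng = np.random.default_rng(0)
N, snr, threshold = 200, 0.5, 1.2          # illustrative values
pu_signal = np.sqrt(snr) * rng.standard_normal(N)
votes = [energy_decision(pu_signal + rng.standard_normal(N), threshold)
         for _ in range(5)]

# Data fusion center: k-out-of-n (here 3-out-of-5) rule on the local votes.
final = sum(votes) >= 3
print(bool(final))
```

Raising the threshold lowers the false alarm probability at the cost of more misdetections, which is exactly the trade-off the thesis constrains to protect the PU.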
Earthquake Early Warning: New Strategies for Seismic Hardware
NASA Astrophysics Data System (ADS)
Allardice, S.; Hill, P.
2017-12-01
Implementing Earthquake Early Warning System (EEWS) triggering algorithms in seismic networks has been a hot topic of discussion for some years now. With digitizer technology now available, such as the Güralp Minimus, averaging 40-60 ms delay (latency) from earthquake origin to issuing an alert, the next step is to provide network operators with a simple interface for on-board parameter calculations at a seismic station. A voting mechanism implemented on board mitigates the risk of false positives being communicated. Each Minimus can be configured with a `score' from various sources, i.e. the Z channel on the seismometer, the N/S and E/W channels on the accelerometer, and the MEMS sensor inside the Minimus. If the score exceeds the set threshold, an alert is sent to the `Master Minimus'. The Master Minimus within the network is also configured with the conditions under which the alert should be issued, e.g. at least 3 stations must have triggered. Industry-standard algorithms focus on the calculation of Peak Ground Acceleration (PGA), Peak Ground Velocity (PGV), Peak Ground Displacement (PGD) and C. Calculating these single-station parameters on board, in order to stream only the results, could help network operators with possible issues such as restricted bandwidth. Developments on the Minimus allow these parameters to be calculated and distributed through the Common Alert Protocol (CAP). CAP is the XML-based data format used for exchanging and describing public warnings and emergencies. Whenever the trigger conditions are met, the Minimus can send a signed UDP packet to the configured CAP receiver, which can then send the alert via SMS, e-mail or CAP forwarding. Increasing network redundancy is also a consideration when developing these features; therefore the forwarded CAP message can be sent to multiple destinations.
This allows for a hierarchical approach by which the single-station (or network) parameters can be streamed to another Minimus, a data centre, or both, so that there is no single point of failure. The developments on the Güralp Minimus to calculate these parameters on board and stream them with ultra-low latency are the next generation of EEWS and Güralp's contribution to the community.
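The two-level voting scheme described above (a per-unit score across trigger sources, then a k-out-of-n rule at the Master Minimus) can be sketched abstractly; the source names and thresholds are illustrative, not Güralp's actual configuration interface:

```python
def station_vote(sources, score_threshold=2):
    """Per-unit vote: each source (e.g. seismometer Z channel, accelerometer
    N/S-E/W channels, on-board MEMS) contributes 1 to the score when its own
    trigger fires; the unit votes only if the score reaches the threshold.
    Source names and thresholds here are illustrative."""
    return sum(sources.values()) >= score_threshold

def master_alert(station_votes, min_stations=3):
    """Master-unit rule: issue the CAP alert only when at least
    min_stations units have voted."""
    return sum(station_votes) >= min_stations

# One unit triggers on its seismometer and MEMS but not its accelerometer:
vote = station_vote({"seis_Z": True, "accel_NS_EW": False, "mems": True})
print(master_alert([vote, True, True, False]))   # 3 of 4 stations -> alert
```

Requiring agreement both across sources within a unit and across stations within the network is what suppresses single-sensor false positives.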
Arrhythmia discrimination by physician and defibrillator: importance of atrial channel.
Diemberger, Igor; Martignani, Cristian; Biffi, Mauro; Frabetti, Lorenzo; Valzania, Cinzia; Cooke, Robin M T; Rapezzi, Claudio; Branzi, Angelo; Boriani, Giuseppe
2012-01-26
Many ICD carriers experience inappropriate shocks, but the relative merits of dual-/single-chamber devices for arrhythmia discrimination remain unclear. We explored possible advantages of the atrial data provided by dual-chamber implantable defibrillators (ICDs) for discrimination of real-life supraventricular/ventricular tachyarrhythmias (SVT/VT). 100 dual-chamber traces from 24 ICDs were blindly reviewed in dual-chamber and simulated single-chamber (with/without discriminator data) reading modes by five electrophysiologists who determined chamber of origin and provided Likert-scale "confidence" ratings. We assessed 1) intra/interobserver concordance; 2) diagnostic accuracy, using expert diagnoses as a reference standard; 3) ROC curves of sensitivity/specificity of "likelihood perception" scores, generated by combining chamber-of-origin diagnostic judgments with Likert-scale "confidence" ratings. We also assessed diagnostic accuracy of automated discrimination by all possible dual-/single-chamber algorithm configurations. Interobserver concordance was "substantial" (modified Cohen kappa-test values for dual-/single-chamber, 0.79/0.68); intraobserver concordance was "almost complete" (kappa ≥ 0.89). Dual-chamber mode provided the best diagnostic sensitivity/specificity (99%/92%) and highest reader confidence (p<0.001). The area under the ROC curves of sensitivity/specificity values for the "likelihood perception" score (representing electrophysiologists' perceptions of the likelihood that an episode was of ventricular origin) was highest in dual-chamber mode (0.98 vs. 0.93 for both single-chamber modes; p<0.001). Regarding automated discrimination, all four dual-chamber configurations conferred 100% sensitivity (specificity values ranged 39%-88%), whereas single-chamber configurations appeared inferior (best sensitivity/specificity combination, 89%/64%).
Availability of the atrial channel helps in reducing inappropriate ICD therapies by providing relevant advantages in terms of both appropriate cardiologist's post-hoc discrimination of SVT/VT (improving program tailoring) and automated arrhythmia discrimination. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Chowdhury, Shubhajit Roy
2012-04-01
The paper reports a Field Programmable Gate Array (FPGA) based embedded system for detection of the QRS complex in a noisy electrocardiogram (ECG) signal and, thereafter, differential diagnosis of tachycardia and tachyarrhythmia. The QRS complex is detected by applying an entropy measure of fuzziness to build a detection function from the ECG signal, which has previously been filtered to remove power line interference and baseline wander. Using the detected QRS complexes, differential diagnosis of tachycardia and tachyarrhythmia is performed. The entire algorithm has been realized in hardware on an FPGA. Tested on the standard CSE ECG database, the algorithm performed highly effectively: QRS detection sensitivity (Se) of 99.74% and accuracy of 99.5% were achieved using a single-channel ECG with the entropy criteria. The performance of the QRS detection system compares favorably with most of the QRS detection systems available in the literature. Using the system, 200 patients have been diagnosed with an accuracy of 98.5%.
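As a point of reference for the QRS detection step, a minimal moving-window energy detector (a generic Pan-Tompkins-style sketch, not the paper's entropy-of-fuzziness detection function) can be illustrated on a synthetic ECG-like signal:

```python
import numpy as np

def detect_r_peaks(ecg, fs, win=0.12, refractory=0.25):
    """Minimal QRS detector: squared derivative -> moving-window integration
    -> fixed threshold with a refractory period."""
    d = np.diff(ecg) ** 2
    w = max(1, int(win * fs))
    feat = np.convolve(d, np.ones(w) / w, mode='same')
    thresh = 0.3 * feat.max()
    peaks, last = [], -10 ** 9
    for i in range(1, len(feat) - 1):
        if (feat[i] > thresh and feat[i] >= feat[i - 1]
                and feat[i] > feat[i + 1] and i - last > refractory * fs):
            peaks.append(i)
            last = i
    return peaks

# Synthetic ECG-like signal: narrow spikes at 1 Hz plus mild noise.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
ecg = np.exp(-((t % 1.0) - 0.5) ** 2 / 0.0002) + 0.01 * rng.standard_normal(len(t))
print(len(detect_r_peaks(ecg, fs)))
```

The detected peak count over a known interval is exactly the rate input that a downstream tachycardia/tachyarrhythmia classifier would use.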
How many neurons can we see with current spike sorting algorithms?
Pedreira, Carlos; Martinez, Juan; Ison, Matias J.; Quian Quiroga, Rodrigo
2012-01-01
Recent studies highlighted the disagreement between the typical number of neurons observed with extracellular recordings and the number expected based on anatomical and physiological considerations. This disagreement has been mainly attributed to the presence of sparsely firing neurons. However, it is also possible that it is due to limitations of the spike sorting algorithms used to process the data. To address this issue, we used realistic simulations of extracellular recordings and found a relatively poor spike sorting performance for simulations containing a large number of neurons. In fact, the number of correctly identified neurons for single-channel recordings showed an asymptotic behavior, saturating at about 8–10 units when up to 20 units were present in the data. This performance was significantly poorer for neurons with low firing rates, as these units were twice as likely to be missed as those with high firing rates in simulations containing many neurons. These results uncover one of the main reasons for the relatively low number of neurons found in extracellular recordings and also stress the importance of further development of spike sorting algorithms. PMID:22841630
How many sleep stages do we need for an efficient automatic insomnia diagnosis?
Hamida, Sana Tmar-Ben; Glos, Martin; Penzel, Thomas; Ahmed, Beena
2016-08-01
Tools used by clinicians to diagnose and treat insomnia typically include sleep diaries and questionnaires. Overnight polysomnography (PSG) recordings are used when the initial diagnosis is uncertain due to the presence of other sleep disorders or when the treatment, either behavioral or pharmacologic, is unsuccessful. However, the analysis and the scoring of PSG data are time-consuming. To simplify the diagnosis process, in this paper we have proposed an efficient insomnia detection algorithm based on a central single electroencephalographic (EEG) channel (C3) using only deep sleep. We also analyzed several spectral and statistical EEG features of good sleeper controls and subjects suffering from insomnia in different sleep stages to identify the features that offered the best discrimination between the two groups. Our proposed algorithm was evaluated using EEG recordings from 19 patients diagnosed with primary insomnia (11 females, 8 males) and 16 matched control subjects (11 females, 5 males). The sensitivity of our algorithm is 92%, the specificity is 89.9%, the Cohen's kappa is 0.81 and the agreement is 91%, indicating the effectiveness of our proposed method.
Blind color isolation for color-channel-based fringe pattern profilometry using digital projection
NASA Astrophysics Data System (ADS)
Hu, Yingsong; Xi, Jiangtao; Chicharo, Joe; Yang, Zongkai
2007-08-01
We present an algorithm for estimating the color demixing matrix based on the color fringe patterns captured from the reference plane or the surface of the object. The advantage of this algorithm is that it is a blind approach to calculating the demixing matrix, in the sense that no extra images are required for color calibration before performing profile measurement. Simulation and experimental results demonstrate that the proposed algorithm can significantly reduce the influence of color cross talk and at the same time improve the measurement accuracy of color-channel-based phase-shifting profilometry.
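Once a demixing matrix has been estimated, color isolation is a per-pixel linear transform; the sketch below uses a hypothetical 3x3 cross-talk matrix to illustrate the principle (the paper's contribution is estimating this matrix blindly from the fringe patterns themselves):

```python
import numpy as np

# Hypothetical 3x3 mixing matrix modelling projector/camera color cross talk.
M = np.array([[1.00, 0.15, 0.05],
              [0.12, 1.00, 0.10],
              [0.03, 0.18, 1.00]])

rng = np.random.default_rng(0)
true_channels = rng.random((3, 64))    # ideal R, G, B fringe signals
captured = M @ true_channels           # what the camera actually records

# With the demixing matrix in hand (here simply M^-1), color isolation is
# a per-pixel linear transform:
D = np.linalg.inv(M)
recovered = D @ captured
print(np.max(np.abs(recovered - true_channels)))
```

With the cross talk removed, each color channel again carries an independent phase-shifted fringe, which is what the phase-shifting profilometry step requires.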
Decoding communities in networks
NASA Astrophysics Data System (ADS)
Radicchi, Filippo
2018-02-01
According to a recent information-theoretical proposal, the problem of defining and identifying communities in networks can be interpreted as a classical communication task over a noisy channel: memberships of nodes are information bits erased by the channel, edges and nonedges in the network are parity bits introduced by the encoder but degraded through the channel, and a community identification algorithm is a decoder. The interpretation is perfectly equivalent to the one at the basis of well-known statistical inference algorithms for community detection. The only difference in the interpretation is that a noisy channel replaces a stochastic network model. However, the different perspective gives the opportunity to take advantage of the rich set of tools of coding theory to generate novel insights on the problem of community detection. In this paper, we illustrate two main applications of standard coding-theoretical methods to community detection. First, we leverage a state-of-the-art decoding technique to generate a family of quasioptimal community detection algorithms. Second, and more importantly, we show that Shannon's noisy-channel coding theorem can be invoked to establish a lower bound, here named the decodability bound, on the maximum amount of noise tolerable by an ideal decoder that achieves perfect detection of communities. When computed for well-established synthetic benchmarks, the decodability bound accurately explains the performance achieved by the best existing community detection algorithms, indicating that little room for improvement remains.
Performance evaluation of spatial compounding in the presence of aberration and adaptive imaging
NASA Astrophysics Data System (ADS)
Dahl, Jeremy J.; Guenther, Drake; Trahey, Gregg E.
2003-05-01
Spatial compounding has been used for years to reduce speckle in ultrasonic images and to resolve anatomical features hidden behind the grainy appearance of speckle. Adaptive imaging restores image contrast and resolution by compensating for beamforming errors caused by tissue-induced phase errors. Spatial compounding represents a form of incoherent imaging, whereas adaptive imaging attempts to maintain a coherent, diffraction-limited aperture in the presence of aberration. Using a Siemens Antares scanner, we acquired single channel RF data on a commercially available 1-D probe. Individual channel RF data was acquired on a cyst phantom in the presence of a near field electronic phase screen. Simulated data was also acquired for both a 1-D and a custom built 8x96, 1.75-D probe (Tetrad Corp.). The data was compounded using a receive spatial compounding algorithm, a widely used approach because it takes advantage of parallel beamforming to avoid reductions in frame rate. Phase correction was also performed by using a least mean squares algorithm to estimate the arrival time errors. We present simulation and experimental data comparing the performance of spatial compounding to phase correction in contrast and resolution tasks. We evaluate spatial compounding and phase correction, and combinations of the two methods, under varying aperture sizes, aperture overlaps, and aberrator strengths to examine the optimum configuration and the conditions in which spatial compounding will provide a similar or better result than adaptive imaging. We find that, in general, phase correction is hindered at high aberration strengths and spatial frequencies, whereas spatial compounding is helped by these aberrators.
The ultraviolet detection component based on Te-Cs image intensifier
NASA Astrophysics Data System (ADS)
Qian, Yunsheng; Zhou, Xiaoyu; Wu, Yujing; Wang, Yan; Xu, Hua
2017-05-01
Ultraviolet detection technology has attracted wide attention and been adopted in the fields of ultraviolet warning and corona detection for its significant practical value. The component structure of the ultraviolet ICMOS, the imaging driver and the photon counting algorithm are studied in this paper. Firstly, a one-inch, wide-dynamic-range CMOS chip is coupled to the ultraviolet image intensifier through an optical fiber panel. The photocathode material in the ultraviolet image intensifier is Te-Cs, which provides the solar-blind characteristic, and the dual micro-channel plate (MCP) structure ensures sufficient gain to achieve single photon counting. Then, in consideration of the ultraviolet detection requirements, the drive circuit of the CMOS chip is designed and the corresponding program written in the Verilog language. According to the characteristics of ultraviolet imaging, histogram equalization is applied to enhance the ultraviolet image and connected-component labeling is used for ultraviolet single photon counting. Moreover, one visible-light video channel is reserved in the ultraviolet ICMOS camera, which can be used for the fusion of ultraviolet and visible images. Based on this module, an ultraviolet optical lens and a deep cut-off solar-blind filter are adopted to construct the ultraviolet detector. At last, a detection experiment on the single photon signal is carried out, and the test results are given and analyzed.
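The connected-component step of the photon counting algorithm can be sketched as a simple flood fill: each connected cluster of above-threshold pixels is counted as one photon event (a generic illustration; the threshold and frame contents are hypothetical):

```python
import numpy as np

def count_photons(frame, threshold):
    """Photon counting by connected-component labeling (4-connectivity):
    each connected cluster of above-threshold pixels is one photon event."""
    mask = frame > threshold
    seen = np.zeros_like(mask)
    events = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                events += 1
                stack = [(i, j)]               # flood-fill one cluster
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and not seen[y, x]:
                        seen[y, x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return events

frame = np.zeros((8, 8))
frame[1, 1] = 5.0          # single-pixel event
frame[4:6, 4:6] = 3.0      # one event spread across four pixels
print(count_photons(frame, threshold=1.0))   # → 2
```

Grouping pixels this way prevents a single photon whose charge spreads across the MCP output from being counted multiple times.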
New algorithms for microwave measurements of ocean winds
NASA Technical Reports Server (NTRS)
Wentz, F. J.; Peteherych, S.
1984-01-01
Improved second generation wind algorithms are used to process the three-month SEASAT SMMR and SASS data sets. The new algorithms are derived without using in situ anemometer measurements. All known biases in the sensors' measurements are removed, and the algorithms' model functions are internally self-consistent. The computed SMMR and SASS winds are collocated and compared on a 150 km cell-by-cell basis, giving a total of 115444 wind comparisons. The comparisons are done using three different sets of SMMR channels. When the 6.6H SMMR channel is used for wind retrieval, the SMMR and SASS winds agree to within 1.3 m/s over the SASS primary swath. At nadir, where the radar cross section is less sensitive to wind, the agreement degrades to 1.9 m/s. The agreement is very good for winds from 0 to 15 m/s. Above 15 m/s, the off-nadir SASS winds are consistently lower than the SMMR winds, while at nadir the high SASS winds are greater than SMMR's. When 10.7H is used for the SMMR wind channel, the SMMR/SASS wind comparisons are not quite as good. When the frequency of the wind channel is increased to 18 GHz, the SMMR/SASS agreement substantially degrades, to about 5 m/s.
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
Single channel recording of a mitochondrial calcium uniporter.
Wu, Guangyan; Li, Shunjin; Zong, Guangning; Liu, Xiaofen; Fei, Shuang; Shen, Linda; Guan, Xiangchen; Yang, Xue; Shen, Yuequan
2018-01-29
Mitochondrial calcium uniporter (MCU) is the pore-forming subunit of the entire uniporter complex and plays an important role in mitochondrial calcium uptake. However, the single channel recording of MCU remains controversial. Here, we expressed and purified different MCU proteins and then reconstituted them into planar lipid bilayers for single channel recording. We showed that MCU alone from Pyronema omphalodes (pMCU) is active, with prominent single channel Ca2+ currents. In sharp contrast, MCU alone from Homo sapiens (hMCU) is inactive. The essential MCU regulator (EMRE) activates hMCU, and therefore the complex (hMCU-hEMRE) shows prominent single channel Ca2+ currents. These single channel currents are sensitive to the specific MCU inhibitor Ruthenium Red. Our results clearly demonstrate that active MCU can conduct large amounts of calcium into the mitochondria. Copyright © 2018 Elsevier Inc. All rights reserved.
Tsanas, Athanasios; Clifford, Gari D
2015-01-01
Sleep spindles are critical in characterizing sleep and have been associated with cognitive function and pathophysiological assessment. Typically, their detection relies on the subjective and time-consuming visual examination of electroencephalogram (EEG) signal(s) by experts, and has led to large inter-rater variability as a result of poor definition of sleep spindle characteristics. Hitherto, many algorithmic spindle detectors inherently make signal stationarity assumptions (e.g., Fourier transform-based approaches) which are inappropriate for EEG signals, and frequently rely on additional information which may not be readily available in many practical settings (e.g., more than one EEG channel, or prior hypnogram assessment). This study proposes a novel signal processing methodology relying solely on a single EEG channel, and provides objective, accurate means toward probabilistically assessing the presence of sleep spindles in EEG signals. We use the intuitively appealing continuous wavelet transform (CWT) with a Morlet basis function, identifying regions of interest where the power of the CWT coefficients corresponding to the frequencies of spindles (11-16 Hz) is large. The potential for assessing the signal segment as a spindle is refined using local weighted smoothing techniques. We evaluate our findings on two databases: the MASS database comprising 19 healthy controls and the DREAMS sleep spindle database comprising eight participants diagnosed with various sleep pathologies. We demonstrate that we can replicate the experts' sleep spindles assessment accurately in both databases (MASS database: sensitivity: 84%, specificity: 90%, false discovery rate: 83%; DREAMS database: sensitivity: 76%, specificity: 92%, false discovery rate: 67%), outperforming six competing automatic sleep spindle detection algorithms in terms of correctly replicating the experts' assessment of detected spindles.
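The core CWT step, Morlet coefficients whose power is examined in the 11-16 Hz spindle band, can be sketched as follows; the wavelet construction and test signal are illustrative, and the paper's local weighted smoothing and probabilistic refinement are omitted:

```python
import numpy as np

def spindle_power(eeg, fs, freqs=(11, 13, 16)):
    """Summed power of complex Morlet-wavelet coefficients at a few
    frequencies sampling the 11-16 Hz spindle band."""
    power = np.zeros(len(eeg))
    t = np.arange(-1, 1, 1 / fs)
    for f in freqs:
        sigma = 5 / (2 * np.pi * f)                        # ~5-cycle wavelet
        wav = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma ** 2))
        wav /= np.abs(wav).sum()
        power += np.abs(np.convolve(eeg, wav, mode='same')) ** 2
    return power

fs = 200
t = np.arange(0, 4, 1 / fs)
eeg = 0.3 * np.sin(2 * np.pi * 2 * t)                      # slow background
eeg[400:600] += np.sin(2 * np.pi * 13 * t[400:600])        # 1 s spindle-like burst
p = spindle_power(eeg, fs)
print(p[400:600].mean() > 10 * p[:300].mean())
```

Because the complex wavelet's passband is narrow around each center frequency, the slow background is strongly attenuated and the spindle-band burst stands out in the power trace, which is then thresholded into regions of interest.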
NASA Technical Reports Server (NTRS)
Nguyen, Tien Manh
1989-01-01
MT's algorithm was developed as an aid in the design of space telecommunications systems with simultaneous range/command/telemetry operations. The algorithm provides selection of modulation indices for: (1) suppression of undesired signals to achieve desired link performance margins and/or to allow for a specified performance degradation in the data channel (command/telemetry) due to the presence of undesired signals (interferers); and (2) optimum power division between the carrier, the range channel, and the data channel. A software program using this algorithm was developed for use with MathCAD software. This program, called the MT program, computes optimum modulation indices for all cases recommended by the Consultative Committee for Space Data Systems (CCSDS), with emphasis on the square-wave NASA/JPL ranging system.
Kupinski, M. K.; Clarkson, E.
2015-01-01
We present a new method for computing optimized channels for channelized quadratic observers (CQO) that is feasible for high-dimensional image data. The method for calculating channels is applicable in general and optimal for Gaussian distributed image data. Gradient-based algorithms for determining the channels are presented for five different information-based figures of merit (FOMs). Analytic solutions for the optimum channels for each of the five FOMs are derived for the case of equal mean data for both classes. The optimum channels for three of the FOMs under the equal mean condition are shown to be the same. This result is critical since some of the FOMs are much easier to compute. Implementing the CQO requires a set of channels and the first- and second-order statistics of channelized image data from both classes. The dimensionality reduction from M measurements to L channels is a critical advantage of CQO since estimating image statistics from channelized data requires smaller sample sizes and inverting a smaller covariance matrix is easier. In a simulation study we compare the performance of ideal and Hotelling observers to CQO. The optimal CQO channels are calculated using both eigenanalysis and a new gradient-based algorithm for maximizing Jeffrey's divergence (J). Optimal channel selection without eigenanalysis makes J-CQO feasible on high-dimensional image data. PMID:26366764
A hybrid frame concealment algorithm for H.264/AVC.
Yan, Bo; Gharavi, Hamid
2010-01-01
In packet-based video transmission, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed to combat channel errors; however, most existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. To resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame, which provides more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.
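A simplified version of motion vector extrapolation, the idea underlying HMVE, projects each block of the last received frame along its motion vector into the lost frame; the sketch below omits the hybrid pixel/block refinements of the actual algorithm:

```python
import numpy as np

def extrapolate_mvs(prev_mvs):
    """Project each block of the last received frame along its motion vector
    into the lost frame; overlapping projections are averaged. A simplified
    stand-in for HMVE, which additionally mixes pixel- and block-level
    extrapolation and resolves uncovered blocks."""
    h, w, _ = prev_mvs.shape
    recovered = np.zeros_like(prev_mvs)
    counts = np.zeros((h, w))
    for by in range(h):
        for bx in range(w):
            dy, dx = prev_mvs[by, bx]
            ty, tx = by + int(round(dy)), bx + int(round(dx))
            if 0 <= ty < h and 0 <= tx < w:    # projected block lands in frame
                recovered[ty, tx] += prev_mvs[by, bx]
                counts[ty, tx] += 1
    nz = counts > 0
    recovered[nz] /= counts[nz][:, None]
    return recovered

mvs = np.zeros((4, 4, 2))
mvs[:, :, 1] = 1.0                  # uniform pan: every block moves one right
rec = extrapolate_mvs(mvs)
print(rec[2, 3])                    # blocks covered by the pan inherit (0, 1)
```

The recovered motion field is then used for motion-compensated reconstruction of the missing frame from the last correctly decoded one.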
NASA Technical Reports Server (NTRS)
Ferris, Alice T.; White, William C.
1988-01-01
Balance dynamic display unit (BDDU) is compact system conditioning six dynamic analog signals so they are monitored simultaneously in real time on single-trace oscilloscope. Typical BDDU oscilloscope display in scan mode shows each channel occupying one-sixth of total trace. System features two display modes usable with conventional, single-channel oscilloscope: multiplexed six-channel "bar-graph" format and single-channel display. Two-stage visual and audible limit alarm provided for each channel.
Local SAR in Parallel Transmission Pulse Design
Lee, Joonsung; Gebhardt, Matthias; Wald, Lawrence L.; Adalsteinsson, Elfar
2011-01-01
The management of local and global power deposition in human subjects (Specific Absorption Rate, SAR) is a fundamental constraint to the application of parallel transmission (pTx) systems. Even though the pTx and single channel have to meet the same SAR requirements, the complex behavior of the spatial distribution of local SAR for transmission arrays poses problems that are not encountered in conventional single-channel systems and places additional requirements on pTx RF pulse design. We propose a pTx pulse design method which builds on recent work to capture the spatial distribution of local SAR in numerical tissue models in a compressed parameterization in order to incorporate local SAR constraints within computation times that accommodate pTx pulse design during an in vivo MRI scan. Additionally, the algorithm yields a Protocol-specific Ultimate Peak in Local SAR (PUPiL SAR), which is shown to bound the achievable peak local SAR for a given excitation profile fidelity. The performance of the approach was demonstrated using a numerical human head model and a 7T eight-channel transmit array. The method reduced peak local 10g SAR by 14–66% for slice-selective pTx excitations and 2D selective pTx excitations compared to a pTx pulse design constrained only by global SAR. The primary tradeoff incurred for reducing peak local SAR was an increase in global SAR, up to 34% for the evaluated examples, which is favorable in cases where local SAR constraints dominate the pulse applications. PMID:22083594
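With a compressed parameterization of the tissue model, the local-SAR constraint reduces to a maximum of quadratic forms in the complex RF weights. A hedged sketch with invented 2-channel matrices, not a pTx pulse-design implementation:

```python
# Peak local SAR from a compressed set of Hermitian SAR matrices: for RF
# shim weights x, local SAR at compressed point k is the quadratic form
# x^H A_k x, and the peak is the maximum over k. The matrices below are
# made-up toy values, not model-derived SAR matrices.

def peak_local_sar(sar_matrices, x):
    """sar_matrices: list of Hermitian matrices (nested lists); x: weights."""
    peak = 0.0
    for A in sar_matrices:
        # quadratic form x^H A x (real-valued for Hermitian A)
        val = sum(x[i].conjugate() * A[i][j] * x[j]
                  for i in range(len(x)) for j in range(len(x))).real
        peak = max(peak, val)
    return peak
```

In a constrained pulse design, this maximum is what is held below the regulatory limit while the excitation-fidelity objective is minimized.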
NASA Astrophysics Data System (ADS)
Fan, Tong-liang; Wen, Yu-cang; Kadri, Chaibou
Orthogonal frequency-division multiplexing (OFDM) is robust against frequency-selective fading because of the increased symbol duration. However, the time-varying nature of the channel causes inter-carrier interference (ICI), which destroys the orthogonality of the sub-carriers and severely degrades system performance. To alleviate the detrimental effect of ICI, ICI mitigation within one OFDM symbol is needed. We propose an iterative ICI estimation and cancellation technique for OFDM systems based on regularized constrained total least squares. In the proposed scheme, ICI is not treated as additional additive white Gaussian noise (AWGN); instead, the effect of ICI and inter-symbol interference (ISI) on channel estimation is regarded as a perturbation of the channel. We propose a novel algorithm for channel estimation based on regularized constrained total least squares. Computer simulations show that significant improvement can be obtained by the proposed scheme in fast fading channels.
Tang, Bo-Hui; Wu, Hua-; Li, Zhao-Liang; Nerry, Françoise
2012-07-30
This work addressed the validation of the MODIS-derived bidirectional reflectivity retrieval algorithm in the mid-infrared (MIR) channel, proposed by Tang and Li [Int. J. Remote Sens. 29, 4907 (2008)], with ground-measured data collected from a field campaign that took place in June 2004 at the ONERA (Office National d'Etudes et de Recherches Aérospatiales) center of Fauga-Mauzac, on the PIRRENE (Programme Interdisciplinaire de Recherche sur la Radiométrie en Environnement Extérieur) experiment site [Opt. Express 15, 12464 (2007)]. The leaving-surface spectral radiances measured by a BOMEM (MR250 Series) Fourier transform interferometer were used to calculate the ground brightness temperatures by combining the inversion of the Planck function with the spectral response functions of MODIS channels 22 and 23, and then to estimate the ground brightness temperature without the contribution of the solar direct beam and the bidirectional reflectivity using Tang and Li's proposed algorithm. In parallel, the simultaneously measured atmospheric profiles were used to obtain the atmospheric parameters and then to calculate the ground brightness temperature without the contribution of the solar direct beam, based on the atmospheric radiative transfer equation in the MIR region. Comparison of these two brightness temperatures, obtained by the two different methods, indicated that the root mean square error (RMSE) between the brightness temperatures estimated using Tang and Li's algorithm and the atmospheric radiative transfer equation is 1.94 K. In addition, comparison of the hemispherical-directional reflectances derived by Tang and Li's algorithm with those obtained from the field measurements showed an RMSE of 0.011, which indicates that Tang and Li's algorithm is feasible for retrieving the bidirectional reflectivity in the MIR channel from MODIS data.
Simulation of 3-D Nonequilibrium Seeded Air Flow in the NASA-Ames MHD Channel
NASA Technical Reports Server (NTRS)
Gupta, Sumeet; Tannehill, John C.; Mehta, Unmeel B.
2004-01-01
The 3-D nonequilibrium seeded air flow in the NASA-Ames experimental MHD channel has been numerically simulated. The channel contains a nozzle section, a center section, and an accelerator section where magnetic and electric fields can be imposed on the flow. In recent tests, velocity increases of up to 40% have been achieved in the accelerator section. The flow in the channel is numerically computed using a 3-D parabolized Navier-Stokes (PNS) algorithm that has been developed to efficiently compute MHD flows in the low magnetic Reynolds number regime. The MHD effects are modeled by introducing source terms into the PNS equations, which can then be solved in a very efficient manner. The algorithm has been extended in the present study to account for nonequilibrium seeded air flows. The electrical conductivity of the flow is determined using the program of Park. The new algorithm has been used to compute two test cases that match the experimental conditions. In both cases, magnetic and electric fields are applied to the seeded flow. The computed results are in good agreement with the experimental data.
Performance of convolutional codes on fading channels typical of planetary entry missions
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.; Reale, T. J.
1974-01-01
The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift keyed modulation and Viterbi maximum likelihood decoding, and for longer constraint length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to the modeling of the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes the bit error probability performance was investigated as a function of E_b/N_0, parameterized by the fading channel parameters. For longer constraint length codes, the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effect of simple block interleaving in combating the memory of the channel is explored, using both analysis and digital computer simulation.
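For reference, Viterbi maximum-likelihood decoding of a short-constraint-length code can be sketched compactly. The rate-1/2, constraint-length-3 code with octal generators (7, 5) below is a standard textbook example, not one of the specific codes studied, and the hard-decision branch metric omits any fading model:

```python
# Minimal hard-decision Viterbi decoder for the rate-1/2, K=3 convolutional
# code with generators (7, 5) octal. Illustrative sketch only; the planetary
# entry channel would add fading statistics on top of this.

G = [0b111, 0b101]  # generator polynomials

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state  # new bit enters the MSB of the register
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(rx, n_bits):
    INF = float("inf")
    metric = {0: 0}   # path metric per state
    paths = {0: []}   # survivor (decoded bits) per state
    for i in range(n_bits):
        r = rx[2 * i:2 * i + 2]
        new_metric, new_paths = {}, {}
        for state, m in metric.items():
            for b in (0, 1):
                reg = (b << 2) | state
                exp = [bin(reg & g).count("1") & 1 for g in G]
                d = (exp[0] != r[0]) + (exp[1] != r[1])  # Hamming branch metric
                ns = reg >> 1
                if new_metric.get(ns, INF) > m + d:
                    new_metric[ns] = m + d
                    new_paths[ns] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)  # best surviving path
    return paths[best]
```

A single channel error leaves the decoded bits unchanged, illustrating the error-correcting power that the fading-channel study quantifies as a function of E_b/N_0.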
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform was studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a noisy memoryless channel, including the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm was used based on the steepest descent method, which under certain convexity conditions on the performance of the channel-optimized quantizers yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, with appropriate comparisons against a reference system designed for a channel without errors.
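The bit-assignment step can be illustrated with the classic greedy marginal-gain allocation under the high-rate distortion model D_i = var_i * 2^(-2*b_i). This is a simplified stand-in for the channel-optimized steepest-descent procedure, with made-up coefficient variances:

```python
# Greedy bit allocation among transform coefficients: give each successive
# bit to the coefficient whose quantizer distortion drops the most, under
# the standard high-rate model D_i = var_i * 2**(-2 * b_i). A simplified
# stand-in for the paper's channel-optimized steepest-descent allocation.

def allocate_bits(variances, total_bits):
    bits = [0] * len(variances)
    for _ in range(total_bits):
        # marginal distortion reduction of one more bit on coefficient i:
        # var_i * (4**(-b) - 4**(-(b+1))) = 0.75 * var_i * 4**(-b)
        gains = [0.75 * v * 4.0 ** (-b) for v, b in zip(variances, bits)]
        i = max(range(len(gains)), key=gains.__getitem__)
        bits[i] += 1
    return bits
```

High-variance (low-frequency) coefficients receive the most bits, matching the usual DCT coding behavior.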
Fast convergent frequency-domain MIMO equalizer for few-mode fiber communication systems
NASA Astrophysics Data System (ADS)
He, Xuan; Weng, Yi; Wang, Junyi; Pan, Z.
2018-02-01
Space division multiplexing using few-mode fibers has been extensively explored to sustain the continuous traffic growth. In few-mode fiber optical systems, both spatial and polarization modes are exploited to transmit parallel channels, thus increasing the overall capacity. However, signals on spatial channels inevitably suffer from intrinsic inter-modal coupling and large accumulated differential mode group delay (DMGD), which makes the spatial modes even harder to demultiplex. Many research articles have demonstrated that a frequency-domain adaptive multi-input multi-output (MIMO) equalizer can effectively compensate the DMGD and demultiplex the spatial channels with digital signal processing (DSP). However, the large accumulated DMGD usually requires a large number of training blocks for the initial convergence of adaptive MIMO equalizers, which decreases the overall system efficiency and may even degrade equalizer performance in fast-changing optical channels. The least mean square (LMS) algorithm is commonly used in MIMO equalization to dynamically demultiplex the spatial signals. We have proposed a signal power spectral density (PSD) dependent method and a noise PSD directed method to improve the convergence speed of the adaptive frequency-domain LMS algorithm. We also proposed a frequency-domain recursive least squares (RLS) algorithm to further increase the convergence speed of the MIMO equalizer at the cost of greater hardware complexity. In this paper, we compare the hardware complexity and convergence speed of the signal PSD dependent and noise PSD directed algorithms against the conventional frequency-domain LMS algorithm. In our numerical study of a three-mode 112 Gbit/s PDM-QPSK optical system with 3000 km transmission, the noise PSD directed and signal PSD dependent methods improved the convergence speed by 48.3% and 36.1%, respectively, at the cost of 17.2% and 10.7% higher hardware complexity.
We also compare the frequency-domain RLS algorithm against the conventional frequency-domain LMS algorithm. Our numerical study shows that, in a three-mode 224 Gbit/s PDM-16-QAM system with 3000 km transmission, the RLS algorithm improves the convergence speed by 53.7% over the conventional frequency-domain LMS algorithm.
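The LMS update at the heart of these equalizers is simple; a single-tap, time-domain scalar sketch (the paper's equalizers apply the same rule per frequency bin across modes, which this toy version does not model):

```python
# Scalar LMS adaptive equalizer sketch: w <- w + mu * e * conj(x).
# A single-tap, time-domain illustration of the update rule that
# frequency-domain MIMO LMS equalizers apply per bin and per mode.

def lms_equalize(rx, training, mu=0.1):
    """rx: received symbols; training: known transmitted symbols."""
    w = 0j
    for x, d in zip(rx, training):
        y = w * x          # equalizer output
        e = d - y          # error against the training symbol
        w = w + mu * e * x.conjugate()
    return w
```

For a flat channel h, the tap converges toward 1/h; the convergence-speed improvements quoted above shorten exactly this training phase.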
Holland, Katherine D; Bouley, Thomas M; Horn, Paul S
2017-07-01
Variants in the neuronal voltage-gated sodium channel α-subunit genes SCN1A, SCN2A, and SCN8A are common in early onset epileptic encephalopathies and other autosomal dominant childhood epilepsy syndromes. However, in clinical practice, missense variants are often classified as variants of uncertain significance when heritability cannot be determined. Genetic testing reports often include results of computational tests to estimate pathogenicity and the frequency of that variant in population-based databases. The objective of this work was to enhance clinicians' understanding of results by (1) determining how effectively computational algorithms predict the epileptogenicity of sodium channel (SCN) missense variants; (2) optimizing their predictive capabilities; and (3) determining whether epilepsy-associated SCN variants are present in population-based databases. This will help clinicians better interpret indeterminate SCN test results in people with epilepsy. Pathogenic, likely pathogenic, and benign variants in SCNs were identified using databases of sodium channel variants. Benign variants were also identified from population-based databases. Eight algorithms commonly used to predict pathogenicity were compared. In addition, logistic regression was used to determine whether a combination of algorithms could better predict pathogenicity. Based on American College of Medical Genetics criteria, 440 variants were classified as pathogenic or likely pathogenic and 84 were classified as benign or likely benign. Twenty-eight variants previously associated with epilepsy were present in population-based gene databases. The output provided by most computational algorithms had high sensitivity but low specificity, with an accuracy of 0.52-0.77. Accuracy could be improved by adjusting the threshold for pathogenicity.
Using this adjustment, the Mendelian Clinically Applicable Pathogenicity (M-CAP) algorithm had an accuracy of 0.90 and a combination of algorithms increased the accuracy to 0.92. Potentially pathogenic variants are present in population-based sources. Most computational algorithms overestimate pathogenicity; however, a weighted combination of several algorithms increased classification accuracy to >0.90. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
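The threshold adjustment is straightforward to reproduce: sweep candidate cutoffs over the observed scores and keep the one that maximizes accuracy. The scores and labels below are made-up toy values, not real SCN variant data:

```python
# Tuning a pathogenicity-score threshold to maximize accuracy, mirroring
# the cutoff adjustment described above. Toy data, not real variant scores.

def best_threshold(scores, labels):
    """labels: 1 = pathogenic, 0 = benign. Returns (threshold, accuracy)
    where a variant is called pathogenic when score >= threshold."""
    best = (None, -1.0)
    for t in sorted(set(scores)):
        acc = sum((s >= t) == bool(y) for s, y in zip(scores, labels)) / len(labels)
        if acc > best[1]:
            best = (t, acc)
    return best
```

Because most tools default to cutoffs tuned for high sensitivity, re-optimizing the threshold on a labeled variant set is what recovers the specificity noted in the abstract.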
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Zhengyan; Zgadzaj, Rafal; Wang Xiaoming
2010-11-04
We demonstrate a prototype Frequency Domain Streak Camera (FDSC) that can capture the picosecond time evolution of the plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index 'bubble' in fused silica glass, supplementing a conventional Frequency Domain Holographic (FDH) probe-reference pair that co-propagates with the 'bubble'. Frequency Domain Tomography (FDT) generalizes the FDSC by probing the 'bubble' from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (Temporal Multiplexing and Angular Multiplexing) improve data storage and processing capability, demonstrating a compact FDT system with a single spectrometer.
Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps
NASA Technical Reports Server (NTRS)
Gerson, Ira A.; Jasiuk, Mark A.
1990-01-01
Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback of CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm, which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
NASA Astrophysics Data System (ADS)
Price, V.; Weber, T.; Jerram, K.; Doucet, M.
2016-12-01
The analysis of multi-frequency, narrow-band single-beam acoustic data for fisheries applications has long been established, with methodology focusing on characterizing targets in the water column by utilizing complex algorithms and false-color time series data to create and compare frequency response curves for dissimilar biological groups. These methods were built on concepts developed for multi-frequency analysis of satellite imagery for terrestrial analysis and have been applied to a broad range of data types and applications. Single-beam systems operating at multiple frequencies are also used for the detection and identification of seeps in water column data. Here we incorporate the same analysis and visualization techniques used for fisheries applications to attempt to characterize and quantify seeps by creating and comparing frequency response curves and applying false coloration to shallow and deep multi-channel seep data. From this information, we can establish methods to differentiate bubble size in the echogram and differentiate seep composition. These techniques are also useful in differentiating plume content from biological noise (volume reverberation) created by euphausiid layers and fish with gas-filled swim bladders. The combining of the multiple frequencies using false coloring and other image analysis techniques after applying established normalization and beam pattern correction algorithms is a novel approach to quantitatively describing seeps. Further, this information could be paired with geological models, backscatter, and bathymetry data to assess seep distribution.
Demodulation Algorithms for the OFDM Signals in the Time- and Frequency-Scattering Channels
NASA Astrophysics Data System (ADS)
Bochkov, G. N.; Gorokhov, K. V.; Kolobkov, A. V.
2016-06-01
We consider a method based on the generalized maximum-likelihood rule for solving the problem of reception of signals with orthogonal frequency division multiplexing of their harmonic components (OFDM signals) in time- and frequency-scattering channels. Coherent and incoherent demodulators that effectively use the time scattering due to fast fading of the signal are developed. Using computer simulation, we performed a comparative analysis of the proposed algorithms and well-known signal-reception algorithms with equalizers. The proposed symbol-by-symbol detector with decision feedback and restriction of the number of searched variants is shown to have the best bit-error-rate performance. It is shown that under conditions of limited accuracy in estimating the communication-channel parameters, the incoherent OFDM-signal detectors with differential phase-shift keying can ensure a better bit-error-rate performance than the coherent OFDM-signal detectors with absolute phase-shift keying.
NASA Astrophysics Data System (ADS)
Xu, Ding; Li, Qun
2017-01-01
This paper addresses the power allocation problem for cognitive radio (CR) based on hybrid automatic repeat request (HARQ) with Chase combining (CC) in Nakagami-m slow fading channels. We assume that, instead of perfect instantaneous channel state information (CSI), only statistical CSI is available at the secondary user (SU) transmitter. The aim is to minimize the SU outage probability under the primary user (PU) interference outage constraint. Using the Lagrange multiplier method, an iterative and recursive algorithm is derived to obtain the optimal power allocation for each transmission round. Extensive numerical results are presented to illustrate the performance of the proposed algorithm.
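The Lagrange-multiplier machinery behind such allocations is typically a one-dimensional bisection on the multiplier until the constraint is met with equality. As a hedged illustration, the sketch below solves classic water-filling rather than the HARQ-CC outage problem itself:

```python
# Bisection on a Lagrange multiplier, the workhorse inside iterative power-
# allocation algorithms. This sketch solves textbook water-filling
# (maximize sum log(1 + g_i * p_i) s.t. sum p_i = P), not the paper's
# HARQ-CC outage minimization.

def waterfill(gains, total_power, tol=1e-9):
    lo = 0.0
    hi = total_power + max(1.0 / g for g in gains)  # bracket the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        # optimal per-channel power for water level mu: p_i = max(0, mu - 1/g_i)
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    mu = 0.5 * (lo + hi)
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

The outer bisection on the multiplier is the same pattern whether the inner per-round power rule comes from water-filling or from an outage-probability derivative.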
Robust vector quantization for noisy channels
NASA Technical Reports Server (NTRS)
Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.
1988-01-01
The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.
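A minimal version of the index-assignment idea: greedily swap binary labels so that codewords whose indices differ by one bit are close in signal space, reducing the distortion caused by single index-bit errors. This is a toy stand-in for the paper's assignment algorithms, using scalar codewords:

```python
# Binary index assignment for VQ channel robustness: greedily swap codeword
# labels to reduce the distortion incurred when a single index bit flips.
# Scalar codewords and exhaustive pairwise swaps; a simplified stand-in for
# the paper's algorithms.

def assignment_cost(order, codewords, n_bits):
    """order[i] = codeword index carried by binary index i."""
    cost = 0.0
    for i, ci in enumerate(order):
        for b in range(n_bits):
            j = i ^ (1 << b)  # index received after a single bit error
            cost += (codewords[ci] - codewords[order[j]]) ** 2
    return cost

def improve_assignment(codewords, n_bits, sweeps=10):
    n = 1 << n_bits
    order = list(range(n))
    for _ in range(sweeps):
        for a in range(n):
            for b in range(a + 1, n):
                trial = order[:]
                trial[a], trial[b] = trial[b], trial[a]
                if assignment_cost(trial, codewords, n_bits) < assignment_cost(order, codewords, n_bits):
                    order = trial
    return order
```

Greedy swapping finds only a local optimum, but even that captures most of the gain over a random labeling in small codebooks.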
Toward a Lake Ice Phenology Derived from VIIRS Data
NASA Astrophysics Data System (ADS)
Sütterlin, Melanie; Duguay-Tetzlaff, Anke; Wunderle, Stefan
2017-04-01
Ice cover on lakes plays an essential role in the physical, chemical, and biological processes of freshwater systems (e.g., ice duration controls the seasonal heat budget of lakes), and it also has many economic implications (e.g., for hydroelectricity, transportation, winter tourism). The variability and trends in the seasonal cycle of lake ice (e.g., timing of freeze-up and break-up) represent robust and direct indicators of climate change; they therefore emphasize the importance of monitoring lake ice phenology. Satellite remote sensing has proven its great potential for detecting and measuring the ice cover on lakes. Different remote sensing systems have been successfully used to collect recordings of freeze-up, break-up, and ice thickness and increase the spatial and temporal coverage of ground-based observations. Therefore, within the Global Climate Observing System (GCOS) Swiss project, "Integrated Monitoring of Ice in Selected Swiss Lakes," initiated by MeteoSwiss, satellite images from various sensors and different approaches are used and compared to perform investigations aimed at integrated monitoring of lake ice in Switzerland and contributing to the collection of lake ice phenology recordings. Within the framework of this project, the Remote Sensing Research Group of the University of Bern (RSGB) utilizes data acquired in the fine-resolution imagery (I) bands (1-5) of the Visible Infrared Imaging Radiometer Suite (VIIRS) sensor that is mounted onboard the SUOMI-NPP. Visible and near-infrared reflectances, as well as thermal infrared-derived lake surface water temperatures (LSWT), are used to retrieve lake ice phenology dates. The VIIRS instrument, which combines a high temporal resolution (~2 times per day) with a reasonable spatial resolution (375 m), is equipped with a single broad-band thermal I-channel (I05). Thus, a single-channel LSWT retrieval algorithm is employed to correct for the atmospheric influence.
The single channel algorithm applied in this study is a physical mono-window (PMW) model based on the Radiative Transfer for the Television Infrared Observation Satellite Operational Vertical Sounder code (RTTOV). RTTOV, which is a fast radiative transfer model, can be used to estimate upward and downward atmospheric path radiance and atmospheric transmittance in the thermal infrared for a specific atmospheric profile. In this study, atmospheric profiles from ECMWF ERA-interim are utilized to run RTTOV and simulate top-of-atmosphere (TOA) brightness temperatures. We present the first retrievals of LSWT and ice features from corrected clear-sky channel I05 data of the VIIRS sensor. Together with VIS and NIR reflectance values, these first LSWT retrievals are used to derive ice-on/off dates for selected Swiss lakes by applying a threshold method. After successful validation based on in-situ measurements of Swiss lakes, the method can be utilized for global application.
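The final ice-on/ice-off dating can be sketched as a simple threshold pass over the LSWT series. The 1 degC cutoff and the toy series below are illustrative assumptions, not the project's calibrated values:

```python
# Threshold-based ice-on/ice-off dating from a lake-surface-water-temperature
# (LSWT) time series, sketching the kind of threshold method described above.
# The threshold and the series are illustrative, not calibrated values.

def ice_dates(lswt_by_day, threshold_c=1.0):
    """lswt_by_day: list of (day_of_year, lswt_celsius), time-ordered.
    Returns (ice_on_day, ice_off_day): first day LSWT drops below the
    threshold, and first later day it rises back above it."""
    ice_on = ice_off = None
    for day, t in lswt_by_day:
        if ice_on is None and t < threshold_c:
            ice_on = day
        elif ice_on is not None and ice_off is None and t >= threshold_c:
            ice_off = day
    return ice_on, ice_off
```

In practice the decision would combine LSWT with the VIS/NIR reflectances mentioned above and require persistence over several clear-sky observations to reject cloud-contaminated retrievals.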
NASA Astrophysics Data System (ADS)
Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.
2013-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 106). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
NASA Astrophysics Data System (ADS)
Worley, Jennings F.; Deitmer, Joachim W.; Nelson, Mark T.
1986-08-01
Single smooth muscle cells were enzymatically isolated from the rabbit mesenteric artery. At physiological levels of external Ca, these cells were relaxed and contracted on exposure to norepinephrine, caffeine, or high levels of potassium. The patch-clamp technique was used to measure unitary currents through single channels in the isolated cells. Single channels were selective for divalent cations and exhibited two conductance levels, 8 pS and 15 pS. Both types of channels were voltage-dependent, and channel activity occurred at potentials positive to -40 mV. The activity of both channel types was almost completely inhibited by 50 nM nisoldipine. These channels appear to be the pathways for voltage-dependent Ca influx in vascular smooth muscle and may be the targets of the clinically used dihydropyridines.
High capacity low delay packet broadcasting multiaccess schemes for satellite repeater systems
NASA Astrophysics Data System (ADS)
Bose, S. K.
1980-12-01
Demand assigned packet radio schemes using satellite repeaters can achieve high capacities but often exhibit relatively large delays under low traffic conditions when compared to random access. Several schemes which improve delay performance at low traffic while retaining high capacity are presented and analyzed. These schemes allow random access attempts by users who are waiting for channel assignments. Their performance is considered in the context of a multiple point communication system carrying fixed length messages between geographically distributed (ground) user terminals which are linked via a satellite repeater. Channel assignments are made following a BCC queueing discipline by a (ground) central controller on the basis of requests correctly received over a collision type access channel. In TBACR Scheme A, some of the forward message channels are set aside for random access transmissions; the rest are used in a demand assigned mode. Schemes B and C operate all their forward message channels in a demand assignment mode but, by means of appropriate algorithms for trailer channel selection, allow random access attempts on unassigned channels. The latter scheme also introduces framing and slotting of the time axis to implement a more efficient algorithm for trailer channel selection than the former.
Rainfall Estimation over the Nile Basin using Multi-Spectral, Multi-Instrument Satellite Techniques
NASA Astrophysics Data System (ADS)
Habib, E.; Kuligowski, R.; Sazib, N.; Elshamy, M.; Amin, D.; Ahmed, M.
2012-04-01
Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt, since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared (IR) algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). In this study, the authors report on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application by the NFC over the Nile Basin. The algorithm uses a set of rainfall predictors from multi-spectral IR cloud-top observations and self-calibrates them to a set of predictands from the more accurate, but less frequent, microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels that have recently become available to the NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources, such as the Special Sensor Microwave/Imager (SSM/I), the Special Sensor Microwave Imager and Sounder (SSMIS), the Advanced Microwave Sounding Unit (AMSU), the Advanced Microwave Scanning Radiometer on EOS (AMSR-E), and the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression.
We test two modes of algorithm calibration: real-time calibration, with continuous updates of the coefficients as new MW rain rates arrive, and calibration using static coefficients derived from past IR-MW observations. We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms (e.g., the 'Tropical Rainfall Measuring Mission (TRMM) and other sources' (TRMM-3B42) product and the National Oceanographic and Atmospheric Administration Climate Prediction Center (NOAA-CPC) CMORPH product). The algorithm has several potential future applications, such as improving the accuracy of hydrologic forecasting models over the Nile Basin and using the enhanced rainfall datasets and better-calibrated hydrologic models to assess the impacts of climate change on the region's water availability with global circulation models and regional climate models.
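The two-step structure (rain/no-rain separation, then regression on raining pixels) can be sketched in a single-predictor toy form. Real SCaMPR uses multi-predictor discriminant analysis and stepwise regression; the values below are invented:

```python
# Two-step SCaMPR-style estimation sketch: (1) a one-dimensional discriminant
# separates rain from no-rain pixels, (2) a least-squares fit of rain rate on
# the IR predictor is applied to the raining pixels. Single-predictor toy
# version with made-up numbers, not the operational algorithm.

def fit_rain_model(predictor, rain_rate):
    """predictor: IR-derived values; rain_rate: MW rain rates (0 = no rain)."""
    wet = [(x, r) for x, r in zip(predictor, rain_rate) if r > 0]
    dry = [x for x, r in zip(predictor, rain_rate) if r == 0]
    # step 1: discriminant threshold midway between the class means
    m_wet = sum(x for x, _ in wet) / len(wet)
    m_dry = sum(dry) / len(dry)
    thresh = 0.5 * (m_wet + m_dry)
    rain_side = 1.0 if m_wet > m_dry else -1.0
    # step 2: least-squares line rate = a*x + b on raining pixels only
    n = len(wet)
    sx = sum(x for x, _ in wet)
    sr = sum(r for _, r in wet)
    sxx = sum(x * x for x, _ in wet)
    sxr = sum(x * r for x, r in wet)
    a = (n * sxr - sx * sr) / (n * sxx - sx * sx)
    b = (sr - a * sx) / n

    def predict(x):
        if rain_side * (x - thresh) <= 0:   # classified as no-rain
            return 0.0
        return max(0.0, a * x + b)          # regression estimate, clipped at 0
    return predict
```

For IR brightness temperatures, colder cloud tops correspond to heavier rain, which is why the sketch keeps track of which side of the threshold is the raining side.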
A high data rate universal lattice decoder on FPGA
NASA Astrophysics Data System (ADS)
Ma, Jing; Huang, Xinming; Kura, Swapna
2005-06-01
This paper presents the architecture design of a high data rate universal lattice decoder for MIMO channels on an FPGA platform. A Pohst-strategy-based lattice decoding algorithm is modified in this paper to reduce the complexity of the closest lattice point search. The data dependency of the improved algorithm is examined, and a parallel, pipelined architecture is developed with the iterative decoding function on the FPGA and the division-intensive channel matrix preprocessing on a DSP. Simulation results demonstrate that the improved lattice decoding algorithm provides a better bit error rate and fewer iterations compared with the original algorithm. The system prototype of the decoder shows that it supports data rates up to 7 Mbit/s on a Virtex2-1000 FPGA, which is about 8 times faster than the original algorithm on the FPGA platform and two orders of magnitude better than its implementation on a DSP platform.
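The closest-point search being accelerated can be sketched as enumeration with radius pruning: candidates whose partial distance already exceeds the best distance found are discarded. A tiny 2x2 real-valued toy, not the paper's algorithm or FPGA architecture:

```python
# Closest-lattice-point search sketch: enumerate integer symbol candidates,
# pruning branches whose partial distance already exceeds the best distance
# found so far. Tiny 2x2 real-valued illustration of the kind of search a
# Pohst-style lattice decoder performs.

def closest_point(H, y, symbols):
    """H: 2x2 channel matrix (rows), y: length-2 received vector,
    symbols: candidate integers per antenna. Returns the best (s1, s2)."""
    best, best_d = None, float("inf")
    for s1 in symbols:
        for s2 in symbols:
            r0 = y[0] - (H[0][0] * s1 + H[0][1] * s2)
            d = r0 * r0
            if d >= best_d:      # prune: partial distance already too big
                continue
            r1 = y[1] - (H[1][0] * s1 + H[1][1] * s2)
            d += r1 * r1
            if d < best_d:
                best, best_d = (s1, s2), d
    return best
```

A real sphere decoder first triangularizes H (QR decomposition) so that the partial distances grow layer by layer, which makes the pruning far more effective than in this brute-force toy.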
NASA Technical Reports Server (NTRS)
Gleman, Stuart M. (Inventor); Rowe, Geoffrey K. (Inventor)
1999-01-01
An ultrasonic bolt gage is described which uses a cross-correlation algorithm to determine the tension applied to a fastener, such as a bolt. The cross-correlation analysis is preferably performed using a processor operating on a series of captured ultrasonic echo waveforms. The ultrasonic bolt gage is further described as using the captured ultrasonic echo waveforms to perform additional modes of analysis, such as feature recognition. Multiple tension data outputs, therefore, can be obtained from a single data acquisition for increased measurement reliability. In addition, one embodiment of the gage is described as multi-channel, having a multiplexer for performing a tension analysis on one of a plurality of bolts.
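The cross-correlation delay estimate at the core of such a gage can be sketched with integer lags: the lag that maximizes the correlation between two echo waveforms gives the time-of-flight shift, which tension changes. Toy waveforms, not real ultrasonic echoes:

```python
# Cross-correlation time-delay estimation sketch: the integer lag that
# maximizes the correlation between a reference echo and a later echo
# gives the time-of-flight shift. Toy integer-lag version; a real gage
# would interpolate to sub-sample resolution.

def best_lag(ref, echo, max_lag):
    """Return the integer lag (in samples) that best aligns echo with ref."""
    def xcorr(lag):
        return sum(ref[i] * echo[i + lag]
                   for i in range(len(ref))
                   if 0 <= i + lag < len(echo))
    return max(range(-max_lag, max_lag + 1), key=xcorr)
```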
Smart wireless sensor for physiological monitoring.
Tomasic, Ivan; Avbelj, Viktor; Trobec, Roman
2015-01-01
Presented is a wireless body sensor capable of measuring local potential differences on a body surface. Using on-sensor signal processing capabilities, together with algorithms developed for off-line signal processing on a personal computing device, it is possible to record a single-channel ECG, heart rate, breathing rate, EMG, and, when three sensors are applied, even the 12-lead ECG. The sensor is portable, unobtrusive, and suitable for both inpatient and outpatient monitoring. The paper presents the sensor's hardware and the results of a power consumption analysis. The sensor's capabilities of recording various physiological parameters are also presented and illustrated. The paper concludes with envisioned future developments and prospects for the sensor.
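As an illustration of the kind of off-line processing such a sensor enables, here is a minimal sketch of deriving heart rate from a single-channel ECG by crude R-peak detection; the threshold, refractory period, and synthetic trace are assumptions for illustration, not the authors' algorithms:

```python
import numpy as np

def heart_rate_bpm(ecg, fs, thresh_frac=0.6, refractory_s=0.25):
    """Crude R-peak detector: local maxima above a threshold, separated
    by at least a refractory period; heart rate from mean R-R interval."""
    thresh = thresh_frac * np.max(ecg)
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if ecg[i] >= thresh and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if i - last >= refractory:
                peaks.append(i)
                last = i
    if len(peaks) < 2:
        return 0.0
    rr = np.diff(peaks) / fs        # R-R intervals in seconds
    return 60.0 / np.mean(rr)       # beats per minute

# Synthetic "ECG": one narrow spike per second on a 250 Hz trace
fs = 250
t = np.arange(0, 10, 1.0 / fs)
ecg = np.zeros_like(t)
ecg[::fs] = 1.0                     # 1 beat per second
print(round(heart_rate_bpm(ecg, fs)))  # 60
```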
Retrieving Volcanic SO2 from the 4-UV channels on DSCOVR/EPIC
NASA Astrophysics Data System (ADS)
Fisher, B. L.; Krotkov, N. A.; Carn, S. A.; Taylor, S.; Li, C.; Bhartia, P. K.; Huang, L. K.; Haffner, D. P.
2017-12-01
Since arriving at the L1 Lagrange point in June 2015, the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) has been collecting continuous full-disk images of the sunlit earth from a distance of 1.5 million km. EPIC is a 10-band spectroradiometer with a field of view (FoV) at the earth's surface of about 25 km, providing a unique opportunity to observe the initial appearance and evolution of SO2 plumes from volcanic eruptions at roughly 90-minute temporal resolution. Our algorithm uses the 317.5, 325, 340 and 388 nm UV channels on EPIC to retrieve volcanic SO2, total column ozone, and Lambertian equivalent reflectivity and its spectral dependence. The MS_SO2 algorithm has been successfully applied to data from legacy and current NASA missions (e.g., Nimbus7/TOMS, SNPP/OMPS, and Aura/OMI). The separation between ozone and SO2 is possible due to differences in the cross sections at the two shortest UV channels. The images for each spectral channel are not perfectly aligned owing to the earth's rotation, geo-rectification, cloud noise, exposure time and spacecraft jitter; these issues introduce additional noise into a multi-channel inversion. In this presentation, we describe some modifications to the algorithm that attempt to account for these issues. By comparing the plume areas, mass tonnage and peak SO2 values with those from other low-earth-orbiting satellites, we show that the algorithm significantly improves identification of the plume while eliminating false positives.
Aerosol Correction for Remotely Sensed Sea Surface Temperatures From the NOAA AVHRR: Phase II
NASA Astrophysics Data System (ADS)
Nalli, N. R.; Ignatov, A.
2002-05-01
For over two decades, the National Oceanic and Atmospheric Administration (NOAA) has produced global retrievals of sea surface temperature (SST) using infrared (IR) data from the Advanced Very High Resolution Radiometer (AVHRR). The standard multichannel retrieval algorithms are derived from regression analyses of AVHRR window channel brightness temperatures against in situ buoy measurements under non-cloudy conditions, thus providing a correction for IR attenuation due to molecular water vapor absorption. However, for atmospheric conditions with elevated aerosol levels (e.g., arising from dust, biomass burning and volcanic eruptions), such algorithms lead to significant negative biases in SST because of IR attenuation arising from aerosol absorption and scattering. This research presents the development of a 2nd-phase aerosol correction algorithm for daytime AVHRR SST. To accomplish this, a long-term (1990-1998), global AVHRR-buoy matchup database was created by merging the Pathfinder Atmospheres (PATMOS) and Oceans (PFMDB) data sets. The merged data are unique in that they include multi-year, global daytime estimates of aerosol optical depth (AOD) derived from AVHRR channels 1 and 2 (0.63 and 0.83 μm, respectively), along with an effective Ångström exponent derived from the AOD retrievals (Ignatov and Nalli, 2002). Recent enhancements in the aerosol data constitute an improvement over the Phase I algorithm (Nalli and Stowe, 2002), which relied only on channel 1 AOD and the ratio of normalized reflectances from channels 1 and 2. The Ångström exponent and channel 2 AOD provide important statistical information about the particle size distribution of the aerosol. The SST bias can be parametrically expressed as a function of the observed AVHRR channels 1 and 2 slant-path AOD, the normalized reflectance ratio and the Ångström exponent.
Based upon these empirical relationships, aerosol correction equations are then derived for the daytime multichannel and nonlinear SST (MCSST and NLSST) algorithms. Separate sets of coefficients are utilized for two aerosol modes, these being stratospheric/tropospheric (e.g., volcanic aerosol) and tropospheric (e.g., dust, smoke). The algorithms are subsequently applied to retrospective PATMOS data to demonstrate the potential for climate applications. The minimization of cold biases in the AVHRR SST, as demonstrated in this work, should improve its overall utility for the general user community.
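The parametric bias expression described above amounts to a multiple linear regression of the SST bias on the aerosol predictors; a minimal sketch with synthetic data (the coefficients and noise level are invented for illustration, not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
tau1 = rng.uniform(0.0, 0.6, n)     # channel-1 slant-path AOD
tau2 = rng.uniform(0.0, 0.4, n)     # channel-2 slant-path AOD
alpha = rng.uniform(0.0, 2.0, n)    # Angstrom exponent

# Synthetic "truth": a cold SST bias that grows with aerosol loading
bias = -1.5 * tau1 - 0.5 * tau2 + 0.2 * alpha + rng.normal(0, 0.05, n)

# Least-squares fit of the parametric bias model
X = np.column_stack([np.ones(n), tau1, tau2, alpha])
coef, *_ = np.linalg.lstsq(X, bias, rcond=None)
corrected = bias - X @ coef         # residual bias after correction
print(coef.round(2))
print(bool(np.std(corrected) < 0.06))  # True: only the noise remains
```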
Linear methods for reducing EMG contamination in peripheral nerve motor decodes.
Kagan, Zachary B; Wendelken, Suzanne; Page, David M; Davis, Tyler; Hutchinson, Douglas T; Clark, Gregory A; Warren, David J
2016-08-01
Signals recorded from the peripheral nervous system (PNS) with high channel count penetrating microelectrode arrays, such as the Utah Slanted Electrode Array (USEA), often have electromyographic (EMG) signals contaminating the neural signal. This common-mode signal source may prevent single neural units from successfully being detected, thus hindering motor decode algorithms. Reducing this EMG contamination may lead to more accurate motor decode performance. A virtual reference (VR), created by a weighted linear combination of signals from a subset of all available channels, can be used to reduce this EMG contamination. Four methods of determining individual channel weights and six different methods of selecting subsets of channels were investigated (24 different VR types in total). The methods of determining individual channel weights were equal weighting, regression-based weighting, and two different proximity-based weightings. The subsets of channels were selected by a radius-based criterion, such that a channel was included if it was within a particular radius of inclusion from the target channel. The six radii of inclusion were 1.5, 2.9, 3.2, 5, 8.4, and 12.8 electrode-distances; the 12.8 electrode-distance radius includes all USEA electrodes. We found that application of a VR improves the detectability of neural events by increasing the SNR, but we found no statistically meaningful difference amongst the VR types we examined. The computational complexity of implementation varies with respect to the method of determining channel weights and the number of channels in a subset, but does not correlate with VR performance. Hence, we examined the computational costs of calculating and applying the VR, and based on these criteria we recommend an equal weighting method of assigning weights with a 3.2 electrode-distance radius of inclusion. Further, we found empirically that application of the recommended VR requires less than 1 ms for 33.3 ms of data from one USEA.
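A minimal sketch of the recommended configuration, an equal-weighted virtual reference over channels within a radius of inclusion, might look like this; the electrode layout and signals are toy assumptions rather than USEA data:

```python
import numpy as np

def virtual_reference(data, positions, target, radius):
    """Equal-weighted virtual reference: subtract the mean of all other
    channels within `radius` electrode-distances of the target channel."""
    d = np.linalg.norm(positions - positions[target], axis=1)
    subset = (d <= radius) & (np.arange(len(positions)) != target)
    vr = data[subset].mean(axis=0)
    return data[target] - vr

# 4 channels on a line; common-mode "EMG" plus a unit spike on channel 0
positions = np.array([[0.0, 0], [1.0, 0], [2.0, 0], [3.0, 0]])
t = np.arange(100)
emg = np.sin(2 * np.pi * t / 25.0)     # shared EMG contamination
data = np.tile(emg, (4, 1))
data[0, 50] += 1.0                     # "neural event" on channel 0 only
cleaned = virtual_reference(data, positions, target=0, radius=3.2)
print(round(float(cleaned[50]), 3))    # 1.0 -> common mode removed
```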
Comparison Between Three Different Types of Routing Algorithms of Network on Chip
NASA Astrophysics Data System (ADS)
Soni, Neetu; Deshmukh, Khemraj
Network on Chip (NoC) is an on-chip communication technology in which a large number of processing elements and storage blocks are integrated on a single chip. Owing to their scalability, adaptive nature, and efficient resource utilization, NoCs have become popular and have largely replaced bus-based SoC interconnects. NoC performance depends mainly on the routing algorithm chosen. In this paper, routing algorithms of three different types are compared: one deterministic algorithm (XY routing), three partially adaptive algorithms (West-first, North-last and Negative-first), and two adaptive algorithms (DyAD and OE). They are compared with respect to the Packet Injection Rate (PIR) under a random traffic pattern on a 4 × 4 mesh topology. All comparisons and simulations are carried out in NOXIM 2.3.1, a cycle-accurate SystemC-based simulator. Packets are injected with a Poisson distribution, the buffer depth of the input-channel FIFO is 8, the packet size is 8 bytes, and the simulation time is 50,000 cycles. We found that XY routing performs best when the PIR is low, the partially adaptive algorithms do well when the PIR is moderate, and DyAD routing is best suited when the load (PIR) is high.
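The deterministic XY algorithm itself is simple to state: route along the X dimension until the destination column is reached, then along Y. A sketch of the hop sequence it produces on a mesh:

```python
def xy_route(src, dst):
    """Deterministic XY routing on a mesh: traverse the X dimension
    first, then Y. Returns the list of (x, y) hops from src to dst."""
    x, y = src
    tx, ty = dst
    path = [(x, y)]
    while x != tx:                 # X dimension first
        x += 1 if tx > x else -1
        path.append((x, y))
    while y != ty:                 # then the Y dimension
        y += 1 if ty > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```

Because the turn from Y back to X is never taken, XY routing is deadlock-free, which is why it serves as the deterministic baseline in such comparisons.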
Two MODIS Aerosol Products over Ocean on the Terra and Aqua CERES SSF Datasets.
NASA Astrophysics Data System (ADS)
Ignatov, Alexander; Minnis, Patrick; Loeb, Norman; Wielicki, Bruce; Miller, Walter; Sun-Mack, Sunny; Tanré, Didier; Remer, Lorraine; Laszlo, Istvan; Geier, Erika
2005-04-01
Understanding the impact of aerosols on the earth's radiation budget and the long-term climate record requires consistent measurements of aerosol properties and radiative fluxes. The Clouds and the Earth's Radiant Energy System (CERES) Science Team combines satellite-based retrievals of aerosols, clouds, and radiative fluxes into Single Scanner Footprint (SSF) datasets from the Terra and Aqua satellites. Over ocean, two aerosol products are derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) using different sampling and aerosol algorithms. The primary, or M, product is taken from the standard multispectral aerosol product developed by the MODIS aerosol group, while a simpler, secondary [Advanced Very High Resolution Radiometer (AVHRR) like], or A, product is derived by the CERES Science Team using a different cloud clearing method and a single-channel aerosol algorithm. Two aerosol optical depths (AOD), τA1 and τA2, are derived from MODIS bands 1 (0.644 μm) and 6 (1.632 μm), resembling the AVHRR/3 channels 1 and 3A, respectively. On Aqua the retrievals are made in band 7 (2.119 μm) because of poor quality data from band 6. The respective Ångström exponents can be derived from the values of τ. The A product serves as a backup for the M product. More importantly, the overlap of these aerosol products is essential for placing the 20+ year heritage AVHRR aerosol record in the context of more advanced aerosol sensors and algorithms such as that used for the M product. This study documents the M and A products, highlighting their CERES SSF specifics. Based on 2 weeks of global Terra data, coincident M and A AODs are found to be strongly correlated in both bands. However, both the domains in which the M and A aerosols are available and the respective τ/α statistics differ significantly because of discrepancies in sampling due to differences in cloud and sun-glint screening.
In both aerosol products, correlation is observed between the retrieved aerosol parameters (τ/α) and ambient cloud amount, with the dependence in the M product being more pronounced than in the A product.
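The Ångström exponent mentioned above follows from the AODs at the two wavelengths as α = −ln(τ1/τ2)/ln(λ1/λ2); a sketch using the band-1 and band-6 centre wavelengths quoted in the abstract (the τ values are invented for illustration):

```python
import numpy as np

def angstrom_exponent(tau1, tau2, lam1=0.644, lam2=1.632):
    """Angstrom exponent from AODs at two wavelengths (micrometres):
    alpha = -ln(tau1 / tau2) / ln(lam1 / lam2)."""
    return -np.log(tau1 / tau2) / np.log(lam1 / lam2)

# If tau falls off as lam**-1.3, the retrieval should recover 1.3
alpha_true = 1.3
tau1 = 0.644 ** (-alpha_true) * 0.1
tau2 = 1.632 ** (-alpha_true) * 0.1
print(round(float(angstrom_exponent(tau1, tau2)), 3))  # 1.3
```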
Evaluation of a novel triple-channel radiochromic film analysis procedure using EBT2.
van Hoof, Stefan J; Granton, Patrick V; Landry, Guillaume; Podesta, Mark; Verhaegen, Frank
2012-07-07
A novel approach to reading out radiochromic film was introduced recently by the manufacturer of GafChromic film. In this study, the performance of this triple-channel film dosimetry method was compared against the conventional single-red-channel film dosimetry procedure, with and without inclusion of a pre-irradiation (pre-IR) film scan, using EBT2 film and kilo- and megavoltage photon beams up to 10 Gy. When considering doses averaged over regions of interest, the triple-channel method and both single-channel methods produced equivalent results. Absolute dose discrepancies between the triple-channel method, both single-channel methods and the treatment planning system calculated dose values were no larger than 5 cGy for dose levels up to 2.2 Gy. Signal to noise in triple-channel dose images was found to be similar to that in single-channel dose images. The accuracy of the resulting dose images from the triple- and single-channel methods with inclusion of a pre-IR film scan was found to be similar. A comparison of EBT2 data from a kilovoltage depth dose experiment with corresponding Monte Carlo depth dose data produced dose discrepancies of 9.5 ± 12 cGy and 7.6 ± 6 cGy for the single-channel method with inclusion of a pre-IR film scan and the triple-channel method, respectively. EBT2 was shown to be energy sensitive at low kilovoltage energies, with response differences of 11.9% and 15.6% in the red channel at 2 Gy between 50-225 kVp and 80-225 kVp photon spectra, respectively. We observed that the triple-channel method resulted in non-uniformity corrections of ±1% and consistency values of 0-3 cGy for the batches and dose levels studied. The results of this study indicate that the triple-channel radiochromic film read-out method performs at least as well as the single-channel method with inclusion of a pre-IR film scan, reduces film non-uniformity, and saves time by eliminating the pre-IR film scan.
Zhang, Gaoyuan; Wen, Hong; Wang, Longye; Xie, Ping; Song, Liang; Tang, Jie; Liao, Runfa
2017-01-01
In this paper, we propose an adaptive single differential coherent detection (SDCD) scheme for binary phase shift keying (BPSK) signals in IEEE 802.15.4 Wireless Sensor Networks (WSNs). In particular, the residual carrier frequency offset effect (CFOE) for differential detection is adaptively estimated, with only linear operations, according to the changing channel conditions. No a priori knowledge of the carrier frequency offset (CFO) or the chip signal-to-noise ratio (SNR) is needed. This is partly because the combination of the trigonometric approximation sin⁻¹(x) ≈ x and a useful assumption, namely asymptotic or high chip SNR, is used to simplify the full estimation scheme. Simulation results demonstrate that the proposed algorithm achieves an accurate estimation and that the detection performance fully meets the requirements of the IEEE 802.15.4 standard, although with a small loss of reliability and robustness compared with the conventional optimal single-symbol detector. PMID:29278404
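A much-simplified sketch of residual-CFO estimation from the differential phase, using the small-angle approximation sin⁻¹(x) ≈ x mentioned above; the all-ones chip sequence, chip rate, and offset are illustrative assumptions, not the paper's full adaptive scheme:

```python
import numpy as np

def estimate_cfo(r, chip_period):
    """Estimate a small residual CFO from unit-amplitude samples: average
    the one-lag conjugate product, then apply sin^-1(x) ~ x to its
    imaginary part (the differential phase)."""
    z = np.mean(r[1:] * np.conj(r[:-1]))
    return z.imag / (2 * np.pi * chip_period)

# All-ones chip sequence rotated by a 200 Hz offset at 1 Mchip/s
Tc = 1e-6
f_true = 200.0
n = np.arange(1000)
r = np.exp(2j * np.pi * f_true * n * Tc)
print(round(estimate_cfo(r, Tc), 1))  # 200.0
```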
Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Neal, Maxwell; Cashmere, David J; Germain, Anne; Reifman, Jaques
2018-02-01
Electroencephalography (EEG) recordings during sleep are often contaminated by muscle and ocular artefacts, which can affect the results of spectral power analyses significantly. However, the extent to which these artefacts affect EEG spectral power across different sleep states has not been quantified explicitly. Consequently, the effectiveness of automated artefact-rejection algorithms in minimizing these effects has not been characterized fully. To address these issues, we analysed standard 10-channel EEG recordings from 20 subjects during one night of sleep. We compared their spectral power when the recordings were contaminated by artefacts and after we removed them by visual inspection or by using automated artefact-rejection algorithms. During both rapid eye movement (REM) and non-REM (NREM) sleep, muscle artefacts contaminated no more than 5% of the EEG data across all channels. However, they corrupted delta, beta and gamma power levels substantially by up to 126, 171 and 938%, respectively, relative to the power level computed from artefact-free data. Although ocular artefacts were infrequent during NREM sleep, they affected up to 16% of the frontal and temporal EEG channels during REM sleep, primarily corrupting delta power by up to 33%. For both REM and NREM sleep, the automated artefact-rejection algorithms matched power levels to within ~10% of the artefact-free power level for each EEG channel and frequency band. In summary, although muscle and ocular artefacts affect only a small fraction of EEG data, they affect EEG spectral power significantly. This suggests the importance of using artefact-rejection algorithms before analysing EEG data. © 2017 European Sleep Research Society.
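Band-power computations of the kind underlying this analysis can be sketched with a simple periodogram; the band edges and synthetic "EEG" and "EMG" signals below are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Power in the [f_lo, f_hi) Hz band from a periodogram of x."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return spec[band].sum()

fs = 256
t = np.arange(0, 4, 1.0 / fs)
eeg = np.sin(2 * np.pi * 2 * t)          # 2 Hz "delta" component
emg = 0.5 * np.sin(2 * np.pi * 40 * t)   # 40 Hz "muscle" contamination
delta_clean = band_power(eeg, fs, 0.5, 4.0)
delta_noisy = band_power(eeg + emg, fs, 0.5, 4.0)
gamma_noisy = band_power(eeg + emg, fs, 30.0, 100.0)
print(bool(abs(delta_noisy / delta_clean - 1.0) < 0.01))  # True
print(bool(gamma_noisy > 1.0))                            # True
```

Here a narrowband 40 Hz contaminant leaves delta power untouched but inflates gamma power from zero, mirroring the band-specific corruption the study quantifies.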
Description and control of dissociation channels in gas-phase protein complexes
NASA Astrophysics Data System (ADS)
Thachuk, Mark; Fegan, Sarah K.; Raheem, Nigare
2016-08-01
Using molecular dynamics simulations of a coarse-grained model of the charged apo-hemoglobin protein complex, this work expands upon our initial report [S. K. Fegan and M. Thachuk, J. Am. Soc. Mass Spectrom. 25, 722-728 (2014)] about control of dissociation channels in the gas phase using specially designed charge tags. Employing a charge hopping algorithm and a range of temperatures, a variety of dissociation channels are found for activated gas-phase protein complexes. At low temperatures, a single monomer unfolds and becomes charge enriched. At higher temperatures, two additional channels open: (i) two monomers unfold and charge enrich and (ii) two monomers compete for unfolding with one eventually dominating and the other reattaching to the complex. At even higher temperatures, other more complex dissociation channels open with three or more monomers competing for unfolding. A model charge tag with five sites is specially designed to either attract or exclude charges. By attaching this tag to the N-terminus of specific monomers, the unfolding of those monomers can be decidedly enhanced or suppressed. In other words, using charge tags to direct the motion of charges in a protein complex provides a mechanism for controlling dissociation. This technique could be used in mass spectrometry experiments to direct forces at specific attachment points in a protein complex, and hence increase the diversity of product channels available for quantitative analysis. In turn, this could provide insight into the function of the protein complex in its native biological environment. From a dynamics perspective, this system provides an interesting example of cooperative behaviour involving motions with differing time scales.
Dynamic variable selection in SNP genotype autocalling from APEX microarray data.
Podder, Mohua; Welch, William J; Zamar, Ruben H; Tebbutt, Scott J
2006-11-30
Single nucleotide polymorphisms (SNPs) are DNA sequence variations, occurring when a single nucleotide--adenine (A), thymine (T), cytosine (C) or guanine (G)--is altered. Arguably, SNPs account for more than 90% of human genetic variation. Our laboratory has developed a highly redundant SNP genotyping assay consisting of multiple probes with signals from multiple channels for a single SNP, based on arrayed primer extension (APEX). This mini-sequencing method is a powerful combination of a highly parallel microarray with distinctive Sanger-based dideoxy terminator sequencing chemistry. Using this microarray platform, our current genotype calling system (known as SNP Chart) is capable of calling single SNP genotypes by manual inspection of the APEX data, which is time-consuming and exposed to user subjectivity bias. Using a set of 32 Coriell DNA samples plus three negative PCR controls as a training data set, we have developed a fully-automated genotyping algorithm based on simple linear discriminant analysis (LDA) using dynamic variable selection. The algorithm combines separate analyses based on the multiple probe sets to give a final posterior probability for each candidate genotype. We have tested our algorithm on a completely independent data set of 270 DNA samples, with validated genotypes, from patients admitted to the intensive care unit (ICU) of St. Paul's Hospital (plus one negative PCR control sample). Our method achieves a concordance rate of 98.9% with a 99.6% call rate for a set of 96 SNPs. By adjusting the threshold value for the final posterior probability of the called genotype, the call rate reduces to 94.9% with a higher concordance rate of 99.6%. We also reversed the two independent data sets in their training and testing roles, achieving a concordance rate up to 99.8%. The strength of this APEX chemistry-based platform is its unique redundancy having multiple probes for a single SNP. 
Our model-based genotype calling algorithm captures the redundancy in the system considering all the underlying probe features of a particular SNP, automatically down-weighting any 'bad data' corresponding to image artifacts on the microarray slide or failure of a specific chemistry. In this regard, our method is able to automatically select the probes which work well and reduce the effect of other so-called bad performing probes in a sample-specific manner, for any number of SNPs.
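A generic sketch of LDA-based calling with posterior probabilities (pooled covariance, equal priors) is shown below; the 2-D probe-signal clusters are invented for illustration, and the authors' dynamic variable selection and probe down-weighting are not reproduced:

```python
import numpy as np

def lda_posteriors(X_train, y_train, x_new, n_classes=3):
    """Posterior probabilities from linear discriminant analysis with a
    pooled covariance matrix and equal class priors."""
    d = X_train.shape[1]
    means = np.array([X_train[y_train == k].mean(axis=0) for k in range(n_classes)])
    pooled = sum(np.cov(X_train[y_train == k].T) * (np.sum(y_train == k) - 1)
                 for k in range(n_classes)) / (len(y_train) - n_classes)
    inv = np.linalg.inv(pooled + 1e-9 * np.eye(d))
    # linear discriminant scores -> softmax posteriors
    scores = np.array([x_new @ inv @ m - 0.5 * m @ inv @ m for m in means])
    p = np.exp(scores - scores.max())
    return p / p.sum()

rng = np.random.default_rng(1)
# three genotype clusters (AA, AB, BB) in a 2-D probe-signal space
centers = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
X = np.vstack([c + 0.05 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat([0, 1, 2], 50)
post = lda_posteriors(X, y, np.array([0.95, 0.05]))
print(post.argmax())  # 2 -> called as "BB"
```

Thresholding the winning posterior, as the abstract describes, trades call rate against concordance.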
Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.
Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C
2009-09-01
A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as the T1-weighted image, can be performed prior to segmentation, the results are usually limited by partial volume effects due to interpolation of the low-resolution images. To improve the quality of tumor segmentation in clinical applications, where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on a Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation is obtained compared with conventional multi-channel segmentation algorithms.
Hu, Yi; Loizou, Philipos C
2010-06-01
Attempts to develop noise-suppression algorithms that can significantly improve speech intelligibility in noise by cochlear implant (CI) users have met with limited success. This is partly because algorithms were sought that would work equally well in all listening situations, which has been quite challenging given the variability in the temporal/spectral characteristics of real-world maskers. A different approach is taken in the present study, focused on the development of environment-specific noise suppression algorithms. The proposed algorithm selects a subset of the envelope amplitudes for stimulation based on the signal-to-noise ratio (SNR) of each channel. Binary classifiers, trained using data collected from a particular noisy environment, are first used to classify the mixture envelopes of each channel as either target-dominated (SNR ≥ 0 dB) or masker-dominated (SNR < 0 dB). Only target-dominated channels are subsequently selected for stimulation. Results with CI listeners indicated substantial improvements (by nearly 44 percentage points at 5 dB SNR) in intelligibility with the proposed algorithm when tested with sentences embedded in three real-world maskers. The present study demonstrated that the environment-specific approach to noise reduction has the potential to restore speech intelligibility in noise to a level near that attained in quiet.
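The channel-selection rule, stimulate only target-dominated channels (SNR ≥ 0 dB), can be sketched as follows; here the per-channel SNR is computed from known envelopes for illustration, whereas the study estimates it with trained binary classifiers:

```python
import numpy as np

def select_channels(target_env, masker_env, criterion_db=0.0):
    """Keep a channel's mixture envelope only when its SNR meets the
    criterion; zero out masker-dominated channels."""
    eps = 1e-12
    snr_db = 10 * np.log10((target_env ** 2 + eps) / (masker_env ** 2 + eps))
    keep = snr_db >= criterion_db
    return np.where(keep, target_env + masker_env, 0.0), keep

target = np.array([0.8, 0.1, 0.6, 0.05])   # per-channel target envelopes
masker = np.array([0.2, 0.5, 0.1, 0.4])    # per-channel masker envelopes
stim, keep = select_channels(target, masker)
print(keep.tolist())  # [True, False, True, False]
```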
FPGA implementation of low complexity LDPC iterative decoder
NASA Astrophysics Data System (ADS)
Verma, Shivani; Sharma, Sanjay
2016-07-01
Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained importance due to their capacity-achieving property and excellent performance in noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message passing algorithm and a partially parallel decoder architecture. The simplified message passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check node complexity of the decoder. The partially parallel decoder architecture possesses high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps with a maximum of 18 decoding iterations. The article presents the implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on a Xilinx XC3SD3400A device from the Spartan-3A DSP family.
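The min-sum approximation mentioned above replaces the exact check-node tanh rule with a sign-and-minimum computation, which is what makes hardware decoders cheap; a sketch of a single check-node update (a generic illustration, not the article's simplified message passing algorithm):

```python
import numpy as np

def min_sum_check_node(llrs):
    """Min-sum check-node update: for each incoming LLR, the outgoing
    message has the product of all the *other* signs and the minimum of
    all the *other* magnitudes."""
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    m1, m2 = mags[order[0]], mags[order[1]]   # two smallest magnitudes
    out = np.where(np.arange(len(llrs)) == order[0], m2, m1)
    return total_sign * signs * out           # divides out own sign

print(min_sum_check_node([2.0, -1.5, 0.5, 3.0]).tolist())
# [-0.5, 0.5, -1.5, -0.5]
```

Tracking only the overall sign and the two smallest magnitudes per check node is the usual hardware trick: every outgoing message can be formed from those three quantities.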
Hemmateenejad, Bahram; Akhond, Morteza; Miri, Ramin; Shamsipur, Mojtaba
2003-01-01
A QSAR algorithm, principal component-genetic algorithm-artificial neural network (PC-GA-ANN), has been applied to a set of newly synthesized calcium channel blockers, which are of special interest because of their role in cardiac diseases. A data set of 124 1,4-dihydropyridines bearing different ester substituents at the C-3 and C-5 positions of the dihydropyridine ring and nitroimidazolyl, phenylimidazolyl, and methylsulfonylimidazolyl groups at the C-4 position with known Ca(2+) channel binding affinities was employed in this study. Ten different sets of descriptors (837 descriptors) were calculated for each molecule. Principal component analysis was used to compress the descriptor groups into principal components. The most significant descriptors of each set were selected and used as input for the ANN. The genetic algorithm (GA) was used for the selection of the best set of extracted principal components. A feed-forward artificial neural network trained with the error back-propagation algorithm was used to model the nonlinear relationship between the selected principal components and the biological activity of the dihydropyridines. A comparison between PC-GA-ANN and routine PC-ANN shows that the first model yields better prediction ability.
Color filter array design based on a human visual model
NASA Astrophysics Data System (ADS)
Parmar, Manu; Reeves, Stanley J.
2004-05-01
To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
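Sequential backward selection as used for the array design can be sketched generically: start from the full candidate set and repeatedly drop the element whose removal yields the lowest cost. The toy cost function below stands in for the paper's perceptual error criterion:

```python
def backward_selection(items, cost, keep):
    """Sequential backward selection: repeatedly drop the item whose
    removal minimizes the cost, until `keep` items remain."""
    selected = list(items)
    while len(selected) > keep:
        trials = [(cost([s for s in selected if s != cand]), cand)
                  for cand in selected]
        _, worst = min(trials)
        selected.remove(worst)
    return selected

# Toy criterion: cost is how far the subset's mean sits from 10
cost = lambda subset: abs(sum(subset) / len(subset) - 10)
print(backward_selection([2, 9, 10, 11, 30], cost, keep=3))  # [9, 10, 11]
```

The greedy drop order makes each step cheap but, as with any sequential selection, offers no global optimality guarantee.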
Active control of fan noise from a turbofan engine
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Burdisso, Ricardo A.; Fuller, Christopher R.; O'Brien, Walter F.
1993-01-01
A three-channel active control system is applied to an operational turbofan engine in order to reduce tonal noise produced by both the fan and the high-pressure compressor. The control approach is the feedforward filtered-x least-mean-square algorithm implemented on a digital signal processing board. Reference transducers mounted on the engine case provide blade-passing and harmonic frequency information to the controller. Error information is provided by large-area microphones placed in the acoustic far field. To minimize the error signal, the controller actuates loudspeakers mounted on the inlet to produce destructive interference. Using the three-channel controller, the sound pressure level of the fundamental fan tone was reduced by up to 16 dB over a 60 deg angle about the engine axis; a single-channel controller could produce reduction over a 30 deg angle. The experimental results show the control to be robust. Simultaneous control of two tones is achieved with parallel controllers: the fundamental and first harmonic tones of the fan were controlled simultaneously with reductions of 12 dBA and 5 dBA, respectively, measured on the engine axis. Simultaneous control was also demonstrated for the fan fundamental and the high-pressure compressor fundamental tones.
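A single-channel filtered-x LMS loop of the kind described can be sketched as follows; the secondary-path model, tone, step size, and tap count are illustrative assumptions, and a real controller would have to identify the speaker-to-microphone path rather than assume it known:

```python
import numpy as np

n_taps, mu = 8, 0.02
w = np.zeros(n_taps)                     # adaptive control filter weights
sec_path = np.array([0.8, 0.3])          # assumed speaker-to-error-mic FIR

t = np.arange(4000)
x = np.sin(2 * np.pi * t / 20.0)                 # tonal reference signal
d = 0.9 * np.sin(2 * np.pi * t / 20.0 + 0.4)     # primary noise at error mic

xf = np.convolve(x, sec_path)[:len(x)]   # reference filtered by secondary path
xbuf = np.zeros(n_taps)                  # recent reference samples
xfbuf = np.zeros(n_taps)                 # recent filtered-reference samples
y = np.zeros(len(t))
e = np.zeros(len(t))
for n in t:
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    xfbuf = np.roll(xfbuf, 1); xfbuf[0] = xf[n]
    y[n] = w @ xbuf                      # anti-noise sent to the loudspeaker
    ys = sec_path[0] * y[n] + (sec_path[1] * y[n - 1] if n > 0 else 0.0)
    e[n] = d[n] + ys                     # residual at the error microphone
    w -= mu * e[n] * xfbuf               # filtered-x LMS weight update

print(bool(np.mean(e[-500:] ** 2) < 1e-3))  # True: tone largely cancelled
```

Filtering the reference through the secondary-path model before the weight update is the "filtered-x" step; without it, the phase lag of the speaker-to-microphone path can destabilize plain LMS.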
Nivala, Michael; de Lange, Enno; Rovetti, Robert; Qu, Zhilin
2012-01-01
Intracellular calcium (Ca) cycling dynamics in cardiac myocytes is regulated by a complex network of spatially distributed organelles, such as sarcoplasmic reticulum (SR), mitochondria, and myofibrils. In this study, we present a mathematical model of intracellular Ca cycling and numerical and computational methods for computer simulations. The model consists of a coupled Ca release unit (CRU) network, which includes a SR domain and a myoplasm domain. Each CRU contains 10 L-type Ca channels and 100 ryanodine receptor channels, with individual channels simulated stochastically using a variant of Gillespie's method, modified here to handle time-dependent transition rates. Both the SR domain and the myoplasm domain in each CRU are modeled by 5 × 5 × 5 voxels to maintain proper Ca diffusion. Advanced numerical algorithms implemented on graphical processing units were used for fast computational simulations. For a myocyte containing 100 × 20 × 10 CRUs, simulating 1 s of heart time takes about 10 min of machine time on a single NVIDIA Tesla C2050. Examples of simulated Ca cycling dynamics, such as Ca sparks, Ca waves, and Ca alternans, are shown. PMID:22586402
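A heavily simplified sketch of stochastic single-channel gating via Gillespie's direct method is given below; it uses constant rates (unlike the time-dependent variant in the paper), and the rate values are illustrative:

```python
import numpy as np

def gillespie_channel(k_open, k_close, t_end, rng):
    """Stochastic simulation of one two-state (closed <-> open) channel
    with constant rates, via Gillespie's direct method. Returns the
    fraction of time spent open."""
    t, state = 0.0, 0            # 0 = closed, 1 = open
    open_time = 0.0
    while t < t_end:
        rate = k_open if state == 0 else k_close
        dt = rng.exponential(1.0 / rate)   # exponential waiting time
        dt = min(dt, t_end - t)
        if state == 1:
            open_time += dt
        t += dt
        state = 1 - state                  # fire the transition
    return open_time / t_end

rng = np.random.default_rng(42)
# With k_open = k_close, the channel should be open about half the time
p_open = gillespie_channel(k_open=100.0, k_close=100.0, t_end=50.0, rng=rng)
print(bool(abs(p_open - 0.5) < 0.05))  # True
```

Handling time-dependent rates, as in the paper, requires rejection sampling or integrating the hazard instead of drawing a plain exponential waiting time.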
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guha, Saikat; Shapiro, Jeffrey H.; Erkmen, Baris I.
Previous work on the classical information capacities of bosonic channels has established the capacity of the single-user pure-loss channel, bounded the capacity of the single-user thermal-noise channel, and bounded the capacity region of the multiple-access channel. The latter is a multiple-user scenario in which several transmitters seek to simultaneously and independently communicate to a single receiver. We study the capacity region of the bosonic broadcast channel, in which a single transmitter seeks to simultaneously and independently communicate to two different receivers. It is known that the tightest available lower bound on the capacity of the single-user thermal-noise channel is that channel's capacity if, as conjectured, the minimum von Neumann entropy at the output of a bosonic channel with additive thermal noise occurs for coherent-state inputs. Evidence in support of this minimum output entropy conjecture has been accumulated, but a rigorous proof has not been obtained. We propose a minimum output entropy conjecture that, if proved to be correct, will establish that the capacity region of the bosonic broadcast channel equals the inner bound achieved using a coherent-state encoding and optimum detection. We provide some evidence that supports this conjecture, but again a full proof is not available.
Content-based multiple bitstream image transmission over noisy channels.
Cao, Lei; Chen, Chang Wen
2002-01-01
In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, and therefore enables high-performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.
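To make the hierarchical wavelet decomposition step concrete, here is a one-level 2-D Haar transform on a small grayscale block. The paper uses a full wavelet hierarchy followed by SPIHT coding; this sketch shows only the first splitting step, with Haar chosen as an assumed example filter and a made-up 4×4 block.

```python
# One-level 2-D Haar wavelet transform: rows first, then columns,
# splitting the block into LL (approximation) and LH/HL/HH (detail) subbands.
def haar2d(img):
    """img: square 2-D list with even side length. Returns (LL, LH, HL, HH)."""
    n2 = len(img) // 2
    # Transform each row into [averages | differences]
    rows = [[(r[2 * i] + r[2 * i + 1]) / 2 for i in range(n2)] +
            [(r[2 * i] - r[2 * i + 1]) / 2 for i in range(n2)]
            for r in img]
    # Transform each column of the row-transformed image the same way
    cols = list(zip(*rows))
    out_cols = [[(c[2 * i] + c[2 * i + 1]) / 2 for i in range(n2)] +
                [(c[2 * i] - c[2 * i + 1]) / 2 for i in range(n2)]
                for c in cols]
    out = [list(r) for r in zip(*out_cols)]
    LL = [r[:n2] for r in out[:n2]]
    LH = [r[n2:] for r in out[:n2]]
    HL = [r[:n2] for r in out[n2:]]
    HH = [r[n2:] for r in out[n2:]]
    return LL, LH, HL, HH

block = [[1, 1, 5, 5],
         [1, 1, 5, 5],
         [3, 3, 3, 3],
         [3, 3, 3, 3]]
LL, LH, HL, HH = haar2d(block)
```

Each LL coefficient still corresponds to a spatial region of the original block, which is the association the classification step above exploits.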
NASA Astrophysics Data System (ADS)
Valiallah Mousavi, S.; Barzegar Gerdroodbary, M.; Sheikholeslami, Mohsen; Ganji, D. D.
2016-09-01
In this study, two-dimensional numerical simulations are performed to investigate the influence of the magnetic field on the nanofluid flow inside a sinusoidal channel. This work examines the influence of a variable magnetic field on heat transfer in a heat exchanger carrying a single-phase mixture. In this heat exchanger, the inner tube is sinusoidal and the outer tube is considered smooth. The magnetic field is applied orthogonal to the axis of the sinusoidal tube. In our study, the ferrofluid (water with 4 vol% nanoparticles (Fe3O4)) flows in a channel with a sinusoidal bottom. The finite volume method with the SIMPLEC algorithm is used for handling the pressure-velocity coupling. The numerical results were validated against experimentally measured data and show good agreement. The influence of different parameters, such as the intensity of the magnetic field and the Reynolds number, on the heat transfer is investigated. According to the obtained results, the sinusoidal shape of the internal tube significantly increases the Nusselt number inside the channel. Our findings show that the magnetic field increases the probability of eddy formation inside the cavities and consequently enhances the heat transfer (more than 200%) in the vicinity of the magnetic field at low Reynolds number (Re = 50). In addition, the variation of the skin friction shows that the magnetic field increases the skin friction (more than 600%) inside the sinusoidal channel.
On the Performance of Adaptive Data Rate over Deep Space Ka-Band Link: Case Study Using Kepler Data
NASA Technical Reports Server (NTRS)
Gao, Jay L.
2016-01-01
Future missions envisioned for both human and robotic exploration demand increasing communication capacity through the use of Ka-band communications. The Ka-band channel, being more sensitive to weather impairments, presents unique trade-offs among data storage, latency, data volume, and reliability. While there are many possible techniques for optimizing Ka-band operations, such as adaptive modulation and coding and site diversity, this study focuses exclusively on the use of adaptive data rate (ADR) to achieve significant improvement in the data volume-availability trade-off over a wide range of link distances for near Earth and Mars exploration. Four years of Kepler Ka-band downlink symbol signal-to-noise ratio (SNR) data reported by the Deep Space Network were utilized to characterize the Ka-band channel statistics at each site and conduct various what-if performance analyses for different link distances. We model a notional closed-loop adaptive data rate system in which an algorithm predicts the channel condition two-way light time (TWLT) into the future using symbol SNR reported in near-real time by the ground receiver and determines the best data rate to use. Fixed and adaptive margins were used to mitigate errors in channel prediction. The performance of this closed-loop adaptive data rate approach is quantified in terms of data volume and availability and compared to the actual mission configuration and a hypothetical, optimized single-rate configuration assuming full a priori channel knowledge.
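The rate-selection step of the closed-loop scheme described above can be sketched as: pick the highest rate whose required SNR fits under the predicted SNR minus a margin. The rate table and thresholds below are illustrative, not Kepler's actual downlink modes.

```python
# Hedged sketch of adaptive-data-rate selection with a link margin.
def select_rate(predicted_snr_db, margin_db, rate_table):
    """rate_table: list of (data_rate_bps, required_snr_db) tuples, any order."""
    feasible = [(r, req) for r, req in rate_table
                if req <= predicted_snr_db - margin_db]
    if not feasible:
        return min(rate_table)[0]   # fall back to the lowest rate
    return max(feasible)[0]         # highest rate that closes the link

# Illustrative rate modes (each doubling of rate costs ~3 dB of required SNR)
rates = [(50_000, 0.5), (100_000, 3.5), (200_000, 6.5), (400_000, 9.5)]
best = select_rate(predicted_snr_db=8.0, margin_db=1.0, rate_table=rates)
```

A larger `margin_db` plays the role of the fixed/adaptive margins above: it trades data volume for protection against channel-prediction errors over the two-way light time.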
[Colorimetric characterization of LCD based on wavelength partition spectral model].
Liu, Hao-Xue; Cui, Gui-Hua; Huang, Min; Wu, Bing; Xu, Yan-Fang; Luo, Ming
2013-10-01
To establish a colorimetric characterization model of LCDs, an experiment with EIZO CG19, IBM 19, DELL 19 and HP 19 LCDs was designed and carried out to test the interaction between RGB channels, and then to test the spectral additive property of LCDs. The RGB digital values of single channel and two channels were given and the corresponding tristimulus values were measured; a chart was then plotted and calculations were made to test the independence of the RGB channels. The results showed that the interaction between channels was reasonably weak and the spectral additivity property held well. We also found that the relations between radiance and digital values at different wavelengths varied, that is, they are functions of wavelength. A new calculation method based on a piecewise spectral model, in which the relation between radiance and digital value is fitted by a cubic polynomial in each wavelength band using measured spectral radiance curves, was proposed and tested. The spectral radiance curves of the RGB primaries at any digital values can be obtained in this way from only a few measurements and the fitted cubic polynomials, and any displayed color can then be reproduced via the spectral additivity of the primaries at the given digital values. The algorithm of this method was discussed in detail in this paper. The computations showed that the proposed method was simple and the number of measurements needed was reduced greatly while keeping a very high computation precision. This method can be used as a colorimetric characterization model.
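The per-wavelength cubic fit of radiance against digital value can be sketched with ordinary least squares; the synthetic measurements below stand in for real spectroradiometer data, and the digital value is normalized to [0, 1] before fitting, an added implementation choice for numerical conditioning.

```python
# Least-squares cubic fit radiance(u) with u = digital_value / 255,
# solved via normal equations and a small Gaussian elimination.
def fit_cubic(dv, radiance):
    """Return coefficients c so radiance ~ c0 + c1*u + c2*u^2 + c3*u^3."""
    u = [d / 255.0 for d in dv]
    X = [[ui ** p for p in range(4)] for ui in u]
    # Normal equations A c = b with A = X^T X, b = X^T y
    A = [[sum(X[k][i] * X[k][j] for k in range(len(u))) for j in range(4)]
         for i in range(4)]
    b = [sum(X[k][i] * radiance[k] for k in range(len(u))) for i in range(4)]
    # Gaussian elimination with partial pivoting
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 4):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * ac for a, ac in zip(A[r], A[col])]
            b[r] -= f * b[col]
    c = [0.0] * 4
    for i in range(3, -1, -1):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 4))) / A[i][i]
    return c

dvs = [0, 51, 102, 153, 204, 255]
rad = [1e-4 * d ** 3 for d in dvs]   # synthetic, purely cubic response
coeffs = fit_cubic(dvs, rad)
```

One such fit per wavelength band and per primary reproduces the piecewise spectral model: radiance curves for any digital value follow from the polynomials alone.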
NASA Astrophysics Data System (ADS)
Biswas, Rahul; Blackburn, Lindy; Cao, Junwei; Essick, Reed; Hodge, Kari Alison; Katsavounidis, Erotokritos; Kim, Kyungmin; Kim, Young-Min; Le Bigot, Eric-Olivier; Lee, Chang-Hwan; Oh, John J.; Oh, Sang Hoon; Son, Edwin J.; Tao, Ye; Vaulin, Ruslan; Wang, Xiaoge
2013-09-01
The sensitivity of searches for astrophysical transients in data from the Laser Interferometer Gravitational-wave Observatory (LIGO) is generally limited by the presence of transient, non-Gaussian noise artifacts, which occur at a high enough rate such that accidental coincidence across multiple detectors is non-negligible. These “glitches” can easily be mistaken for transient gravitational-wave signals, and their robust identification and removal will help any search for astrophysical gravitational waves. We apply machine-learning algorithms (MLAs) to the problem, using data from auxiliary channels within the LIGO detectors that monitor degrees of freedom unaffected by astrophysical signals. Noise sources may produce artifacts in these auxiliary channels as well as the gravitational-wave channel. The number of auxiliary-channel parameters describing these disturbances may also be extremely large; high dimensionality is an area where MLAs are particularly well suited. We demonstrate the feasibility and applicability of three different MLAs: artificial neural networks, support vector machines, and random forests. These classifiers identify and remove a substantial fraction of the glitches present in two different data sets: four weeks of LIGO’s fourth science run and one week of LIGO’s sixth science run. We observe that all three algorithms agree on which events are glitches to within 10% for the sixth-science-run data, and support this by showing that the different optimization criteria used by each classifier generate the same decision surface, based on a likelihood-ratio statistic. Furthermore, we find that all classifiers obtain similar performance to the benchmark algorithm, the ordered veto list, which is optimized to detect pairwise correlations between transients in LIGO auxiliary channels and glitches in the gravitational-wave data. 
This suggests that most of the useful information currently extracted from the auxiliary channels is already described by this model. Future performance gains are thus likely to involve additional sources of information, rather than improvements in the classification algorithms themselves. We discuss several plausible sources of such new information as well as the ways of propagating it through the classifiers into gravitational-wave searches.
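The observation above, that classifiers with different optimization criteria realize the same decision surface as a likelihood-ratio statistic, can be illustrated in one dimension. The two Gaussian class-conditional models below are assumptions for the toy example, not LIGO's auxiliary-channel distributions.

```python
# Toy likelihood-ratio classifier for a single auxiliary-channel feature x.
# Any monotone rescaling of the ratio (e.g. its log) shifts the threshold
# but leaves the decision boundary unchanged.
import math

def likelihood_ratio(x, mu_glitch=3.0, mu_clean=0.0, sigma=1.0):
    """p(x | glitch) / p(x | clean) for two equal-variance Gaussian models."""
    def gauss(v, mu):
        return math.exp(-0.5 * ((v - mu) / sigma) ** 2)
    return gauss(x, mu_glitch) / gauss(x, mu_clean)

def classify(x, threshold=1.0):
    return "glitch" if likelihood_ratio(x) > threshold else "clean"

# The boundary sits at the midpoint of the means (x = 1.5), where the ratio is 1.
labels = [classify(x) for x in (-1.0, 1.49, 1.51, 4.0)]
```

Any classifier whose score is a monotone function of this ratio, whatever loss it was trained with, draws the same boundary, which is the sense in which the three MLAs above agree.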
First results of the silicon telescope using an 'artificial retina' for fast track finding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neri, N.; Abba, A.; Caponio, F.
We present the first results of the prototype of a silicon tracker with trigger capabilities based on a novel approach for fast track finding. The working principle of the 'artificial retina' is inspired by the processing of visual images by the brain and is based on extensive parallelization of data distribution and pattern recognition. The algorithm has been implemented in commercial FPGAs in three main logic modules: a switch for the routing of the detector hits, a pool of engines for the digital processing of the hits, and a block for the calculation of the track parameters. The architecture is fully pipelined and allows real-time track reconstruction with a latency of less than 100 clock cycles, corresponding to 0.25 microseconds at a 400 MHz clock. The silicon telescope consists of 8 layers of single-sided silicon strip detectors with 512 strips each. The detector size is about 10 cm x 10 cm and the strip pitch is 183 μm. The detectors are read out by the Beetle chip, a custom ASIC developed for LHCb, which provides the measurement of the hit position and pulse height of 128 channels. The 'artificial retina' algorithm has been implemented on custom data acquisition boards based on Xilinx Kintex 7 lx160 FPGAs. The parameters of the detected tracks are finally transferred to a host PC via USB 3.0. The boards manage the read-out ASICs and the sampling of the analog channels. The read-out is performed at 40 MHz on 4 channels for each ASIC, which corresponds to decoding the telescope information at 1.1 MHz. We report on the first results of the fast tracking device and compare them with simulations.
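The pattern-recognition core of the retina approach can be sketched in software: a grid of cells in track-parameter space (slope m, intercept q), each accumulating a Gaussian-weighted response to the hits, with the track read off as the maximal cell. The grid granularity, the response width, and the straight-line track model below are assumptions for the sketch, not the FPGA implementation.

```python
# Pure-Python sketch of the 'artificial retina' response map for straight
# tracks x = m*z + q through 8 detector layers.
import math

def retina_response(hits, m_grid, q_grid, sigma=0.5):
    """hits: list of (layer_z, strip_x); returns response per (m, q) cell."""
    resp = {}
    for m in m_grid:
        for q in q_grid:
            s = 0.0
            for z, x in hits:
                d = x - (m * z + q)                  # residual to the cell's track
                s += math.exp(-0.5 * (d / sigma) ** 2)
            resp[(m, q)] = s
    return resp

# A straight track x = 0.5*z + 1 sampled on 8 layers:
hits = [(z, 0.5 * z + 1.0) for z in range(8)]
grid = [i / 10 for i in range(-10, 11)]              # parameter values -1.0 .. 1.0
resp = retina_response(hits, grid, grid)
best_cell = max(resp, key=resp.get)                  # cell with maximal response
```

In the hardware version each cell's accumulation runs in parallel in an engine, which is what makes the sub-100-cycle latency possible.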
Stewart, C M; Newlands, S D; Perachio, A A
2004-12-01
Rapid and accurate discrimination of single units from extracellular recordings is a fundamental process for the analysis and interpretation of electrophysiological recordings. We present an algorithm that performs detection, characterization, discrimination, and analysis of action potentials from extracellular recording sessions. The program was written entirely in LabVIEW (National Instruments), and requires no external hardware devices or a priori information about action potential shapes. Waveform events are detected by scanning the digital record for voltages that exceed a user-adjustable trigger. Detected events are characterized to determine nine different time and voltage levels for each event. Various algebraic combinations of these waveform features are used as axis choices for 2-D Cartesian plots of events. The user selects axis choices that generate distinct clusters. Multiple clusters may be defined as action potentials by manually generating boundaries of arbitrary shape. Events defined as action potentials are validated by visual inspection of overlain waveforms. Stimulus-response relationships may be identified by selecting any recorded channel for comparison to continuous and average cycle histograms of binned unit data. The algorithm incorporates novel aspects of feature analysis and acquisition, including higher acquisition rates for electrophysiological data compared to other channels. The program demonstrates that electrophysiological data may be discriminated with high speed and efficiency using algebraic combinations of waveform features derived from high-speed digital records.
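The detection-and-characterization step described above can be sketched as a threshold scan that extracts a few time/voltage features per event. The feature set below is a small illustrative subset (not the program's nine levels), and the window length and dead-time handling are assumptions.

```python
# Minimal spike-event detection: scan for trigger crossings, then extract
# simple features (peak, trough, supra-threshold width) from a fixed window.
def detect_events(trace, trigger, window=8):
    """Return (index, peak, trough, width) for each threshold crossing."""
    events, i = [], 0
    while i < len(trace):
        if trace[i] > trigger:
            seg = trace[i:i + window]
            peak, trough = max(seg), min(seg)
            width = sum(1 for v in seg if v > trigger)   # samples above trigger
            events.append((i, peak, trough, width))
            i += window                                  # dead time: skip past event
        else:
            i += 1
    return events

# Synthetic record: flat baseline with two spikes
trace = [0.0] * 50
trace[10:13] = [1.2, 2.0, 0.9]
trace[30:33] = [1.1, 1.8, 0.7]
evts = detect_events(trace, trigger=1.0)
```

Algebraic combinations of such per-event features (e.g. peak minus trough) would then serve as the axes for the 2-D cluster plots the user draws boundaries on.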
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.; John, Aparna; Agaian, Sos S.
2017-03-01
The 2-D quaternion discrete Fourier transform (2-D QDFT) is the Fourier transform applied to color images when the images are considered in quaternion space. Quaternions are four-dimensional hyper-complex numbers. The quaternion representation of a color image allows the color of each pixel to be treated as a single unit. In the quaternion approach to color image enhancement, each color is seen as a vector, which makes visible the merging effect produced by combining the primary colors. Conventionally, color images are processed by applying the algorithm to each channel separately and then composing the color image from the processed channels. In this article, the alpha-rooting and zonal alpha-rooting methods are used with the 2-D QDFT. In the alpha-rooting method, the alpha-root of the transformed frequency values of the 2-D QDFT is taken before the inverse transform. In the zonal alpha-rooting method, the frequency spectrum of the 2-D QDFT is divided into different zones and alpha-rooting is applied with a different alpha value in each zone. The choice of alpha values is optimized with a genetic algorithm. The visual perception of 3-D medical images is improved by changing the reference gray line.
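Alpha-rooting in its simplest form scales each transform coefficient by |X(k)|^(alpha-1) before the inverse transform; the quaternion 2-D case applies the same magnitude rule to quaternion spectra. The sketch below shows the ordinary 1-D complex version on a small assumed signal, not the quaternion transform itself.

```python
# 1-D alpha-rooting: amplify/attenuate spectral magnitudes by |X|^(alpha-1),
# keeping phases unchanged, then invert the transform.
import cmath

def dft(x, inverse=False):
    n, sign = len(x), (1 if inverse else -1)
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def alpha_root(x, alpha=0.9):
    X = dft(x)
    # Scale each coefficient; leave (numerically) zero coefficients at zero.
    Xa = [v * (abs(v) ** (alpha - 1) if abs(v) > 1e-12 else 0.0) for v in X]
    return [v.real for v in dft(Xa, inverse=True)]

signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
enhanced = alpha_root(signal, alpha=0.9)
identity = alpha_root(signal, alpha=1.0)   # alpha = 1 leaves the signal unchanged
```

With alpha slightly below 1, large coefficients are compressed relative to small ones, which is what produces the contrast-enhancement effect in the image setting.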
Zhang, H; Bolton, T B
1995-01-01
1. Single-channel recordings were made from cell-attached and isolated patches, and whole-cell currents were recorded under voltage clamp from single smooth muscle cells obtained by enzymic digestion of a small branch of the rat mesenteric artery. 2. In single voltage-clamped cells 1 mM uridine diphosphate (UDP) or guanosine diphosphate (GDP) added to the pipette solution, or pinacidil (100 microM), a K-channel opener (KCO), applied in the bathing solution, evoked an outward current of up to 100 pA which was blocked by glibenclamide (10 microM). In single cells from which recordings were made by the 'perforated patch' (nystatin pipette) technique, metabolic inhibition by 1 mM NaCN and 10 mM 2-deoxy-glucose also evoked a similar glibenclamide-sensitive current. 3. Single K-channel activity was observed in cell-attached patches only infrequently unless the metabolism of the cell was inhibited, whereupon channel activity blocked by glibenclamide was seen; pinacidil applied to the cell evoked similar glibenclamide-sensitive channel activity. If the patch was pulled off the cell to form an isolated inside-out patch, similar glibenclamide-sensitive single-channel currents were observed in the presence of UDP and/or pinacidil to those seen in cell-attached mode; channel conductance was 20 pS (60:130 K-gradient) and openings showed no voltage-dependence and noisy inward currents, typical of the nucleoside diphosphate (NDP) activated K-channel (KNDP) seen previously in rabbit portal vein. 4. Formation of an isolated inside-out patch into an ATP-free solution did not increase the probability of channel opening which declined with time even when some single-channel activity had occurred in the cell-attached mode before detachment. However, application of 1 mM UDP or GDP, but not ATP, to inside-out patches evoked single-channel activity. 
Application of ATP-free solution to isolated patches, previously exposed to ATP and in which channel activity had been seen, did not evoke channel activity. 5. It is concluded that small conductance K-channels (KNDP) open in smooth muscle cells from this small artery in response to UDP or GDP acting from the inside, or pinacidil acting from the outside; the same channels open during inhibition of metabolism presumably mainly due to the rise in nucleoside diphosphates, but a fall in the ATP concentration on the inside of the channel did not by itself evoke channel activity.(ABSTRACT TRUNCATED AT 400 WORDS) PMID:7735693
Method and device for measuring single-shot transient signals
Yin, Yan
2004-05-18
Methods, apparatus, and systems, including computer program products, implementing and using techniques for measuring multi-channel single-shot transient signals. A signal acquisition unit receives one or more single-shot pulses from a multi-channel source. An optical-fiber recirculating loop reproduces the one or more received single-shot optical pulses to form a first multi-channel pulse train for circulation in the recirculating loop, and a second multi-channel pulse train for display on a display device. The optical-fiber recirculating loop also optically amplifies the first circulating pulse train to compensate for signal losses and performs optical multi-channel noise filtration.
Comparison of a single-channel EEG sleep study to polysomnography
Lucey, Brendan P.; McLeland, Jennifer S.; Toedebusch, Cristina D.; Boyd, Jill; Morris, John C.; Landsness, Eric C.; Yamada, Kelvin; Holtzman, David M.
2016-01-01
An accurate home sleep study to assess electroencephalography (EEG)-based sleep stages and EEG power would be advantageous for both clinical and research purposes, such as for longitudinal studies measuring changes in sleep stages over time. The purpose of this study was to compare sleep scoring of a single-channel EEG recorded simultaneously on the forehead against attended polysomnography. Participants were recruited from both a clinical sleep center and a longitudinal research study investigating cognitively normal aging and Alzheimer's disease. Analysis of overall epoch-by-epoch agreement found substantial agreement between the single-channel EEG and polysomnography (kappa = 0.67). Slow-wave activity in the frontal regions was also similar when comparing the single-channel EEG device to polysomnography. As expected, stage N1 showed poor agreement (sensitivity 0.2) due to the lack of occipital electrodes. Other sleep parameters, such as sleep latency and REM onset latency, had decreased agreement. Participants with disrupted sleep consolidation, such as from obstructive sleep apnea, also had poor agreement. We suspect that disagreement in sleep parameters between the single-channel EEG and polysomnography is partially due to altered waveform morphology and/or poorer signal quality in the single-channel derivation. Our results show that single-channel EEG provides comparable results to polysomnography in assessing REM, combined stages N2 and N3 sleep, and several other parameters including frontal slow-wave activity. The data establish that single-channel EEG can be a useful research tool. PMID:27252090
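The epoch-by-epoch agreement statistic reported above is Cohen's kappa: observed agreement corrected for the agreement expected by chance. The two short hypnograms below are made up for illustration.

```python
# Cohen's kappa between two epoch-by-epoch sleep scorings.
def cohens_kappa(a, b):
    n = len(a)
    labels = set(a) | set(b)
    po = sum(1 for x, y in zip(a, b) if x == y) / n                 # observed
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)   # chance
    return (po - pe) / (1 - pe)

psg = ["W", "N2", "N2", "N3", "N3", "R", "R", "W"]   # attended PSG scoring
eeg = ["W", "N2", "N2", "N3", "N2", "R", "R", "W"]   # single-channel EEG scoring
kappa = cohens_kappa(psg, eeg)
```

Here one of eight epochs disagrees (N3 scored as N2), giving kappa of about 0.83; a value of 0.67, as in the study, sits in the range conventionally described as substantial agreement.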
NASA Astrophysics Data System (ADS)
Panicker, Rahul Alex
Multimode fibers (MMF) are widely deployed in local-, campus-, and storage-area-networks. Achievable data rates and transmission distances are, however, limited by the phenomenon of modal dispersion. We propose a system to compensate for modal dispersion using adaptive optics. This leads to a 10- to 100-fold improvement in performance over current standards. We propose a provably optimal technique for minimizing inter-symbol interference (ISI) in MMF systems using adaptive optics via convex optimization. We use a spatial light modulator (SLM) to shape the spatial profile of light launched into an MMF. We derive an expression for the system impulse response in terms of the SLM reflectance and the field patterns of the MMF principal modes. Finding optimal SLM settings to minimize ISI, subject to physical constraints, is posed as an optimization problem. We observe that our problem can be cast as a second-order cone program, which is a convex optimization problem. Its global solution can, therefore, be found with minimal computational complexity. Simulations show that this technique opens up an eye pattern originally closed due to ISI. We then propose fast, low-complexity adaptive algorithms for optimizing the SLM settings. We show that some of these converge to the global optimum in the absence of noise. We also propose modified versions of these algorithms to improve resilience to noise and speed of convergence. Next, we experimentally compare the proposed adaptive algorithms in 50-μm graded-index (GRIN) MMFs using a liquid-crystal SLM. We show that continuous-phase sequential coordinate ascent (CPSCA) gives better bit-error-ratio performance than 2- or 4-phase sequential coordinate ascent, in concordance with simulations. We evaluate the bandwidth characteristics of CPSCA, and show that a single SLM is able to simultaneously compensate over up to 9 wavelength-division-multiplexed (WDM) 10-Gb/s channels, spaced by 50 GHz, over a total bandwidth of 450 GHz. 
We also show that CPSCA is able to compensate for modal dispersion over up to 2.2 km, even in the presence of mid-span connector offsets up to 4 μm (simulated in experiment by offset splices). A known non-adaptive launching technique using a fusion-spliced single-mode-to-multimode patchcord is shown to fail under these conditions. Finally, we demonstrate 10 x 10 Gb/s dense WDM transmission over 2.2 km of 50-μm GRIN MMF. We combine transmitter-based adaptive optics and receiver-based single-mode filtering, and control the launched field pattern for ten 10-Gb/s non-return-to-zero channels, wavelength-division multiplexed on a 200-GHz grid in the C band. We achieve error-free transmission through 2.2 km of 50-μm GRIN MMF for launch offsets up to 10 μm and for worst-case launched polarization. We employ a ten-channel transceiver based on parallel integration of electronics and photonics.
Siuly; Li, Yan; Paul Wen, Peng
2014-03-01
Motor imagery (MI) task classification provides an important basis for designing brain-computer interface (BCI) systems. If MI tasks are reliably distinguished through identifying typical patterns in electroencephalography (EEG) data, motor-disabled people could communicate with a device by composing sequences of these mental states. In our earlier study, we developed a cross-correlation-based logistic regression (CC-LR) algorithm for the classification of MI tasks for BCI applications, but its performance was not satisfactory. This study develops a modified version of the CC-LR algorithm exploring a suitable feature set that can improve the performance. The modified CC-LR algorithm uses the C3 electrode channel (in the international 10-20 system) as a reference channel for the cross-correlation (CC) technique and applies three diverse feature sets separately, as the input to the logistic regression (LR) classifier. The present algorithm investigates which feature set best characterizes the distribution of MI-task-based EEG data. This study also provides insight into how to select a reference channel for the CC technique with EEG signals, considering the anatomical structure of the human brain. The proposed algorithm is compared with eight of the most recently reported well-known methods, including the BCI III Winner algorithm. The findings of this study indicate that the modified CC-LR algorithm has the potential to improve the identification performance of MI tasks in BCI systems. The results demonstrate that the proposed technique provides a classification improvement over the existing methods tested. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
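The cross-correlation step can be sketched as: correlate each channel's epoch against the C3 reference epoch, then summarize the resulting CC sequence with a few statistics that feed the classifier. The epochs and the particular feature choice below are illustrative, not the paper's feature sets.

```python
# Cross-correlation sequence of two equal-length epochs, plus summary
# statistics of that sequence as candidate classifier features.
import statistics

def cross_correlation(ref, sig):
    """Full cross-correlation sequence for lags -(n-1) .. n-1."""
    n = len(ref)
    return [sum(ref[i] * sig[i + lag] for i in range(n - abs(lag)))
            if lag >= 0 else
            sum(ref[i - lag] * sig[i] for i in range(n + lag))
            for lag in range(-(n - 1), n)]

def cc_features(ref, sig):
    cc = cross_correlation(ref, sig)
    return [max(cc), min(cc), statistics.mean(cc), statistics.stdev(cc)]

ref = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]   # stand-in for a C3 reference epoch
sig = [0.0, 0.5, 0.0, -0.5, 0.0, 0.5]   # stand-in for another channel's epoch
feats = cc_features(ref, sig)
```

A logistic regression classifier would then be trained on vectors like `feats`, one per trial, to separate the two MI classes.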
SST algorithm based on radiative transfer model
NASA Astrophysics Data System (ADS)
Mat Jafri, Mohd Z.; Abdullah, Khiruddin; Bahari, Alui
2001-03-01
An algorithm for measuring sea surface temperature (SST) without recourse to in-situ data for calibration has been proposed. The algorithm, which is based on the recorded infrared signal at the satellite sensor, is composed of three terms, namely, the surface emission, the up-welling radiance emitted by the atmosphere, and the down-welling atmospheric radiance reflected at the sea surface. This algorithm requires the transmittance values of the thermal bands. The angular dependence of the transmittance function was modeled using the MODTRAN code. Radiosonde data were used with the MODTRAN code. The expression of transmittance as a function of zenith view angle was obtained for each channel through regression of the MODTRAN output. The Ocean Color Temperature Scanner (OCTS) data from the Advanced Earth Observation Satellite (ADEOS) were used in this study. The study area covers the seas off the northwest of Peninsular Malaysia. The in-situ data (ship-collected SST values) were used for verification of the results. Cloud-contaminated pixels were masked out using the standard procedures which have been applied to the Advanced Very High Resolution Radiometer (AVHRR) data. The cloud-free pixels at the in-situ sites were extracted for analysis. The OCTS data were then substituted into the proposed algorithm. The appropriate transmittance value for each channel was then assigned in the calculation. Accuracy was assessed by examining the correlation and the rms deviations between the computed and the ship-collected values. The results were also compared with the results from the OCTS multi-channel sea surface temperature algorithm. The comparison produced high correlation values. The performance of this algorithm is comparable with the established OCTS algorithm. The effect of emissivity on the retrieved SST values was also investigated. An SST map was generated and contoured manually.
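The zenith-angle regression step above can be sketched under a simple plane-parallel assumption: transmittance ≈ exp(-tau·sec(theta)), so -ln(t) is linear in sec(theta) and the vertical optical depth tau is a one-parameter least-squares slope. The tau value and angle set below stand in for actual MODTRAN outputs.

```python
# Fit the zenith-angle dependence of transmittance from modeled samples,
# assuming a plane-parallel atmosphere: t(theta) = exp(-tau / cos(theta)).
import math

def fit_optical_depth(angles_deg, transmittances):
    """Least-squares slope of -ln(t) against sec(theta), through the origin."""
    xs = [1.0 / math.cos(math.radians(a)) for a in angles_deg]
    ys = [-math.log(t) for t in transmittances]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def transmittance(theta_deg, tau):
    return math.exp(-tau / math.cos(math.radians(theta_deg)))

# Synthetic "MODTRAN" samples generated from an assumed tau = 0.12
angles = [0, 15, 30, 45, 60]
trans = [transmittance(a, 0.12) for a in angles]
tau_hat = fit_optical_depth(angles, trans)
```

Once tau is fitted per channel, the algorithm can assign a transmittance to any view angle without further radiative-transfer runs.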
Sodhro, Ali Hassan; Sodhro, Gul Hassan; Lohano, Sonia; Pirbhulal, Sandeep
2018-01-01
Rapid progress and emerging trends in miniaturized medical devices have enabled the un-obtrusive monitoring of physiological signals and daily activities of everyone’s life in a prominent and pervasive manner. Due to the power-constrained nature of conventional wearable sensor devices during ubiquitous sensing (US), energy-efficiency has become one of the highly demanding and debatable issues in healthcare. This paper develops a single chip-based wearable wireless electrocardiogram (ECG) monitoring system by adopting analog front end (AFE) chip model ADS1292R from Texas Instruments. The developed chip collects real-time ECG data with two adopted channels for continuous monitoring of human heart activity. Then, these two channels and the AFE are built into a right-leg-drive (RLD) driver circuit with lead-off detection and a medical-graded test signal. Human ECG data were collected at 60 to 120 beats per minute (BPM), with 60 Hz noise considered throughout the experimental set-up. Moreover, notch, high-pass, and low-pass filters with cutoff frequencies of 60 Hz, 0.67 Hz, and 100 Hz, respectively, were designed via the bilinear transformation to remove power-line noise and artifacts while extracting real-time ECG signals. Finally, a transmission power control-based energy-efficient (ETPC) algorithm is proposed, implemented on the hardware and then compared with several conventional TPC methods. Experimental results reveal that our developed chip collects real-time ECG data efficiently, and the proposed ETPC algorithm achieves higher energy savings of 35.5% with a slightly larger packet loss ratio (PLR) as compared to conventional TPC (e.g., constant TPC, Gao’s, and Xiao’s methods). PMID:29558433
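A bilinear-transform notch design like the 60 Hz filter described above can be sketched with the standard biquad form (the analog notch prototype mapped through the prewarped bilinear transform). The sampling rate and Q factor below are assumptions, not values stated in the record.

```python
# Biquad notch at f0 Hz via the prewarped bilinear transform, plus a
# helper to evaluate the magnitude response.
import math

def design_notch(f0, fs, q=30.0):
    """Return normalized (b, a) biquad coefficients for a notch at f0 Hz."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def gain_at(b, a, f, fs):
    """|H(e^{jw})| at frequency f for a biquad (b, a)."""
    z = complex(math.cos(2 * math.pi * f / fs), math.sin(2 * math.pi * f / fs))
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return abs(num / den)

b, a = design_notch(60.0, 500.0)   # assumed fs = 500 Hz
```

The numerator places an exact zero on the unit circle at 60 Hz while the gain away from the notch (including DC) stays at unity, which is the behavior wanted for power-line rejection in an ECG chain.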
Blind information-theoretic multiuser detection algorithms for DS-CDMA and WCDMA downlink systems.
Waheed, Khuram; Salem, Fathi M
2005-07-01
Code division multiple access (CDMA) is based on spread-spectrum technology and is a dominant air interface for 2.5G, 3G, and future wireless networks. For the CDMA downlink, the transmitted CDMA signals from the base station (BS) propagate through a noisy multipath fading communication channel before arriving at the receiver of the user equipment/mobile station (UE/MS). Classical CDMA single-user detection (SUD) algorithms implemented in the UE/MS receiver do not provide the required performance for modern high data-rate applications. In contrast, multi-user detection (MUD) approaches require extensive a priori information that is not available to the UE/MS. In this paper, three promising adaptive Riemannian contra-variant (or natural) gradient based user detection approaches, capable of handling highly dynamic wireless environments, are proposed. The first approach, blind multiuser detection (BMUD), is the process of simultaneously estimating multiple symbol sequences associated with all the users in the downlink of a CDMA communication system using only the received wireless data and without any knowledge of the user spreading codes. This approach is applicable to CDMA systems with relatively short spreading codes but becomes impractical for systems using long spreading codes. We also propose two other adaptive approaches, namely, RAKE-blind source recovery (RAKE-BSR) and RAKE-principal component analysis (RAKE-PCA), which fuse an adaptive stage into a standard RAKE receiver. This adaptation results in robust user detection algorithms with performance exceeding that of linear minimum mean squared error (LMMSE) detectors for both direct-sequence CDMA (DS-CDMA) and wideband CDMA (WCDMA) systems under conditions of congestion, imprecise channel estimation, and unmodeled multiple access interference (MAI).
Sodhro, Ali Hassan; Sangaiah, Arun Kumar; Sodhro, Gul Hassan; Lohano, Sonia; Pirbhulal, Sandeep
2018-03-20
Rapid progress and emerging trends in miniaturized medical devices have enabled the un-obtrusive monitoring of physiological signals and daily activities of everyone's life in a prominent and pervasive manner. Due to the power-constrained nature of conventional wearable sensor devices during ubiquitous sensing (US), energy-efficiency has become one of the highly demanding and debatable issues in healthcare. This paper develops a single chip-based wearable wireless electrocardiogram (ECG) monitoring system by adopting analog front end (AFE) chip model ADS1292R from Texas Instruments. The developed chip collects real-time ECG data with two adopted channels for continuous monitoring of human heart activity. Then, these two channels and the AFE are built into a right-leg-drive (RLD) driver circuit with lead-off detection and a medical-graded test signal. Human ECG data were collected at 60 to 120 beats per minute (BPM), with 60 Hz noise considered throughout the experimental set-up. Moreover, notch, high-pass, and low-pass filters with cutoff frequencies of 60 Hz, 0.67 Hz, and 100 Hz, respectively, were designed via the bilinear transformation to remove power-line noise and artifacts while extracting real-time ECG signals. Finally, a transmission power control-based energy-efficient (ETPC) algorithm is proposed, implemented on the hardware and then compared with several conventional TPC methods. Experimental results reveal that our developed chip collects real-time ECG data efficiently, and the proposed ETPC algorithm achieves higher energy savings of 35.5% with a slightly larger packet loss ratio (PLR) as compared to conventional TPC (e.g., constant TPC, Gao's, and Xiao's methods).
Guo, Qiang; Qi, Liangang
2017-04-10
When multiple types of interfering signals coexist, the performance of interference suppression methods based on the time and frequency domains degrades seriously, while techniques using an antenna array require a sufficiently large array and entail high hardware costs. To better combat multi-type interference in GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in an over-complete dictionary. To improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) with the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interference (e.g., wideband Gaussian noise interference). Several simulation results show that the proposed method not only improves the interference mitigation degree of freedom (DoF) of the array antenna, but also effectively deals with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortion into the navigation signal.
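The first stage rests on plain matching pursuit: greedily pick the dictionary atom most correlated with the residual and subtract its contribution. The sketch below uses this exhaustive greedy selection (DCQGMP replaces it with a quantum-genetic search for speed) and a toy dictionary of sinusoidal atoms as stand-ins for the tone/chirp atoms:

```python
import numpy as np

def matching_pursuit(x, D, n_iter=10, tol=1e-6):
    """Greedy MP: at each step select the unit-norm atom (column of D)
    most correlated with the residual and peel off its contribution."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        if abs(corr[k]) < tol:
            break
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

# Toy over-complete-style dictionary: unit-norm sinusoids
N = 64
D = np.column_stack(
    [np.sin(2 * np.pi * f * np.arange(N) / N) for f in range(1, 9)]
)
D /= np.linalg.norm(D, axis=0)

# "Interference" sparse in the dictionary plus weak wideband noise
rng = np.random.default_rng(0)
x = 3.0 * D[:, 2] + 0.01 * rng.standard_normal(N)
coeffs, residual = matching_pursuit(x, D, n_iter=5)
```

After a few iterations the sparse interference term is captured in `coeffs` and the residual carries only the wideband component, which the second-stage beamformer would then handle.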
Analysis and simulation tools for solar array power systems
NASA Astrophysics Data System (ADS)
Pongratananukul, Nattorn
This dissertation presents simulation tools developed specifically for the design of solar array power systems. Contributions are made in several aspects of the system design phases, including solar source modeling, system simulation, and controller verification. A tool to automate the study of solar array configurations using general-purpose circuit simulators has been developed based on the modeling of individual solar cells. A hierarchical structure of solar cell elements, including semiconductor properties, allows simulation of electrical properties as well as evaluation of the impact of environmental conditions. A second tool provides a co-simulation platform with the capability to verify the performance of an actual digital controller implemented in programmable hardware such as a DSP processor, while the entire solar array, including the DC-DC power converter, is modeled in software running on a computer. This "virtual plant" allows code for the digital controller to be developed and debugged, and the control algorithm to be improved. One important task in solar arrays is to track the maximum power point of the array in order to maximize the power that can be delivered. Digital controllers implemented with programmable processors are particularly attractive for this task because sophisticated tracking algorithms can be implemented and revised when needed to optimize their performance. The proposed co-simulation tools are thus very valuable in developing and optimizing the control algorithm before the system is built. Examples that demonstrate the effectiveness of the proposed methodologies are presented. The proposed simulation tools are also valuable in the design of multi-channel arrays. In the specific system that we have designed and tested, the control algorithm is implemented on a single digital signal processor, and in each channel the maximum power point is tracked individually.
In the prototype we built, off-the-shelf commercial DC-DC converters were utilized. At the end, the overall performance of the entire system was evaluated using solar array simulators capable of simulating various I-V characteristics, and also by using an electronic load. Experimental results are presented.
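Per-channel maximum power point tracking of the kind described above is commonly implemented as a perturb-and-observe loop. The sketch below is a generic illustration against a toy power curve, not the dissertation's actual controller code; the curve, step size, and callbacks are assumptions:

```python
def perturb_and_observe(measure_power, set_duty, duty=0.5,
                        step=0.01, n_steps=100):
    """Minimal perturb-and-observe MPPT: nudge the converter duty cycle
    and keep moving in whichever direction increases array power."""
    set_duty(duty)
    last_power = measure_power()
    direction = 1
    for _ in range(n_steps):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        set_duty(duty)
        power = measure_power()
        if power < last_power:   # power dropped: reverse the perturbation
            direction = -direction
        last_power = power
    return duty

# Toy plant: power is a concave function of duty, maximum at duty = 0.62
state = {"duty": 0.5}
def set_duty(d):
    state["duty"] = d
def measure_power():
    return 100.0 - 400.0 * (state["duty"] - 0.62) ** 2

best = perturb_and_observe(measure_power, set_duty)
```

In steady state the duty cycle oscillates within one step of the maximum power point, which is the characteristic (and acceptable) behavior of this tracker.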
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2008-01-01
In this paper, an enhanced on-line diagnostic system which utilizes dual-channel sensor measurements is developed for the aircraft engine application. The enhanced system is composed of a nonlinear on-board engine model (NOBEM), the hybrid Kalman filter (HKF) algorithm, and fault detection and isolation (FDI) logic. The NOBEM provides the analytical third channel against which the dual-channel measurements are compared. The NOBEM is further utilized as part of the HKF algorithm which estimates measured engine parameters. Engine parameters obtained from the dual-channel measurements, the NOBEM, and the HKF are compared against each other. When the discrepancy among the signals exceeds a tolerance level, the FDI logic determines the cause of discrepancy. Through this approach, the enhanced system achieves the following objectives: 1) anomaly detection, 2) component fault detection, and 3) sensor fault detection and isolation. The performance of the enhanced system is evaluated in a simulation environment using faults in sensors and components, and it is compared to an existing baseline system.
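The discrepancy-voting idea, two hardware channels plus the NOBEM as an analytical third channel, can be illustrated with a minimal isolation rule. The thresholds, labels, and exact voting logic below are illustrative assumptions, not the paper's actual FDI logic:

```python
def isolate_fault(ch_a, ch_b, model_est, tol=1.0):
    """Majority-vote fault isolation among two redundant sensor channels
    and a model-based analytical estimate: the reading that disagrees
    with the other two is declared faulty."""
    agree_ab = abs(ch_a - ch_b) <= tol
    agree_am = abs(ch_a - model_est) <= tol
    agree_bm = abs(ch_b - model_est) <= tol
    if agree_ab and agree_am and agree_bm:
        return "no fault"
    if agree_ab and not (agree_am or agree_bm):
        # both sensors agree but deviate from the model: engine-side fault
        return "component fault"
    if agree_am and not agree_ab:
        return "channel B fault"
    if agree_bm and not agree_ab:
        return "channel A fault"
    return "anomaly"
```

For example, two sensor readings that agree with each other but not with the model point to a component fault, whereas a single outlying channel is isolated as a sensor fault.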
MuLoG, or How to Apply Gaussian Denoisers to Multi-Channel SAR Speckle Reduction?
Deledalle, Charles-Alban; Denis, Loic; Tabti, Sonia; Tupin, Florence
2017-09-01
Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric, or tomographic modes, SAR images are multi-channel and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of the SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), to include such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from the ongoing progress in Gaussian denoising and offering several speckle reduction results whose method-specific artifacts can be identified by comparing the outputs of different denoisers.
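The core idea can be shown in a single-channel toy version: the log transform turns multiplicative speckle into approximately additive noise, so any Gaussian denoiser can be applied in the log domain. This is only a sketch; the actual MuLoG method handles complex multi-channel covariance data, applies a debiased variance-stabilizing transform, and embeds the denoiser in an ADMM loop. Here a 3×3 mean filter stands in for an arbitrary Gaussian denoiser:

```python
import numpy as np

def despeckle_log(intensity, gaussian_denoiser):
    """Denoise a speckled intensity image in the log domain, where the
    multiplicative fluctuations become (approximately) additive."""
    log_img = np.log(intensity)
    denoised_log = gaussian_denoiser(log_img)
    return np.exp(denoised_log)

def mean_filter(img):
    """Stand-in Gaussian denoiser: 3x3 moving average with edge padding."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di:1 + di + img.shape[0],
                          1 + dj:1 + dj + img.shape[1]]
    return out / 9.0

rng = np.random.default_rng(1)
clean = np.full((32, 32), 5.0)
speckle = rng.gamma(shape=4.0, scale=1.0 / 4.0, size=clean.shape)  # 4-look
noisy = clean * speckle
restored = despeckle_log(noisy, mean_filter)
```

Note the naive `exp` back-transform leaves a small radiometric bias, which is precisely one of the issues the full method corrects.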
Channel Deviation-Based Power Control in Body Area Networks.
Van, Son Dinh; Cotton, Simon L; Smith, David B
2018-05-01
Internet-enabled body area networks (BANs) will form a core part of future remote health monitoring and ambient assisted living technology. In BAN applications, due to the dynamic nature of human activity, the off-body BAN channel can be prone to deep fading caused by body shadowing and multipath fading. Using this knowledge, we present some novel practical adaptive power control protocols based on the channel deviation to simultaneously prolong the lifetime of wearable devices and reduce outage probability. The proposed schemes are both flexible and relatively simple to implement on hardware platforms with constrained resources, making them inherently suitable for BAN applications. We present the key algorithm parameters used to dynamically respond to the channel variation. This allows the algorithms to achieve better energy efficiency and signal reliability in everyday usage scenarios such as those in which a person undertakes many different activities (e.g., sitting, walking, standing, etc.). We also profile their performance against traditional, optimal, and other existing schemes, for which it is demonstrated that not only does the outage probability reduce significantly, but the proposed algorithms also reduce the average transmit power compared to the competing schemes.
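The channel-deviation principle can be sketched as follows: estimate the short-term mean and deviation of the received signal strength, then add a fade margin proportional to the deviation, so an unstable channel gets extra headroom while a stable one saves energy. The gain model, target RSSI, and margin coefficient below are illustrative assumptions, not the paper's protocol parameters:

```python
def next_tx_power(p_last, rssi_history, target=-85.0, k_dev=1.5,
                  p_min=-10.0, p_max=0.0):
    """Pick the next transmit power (dBm) from recent RSSI samples:
    aim for the target RSSI given the estimated channel gain, plus a
    margin proportional to the observed channel deviation."""
    n = len(rssi_history)
    mean = sum(rssi_history) / n
    dev = (sum((r - mean) ** 2 for r in rssi_history) / n) ** 0.5
    gain = mean - p_last                   # estimated channel gain, dB
    p_next = target - gain + k_dev * dev   # target level + fade margin
    return min(max(p_next, p_min), p_max)

# Stable channel: back off and save energy; fading channel: full power
stable = [-80.0] * 8
fading = [-88.0, -72.0, -90.0, -70.0, -86.0, -74.0, -92.0, -68.0]
p_stable = next_tx_power(0.0, stable)
p_fading = next_tx_power(0.0, fading)
```

Both histories have the same mean RSSI, but only the high-deviation one triggers the protective margin, which is the trade-off between lifetime and outage the paper optimizes.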
Design of the low area monotonic trim DAC in 40 nm CMOS technology for pixel readout chips
NASA Astrophysics Data System (ADS)
Drozd, A.; Szczygiel, R.; Maj, P.; Satlawa, T.; Grybos, P.
2014-12-01
The recent research in hybrid pixel detectors working in single photon counting mode focuses on nanometer and 3D technologies, which allow making pixels smaller and implementing more complex solutions in each pixel. Usually, a single pixel in readout electronics for X-ray detection comprises a charge amplifier, a shaper, and a discriminator, which classify events occurring at the detector as true or false hits by comparing the amplitude of the obtained signal with a threshold voltage, minimizing the influence of noise. However, making the pixel size smaller often causes problems with pixel-to-pixel uniformity, and additional effects like charge sharing become more visible. To improve channel-to-channel uniformity or implement an algorithm for minimizing the charge sharing effect, small-area trimming DACs working independently in each pixel are necessary. However, meeting the requirement of small area often results in poor linearity and even non-monotonicity. In this paper we present a novel low-area thermometer-coded 6-bit DAC implemented in 40 nm CMOS technology. Monte Carlo simulations performed on the described design show that the DAC is inherently monotonic under all conditions. The presented DAC was implemented in a prototype readout chip with 432 pixels working in single photon counting mode, with two trimming DACs in each pixel. Each DAC occupies an area of 8 μm × 18.5 μm. Measurements and chip tests were performed to obtain reliable statistical results.
Geng, Jia; Wang, Shaoying; Fang, Huaming; Guo, Peixuan
2013-01-01
Nanopores have been utilized to detect the conformation and dynamics of polymers, including DNA and RNA. Biological pores are extremely reproducible at the atomic level with uniform channel sizes. The channel of the bacterial virus phi29 DNA packaging motor is a natural conduit for the transportation of double-stranded DNA (dsDNA), and has the largest diameter among the well-studied biological channels. The larger channel facilitates translocation of dsDNA, and offers more space for further channel modification and conjugation. Interestingly, the relatively large wild type channel, which translocates dsDNA, cannot detect single-stranded nucleic acids (ssDNA or ssRNA) under the current experimental conditions. Herein, we reengineered this motor channel by removing the internal loop segment of the channel. The modification resulted in two classes of channels. One class was the same size as the wild type channel, while the other class had a cross-sectional area about 60% of the wild type. This smaller channel was able to detect the real-time translocation of single-stranded nucleic acids at the single-molecule level. While the wild type connector exhibited a one-way traffic property with respect to dsDNA translocation, the loop deleted connector was able to translocate ssDNA and ssRNA with equal competencies from both termini. This finding of size alterations in reengineered motor channels expands the potential application of the phi29 DNA packaging motor in nanomedicine, nanobiotechnology, and high-throughput single pore DNA sequencing.
Interference Canceller Based on Cycle-and-Add Property for Single User Detection in DS-CDMA
NASA Astrophysics Data System (ADS)
Hettiarachchi, Ranga; Yokoyama, Mitsuo; Uehara, Hideyuki; Ohira, Takashi
In this paper, the performance of a novel interference cancellation technique for single user detection in a direct-sequence code-division multiple access (DS-CDMA) system has been investigated. This new algorithm is based on the Cycle-and-Add property of PN (Pseudorandom Noise) sequences and can be applied to both synchronous and asynchronous systems. The proposed strategy provides a simple method that can delete interference signals one by one regardless of their power levels. Therefore, it is possible to overcome the near-far problem (NFP) in a successive manner without using transmit power control (TPC) techniques. The validity of the proposed procedure is corroborated by computer simulations in additive white Gaussian noise (AWGN) and frequency-nonselective fading channels. Performance results indicate that the proposed receiver outperforms the conventional receiver and, in many cases, it does so with a considerable gain.
Properties of Single K+ and Cl− Channels in Asclepias tuberosa Protoplasts 1
Schauf, Charles L.; Wilson, Kathryn J.
1987-01-01
Potassium and chloride channels were characterized in Asclepias tuberosa suspension cell derived protoplasts by patch voltage-clamp. Whole-cell currents and single channels in excised patches had linear instantaneous current-voltage relations, reversing at the Nernst potentials for K+ and Cl−, respectively. Whole cell K+ currents activated exponentially during step depolarizations, while voltage-dependent Cl− channels were activated by hyperpolarizations. Single K+ channel conductance was 40 ± 5 pS with a mean open time of 4.5 milliseconds at 100 millivolts. Potassium channels were blocked by Cs+ and tetraethylammonium, but were insensitive to 4-aminopyridine. Chloride channels had a single-channel conductance of 100 ± 17 picosiemens, mean open time of 8.8 milliseconds, and were blocked by Zn2+ and ethacrynic acid. Whole-cell Cl− currents were inhibited by abscisic acid, and were unaffected by indole-3-acetic acid and 2,4-dichlorophenoxyacetic acid. Since internal and external composition can be controlled, patch-clamped protoplasts are ideal systems for studying the role of ion channels in plant physiology and development.
Pan, Bifeng; Géléoc, Gwenaelle S; Asai, Yukako; Horwitz, Geoffrey C; Kurima, Kiyoto; Ishikawa, Kotaro; Kawashima, Yoshiyuki; Griffith, Andrew J; Holt, Jeffrey R
2013-08-07
Sensory transduction in auditory and vestibular hair cells requires expression of transmembrane channel-like (Tmc) 1 and 2 genes, but the function of these genes is unknown. To investigate the hypothesis that TMC1 and TMC2 proteins are components of the mechanosensitive ion channels that convert mechanical information into electrical signals, we recorded whole-cell and single-channel currents from mouse hair cells that expressed Tmc1, Tmc2, or mutant Tmc1. Cells that expressed Tmc2 had high calcium permeability and large single-channel currents, while cells with mutant Tmc1 had reduced calcium permeability and reduced single-channel currents. Cells that expressed Tmc1 and Tmc2 had a broad range of single-channel currents, suggesting multiple heteromeric assemblies of TMC subunits. The data demonstrate TMC1 and TMC2 are components of hair cell transduction channels and contribute to permeation properties. Gradients in TMC channel composition may also contribute to variation in sensory transduction along the tonotopic axis of the mammalian cochlea. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sadeghi, Saman; MacKay, William A.; van Dam, R. Michael; Thompson, Michael
2011-02-01
Real-time analysis of multi-channel spatio-temporal sensor data presents a considerable technical challenge for a number of applications. For example, in brain-computer interfaces, signal patterns originating on a time-dependent basis from an array of electrodes on the scalp (i.e. electroencephalography) must be analyzed in real time to recognize mental states and translate these to commands which control operations in a machine. In this paper we describe a new technique for recognition of spatio-temporal patterns based on performing online discrimination of time-resolved events through the use of correlation of phase dynamics between various channels in a multi-channel system. The algorithm extracts unique sensor signature patterns associated with each event during a training period and ranks importance of sensor pairs in order to distinguish between time-resolved stimuli to which the system may be exposed during real-time operation. We apply the algorithm to electroencephalographic signals obtained from subjects tested in the neurophysiology laboratories at the University of Toronto. The extension of this algorithm for rapid detection of patterns in other sensing applications, including chemical identification via chemical or bio-chemical sensor arrays, is also discussed.
Local SAR in parallel transmission pulse design.
Lee, Joonsung; Gebhardt, Matthias; Wald, Lawrence L; Adalsteinsson, Elfar
2012-06-01
The management of local and global power deposition in human subjects (specific absorption rate, SAR) is a fundamental constraint to the application of parallel transmission (pTx) systems. Even though the pTx and single channel have to meet the same SAR requirements, the complex behavior of the spatial distribution of local SAR for transmission arrays poses problems that are not encountered in conventional single-channel systems and places additional requirements on pTx radio frequency pulse design. We propose a pTx pulse design method which builds on recent work to capture the spatial distribution of local SAR in numerical tissue models in a compressed parameterization in order to incorporate local SAR constraints within computation times that accommodate pTx pulse design during an in vivo magnetic resonance imaging scan. Additionally, the algorithm yields a protocol-specific ultimate peak in local SAR, which is shown to bound the achievable peak local SAR for a given excitation profile fidelity. The performance of the approach was demonstrated using a numerical human head model and a 7 Tesla eight-channel transmit array. The method reduced peak local 10 g SAR by 14-66% for slice-selective pTx excitations and 2D selective pTx excitations compared to a pTx pulse design constrained only by global SAR. The primary tradeoff incurred for reducing peak local SAR was an increase in global SAR, up to 34% for the evaluated examples, which is favorable in cases where local SAR constraints dominate the pulse applications. Copyright © 2011 Wiley Periodicals, Inc.
Distributed Channel Allocation and Time Slot Optimization for Green Internet of Things.
Ding, Kaiqi; Zhao, Haitao; Hu, Xiping; Wei, Jibo
2017-10-28
In sustainable smart cities, power saving is a severe challenge in the energy-constrained Internet of Things (IoT). Efficient utilization of the limited non-overlapping channels and time resources is a promising way to reduce network interference and save energy. In this paper, we propose a joint channel allocation and time slot optimization solution for IoT. First, we propose a channel ranking algorithm which enables each node to rank its available channels based on the channel properties. Then, we propose a distributed channel allocation algorithm so that each node can choose a proper channel based on the channel ranking and its own residual energy. Finally, the sleeping duration and spectrum sensing duration are jointly optimized to maximize the normalized throughput and satisfy energy consumption constraints simultaneously. Unlike former approaches, our proposed solution requires no central coordination or global information; each node operates based on its own local information in a totally distributed manner. Theoretical analysis and extensive simulations have validated that when applying our solution in an IoT network: (i) each node can be allocated to a proper channel based on its residual energy to balance the network lifetime; (ii) the network can rapidly converge to collision-free transmission through each node's learning ability in the process of distributed channel allocation; and (iii) the network throughput is further improved via the dynamic time slot optimization.
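The ranking and energy-aware selection steps can be sketched as follows. The scoring weights, channel attributes, and the rule mapping residual energy to a rank are illustrative assumptions; the paper's actual algorithm derives these from measured channel properties and a learning process:

```python
def rank_channels(channels):
    """Score each channel from its measured properties (high idle
    probability is good, high interference is bad) and sort best-first.
    The weights are illustrative, not taken from the paper."""
    def score(ch):
        return 0.6 * ch["idle_prob"] - 0.4 * ch["interference"]
    return sorted(channels, key=score, reverse=True)

def choose_channel(ranked, residual_energy, e_max):
    """Energy-aware pick: energy-rich nodes take top-ranked (more
    contended) channels; low-energy nodes back off to a lower rank,
    balancing lifetime across the network."""
    idx = int((1.0 - residual_energy / e_max) * (len(ranked) - 1))
    return ranked[idx]

channels = [
    {"id": 1, "idle_prob": 0.9, "interference": 0.2},
    {"id": 2, "idle_prob": 0.5, "interference": 0.1},
    {"id": 3, "idle_prob": 0.8, "interference": 0.7},
]
ranked = rank_channels(channels)
best = choose_channel(ranked, residual_energy=100.0, e_max=100.0)
```

Each node runs this on its own local measurements, which is what makes the scheme fully distributed.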
Characteristics of camel-gate structures with active doping channel profiles
NASA Astrophysics Data System (ADS)
Tsai, Jung-Hui; Lour, Wen-Shiung; Laih, Lih-Wen; Liu, Rong-Chau; Liu, Wen-Chau
1996-03-01
In this paper, we demonstrate the influence of the channel doping profile on the performance of camel-gate field effect transistors (CAMFETs). For comparison, single and tri-step doping channel structures with identical doping-thickness products are employed, while other parameters are kept unchanged. The results of a theoretical analysis show that the single doping channel FET with a lightly doped active layer has a higher barrier height and drain-source saturation current; however, the transconductance is decreased. For a tri-step doping channel structure, it is found that the output drain-source saturation current and the barrier height are enhanced. Furthermore, the relatively voltage-independent behavior is improved. Two CAMFETs with single and tri-step doping channel structures have been fabricated and discussed. The devices exhibit nearly voltage-independent transconductances of 144 mS mm⁻¹ and 222 mS mm⁻¹ for the single and tri-step doping channel CAMFETs, respectively. The operation gate voltage may extend to ±1.5 V for a tri-step doping channel CAMFET. In addition, drain current densities of >750 and 405 mA mm⁻¹ are obtained for the tri-step and single doping CAMFETs. These experimental results are consistent with the theoretical analysis.
A TDM link with channel coding and digital voice.
NASA Technical Reports Server (NTRS)
Jones, M. W.; Tu, K.; Harton, P. L.
1972-01-01
The features of a TDM (time-division multiplexed) link model are described. A PCM telemetry sequence was coded for error correction and multiplexed with a digitized voice channel. An all-digital implementation of a variable-slope delta modulation algorithm was used to digitize the voice channel. The results of extensive testing are reported. The measured coding gain and the system performance over a Gaussian channel are compared with theoretical predictions and computer simulations. Word intelligibility scores are reported as a measure of voice channel performance.
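Variable-slope delta modulation of the kind used for the voice channel can be sketched in a few lines: one bit per sample (the sign of the prediction error), with the step size growing during slope overload (a run of identical bits) and decaying otherwise. The step bounds, adaptation ratio, and run length of three are illustrative assumptions, not the paper's parameters:

```python
import math

def cvsd_encode(samples, step_min=0.01, step_max=1.0, gain=1.2):
    """Variable-slope delta modulator: emit the sign of the prediction
    error; grow the step on a run of three identical bits, else decay."""
    bits, estimates = [], []
    estimate, step = 0.0, step_min
    for x in samples:
        bit = 1 if x >= estimate else 0
        bits.append(bit)
        if len(bits) >= 3 and bits[-1] == bits[-2] == bits[-3]:
            step = min(step * gain, step_max)   # slope overload
        else:
            step = max(step / gain, step_min)
        estimate += step if bit else -step
        estimates.append(estimate)
    return bits, estimates

def cvsd_decode(bits, step_min=0.01, step_max=1.0, gain=1.2):
    """Decoder mirrors the encoder's step adaptation exactly, so it
    reconstructs the encoder's internal estimate from the bits alone."""
    out, estimate, step = [], 0.0, step_min
    for i, bit in enumerate(bits):
        if i >= 2 and bits[i] == bits[i - 1] == bits[i - 2]:
            step = min(step * gain, step_max)
        else:
            step = max(step / gain, step_min)
        estimate += step if bit else -step
        out.append(estimate)
    return out

voice = [math.sin(2 * math.pi * 2 * i / 200) for i in range(200)]
bits, estimates = cvsd_encode(voice)
recon = cvsd_decode(bits)
```

Because encoder and decoder run the identical adaptation rule, the one-bit stream is all that needs to be multiplexed into the TDM frame.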
Minimizing embedding impact in steganography using trellis-coded quantization
NASA Astrophysics Data System (ADS)
Filler, Tomáš; Judas, Jan; Fridrich, Jessica
2010-01-01
In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization and contrast its performance with bounds derived from appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
How many neurons can we see with current spike sorting algorithms?
Pedreira, Carlos; Martinez, Juan; Ison, Matias J; Quian Quiroga, Rodrigo
2012-10-15
Recent studies highlighted the disagreement between the typical number of neurons observed with extracellular recordings and the ones to be expected based on anatomical and physiological considerations. This disagreement has been mainly attributed to the presence of sparsely firing neurons. However, it is also possible that this is due to limitations of the spike sorting algorithms used to process the data. To address this issue, we used realistic simulations of extracellular recordings and found a relatively poor spike sorting performance for simulations containing a large number of neurons. In fact, the number of correctly identified neurons for single-channel recordings showed an asymptotic behavior saturating at about 8-10 units, when up to 20 units were present in the data. This performance was significantly poorer for neurons with low firing rates, as these units were twice more likely to be missed than the ones with high firing rates in simulations containing many neurons. These results uncover one of the main reasons for the relatively low number of neurons found in extracellular recording and also stress the importance of further developments of spike sorting algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.
Damrath, Martin; Korte, Sebastian; Hoeher, Peter Adam
2017-01-01
This paper introduces the equivalent discrete-time channel model (EDTCM) to the area of diffusion-based molecular communication (DBMC). Emphasis is on an absorbing receiver, which is based on the so-called first passage time concept. In the wireless communications community the EDTCM is well known. Therefore, it is anticipated that the EDTCM improves the accessibility of DBMC and supports the adaptation of classical wireless communication algorithms to the area of DBMC. Furthermore, the EDTCM has the capability to provide a remarkable reduction of computational complexity compared to random walk based DBMC simulators. Besides the exact EDTCM, three approximations thereof based on binomial, Gaussian, and Poisson approximation are proposed and analyzed in order to further reduce computational complexity. In addition, the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm is adapted to all four channel models. Numerical results show the performance of the exact EDTCM, illustrate the performance of the adapted BCJR algorithm, and demonstrate the accuracy of the approximations.
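The EDTCM taps come from increments of the cumulative first-passage distribution: for free diffusion toward a fully absorbing sphere, the probability that a molecule released at distance d has been absorbed by time t has a classic closed form. The geometry and diffusion coefficient below are assumed example values; with N molecules released, the count received in slot k is then (approximately) binomial with the per-slot probability computed here:

```python
import math

def hitting_prob(t, d=1e-5, r=1e-6, D=1e-9):
    """Cumulative first-passage probability: a molecule released at
    distance d (m) from the centre of an absorbing sphere of radius r
    (m), diffusing with coefficient D (m^2/s), is absorbed by time t."""
    if t <= 0:
        return 0.0
    return (r / d) * math.erfc((d - r) / math.sqrt(4 * D * t))

def slot_probs(n_slots, slot_len):
    """Per-slot absorption probabilities (the discrete channel taps):
    increments of the cumulative first-passage distribution."""
    cum = [hitting_prob(k * slot_len) for k in range(n_slots + 1)]
    return [cum[k + 1] - cum[k] for k in range(n_slots)]

probs = slot_probs(n_slots=10, slot_len=0.01)
```

Note the cumulative probability saturates at r/d < 1, reflecting molecules that escape to infinity and are never received; the binomial, Gaussian, and Poisson channel approximations in the paper are all built on these per-slot probabilities.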
NASA Astrophysics Data System (ADS)
Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei
2018-04-01
We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To accurately identify the outer layer resistivity based on the series relationship model of the layered resistance, the apparent resistivity and diffusion depth of the different time channels are approximately replaced by related model parameters; that is, we propose an apparent resistivity correction algorithm. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and the corrected resistivities of the larger time channels follow the outer layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.
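Halley's optimisation, the root-finding engine behind the all-time algorithm, can be shown generically: it uses first and second derivatives for cubic convergence when inverting a field-versus-resistivity relation. The cubic below is only a stand-in target function, since the paper's actual equation is the whole-space transient electric field expression:

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method: root finding with cubic convergence using the
    first and second derivatives of f."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        dx = 2 * fx * dfx / (2 * dfx * dfx - fx * d2fx)
        x -= dx
        if abs(dx) < tol:
            break
    return x

# Toy inversion: solve x**3 - 20 = 0 as a stand-in for solving
# E(rho) - E_measured = 0 for the apparent resistivity rho
root = halley(lambda x: x ** 3 - 20.0,
              lambda x: 3 * x ** 2,
              lambda x: 6 * x,
              x0=1.0)
```

In the all-time algorithm this inversion is performed once per time channel, yielding the time-varying apparent resistivity curve.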
NASA Astrophysics Data System (ADS)
Merk, D.; Zinner, T.
2013-02-01
In this paper a new detection scheme for Convective Initiation (CI) under day and night conditions is presented. The new algorithm combines the strengths of two existing methods for detecting Convective Initiation with geostationary satellite data and uses the channels of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). For the new algorithm five infrared criteria from the Satellite Convection Analysis and Tracking algorithm (SATCAST) and one High Resolution Visible channel (HRV) criterion from Cb-TRAM were adapted. This set of criteria aims to identify the typical development of quickly developing convective cells in an early stage. The different criteria include time trends of the 10.8 IR channel and IR channel differences, as well as their time trends. To provide the trend fields an optical flow based method is used, the Pyramidal Matching algorithm, which is part of Cb-TRAM. The new detection scheme is implemented in Cb-TRAM and is verified for seven days which comprise different weather situations in Central Europe. Contrasted with the original early stage detection scheme of Cb-TRAM, skill scores are provided. From the comparison against detections of later thunderstorm stages, which are also provided by Cb-TRAM, a decrease in false prior warnings (false alarm ratio) from 91 to 81% is presented, an increase of the critical success index from 7.4 to 12.7%, and a decrease of the BIAS from 320 to 146% for normal scan mode. Similar trends are found for rapid scan mode. Most obvious is the decline of false alarms found for synoptic conditions with upper-level cold air masses triggering convection.
NASA Astrophysics Data System (ADS)
Merk, D.; Zinner, T.
2013-08-01
In this paper a new detection scheme for convective initiation (CI) under day and night conditions is presented. The new algorithm combines the strengths of two existing methods for detecting CI with geostationary satellite data. It uses the channels of the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). For the new algorithm, five infrared (IR) criteria from the Satellite Convection Analysis and Tracking algorithm (SATCAST) and one high-resolution visible channel (HRV) criterion from Cb-TRAM were adapted. This set of criteria aims to identify the typical development of quickly developing convective cells in an early stage. The different criteria include time trends of the 10.8 IR channel, and IR channel differences, as well as their time trends. To provide the trend fields an optical-flow-based method is used: the pyramidal matching algorithm, which is part of Cb-TRAM. The new detection scheme is implemented in Cb-TRAM, and is verified for seven days which comprise different weather situations in central Europe. Contrasted with the original early-stage detection scheme of Cb-TRAM, skill scores are provided. From the comparison against detections of later thunderstorm stages, which are also provided by Cb-TRAM, a decrease in false prior warnings (false alarm ratio) from 91 to 81% is presented, an increase of the critical success index from 7.4 to 12.7%, and a decrease of the BIAS from 320 to 146% for normal scan mode. Similar trends are found for rapid scan mode. Most obvious is the decline of false alarms found for the synoptic class of cold air masses.
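A multi-criteria CI test of this kind can be sketched as a set of boolean interest fields combined by a vote. The threshold values and criterion names below are illustrative assumptions, not the SATCAST or Cb-TRAM values:

```python
def ci_candidate(bt_108, bt_108_trend, wv_ir_diff, wv_ir_diff_trend, hrv_texture):
    """Combine IR and HRV interest fields into a CI candidate flag.
    All thresholds are illustrative placeholders, not the published values."""
    criteria = [
        bt_108 < 273.15,         # 10.8 um brightness temperature below freezing
        bt_108_trend < -4.0,     # rapid cloud-top cooling, K per scan interval
        wv_ir_diff > -10.0,      # WV-IR difference approaching zero
        wv_ir_diff_trend > 3.0,  # WV-IR difference trend increasing
        hrv_texture > 0.5,       # daytime HRV texture criterion
    ]
    return sum(criteria) >= 4    # require most criteria to fire
```

A pixel flagged by most criteria becomes an early-warning candidate; requiring four of five rather than all five is one way to trade false alarm ratio against detection rate.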
Han, Jaehee; Gnatenco, Carmen; Sladek, Celia D; Kim, Donghee
2003-01-01
Magnocellular neurosecretory cells (MNCs) were isolated from the supraoptic nucleus of rat hypothalamus, and properties of K+ channels that may regulate the resting membrane potential and the excitability of MNCs were studied. MNCs showed large transient outward currents, typical of vasopressin- and oxytocin-releasing neurons. K+ channels in MNCs were identified by recording K+ channels that were open at rest in cell-attached and inside-out patches in symmetrical 150 mM KCl. Eight different K+ channels were identified and could be distinguished unambiguously by their single-channel kinetics and voltage-dependent rectification. Two K+ channels could be considered functional correlates of TASK-1 and TASK-3, as judged by their single-channel kinetics and high sensitivity to pHo. Three K+ channels showed properties similar to TREK-type tandem-pore K+ channels (TREK-1, TREK-2 and a novel TREK), as judged by their activation by membrane stretch, intracellular acidosis and arachidonic acid. One K+ channel was activated by application of pressure, arachidonic acid and alkaline pHi, and showed single-channel kinetics indistinguishable from those of TRAAK. One K+ channel showed strong inward rectification and a single-channel conductance similar to that of a classical inward rectifier, IRK3. Finally, a K+ channel whose cloned counterpart has not yet been identified was highly sensitive to extracellular pH near the physiological range, similar to the TASK channels, and was the most active of all the K+ channels. Our results show that in MNCs at rest, eight different types of K+ channels can be found, six of which belong to the tandem-pore K+ channel family. Various physiological and pathophysiological conditions may modulate these K+ channels and regulate the excitability of MNCs. PMID:12562991
Bio-Inspired Microsystem for Robust Genetic Assay Recognition
Lue, Jaw-Chyng; Fang, Wai-Chi
2008-01-01
A compact integrated system-on-chip (SoC) architecture solution for robust, real-time, and on-site genetic analysis has been proposed. This microsystem solution is noise-tolerant and suitable for analyzing the weak fluorescence patterns from a PCR-prepared dual-labeled DNA microchip assay. In the architecture, a preceding VLSI differential logarithm microchip is designed for effectively computing the logarithm of the normalized input fluorescence signals. A posterior VLSI artificial neural network (ANN) processor chip is used for analyzing the processed signals from the differential logarithm stage. A single-channel logarithmic circuit was fabricated and characterized. A prototype ANN chip with an unsupervised winner-take-all (WTA) function was designed, fabricated, and tested. An ANN learning algorithm using a novel sigmoid-logarithmic transfer function based on the supervised backpropagation (BP) algorithm is proposed for robustly recognizing low-intensity patterns. Our results show that the trained new ANN can recognize low-fluorescence patterns better than an ANN using the conventional sigmoid function. PMID:18566679
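The two computational stages described, the differential logarithm of normalized fluorescence signals and an unsupervised winner-take-all decision, can be sketched numerically as below. The actual chips implement these stages in VLSI hardware; the function names here are mine, and the example pattern is synthetic:

```python
import math

def differential_log(signals):
    """Normalize fluorescence intensities to their sum, then take the log,
    mimicking the differential-logarithm front-end stage."""
    total = sum(signals)
    return [math.log(s / total) for s in signals]

def winner_take_all(activations):
    """Unsupervised WTA: return the index of the strongest activation."""
    return max(range(len(activations)), key=lambda i: activations[i])

# Synthetic three-spot assay pattern; spot 0 dominates.
pattern = [0.9, 0.05, 0.05]
winner = winner_take_all(differential_log(pattern))
```

Because the logarithm is monotone, WTA picks the same winner before and after the log stage; the log's role in the architecture is to compress dynamic range for the downstream ANN.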
Hyperspectral data discrimination methods
NASA Astrophysics Data System (ADS)
Casasent, David P.; Chen, Xuewen
2000-12-01
Hyperspectral data provide spectral response information that yields a detailed chemical, moisture, and other description of the constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistically small training set sizes, and the need to employ discriminatory features while still achieving good generalization (comparable training and test set performance). Several two-step methods are compared with a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin-infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS), in which the level of a specific chemical constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.
NASA Technical Reports Server (NTRS)
Alexandrov, Mikhail Dmitrievic; Cairns, Brian; Emde, Claudia; Ackerman, Andrew S.; van Diedenhoven, Bastiaan
2012-01-01
We present an algorithm for the retrieval of cloud droplet size distribution parameters (effective radius and variance) from Research Scanning Polarimeter (RSP) measurements. The RSP is an airborne prototype for the Aerosol Polarimetry Sensor (APS), which was on board the NASA Glory satellite. This instrument measures both polarized and total reflectance in 9 spectral channels with central wavelengths ranging from 410 to 2260 nm. The cloud droplet size retrievals use the polarized reflectance in the scattering angle range between 135° and 165°, where it exhibits the sharply defined structure known as the rain- or cloud-bow. The shape of the rainbow is determined mainly by the single-scattering properties of cloud particles. This significantly simplifies both forward modeling and inversions, while also substantially reducing uncertainties caused by aerosol loading and the possible presence of undetected clouds nearby. In this study we present an accuracy evaluation of our algorithm based on the results of sensitivity tests performed using realistic simulated cloud radiation fields.
Single underwater image enhancement based on color cast removal and visibility restoration
NASA Astrophysics Data System (ADS)
Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian
2016-05-01
Images taken underwater usually suffer from color cast and serious loss of contrast and visibility. Degraded underwater images are inconvenient for observation and analysis. To address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm is first presented based on optimization theory. Then, based on the minimum information loss principle and the inherent relationship of the medium transmission maps of the three color channels in an underwater image, an effective visibility restoration algorithm is proposed to recover the visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and a color accuracy test are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover the natural appearance of degraded underwater images. Additionally, the proposed method is comparable to, and even better than, several state-of-the-art methods.
An Evolutionary Optimization of the Refueling Simulation for a CANDU Reactor
NASA Astrophysics Data System (ADS)
Do, Q. B.; Choi, H.; Roh, G. H.
2006-10-01
This paper presents a multi-cycle and multi-objective optimization method for the refueling simulation of a 713 MWe Canada deuterium uranium (CANDU-6) reactor based on a genetic algorithm, an elitism strategy and a heuristic rule. The proposed algorithm searches for the optimal refueling patterns for a single cycle that maximize the average discharge burnup, minimize the maximum channel power and minimize the change in the zone controller unit water fills while satisfying the most important safety-related neutronic parameters of the reactor core. The heuristic rule generates an initial population of individuals very close to a feasible solution, which reduces the computing time of the optimization process. The multi-cycle optimization is carried out based on a single-cycle refueling simulation. The proposed approach was verified by a refueling simulation of a natural uranium CANDU-6 reactor for an operation period of 6 months at an equilibrium state and compared with the experience-based automatic refueling simulation and the generalized perturbation theory. The comparison has shown that the simulation results are consistent with one another and that the proposed approach is a reasonable optimization method for the refueling simulation, controlling all the safety-related parameters of the reactor core during the simulation.
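A genetic algorithm with elitism of the kind described can be sketched as below. This is a hedged illustration only: a toy one-max fitness stands in for the reactor-physics objectives and constraints, and the population size, mutation rate, and parent-pool choice are arbitrary:

```python
import random

def evolve(fitness, genome_len, pop_size=30, gens=60, elite=2, seed=0):
    """Minimal genetic algorithm with elitism over binary genomes.
    fitness: callable scoring a genome (higher is better)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        nxt = [p[:] for p in pop[:elite]]              # elitism: carry over the best
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # parents from the fitter half
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if rng.random() < 0.1:
                i = rng.randrange(genome_len)
                child[i] ^= 1                          # single-bit mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy objective: maximize the number of ones in the genome.
best = evolve(sum, genome_len=12)
```

In the refueling setting the genome would encode a candidate refueling pattern, the heuristic rule would seed `pop` near feasibility, and `fitness` would aggregate burnup, channel power, and zone-fill objectives with constraint penalties.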
Single-channel kinetics of BK (Slo1) channels
Geng, Yanyan; Magleby, Karl L.
2014-01-01
Single-channel kinetics has proven a powerful tool to reveal information about the gating mechanisms that control the opening and closing of ion channels. This introductory review focuses on the gating of large conductance Ca2+- and voltage-activated K+ (BK or Slo1) channels at the single-channel level. It starts with single-channel current records and progresses to presentation and analysis of single-channel data and the development of gating mechanisms in terms of discrete state Markov (DSM) models. The DSM models are formulated in terms of the tetrameric modular structure of BK channels, consisting of a central transmembrane pore-gate domain (PGD) attached to four surrounding transmembrane voltage sensing domains (VSDs) and a large intracellular cytosolic domain (CTD), also referred to as the gating ring. The modular structure and data analysis show that the Ca2+- and voltage-dependent gating, considered separately, can each be approximated by 10-state two-tiered models with five closed states on the upper tier and five open states on the lower tier. The modular structure and joint Ca2+- and voltage-dependent gating are consistent with a 50-state two-tiered model with 25 closed states on the upper tier and 25 open states on the lower tier. Adding an additional tier of brief closed (flicker) states to the 10-state or 50-state models improved the description of the gating. Under fixed experimental conditions, a channel gates in only a subset of the potential states. The detected number of states and the correlations between adjacent interval durations are consistent with the tiered models. The examined models can account for the single-channel kinetics and the bursting behavior of gating. Ca2+ and voltage activate BK channels predominantly by increasing the effective opening rate of the channel, with a smaller decrease in the effective closing rate. Ca2+ and depolarization thus activate mainly by destabilizing the closed states. PMID:25653620
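The simplest discrete-state Markov gating scheme, a single closed-open pair with exponentially distributed dwell times, can be simulated as below; the BK models discussed are 10- and 50-state tiered extensions of this idea. The rate constants here are arbitrary illustrations:

```python
import random

def simulate_gating(k_open, k_close, n_events, seed=1):
    """Simulate a two-state (closed <-> open) Markov gating scheme.
    Each dwell time is exponential with the current state's exit rate:
    closed dwells ~ Exp(k_open), open dwells ~ Exp(k_close)."""
    rng = random.Random(seed)
    open_dwells, closed_dwells = [], []
    state_open = False          # start in the closed state
    for _ in range(n_events):
        rate = k_close if state_open else k_open
        dwell = rng.expovariate(rate)
        (open_dwells if state_open else closed_dwells).append(dwell)
        state_open = not state_open
    return open_dwells, closed_dwells

# Arbitrary rates: opening at 100/s, closing at 50/s.
opens, closeds = simulate_gating(k_open=100.0, k_close=50.0, n_events=20000)
```

Mean open and closed dwell times recover 1/k_close and 1/k_open; fitting exponential mixtures to such dwell-time distributions is how the number of states per tier is inferred from real records.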
Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.
Tao, Liang; Kwan, Hon Keung
2012-07-01
Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
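The building block of the convolver banks is the discrete Hartley transform. A direct (non-fast) 1-D DHT, using the cas = cos + sin kernel, is sketched below; applying it twice returns N times the input, so the forward and inverse transforms share one routine. The paper's banks use a fast 2-D DHT, which this O(N²) sketch does not attempt to reproduce:

```python
import math

def dht(x):
    """Direct 1-D discrete Hartley transform:
    H[k] = sum_n x[n] * cas(2*pi*k*n/N), where cas(t) = cos(t) + sin(t).
    The DHT of a real signal is real, unlike the DFT."""
    big_n = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * k * n / big_n)
                        + math.sin(2 * math.pi * k * n / big_n))
                for n in range(big_n))
            for k in range(big_n)]

x = [1.0, 2.0, 3.0, 4.0]
y = dht(dht(x))   # involution: DHT applied twice gives N * x
```

The real-valued output is what makes a DHT-based real-valued Gabor transform attractive: the parallel channels never touch complex arithmetic.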
Spatial detection of tv channel logos as outliers from the content
NASA Astrophysics Data System (ADS)
Ekin, Ahmet; Braspenning, Ralph
2006-01-01
This paper proposes a purely image-based TV channel logo detection algorithm that can detect logos independently from their motion and transparency features. The proposed algorithm can robustly detect any type of logo, such as transparent and animated logos, without requiring any temporal constraints, whereas known methods have to wait for the occurrence of large motion in the scene and assume stationary logos. The algorithm models logo pixels as outliers from the actual scene content, which is represented by multiple 3-D histograms in the YCbCr space. We use four scene histograms corresponding to each of the four corners because the content characteristics change from one image corner to another. A further novelty of the proposed algorithm is that we define the image corners and the areas where we compute the scene histograms by a cinematic technique called the Golden Section Rule that is used by professionals. The robustness of the proposed algorithm is demonstrated over a dataset of representative TV content.
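The outlier idea can be sketched as follows: quantize pixel colors, build a histogram over a corner region, and flag pixels whose bin is rare relative to the scene. The bin size and rarity threshold below are illustrative assumptions, not the paper's settings:

```python
from collections import Counter

def logo_outliers(pixels, threshold=0.01):
    """Flag pixels whose quantized (Y, Cb, Cr) color is rare in the
    region's scene histogram, treating them as potential logo pixels.
    pixels: list of (y, cb, cr) tuples with 8-bit components."""
    quant = [(y // 32, cb // 32, cr // 32) for (y, cb, cr) in pixels]  # 3-D bins
    hist = Counter(quant)
    total = len(pixels)
    return [i for i, q in enumerate(quant) if hist[q] / total < threshold]
```

A real implementation would run this per Golden-Section corner region and add spatial coherence filtering, since isolated rare pixels are noise rather than logo.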
Optimized design of embedded DSP system hardware supporting complex algorithms
NASA Astrophysics Data System (ADS)
Li, Yanhua; Wang, Xiangjun; Zhou, Xinling
2003-09-01
The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs a DSP with a high performance-price ratio, the TMS320C6712, and a large FLASH, the system permits loading and performing complex algorithms with little algorithm optimization and code reduction. The CPLD provides flexible logic control for the whole DSP board, especially the input channel, and allows a convenient interface between different sensors and the DSP system. The transceiver circuit can transfer data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Because of the characteristics referred to above, the hardware is an ideal platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The results reveal that this hardware easily interfaces with a CMOS imager and is capable of carrying out complex biometric identification algorithms that require real-time processing.
Comparison of dual-k spacer and single-k spacer for single NWFET and 3-stack NWFET
NASA Astrophysics Data System (ADS)
Ko, Hyungwoo; Kim, Jongsu; Kim, Minsoo; Kang, Myounggon; Shin, Hyungcheol
2018-02-01
A comparative analysis of the dual-k spacer and the single-k spacer is conducted for the single nanowire FET (NWFET) and the 3-stack NWFET with underlap and overlap channels. The 3-stack NWFET shows better delay characteristics than the single NWFET when a high-permittivity material is used for the inner spacer (Cin) in the dual-k spacer structure. In addition, there is no delay difference between the overlap and underlap channels when the dual-k spacer structure is used, but the underlap channel of the dual-k 3-stack NWFET shows better short-channel immunity.
Zhang, Shen; Zheng, Yanchun; Wang, Daifa; Wang, Ling; Ma, Jianai; Zhang, Jing; Xu, Weihao; Li, Deyu; Zhang, Dan
2017-08-10
Motor imagery is one of the most investigated paradigms in the field of brain-computer interfaces (BCIs). The present study explored the feasibility of applying a common spatial pattern (CSP)-based algorithm for a functional near-infrared spectroscopy (fNIRS)-based motor imagery BCI. Ten participants performed kinesthetic imagery of their left- and right-hand movements while 20-channel fNIRS signals were recorded over the motor cortex. The CSP method was implemented to obtain the spatial filters specific for both imagery tasks. The mean, slope, and variance of the CSP filtered signals were taken as features for BCI classification. Results showed that the CSP-based algorithm outperformed two representative channel-wise methods for classifying the two imagery statuses using either data from all channels or averaged data from imagery responsive channels only (oxygenated hemoglobin: CSP-based: 75.3±13.1%; all-channel: 52.3±5.3%; averaged: 64.8±13.2%; deoxygenated hemoglobin: CSP-based: 72.3±13.0%; all-channel: 48.8±8.2%; averaged: 63.3±13.3%). Furthermore, the effectiveness of the CSP method was also observed for the motor execution data to a lesser extent. A partial correlation analysis revealed significant independent contributions from all three types of features, including the often-ignored variance feature. To our knowledge, this is the first study demonstrating the effectiveness of the CSP method for fNIRS-based motor imagery BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
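CSP reduces to a generalized eigenvalue problem on the two class covariance matrices: the extreme eigenvectors define spatial filters that maximize the variance ratio between classes. A compact sketch (not the authors' code; applied here to toy 2-channel covariances rather than 20-channel fNIRS data):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(cov_a, cov_b, n_filters=2):
    """Common spatial patterns: solve cov_a w = lambda (cov_a + cov_b) w and
    keep the eigenvectors with the smallest and largest eigenvalues, which
    maximize variance for one class while minimizing it for the other."""
    vals, vecs = eigh(cov_a, cov_a + cov_b)   # generalized symmetric eig-problem
    order = np.argsort(vals)
    half = n_filters // 2
    picks = np.r_[order[:half], order[-(n_filters - half):]]
    return vecs[:, picks].T                   # rows are spatial filters

# Toy covariances: class A strong on channel 0, class B strong on channel 1.
A = np.diag([4.0, 1.0])
B = np.diag([1.0, 4.0])
W = csp_filters(A, B)
```

Projecting each trial through `W` and taking variance (or, as in the study, mean and slope) of the filtered signals yields the classification features.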
Single channel double-duct liquid metal electrical generator using a magnetohydrodynamic device
Haaland, C.M.; Deeds, W.E.
1999-07-13
A single channel double-duct liquid metal electrical generator using a magnetohydrodynamic (MHD) device. The single channel device provides useful output AC electric energy. The generator includes a two-cylinder linear-piston engine which drives liquid metal in a single channel looped around one side of the MHD device to form a double-duct contra-flowing liquid metal MHD generator. A flow conduit network and drive mechanism are provided for moving liquid metal with an oscillating flow through a static magnetic field to produce useful AC electric energy at practical voltages and currents. Variable stroke is obtained by controlling the quantity of liquid metal in the channel. High efficiency is obtained over a wide range of frequency and power output. 5 figs.
Single channel double-duct liquid metal electrical generator using a magnetohydrodynamic device
Haaland, Carsten M.; Deeds, W. Edward
1999-01-01
A single channel double-duct liquid metal electrical generator using a magnetohydrodynamic (MHD) device. The single channel device provides useful output AC electric energy. The generator includes a two-cylinder linear-piston engine which drives liquid metal in a single channel looped around one side of the MHD device to form a double-duct contra-flowing liquid metal MHD generator. A flow conduit network and drive mechanism are provided for moving liquid metal with an oscillating flow through a static magnetic field to produce useful AC electric energy at practical voltages and currents. Variable stroke is obtained by controlling the quantity of liquid metal in the channel. High efficiency is obtained over a wide range of frequency and power output.
NASA Astrophysics Data System (ADS)
Li, Zhong-xiao; Li, Zhen-chun
2016-09-01
The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve for the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples, at the cost of more computation time, than the 1D predictive filter. In this paper we first use the cross-correlation strategy to determine the limited supporting region of filters, where the coefficients play a major role in multiple removal in the filter coefficient space. To solve for the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve for the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1-norm minimization) constraint on primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region of filters. Compared with the FIST-based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method reduces the computation burden effectively while achieving similar accuracy. Additionally, the proposed method better balances multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and the FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
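The fast iterative shrinkage-thresholding scheme used for the L1-constrained filter solve can be sketched generically as below (a textbook FISTA for L1-regularized least squares, not the seismic-specific windowed-filter implementation):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator: the proximal map of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Gradient step on a momentum point y, then shrinkage."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft(y - grad / L, lam / L)  # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # Nesterov momentum
        x, t = x_new, t_new
    return x
```

Restricting the solve to a limited supporting region corresponds to shrinking the columns of `A` to the filter coefficients that the cross-correlation test retained.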
Song, Jiajia; Li, Dan; Ma, Xiaoyuan; Teng, Guowei; Wei, Jianming
2017-01-01
Accurate dynamic heart-rate (HR) estimation using a photoplethysmogram (PPG) during intense physical activity is challenging because of corruption by motion artifacts (MAs). It is difficult to reconstruct a clean signal and extract HR from contaminated PPG. This paper proposes a robust HR-estimation algorithm framework that uses one-channel PPG and tri-axis acceleration data to reconstruct the PPG and calculate the HR based on features of the PPG and spectral analysis. First, the signal is checked for the presence of MAs. Then, the spectral peaks corresponding to the acceleration data are filtered from the periodogram of the PPG when MAs exist. Different signal-processing methods are applied based on the number of remaining PPG spectral peaks. The main MA-removal algorithm (NFEEMD) includes a repeated single-notch filter and ensemble empirical mode decomposition. Finally, HR calibration is designed to ensure the accuracy of HR tracking. The NFEEMD algorithm was performed on the 23 datasets from the 2015 IEEE Signal Processing Cup Database. The average estimation errors were 1.12 BPM (12 training datasets), 2.63 BPM (10 testing datasets) and 1.87 BPM (all 23 datasets), respectively. The Pearson correlation was 0.992. The experimental results illustrate that the proposed algorithm is suitable not only for HR estimation during continuous activities, like slow running (13 training datasets), but also for intense physical activities with acceleration, like arm exercise (10 testing datasets). PMID:29068403
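The spectral-peak-filtering idea can be sketched as follows. This is a simplified single-axis version: the paper uses tri-axis acceleration, EEMD reconstruction, and HR tracking, all omitted here, and the notch width and HR band are assumptions:

```python
import numpy as np

def estimate_hr(ppg, acc, fs, notch_hz=0.15):
    """Estimate HR in BPM: notch out the periodogram peak that coincides
    with the dominant accelerometer frequency (a motion artifact), then
    pick the strongest remaining PPG peak in a plausible HR band."""
    freqs = np.fft.rfftfreq(len(ppg), 1.0 / fs)
    ppg_spec = np.abs(np.fft.rfft(ppg - np.mean(ppg)))
    acc_spec = np.abs(np.fft.rfft(acc - np.mean(acc)))
    acc_peak = freqs[np.argmax(acc_spec)]
    mask = np.abs(freqs - acc_peak) > notch_hz       # remove the MA peak
    band = (freqs > 0.7) & (freqs < 3.5) & mask      # ~42-210 BPM
    return 60.0 * freqs[band][np.argmax(ppg_spec[band])]
```

With a synthetic PPG whose motion artifact is stronger than the cardiac peak, the notch is what keeps the estimate from locking onto the arm-swing frequency.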
Confidence Sharing: An Economic Strategy for Efficient Information Flows in Animal Groups
Korman, Amos; Greenwald, Efrat; Feinerman, Ofer
2014-01-01
Social animals may share information to obtain a more complete and accurate picture of their surroundings. However, physical constraints on communication limit the flow of information between interacting individuals in a way that can cause an accumulation of errors and deteriorated collective behaviors. Here, we theoretically study a general model of information sharing within animal groups. We take an algorithmic perspective to identify efficient communication schemes that are, nevertheless, economic in terms of communication, memory and individual internal computation. We present a simple and natural algorithm in which each agent compresses all information it has gathered into a single parameter that represents its confidence in its behavior. Confidence is communicated between agents by means of active signaling. We motivate this model by novel and existing empirical evidences for confidence sharing in animal groups. We rigorously show that this algorithm competes extremely well with the best possible algorithm that operates without any computational constraints. We also show that this algorithm is minimal, in the sense that further reduction in communication may significantly reduce performances. Our proofs rely on the Cramér-Rao bound and on our definition of a Fisher Channel Capacity. We use these concepts to quantify information flows within the group which are then used to obtain lower bounds on collective performance. The abstract nature of our model makes it rigorously solvable and its conclusions highly general. Indeed, our results suggest confidence sharing as a central notion in the context of animal communication. PMID:25275649
Confidence sharing: an economic strategy for efficient information flows in animal groups.
Korman, Amos; Greenwald, Efrat; Feinerman, Ofer
2014-10-01
Social animals may share information to obtain a more complete and accurate picture of their surroundings. However, physical constraints on communication limit the flow of information between interacting individuals in a way that can cause an accumulation of errors and deteriorated collective behaviors. Here, we theoretically study a general model of information sharing within animal groups. We take an algorithmic perspective to identify efficient communication schemes that are, nevertheless, economic in terms of communication, memory and individual internal computation. We present a simple and natural algorithm in which each agent compresses all information it has gathered into a single parameter that represents its confidence in its behavior. Confidence is communicated between agents by means of active signaling. We motivate this model by novel and existing empirical evidences for confidence sharing in animal groups. We rigorously show that this algorithm competes extremely well with the best possible algorithm that operates without any computational constraints. We also show that this algorithm is minimal, in the sense that further reduction in communication may significantly reduce performances. Our proofs rely on the Cramér-Rao bound and on our definition of a Fisher Channel Capacity. We use these concepts to quantify information flows within the group which are then used to obtain lower bounds on collective performance. The abstract nature of our model makes it rigorously solvable and its conclusions highly general. Indeed, our results suggest confidence sharing as a central notion in the context of animal communication.
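The essence of confidence sharing, compressing all gathered evidence into one confidence parameter that composes across agents, can be illustrated by inverse-variance weighting, where confidence behaves like Fisher information. This is an illustration of the idea, not the paper's protocol:

```python
def fuse(estimates):
    """Fuse (estimate, confidence) pairs from several agents.
    Treating confidence as inverse variance, the optimal fused estimate is
    the confidence-weighted mean, and confidences simply add, so the fused
    state is again a single (estimate, confidence) pair."""
    total_conf = sum(c for _, c in estimates)
    value = sum(v * c for v, c in estimates) / total_conf
    return value, total_conf

# Two equally confident agents average; a more confident agent dominates.
fused_value, fused_conf = fuse([(1.0, 1.0), (3.0, 1.0)])
```

The closure property, that a fusion of (value, confidence) pairs is itself such a pair, is what lets each agent carry only one parameter of memory, echoing the minimality claim in the abstract.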
The use of dwell time cross-correlation functions to study single-ion channel gating kinetics.
Ball, F G; Kerry, C J; Ramsey, R L; Sansom, M S; Usherwood, P N
1988-01-01
The derivation of cross-correlation functions from single-channel dwell (open and closed) times is described. Simulation of single-channel data for simple gating models, alongside theoretical treatment, is used to demonstrate the relationship of cross-correlation functions to underlying gating mechanisms. It is shown that time irreversibility of gating kinetics may be revealed in cross-correlation functions. Application of cross-correlation function analysis to data derived from the locust muscle glutamate receptor-channel provides evidence for multiple gateway states and time reversibility of gating. A model for the gating of this channel is used to show the effect of omission of brief channel events on cross-correlation functions. PMID:2462924
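A dwell-time cross-correlation at lag k pairs each open time with the k-th subsequent closed time (or vice versa); a nonzero correlation at some lag indicates multiple gateway states. A minimal normalized version, with the pairing convention an assumption of this sketch:

```python
import statistics

def cross_corr(xs, ys, lag):
    """Normalized cross-correlation between dwell-time sequences xs and ys
    at the given lag: corr(xs[i], ys[i + lag]). Returns a value in [-1, 1]."""
    n = min(len(xs), len(ys) - lag)
    x = xs[:n]
    y = ys[lag:lag + n]
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n * sx * sy)
```

For a channel with a single gateway state between open and closed aggregates, this correlation is flat at zero for all lags; structure in the lag profile is the fingerprint of multiple gateways, and asymmetry between open-to-closed and closed-to-open profiles signals time irreversibility.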
Torque-Summing Brushless Motor
NASA Technical Reports Server (NTRS)
Vaidya, J. G.
1986-01-01
Torque channels function cooperatively but are electrically independent for reliability. A brushless, electronically commutated dc motor sums electromagnetic torques on four channels and applies them to a single shaft. The motor operates with any combination of channels and continues if one or more channels fail electrically. The motor employs a single stator and rotor and is mechanically simple; however, each channel is electrically isolated from the others so that the failure of one does not adversely affect the others.
Resistive Plate Chambers for imaging calorimetry — The DHCAL
NASA Astrophysics Data System (ADS)
Repond, J.
2014-09-01
The DHCAL — the Digital Hadron Calorimeter — is a prototype calorimeter based on Resistive Plate Chambers (RPCs). The design emphasizes the imaging capabilities of the detector in an effort to optimize the calorimeter for the application of Particle Flow Algorithms (PFAs) to the reconstruction of hadronic jet energies in a colliding beam environment. The readout of the chambers is segmented into 1 × 1 cm² pads, each read out with a 1-bit (single threshold) resolution. The prototype with approximately 500,000 readout channels underwent extensive testing in both the Fermilab and CERN test beams. This talk presents preliminary findings from the analysis of data collected at the test beams.
High-accuracy user identification using EEG biometrics.
Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip
2016-08-01
We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of various different combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved to more than 96.7% by joint classification of multiple epochs.
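The two-stage pipeline described, dimensionality reduction followed by a classifier, can be sketched with PCA and a nearest-class-mean rule, one simple instance of the "various combinations" the study compares (the study's actual technique pairs are not specified here):

```python
import numpy as np

def pca_fit(X, n_comp):
    """Fit PCA via SVD of the centered data.
    Returns the mean and the top n_comp principal directions (rows)."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_comp]

def nearest_mean_classify(Z_train, y_train, z):
    """Assign z (a reduced epoch) to the class with the nearest centroid."""
    labels = sorted(set(y_train))
    y_arr = np.array(y_train)
    cents = {c: Z_train[y_arr == c].mean(axis=0) for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(z - cents[c]))
```

In the identification setting, each "class" is a subject, each sample a feature vector from one 800 ms ERP epoch; the reduction step is what makes the per-subject centroids estimable from few epochs.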
Development of the mathematical model for design and verification of acoustic modal analysis methods
NASA Astrophysics Data System (ADS)
Siner, Alexander; Startseva, Maria
2016-10-01
To reduce turbofan noise, it is necessary to develop methods for analyzing the sound field generated by the blade machinery, known as modal analysis. Because modal analysis methods are complex, and testing them against full-scale measurements is expensive and tedious, it is necessary to construct mathematical models that allow modal analysis algorithms to be tested quickly and cheaply. This work presents a model that allows single modes to be set in the channel and the generated sound field to be analyzed. Modal analysis of the sound generated by a ring array of point sound sources is performed. A comparison of experimental and numerical modal analysis results is also presented.
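Recovering a single azimuthal mode set on a ring of sources amounts to a spatial DFT over the equally spaced microphone angles. A minimal sketch (the duct's radial mode structure and propagation are omitted; only the azimuthal decomposition is shown):

```python
import numpy as np

def azimuthal_modes(pressures):
    """Decompose complex pressures measured at M equally spaced points on a
    ring into azimuthal mode amplitudes: a spatial DFT normalized by M."""
    m = len(pressures)
    return np.fft.fft(np.asarray(pressures)) / m

# Set a pure m = +3 spinning mode on a 16-microphone ring and recover it.
angles = 2 * np.pi * np.arange(16) / 16
p = np.exp(1j * 3 * angles)
amps = azimuthal_modes(p)
```

Because the microphones are equally spaced, modes above M/2 alias onto lower orders, which is exactly the kind of algorithm behavior such a mathematical model lets one probe cheaply before full-scale tests.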
Bierer, Julie Arenberg; Faulkner, Kathleen F
2010-04-01
The goal of this study was to evaluate the ability of a threshold measure, made with a restricted electrode configuration, to identify channels exhibiting relatively poor spatial selectivity. With a restricted electrode configuration, channel-to-channel variability in threshold may reflect variations in the interface between the electrodes and auditory neurons (i.e., nerve survival, electrode placement, and tissue impedance). These variations in the electrode-neuron interface should also be reflected in psychophysical tuning curve (PTC) measurements. Specifically, it is hypothesized that high single-channel thresholds obtained with the spatially focused partial tripolar (pTP) electrode configuration are predictive of wide or tip-shifted PTCs. Data were collected from five cochlear implant listeners implanted with the HiRes90k cochlear implant (Advanced Bionics Corp., Sylmar, CA). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the pTP configuration for which a fraction of current (sigma) from a center-active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. Forward-masked PTCs were obtained for channels with the highest, lowest, and median tripolar (sigma = 1 or 0.9) thresholds. The probe channel and level were fixed and presented with either the monopolar (sigma = 0) or a more focused pTP (sigma > or = 0.55) configuration. The masker channel and level were varied, whereas the configuration was fixed to sigma = 0.5. A standard, three-interval, two-alternative forced choice procedure was used for thresholds and masked levels. Single-channel threshold and variability in threshold across channels systematically increased as the compensating current, sigma, increased and the presumed electrical field became more focused. 
Across subjects, channels with the highest single-channel thresholds, when measured with a narrow, pTP stimulus, had significantly broader PTCs than the lowest threshold channels. In two subjects, the tips of the tuning curves were shifted away from the probe channel. Tuning curves were also wider for the monopolar probes than with pTP probes for both the highest and lowest threshold channels. These results suggest that single-channel thresholds measured with a restricted stimulus can be used to identify cochlear implant channels with poor spatial selectivity. Channels having wide or tip-shifted tuning characteristics would likely not deliver the appropriate spectral information to the intended auditory neurons, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception.
NASA Astrophysics Data System (ADS)
Lim, Kwon-Seob; Yu, Hong-Yeon; Park, Hyoung-Jun; Kang, Hyun Seo; Jang, Jae-Hyung
2016-06-01
Low-cost single-mode four-channel optical transmitter and receiver modules using the wavelength-division multiplexing (WDM) method have been developed for long-reach fiber-optic applications. The single-mode four-channel WDM optical transmitter and receiver modules consist of two dual-wavelength optical transmitter and receiver submodules, respectively. Integrating two channels in a glass-sealed transistor-outline-can package is an effective way to reduce cost and size and to extend the number of channels. Clear eye diagrams with extinction ratios above about 6 dB and a minimum receiver sensitivity below -16 dBm at a bit error rate of 10^-12 have been obtained for the transmitter and receiver modules, respectively, at 5 Gbps/channel. 4K ultra-high-definition content has been transmitted over a 1-km-long single-mode fiber using a pair of the proposed four-channel transmitter and receiver optical subassemblies.
Detection of single ion channel activity with carbon nanotubes
NASA Astrophysics Data System (ADS)
Zhou, Weiwei; Wang, Yung Yu; Lim, Tae-Sun; Pham, Ted; Jain, Dheeraj; Burke, Peter J.
2015-03-01
Many processes in life are based on ion currents and membrane voltages controlled by a sophisticated and diverse family of membrane proteins (ion channels), which are comparable in size to the most advanced nanoelectronic components currently under development. Here we demonstrate an electrical assay of individual ion channel activity by measuring the dynamic opening and closing of the ion channel nanopores using single-walled carbon nanotubes (SWNTs). Two canonical dynamic ion channels (gramicidin A (gA) and alamethicin) and one static biological nanopore (α-hemolysin (α-HL)) were successfully incorporated into supported lipid bilayers (SLBs, an artificial cell membrane), which in turn were interfaced to the carbon nanotubes through a variety of polymer-cushion surface functionalization schemes. The ion channel current directly charges the quantum capacitance of a single nanotube in a network of purified semiconducting nanotubes. This work forms the foundation for a scalable, massively parallel architecture of 1d nanoelectronic devices interrogating electrophysiology at the single ion channel level.
Determination of cloud liquid water content using the SSM/I
NASA Technical Reports Server (NTRS)
Alishouse, John C.; Snider, Jack B.; Westwater, Ed R.; Swift, Calvin T.; Ruf, Christopher S.
1990-01-01
As part of a calibration/validation effort for the special sensor microwave/imager (SSM/I), coincident observations of SSM/I brightness temperatures and surface-based observations of cloud liquid water were obtained. These observations were used to validate initial algorithms and to derive an improved algorithm. The initial algorithms were divided into latitudinal, seasonal, and surface-type zones. It was found that these initial algorithms, which were of the D-matrix type, did not yield sufficiently accurate results. Correlations between the surface-based measurements and the various channels were investigated; the 85V channel, however, was excluded because of excessive noise. No significant correlation was found between the SSM/I brightness temperatures and the surface-based cloud liquid water determination when the background surface is land or snow, whereas a high correlation was found between the brightness temperatures and ground-based measurements over the ocean.
Local anaesthetics transiently block currents through single acetylcholine-receptor channels.
Neher, E; Steinbach, J H
1978-01-01
1. Single channel currents through acetylcholine receptor channels (ACh channels) were recorded at chronically denervated frog muscle extrajunctional membranes in the absence and presence of the lidocaine derivatives QX-222 and QX-314. 2. The current wave forms due to the opening and closing of single ACh channels (activated by suberyldicholine) normally are square pulses. These single pulses appear to be chopped into bursts of much shorter pulses when the drug QX-222 is present in addition to the agonist. 3. The mean duration of the bursts is comparable to or longer than the normal channel open time, and increases with increasing drug concentration. 4. The duration of the short pulses within a burst decreases with increasing drug concentration. 5. It is concluded that drug molecules reversibly block open end-plate channels and that the flickering within a burst represents this fast, repeatedly occurring reaction. 6. The voltage dependence of the reaction rates involved suggests that the site of the blocking reaction is in the centre of the membrane, probably inside the ionic channel. PMID:306437
Nikolaev, Yury A; Dosen, Peter J; Laver, Derek R; van Helden, Dirk F; Hamill, Owen P
2015-05-22
The mammalian brain is a mechanosensitive organ that responds to different mechanical forces, ranging from intrinsic forces implicated in brain morphogenesis to extrinsic forces that can cause concussion and traumatic brain injury. However, little is known of the mechanosensors that transduce these forces. In this study we use cell-attached patch recording to measure single mechanically-gated (MG) channel currents and their effects on spike activity in identified neurons in neonatal mouse brain slices. We demonstrate that both neocortical and hippocampal pyramidal neurons express stretch-activated MG cation channels that are activated by suctions of ~25 mm Hg, have a single channel conductance for inward current of 50-70 pS and show weak selectivity for alkali metal cations (i.e., Na(+)
Single- and multi-channel underwater acoustic communication channel capacity: a computational study.
Hayward, Thomas J; Yang, T C
2007-09-01
Acoustic communication channel capacity determines the maximum data rate that can be supported by an acoustic channel for a given source power and source/receiver configuration. In this paper, broadband acoustic propagation modeling is applied to estimate the channel capacity for a time-invariant shallow-water waveguide for a single source-receiver pair and for vertical source and receiver arrays. Without bandwidth constraints, estimated single-input, single-output (SISO) capacities approach 10 megabits/s at 1 km range, but beyond 2 km range they decay at a rate consistent with previous estimates by Peloquin and Leinhos (unpublished, 1997), which were based on a sonar equation calculation. Channel capacities subject to source bandwidth constraints are approximately 30-90% lower than in the unconstrained case and exhibit a significant wind speed dependence. Channel capacity is investigated for single-input, multi-output (SIMO) and multi-input, multi-output (MIMO) systems, both for finite arrays and in the limit of a dense array spanning the entire water column. The limiting values of the SIMO and MIMO channel capacities for the modeled environment are found to be about four times higher and up to 200-400 times higher, respectively, than for the SISO case. Implications for underwater acoustic communication systems are discussed.
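The unconstrained SISO figures quoted above follow from the Shannon capacity formula C = B log2(1 + SNR). A minimal sketch of that calculation (the 5 kHz band and 20 dB SNR below are hypothetical illustration values, not numbers from this study):

```python
import math

def siso_capacity(bandwidth_hz, snr_linear):
    # Shannon capacity of an additive-white-Gaussian-noise channel
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# hypothetical numbers: a 5 kHz acoustic band at 20 dB SNR
snr = 10.0 ** (20.0 / 10.0)         # convert dB to a linear ratio
capacity = siso_capacity(5e3, snr)  # bits per second, ~33 kbit/s here
```

For SIMO and MIMO systems the same formula generalizes to a sum over the eigen-channels of the propagation matrix, which is what drives the array gains reported in the paper.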
Haque, Farzin; Lunn, Jennifer; Fang, Huaming; Smithrud, David; Guo, Peixuan
2012-01-01
A highly sensitive and reliable method to sense and identify a single chemical at extremely low concentrations and under high contamination is important for environmental surveillance, homeland security, athlete drug monitoring, toxin/drug screening, and earlier disease diagnosis. This manuscript reports a method for precise detection of single chemicals. The hub of the bacteriophage phi29 DNA packaging motor is a connector consisting of twelve protein subunits encircling a 3.6-nm channel that serves as a path for dsDNA to enter during packaging and to exit during infection. The connector has previously been inserted into a lipid bilayer to serve as a membrane-embedded channel. Herein we report the modification of the phi29 channel to develop a class of sensors to detect single chemicals. The Lysine-234 of each protein subunit was mutated to cysteine, generating a 12-SH ring lining the channel wall. Chemicals passing through this robust channel and interacting with the SH group generated extremely reliable, precise, and sensitive current signatures, as revealed by single channel conductance assays. Ethane (57 Daltons), thymine (167 Daltons), and benzene (105 Daltons) with reactive thioester moieties were clearly discriminated upon interaction with the available set of cysteine residues. The covalent attachment of each analyte induced a discrete step-wise blockage in the current signature with a corresponding decrease in conductance due to the physical blocking of the channel. Transient binding of the chemicals also produced characteristic fingerprints that were deduced from the unique blockage amplitude and pattern of the signals. This study shows that the phi29 connector can be used to sense chemicals with reactive thioester or maleimide groups using single channel conduction assays based on their distinct fingerprints. The results demonstrated that this channel system could be further developed into very sensitive sensing devices. PMID:22458779
Characteristics of single Ca(2+) channel kinetics in feline hypertrophied ventricular myocytes.
Yang, Xiangjun; Hui, Jie; Jiang, Tingbo; Song, Jianping; Liu, Zhihua; Jiang, Wenping
2002-04-01
To explore the mechanism underlying the prolongation of the action potential and the delayed inactivation of the L-type Ca(2+) current (I(Ca, L)) in a feline model of left ventricular systolic hypertension and concomitant hypertrophy, single Ca(2+) channel properties in myocytes isolated from normal and pressure-overloaded cat left ventricles were studied using patch-clamp techniques. Left ventricular pressure overload was induced by partial ligation of the ascending aorta for 4 - 6 weeks. The amplitude of the single Ca(2+) channel current evoked by depolarizing pulses from -40 mV to 0 mV was 1.02 +/- 0.03 pA in normal cells and 1.05 +/- 0.03 pA in hypertrophied cells, and there was no difference in single channel current-voltage relationships between the groups: the slope conductance was 26.2 +/- 1.0 pS in both normal and hypertrophied cells. Peak amplitudes of the ensemble-averaged single Ca(2+) channel currents were not different between the two groups of cells. However, the amplitude of this averaged current at the end of the clamp pulse was significantly larger in hypertrophied cells than in normal cells. Open-time histograms revealed that the open-time distribution was fitted by a single exponential function in channels of normal cells and by a sum of two exponential functions in channels of hypertrophied cells. The number of long-lasting openings was increased in channels of hypertrophied cells, and the calculated mean open time of the channel was therefore significantly longer than in normal controls. Kinetic changes in the Ca(2+) channel may underlie both the hypertrophy-associated delayed inactivation of the Ca(2+) current and, in part, the pressure-overload-induced action potential lengthening in this cat model of left ventricular systolic hypertension and hypertrophy.
Haque, Farzin; Lunn, Jennifer; Fang, Huaming; Smithrud, David; Guo, Peixuan
2012-04-24
A highly sensitive and reliable method to sense and identify a single chemical at extremely low concentrations and under high contamination is important for environmental surveillance, homeland security, athlete drug monitoring, toxin/drug screening, and earlier disease diagnosis. This article reports a method for precise detection of single chemicals. The hub of the bacteriophage phi29 DNA packaging motor is a connector consisting of 12 protein subunits encircling a 3.6 nm channel that serves as a path for dsDNA to enter during packaging and to exit during infection. The connector has previously been inserted into a lipid bilayer to serve as a membrane-embedded channel. Herein we report the modification of the phi29 channel to develop a class of sensors to detect single chemicals. The lysine-234 of each protein subunit was mutated to cysteine, generating a 12-SH ring lining the channel wall. Chemicals passing through this robust channel and interacting with the SH group generated extremely reliable, precise, and sensitive current signatures, as revealed by single channel conductance assays. Ethane (57 Da), thymine (167 Da), and benzene (105 Da) with reactive thioester moieties were clearly discriminated upon interaction with the available set of cysteine residues. The covalent attachment of each analyte induced a discrete stepwise blockage in the current signature with a corresponding decrease in conductance due to the physical blocking of the channel. Transient binding of the chemicals also produced characteristic fingerprints that were deduced from the unique blockage amplitude and pattern of the signals. This study shows that the phi29 connector can be used to sense chemicals with reactive thioester or maleimide groups using single channel conduction assays based on their distinct fingerprints. The results demonstrated that this channel system could be further developed into very sensitive sensing devices.
Multicore and GPU algorithms for Nussinov RNA folding
2014-01-01
Background One segment of an RNA sequence might be paired with another segment of the same sequence due to the force of hydrogen bonds. This two-dimensional structure is called the RNA sequence's secondary structure. Several algorithms, referred to as RNA folding algorithms, have been proposed to predict an RNA sequence's secondary structure. Results We develop cache-efficient, multicore, and GPU algorithms for RNA folding using Nussinov's algorithm. Conclusions Our cache-efficient algorithm provides a speedup between 1.6 and 3.0 relative to a naive single-core code. The multicore version of the cache-efficient single-core algorithm provides a speedup, relative to the naive single-core algorithm, between 7.5 and 14.0 on a 6-core hyperthreaded CPU. Our GPU algorithm for the NVIDIA C2050 is up to 1582 times as fast as the naive single-core algorithm and between 5.1 and 11.2 times as fast as the fastest previously known GPU algorithm for Nussinov RNA folding. PMID:25082539
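Nussinov's algorithm, the kernel being parallelized here, is itself a short O(n^3) dynamic program that maximizes the number of complementary base pairs. A naive single-core sketch (illustrative only; no minimum hairpin-loop length is enforced, and this is not the paper's cache-efficient version):

```python
def nussinov(seq):
    # Nussinov dynamic program: dp[i][j] = max number of base pairs in seq[i..j]
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                # case 1: j left unpaired
            for k in range(i, j):              # case 2: j paired with some k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]
```

For example, `nussinov("GGGAAACCC")` returns 3, the three nested G-C pairs of a simple hairpin; the anti-diagonal fill order of `dp` is what the multicore and GPU versions exploit for parallelism.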
Monitoring Single-channel Water Permeability in Polarized Cells*
Erokhova, Liudmila; Horner, Andreas; Kügler, Philipp; Pohl, Peter
2011-01-01
So far, the determination of the unitary permeability (pf) of water channels expressed in polarized cells has been subject to large errors because the opening of a single water channel does not noticeably increase the water permeability of a membrane patch above the background. That is, in contrast to the patch clamp technique, where the single ion channel conductance may be derived from a single experiment, two experiments separated in time and/or space are required to obtain the single-channel water permeability pf as a function of the incremental water permeability (Pf,c) and the number (n) of water channels that contributed to Pf,c. Although the unitary conductance of ion channels is measured in the native environment of the channel, pf has so far been derived from reconstituted channels or channels expressed in oocytes. To determine the pf of channels from live epithelial monolayers, we exploit the fact that osmotic volume flow alters the concentration of aqueous reporter dyes adjacent to the epithelia. We measure these changes by fluorescence correlation spectroscopy (FCS), which allows the calculation of both Pf,c and the osmolyte dilution within the unstirred layer. Shifting the focus of the laser from the aqueous solution to the apical and basolateral membranes allowed the FCS-based determination of n. Here we validate the new technique by determining the pf of aquaporin 5 in Madin-Darby canine kidney cell monolayers. Because inhibition and subsequent activity rescue are monitored on the same sample, drug effects on exocytosis or endocytosis can be dissected from those on pf. PMID:21940624
Determinations of cloud liquid water in the tropics from the SSM/I
NASA Technical Reports Server (NTRS)
Alishouse, John C.; Swift, Calvin; Ruf, Christopher; Snyder, Sheila; Vongsathorn, Jennifer
1989-01-01
Upward-looking microwave radiometric observations were used to validate the SSM/I determinations and as a basis for deriving new coefficients. Because the initial four-channel algorithm for cloud liquid water proved insufficient, an improved algorithm was derived by standard linear regression from the CORRAD (University of Massachusetts autocorrelation radiometer) measurements of cloud liquid water and the matching SSM/I brightness temperatures. The correlation coefficients for the possible four-channel combinations were computed, and the best and worst combinations were determined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, M V; Garanin, S G; Dolgopolov, Yu V
2014-11-30
A seven-channel fibre laser system operated by the master oscillator – multichannel power amplifier scheme is phase locked using a stochastic parallel gradient algorithm. The phase modulators on lithium niobate crystals are controlled by a multichannel electronic unit with a microcontroller processing signals in real time. Dynamic phase locking of the laser system with a bandwidth of 14 kHz is demonstrated; the time of phasing is 3 – 4 ms. (fibre and integrated-optical structures)
Design of a clinical notification system.
Wagner, M M; Tsui, F C; Pike, J; Pike, L
1999-01-01
We describe the requirements and design of an enterprise-wide notification system. From published descriptions of notification schemes, our own experience, and use cases provided by diverse users in our institution, we developed a set of functional requirements. The resulting design supports multiple communication channels, third party mappings (algorithms) from message to recipient and/or channel of delivery, and escalation algorithms. A requirement for multiple message formats is addressed by a document specification. We implemented this system in Java as a CORBA object. This paper describes the design and current implementation of our notification system.
Development of a Crosslink Channel Simulator
NASA Technical Reports Server (NTRS)
Hunt, Chris; Smith, Carl; Burns, Rich
2004-01-01
Distributed spacecraft missions are an integral part of current and future plans for NASA and other space agencies. Many of these multi-vehicle missions involve utilizing the array of spacecraft as a single instrument, requiring communication via crosslinks to achieve mission goals. NASA's Goddard Space Flight Center (GSFC) is developing the Formation Flying Test Bed (FFTB) to provide a hardware-in-the-loop simulation environment to support mission concept development and system trades, with a primary focus on Guidance, Navigation, and Control (GN&C) challenges associated with spacecraft formation flying. The goal of the FFTB is to reduce mission risk by assisting in mission planning and analysis and to provide a technology development platform on which algorithms can be developed for mission functions such as precision formation navigation and control and time synchronization. The FFTB will provide a medium in which the various crosslink transponders being used in multi-vehicle missions can be integrated for development and test; an integral part of the FFTB is the Crosslink Channel Simulator (CCS). The CCS is placed into the communications channel between the crosslinks under test and is used to simulate on-mission effects on the communications channel, such as vehicle maneuvers, relative vehicle motion, or antenna misalignment. The CCS is based on the Starlight software-programmable platform developed at General Dynamics Decision Systems, which provides the CCS with the ability to be modified on the fly to adapt to new crosslink formats or mission parameters. This paper briefly describes the Formation Flying Test Bed and its potential uses. It then provides details on the current and future development of the Crosslink Channel Simulator and its capabilities.
Progressive video coding for noisy channels
NASA Astrophysics Data System (ADS)
Kim, Beong-Jo; Xiong, Zixiang; Pearlman, William A.
1998-10-01
We extend the work of Sherwood and Zeger to progressive video coding for noisy channels. By utilizing a 3D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, we cascade the resulting 3D SPIHT video coder with a rate-compatible punctured convolutional channel coder for transmission of video over a binary symmetric channel. Progressive coding is achieved by increasing the target rate of the 3D embedded SPIHT video coder as the channel condition improves. The performance of the proposed coding system remains acceptable at low transmission rates and under poor channel conditions, and its low complexity makes it suitable for emerging applications such as video over wireless channels.
Multi channel thermal hydraulic analysis of gas cooled fast reactor using genetic algorithm
NASA Astrophysics Data System (ADS)
Drajat, R. Z.; Su'ud, Z.; Soewono, E.; Gunawan, A. Y.
2012-05-01
Three analyses are required in the design process of a nuclear reactor: neutronic analysis, thermal hydraulic analysis, and thermodynamic analysis. The focus of this article is the thermal hydraulic analysis, which plays a very important role in system efficiency and in the selection of the optimal design. This analysis is performed for a Gas Cooled Fast Reactor (GFR) with helium (He) coolant. The heat from nuclear fission reactions in the reactor is distributed by conduction through the fuel elements and then delivered by heat convection to the fluid flowing in the cooling channels. Temperature changes in the coolant channels cause a pressure drop at the top of the reactor core. The governing equations in each channel consist of the mass balance, momentum balance, energy balance, mass conservation, and ideal gas equations. The problem is reduced to finding flow rates in each channel such that the pressure drops at the top of the reactor core are all equal, and it is solved numerically with a genetic algorithm. Flow rates and the temperature distribution in each channel are obtained.
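The equal-pressure-drop condition can be illustrated with a toy version of this search. The sketch below evolves flow fractions for three channels under a quadratic friction law dp_i = k_i * m_i^2 with a simple genetic algorithm; the friction coefficients, population size, and mutation scale are invented for illustration and are not the paper's reactor model:

```python
import random

random.seed(1)

K = [1.0, 2.0, 4.0]   # hypothetical channel friction coefficients
TOTAL = 1.0           # fixed total coolant mass flow (arbitrary units)

def pressure_drops(flows):
    # toy quadratic friction model: dp_i = k_i * m_i**2
    return [k * m * m for k, m in zip(K, flows)]

def spread(flows):
    # fitness to minimize: squared spread of the channel pressure drops
    dp = pressure_drops(flows)
    mean = sum(dp) / len(dp)
    return sum((d - mean) ** 2 for d in dp)

def normalize(v):
    # keep flows positive and enforce mass conservation (sum = TOTAL)
    v = [max(x, 1e-6) for x in v]
    s = sum(v)
    return [x * TOTAL / s for x in v]

pop = [normalize([random.random() for _ in K]) for _ in range(40)]
for _ in range(300):
    pop.sort(key=spread)
    children = pop[:4]                      # elitism: keep the best few
    while len(children) < 40:
        a, b = random.sample(pop[:20], 2)   # parents from the better half
        child = [(x + y) / 2 + random.gauss(0, 0.02) for x, y in zip(a, b)]
        children.append(normalize(child))
    pop = children

best = min(pop, key=spread)
dp = pressure_drops(best)
```

For this friction law the analytic optimum has m_i proportional to 1/sqrt(k_i), so the hottest (highest-friction) channel receives the least flow; the evolved flows approach that distribution.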
Multiple-access relaying with network coding: iterative network/channel decoding with imperfect CSI
NASA Astrophysics Data System (ADS)
Vu, Xuan-Thang; Renzo, Marco Di; Duhamel, Pierre
2013-12-01
In this paper, we study the performance of the four-node multiple-access relay channel with binary Network Coding (NC) in various Rayleigh fading scenarios. In particular, two relay protocols, decode-and-forward (DF) and demodulate-and-forward (DMF), are considered. In the first case, channel decoding is performed at the relay before NC and forwarding. In the second case, only demodulation is performed at the relay. The contributions of the paper are as follows: (1) two joint network/channel decoding (JNCD) algorithms, which take into account possible decoding errors at the relay, are developed for both DF and DMF relay protocols; (2) both perfect channel state information (CSI) and imperfect CSI at the receivers are studied, and we propose a practical method to forward the relay's error characterization to the destination (quantization of the BER), which results in a fully practical scheme; (3) we show by simulation that the number of pilot symbols only affects the coding gain and not the diversity order, whereas quantization accuracy affects both. Moreover, when compared with recent results using the DMF protocol, our proposed DF protocol algorithm shows an improvement of 4 dB in fully interleaved Rayleigh fading channels and 0.7 dB in block Rayleigh fading channels.
The analysis of polar clouds from AVHRR satellite data using pattern recognition techniques
NASA Technical Reports Server (NTRS)
Smith, William L.; Ebert, Elizabeth
1990-01-01
The cloud cover in a set of summertime and wintertime AVHRR data from the Arctic and Antarctic regions was analyzed using a pattern recognition algorithm. The data were collected by the NOAA-7 satellite on 6 to 13 Jan. and 1 to 7 Jul. 1984 between 60 deg and 90 deg north and south latitude in 5 spectral channels, at the Global Area Coverage (GAC) resolution of approximately 4 km. These data constituted the Polar Cloud Pilot Data Set, which was analyzed by a number of research groups as part of a polar cloud algorithm intercomparison study. This study was intended to determine whether the additional information contained in the AVHRR channels (beyond the standard visible and infrared bands on geostationary satellites) could be effectively utilized in cloud algorithms to resolve some of the cloud detection problems caused by low visible and thermal contrasts in the polar regions. The analysis described makes use of a pattern recognition algorithm which estimates the surface and cloud classification, cloud fraction, and surface and cloudy visible (channel 1) albedo and infrared (channel 4) brightness temperatures on a 2.5 x 2.5 deg latitude-longitude grid. In each grid box several spectral and textural features were computed from the calibrated pixel values in the multispectral imagery, then used to classify the region into one of eighteen surface and/or cloud types using the maximum likelihood decision rule. A slightly different version of the algorithm was used for each season and hemisphere because of differences in categories and because of the lack of visible imagery during winter. The classification of the scene is used to specify the optimal AVHRR channel for separating clear and cloudy pixels using a hybrid histogram-spatial coherence method. This method estimates values for cloud fraction, clear and cloudy albedos, and brightness temperatures in each grid box.
The choice of a class-dependent AVHRR channel allows for better separation of clear and cloudy pixels than does a global choice of a visible and/or infrared threshold. The classification also prevents erroneous estimates of large fractional cloudiness in areas of cloud-free snow and sea ice. The hybrid histogram-spatial coherence technique and the advantages of first classifying a scene in the polar regions are detailed. The complete Polar Cloud Pilot Data Set was analyzed, and the results are presented and discussed.
NASA Astrophysics Data System (ADS)
Pioldi, Fabio; Ferrari, Rosalba; Rizzi, Egidio
2016-02-01
The present paper deals with the seismic modal dynamic identification of frame structures by a refined Frequency Domain Decomposition (rFDD) algorithm, autonomously formulated and implemented within MATLAB. First, the output-only identification technique is outlined analytically and then employed to characterize all modal properties. Synthetic response signals generated prior to the dynamic identification are adopted as input channels, in view of assessing a necessary condition for the procedure's efficiency. Initially, the algorithm is verified on canonical input from random excitation. Then, modal identification has been attempted successfully for given seismic input, taken as base excitation, including both strong motion data and single and multiple input ground motions. Rather than following earlier attempts that investigated the role of seismic response signals in the Time Domain, this paper considers the identification analysis in the Frequency Domain. Results turn out to be very consistent with the target values, with quite limited errors in the modal estimates, including the damping ratios, which range from values on the order of 1% to 10%. Both seismic excitation and high damping values, which prove critical even for well-spaced modes, do not fulfill traditional FDD assumptions; this demonstrates the consistency of the developed algorithm. Through original strategies and arrangements, the paper shows that a comprehensive rFDD modal dynamic identification of frames under seismic input is feasible, even with concomitant high damping.
Reality Check Algorithm for Complex Sources in Early Warning
NASA Astrophysics Data System (ADS)
Karakus, G.; Heaton, T. H.
2013-12-01
In almost all currently operating earthquake early warning (EEW) systems, presently available seismic data are used to predict future shaking; in most cases, location and magnitude are estimated. We are developing an algorithm to test the goodness of that prediction in real time. We monitor envelopes of acceleration, velocity, and displacement; if they deviate significantly from the envelopes predicted by Cua's envelope ground motion prediction equations, then we declare an overfit (perhaps a false alarm) or an underfit (possibly a larger event has just occurred). The algorithm is designed to provide a robust measure and to work as quickly as possible in real time. We monitor the logarithm of the ratio between the envelopes of the ongoing observed event and the envelopes of the channels of ground motion predicted by the Virtual Seismologist (VS) (Cua, G. and Heaton, T.). We then recursively filter this result with a simple running median (a de-spiking operator) to minimize the effect of any single high value. Depending on the filtered value, we make a decision: if the value is large enough (e.g., >1), we declare that a larger event is in progress; similarly, if the value is small enough (e.g., <-1), we declare a false alarm. The algorithm is designed to work over a wide range of amplitude scales; that is, it should work for both small and large events.
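The envelope-ratio test can be sketched compactly. In the sketch below, the window length and the ±1 decision thresholds are illustrative choices patterned on the example values in the text; the real system would supply Cua's predicted envelopes as the `predicted_env` argument:

```python
import math
from collections import deque
from statistics import median

class RealityCheck:
    # running log-ratio of observed vs. predicted envelopes,
    # de-spiked with a running median before thresholding
    def __init__(self, window=9, hi=1.0, lo=-1.0):
        self.buf = deque(maxlen=window)
        self.hi, self.lo = hi, lo

    def update(self, observed_env, predicted_env):
        self.buf.append(math.log10(observed_env / predicted_env))
        r = median(self.buf)  # robust to a single spiky sample
        if r > self.hi:
            return "underfit: a larger event may be in progress"
        if r < self.lo:
            return "overfit: possible false alarm"
        return "prediction consistent"
```

Because the median needs a majority of the window's samples to move, one corrupted envelope sample cannot trigger a declaration on its own, which is the point of the de-spiking operator.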
Least squares restoration of multi-channel images
NASA Technical Reports Server (NTRS)
Chin, Roland T.; Galatsanos, Nikolas P.
1989-01-01
In this paper, a least squares filter for the restoration of multichannel imagery is presented. The restoration filter is based on a linear, space-invariant imaging model and makes use of an iterative matrix inversion algorithm. The restoration utilizes both within-channel (spatial) and cross-channel information as constraints. Experiments using color images (three-channel imagery with red, green, and blue components) were performed to evaluate the filter's performance and to compare it with other monochrome and multichannel filters.
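The iterative matrix-inversion idea can be illustrated with a Landweber-type gradient iteration for the least-squares solution minimizing ||y - Hx||^2. This is a generic single-channel 1D sketch (the 3x3 system, step size, and iteration count are invented); the paper's multichannel filter additionally stacks within-channel and cross-channel terms into H:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def landweber(H, y, iters=500, tau=0.4):
    # iterative least squares: x <- x + tau * H^T (y - H x)
    Ht = transpose(H)
    x = [0.0] * len(H[0])
    for _ in range(iters):
        residual = [yi - hxi for yi, hxi in zip(y, matvec(H, x))]
        gradient = matvec(Ht, residual)
        x = [xi + tau * gi for xi, gi in zip(x, gradient)]
    return x

# hypothetical 3-pixel "blur": recover x_true = [1, 2, 3] from y = H @ x_true
H = [[1.0, 0.3, 0.0],
     [0.3, 1.0, 0.3],
     [0.0, 0.3, 1.0]]
y = [1.6, 3.2, 3.6]
x = landweber(H, y)
```

Here x converges to the original [1, 2, 3] because this small H is well conditioned; for realistic ill-conditioned blurs a regularization (constraint) term is added, as in the paper's constrained least squares filter.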
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High-rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high-rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth-efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Two concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output, and an errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
Spectrum Access In Cognitive Radio Using a Two-Stage Reinforcement Learning Approach
NASA Astrophysics Data System (ADS)
Raj, Vishnu; Dias, Irene; Tholeti, Thulasi; Kalyani, Sheetal
2018-02-01
With the advent of the 5th generation of wireless standards and an increasing demand for higher throughput, methods to improve the spectral efficiency of wireless systems have become very important. In the context of cognitive radio, a substantial increase in throughput is possible if the secondary user can make smart decisions regarding which channel to sense and when or how often to sense. Here, we propose an algorithm not only to select a channel for data transmission but also to predict how long the channel will remain unoccupied, so that the time spent on channel sensing can be minimized. Our algorithm learns in two stages - a reinforcement learning approach for channel selection and a Bayesian approach to determine the optimal duration for which sensing can be skipped. Comparisons with other learning methods are provided through extensive simulations. We show that the number of sensing operations is minimized with a negligible increase in primary interference; this implies that the secondary user spends less energy on sensing and also achieves higher throughput by saving on sensing time.
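The first stage of such a scheme can be illustrated with a standard multi-armed bandit. The UCB1 sketch below learns which of several channels is most often idle; the idle probabilities are invented, and the paper's second (Bayesian) stage, predicting how long sensing can be skipped, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ucb1_select(counts, rewards, t):
    """Pick the channel maximizing empirical mean idle rate + exploration bonus."""
    means = rewards / np.maximum(counts, 1)
    bonus = np.sqrt(2 * np.log(t + 1) / np.maximum(counts, 1))
    ucb = means + bonus
    ucb[counts == 0] = np.inf          # sense every channel at least once
    return int(np.argmax(ucb))

p_idle = np.array([0.2, 0.8, 0.5])     # hypothetical per-channel idle probabilities
counts = np.zeros(3)
rewards = np.zeros(3)
for t in range(5000):
    ch = ucb1_select(counts, rewards, t)
    idle = rng.random() < p_idle[ch]   # outcome of sensing the chosen channel
    counts[ch] += 1
    rewards[ch] += idle

best = int(np.argmax(counts))
print(best)                            # the bandit concentrates on the most-idle channel
```

After a few hundred rounds the secondary user senses the mostly-idle channel almost exclusively, which is the behavior the two-stage scheme then exploits to skip sensing altogether for a predicted duration.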
Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification.
Wang, Yubo; Veluvolu, Kalyana C
2017-01-01
Most BCI systems that rely on EEG signals employ Fourier-based methods for time-frequency decomposition and feature extraction. The band-limited multiple Fourier linear combiner is well-suited for such band-limited signals due to its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multi-channel EEG is not straightforward and remains challenging. As more channels become available, a spatial filter is required to suppress noise and preserve the useful information. Moreover, multi-channel EEG adds high dimensionality to the frequency feature space, so feature selection is required to stabilize the performance of the classifier. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes it with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with the covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.
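The joint encoding idea can be sketched with a much simpler optimizer than CMA-ES. Below, a tiny (1+1) evolution strategy evolves one real-valued genome holding both a spatial-filter weight vector and soft feature-selection gates, scored by classification error on synthetic two-class "EEG" data; everything here (data, dimensions, the ES itself) is an illustrative assumption, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

n_ch, n_feat, n_trials = 4, 6, 200
X = rng.normal(size=(n_trials, n_ch, n_feat))   # trials x channels x features
y = rng.integers(0, 2, n_trials)
X[y == 1, 0, 0] += 3.0                          # only channel 0, feature 0 is informative

def error(genome):
    """Spatially filter, gate features, threshold the summed score."""
    w, gates = genome[:n_ch], genome[n_ch:]
    feats = (X * w[None, :, None]).sum(axis=1) * gates
    score = feats.sum(axis=1)
    pred = (score > score.mean()).astype(int)
    e = float(np.mean(pred != y))
    return min(e, 1.0 - e)                      # label polarity is irrelevant

genome = rng.normal(size=n_ch + n_feat)
err0 = error(genome)
best_err = err0
for _ in range(2000):                           # (1+1)-ES with Gaussian mutation
    child = genome + 0.3 * rng.normal(size=genome.size)
    e = error(child)
    if e <= best_err:
        genome, best_err = child, e
print(err0, best_err)
```

A smoother fitness (e.g., a Fisher criterion) would converge faster than raw 0/1 error, which is one reason full CMA-ES is the stronger choice in the paper's setting.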
Melnyk, Mariia I; Dryn, Dariia O; Al Kury, Lina T; Zholos, Alexander V; Soloviev, Anatoly I
2018-04-19
The effects of quercetin-loaded liposomes (PCL-Q) and their constituents, that is, free quercetin (Q) and 'empty' phosphatidylcholine vesicles (PCL), on Maxi-K channel activity were studied in single mouse ileal myocytes before and after H2O2-induced oxidative stress. Macroscopic Maxi-K channel currents were recorded using whole-cell patch clamp techniques, while single BKCa channel currents were recorded in the cell-attached configuration. Bath application of PCL-Q (100 μg/ml of lipid and 3 μg/ml of quercetin) increased single Maxi-K channel activity more than threefold, from 0.010 ± 0.003 to 0.034 ± 0.004 (n = 5; p < 0.05), whereas single-channel conductance increased non-significantly from 138 to 146 pS. In the presence of PCL-Q, multiple simultaneous channel openings were observed, with up to eight active channels in the membrane patch. Surprisingly, 'empty' PCL (100 μg/ml) also produced some channel activation, although it was less potent than PCL-Q: it increased NPo from 0.010 ± 0.003 to 0.019 ± 0.003 (n = 5; p < 0.05) and did not affect single-channel conductance (139 pS). Application of PCL-Q restored macroscopic Maxi-K currents suppressed by H2O2-induced oxidative stress in ileal smooth muscle cells. We conclude that PCL-Q can activate Maxi-K channels in ileal myocytes mainly by increasing channel open probability, as well as maintain the Maxi-K-mediated whole-cell current under conditions of oxidative stress. While fusion of the 'pure' liposomes with the plasma membrane may indirectly activate Maxi-K channels by altering the channel's phospholipid environment, the additional potentiating action of quercetin may be due to its better bioavailability.
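The NPo values reported above are conventionally computed from an idealized single-channel record as the time-weighted mean open level. The sketch below uses a synthetic trace, not the paper's patch-clamp data.

```python
import numpy as np

def npo(trace, unitary=1.0):
    """Idealize the current to integer open levels; NPo = sum_k k * P(k open)."""
    levels = np.round(trace / unitary).astype(int)
    return float(np.mean(levels))

# Synthetic record: mostly closed, with single and double openings.
trace = np.zeros(1000)
trace[100:150] = 1.0    # one channel open for 50 samples
trace[400:420] = 2.0    # two channels open simultaneously for 20 samples
print(npo(trace))       # (50*1 + 20*2) / 1000 = 0.09
```

Real analysis would first baseline-correct and threshold the raw current; the point here is only that NPo rises both when openings become more frequent and when more channels open at once, as in the multiple simultaneous openings described above.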
Enabling vendor independent photoacoustic imaging systems with asynchronous laser source
NASA Astrophysics Data System (ADS)
Wu, Yixuan; Zhang, Haichong K.; Boctor, Emad M.
2018-02-01
Channel data acquisition, and synchronization between laser excitation and PA signal acquisition, are two fundamental hardware requirements for photoacoustic (PA) imaging. Unfortunately, most clinical ultrasound scanners provide neither. Specialized, less economical research platforms are therefore generally used, which hinders a smooth clinical translation of PA imaging. In previous studies, we have proposed an algorithm to achieve PA imaging using ultrasound post-beamformed (USPB) RF data instead of channel data. This work focuses on enabling clinical ultrasound scanners to implement PA imaging without requiring synchronization between the laser excitation and PA signal acquisition. Laser synchronization inherently consists of two aspects: frequency and phase information. We achieve synchronization without any communication between the laser and the ultrasound scanner by analyzing USPB images of a point-target phantom in two steps. First, the frequency information is estimated by solving a nonlinear optimization problem, under the assumption that the segmented wave-front can only be beamformed into a single spot when synchronization is achieved. Second, after making the frequencies of the two systems identical, the phase delay is estimated by optimizing the image quality while varying the phase value. The proposed method is validated through simulation by manually adding both frequency and phase errors, then applying the proposed algorithm to correct the errors and reconstruct PA images. Compared with the ground truth, simulation results indicate that the remaining errors in frequency correction and phase correction are 0.28% and 2.34%, respectively, which affirms the potential of overcoming hardware barriers to PA imaging through a software solution.
Performance analysis of cross-layer design with average PER constraint over MIMO fading channels
NASA Astrophysics Data System (ADS)
Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin
2015-12-01
In this article, a cross-layer design (CLD) scheme for a multiple-input multiple-output (MIMO) system with the dual constraints of imperfect feedback and average packet error rate (PER) is presented, based on the combination of adaptive modulation and automatic repeat request protocols. The design performance is evaluated over a wireless Rayleigh fading channel. Under the constraints of target PER and average PER, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are developed. An effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. Given these thresholds, analytical expressions for the average SE and PER are provided for performance evaluation. To avoid the performance loss caused by a conventional single estimate, a multiple outdated estimates (MOE) method, which utilises multiple previous channel estimates, is introduced into the CLD to improve system performance. It is shown that numerical simulations for the average PER and SE are consistent with the theoretical analysis, and that the developed CLD with an average PER constraint can meet the target PER requirement and outperforms the conventional CLD with an instantaneous PER constraint. In particular, the CLD based on the MOE method markedly increases the system SE and greatly reduces the impact of feedback delay.
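The usual starting point for switching-threshold design can be shown in closed form: with the common approximation PER_n(gamma) ~ a_n * exp(-g_n * gamma), the instantaneous-PER constraint gives the threshold for each mode directly. The paper's contribution, iterating on such thresholds via Lagrange multipliers to meet an average PER constraint, builds on this step; the (a_n, g_n) values below are illustrative, not the paper's.

```python
import numpy as np

def switching_thresholds(a, g, per_target):
    """Smallest SNR gamma_n at which mode n still meets the target PER:
    solve a_n * exp(-g_n * gamma_n) = per_target for gamma_n."""
    return np.log(a / per_target) / g

a = np.array([90.25, 67.61, 53.39])   # hypothetical PER fit parameters per mode
g = np.array([3.49, 0.70, 0.16])
th = switching_thresholds(a, g, per_target=1e-2)
print(th)   # linear-scale SNR thresholds, one per modulation mode

# Sanity check: each threshold reproduces the target PER exactly.
print(np.allclose(a * np.exp(-g * th), 1e-2))   # True
```

Relaxing the per-packet constraint to an average one lets the thresholds drop below these values, trading occasional packet errors for higher average spectral efficiency, which is what the iterative Lagrange search in the paper exploits.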
Variable Scheduling to Mitigate Channel Losses in Energy-Efficient Body Area Networks
Tselishchev, Yuriy; Boulis, Athanassios; Libman, Lavy
2012-01-01
We consider a typical body area network (BAN) setting in which sensor nodes send data to a common hub regularly on a TDMA basis, as defined by the emerging IEEE 802.15.6 BAN standard. To reduce transmission losses caused by the highly dynamic nature of the wireless channel around the human body, we explore variable TDMA scheduling techniques that allow the order of transmissions within each TDMA round to be decided on the fly, rather than being fixed in advance. Using a simple Markov model of the wireless links, we devise a number of scheduling algorithms that can be performed by the hub, which aim to maximize the expected number of successful transmissions in a TDMA round, and thereby significantly reduce transmission losses as compared with a static TDMA schedule. Importantly, these algorithms do not require a priori knowledge of the statistical properties of the wireless channels, and the reliability improvement is achieved entirely via shuffling the order of transmissions among devices, and does not involve any additional energy consumption (e.g., retransmissions). We evaluate these algorithms directly on an experimental set of traces obtained from devices strapped to human subjects performing regular daily activities, and confirm that the benefits of the proposed variable scheduling algorithms extend to this practical setup as well. PMID:23202183
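The core intuition, reordering so that links currently in a bad state get extra slots to recover before their turn, can be checked with a toy two-state (Gilbert-Elliott) link simulation. This is a sketch of the idea only, not the paper's scheduling algorithms, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

p_gb, p_bg = 0.1, 0.3            # good->bad / bad->good per-slot transition probs
n_nodes, n_rounds = 4, 2000

def run(variable_schedule):
    state = np.ones(n_nodes, bool)   # link states, carried across rounds
    ok = 0
    for _ in range(n_rounds):
        # hub knows each link's state at the start of the round (e.g., from ACKs)
        order = (sorted(range(n_nodes), key=lambda n: not state[n])
                 if variable_schedule else list(range(n_nodes)))
        for i in order:
            flip = rng.random(n_nodes)                 # one slot elapses for all links
            state = np.where(state, flip > p_gb, flip < p_bg)
            ok += int(state[i])                        # success iff link good when polled
    return ok / (n_rounds * n_nodes)

static, variable = run(False), run(True)
print(static, variable)   # good-first ordering yields a higher success rate
```

Because channel evolution is independent of the polling order, the gain comes purely from shuffling transmissions, with no extra energy spent, matching the paper's central claim.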
Linear time-invariant controller design for two-channel decentralized control systems
NASA Technical Reports Server (NTRS)
Desoer, Charles A.; Gundes, A. Nazli
1987-01-01
This paper analyzes a linear time-invariant two-channel decentralized control system with a 2 x 2 strictly proper plant. It presents an algorithm for the algebraic design of a class of decentralized compensators which stabilize the given plant.
Plant Species Identification by Bi-channel Deep Convolutional Networks
NASA Astrophysics Data System (ADS)
He, Guiqing; Xia, Zhaoqiang; Zhang, Qiqi; Zhang, Haixi; Fan, Jianping
2018-04-01
Plant species identification has attracted much attention recently, as it has potential applications in environmental protection and human life. Although deep learning techniques can be directly applied to plant species identification, they still need to be tailored to this specific task to obtain state-of-the-art performance. In this paper, a bi-channel deep learning framework is developed for identifying plant species. In the framework, two different sub-networks are first fine-tuned from their respective pretrained models, and a stacking layer is then used to fuse the outputs of the two sub-networks. We construct a plant dataset of the Orchidaceae family for algorithm evaluation. Our experimental results demonstrate that our bi-channel deep network achieves very competitive accuracy compared to existing deep learning algorithms.
Two-Channel Satellite Retrievals of Aerosol Properties: An Overview
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.
1999-01-01
In order to reduce current uncertainties in the evaluation of the direct and indirect effects of tropospheric aerosols on climate on the global scale, it has been suggested to apply multi-channel retrieval algorithms to the full period of existing satellite data. This talk will outline the methodology of interpreting two-channel satellite radiance data over the ocean and describe a detailed analysis of the sensitivity of retrieved aerosol parameters to the assumptions made in different retrieval algorithms. We will specifically address the calibration and cloud screening issues, consider the suitability of existing satellite data sets to detecting short- and long-term regional and global changes, compare preliminary results obtained by several research groups, and discuss the prospects of creating an advanced retroactive climatology of aerosol optical thickness and size over the oceans.
NASA Astrophysics Data System (ADS)
DeSouza-Machado, Sergio; Larrabee Strow, L.; Tangborn, Andrew; Huang, Xianglei; Chen, Xiuhong; Liu, Xu; Wu, Wan; Yang, Qiguang
2018-01-01
One-dimensional variational retrievals of temperature and moisture fields from hyperspectral infrared (IR) satellite sounders use cloud-cleared radiances (CCRs) as their observation. These derived observations allow the use of clear-sky-only radiative transfer in the inversion for geophysical variables but at reduced spatial resolution compared to the native sounder observations. Cloud clearing can introduce various errors, although scenes with large errors can be identified and ignored. Information content studies show that, when using multilayer cloud liquid and ice profiles in infrared hyperspectral radiative transfer codes, there are typically only 2-4 degrees of freedom (DOFs) of cloud signal. This implies that a simplified cloud representation is sufficient for some applications which need accurate radiative transfer. Here we describe a single-footprint retrieval approach for clear and cloudy conditions, which uses the thermodynamic and cloud fields from numerical weather prediction (NWP) models as a first guess, together with a simple cloud-representation model coupled to a fast scattering radiative transfer algorithm (RTA). The NWP model thermodynamic and cloud profiles are first co-located to the observations, after which the N-level cloud profiles are converted to two slab clouds (TwoSlab; typically one for ice and one for water clouds). From these, one run of our fast cloud-representation model allows an improvement of the a priori cloud state by comparing the observed and model-simulated radiances in the thermal window channels. The retrieval yield is over 90 %, while the degrees of freedom correlate with the observed window channel brightness temperature (BT), which itself depends on the cloud optical depth. The cloud-representation and scattering package is benchmarked against radiances computed using a maximum random overlap (MRO) cloud scheme.
All-sky infrared radiances measured by NASA's Atmospheric Infrared Sounder (AIRS) and NWP thermodynamic and cloud profiles from the European Centre for Medium-Range Weather Forecasts (ECMWF) forecast model are used in this paper.
Panigrahy, D; Sahu, P K
2017-03-01
This paper proposes a five-stage methodology to extract the fetal electrocardiogram (FECG) from the single-channel abdominal ECG using a differential evolution (DE) algorithm, an extended Kalman smoother (EKS) and an adaptive neuro-fuzzy inference system (ANFIS) framework. The heart rate of the fetus can easily be detected after estimation of the fetal ECG signal. The abdominal ECG signal contains the fetal ECG signal, a maternal ECG component, and noise. To estimate the fetal ECG signal from the abdominal ECG signal, the noise and the maternal ECG component present in it must be removed. The pre-processing stage is used to remove the noise from the abdominal ECG signal. The EKS framework is used to estimate the maternal ECG signal from the abdominal ECG signal. Optimized parameters of the maternal ECG components are required to develop the state and measurement equations of the EKS framework; these parameters are selected by the differential evolution algorithm. The relationship between the maternal ECG signal and the maternal ECG component present in the abdominal ECG signal is nonlinear. To estimate the actual maternal ECG component present in the abdominal ECG signal and to capture this nonlinear relationship, the ANFIS is used. Inputs to the ANFIS framework are the output of the EKS and the pre-processed abdominal ECG signal. The fetal ECG signal is computed by subtracting the output of the ANFIS from the pre-processed abdominal ECG signal. The non-invasive fetal ECG database and set A of the 2013 PhysioNet/Computing in Cardiology Challenge database (PCDB) are used for validation of the proposed methodology. The proposed methodology shows a sensitivity of 94.21%, accuracy of 90.66%, and positive predictive value of 96.05% on the non-invasive fetal ECG database, and a sensitivity of 91.47%, accuracy of 84.89%, and positive predictive value of 92.18% on set A of the PCDB.
Radio-frequency response of single pores and artificial ion channels
NASA Astrophysics Data System (ADS)
Kim, H. S.; Ramachandran, S.; Stava, E.; van der Weide, D. W.; Blick, R. H.
2011-09-01
Intercellular communication relies on ion channels and pores in cell membranes. These protein-formed channels enable the exchange of ions and small molecules, allowing cells to interact electrically and/or chemically. Traditionally, recordings of single ion channels and pores are performed in the dc regime, owing to the extremely high impedance of these molecular junctions. This paper is intended as an introduction to radio-frequency (RF) recordings of single-molecule junctions in bilipid membranes. First, we demonstrate how early approaches to using microwave circuitry as readout devices for ion channel formation were realized. The second step then focuses on how to engineer microwave coupling into the high-impedance channel by making use of bio-compatible micro-coaxial lines. We then demonstrate integration of an ultra-broadband microwave circuit for the direct sampling of single α-hemolysin pores in a suspended bilipid membrane. Simultaneous direct current recordings reveal that we can monitor and correlate the RF transmission signal, enabling us to relate the open-close states observed in the direct current to the RF signal. Altogether, our experiments lay the groundwork for an RF-readout technique to perform real-time in vitro recordings of pores. The technique thus holds great promise for research and drug screening applications. The possible enhancement of the sampling rates of single channels and pores afforded by the large recording bandwidth will allow us to track the passage of single ions.
A framework with Cuckoo algorithm for discovering regular plans in mobile clients
NASA Astrophysics Data System (ADS)
Tsiligaridis, John
2017-09-01
In a mobile computing system, broadcasting has become a very interesting and challenging research issue. The server continuously broadcasts data to mobile users; the data can be inserted into customized-size relations and broadcast as a Regular Broadcast Plan (RBP) over multiple channels. Given the data size for each provided service, two algorithms, the Basic Regular Algorithm (BRA) and the Partition Value Algorithm (PVA), provide static and dynamic RBP construction with multiple-constraint solutions, respectively. Servers have to define the data size of the services and can provide a feasible RBP by working with many broadcasting plan operations. The operations become more complicated when there are many kinds of services and the sizes of the data sets are unknown to the server. To that end, a framework has been developed that also gives the ability to select low- or high-capacity channels for servicing. Theorems with new analytical results provide direct conditions for the existence of solutions to the RBP problem with the compound criterion. Two kinds of solutions are provided: the equal and the non-equal subrelation solutions. The Cuckoo Search Algorithm (CS) with Levy flight behavior has been selected for the optimization. The CS for RBP (CSRP) is developed by applying the theorems to the discovery of RBPs. An additional change to the CS has been made in order to strengthen its local search. The CS can also discover RBPs with the minimum number of channels. With these capabilities, modern servers can be upgraded to discover RBPs with fewer channels.
Improved Surface Parameter Retrievals using AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John
2008-01-01
The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007, generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Two very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; and 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions. In this methodology, longwave CO2 channel observations in the spectral region 700 cm^-1 to 750 cm^-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm^-1 to 2395 cm^-1 are used for temperature sounding purposes. This allows for accurate temperature soundings under more difficult cloud conditions. This paper further improves on the methodology used in Version 5 to derive surface skin temperature and surface spectral emissivity from AIRS/AMSU observations. Now, following the approach used to improve tropospheric temperature profiles, surface skin temperature is also derived using only shortwave window channels. This produces improved surface parameters, both day and night, compared to what was obtained in Version 5. These in turn result in improved boundary layer temperatures and retrieved total O3 burden.
Zhang, Lu; Hong, Xuezhi; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Schatz, Richard; Guo, Changjian; Zhang, Junwei; Nordwall, Fredrik; Engenhardt, Klaus M; Westergren, Urban; Popov, Sergei; Jacobsen, Gunnar; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia
2018-01-15
We experimentally demonstrate the transmission of a 200 Gbit/s discrete multitone (DMT) signal at the soft forward error correction limit in an intensity-modulation direct-detection system with a single C-band packaged distributed feedback laser and traveling-wave electro-absorption modulator (DFB-TWEAM), digital-to-analog converter and photodiode. The bit- and power-loaded DMT signal is transmitted over 1.6 km of standard single-mode fiber (SSMF) with a net rate of 166.7 Gbit/s, achieving an effective electrical spectral efficiency of 4.93 bit/s/Hz. Net rates of 174.2 Gbit/s and 179.5 Gbit/s are also demonstrated over 0.8 km of SSMF and in an optical back-to-back configuration, respectively. The characteristics of the packaged DFB-TWEAM are presented. The nonlinearity-aware digital signal processing algorithm for channel equalization is mathematically described; it improves the signal-to-noise ratio by up to 3.5 dB.
Two-ply channels for faster wicking in paper-based microfluidic devices.
Camplisson, Conor K; Schilling, Kevin M; Pedrotti, William L; Stone, Howard A; Martinez, Andres W
2015-12-07
This article describes the development of porous two-ply channels for paper-based microfluidic devices that wick fluids significantly faster than conventional, porous, single-ply channels. The two-ply channels were made by stacking two single-ply channels on top of each other and were fabricated entirely out of paper, wax and toner using two commercially available printers, a convection oven and a thermal laminator. The wicking in paper-based channels was studied and modeled using a modified Lucas-Washburn equation to account for the effect of evaporation, and a paper-based titration device incorporating two-ply channels was demonstrated.
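A minimal evaporation-modified Lucas-Washburn model makes the two-ply advantage concrete. The ODE form dl/dt = D/(2l) - a*l (with a lumping evaporative loss per wetted length) and all parameter values below are assumptions for illustration, not the paper's fitted model; the qualitative point is that reducing the relative evaporative loss (as stacking two plies roughly does, since pore volume doubles while exposed surface area does not) speeds wicking and extends the limiting distance sqrt(D/(2a)).

```python
# Forward-Euler integration of dl/dt = D/(2*l) - a*l, a toy
# evaporation-modified Lucas-Washburn model (assumed form, invented units).

def wick(D, a, t_end, dt=1e-3, l0=1e-3):
    l = l0
    for _ in range(int(t_end / dt)):
        l += dt * (D / (2 * l) - a * l)   # capillary driving minus evaporative loss
    return l

D, a = 1.0, 0.02                 # mm^2/s and 1/s, illustrative magnitudes
l_single = wick(D, a, t_end=60)
l_twoply = wick(D, a / 2, t_end=60)   # two-ply: roughly half the relative evaporation
print(l_single, l_twoply)        # the two-ply channel wicks farther in the same time
```

In the variable u = l^2 the model is linear, du/dt = D - 2*a*u, so the numerical result can be checked against the exact solution u(t) = (D/2a)(1 - exp(-2at)).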
Bierer, Julie Arenberg; Faulkner, Kathleen F.
2010-01-01
Objectives The goal of this study was to evaluate the ability of a threshold measure, made with a restricted electrode configuration, to identify channels exhibiting relatively poor spatial selectivity. With a restricted electrode configuration, channel-to-channel variability in threshold may reflect variations in the interface between the electrodes and auditory neurons (i.e., nerve survival, electrode placement, tissue impedance). These variations in the electrode-neuron interface should also be reflected in psychophysical tuning curve measurements. Specifically, it is hypothesized that high single-channel thresholds obtained with the spatially focused partial tripolar electrode configuration are predictive of wide or tip-shifted psychophysical tuning curves. Design Data were collected from five cochlear implant listeners implanted with the HiRes 90k cochlear implant (Advanced Bionics). Single-channel thresholds and most comfortable listening levels were obtained for stimuli that varied in presumed electrical field size by using the partial tripolar configuration, for which a fraction of current (σ) from a center active electrode returns through two neighboring electrodes and the remainder through a distant indifferent electrode. Forward-masked psychophysical tuning curves were obtained for channels with the highest, lowest, and median tripolar (σ=1 or 0.9) thresholds. The probe channel and level were fixed and presented with either the monopolar (σ=0) or a more focused partial tripolar (σ ≥ 0.55) configuration. The masker channel and level were varied while the configuration was fixed to σ = 0.5. A standard, three-interval, two-alternative forced choice procedure was used for thresholds and masked levels. Results Single-channel threshold and variability in threshold across channels systematically increased as the compensating current, σ, increased and the presumed electrical field became more focused. 
Across subjects, channels with the highest single-channel thresholds, when measured with a narrow, partial tripolar stimulus, had significantly broader psychophysical tuning curves than the lowest-threshold channels. In two subjects, the tips of the tuning curves were shifted away from the probe channel. Tuning curves were also wider with monopolar probes than with partial tripolar probes, for both the highest- and lowest-threshold channels. Conclusions These results suggest that single-channel thresholds measured with a restricted stimulus can be used to identify cochlear implant channels with poor spatial selectivity. Channels having wide or tip-shifted tuning characteristics would likely not deliver the appropriate spectral information to the intended auditory neurons, leading to suboptimal perception. As a clinical tool, quick identification of impaired channels could lead to patient-specific mapping strategies and result in improved speech and music perception. PMID:20090533
Estimating the beam attenuation coefficient in coastal waters from AVHRR imagery
NASA Astrophysics Data System (ADS)
Gould, Richard W.; Arnone, Robert A.
1997-09-01
This paper presents an algorithm to estimate particle beam attenuation at 660 nm (cp660) in coastal areas using the red and near-infrared channels of the NOAA AVHRR satellite sensor. In situ reflectance spectra and cp660 measurements were collected at 23 stations in Case I and II waters during an April 1993 cruise in the northern Gulf of Mexico. The reflectance spectra were weighted by the spectral response of the AVHRR sensor and integrated over the channel 1 waveband to estimate the atmospherically corrected signal recorded by the satellite. An empirical relationship between integrated reflectance and cp660 values was derived with a linear correlation coefficient of 0.88. Because the AVHRR sensor requires a strong channel 1 signal, the algorithm is applicable in highly turbid areas (cp660 > 1.5 m^-1) where scattering from suspended sediment strongly controls the shape and magnitude of the red (550-650 nm) reflectance spectrum. The algorithm was tested on a data set collected 2 years later in different coastal waters in the northern Gulf of Mexico, and satellite estimates of cp660 averaged within 37% of measured values. Application of the algorithm provides daily images of nearshore regions at 1 km resolution for evaluating processes affecting ocean color distribution patterns (tides, winds, currents, river discharge). Further validation and refinement of the algorithm are in progress to permit quantitative application in other coastal areas. Published by Elsevier Science Ltd
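The empirical step, fitting a linear relationship between band-integrated reflectance and in situ cp660 and then inverting it for imagery, can be sketched as below. The 23 "stations" are synthetic stand-ins with an invented slope and noise level; the paper reports a 0.88 linear correlation on real cruise data.

```python
import numpy as np

rng = np.random.default_rng(4)

refl = rng.uniform(0.01, 0.10, 23)                     # integrated channel-1 reflectance
cp660 = 40.0 * refl + 0.3 + rng.normal(0, 0.15, 23)    # hypothetical in situ relationship

slope, intercept = np.polyfit(refl, cp660, 1)          # empirical linear fit
r = float(np.corrcoef(refl, cp660)[0, 1])
print(f"slope={slope:.1f}, r={r:.2f}")

# Apply the fit to a new satellite-derived reflectance value:
print(slope * 0.05 + intercept)                        # estimated cp660 (m^-1) at refl = 0.05
```

Operationally the same two fitted coefficients would be applied pixel-by-pixel to atmospherically corrected channel-1 imagery to produce daily cp660 maps.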
NASA Astrophysics Data System (ADS)
Sun, Y. W.; Liu, C.; Xie, P. H.; Hartl, A.; Chan, K. L.; Tian, Y.; Wang, W.; Qin, M.; Liu, J. G.; Liu, W. Q.
2015-12-01
In this paper, we demonstrate accurate industrial SO2 emissions monitoring using a portable multi-channel gas analyzer with an optimized retrieval algorithm. The analyzer features a large dynamic measurement range and corrects for interference from other co-existing infrared absorbers, e.g., NO, CO, CO2, NO2, CH4, HC, N2O and H2O. Both issues have been major limitations of industrial SO2 emissions monitoring. The multi-channel gas analyzer measures 11 different wavelength channels simultaneously in order to correct several major problems of an infrared gas analyzer, including system drift, conflict of sensitivity, interference among different infrared absorbers and limited measurement range. The optimized algorithm uses a third-order polynomial rather than a constant factor to quantify gas-to-gas interference. The measurement results show good performance in both the linear and nonlinear ranges, thereby overcoming the restriction of conventional interference correction to the linear regimes of both the intended and interfering channels. This implies that the measurement range of the developed multi-channel analyzer can be extended into the nonlinear absorption region. The measurement range and accuracy are evaluated by laboratory calibration. Excellent agreement was achieved, with a Pearson correlation coefficient (r^2) of 0.99977 over a measurement range from ~5 ppmv to 10 000 ppmv and a measurement error <2 %. The instrument was also deployed for field measurements: emissions from 3 different factories, characterized by different co-existing infrared absorbers and covering a wide range of concentration levels, were measured. We compared our measurements with commercial SO2 analyzers, and overall good agreement was achieved.
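Why a third-order polynomial beats a constant interference factor can be shown on synthetic data: a Beer-Lambert-like cross-response saturates at high concentration, so a single proportionality constant fails there while a cubic tracks it. All numbers below are invented for illustration.

```python
import numpy as np

c_no = np.linspace(0, 1000, 50)                 # interferent concentration, ppmv
# Saturating (nonlinear) cross-response of the interferent in the SO2 channel:
crosstalk = 120 * (1 - np.exp(-0.002 * c_no))

coeffs = np.polyfit(c_no, crosstalk, 3)         # third-order correction curve
predicted = np.polyval(coeffs, c_no)

# Constant-factor correction (single slope through the endpoints) for comparison:
k = crosstalk[-1] / c_no[-1]
residual_poly = float(np.max(np.abs(crosstalk - predicted)))
residual_const = float(np.max(np.abs(crosstalk - k * c_no)))
print(residual_poly, residual_const)            # cubic tracks the nonlinearity far better
```

The correction term evaluated from the fitted polynomial would then be subtracted from the SO2 channel reading, channel by channel, for each interfering species.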
System Design for Nano-Network Communications
NASA Astrophysics Data System (ADS)
ShahMohammadian, Hoda
The potential applications of nanotechnology in a wide range of areas necessitate nano-networking research. Nano-networking is a new type of networking that has emerged from applying nanotechnology to communication theory. This dissertation therefore presents a framework for physical-layer communications in a nano-network and addresses some of the pressing unsolved challenges in designing a molecular communication system. The contribution of this dissertation is proposing well-justified models for signal propagation, noise sources, optimum receiver design and synchronization in molecular communication channels. The design of any communication system is primarily based on the signal propagation channel and noise models. Using the Brownian motion and advection statistics of molecules, separate signal propagation and noise models are presented for diffusion-based and flow-based molecular communication channels. It is shown that the corrupting noise of molecular channels is uncorrelated and non-stationary, with a signal-dependent magnitude. The next key component of any communication system is the reception and detection process. This dissertation provides a detailed analysis of the effect of the ligand-receptor binding mechanism on the received signal, and develops the first optimal receiver design for molecular communications. The bit error rate performance of the proposed receiver is evaluated and the impact of medium motion on the receiver performance is investigated. Another important feature of any communication system is synchronization. In this dissertation, the first blind synchronization algorithm is presented for molecular communication channels. The proposed algorithm uses a non-decision-directed maximum likelihood criterion to estimate the channel delay. The Cramér-Rao lower bound is also derived, and the performance of the proposed synchronization algorithm is evaluated by examining its mean square error.
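The propagation models above rest on a classical first-passage result that is easy to evaluate: for a molecule released a distance d from a fully absorbing receiver in a 1-D diffusive channel with drift v, the first-arrival time follows an inverse-Gaussian density. The sketch below compares pure diffusion with a flow-assisted channel; the parameter values are illustrative, not the dissertation's.

```python
import numpy as np

def first_arrival_pdf(t, d, D, v):
    """f(t) = d / sqrt(4*pi*D*t^3) * exp(-(d - v*t)^2 / (4*D*t)),
    the inverse-Gaussian first-hitting-time density (Levy density when v = 0)."""
    return d / np.sqrt(4 * np.pi * D * t ** 3) * np.exp(-(d - v * t) ** 2 / (4 * D * t))

t = np.linspace(1e-3, 50, 5000)
d, D = 10.0, 1.0                        # release distance and diffusion coefficient
diffusion_only = first_arrival_pdf(t, d, D, v=0.0)
with_flow = first_arrival_pdf(t, d, D, v=1.0)

t_peak_diff = float(t[np.argmax(diffusion_only)])   # analytic mode: d^2 / (6*D)
t_peak_flow = float(t[np.argmax(with_flow)])
print(t_peak_flow, t_peak_diff)         # drift advances and sharpens the arrival peak
```

The earlier, sharper peak under drift is one intuition for why flow-based channels are easier to detect over, and to synchronize, than pure-diffusion channels.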