Sample records for adaptive thresholding algorithm

  1. Robust Adaptive Thresholder For Document Scanning Applications

    NASA Astrophysics Data System (ADS)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to: (1) a wide range of different color backgrounds; (2) density variations of printed text information; and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desired. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which can dynamically update the black and white reference levels to optimize a local adaptive threshold function. High image quality can be obtained with this algorithm for different types of simulated test patterns. The software algorithm is described, and experimental results are presented to illustrate the procedures. Results also show that the techniques described here can be used for real-time signal processing in varied applications.
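
    As a rough illustration of such a memory-type thresholder, the following sketch tracks running black and white reference levels along a scanline and places the threshold between them; the update gain alpha, the bias and all names are illustrative assumptions, not the paper's parameters.

    ```python
    import numpy as np

    def adaptive_binarize(scanline, alpha=0.05, bias=0.5):
        """Memory-type adaptive thresholding of one scanline (a sketch, not
        the paper's exact algorithm): running black/white reference levels
        are updated exponentially and the threshold sits between them."""
        black, white = float(scanline.min()), float(scanline.max())
        out = np.zeros(scanline.size, dtype=np.uint8)
        for i, v in enumerate(scanline):
            t = black + bias * (white - black)    # local adaptive threshold
            if v < t:
                out[i] = 0                        # ink: refresh black reference
                black += alpha * (v - black)
            else:
                out[i] = 1                        # paper: refresh white reference
                white += alpha * (v - white)
        return out
    ```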

  2. A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm.

    PubMed

    Zhang, Weifang; Li, Yingwu; Jin, Bo; Ren, Feifei; Wang, Hongxun; Dai, Wei

    2018-04-08

    A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. This system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry-Perot (F-P) filter and an optical switch. To improve system resolution, the F-P filter was employed. As this filter is non-linear, it causes shifting of the central wavelengths, with the deviation compensated for by parts of the circuit. Time-division multiplexing (TDM) of FBG sensors is achieved by an optical switch, with the system able to support a combination of 256 FBG sensors. A wavelength scanning speed of 800 Hz can be achieved by an FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed, and its peak recognition rate is 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in a thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999 with a temperature sensitivity of 10 pm/°C. The static interrogation precision was able to reach 0.5 pm. Through a comparison of different peak detection algorithms and interrogation approaches, the system was verified to have the best overall performance in terms of precision, capacity and speed.
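
    A minimal sketch of the self-adaption idea (this record does not spell out the paper's exact rule): the detection threshold is derived from each scan's own robust statistics rather than fixed by hand; the factor k and all names are assumptions.

    ```python
    import numpy as np

    def detect_fbg_peaks(spectrum, k=3.0):
        """Peak detection with a self-adaptive threshold (illustrative).
        The threshold adapts to the spectrum's baseline and noise scale."""
        baseline = np.median(spectrum)
        noise = np.median(np.abs(spectrum - baseline))  # robust noise scale
        thresh = baseline + k * noise                   # self-adaptive threshold
        peaks = []
        for i in range(1, len(spectrum) - 1):
            if (spectrum[i] > thresh and
                    spectrum[i] >= spectrum[i - 1] and
                    spectrum[i] > spectrum[i + 1]):     # local maximum test
                peaks.append(i)
        return peaks, thresh
    ```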

  3. A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm

    PubMed Central

    Zhang, Weifang; Li, Yingwu; Jin, Bo; Ren, Feifei

    2018-01-01

    A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. This system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry–Perot (F–P) filter and an optical switch. To improve system resolution, the F–P filter was employed. As this filter is non-linear, it causes shifting of the central wavelengths, with the deviation compensated for by parts of the circuit. Time-division multiplexing (TDM) of FBG sensors is achieved by an optical switch, with the system able to support a combination of 256 FBG sensors. A wavelength scanning speed of 800 Hz can be achieved by an FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed, and its peak recognition rate is 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in a thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999 with a temperature sensitivity of 10 pm/°C. The static interrogation precision was able to reach 0.5 pm. Through a comparison of different peak detection algorithms and interrogation approaches, the system was verified to have the best overall performance in terms of precision, capacity and speed. PMID:29642507

  4. A lane line segmentation algorithm based on adaptive threshold and connected domain theory

    NASA Astrophysics Data System (ADS)

    Feng, Hui; Xu, Guo-sheng; Han, Yi; Liu, Yang

    2018-04-01

    Before detecting cracks and repairs on road lanes, it is necessary to eliminate the influence of lane lines on the recognition results in road lane images. To address the problems caused by lane lines, an image segmentation algorithm based on adaptive thresholding and connected-domain theory is proposed. First, by analyzing features such as the grey-level distribution and the illumination of the images, the algorithm uses the Hough transform to divide the images into different sections and converts them into binary images separately. It then uses connected-domain theory to amend the outcome of segmentation, remove noise and fill the interior zone of lane lines, as sketched below. Experiments have proved that this method can eliminate the influence of illumination and lane-line abrasion, removing noise thoroughly while maintaining high segmentation precision.
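
    The connected-domain amendment step can be sketched as follows, assuming SciPy is available; min_area and the hole-filling step are illustrative choices, not the authors' settings.

    ```python
    import numpy as np
    from scipy import ndimage

    def clean_segmentation(binary, min_area=50):
        """Connected-domain post-processing of a thresholded lane image
        (a sketch of the general idea): small connected components are
        treated as noise and removed, and holes inside retained
        components are filled."""
        labels, n = ndimage.label(binary)                     # label components
        areas = ndimage.sum(binary, labels, range(1, n + 1))  # component sizes
        keep = np.zeros(n + 1, dtype=bool)
        keep[1:] = areas >= min_area                          # drop tiny blobs
        return ndimage.binary_fill_holes(keep[labels])        # fill interiors
    ```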

  5. Adaptive threshold shearlet transform for surface microseismic data denoising

    NASA Astrophysics Data System (ADS)

    Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan

    2018-06-01

    Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly influences the identification and location of microseismic events. The shearlet transform is a new multiscale transform which can effectively represent low-magnitude microseismic signals. In the shearlet domain, due to the different distributions of valid signals and random noise, shearlet coefficients can be shrunk by a threshold; the threshold is therefore vital in suppressing random noise. Conventional threshold denoising algorithms usually use the same threshold to process all coefficients, which causes inefficient noise suppression or loss of valid signals. To solve these problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate a fundamental threshold for each direction subband. In each direction subband, an adjustment factor is obtained from each subband coefficient and its neighboring coefficients, in order to adaptively regulate the fundamental threshold for different shearlet coefficients. Finally, we apply the adaptive threshold to the different shearlet coefficients. Experimental denoising results on synthetic records and field data illustrate that the proposed method performs better in suppressing random noise and preserving valid signals than the conventional shearlet denoising method.
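
    A minimal sketch of neighborhood-adaptive shrinkage in one direction subband, in the spirit of the adjustment factor described above (the exact ATST rule is not reproduced): where neighboring coefficients carry high energy, suggesting signal, the threshold is reduced; elsewhere the full fundamental threshold applies.

    ```python
    import numpy as np

    def adaptive_shrink(coeffs, base_thresh, win=3):
        """Neighbourhood-adaptive soft thresholding of one subband
        (an illustrative stand-in for the ATST adjustment rule)."""
        pad = win // 2
        padded = np.pad(coeffs.astype(float) ** 2, pad, mode='edge')
        local = np.empty(coeffs.shape, dtype=float)
        for i in range(coeffs.shape[0]):
            for j in range(coeffs.shape[1]):
                # mean energy of the coefficient's win-x-win neighbourhood
                local[i, j] = padded[i:i + win, j:j + win].mean()
        factor = np.minimum(base_thresh ** 2 / (local + 1e-12), 1.0)
        t = base_thresh * factor     # smaller threshold where energy is high
        return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
    ```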

  6. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1), which can be solved by iterative thresholding algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p∈{1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we consider some sparse image recovery applications and obtain good results in comparison with related work. PMID:29244777

  7. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    PubMed

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1), which can be solved by iterative thresholding algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p∈{1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we consider some sparse image recovery applications and obtain good results in comparison with related work.
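
    For the p = 1/2 case the scalar shrinkage step has a known closed form (the "half" thresholding operator of Xu et al.), which a plain iterative Lp thresholding loop can call as sketched below; the sub-dictionary re-weighting that distinguishes SAITA is omitted, and all parameter choices are illustrative.

    ```python
    import numpy as np

    def half_threshold(v, kappa):
        """Closed-form solution of min_z (z - v)^2 + kappa*|z|^(1/2),
        applied elementwise (the L1/2 'half' thresholding operator)."""
        t = (54.0 ** (1.0 / 3.0) / 4.0) * kappa ** (2.0 / 3.0)  # threshold level
        out = np.zeros_like(v, dtype=float)
        big = np.abs(v) > t
        vb = v[big]
        phi = np.arccos((kappa / 8.0) * (np.abs(vb) / 3.0) ** -1.5)
        out[big] = (2.0 / 3.0) * vb * (1.0 + np.cos(2.0 * np.pi / 3.0
                                                    - 2.0 * phi / 3.0))
        return out

    def ista_half(A, y, lam, steps=200):
        """Plain iterative L1/2 thresholding for min ||Ax-y||^2 + lam*||x||_1/2;
        SAITA additionally re-weights lam per sub-dictionary (omitted here)."""
        L = 2.0 * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            v = x - 2.0 * A.T @ (A @ x - y) / L   # gradient step on data term
            x = half_threshold(v, 2.0 * lam / L)  # L1/2 shrinkage step
        return x
    ```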

  8. Wavelet-based adaptive thresholding method for image segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl

    2001-05-01

    A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
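
    A compact sketch of the construction, assuming PyWavelets; the wavelet, level and attenuation factor are illustrative: reconstructing with attenuated detail coefficients yields a high-frequency-reduced surface that then serves as a per-pixel threshold.

    ```python
    import numpy as np
    import pywt

    def wavelet_threshold_surface(img, wavelet='db2', level=3, atten=0.2):
        """Builds a spatially varying threshold surface by wavelet
        synthesis with attenuated detail coefficients (a sketch of the
        idea, not the authors' code); pixels are compared against their
        local threshold."""
        coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
        damped = [coeffs[0]] + [tuple(atten * d for d in detail)
                                for detail in coeffs[1:]]   # damp detail bands
        surface = pywt.waverec2(damped, wavelet)
        surface = surface[:img.shape[0], :img.shape[1]]     # trim padding
        return img > surface                                # local segmentation
    ```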

  9. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technique for rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window width-fitting. First, white noise is filtered from the measured data using the wavelet threshold method. The data are then segmented using windows whose step lengths follow even logarithmic intervals. Within each window, data polluted by electromagnetic noise are identified using an energy-detection criterion, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and data polluted by non-stationary electromagnetic noise are replaced with their fitting results, so the non-stationary electromagnetic noise is effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that stationary white noise and non-stationary electromagnetic noise in GREATEM signals can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
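
    The window-wise exponential fitting and replacement can be sketched as below: a log-space line fit per window (which assumes positive amplitudes), with hypothetical window bounds and energy threshold standing in for the paper's discriminating rule.

    ```python
    import numpy as np

    def fit_exponential_window(t, y):
        """Least-squares fit of y ~ a*exp(b*t) for one window, done in
        log space (a simplified stand-in; requires y > 0 on the window)."""
        b, log_a = np.polyfit(t, np.log(y), 1)   # line fit on log amplitude
        return np.exp(log_a) * np.exp(b * t)     # fitted replacement samples

    def replace_noisy_windows(t, y, windows, energy_thresh):
        """Replaces windows whose energy exceeds a threshold (energy
        detection) with their exponential fits; windows are (start, stop)
        index pairs."""
        y = y.copy()
        for s, e in windows:
            if np.sum(y[s:e] ** 2) > energy_thresh:   # contaminated window
                y[s:e] = fit_exponential_window(t[s:e], y[s:e])
        return y
    ```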

  10. ADAPTIVE THRESHOLD LOGIC.

    DTIC Science & Technology

    The design and construction of a 16-variable threshold logic gate with adaptable weights is described. The operating characteristics of tape wound...and sizes as well as for the 16-input adaptive threshold logic gate. (Author)

  11. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  12. Modern Adaptive Analytics Approach to Lowering Seismic Network Detection Thresholds

    NASA Astrophysics Data System (ADS)

    Johnson, C. E.

    2017-12-01

    Modern seismic networks present a number of challenges, but perhaps the most notable are those related to 1) extreme variation in station density, 2) temporal variation in station availability, and 3) the need to achieve detectability for much smaller events of strategic importance. The first of these has been reasonably addressed in the development of modern seismic associators, such as GLASS 3.0 by the USGS/NEIC, though some work remains to be done in this area. However, the latter two challenges demand special attention. Station availability is impacted by weather, equipment failure, and the addition or removal of stations, and while thresholds have been pushed to increasingly smaller magnitudes, new algorithms are needed to achieve even lower thresholds. Station availability can be addressed by a modern, adaptive architecture that maintains specified performance envelopes using adaptive analytics coupled with complexity theory. Finally, detection thresholds can be lowered using a novel approach that tightly couples waveform analytics with the event detection and association processes, based on a principled repicking algorithm that uses particle realignment for enhanced phase discrimination.

  13. Noise adaptive wavelet thresholding for speckle noise removal in optical coherence tomography.

    PubMed

    Zaki, Farzana; Wang, Yahui; Su, Hao; Yuan, Xin; Liu, Xuan

    2017-05-01

    Optical coherence tomography (OCT) is based on coherence detection of interferometric signals and hence inevitably suffers from speckle noise. To remove speckle noise in OCT images, wavelet domain thresholding has demonstrated significant advantages in suppressing noise magnitude while preserving image sharpness. However, speckle noise in OCT images has different characteristics in different spatial scales, which has not been considered in previous applications of wavelet domain thresholding. In this study, we demonstrate a noise adaptive wavelet thresholding (NAWT) algorithm that exploits the difference of noise characteristics in different wavelet sub-bands. The algorithm is simple, fast, effective and is closely related to the physical origin of speckle noise in OCT image. Our results demonstrate that NAWT outperforms conventional wavelet thresholding.
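
    A sketch in the spirit of sub-band-adaptive thresholding, assuming PyWavelets; the MAD noise estimate and universal threshold are standard stand-ins, not the authors' exact NAWT rule.

    ```python
    import numpy as np
    import pywt

    def subband_adaptive_denoise(img, wavelet='db4', level=4):
        """Wavelet soft thresholding with a noise scale estimated
        separately in every detail sub-band, so each scale gets its own
        threshold (illustrative sketch)."""
        coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
        out = [coeffs[0]]
        for detail in coeffs[1:]:
            shrunk = []
            for d in detail:
                sigma = np.median(np.abs(d)) / 0.6745       # per-band noise
                t = sigma * np.sqrt(2.0 * np.log(d.size))   # universal threshold
                shrunk.append(np.sign(d) * np.maximum(np.abs(d) - t, 0.0))
            out.append(tuple(shrunk))
        rec = pywt.waverec2(out, wavelet)
        return rec[:img.shape[0], :img.shape[1]]
    ```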

  14. Variable threshold algorithm for division of labor analyzed as a dynamical system.

    PubMed

    Castillo-Cagigal, Manuel; Matallanas, Eduardo; Navarro, Iñaki; Caamaño-Martín, Estefanía; Monasterio-Huelin, Félix; Gutiérrez, Álvaro

    2014-12-01

    Division of labor is a widely studied aspect of colony behavior in social insects. Division-of-labor models indicate how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical-systems point of view cannot be found in the literature. In this paper, we define a division-of-labor model as a discrete-time dynamical system, in order to study the equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable threshold algorithm is based on specialization mechanisms. It is able to achieve asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and numbers of individuals.
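
    The response-threshold rule underlying such models is compact enough to state directly; the sketch below gives only the engagement probability (stimulus s, personal threshold theta, steepness n), while the paper's contribution, varying the thresholds over time, is not reproduced.

    ```python
    import numpy as np

    def task_engagement(stimulus, thresholds, n=2):
        """Classic response-threshold rule from division-of-labor models:
        engagement probability rises with the task stimulus and falls
        with the individual's threshold."""
        s = float(stimulus)
        theta = np.asarray(thresholds, dtype=float)
        return s ** n / (s ** n + theta ** n)

    # Identical stimulus, heterogeneous thresholds -> specialization:
    print(task_engagement(stimulus=1.0, thresholds=[0.5, 1.0, 2.0]))
    ```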

  15. Adaptive threshold control for auto-rate fallback algorithm in IEEE 802.11 multi-rate WLANs

    NASA Astrophysics Data System (ADS)

    Wu, Qilin; Lu, Yang; Zhu, Xiaolin; Ge, Fangzhen

    2012-03-01

    The IEEE 802.11 standard supports multiple rates for data transmission in the physical layer. To improve network performance, a rate adaptation scheme called auto-rate fallback (ARF) is now widely adopted in practice. However, the ARF scheme suffers performance degradation in environments with multiple contending nodes. In this article, we propose a novel rate adaptation scheme called ARF with adaptive threshold control. In such environments, the proposed scheme can effectively mitigate the effect of frame collisions on rate adaptation decisions by adaptively adjusting the rate-up and rate-down thresholds according to the current collision level. Simulation results show that the proposed scheme achieves significantly higher throughput than other existing rate adaptation schemes. Furthermore, the simulation results also demonstrate that the proposed scheme responds effectively to varying channel conditions.
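
    A schematic of ARF-style rate control with a collision-aware rate-up threshold is sketched below; the scaling rule, the counters and all names are illustrative assumptions rather than the authors' exact scheme.

    ```python
    class AdaptiveARF:
        """ARF rate control sketch: consecutive successes trigger rate-up,
        consecutive failures trigger rate-down; the rate-up threshold
        grows with the observed collision level so collisions are not
        misread as channel degradation."""

        def __init__(self, rates, up=10, down=2):
            self.rates, self.idx = rates, 0
            self.up, self.down = up, down
            self.succ = self.fail = 0

        def on_result(self, success, collision_level):
            up_thresh = self.up * (1.0 + collision_level)  # adaptive threshold
            if success:
                self.succ, self.fail = self.succ + 1, 0
                if self.succ >= up_thresh and self.idx < len(self.rates) - 1:
                    self.idx, self.succ = self.idx + 1, 0  # move to higher rate
            else:
                self.fail, self.succ = self.fail + 1, 0
                if self.fail >= self.down and self.idx > 0:
                    self.idx, self.fail = self.idx - 1, 0  # fall back
            return self.rates[self.idx]
    ```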

  16. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.

  17. Multiscale computations with a wavelet-adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Rastigejev, Yevgenii Anatolyevich

    A wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems governed by partial differential equations is introduced. The main features of the method include fast algorithms for the calculation of wavelet coefficients and the approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on mathematically well-established wavelet theory, which allows us to provide error estimates of the solution; these are used in conjunction with an appropriate threshold criterion to adapt the collocation grid. Efficient data structures for grid representation, as well as related computational algorithms to support the grid-rearrangement procedure, are developed. The algorithm is applied to the simulation of phenomena described by the Navier-Stokes equations. First, we study the ignition and subsequent viscous detonation of an H2:O2:Ar mixture in a one-dimensional shock tube. Subsequently, we apply the algorithm to the two- and three-dimensional benchmark problem of incompressible flow in a lid-driven cavity at large Reynolds numbers. For these cases we show that solutions of accuracy comparable to the benchmarks are obtained with more than an order of magnitude reduction in degrees of freedom. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations so as to produce accurate results at a relatively low computational cost.

  18. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the computational time required for exhaustive search grows exponentially with the number of desired thresholds. Swarm-intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm by adding elements from differential evolution and from the artificial bee colony algorithm. The proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733

  19. A method of camera calibration with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei

    2009-07-01

    In order to calculate the parameters of a camera correctly, we must determine accurate coordinates of certain points in the image plane. Corners are important features in 2D images. Generally speaking, they are points of high curvature that lie at the junction of image regions of different brightness, so corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN corner detection algorithm, we propose an approach to retrieve the gray-difference threshold adaptively. This makes it possible to pick up the right chessboard inner corners under all kinds of gray contrast. Experimental results based on this method proved it to be feasible.

  20. A globally convergent MC algorithm with an adaptive learning rate.

    PubMed

    Peng, Dezhong; Yi, Zhang; Xiang, Yong; Zhang, Haixian

    2012-02-01

    This brief deals with the problem of minor component analysis (MCA). Artificial neural networks can be exploited to achieve the task of MCA. Recent research shows that convergence of neural-network-based MCA algorithms can be guaranteed if the learning rates are less than certain thresholds. However, the computation of these thresholds needs information about the eigenvalues of the autocorrelation matrix of the data set, which is unavailable in online extraction of the minor component from an input data stream. In this correspondence, we introduce an adaptive learning rate into the OJAn MCA algorithm, such that its convergence condition does not depend on any unobtainable information and can be easily satisfied in practical applications.

  1. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    NASA Astrophysics Data System (ADS)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that accurately reconstructs MR images from under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding-based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log-thresholding, soft-thresholding and hard-thresholding techniques at different reduction factors.

  2. Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo

    PubMed Central

    Fontaine, Bertrand; Peña, José Luis; Brette, Romain

    2014-01-01

    Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in the spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neurons responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397

  3. An adaptive tensor voting algorithm combined with texture spectrum

    NASA Astrophysics Data System (ADS)

    Wang, Gang; Su, Qing-tang; Lü, Gao-huan; Zhang, Xiao-feng; Liu, Yu-huan; He, An-zhi

    2015-01-01

    An adaptive tensor voting algorithm combined with the texture spectrum is proposed. The image texture spectrum is used to obtain an adaptive scale parameter for the voting field. The texture information then modifies both the attenuation coefficient and the attenuation field, so that the algorithm creates more significant and correct structures in the original image according to human visual perception. At the same time, the proposed method improves edge-extraction quality, decreasing flocculent regions efficiently and making the image clearer. In an experiment on extracting pavement cracks, the original pavement image is processed by the proposed method combined with a significant-curve-feature threshold procedure, and the resulting image displays the faint crack signals submerged in the complicated background efficiently and clearly.

  4. The threshold algorithm: Description of the methodology and new developments

    NASA Astrophysics Data System (ADS)

    Neelamraju, Sridhar; Oligschleger, Christina; Schön, J. Christian

    2017-10-01

    Understanding the dynamics of complex systems requires the investigation of their energy landscape. In particular, the flow of probability on such landscapes is a central feature in visualizing the time evolution of complex systems. To obtain such flows, and the concomitant stable states of the systems and the generalized barriers among them, the threshold algorithm has been developed. Here, we describe the methodology of this approach starting from the fundamental concepts in complex energy landscapes and present recent new developments, the threshold-minimization algorithm and the molecular dynamics threshold algorithm. For applications of these new algorithms, we draw on landscape studies of three disaccharide molecules: lactose, maltose, and sucrose.
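
    The acceptance rule at the heart of the threshold algorithm is simple enough to sketch: random moves are accepted as long as the energy stays below the current lid, and repeated runs over a ladder of lids yield the probability flows described above; the helper names here are illustrative.

    ```python
    import numpy as np

    def threshold_run(energy, move, x0, lid, steps=10000, rng=None):
        """Minimal threshold-algorithm walker: explore the landscape
        region reachable without ever exceeding the energy lid."""
        rng = rng or np.random.default_rng()
        x, visited = x0, [x0]
        for _ in range(steps):
            y = move(x, rng)          # propose a random move
            if energy(y) < lid:       # accept only below the lid
                x = y
                visited.append(x)
        return visited                # sampled states under this lid
    ```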

  5. Algorithmic detectability threshold of the stochastic block model

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  6. Adaptive cockroach swarm algorithm

    NASA Astrophysics Data System (ADS)

    Obagbuwa, Ibidun C.; Abidoye, Ademola P.

    2017-07-01

    An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of the CSO algorithm is modified by employing a blend-crossover predator-prey evolution method, which helps the algorithm prevent any possible population collapse, maintain population diversity and create an adaptive search in each iteration. The performance of the proposed algorithm was evaluated on 16 global optimization benchmark function problems and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization and artificial bee colony algorithms.

  7. An adaptive DPCM algorithm for predicting contours in NTSC composite video signals

    NASA Astrophysics Data System (ADS)

    Cox, N. R.

    An adaptive DPCM algorithm is proposed for encoding digitized National Television Systems Committee (NTSC) color video signals. This algorithm essentially predicts picture contours in the composite signal without resorting to component separation. The contour parameters (slope thresholds) are optimized using four 'typical' television frames that have been sampled at three times the color subcarrier frequency. Three variations of the basic predictor are simulated and compared quantitatively with three non-adaptive predictors of similar complexity. By incorporating a dual-word-length coder and buffer memory, high quality color pictures can be encoded at 4.0 bits/pel or 42.95 Mbit/s. The effect of channel error propagation is also investigated.

  8. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    NASA Astrophysics Data System (ADS)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the mainstream video retrieval method, using the video's own features to perform automatic identification and retrieval. This method involves a key technology: shot segmentation. In this paper, a method for automatic video shot boundary detection with K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely, frames with significant change and frames with no significant change. Then, based on the classification results, the improved adaptive dual-threshold comparison method is used to determine both abrupt and gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
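
    The dual-threshold comparison itself is the classic twin-comparison test sketched below; the paper's adaptive derivation of the two thresholds from the K-means clusters is not reproduced, so fixed values stand in for it.

    ```python
    def shot_boundaries(frame_diffs, t_high, t_low):
        """Twin-comparison shot boundary detection: a difference above
        t_high marks an abrupt cut; a run of differences between t_low
        and t_high whose accumulated change exceeds t_high marks a
        gradual transition."""
        cuts, graduals = [], []
        acc, start = 0.0, None
        for i, d in enumerate(frame_diffs):
            if d >= t_high:
                cuts.append(i)                    # abrupt boundary
                acc, start = 0.0, None
            elif d >= t_low:
                if start is None:
                    start = i                     # possible gradual start
                acc += d
                if acc >= t_high:
                    graduals.append((start, i))   # gradual boundary confirmed
                    acc, start = 0.0, None
            else:
                acc, start = 0.0, None            # change died out
        return cuts, graduals
    ```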

  9. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding

    PubMed Central

    Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-01-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not yet well understood. We address this question here using neural simulations and whole-cell intracellular recordings in combination with information-theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent of whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states, i.e., decoding from different states is less state dependent in the adaptive-threshold case if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations of adaptive-threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information. PMID:27304526

  10. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding.

    PubMed

    Huang, Chao; Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-06-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not yet well understood. We address this question here using neural simulations and whole-cell intracellular recordings in combination with information-theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent of whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states, i.e., decoding from different states is less state dependent in the adaptive-threshold case if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations of adaptive-threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information.

  11. Algorithm for improving psychophysical threshold estimates by detecting sustained inattention in experiments using PEST.

    PubMed

    Rinderknecht, Mike D; Ranzani, Raffaele; Popp, Werner L; Lambercy, Olivier; Gassert, Roger

    2018-05-10

    Psychophysical procedures are applied in various fields to assess sensory thresholds. During experiments, sampled psychometric functions are usually assumed to be stationary. However, perception can be altered, for example by loss of attention to the presentation of stimuli, leading to biased data and hence poor threshold estimates. The few existing approaches attempting to identify non-stationarities either detect only whether there was a change in perception, or are not suitable for experiments with a relatively small number of trials (e.g., fewer than 300). We present a method to detect inattention periods on a trial-by-trial basis, with the aim of improving threshold estimates in psychophysical experiments using the adaptive sampling procedure Parameter Estimation by Sequential Testing (PEST). The performance of the algorithm was evaluated in computer simulations modeling inattention, and tested in a behavioral experiment on proprioceptive difference-threshold assessment in 20 stroke patients, a population where attention deficits are likely to be present. Simulations showed that estimation errors could be reduced by up to 77% for inattentive subjects, even in sequences with fewer than 100 trials. In the behavioral data, inattention was detected in 14% of assessments, and applying the proposed algorithm resulted in reduced test-retest variability in 73% of these corrected assessment pairs. The novel algorithm complements existing approaches and, besides being applicable post hoc, could also be used online to prevent the collection of biased data. This could have important implications for assessment practice by shortening experiments and improving estimates, especially in clinical settings.

  12. Statistical efficiency of adaptive algorithms.

    PubMed

    Widrow, Bernard; Kamenetsky, Max

    2003-01-01

    The statistical efficiency of a learning algorithm applied to the adaptation of a given set of variable weights is defined as the ratio of the quality of the converged solution to the amount of data used in training the weights. Statistical efficiency is computed by averaging over an ensemble of learning experiences. A high-quality solution is very close to optimal, while a low-quality solution corresponds to noisy weights and less than optimal performance. In this work, two gradient descent adaptive algorithms are compared: the LMS algorithm and the LMS/Newton algorithm. LMS is simple and practical, and is used in many applications worldwide. LMS/Newton is based on Newton's method and the LMS algorithm. LMS/Newton is optimal in the least squares sense. It maximizes the quality of its adaptive solution while minimizing the use of training data. Many least squares adaptive algorithms have been devised over the years, but no other least squares algorithm can give better performance, on average, than LMS/Newton. LMS is easily implemented, but LMS/Newton, although of great mathematical interest, cannot be implemented in most practical applications. Because of its optimality, LMS/Newton serves as a benchmark for all least squares adaptive algorithms. The performances of LMS and LMS/Newton are compared, and it is found that under many circumstances, both algorithms provide equal performance. For example, when both algorithms are tested with statistically nonstationary input signals, their average performances are equal. When adapting with stationary input signals and with random initial conditions, their respective learning times are on average equal. However, under worst-case initial conditions, the learning time of LMS can be much greater than that of LMS/Newton, and this is the principal disadvantage of the LMS algorithm. But the strong points of LMS are ease of implementation and optimal performance under important practical conditions. For these reasons, the LMS algorithm remains in wide practical use.
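
    For reference, the LMS update itself is a one-line gradient step; the sketch below is the textbook form (tap count and step size mu are arbitrary), not the authors' experimental setup.

    ```python
    import numpy as np

    def lms(x, d, n_taps=8, mu=0.01):
        """Textbook LMS adaptive filter: the weights move along the
        instantaneous gradient of the squared error with step size mu."""
        w = np.zeros(n_taps)
        y = np.zeros(len(x))
        for k in range(n_taps, len(x)):
            u = x[k - n_taps:k][::-1]   # most recent samples first
            y[k] = w @ u                # filter output
            e = d[k] - y[k]             # estimation error
            w += mu * e * u             # LMS weight update
        return w, y
    ```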

  13. Threshold automatic selection hybrid phase unwrapping algorithm for digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Zhou, Meiling; Min, Junwei; Yao, Baoli; Yu, Xianghua; Lei, Ming; Yan, Shaohui; Yang, Yanlong; Dan, Dan

    2015-01-01

    The conventional quality-guided (QG) phase unwrapping algorithm is hard to apply to digital holographic microscopy because of its long execution time. In this paper, we present a threshold-automatic-selection hybrid phase unwrapping algorithm that combines the existing QG algorithm and the flood-fill (FF) algorithm to solve this problem. The original wrapped phase map is divided into high- and low-quality sub-maps by automatically selecting a threshold, and then the FF and QG unwrapping algorithms are used to unwrap the phase of each sub-map, respectively. The feasibility of the proposed method is proved by experimental results, and its execution speed is shown to be much faster than that of the original QG unwrapping algorithm.

  14. Establishing a Dynamic Self-Adaptation Learning Algorithm of the BP Neural Network and Its Applications

    NASA Astrophysics Data System (ADS)

    Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min

    2015-12-01

    In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, a tendency to become trapped in local minima, poor generalization ability and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve its performance. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights, thresholds and learning rates of the BP neural network. This new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks and improves network generalization ability, but also accelerates the convergence speed of the network, avoids trapping in local minima, and enhances network adaptability and prediction ability. The dynamic self-adaptive learning algorithm of the BP neural network is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in prediction accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.

  15. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with parameters adapted towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process; (2) the adaptive algorithm uses the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data; and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy for the DNA computing algorithm. PMID:23935409

  16. Algorithms for accelerated convergence of adaptive PCA.

    PubMed

    Chatterjee, C; Kang, Z; Roychowdhury, V P

    2000-01-01

    We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.

  17. Comparative advantages of novel algorithms using MSR threshold and MSR difference threshold for biclustering gene expression data.

    PubMed

    Das, Shyama; Idicula, Sumam Mary

    2011-01-01

    The goal of biclustering in a gene expression data matrix is to find a submatrix such that the genes in the submatrix show highly correlated activities across all conditions in the submatrix. A measure called mean squared residue (MSR) is used to simultaneously evaluate the coherence of rows and columns within the submatrix. The MSR difference is the incremental increase in MSR when a gene or condition is added to the bicluster. In this chapter, three biclustering algorithms using an MSR threshold (MSRT) and an MSR difference threshold (MSRDT) are tested and compared. All these methods use seeds generated by the K-Means clustering algorithm. These seeds are then enlarged by adding more genes and conditions. The first algorithm makes use of MSRT alone. Both the second and third algorithms make use of MSRT and the newly introduced concept of MSRDT. Highly coherent biclusters are obtained using this concept. In the third algorithm, a different method is used to calculate the MSRDT. The results obtained on benchmark datasets prove that these algorithms are better than many metaheuristic algorithms.

  18. Method of Improved Fuzzy Contrast Combined Adaptive Threshold in NSCT for Medical Image Enhancement

    PubMed Central

    Yang, Jie; Kasabov, Nikola

    2017-01-01

    Noises and artifacts are introduced to medical images due to acquisition techniques and systems. This interference leads to low contrast and distortion in images, which not only impacts the effectiveness of the medical image but also seriously affects the clinical diagnoses. This paper proposes an algorithm for medical image enhancement based on the nonsubsampled contourlet transform (NSCT), which combines adaptive threshold and an improved fuzzy set. First, the original image is decomposed into the NSCT domain with a low-frequency subband and several high-frequency subbands. Then, a linear transformation is adopted for the coefficients of the low-frequency component. An adaptive threshold method is used for the removal of high-frequency image noise. Finally, the improved fuzzy set is used to enhance the global contrast and the Laplace operator is used to enhance the details of the medical images. Experiments and simulation results show that the proposed method is superior to existing methods of image noise removal, improves the contrast of the image significantly, and obtains a better visual effect. PMID:28744464

  19. A self-adaptive algorithm for traffic sign detection in motion image based on color and shape features

    NASA Astrophysics Data System (ADS)

    Zhang, Ka; Sheng, Yehua; Gong, Zhijun; Ye, Chun; Li, Yongqiang; Liang, Cheng

    2007-06-01

    As an important sub-system of intelligent transportation systems (ITS), the detection and recognition of traffic signs in mobile images is becoming one of the hot spots in international ITS research. Considering the problem of automatic traffic sign detection in motion images, a new self-adaptive algorithm for traffic sign detection based on color and shape features is proposed in this paper. First, global statistical color features of different images are computed based on statistical theory. Second, self-adaptive thresholds and special segmentation rules for image segmentation are designed according to these global color features. Then, for red, yellow and blue traffic signs, the color image is segmented into three binary images using these thresholds and rules. Third, if the number of white pixels in a segmented binary image exceeds the filtering threshold, the binary image is further filtered. Fourth, gray-value projection is used to determine the top, bottom, left and right boundaries of candidate traffic sign regions in the segmented binary image. Finally, if the shape features of a candidate region match those of a real traffic sign, the candidate region is confirmed as a detected traffic sign region. The new algorithm was applied to actual motion images of natural scenes taken by the CCD camera of a mobile photogrammetry system in Nanjing at different times. The experimental results show that the algorithm is not only simple, robust and well adapted to natural scene images, but also reliable and fast in real traffic sign detection.

  20. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    PubMed Central

    Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing

    2017-01-01

    The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in image thresholding has become an NP-hard problem. This paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the optimal-solution updating mechanism of the search agents through weights. Taking Kapur's entropy as the optimized function and based on the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can search out the optimal thresholds efficiently and precisely, and they are very close to the results obtained by exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the artificial bee colony (ABC), and the classical GWO, MDGWO has advantages in terms of image segmentation quality, objective function values, and their stability. PMID:28127305
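
    The quantity being maximized is Kapur's entropy over the grey-level classes induced by the thresholds; a sketch is given below (hist is assumed to be a normalized histogram), and any optimizer, MDGWO included, can be wrapped around it.

    ```python
    import numpy as np

    def kapur_entropy(hist, thresholds):
        """Kapur's entropy objective for multilevel thresholding: the sum
        of the entropies of the classes delimited by the thresholds."""
        edges = [0] + sorted(thresholds) + [len(hist)]
        total = 0.0
        for a, b in zip(edges[:-1], edges[1:]):
            p = hist[a:b]
            w = p.sum()
            if w <= 0:
                continue                      # empty class contributes nothing
            q = p[p > 0] / w                  # class-conditional probabilities
            total += -(q * np.log(q)).sum()   # entropy of this class
        return total
    ```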

  1. Study on a low complexity adaptive modulation algorithm in OFDM-ROF system with sub-carrier grouping technology

    NASA Astrophysics Data System (ADS)

    Liu, Chong-xin; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Tian, Qing-hua; Tian, Feng; Wang, Yong-jun; Rao, Lan; Mao, Yaya; Li, Deng-ao

    2018-01-01

    During the last decade, orthogonal frequency division multiplexing radio-over-fiber (OFDM-ROF) systems with adaptive modulation technology have been of great interest due to their capability to raise spectral efficiency dramatically, reduce the effects of the fiber link or wireless channel, and improve communication quality. In this study, based on a theoretical analysis of nonlinear distortion and frequency-selective fading on the transmitted signal, a low-complexity adaptive modulation algorithm is proposed in combination with sub-carrier grouping technology. This algorithm achieves optimal system performance by calculating the average combined signal-to-noise ratio of each group and dynamically adjusting the modulation format according to the preset threshold and the user's requirements. At the same time, this algorithm takes the sub-carrier group as the smallest unit in the initial bit allocation and the subsequent bit adjustment, so its complexity is only 1/M (where M is the number of sub-carriers in each group) of that of the Fischer algorithm and much smaller than that of many classic adaptive modulation algorithms, such as the Hughes-Hartogs and Chow algorithms, in line with the trend toward green, high-speed communication. Simulation results show that the performance of the OFDM-ROF system with the improved algorithm is much better than that without adaptive modulation, with the BER of the former 10 to 100 times lower than that of the latter at larger SNR values. This low-complexity adaptive modulation algorithm is thus extremely useful for OFDM-ROF systems.

  2. An improved contrast enhancement algorithm for infrared images based on adaptive double plateaus histogram equalization

    NASA Astrophysics Data System (ADS)

    Li, Shuo; Jin, Weiqi; Li, Li; Li, Yiyang

    2018-05-01

    Infrared thermal images can reflect the thermal-radiation distribution of a particular scene. However, the contrast of the infrared images is usually low. Hence, it is generally necessary to enhance the contrast of infrared images in advance to facilitate subsequent recognition and analysis. Based on the adaptive double plateaus histogram equalization, this paper presents an improved contrast enhancement algorithm for infrared thermal images. In the proposed algorithm, the normalized coefficient of variation of the histogram, which characterizes the level of contrast enhancement, is introduced as feedback information to adjust the upper and lower plateau thresholds. The experiments on actual infrared images show that compared to the three typical contrast-enhancement algorithms, the proposed algorithm has better scene adaptability and yields better contrast-enhancement results for infrared images with more dark areas or a higher dynamic range. Hence, it has high application value in contrast enhancement, dynamic range compression, and digital detail enhancement for infrared thermal images.
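
    For orientation, a minimal Python sketch of plain double-plateau histogram equalization follows; the upper and lower plateau values are passed in as fixed arguments here, whereas the paper's contribution is precisely to adapt them using the histogram's normalized coefficient of variation.

```python
import numpy as np

def double_plateau_he(img_u8, t_up, t_low):
    # Histogram of an 8-bit image.
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
    # Upper plateau: clip dominant background bins from above.
    hist = np.clip(hist, 0, t_up)
    # Lower plateau: raise sparse detail bins from below.
    hist[(hist > 0) & (hist < t_low)] = t_low
    # Standard cumulative mapping on the modified histogram.
    cdf = hist.cumsum()
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img_u8]
```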

  3. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.

  4. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
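
    The momentum-plus-adaptive-restart ingredient can be illustrated on a generic sparse least-squares problem. The sketch below is not BARISTA itself: the B1-based majorizing matrices are replaced by a scalar step size, and the restart test is the standard gradient-based one.

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding (proximal operator of the l1 norm).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_restart(A, y, lam, step, iters=200):
    # Minimizes 0.5*||Ax - y||^2 + lam*||x||_1 with momentum and restart.
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(iters):
        x_new = soft(z - step * A.T @ (A @ z - y), step * lam)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        mom = (t - 1.0) / t_new
        # Adaptive restart: drop momentum when it points "uphill".
        if np.dot(z - x_new, x_new - x) > 0:
            t_new, mom = 1.0, 0.0
        z = x_new + mom * (x_new - x)
        x, t = x_new, t_new
    return x
```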

  5. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  6. Adaptive thresholding with inverted triangular area for real-time detection of the heart rate from photoplethysmogram traces on a smartphone.

    PubMed

    Jiang, Wen Jun; Wittek, Peter; Zhao, Li; Gao, Shi Chao

    2014-01-01

    Photoplethysmogram (PPG) signals acquired by smartphone cameras are weaker than those acquired by dedicated pulse oximeters. Furthermore, the signals have lower sampling rates, have notches in the waveform and are more severely affected by baseline drift, leading to specific morphological characteristics. This paper introduces a new feature, the inverted triangular area, to address these specific characteristics. The new feature enables real-time adaptive waveform detection using an algorithm of linear time complexity. It can also recognize notches in the waveform and it is inherently robust to baseline drift. An implementation of the algorithm on Android is available for free download. We collected data from 24 volunteers and compared our algorithm in peak detection with two competing algorithms designed for PPG signals, Incremental-Merge Segmentation (IMS) and Adaptive Thresholding (ADT). A sensitivity of 98.0% and a positive predictive value of 98.8% were obtained, which were 7.7% higher than the IMS algorithm in sensitivity, and 8.3% higher than the ADT algorithm in positive predictive value. The experimental results confirmed the applicability of the proposed method.

  7. Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry

    2014-07-01

    The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This results in spurious correlations in the covariance structure, leading to incorrect updates or divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered to threshold the forecast covariance and gain matrices: the hard, soft, lasso and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding cases (in petroleum reservoirs) whose levels of heterogeneity/nonlinearity differ. It should be noted that besides adaptive thresholding, standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is more robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding; it should be applied judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
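
    For reference, the shrinkage rules named above can be written as element-wise functions; in this form the lasso rule coincides with soft thresholding, and a = 3.7 is the customary SCAD constant.

```python
import numpy as np

def hard(x, t):
    return np.where(np.abs(x) > t, x, 0.0)

def soft(x, t):  # the "lasso" rule coincides with soft thresholding here
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def scad(x, t, a=3.7):
    ax = np.abs(x)
    return np.where(ax <= 2.0 * t,
                    soft(x, t),
                    np.where(ax <= a * t,
                             ((a - 1.0) * x - np.sign(x) * a * t) / (a - 2.0),
                             x))
```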

  8. Analysis of image thresholding segmentation algorithms based on swarm intelligence

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo

    2013-03-01

    Swarm intelligence-based image thresholding segmentation algorithms play an important role in image segmentation research. In this paper, we briefly introduce the theory of four existing swarm intelligence-based image segmentation algorithms: the fish swarm algorithm, artificial bee colony, bacterial foraging algorithm, and particle swarm optimization. Several benchmark images are then tested to show how the four algorithms differ in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper and Gaussian noise. Through these comparisons, this paper gives a qualitative analysis of the performance variance of the four algorithms. The conclusions provide useful guidance for practical image segmentation.

  9. Unipolar Terminal-Attractor Based Neural Associative Memory with Adaptive Threshold

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)

    1996-01-01

    A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration of the unipolar binary neuron states with terminal attractors, introduced to reduce spurious states in a Hopfield associative-memory network, and by using the inner-product approach, perfect convergence and correct retrieval are achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.

  10. Unipolar terminal-attractor based neural associative memory with adaptive threshold

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)

    1993-01-01

    A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration of the unipolar binary neuron states with terminal attractors, introduced to reduce spurious states in a Hopfield associative-memory network, and by using the inner-product approach, perfect convergence and correct retrieval are achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.

  11. Synergy of adaptive thresholds and multiple transmitters in free-space optical communication.

    PubMed

    Louthain, James A; Schmidt, Jason D

    2010-04-26

    Laser propagation through extended turbulence causes severe beam spread and scintillation. Airborne laser communication systems require special considerations in size, complexity, power, and weight. Rather than using bulky, costly adaptive optics systems, we reduce the variability of the received signal by integrating a two-transmitter system with an adaptive threshold receiver to average out the deleterious effects of turbulence. In contrast to adaptive optics approaches, systems employing multiple transmitters and adaptive thresholds exhibit performance improvements that are unaffected by turbulence strength. Simulations of this system with on-off keying (OOK) showed that reducing the scintillation variations with multiple transmitters improves the performance of low-frequency adaptive threshold estimators by 1-3 dB. The combination of multiple transmitters and adaptive thresholding provided at least a 10 dB gain over implementing only transmitter pointing and receiver tilt correction for all three high-Rytov-number scenarios. The scenario with a spherical-wave Rytov number R=0.20 enjoyed a 13 dB reduction in the required SNR for BERs between 10^-5 and 10^-3, consistent with the code gain metric. All five scenarios between 0.06 and 0.20 Rytov number improved to within 3 dB of the SNR of the lowest Rytov-number scenario.

  12. Adaptive thresholding and dynamic windowing method for automatic centroid detection of digital Shack-Hartmann wavefront sensor.

    PubMed

    Yin, Xiaoming; Li, Xiang; Zhao, Liping; Fang, Zhongping

    2009-11-10

    A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transforms distorted-wavefront detection into a centroid measurement; the accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and dynamic windowing, utilizing image processing techniques, for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noise sources, such as diffraction of the digital SHWS, unevenness and instability of the light source, as well as deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability than other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
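
    A stripped-down sketch of the two ideas, an adaptive per-subaperture threshold and a window re-centered on the spot, is given below; the window size and the k factor are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def spot_centroid(sub, win=7, k=3.0):
    # Adaptive threshold from the subaperture's own statistics.
    thr = sub.mean() + k * sub.std()
    # Dynamic window re-centered on the brightest pixel.
    pr, pc = np.unravel_index(np.argmax(sub), sub.shape)
    r0, r1 = max(pr - win // 2, 0), min(pr + win // 2 + 1, sub.shape[0])
    c0, c1 = max(pc - win // 2, 0), min(pc + win // 2 + 1, sub.shape[1])
    w = np.clip(sub[r0:r1, c0:c1] - thr, 0, None)  # thresholded weights
    if w.sum() == 0:
        return float(pr), float(pc)  # fall back to the peak pixel
    rows, cols = np.mgrid[r0:r1, c0:c1]
    return (rows * w).sum() / w.sum(), (cols * w).sum() / w.sum()
```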

  13. Hardware Acceleration of Adaptive Neural Algorithms.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Conrad D.

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  14. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    NASA Astrophysics Data System (ADS)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish whether an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a challenging benchmarking problem.

  15. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension of a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928

  16. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension of a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.

  17. Threshold matrix for digital halftoning by genetic algorithm optimization

    NASA Astrophysics Data System (ADS)

    Alander, Jarmo T.; Mantere, Timo J.; Pyylampi, Tero

    1998-10-01

    Digital halftoning is used in both low- and high-resolution high-quality printing technologies. Our method is designed mainly for low-resolution ink-jet marking machines to produce both gray-tone and color images. The main problem with digital halftoning is pink noise caused by the human eye's visual transfer function. To compensate for this, the random dot patterns used are optimized to contain more blue than pink noise. Several such dot-pattern-generator threshold matrices have been created automatically by genetic algorithm optimization, a non-deterministic global optimization method imitating natural evolution and genetics. A hybrid of a genetic algorithm with a search method based on local backtracking was developed, together with several fitness functions evaluating dot patterns for rectangular grids. By modifying the fitness function, a family of dot generators results, each with its particular statistical features. Several versions of the genetic algorithms, backtracking, and fitness functions were tested to find a reasonable combination. The generated threshold matrices have been tested by simulating a set of test images using the Khoros image processing system. Even though the work focused on developing low-resolution marking technology, the resulting family of dot generators can also be applied in other halftoning application areas, including high-resolution printing technology.

  18. Low-resolution expression recognition based on central oblique average CS-LBP with adaptive threshold

    NASA Astrophysics Data System (ADS)

    Han, Sheng; Xi, Shi-qiong; Geng, Wei-dong

    2017-11-01

    In order to solve the problem of the low recognition rate of traditional feature extraction operators on low-resolution images, a novel expression recognition algorithm is proposed, named central oblique average center-symmetric local binary pattern (CS-LBP) with adaptive threshold (ATCS-LBP). Firstly, features of face images are extracted by the proposed operator after preprocessing. Secondly, the obtained feature image is divided into blocks. Thirdly, the histogram of each block is computed independently and all histograms are concatenated to create a final feature vector. Finally, expression classification is achieved by using a support vector machine (SVM) classifier. Experimental results on the Japanese female facial expression (JAFFE) database show that the proposed algorithm can achieve a recognition rate of 81.9% when the resolution is as low as 16×16, which is much better than that of traditional feature extraction operators.

  19. Classification of adaptive memetic algorithms: a comparative study.

    PubMed

    Ong, Yew-Soon; Lim, Meng-Hiot; Zhu, Ning; Wong, Kok-Wai

    2006-02-01

    Adaptation of parameters and operators represents one of the recent most important and promising areas of research in evolutionary computations; it is a form of designing self-configuring algorithms that acclimatize to suit the problem in hand. Here, our interests are on a recent breed of hybrid evolutionary algorithms typically known as adaptive memetic algorithms (MAs). One unique feature of adaptive MAs is the choice of local search methods or memes and recent studies have shown that this choice significantly affects the performances of problem searches. In this paper, we present a classification of memes adaptation in adaptive MAs on the basis of the mechanism used and the level of historical knowledge on the memes employed. Then the asymptotic convergence properties of the adaptive MAs considered are analyzed according to the classification. Subsequently, empirical studies on representatives of adaptive MAs for different type-level meme adaptations using continuous benchmark problems indicate that global-level adaptive MAs exhibit better search performances. Finally we conclude with some promising research directions in the area.

  20. An Adaptive and Time-Efficient ECG R-Peak Detection Algorithm.

    PubMed

    Qin, Qin; Li, Jianqing; Yue, Yinggao; Liu, Chengyu

    2017-01-01

    R-peak detection is crucial in electrocardiogram (ECG) signal analysis. This study proposed an adaptive and time-efficient R-peak detection algorithm for ECG processing. First, wavelet multiresolution analysis was applied to enhance the ECG signal representation. Then, the ECG was mirrored to convert large negative R-peaks to positive ones. After that, local maxima were calculated by the first-order forward differential approach and were screened by amplitude and time-interval thresholds to locate the R-peaks. The algorithm's performance, including detection accuracy and time consumption, was tested on the MIT-BIH arrhythmia database and the QT database. Experimental results showed that the proposed algorithm achieved a mean sensitivity of 99.39%, positive predictivity of 99.49%, and accuracy of 98.89% on the MIT-BIH arrhythmia database, and 99.83%, 99.90%, and 99.73%, respectively, on the QT database. In processing one ECG record, the mean time consumption was 0.872 s for the MIT-BIH arrhythmia database and 0.763 s for the QT database, yielding 30.6% and 32.9% time reductions compared to the traditional Pan-Tompkins method.
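
    A condensed sketch of the detection stage follows. The wavelet enhancement step is omitted, mirroring is approximated with an absolute deviation from the median, and the amplitude fraction and refractory interval are assumed values.

```python
import numpy as np

def detect_r_peaks(ecg, fs, amp_frac=0.5, refractory_s=0.25):
    # Approximate "mirroring": fold large negative deflections upward.
    sig = np.abs(ecg - np.median(ecg))
    # Local maxima via the first-order forward difference.
    d = np.diff(sig)
    cand = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    if cand.size == 0:
        return cand
    # Amplitude threshold.
    cand = cand[sig[cand] > amp_frac * sig[cand].max()]
    # Time-interval (refractory) threshold.
    peaks, last = [], -np.inf
    for i in cand:
        if i - last >= refractory_s * fs:
            peaks.append(i)
            last = i
        elif sig[i] > sig[peaks[-1]]:
            # Keep the larger of two peaks that are too close together.
            peaks[-1] = i
            last = i
    return np.array(peaks)
```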

  1. An Adaptive and Time-Efficient ECG R-Peak Detection Algorithm

    PubMed Central

    Qin, Qin

    2017-01-01

    R-peak detection is crucial in electrocardiogram (ECG) signal analysis. This study proposed an adaptive and time-efficient R-peak detection algorithm for ECG processing. First, wavelet multiresolution analysis was applied to enhance the ECG signal representation. Then, the ECG was mirrored to convert large negative R-peaks to positive ones. After that, local maxima were calculated by the first-order forward differential approach and were screened by amplitude and time-interval thresholds to locate the R-peaks. The algorithm's performance, including detection accuracy and time consumption, was tested on the MIT-BIH arrhythmia database and the QT database. Experimental results showed that the proposed algorithm achieved a mean sensitivity of 99.39%, positive predictivity of 99.49%, and accuracy of 98.89% on the MIT-BIH arrhythmia database, and 99.83%, 99.90%, and 99.73%, respectively, on the QT database. In processing one ECG record, the mean time consumption was 0.872 s for the MIT-BIH arrhythmia database and 0.763 s for the QT database, yielding 30.6% and 32.9% time reductions compared to the traditional Pan-Tompkins method. PMID:29104745

  2. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image

    NASA Astrophysics Data System (ADS)

    Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.

    2017-12-01

    Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, the use of gradients is very popular for highlighting and subsequently segmenting areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be both very small and very large in size, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific gray levels of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu thresholding method and an adaptive thresholding method based on local properties.
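
    The adaptive-percentile idea can be sketched as follows: lower the percentile when a large fraction of pixels carries strong gradients (a large defect), raise it otherwise. All control values here are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def adaptive_percentile_mask(img, strong_level=40.0, hi_pct=99.5, lo_pct=97.0):
    # Gradient magnitude of the strip image.
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    grad = np.hypot(gx, gy)
    # The fraction of pixels with strong gradients drives the percentile:
    # many strong pixels (a large defect) -> lower percentile, and vice versa.
    strong_frac = (grad > strong_level).mean()
    pct = hi_pct if strong_frac < 0.01 else lo_pct
    return grad > np.percentile(grad, pct)
```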

  3. Adaptive Algorithms for Automated Processing of Document Images

    DTIC Science & Technology

    2011-01-01

    Adaptive Algorithms for Automated Processing of Document Images, by Mudit Agrawal. Doctor of Philosophy dissertation, 2011.

  4. An adaptive replacement algorithm for paged-memory computer systems.

    NASA Technical Reports Server (NTRS)

    Thorington, J. M., Jr.; Irwin, J. D.

    1972-01-01

    A general class of adaptive replacement schemes for use in paged memories is developed. One such algorithm, called SIM, is simulated using a probability model that generates memory traces, and the results of the simulation of this adaptive scheme are compared with those obtained using the best non-lookahead algorithms. A technique for implementing this type of adaptive replacement algorithm with state-of-the-art digital hardware is also presented.

  5. Detectability Thresholds and Optimal Algorithms for Community Structure in Dynamic Networks

    NASA Astrophysics Data System (ADS)

    Ghasemian, Amir; Zhang, Pan; Clauset, Aaron; Moore, Cristopher; Peel, Leto

    2016-07-01

    The detection of communities within a dynamic network is a common means for obtaining a coarse-grained view of a complex system and for investigating its underlying processes. While a number of methods have been proposed in the machine learning and physics literature, we lack a theoretical analysis of their strengths and weaknesses, or of the ultimate limits on when communities can be detected. Here, we study the fundamental limits of detecting community structure in dynamic networks. Specifically, we analyze the limits of detectability for a dynamic stochastic block model where nodes change their community memberships over time, but where edges are generated independently at each time step. Using the cavity method, we derive a precise detectability threshold as a function of the rate of change and the strength of the communities. Below this sharp threshold, we claim that no efficient algorithm can identify the communities better than chance. We then give two algorithms that are optimal in the sense that they succeed all the way down to this threshold. The first uses belief propagation, which gives asymptotically optimal accuracy, and the second is a fast spectral clustering algorithm, based on linearizing the belief propagation equations. These results extend our understanding of the limits of community detection in an important direction, and introduce new mathematical tools for similar extensions to networks with other types of auxiliary information.

  6. A dual-adaptive support-based stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., the Cross method), which, however, does not work well for different images. To address this issue, this paper proposes a novel dual adaptive support (DAS)-based stereo matching method, which uses both the appearance and shape information of a local region to segment supports automatically, and then integrates the DAS-based cost aggregation with the absolute difference plus census transform cost, scanline optimization, and disparity refinement to develop a stereo matching system. The performance of the DAS method is evaluated on the Middlebury benchmark and by comparison with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, has fewer parameters, and is suitable for parallel computing.

  7. Lower-upper-threshold correlation for underwater range-gated imaging self-adaptive enhancement.

    PubMed

    Sun, Liang; Wang, Xinwei; Liu, Xiaoquan; Ren, Pengdao; Lei, Pingshun; He, Jun; Fan, Songtao; Zhou, Yan; Liu, Yuliang

    2016-10-10

    In underwater range-gated imaging (URGI), enhancement of low-brightness, low-contrast images is critical for human observation. Traditional histogram equalization over-enhances images, with the result that details are lost. To suppress over-enhancement, a lower-upper-threshold correlation method is proposed for self-adaptive enhancement in underwater range-gated imaging, based on double-plateau histogram equalization. The lower threshold determines image details and suppresses over-enhancement; it is correlated with the upper threshold. First, the upper threshold is updated by searching for the local maximum in real time, and then the lower threshold is calculated from the upper threshold and the number of nonzero units selected from a filtered histogram. With this method, the backgrounds of underwater images are constrained while details are enhanced. Finally, validation experiments are performed. Peak signal-to-noise ratio, variance, contrast, and human visual properties are used to evaluate the objective quality of the global and region-of-interest images. The evaluation results demonstrate that the proposed method adaptively selects proper upper and lower thresholds under different conditions and contributes effective image enhancement for human observation in URGI.

  8. Development of a thresholding algorithm for calcium classification at multiple CT energies

    NASA Astrophysics Data System (ADS)

    Ng, LY.; Alssabbagh, M.; Tajuddin, A. A.; Shuaib, I. L.; Zainon, R.

    2017-05-01

    The objective of this study was to develop a thresholding method for calcium classification at different concentrations using single-energy computed tomography (SECT). Five different concentrations of calcium chloride were filled in PMMA tubes and placed inside a water-filled PMMA phantom (diameter 10 cm). The phantom was scanned at 70, 80, 100, 120 and 140 kV using SECT. CARE DOSE 4D was used and the slice thickness was set to 1 mm for all energies. ImageJ software from the National Institutes of Health (NIH) was used to measure the CT numbers of each calcium concentration from the CT images, and the results were compared with those of the developed algorithm for verification. The CT numbers obtained from the developed algorithm and from ImageJ were in close agreement. The multi-thresholding algorithm was found to be able to distinguish different concentrations of calcium chloride. However, it was unable to distinguish low concentrations of calcium chloride from iron (III) nitrate with CT numbers between 25 HU and 65 HU. The developed thresholding method may help to differentiate calcium plaques from other types of plaques in blood vessels, as it proved to have a good ability to detect high concentrations of calcium chloride. However, the algorithm needs to be improved to overcome its limitation in distinguishing calcium chloride solutions whose CT numbers are similar to those of iron (III) nitrate solution.

  9. A new edge detection algorithm based on Canny idea

    NASA Astrophysics Data System (ADS)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. In order to overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and a Euclidean-distance-based filter are applied to the image; second, the Frei-Chen algorithm is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local regions of the gradient-amplitude image to obtain local thresholds, the average of all the calculated thresholds is computed, half of that average is taken as the high threshold, and half of the high threshold as the low threshold. Experimental results show that the new method can effectively suppress noise, preserve edge information, and improve edge detection accuracy.
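
    The threshold-selection rule described above can be sketched with OpenCV as follows; Sobel stands in for the Frei-Chen operator, and the Euclidean-distance filter is omitted, so this is an approximation of the paper's pipeline rather than a reproduction.

```python
import cv2
import numpy as np

def auto_canny(gray_u8, block=64):
    # Median filtering to suppress impulse noise.
    gray = cv2.medianBlur(gray_u8, 5)
    # Gradient magnitude (Sobel stands in for Frei-Chen).
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.convertScaleAbs(np.hypot(gx, gy))
    # Otsu threshold on each block of the gradient image.
    thresholds = []
    for r in range(0, mag.shape[0], block):
        for c in range(0, mag.shape[1], block):
            t, _ = cv2.threshold(mag[r:r + block, c:c + block], 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            thresholds.append(t)
    high = 0.5 * float(np.mean(thresholds))   # half the average -> high threshold
    return cv2.Canny(gray, high / 2.0, high)  # low = half of the high threshold
```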

  10. Incrementing data quality of multi-frequency echograms using the Adaptive Wiener Filter (AWF) denoising algorithm

    NASA Astrophysics Data System (ADS)

    Peña, M.

    2016-10-01

    Achieving an acceptable signal-to-noise ratio (SNR) can be difficult when working in sparsely populated waters and/or when species have low scattering, such as fluid-filled animals. The increasing use of higher frequencies and the study of greater depths in fisheries acoustics, as well as the use of commercial vessels, are raising the need for good denoising algorithms. The use of a lower Sv threshold to remove noise or unwanted targets is not suitable in many cases and increases the relative background noise component in the echogram, demanding more effectiveness from denoising algorithms. The Adaptive Wiener Filter (AWF) denoising algorithm is presented in this study. The technique is based on the AWF commonly used in digital photography and video enhancement. The algorithm first increases the quality of the data with variance-dependent smoothing, before estimating the noise level as the envelope of the Sv minima. The AWF denoising algorithm outperforms existing algorithms in the presence of Gaussian, speckle, and salt-and-pepper noise, although impulse noise needs to be removed beforehand. Cleaned echograms present homogeneous echotraces with outlined edges.
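
    A generic local-variance Wiener filter of the kind the paper adapts is sketched below; the paper's variance-dependent smoothing and Sv-minima noise envelope are replaced by a simple local-statistics formulation with a user-supplied (or crudely estimated) noise variance. The input is assumed to be a float Sv matrix.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wiener_denoise(sv, size=5, noise_var=None):
    # Local first and second moments over a size x size window.
    mean = uniform_filter(sv, size)
    mean_sq = uniform_filter(sv * sv, size)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    if noise_var is None:
        # Crude global noise estimate; the paper instead derives the noise
        # level from the envelope of the Sv minima.
        noise_var = var.mean()
    # Shrink toward the local mean where the local variance is noise-like.
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (sv - mean)
```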

  11. Star adaptation for two algorithms used on serial computers

    NASA Technical Reports Server (NTRS)

    Howser, L. M.; Lambiotte, J. J., Jr.

    1974-01-01

    Two representative algorithms used on a serial computer and presently executed on the Control Data Corporation 6000 computer were adapted to execute efficiently on the Control Data STAR-100 computer. Gaussian elimination for the solution of simultaneous linear equations and the Gauss-Legendre quadrature formula for the approximation of an integral are the two algorithms discussed. A description is given of how the programs were adapted for STAR and why these adaptations were necessary to obtain an efficient STAR program. Some points to consider when adapting an algorithm for STAR are discussed. Program listings of the 6000 version coded in 6000 FORTRAN, the adapted STAR version coded in 6000 FORTRAN, and the STAR version coded in STAR FORTRAN are presented in the appendices.

  12. Multiratio fusion change detection with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Hytla, Patrick C.; Balster, Eric J.; Vasquez, Juan R.; Neuroth, Robert M.

    2017-04-01

    A ratio-based change detection method known as multiratio fusion (MRF) is proposed and tested. The MRF framework builds on other change detection components proposed in this work: dual ratio (DR) and multiratio (MR). The DR method involves two ratios coupled with adaptive thresholds to maximize detected changes and minimize false alarms. The use of two ratios is shown to outperform the single ratio case when the means of the image pairs are not equal. MR change detection builds on the DR method by including negative imagery to produce four total ratios with adaptive thresholds. Inclusion of negative imagery is shown to improve detection sensitivity and to boost detection performance in certain target and background cases. MRF further expands this concept by fusing together the ratio outputs using a routine in which detections must be verified by two or more ratios to be classified as a true changed pixel. The proposed method is tested with synthetically generated test imagery and real datasets with results compared to other methods found in the literature. DR is shown to significantly outperform the standard single ratio method. MRF produces excellent change detection results that exhibit up to a 22% performance improvement over other methods from the literature at low false-alarm rates.

  13. Contributions of adaptation currents to dynamic spike threshold on slow timescales: Biophysical insights from conductance-based models

    NASA Astrophysics Data System (ADS)

    Yi, Guosheng; Wang, Jiang; Wei, Xile; Deng, Bin; Li, Huiyan; Che, Yanqiu

    2017-06-01

    Spike-frequency adaptation (SFA) mediated by various adaptation currents, such as the voltage-gated K+ current (I_M), the Ca2+-gated K+ current (I_AHP), or the Na+-activated K+ current (I_KNa), exists in many types of neurons, and has been shown to effectively shape their information transmission properties on slow timescales. Here we use conductance-based models to investigate how the activation of three adaptation currents regulates the threshold voltage for action potential (AP) initiation during the course of SFA. It is observed that the spike threshold becomes depolarized and the rate of membrane depolarization (dV/dt) preceding an AP is reduced as adaptation currents reduce the firing rate. This indicates that the presence of inhibitory adaptation currents enables the neuron to generate a dynamic threshold inversely correlated with the preceding dV/dt on timescales slower than the fast dynamics of AP generation. By analyzing the interactions of ionic currents at subthreshold potentials, we find that the activation of adaptation currents increases the outward level of the net membrane current prior to AP initiation, which antagonizes inward Na+ and results in a depolarized threshold and lower dV/dt from one AP to the next. Our simulations demonstrate that the threshold dynamics on slow timescales is a secondary effect caused by the activation of adaptation currents. These findings provide a biophysical interpretation of the relationship between adaptation currents and spike threshold.

  14. Adaptive algorithm of magnetic heading detection

    NASA Astrophysics Data System (ADS)

    Liu, Gong-Xu; Shi, Ling-Feng

    2017-11-01

    Magnetic data obtained from a magnetic sensor usually fluctuate in a certain range, which makes it difficult to estimate the magnetic heading accurately. In fact, magnetic heading information is usually submerged in noise because of all kinds of electromagnetic interference and the diversity of the pedestrian’s motion states. In order to solve this problem, a new adaptive algorithm based on the (typically) right-angled corridors of a building or residential buildings is put forward to process heading information. First, a 3D indoor localization platform is set up based on MPU9250. Then, several groups of data are measured by changing the experimental environment and pedestrian’s motion pace. The raw data from the attached inertial measurement unit are calibrated and arranged into a time-stamped array and written to a data file. Later, the data file is imported into MATLAB for processing and analysis using the proposed adaptive algorithm. Finally, the algorithm is verified by comparison with the existing algorithm. The experimental results show that the algorithm has strong robustness and good fault tolerance, which can detect the heading information accurately and in real-time.

  15. A Novel Zero Velocity Interval Detection Algorithm for Self-Contained Pedestrian Navigation System with Inertial Sensors

    PubMed Central

    Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan

    2016-01-01

    Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms with the premise that the zero velocity interval (ZVI) should be detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The novel algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing the function relationships between the thresholds and the gait frequency; then, the adaptive adjustment of thresholds with gait frequency is realized and improves the ZVI detection precision. To put it into practice, a ZVI detection experiment is carried out; the result shows that compared with the traditional fixed threshold ZVI detection method, the adaptive ZVI detection algorithm can effectively reduce the false and missed detection rate of ZVI; this indicates that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds are carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that the ZVI detected by the adaptive ZVI detection algorithm for pedestrian trajectory calculation can achieve better performance. PMID:27669266

  16. Multi-element array signal reconstruction with adaptive least-squares algorithms

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1992-01-01

    Two versions of the adaptive least-squares algorithm are presented for combining signals from multiple feeds placed in the focal plane of a mechanical antenna whose reflector surface is distorted due to various deformations. Coherent signal combining techniques based on the adaptive least-squares algorithm are examined for nearly optimally and adaptively combining the outputs of the feeds. The performance of the two versions is evaluated by simulations. It is demonstrated for the example considered that both of the adaptive least-squares algorithms are capable of offsetting most of the loss in the antenna gain incurred due to reflector surface deformations.

  17. AMOBH: Adaptive Multiobjective Black Hole Algorithm.

    PubMed

    Wu, Chong; Wu, Tao; Fu, Kaiyuan; Zhu, Yuan; Li, Yongbo; He, Wangyong; Tang, Shengwen

    2017-01-01

    This paper proposes a new multiobjective evolutionary algorithm based on the black hole algorithm with a new individual density assessment (cell density), called the "adaptive multiobjective black hole algorithm" (AMOBH). Cell density has low computational complexity and maintains a good balance between convergence and diversity of the Pareto front. The framework of AMOBH can be divided into three steps. Firstly, the Pareto front is mapped to a new objective space called the parallel cell coordinate system. Then, to adjust the evolutionary strategies adaptively, Shannon entropy is employed to estimate the evolution status. At last, the cell density is combined with a dominance strength assessment called cell dominance to evaluate the fitness of solutions. Compared with the state-of-the-art methods SPEA-II, PESA-II, NSGA-II, and MOEA/D, experimental results show that AMOBH performs well in terms of convergence rate, population diversity, population convergence, and coverage of different Pareto regions, and in most cases has lower time complexity than the latter methods.

  18. Adaptive Two Dimensional RLS (Recursive Least Squares) Algorithms

    DTIC Science & Technology

    1989-03-01

    Adaptive algorithms have been used successfully for many years in a wide range of digital signal processing applications. The 2-D FRLS (fast recursive least squares) algorithm was tested both on computer-generated data and on digitized images, with the 2-D LMS algorithm as a baseline reference.

  19. An adaptive inverse kinematics algorithm for robot manipulators

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.

  20. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  1. Genetic algorithms in adaptive fuzzy control

    NASA Technical Reports Server (NTRS)

    Karr, C. Lucas; Harper, Tony R.

    1992-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.

  2. 3D SAPIV particle field reconstruction method based on adaptive threshold.

    PubMed

    Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi

    2018-03-01

    Particle image velocimetry (PIV) is an essential flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can supply a full understanding of a 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured at high particle densities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross correlations between images captured by the cameras and images projected by the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value at which the correlation coefficient reaches its maximum. A numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of a 16-camera array was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct 3D particle fields.
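
    Once the correlation score has been computed for a sweep of candidate thresholds, the cubic-fit selection step is straightforward; the sketch below assumes the correlations have already been obtained from the capture/reprojection comparison.

```python
import numpy as np

def optimal_threshold(thresholds, correlations):
    # Fit a cubic to (threshold, correlation) samples and take the
    # threshold at the fitted maximum.
    coeffs = np.polyfit(thresholds, correlations, 3)
    fine = np.linspace(np.min(thresholds), np.max(thresholds), 1000)
    return fine[np.argmax(np.polyval(coeffs, fine))]
```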

  3. Adaptive protection algorithm and system

    DOEpatents

    Hedrick, Paul [Pittsburgh, PA; Toms, Helen L [Irwin, PA; Miller, Roger M [Mars, PA

    2009-04-28

    An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.

  4. A Self Adaptive Differential Evolution Algorithm for Global Optimization

    NASA Astrophysics Data System (ADS)

    Kumar, Pravesh; Pant, Millie

    This paper presents a new Differential Evolution algorithm based on the hybridization of adaptive control parameters and trigonometric mutation. First, we propose a self-adaptive DE named ADE in which the control parameters F and Cr are not fixed at constant values but are adapted iteratively. The proposed algorithm is further modified by applying trigonometric mutation, and the corresponding algorithm is named ATDE. The performance of ATDE is evaluated on a set of 8 benchmark functions and the results are compared with the classical DE algorithm in terms of average fitness function value, number of function evaluations, convergence time, and success rate. The numerical results show the competitiveness of the proposed algorithm.
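
    The self-adaptive ingredient can be illustrated with a jDE-style rule, where each individual carries its own F and Cr that are re-drawn with a small probability; this is a stand-in for ADE's scheme, whose exact update rule is not reproduced here, and the trigonometric mutation of ATDE is omitted.

```python
import numpy as np

def evolve(pop, F, Cr, fitness, bounds, rng, tau=0.1):
    # One generation of DE/rand/1/bin with jDE-style parameter self-adaptation:
    # each individual's F and Cr are re-drawn with probability tau and kept
    # only if the resulting trial vector survives selection (minimization).
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        Fi = rng.uniform(0.1, 1.0) if rng.random() < tau else F[i]
        Ci = rng.uniform(0.0, 1.0) if rng.random() < tau else Cr[i]
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = np.clip(pop[a] + Fi * (pop[b] - pop[c]), *bounds)
        cross = rng.random(d) < Ci
        cross[rng.integers(d)] = True  # guarantee at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) <= fitness(pop[i]):
            new_pop[i] = trial
            F[i], Cr[i] = Fi, Ci
    return new_pop
```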

  5. An Adaptive Tradeoff Algorithm for Multi-issue SLA Negotiation

    NASA Astrophysics Data System (ADS)

    Son, Seokho; Sim, Kwang Mong

    Since participants in a Cloud may be independent bodies, mechanisms are necessary for resolving different preferences in leasing Cloud services. Whereas there are currently mechanisms that support service-level agreement negotiation, there is little or no negotiation support for concurrent price and timeslot negotiation for Cloud service reservations. For concurrent price and timeslot negotiation, a tradeoff algorithm to generate and evaluate a proposal consisting of a price and a timeslot is necessary. The contribution of this work is thus to design an adaptive tradeoff algorithm for a multi-issue negotiation mechanism. The tradeoff algorithm, referred to as "adaptive burst mode", is especially designed to increase negotiation speed and total utility and to reduce computational load by adaptively generating a concurrent set of proposals. The empirical results obtained from simulations carried out using a testbed suggest that, with the concurrent price and timeslot negotiation mechanism and the adaptive tradeoff algorithm: (1) both agents achieve the best performance in terms of negotiation speed and utility; and (2) the number of evaluations of each proposal is comparatively lower than in the previous scheme (burst-N).

  6. Properties of an adaptive feedback equalization algorithm.

    PubMed

    Engebretson, A M; French-St George, M

    1993-01-01

    This paper describes a new approach to feedback equalization for hearing aids. The method involves the use of an adaptive algorithm that estimates and tracks the characteristic of the hearing aid feedback path. The algorithm is described and the results of simulation studies and bench testing are presented.

  7. Thresher: an improved algorithm for peak height thresholding of microbial community profiles.

    PubMed

    Starke, Verena; Steele, Andrew

    2014-11-15

    This article presents Thresher, an improved technique for finding peak height thresholds for automated rRNA intergenic spacer analysis (ARISA) profiles. We argue that thresholds must be sample dependent, taking community richness into account. In most previous fragment analyses, a common threshold is applied to all samples simultaneously, ignoring richness variations among samples and thereby compromising cross-sample comparison. Our technique solves this problem, and at the same time provides a robust method for outlier rejection, selecting for removal any replicate pairs that are not valid replicates. Thresholds are calculated individually for each replicate in a pair, and separately for each sample. The thresholds are selected to be the ones that minimize the dissimilarity between the replicates after thresholding. If a choice of threshold results in the two replicates in a pair failing a quantitative test of similarity, either that threshold or that sample must be rejected. We compare thresholded ARISA results with sequencing results, and demonstrate that the Thresher algorithm outperforms conventional thresholding techniques. The software is implemented in R, and the code is available at http://verenastarke.wordpress.com or by contacting the author (vstarke@ciw.edu). Supplementary data are available at Bioinformatics online.
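
    The actual Thresher implementation is the R code linked above; purely as a schematic of the per-replicate threshold search it describes, the sketch below scans relative peak-height thresholds for a replicate pair, keeps the pair that minimizes post-thresholding dissimilarity, and rejects the sample if even the best pair fails a similarity test. The Bray-Curtis measure and the cutoff value are illustrative assumptions.

        import numpy as np

        def bray_curtis(u, v):
            """Bray-Curtis dissimilarity between two peak profiles."""
            denom = u.sum() + v.sum()
            return np.abs(u - v).sum() / denom if denom > 0 else 0.0

        def threshold_replicate_pair(rep1, rep2,
                                     candidates=np.linspace(0.0, 0.2, 41),
                                     max_dissim=0.3):
            """Per-replicate thresholds minimizing replicate dissimilarity.

            rep1, rep2: aligned peak-height profiles of one sample.
            Returns (t1, t2, dissimilarity), or None to flag the sample
            as an outlier (the replicates fail the similarity test).
            """
            rep1, rep2 = np.asarray(rep1, float), np.asarray(rep2, float)
            best = (None, None, np.inf)
            for t1 in candidates:
                p1 = np.where(rep1 >= t1 * rep1.max(), rep1, 0.0)
                for t2 in candidates:
                    p2 = np.where(rep2 >= t2 * rep2.max(), rep2, 0.0)
                    d = bray_curtis(p1, p2)
                    if d < best[2]:
                        best = (t1, t2, d)
            return None if best[2] > max_dissim else best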

  8. Self-Tuning Threshold Method for Real-Time Gait Phase Detection Based on Ground Contact Forces Using FSRs.

    PubMed

    Tang, Jing; Zheng, Jianbin; Wang, Yang; Yu, Lie; Zhan, Enqi; Song, Qiuzhi

    2018-02-06

    This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, previous methods for gait phase detection demonstrate no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (i.e., high-threshold, middle-threshold, and low-threshold) were used to search out the maximum and minimum GCFs for the self-adjustment of the thresholds. The high-threshold was the main threshold used to divide the GCFs into on-ground and off-ground states. Then, the gait phases were obtained through the gait phase detection algorithm (GPDA), which provides the rules that determine the calculations for STTTA. Finally, the reliability of STTTA is determined by comparing its results with those of the Mariani method (referenced as the timing analysis module, TAM) and the Lopez-Meyer method. Experimental results show that the proposed method can be used to detect gait phases in real time and obtains high reliability compared with the previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different speeds.
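
    A minimal sketch of the self-tuning idea (the fraction values and the extrema-search rules here are illustrative assumptions, not the paper's constants) might track the running maximum and minimum GCF and place the three thresholds between them:

        def sttta_step(state, gcf, hi_frac=0.6, mid_frac=0.4, lo_frac=0.2):
            """One sample of a self-tuning triple-threshold detector.

            state: dict with running 'gmax'/'gmin' of the contact force,
                   e.g. state = {'gmax': 1.0, 'gmin': 0.0} initially.
            gcf:   current ground-contact-force reading from the FSRs.
            Returns True for on-ground, False for off-ground.
            """
            span = state['gmax'] - state['gmin']
            hi = state['gmin'] + hi_frac * span   # main on/off threshold
            mid = state['gmin'] + mid_frac * span
            lo = state['gmin'] + lo_frac * span
            # search out extrema to self-adjust the thresholds over time
            if gcf > mid:
                state['gmax'] = max(state['gmax'], gcf)
            if gcf < lo:
                state['gmin'] = min(state['gmin'], gcf)
            return gcf >= hi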

  9. Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters

    NASA Astrophysics Data System (ADS)

    Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon

    2018-04-01

    In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold adapts to the changes of diverse illumination conditions throughout the day. This leads to greater vehicle detection performance compared to a fixed user-defined threshold. In the second process, this paper proposes a novel vehicle tracking algorithm, namely Fuzzy-based Vehicle Analysis (FBA), in order to reduce the false estimation in vehicle tracking caused by the uneven edges of large vehicles and by vehicles changing lanes. The proposed FBA algorithm employs the average edge density and the Horizontal Moving Edge Detection (HMED) algorithm to alleviate those problems, adopting fuzzy rule-based algorithms to rectify the vehicle tracking. The experimental results demonstrate that the proposed system provides high vehicle detection accuracy of about 98.22%. In addition, it also offers a low false detection rate of about 3.92%.

  10. ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)*

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2017-01-01

    This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis called Performance Estimation Problem in [11]. PMID:29805242
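
    For reference, the standard FISTA iteration that the paper revisits, applied here to the ℓ1-regularized least-squares problem min_x ½‖Ax − b‖² + λ‖x‖₁, can be sketched as follows (a textbook rendering, not the paper's new optimized-momentum variant):

        import numpy as np

        def fista(A, b, lam, n_iter=200):
            """FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1 (sketch)."""
            L = np.linalg.norm(A, 2) ** 2       # Lipschitz const. of grad
            soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
            x = np.zeros(A.shape[1])
            y, t = x.copy(), 1.0
            for _ in range(n_iter):
                grad = A.T @ (A @ y - b)
                x_new = soft(y - grad / L, lam / L)   # proximal step
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum
                x, t = x_new, t_new
            return x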

  11. Hair segmentation using adaptive threshold from edge and branch length measures.

    PubMed

    Lee, Ian; Du, Xian; Anthony, Brian

    2017-10-01

    Non-invasive imaging techniques allow the monitoring of skin structure and diagnosis of skin diseases in clinical applications. However, hair in skin images hampers the imaging and classification of the skin structure of interest. Although many hair segmentation methods have been proposed for digital hair removal, a major challenge in hair segmentation remains in detecting hairs that are thin, overlapping, of similar contrast or color to the underlying skin, or overlaid on highly-textured skin structure. To solve this problem, we present an automatic hair segmentation method that uses edge density (ED) and mean branch length (MBL) to measure hair. First, hair is detected by the integration of a top-hat transform and a modified second-order Gaussian filter. Second, we employ a robust adaptive threshold on ED and MBL to generate a hair mask. Third, the hair mask is refined by k-NN classification of hair and skin pixels. The proposed algorithm was tested using two datasets of healthy skin images and lesion images, respectively. These datasets were taken from different imaging platforms under various illumination levels and with varying skin colors. We compared the hair detection and segmentation results of our algorithm with those of six state-of-the-art hair segmentation methods. Our method exhibits a high sensitivity of 75% and specificity of 95%, indicating significantly higher accuracy and a better balance between true positive and false positive detections than the other methods. Published by Elsevier Ltd.

  12. Lowering threshold energy for femtosecond laser pulse photodisruption through turbid media using adaptive optics

    NASA Astrophysics Data System (ADS)

    Hansen, A.; Ripken, Tammo; Krueger, Ronald R.; Lubatschowski, Holger

    2011-03-01

    Focused femtosecond laser pulses are applied in ophthalmic tissues to create an optical breakdown and thereby a tissue dissection through photodisruption. The threshold irradiance for the optical breakdown depends on the photon density in the focal volume, which can be influenced by the pulse energy, the size of the irradiated area (focus), and the irradiation time. For an application in the posterior eye segment, the aberrations of the anterior eye elements cause a distortion of the wavefront and therefore an increased focal volume, which reduces the photon density and thus raises the energy required to surpass the threshold irradiance. The influence of adaptive optics on lowering the pulse energy required for photodisruption by refining a distorted focus was investigated. A reduction of the threshold energy can be shown when using adaptive optics. The spatial confinement with adaptive optics furthermore raises the irradiance at constant pulse energy. The lowered threshold energy allows for tissue dissection with reduced peripheral damage. This offers the possibility of moving femtosecond laser surgery from corneal or lental applications in the anterior eye to vitreal or retinal applications in the posterior eye.

  13. Vibratory Adaptation of Cutaneous Mechanoreceptive Afferents

    PubMed Central

    Bensmaïa, S. J.; Leung, Y. Y.; Hsiao, S. S.; Johnson, K. O.

    2007-01-01

    The objective of this study was to investigate the effects of extended suprathreshold vibratory stimulation on the sensitivity of slowly adapting type 1 (SA1), rapidly adapting (RA), and Pacinian (PC) afferents. To that end, an algorithm was developed to track afferent absolute (I0) and entrainment (I1) thresholds as they change over time. We recorded afferent responses to periliminal vibratory test stimuli, which were interleaved with intense vibratory conditioning stimuli during the adaptation period of each experimental run. From these measurements, the algorithm allowed us to infer changes in the afferents' sensitivity. We investigated the stimulus parameters that affect adaptation by assessing the degree to which adaptation depends on the amplitude and frequency of the adapting stimulus. For all three afferent types, I0 and I1 increased with increasing adaptation frequency and amplitude. The degree of adaptation seems to be independent of the firing rate evoked in the afferent by the conditioning stimulus. In the analysis, we distinguished between additive adaptation (in which I0 and I1 shift equally) and multiplicative effects (in which the ratio I1/I0 remains constant). RA threshold shifts are almost perfectly additive. SA1 threshold shifts are close to additive and far from multiplicative (I1 threshold shifts are twice the I0 shifts). PC shifts are more difficult to classify. We used an integrate-and-fire model to study the possible neural mechanisms. A change in transducer gain predicts a multiplicative change in I0 and I1 and is thus ruled out as a mechanism underlying SA1 and RA adaptation. A change in the resting action potential threshold predicts an equal, additive change in I0 and I1 and thus accounts well for RA adaptation. A change in the degree of refractoriness during the relative refractory period predicts an additional change in I1 such as that observed for SA1 fibers. We infer that adaptation is caused by an increase in spiking thresholds

  14. Adaptive optics image restoration algorithm based on wavefront reconstruction and adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen

    2016-11-01

    To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method in this paper. First, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop iterative solutions for AO image restoration, addressing the joint deconvolution issue. Image restoration experiments were performed to verify the restoration effect of the proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) of our algorithm are increased by 36.92% and 27.44%, respectively, the computation time is decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.

  15. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among the various wavefront control algorithms for adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes by pre-measuring the relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the numbers of sub-apertures in the wavefront sensor and of deformable mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iterative arithmetic, which gives great advantages in computation and storage. For an AO system with thousands of actuators, the computational complexity is about $O(n^2) \sim O(n^3)$ for the direct gradient wavefront control algorithm, while it is about $O(n) \sim O(n^{3/2})$ for the iterative wavefront control algorithm, where $n$ is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).

  16. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem, whose minimum frequency of reconstruction hinges on the size of the array and whose maximum frequency depends on the spacing distance between the microphones. For the sake of enlarging the frequency range of reconstruction and reducing the cost of the acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed and the only assumption made is that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adapts to practical scenarios of acoustical measurement, benefiting from the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulation and experimental setups and is next illustrated with an industrial case.

  17. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them suffer from parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed based on the assumption that point clouds can be seen as a mixture of Gaussian models. The separation of ground points and non-ground points from point clouds can then be recast as the separation of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation: EM is used to calculate maximum likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, points can be labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results show that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48% total error, which is much lower than that of most of the eight classical filtering algorithms reported by the ISPRS.
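
    A minimal sketch of the EM step described above, fitting a two-component 1-D Gaussian mixture to per-point heights and labelling each point by the more likely component, is given below; using height (or a height residual) as the sole feature is an assumed simplification of the paper's model.

        import numpy as np

        def em_ground_filter(h, n_iter=50):
            """Two-component 1-D Gaussian-mixture EM on point heights.

            h: 1-D array of per-point heights. Returns a boolean mask
            that is True for points assigned to the lower (ground-like)
            component, plus the fitted mixture parameters.
            """
            h = np.asarray(h, float)
            mu = np.array([np.quantile(h, 0.25), np.quantile(h, 0.75)])
            sig = np.array([h.std(), h.std()]) + 1e-6
            w = np.array([0.5, 0.5])
            for _ in range(n_iter):
                # E-step: responsibilities of each component per point
                pdf = (w / (sig * np.sqrt(2 * np.pi)) *
                       np.exp(-0.5 * ((h[:, None] - mu) / sig) ** 2))
                r = pdf / pdf.sum(axis=1, keepdims=True)
                # M-step: maximum-likelihood parameter updates
                n = r.sum(axis=0)
                w = n / len(h)
                mu = (r * h[:, None]).sum(axis=0) / n
                sig = np.sqrt((r * (h[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
            # final labelling with the fitted parameters
            pdf = (w / (sig * np.sqrt(2 * np.pi)) *
                   np.exp(-0.5 * ((h[:, None] - mu) / sig) ** 2))
            ground = pdf.argmax(axis=1) == int(np.argmin(mu))
            return ground, mu, sig, w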

  18. An adaptive grid algorithm for one-dimensional nonlinear equations

    NASA Technical Reports Server (NTRS)

    Gutierrez, William E.; Hills, Richard G.

    1990-01-01

    Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements for solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems is studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusion and convection-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and

  19. An Adaptive Immune Genetic Algorithm for Edge Detection

    NASA Astrophysics Data System (ADS)

    Li, Ying; Bai, Bendu; Zhang, Yanning

    An adaptive immune genetic algorithm (AIGA) based on a cost minimization technique for edge detection is proposed. The proposed AIGA recommends the use of adaptive probabilities of crossover, mutation and immune operation, and a geometric annealing schedule in the immune operator, to realize the twin goals of maintaining diversity in the population and sustaining a fast convergence rate in solving complex problems such as edge detection. Furthermore, AIGA can effectively exploit prior knowledge and information about the local edge structure in the edge image to make vaccines, which results in much better local search ability than that of the canonical genetic algorithm. Experimental results on gray-scale images show that the proposed algorithm performs well in terms of quality of the final edge image, rate of convergence, and robustness to noise.

  20. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.

  1. Accelerated Path-following Iterative Shrinkage Thresholding Algorithm with Application to Semiparametric Graph Estimation

    PubMed Central

    Zhao, Tuo; Liu, Han

    2016-01-01

    We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, i.e., APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results which do not exist in the existing literature. Thorough numerical results are provided to back up our theory. PMID:28133430

  2. Fast algorithm of adaptive Fourier series

    NASA Astrophysics Data System (ADS)

    Gao, You; Ku, Min; Qian, Tao

    2018-05-01

    Adaptive Fourier decomposition (AFD, precisely 1-D AFD or Core-AFD) was originated for the goal of positive frequency representations of signals. It achieved the goal and at the same time offered fast decompositions of signals. There then arose several types of AFDs. AFD merged with the greedy algorithm idea, and in particular, motivated the so-called pre-orthogonal greedy algorithm (Pre-OGA) that was proven to be the most efficient greedy algorithm. The cost of the advantages of the AFD type decompositions is, however, the high computational complexity due to the involvement of maximal selections of the dictionary parameters. The present paper offers one formulation of the 1-D AFD algorithm by building the FFT algorithm into it. Accordingly, the algorithm complexity is reduced, from the original $\\mathcal{O}(M N^2)$ to $\\mathcal{O}(M N\\log_2 N)$, where $N$ denotes the number of the discretization points on the unit circle and $M$ denotes the number of points in $[0,1)$. This greatly enhances the applicability of AFD. Experiments are carried out to show the high efficiency of the proposed algorithm.

  3. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation

    PubMed Central

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-01-01

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation, were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361

  4. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation.

    PubMed

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-12-19

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation, were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms.

  5. Adaptive convergence nonuniformity correction algorithm.

    PubMed

    Qian, Weixian; Chen, Qian; Bai, Junqi; Gu, Guohua

    2011-01-01

    Nowadays, convergence and ghosting artifacts are common problems in scene-based nonuniformity correction (NUC) algorithms. In this study, we introduce the idea of spatial frequency to scene-based NUC. We then present a convergence speed factor, which can adaptively change the convergence speed with changes in the scene dynamic range. In effect, the role of the convergence speed factor is to decrease the standard deviation of the statistical data. The spatial-correlation characteristic of the nonuniformity was summarized from extensive experimental statistics and was used to correct the convergence speed factor, making it more stable. Finally, real and simulated infrared image sequences were used to demonstrate the positive effect of our algorithm.

  6. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

    G/SPLINES are a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.

  7. Flight data processing with the F-8 adaptive algorithm

    NASA Technical Reports Server (NTRS)

    Hartmann, G.; Stein, G.; Petersen, K.

    1977-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, and surface position) are telemetered to a ground computer, which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.

  8. Adaptive Trajectory Prediction Algorithm for Climbing Flights

    NASA Technical Reports Server (NTRS)

    Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz

    2012-01-01

    Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.

  9. Adaptive firefly algorithm: parameter analysis and its application.

    PubMed

    Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin

    2014-01-01

    As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate the parameter selection and adaptation strategies in a modified firefly algorithm - the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa, including (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated over widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to the real-world problem of protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native constraints under noise-free conditions and with 10% Gaussian white noise, respectively.

  10. Adaptive Firefly Algorithm: Parameter Analysis and its Application

    PubMed Central

    Shen, Hong-Bin

    2014-01-01

    As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate the parameter selection and adaptation strategies in a modified firefly algorithm — adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa, including (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated over widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to the real-world problem of protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native constraints under noise-free conditions and with 10% Gaussian white noise, respectively. PMID:25397812

  11. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    NASA Astrophysics Data System (ADS)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

    Growing sales competition between companies in Indonesia means that every company needs proper planning in order to win the competition with other companies. One of the things that can be done to design such a plan is to forecast car sales for the next few periods, so that the inventory of cars to be sold is proportional to the number of cars needed. One of the methods that can be used to obtain a correct forecast is the Adaptive Spline Threshold Autoregression (ASTAR) method. Therefore, this discussion focuses on the use of the ASTAR method in forecasting the volume of car sales at PT Srikandi Diamond Motors using time series data. In this research, forecasting with the ASTAR method produces approximately correct results.

  12. An adaptive threshold detector and channel parameter estimator for deep space optical communications

    NASA Technical Reports Server (NTRS)

    Arabshahi, P.; Mukai, R.; Yan, T. -Y.

    2001-01-01

    This paper presents a method for the optimal adaptive setting of pulse-position-modulation detection thresholds, which minimizes the total probability of error for the dynamically fading optical free-space channel.
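
    The abstract is brief, but the underlying computation, placing the detection threshold where the weighted "no pulse" and "pulse" likelihoods cross so that the total error probability is minimized, can be sketched under a Gaussian approximation of both photocount distributions (an illustrative assumption, not the paper's channel model):

        import numpy as np

        def min_error_threshold(mu0, s0, mu1, s1, p0=0.5):
            """Threshold minimizing total error for two Gaussian classes.

            H0 ~ N(mu0, s0^2) models 'no pulse', H1 ~ N(mu1, s1^2) models
            'pulse'; p0 is the prior of H0. Solves p0*f0(x) = p1*f1(x),
            which reduces to a quadratic in x.
            """
            p1 = 1.0 - p0
            a = 1.0 / (2 * s0**2) - 1.0 / (2 * s1**2)
            b = mu1 / s1**2 - mu0 / s0**2
            c = (mu0**2 / (2 * s0**2) - mu1**2 / (2 * s1**2)
                 + np.log((p1 * s0) / (p0 * s1)))
            if abs(a) < 1e-12:               # equal variances: linear case
                return -c / b
            roots = np.roots([a, b, c])
            roots = roots[np.isreal(roots)].real
            between = roots[(roots > min(mu0, mu1)) & (roots < max(mu0, mu1))]
            return between[0] if between.size else roots[0]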

  13. Developing Bayesian adaptive methods for estimating sensitivity thresholds (d′) in Yes-No and forced-choice tasks

    PubMed Central

    Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.

    2015-01-01

    Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes fewer) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
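
    For context, the SDT sensitivity index these methods target is computed from hit and false-alarm rates as d′ = z(H) − z(F); a minimal helper (with a common rate correction that is a convention, not part of the papers' Bayesian procedure) is:

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """d' = z(hit rate) - z(false-alarm rate), with a log-linear
            correction keeping both rates strictly inside (0, 1)."""
            h = (hits + 0.5) / (hits + misses + 1.0)
            f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(h) - norm.ppf(f)

        # e.g. d_prime(40, 10, 15, 35) -> about 1.33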

  14. Prediction of cardiovascular risk in rheumatoid arthritis: performance of original and adapted SCORE algorithms.

    PubMed

    Arts, E E A; Popa, C D; Den Broeder, A A; Donders, R; Sandoo, A; Toms, T; Rollefstad, S; Ikdahl, E; Semb, A G; Kitas, G D; Van Riel, P L C M; Fransen, J

    2016-04-01

    The predictive performance of cardiovascular disease (CVD) risk calculators appears suboptimal in rheumatoid arthritis (RA). A disease-specific CVD risk algorithm may improve CVD risk prediction in RA. The objectives of this study are to adapt the Systematic COronary Risk Evaluation (SCORE) algorithm with determinants of CVD risk in RA and to assess the accuracy of CVD risk prediction calculated with the adapted SCORE algorithm. Data from the Nijmegen early RA inception cohort were used. The primary outcome was first CVD events. The SCORE algorithm was recalibrated by reweighting the included traditional CVD risk factors and adapted by adding other potential predictors of CVD. The predictive performance of the recalibrated and adapted SCORE algorithms was assessed, and the adapted SCORE was externally validated. Of the 1016 included patients with RA, 103 patients experienced a CVD event. Discriminatory ability was comparable across the original, recalibrated and adapted SCORE algorithms. The Hosmer-Lemeshow test results indicated that all three algorithms provided poor model fit (p<0.05) for the Nijmegen and external validation cohorts. The adapted SCORE algorithm mainly improves CVD risk estimation in non-event cases and does not show a clear advantage in reclassifying patients with RA who develop CVD (event cases) into more appropriate risk groups. This study demonstrates for the first time that adaptations of the SCORE algorithm do not provide sufficient improvement in risk prediction of future CVD in RA to serve as an appropriate alternative to the original SCORE. Risk assessment using the original SCORE algorithm may underestimate CVD risk in patients with RA.

  15. Olfactory Detection Thresholds and Adaptation in Adults with Autism Spectrum Condition

    ERIC Educational Resources Information Center

    Tavassoli, T.; Baron-Cohen, S.

    2012-01-01

    Sensory issues have been widely reported in Autism Spectrum Conditions (ASC). Since olfaction is one of the least investigated senses in ASC, the current studies explore olfactory detection thresholds and adaptation to olfactory stimuli in adults with ASC. 80 participants took part, 38 (18 females, 20 males) with ASC and 42 control participants…

  16. Cross counter-based adaptive assembly scheme in optical burst switching networks

    NASA Astrophysics Data System (ADS)

    Zhu, Zhi-jun; Dong, Wen; Le, Zi-chun; Chen, Wan-jun; Sun, Xingshu

    2009-11-01

    A novel adaptive assembly algorithm called Cross-counter Balance Adaptive Assembly Period (CBAAP) is proposed in this paper. The major difference between CBAAP and other adaptive assembly algorithms is that the threshold of CBAAP can be dynamically adjusted according to the cross counter and the step length value. In the simulation section, we compare the performance of CBAAP with that of three typical algorithms, FAP (Fixed Assembly Period), FBL (Fixed Burst Length) and MBMAP (Min-Burst-length-Max-Assembly-Period), in terms of assembly period and burst loss probability. The simulation results demonstrate the effectiveness of our algorithm.

  17. Evaluation of thresholding techniques for segmenting scaffold images in tissue engineering

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Srinivasan; Yaszemski, Michael J.; Robb, Richard A.

    2004-05-01

    Tissue engineering attempts to address the ever-widening gap between the demand and supply of organ and tissue transplants using natural and biomimetic scaffolds. The regeneration of specific tissues aided by synthetic materials is dependent on the structural and morphometric properties of the scaffold. These properties can be derived non-destructively using quantitative analysis of high resolution microCT scans of scaffolds. Thresholding of the scanned images into polymeric and porous phases is central to the outcome of the subsequent structural and morphometric analysis. Visual thresholding of scaffolds produced using stochastic processes is inaccurate. Depending on the algorithmic assumptions made, automatic thresholding might also be inaccurate. Hence there is a need to analyze the performance of different techniques and propose alternate ones, if needed. This paper provides a quantitative comparison of different thresholding techniques for segmenting scaffold images. The thresholding algorithms examined include those that exploit spatial information, locally adaptive characteristics, histogram entropy information, histogram shape information, and clustering of gray-level information. The performance of the different techniques was evaluated using established criteria, including misclassification error, edge mismatch, relative foreground error, and region non-uniformity. Algorithms that exploit local image characteristics seem to perform much better than those using global information.
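
    Two of the criteria named above have standard definitions in the thresholding-evaluation literature (e.g., the Sezgin and Sankur survey); a sketch for binary masks, where True marks the foreground (polymer) phase, is:

        import numpy as np

        def misclassification_error(gt, seg):
            """ME = 1 - (|B_gt & B_seg| + |F_gt & F_seg|) / |image|."""
            gt, seg = np.asarray(gt, bool), np.asarray(seg, bool)
            agree = np.sum(~gt & ~seg) + np.sum(gt & seg)
            return 1.0 - agree / gt.size

        def relative_foreground_area_error(gt, seg):
            """RAE compares foreground areas of truth and segmentation."""
            a_gt, a_seg = int(np.sum(gt)), int(np.sum(seg))
            if a_gt == 0:
                return 0.0 if a_seg == 0 else 1.0
            return ((a_gt - a_seg) / a_gt if a_seg < a_gt
                    else (a_seg - a_gt) / a_seg)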

  18. [A cloud detection algorithm for MODIS images combining Kmeans clustering and multi-spectral threshold method].

    PubMed

    Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei

    2011-04-01

    An improved cloud detection method combining Kmeans clustering and a multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are initially categorized into two major classes by the Kmeans method. The first class includes clouds, smoke and snow, and the second class includes vegetation, water and land. Then a multi-spectral threshold detection is applied to the first class to eliminate interference such as smoke and snow. The method was tested with MODIS data at different times under different underlying surface conditions. Visual inspection of the results shows that the algorithm can effectively detect small areas of cloud pixels and exclude the interference of the underlying surface, which provides a good foundation for a subsequent fire detection approach.
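
    A compact sketch of this two-stage idea follows; the band names, the NDSI snow test, and both threshold values are placeholders for illustration, not the paper's choices.

        import numpy as np
        from scipy.cluster.vq import kmeans2

        def cloud_mask(bands, ndsi_max=0.4, bt11_max=270.0):
            """K-means split followed by multi-spectral tests (sketch).

            bands: dict of same-shape 2-D arrays with placeholder keys
            'vis', 'nir', 'swir' (reflectances) and 'bt11' (brightness
            temperature, K); both thresholds are illustrative.
            """
            keys = ['vis', 'nir', 'swir', 'bt11']
            h, w = bands['vis'].shape
            feats = np.stack([bands[k].ravel() for k in keys], axis=1)
            _, labels = kmeans2(feats.astype(float), 2, minit='++', seed=0)
            # the brighter-visible cluster plays the cloud/smoke/snow class
            bright = int(np.argmax([feats[labels == k, 0].mean()
                                    for k in (0, 1)]))
            cloudlike = (labels == bright).reshape(h, w)
            # spectral tests: reject snow (high NDSI) and warm surfaces
            ndsi = ((bands['vis'] - bands['swir']) /
                    (bands['vis'] + bands['swir'] + 1e-9))
            return cloudlike & (ndsi < ndsi_max) & (bands['bt11'] < bt11_max)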

  19. Adaptive phase k-means algorithm for waveform classification

    NASA Astrophysics Data System (ADS)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin

    2018-01-01

    Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phase as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates certain waveform phase variations and is a good tool for seismic facies analysis.

  20. Positive-negative corresponding normalized ghost imaging based on an adaptive threshold

    NASA Astrophysics Data System (ADS)

    Li, G. L.; Zhao, Y.; Yang, Z. H.; Liu, X.

    2016-11-01

    Ghost imaging (GI) technology has attracted increasing attention as a new imaging technique in recent years. However, the signal-to-noise ratio (SNR) of GI with pseudo-thermal light needs to be improved before it meets engineering application demands. We therefore propose a new scheme called positive-negative correspondence normalized GI based on an adaptive threshold (PCNGI-AT) to achieve good performance with a smaller amount of data. In this work, we exploit the advantages of both normalized GI (NGI) and positive-negative correspondence GI (P-NCGI). The correctness and feasibility of the scheme were proved in theory before we designed an adaptive threshold selection method, in which the parameter of the object signal selection condition is replaced by the normalized value. The simulation and experimental results reveal that the SNR of the proposed scheme is better than that of time-correspondence differential GI (TCDGI), while avoiding the calculation of the correlation matrix and reducing the amount of data used. The proposed method will make GI far more practical in engineering applications.

  1. Fully implicit adaptive mesh refinement MHD algorithm

    NASA Astrophysics Data System (ADS)

    Philip, Bobby

    2005-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations will be presented on a variety of problems.

  2. Shape anomaly detection under strong measurement noise: An analytical approach to adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.

    2015-10-01

    We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
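
    The paper's threshold comes from a closed-form score distribution; the same adaptive behavior can be approximated numerically, as in the sketch below, by simulating cosine similarities between the reference shape and noisy copies of it at the estimated noise level and thresholding at a low quantile of that null distribution.

        import numpy as np

        def adaptive_cosine_threshold(reference, sigma, alpha=0.01,
                                      n_sim=10000, seed=None):
            """Monte Carlo stand-in for the analytical threshold:
            the alpha-quantile of the cosine similarity between the
            reference shape and white-noise-corrupted copies of it."""
            rng = np.random.default_rng(seed)
            r = np.asarray(reference, float)
            r_unit = r / np.linalg.norm(r)
            noisy = r + rng.normal(0.0, sigma, size=(n_sim, r.size))
            sims = (noisy @ r_unit) / np.linalg.norm(noisy, axis=1)
            return np.quantile(sims, alpha)  # scores below => anomaly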

  3. Surgical wound segmentation based on adaptive threshold edge detection and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shih, Hsueh-Fu; Ho, Te-Wei; Hsu, Jui-Tse; Chang, Chun-Che; Lai, Feipei; Wu, Jin-Ming

    2017-02-01

    Postsurgical wound care has a great impact on patients' prognosis. It often takes a few days, or even a few weeks, for the wound to stabilize, which incurs a great cost in health care and nursing resources. To assess the wound condition for diagnosis, it is important to segment out the wound region for further analysis. However, such images often contain a complicated background and noise. In this study, we propose a wound segmentation algorithm based on the Canny edge detector and a genetic algorithm with an unsupervised evaluation function. The results were evaluated on 112 clinical images, and 94.3% of the images were correctly segmented. The judgment was based on the evaluation of experienced medical doctors. This capability to extract complete wound regions makes it possible to conduct further image analysis, such as intelligent recovery evaluation and automatic infection assessment.

  4. A Hybrid Adaptive Routing Algorithm for Event-Driven Wireless Sensor Networks

    PubMed Central

    Figueiredo, Carlos M. S.; Nakamura, Eduardo F.; Loureiro, Antonio A. F.

    2009-01-01

    Routing is a basic function in wireless sensor networks (WSNs). For these networks, routing algorithms depend on the characteristics of the applications and, consequently, there is no self-contained algorithm suitable for every case. In some scenarios, the network behavior (traffic load) may vary a lot, such as an event-driven application, favoring different algorithms at different instants. This work presents a hybrid and adaptive algorithm for routing in WSNs, called Multi-MAF, that adapts its behavior autonomously in response to the variation of network conditions. In particular, the proposed algorithm applies both reactive and proactive strategies for routing infrastructure creation, and uses an event-detection estimation model to change between the strategies and save energy. To show the advantages of the proposed approach, it is evaluated through simulations. Comparisons with independent reactive and proactive algorithms show improvements on energy consumption. PMID:22423207

  5. A hybrid adaptive routing algorithm for event-driven wireless sensor networks.

    PubMed

    Figueiredo, Carlos M S; Nakamura, Eduardo F; Loureiro, Antonio A F

    2009-01-01

    Routing is a basic function in wireless sensor networks (WSNs). For these networks, routing algorithms depend on the characteristics of the applications and, consequently, there is no self-contained algorithm suitable for every case. In some scenarios, the network behavior (traffic load) may vary a lot, such as an event-driven application, favoring different algorithms at different instants. This work presents a hybrid and adaptive algorithm for routing in WSNs, called Multi-MAF, that adapts its behavior autonomously in response to the variation of network conditions. In particular, the proposed algorithm applies both reactive and proactive strategies for routing infrastructure creation, and uses an event-detection estimation model to change between the strategies and save energy. To show the advantages of the proposed approach, it is evaluated through simulations. Comparisons with independent reactive and proactive algorithms show improvements on energy consumption.

  6. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GA's.

  7. A chaos wolf optimization algorithm with self-adaptive variable step-size

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of a wolf pack, fully simulating the predation behavior and prey-distribution strategy of wolves. It possesses three intelligent behaviors: migration, summons, and siege. The "winner-take-all" competition rule and the "survival of the fittest" update mechanism are also characteristics of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA was applied to parameter optimization of twelve typical and complex nonlinear functions, and the obtained results were compared with many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm and the leader wolf pack search algorithm. The investigation results indicate that CWOA possesses preferable optimization ability, with advantages in optimization accuracy and convergence rate. Furthermore, it demonstrates high robustness and global searching ability.

  8. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

    The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can significantly distinguish targets from the background using different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency portion of the signal; for the low-frequency portion, the usual weighted average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength over a 3x3 window is calculated, taking the regional signal intensity ratio of the source images as a matching measure. The extraction method and decision mode for the details are determined by the decision-making module. The fusion effect is closely related to the threshold setting of the decision-making module. Instead of the commonly used empirical approach, a quadratic interpolation optimization algorithm is proposed in this paper to obtain the threshold: set the endpoints and midpoint of the threshold search interval as initial interpolation nodes, compute the minimum of the quadratic interpolation function, and obtain the best threshold by comparing these minima. A series of image quality evaluations shows that this method improves the fusion effect; moreover, it is effective not only for individual images but also for large numbers of images.
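
    A single-level sketch of the fusion rule described above, using PyWavelets, averages the low-frequency band and selects or blends high-frequency coefficients by 3x3 regional signal strength; the fixed match threshold here stands in for the quadratic-interpolation search, and the wavelet choice is arbitrary.

        import numpy as np
        import pywt
        from scipy.ndimage import uniform_filter

        def fuse_images(img_a, img_b, wavelet='db2', match_thresh=0.6):
            """Wavelet fusion sketch: average lows, pick/blend highs."""
            ca, (cha, cva, cda) = pywt.dwt2(img_a, wavelet)
            cb, (chb, cvb, cdb) = pywt.dwt2(img_b, wavelet)
            fused_low = 0.5 * (ca + cb)            # weighted-average rule
            fused_high = []
            for da, db in ((cha, chb), (cva, cvb), (cda, cdb)):
                ea = uniform_filter(da * da, size=3)  # 3x3 signal strength
                eb = uniform_filter(db * db, size=3)
                match = np.minimum(ea, eb) / (np.maximum(ea, eb) + 1e-12)
                pick = np.where(ea >= eb, da, db)     # dissimilar: select
                blend = 0.5 * (da + db)               # similar: average
                fused_high.append(np.where(match < match_thresh, pick, blend))
            return pywt.idwt2((fused_low, tuple(fused_high)), wavelet)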

  9. Improvement and implementation for Canny edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel-intensity similarity is used to smooth the image instead of a Gaussian filter, preserving edge features while removing noise effectively. To reduce the sensitivity of the gradient computation to noise, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively determines the dual thresholds. The algorithm was implemented with the OpenCV 2.4.0 library under Visual Studio 2010; experimental analysis shows that the improved algorithm detects edge details more effectively and with greater adaptability.
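
    A minimal OpenCV sketch of the dual-threshold idea (using the stock bilateral filter rather than the paper's modified one): Otsu's method supplies the high threshold and the low threshold is taken as half of it, a common convention assumed here.

      import cv2

      def otsu_canny(gray):
          """Canny edge detection with dual thresholds derived from Otsu's method.

          cv2.threshold with THRESH_OTSU returns the global Otsu value, used
          here as the high threshold (low = half of high).
          """
          smoothed = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
          high, _ = cv2.threshold(smoothed, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          low = 0.5 * high
          return cv2.Canny(smoothed, low, high)

      # Usage: edges = otsu_canny(cv2.imread("input.png", cv2.IMREAD_GRAYSCALE))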

  10. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  11. Microscopy mineral image enhancement based on improved adaptive threshold in nonsubsampled shearlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Liangliang; Si, Yujuan; Jia, Zhenhong

    2018-03-01

    In this paper, a novel microscopy mineral image enhancement method based on adaptive threshold in the non-subsampled shearlet transform (NSST) domain is proposed. First, the image is decomposed into one low-frequency sub-band and several high-frequency sub-bands. Second, gamma correction is applied to the low-frequency sub-band coefficients, and the improved adaptive threshold is adopted to suppress noise in the high-frequency sub-band coefficients. Third, the processed coefficients are reconstructed with the inverse NSST. Finally, an unsharp filter is used to enhance the details of the reconstructed image. Experimental results on various microscopy mineral images demonstrate that the proposed approach achieves a better enhancement effect in terms of both objective and subjective metrics.
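
    No widely available Python NSST package exists, so the sketch below substitutes a separable wavelet transform (PyWavelets) to illustrate the same pipeline: gamma correction of the approximation band, adaptive soft-thresholding of the detail bands, and reconstruction; the final unsharp-mask step is omitted. The robust noise estimate and threshold rule are common defaults, not the paper's.

      import numpy as np
      import pywt

      def enhance(img, gamma=0.7, k=3.0, wavelet="db4", levels=3):
          """Wavelet-domain analogue of the NSST pipeline described above."""
          img = img.astype(np.float64) / 255.0
          coeffs = pywt.wavedec2(img, wavelet, level=levels)
          approx, details = coeffs[0], coeffs[1:]
          approx = np.sign(approx) * np.abs(approx) ** gamma   # gamma correction
          out = [approx]
          for bands in details:
              new_bands = []
              for b in bands:
                  sigma = np.median(np.abs(b)) / 0.6745        # robust noise estimate
                  t = k * sigma                                # adaptive threshold
                  new_bands.append(np.sign(b) * np.maximum(np.abs(b) - t, 0.0))
              out.append(tuple(new_bands))
          rec = pywt.waverec2(out, wavelet)
          return np.clip(rec, 0.0, 1.0)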

  12. Adaptive reference update (ARU) algorithm. A stochastic search algorithm for efficient optimization of multi-drug cocktails

    PubMed Central

    2012-01-01

    Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742
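
    The paper's precise update and reference-adaptation rules are more elaborate, but the skeleton of the loop can be sketched as follows; the response function, level grid, and move rule here are all illustrative assumptions.

      import random

      def aru_optimize(response, n_drugs, levels=11, iters=200, seed=0):
          """Structural sketch of the adaptive reference update (ARU) loop.

          Each drug concentration is an integer level in [0, levels-1]. At
          each step one drug is perturbed; the move is kept if the measured
          response beats that of a reference combination, and the reference
          is adapted using the responses observed so far.
          """
          rng = random.Random(seed)
          current = [rng.randrange(levels) for _ in range(n_drugs)]
          reference = list(current)
          best, best_r = list(current), response(current)
          for _ in range(iters):
              i = rng.randrange(n_drugs)
              trial = list(current)
              trial[i] = min(levels - 1, max(0, trial[i] + rng.choice((-1, 1))))
              if response(trial) > response(reference):
                  current = trial                  # beneficial direction kept
              r = response(current)
              if r > best_r:
                  best, best_r = list(current), r
              reference = best                     # adapt reference to history
          return best, best_r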

  13. Enabling the extended compact genetic algorithm for real-parameter optimization by using adaptive discretization.

    PubMed

    Chen, Ying-ping; Chen, Chao-Hong

    2010-01-01

    An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back end optimization engine. As a result, the proposed framework can be considered as a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions on which ECGA with SoD and ECGA with two well-known discretization methods: the fixed-height histogram (FHH) and the fixed-width histogram (FWH) are compared; (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence.
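
    A minimal sketch of the split operation under stated assumptions: intervals are split at a random position while they contain more than gamma search points, and the resulting intervals receive integer codes (the per-iteration shrinking of gamma and the restriction to nonempty intervals are omitted for brevity).

      import random

      def split_on_demand(points, lo, hi, gamma, rng):
          """Recursively split [lo, hi] while it holds more than gamma points;
          return the resulting interval boundaries."""
          inside = [p for p in points if lo <= p < hi]
          if len(inside) <= gamma or hi - lo < 1e-9:
              return [lo, hi]
          cut = rng.uniform(lo, hi)
          left = split_on_demand(inside, lo, cut, gamma, rng)
          right = split_on_demand(inside, cut, hi, gamma, rng)
          return left[:-1] + right   # merge, dropping the duplicated boundary

      def discretize(points, bounds):
          """Assign each point the integer code of its interval."""
          codes = []
          for p in points:
              for i in range(len(bounds) - 1):
                  if bounds[i] <= p < bounds[i + 1] or \
                     (i == len(bounds) - 2 and p == bounds[-1]):
                      codes.append(i)
                      break
          return codes

      rng = random.Random(1)
      pts = [rng.random() for _ in range(30)]
      bounds = split_on_demand(pts, 0.0, 1.0, gamma=5, rng=rng)
      codes = discretize(pts, bounds)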

  14. Detecting an atomic clock frequency anomaly using an adaptive Kalman filter algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huijie; Dong, Shaowu; Wu, Wenjun; Jiang, Meng; Wang, Weixiong

    2018-06-01

    The abnormal frequencies of an atomic clock mainly include frequency jumps and frequency-drift jumps. Atomic clock frequency anomaly detection is a key technique in time-keeping. The Kalman filter algorithm, as a linear optimal algorithm, has been widely used in real-time detection of abnormal frequency. To obtain an optimal state estimate, the observation model and dynamic model of the Kalman filter must satisfy Gaussian white-noise conditions, and detection performance is degraded if anomalies affect either model. The adaptive Kalman filter algorithm, applied here to clock frequency anomaly detection, uses the residuals of the prediction to build an adaptive factor by which the predicted state covariance matrix is corrected in real time. The results show that the model error is reduced and the detection performance is improved. The effectiveness of the algorithm is verified on simulated frequency jumps, simulated frequency-drift jumps, and measured atomic clock data using the chi-square test.
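
    A one-dimensional sketch of the scheme, assuming a random-walk frequency model and a chi-square test on the normalized innovation; the exact construction of the adaptive factor in the paper may differ.

      import numpy as np

      def adaptive_kalman(z, q=1e-6, r=1e-4, chi_thresh=3.84):
          """1-D adaptive Kalman filter sketch for clock-frequency monitoring.

          The squared, normalized innovation is compared with a chi-square
          bound (1 dof, 95%); when exceeded, an adaptive factor inflates the
          predicted state covariance and the sample is flagged as anomalous.
          """
          x, p = z[0], r
          flags, xs = [], []
          for zk in z[1:]:
              xp, pp = x, p + q                 # prediction
              innov = zk - xp
              s = pp + r                        # innovation variance
              stat = innov ** 2 / s
              if stat > chi_thresh:             # anomaly: inflate covariance
                  pp *= stat / chi_thresh       # adaptive factor
                  s = pp + r
              k = pp / s
              x = xp + k * innov                # update
              p = (1.0 - k) * pp
              flags.append(stat > chi_thresh)
              xs.append(x)
          return np.array(xs), np.array(flags)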

  15. Fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1986-01-01

    A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithms are their conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time. This would be required for batch processing techniques, such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real time confidence measure as to the accuracy of the estimator.

  16. Smart algorithms and adaptive methods in computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Tinsley Oden, J.

    1989-05-01

    A review is presented of the use of smart algorithms which employ adaptive methods in processing large amounts of data in computational fluid dynamics (CFD). Smart algorithms use a rationally based set of criteria for automatic decision making in an attempt to produce optimal simulations of complex fluid dynamics problems. The information needed to make these decisions is not known beforehand and evolves in structure and form during the numerical solution of flow problems. Once the code makes a decision based on the available data, the structure of the data may change, and criteria may be reapplied in order to direct the analysis toward an acceptable end. Intelligent decisions are made by processing vast amounts of data that evolve unpredictably during the calculation. The basic components of adaptive methods and their application to complex problems of fluid dynamics are reviewed. The basic components of adaptive methods are: (1) data structures, that is what approaches are available for modifying data structures of an approximation so as to reduce errors; (2) error estimation, that is what techniques exist for estimating error evolution in a CFD calculation; and (3) solvers, what algorithms are available which can function in changing meshes. Numerical examples which demonstrate the viability of these approaches are presented.

  17. Threshold-selecting strategy for best possible ground state detection with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Lässig, Jörg; Hoffmann, Karl Heinz

    2009-04-01

    Genetic algorithms are a standard heuristic for finding states of low energy in complex state spaces, as given by physical systems such as spin glasses, and also in combinatorial optimization. The paper considers the problem of selecting individuals in the current population of a genetic algorithm for crossover. Many schemes have been considered in the literature as possible crossover selection strategies. We show, for a large class of quality measures, that the best possible probability distribution for selecting individuals in each generation of the algorithm execution is a rectangular distribution over the individuals sorted by their energy values. This means uniform probabilities have to be assigned to a group of the individuals with lowest energy in the population, and probabilities equal to zero to individuals corresponding to energy values above a fixed cutoff, which is equal to a certain rank in the vector sorted by the energy of the states in the current population. The considered strategy is dubbed threshold selecting. The proof applies basic arguments of Markov chains and linear optimization and makes only a few assumptions on the underlying principles, and hence applies to a large class of algorithms.
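
    Threshold selecting itself is simple to state in code: uniform probability over the cutoff_rank lowest-energy individuals, zero elsewhere. A minimal sketch:

      import random

      def threshold_select(population, energies, cutoff_rank, rng=random):
          """Uniform choice among the cutoff_rank lowest-energy individuals;
          all other individuals get probability zero."""
          ranked = sorted(range(len(population)), key=lambda i: energies[i])
          chosen = rng.choice(ranked[:cutoff_rank])
          return population[chosen]

      # Usage: pick two crossover parents from the best 25% of the population.
      pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(40)]
      E = [sum(ind) for ind in pop]          # toy energy: number of ones
      k = max(1, len(pop) // 4)
      parents = (threshold_select(pop, E, k), threshold_select(pop, E, k))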

  18. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  19. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on a modern Graphics Processing Unit (GPU). The presented algorithm improves our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points must be found for each interpolated point to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
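
    The adaptive-power step can be sketched on the CPU with SciPy's kd-tree (the paper's GPU grid structure is not reproduced here); the linear mapping from local neighbor density to the power parameter is an illustrative assumption.

      import numpy as np
      from scipy.spatial import cKDTree

      def aidw_interpolate(xy, values, query, k=10, alpha_min=1.0, alpha_max=5.0):
          """CPU sketch of adaptive IDW: each query point's power parameter is
          set from the mean distance to its k nearest neighbors (sparse
          neighborhoods -> higher power), then standard IDW weighting is
          applied."""
          tree = cKDTree(xy)
          dists, idx = tree.query(query, k=k)
          mean_d = dists.mean(axis=1)
          # Rescale mean neighbor distance to [0, 1], map to a power value.
          t = (mean_d - mean_d.min()) / (mean_d.ptp() + 1e-12)
          alpha = alpha_min + t * (alpha_max - alpha_min)
          w = 1.0 / np.maximum(dists, 1e-12) ** alpha[:, None]
          return (w * values[idx]).sum(axis=1) / w.sum(axis=1)

      # Usage: est = aidw_interpolate(known_xy, known_z, grid_xy)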

  20. Multi-objective Optimization Design of Gear Reducer Based on Adaptive Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Li, Rui; Chang, Tian; Wang, Jianwei; Wei, Xiaopeng; Wang, Jinming

    2008-11-01

    An adaptive Genetic Algorithm (GA) is introduced to solve the multi-objective optimized design of the reducer. Firstly, according to the structure, strength, etc. in a reducer, a multi-objective optimized model of the helical gear reducer is established. And then an adaptive GA based on a fuzzy controller is introduced, aiming at the characteristics of multi-objective, multi-parameter, multi-constraint conditions. Finally, a numerical example is illustrated to show the advantages of this approach and the effectiveness of an adaptive genetic algorithm used in optimized design of a reducer.

  1. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, an intermediate view reconstruction (IVR) method using an adaptive disparity search algorithm (ADSA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm reduces the processing time of disparity estimation by selecting an adaptive disparity search range, and it also increases the quality of 3D imaging. That is, by adaptively predicting the mutual correlation between the images of a stereo pair, the bandwidth of the stereo input pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed from a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesis time of a reconstructed image to about 7.02 s.

  2. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  3. THRESHOLD LOGIC.

    DTIC Science & Technology

    synthesis procedures; a 'best' method is definitely established. (2) 'Symmetry Types for Threshold Logic' is a tutorial exposition including a careful...development of the Goto-Takahasi self-dual type ideas. (3) 'Best Threshold Gate Decisions' reports a comparison, on the 2470 7-argument threshold ...interpretation is shown best. (4) 'Threshold Gate Networks' reviews the previously discussed 2-algorithm in geometric terms, describes our FORTRAN

  4. Adaptive Neural Network Algorithm for Power Control in Nuclear Power Plants

    NASA Astrophysics Data System (ADS)

    Masri Husam Fayiz, Al

    2017-01-01

    The aim of this paper is to design, test, and evaluate a prototype of an adaptive neural network algorithm for the power control system of a nuclear power plant. Power control is one of the fundamental tasks in nuclear reactors, so research is constantly conducted to improve the reactor power control process. Currently, in the Department of Automation at the National Research Nuclear University (NRNU) MEPhI, numerous studies apply various artificial intelligence methodologies (expert systems, neural networks, fuzzy systems and genetic algorithms) to enhance the performance, safety, efficiency and reliability of nuclear power plants. In particular, an adaptive artificial-intelligence power regulator for the control systems of nuclear power reactors is being studied to enhance performance and to minimize the output error of the Automatic Power Controller (APC), using a multifunctional computer analyzer (simulator) of the Water-Water Energetic Reactor, known in Russian as Vodo-Vodyanoi Energetichesky Reaktor (VVER). In this paper, a block diagram of an adaptive reactor power controller is built on the basis of an intelligent control algorithm. By applying intelligent neural network principles, it is possible to improve the quality and dynamics of any control system in accordance with the principles of adaptive control; such a system adjusts the controller's parameters according to changes in the characteristics of the control object or external disturbances. This work demonstrates that a propitious option for an automatic power controller in nuclear power plants is a control system constructed on intelligent neural network algorithms.

  5. SIMULATION OF A REACTING POLLUTANT PUFF USING AN ADAPTIVE GRID ALGORITHM

    EPA Science Inventory

    A new dynamic solution adaptive grid algorithm DSAGA-PPM, has been developed for use in air quality modeling. In this paper, this algorithm is described and evaluated with a test problem. Cone-shaped distributions of various chemical species undergoing chemical reactions are rota...

  6. Three-dimensional geoelectric modelling with optimal work/accuracy rate using an adaptive wavelet algorithm

    NASA Astrophysics Data System (ADS)

    Plattner, A.; Maurer, H. R.; Vorloeper, J.; Dahmen, W.

    2010-08-01

    Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches includes either finite difference or non-adaptive finite element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modelled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, also including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward modelling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared

  7. Analysis of adaptive algorithms for an integrated communication network

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Barr, Matthew; Chong-Kwon, Kim

    1985-01-01

    Techniques were examined that trade communication bandwidth for decreased transmission delays. When the network is lightly used, these schemes attempt to use additional network resources to decrease communication delays. As the network utilization rises, the schemes degrade gracefully, still providing service but with minimal use of the network. Because the schemes use a combination of circuit and packet switching, they should respond to variations in the types and amounts of network traffic. Also, a combination of circuit and packet switching to support the widely varying traffic demands imposed on an integrated network was investigated. The packet switched component is best suited to bursty traffic where some delays in delivery are acceptable. The circuit switched component is reserved for traffic that must meet real time constraints. Selected packet routing algorithms that might be used in an integrated network were simulated. An integrated traffic places widely varying workload demands on a network. Adaptive algorithms were identified, ones that respond to both the transient and evolutionary changes that arise in integrated networks. A new algorithm was developed, hybrid weighted routing, that adapts to workload changes.

  8. Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring

    NASA Astrophysics Data System (ADS)

    Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin

    2015-08-01

    In research on optical synthetic-aperture imaging systems, phase congruency is the main problem, and it is necessary to detect the sub-aperture phase. The edges in a sub-aperture system are more complex than those in a traditional optical imaging system, and because large-aperture optical components have steep slopes, interference fringes may be quite dense in interference imaging. A steep phase gradient may cause a loss of phase information, so an efficient edge detection method is needed. Wavelet analysis, as a powerful tool, is widely used in image processing. Owing to its multi-scale transform properties, edge regions are detected with high precision at small scales, while noise is reduced as the scale increases, so the transform has a certain noise-suppression effect. In addition, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. First, the fringe pattern is obtained, and a cubic B-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, the local modulus maxima along the gradient directions are computed. Because these maxima still contain noise, the adaptive threshold method is used to select among them: points whose modulus exceeds the threshold are taken as boundary points. Finally, erosion and dilation are applied to the resulting image to obtain continuous image boundaries.

  9. An adaptive clustering algorithm for image matching based on corner feature

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-04-01

    Traditional image matching algorithms often cannot balance real-time performance and accuracy well. To solve this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method is based on the similarity of the vectors formed by matched point pairs, and adaptive clustering is performed on these matching pairs. Harris corner detection is carried out first to extract the feature points of the reference image and the perceived image, and the feature points of the two images are initially matched using the Normalized Cross-Correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is applied to the clustered matching points. The experimental results show that the proposed algorithm effectively eliminates most wrong matches while retaining the correct ones, improves the accuracy of RANSAC matching, and reduces the computational load of the whole matching process.
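
    A compact OpenCV sketch of the corner-matching pipeline without the paper's adaptive clustering stage: Harris corners, NCC patch matching, and RANSAC outlier rejection via homography fitting (the homography model is an assumption of this sketch).

      import cv2
      import numpy as np

      def detect(img, max_pts=200, patch=11):
          """Harris corners plus their surrounding patches."""
          r = patch // 2
          resp = cv2.cornerHarris(np.float32(img), 2, 3, 0.04)
          ys, xs = np.where(resp > 0.01 * resp.max())
          order = np.argsort(resp[ys, xs])[::-1]
          pts, patches = [], []
          for i in order[:max_pts]:
              x, y = int(xs[i]), int(ys[i])
              if r <= x < img.shape[1] - r and r <= y < img.shape[0] - r:
                  pts.append((x, y))
                  patches.append(np.float32(img[y - r:y + r + 1, x - r:x + r + 1]))
          return pts, patches

      def ncc(a, b):
          """Normalized cross-correlation of two equally sized patches."""
          a, b = a - a.mean(), b - b.mean()
          d = np.sqrt((a * a).sum() * (b * b).sum())
          return float((a * b).sum() / d) if d > 0 else -1.0

      def match_ransac(img1, img2, ncc_min=0.8):
          """Harris + NCC matching, outliers rejected with RANSAC.
          Needs at least 4 tentative matches for cv2.findHomography."""
          p1, f1 = detect(img1)
          p2, f2 = detect(img2)
          src, dst = [], []
          for pt1, patch1 in zip(p1, f1):
              scores = [ncc(patch1, patch2) for patch2 in f2]
              j = int(np.argmax(scores))
              if scores[j] >= ncc_min:
                  src.append(pt1)
                  dst.append(p2[j])
          src, dst = np.float32(src), np.float32(dst)
          H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          keep = mask.ravel().astype(bool)
          return src[keep], dst[keep], H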

  10. Estimating meme fitness in adaptive memetic algorithms for combinatorial problems.

    PubMed

    Smith, J E

    2012-01-01

    Among the most promising and active research areas in heuristic optimisation is the field of adaptive memetic algorithms (AMAs). These gain much of their reported robustness by adapting the probability with which each of a set of local improvement operators is applied, according to an estimate of their current value to the search process. This paper addresses the issue of how the current value should be estimated. Assuming the estimate occurs over several applications of a meme, we consider whether the extreme or mean improvements should be used, and whether this aggregation should be global, or local to some part of the solution space. To investigate these issues, we use the well-established COMA framework that coevolves the specification of a population of memes (representing different local search algorithms) alongside a population of candidate solutions to the problem at hand. Two very different memetic algorithms are considered: the first using adaptive operator pursuit to adjust the probabilities of applying a fixed set of memes, and a second which applies genetic operators to dynamically adapt and create memes and their functional definitions. For the latter, especially on combinatorial problems, credit assignment mechanisms based on historical records, or on notions of landscape locality, will have limited application, and it is necessary to estimate the value of a meme via some form of sampling. The results on a set of binary encoded combinatorial problems show that both methods are very effective, and that for some problems it is necessary to use thousands of variables in order to tease apart the differences between different reward schemes. However, for both memetic algorithms, a significant pattern emerges that reward based on mean improvement is better than that based on extreme improvement. This contradicts recent findings from adapting the parameters of operators involved in global evolutionary search. The results also show that local reward schemes

  11. A Multi-Anatomical Retinal Structure Segmentation System for Automatic Eye Screening Using Morphological Adaptive Fuzzy Thresholding

    PubMed Central

    Elleithy, Khaled; Elleithy, Abdelrahman

    2018-01-01

    An eye exam can be as efficacious as a physical one in determining health concerns, and retina screening can provide the very first clue for detecting a variety of hidden health issues, including pre-diabetes and diabetes. In clinical diagnosis and prognosis, ophthalmologists rely heavily on the binary segmented version of the retina fundus image, where the accuracy of the segmented vessels, optic disc, and abnormal lesions strongly affects diagnostic accuracy, which in turn affects the subsequent clinical treatment steps. This paper proposes an automated retinal fundus image segmentation system composed of three segmentation subsystems that follow the same core segmentation algorithm. Despite broad differences in features and characteristics, retinal vessels, the optic disc, and exudate lesions are extracted by each subsystem without the need for texture analysis or synthesis. For the sake of compact diagnosis and complete clinical insight, the proposed system can detect these anatomical structures in one session with high accuracy, even in pathological retina images. The system uses a robust hybrid segmentation algorithm that combines adaptive fuzzy thresholding and mathematical morphology, and is validated using four benchmark datasets: DRIVE and STARE (vessels), DRISHTI-GS (optic disc), and DIARETDB1 (exudate lesions). Competitive segmentation performance is achieved, outperforming a variety of up-to-date systems and demonstrating the capacity to deal with other heterogeneous anatomical structures. PMID:29888146

  12. Fast Adapting Ensemble: A New Algorithm for Mining Data Streams with Concept Drift

    PubMed Central

    Ortíz Díaz, Agustín; Ramos-Jiménez, Gonzalo; Frías Blanco, Isvani; Caballero Mota, Yailé; Morales-Bueno, Rafael

    2015-01-01

    The treatment of large data streams in the presence of concept drifts is one of the main challenges in the field of data mining, particularly when the algorithms have to deal with concepts that disappear and then reappear. This paper presents a new algorithm, called Fast Adapting Ensemble (FAE), which adapts very quickly to both abrupt and gradual concept drifts, and has been specifically designed to deal with recurring concepts. FAE processes the learning examples in blocks of the same size, but it does not have to wait for the batch to be complete in order to adapt its base classification mechanism. FAE incorporates a drift detector to improve the handling of abrupt concept drifts and stores a set of inactive classifiers that represent old concepts, which are activated very quickly when these concepts reappear. We compare our new algorithm with various well-known learning algorithms on common benchmark datasets. The experiments show promising results from the proposed algorithm (regarding accuracy and runtime), handling different types of concept drifts. PMID:25879051

  13. A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems.

    PubMed

    Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping

    2013-01-01

    Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
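
    A sketch of GIST for the simplest instance, the l1 penalty, whose proximal operator is the soft-threshold; for capped-l1, SCAD, or other non-convex penalties only the prox step changes. The parameter names are illustrative.

      import numpy as np

      def gist_l1(grad_f, f, lam, x0, eta=2.0, sigma=1e-5, iters=100):
          """GIST sketch for min f(x) + lam*||x||_1. The inverse step size t
          is initialized by the Barzilai-Borwein rule and backtracked until
          the GIST sufficient-decrease line-search condition holds."""
          x = x0.copy()
          g = grad_f(x)
          t = 1.0
          for _ in range(iters):
              while True:
                  # Proximal (soft-threshold) step for the l1 penalty.
                  z = x - g / t
                  x_new = np.sign(z) * np.maximum(np.abs(z) - lam / t, 0.0)
                  # Sufficient-decrease condition.
                  lhs = f(x_new) + lam * np.abs(x_new).sum()
                  rhs = f(x) + lam * np.abs(x).sum() \
                        - 0.5 * sigma * t * ((x_new - x) ** 2).sum()
                  if lhs <= rhs:
                      break
                  t *= eta
              g_new = grad_f(x_new)
              s, y = x_new - x, g_new - g
              t = max((y * s).sum() / max((s * s).sum(), 1e-12), 1e-12)  # BB rule
              x, g = x_new, g_new
          return x

      # Usage: lasso-style problem f(x) = 0.5*||Ax - b||^2, with A and b given:
      # x = gist_l1(lambda x: A.T @ (A @ x - b),
      #             lambda x: 0.5 * np.sum((A @ x - b) ** 2),
      #             lam=0.1, x0=np.zeros(A.shape[1]))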

  14. Graded-threshold parametric response maps: towards a strategy for adaptive dose painting

    NASA Astrophysics Data System (ADS)

    Lausch, A.; Jensen, N.; Chen, J.; Lee, T. Y.; Lock, M.; Wong, E.

    2014-03-01

    Purpose: To modify the single-threshold parametric response map (ST-PRM) method for predicting treatment outcomes in order to facilitate its use for guidance of adaptive dose painting in intensity-modulated radiotherapy. Methods: Multiple graded thresholds were used to extend the ST-PRM method (Nat. Med. 2009;15(5):572-576) such that the full functional change distribution within tumours could be represented with respect to multiple confidence interval estimates for functional changes in similar healthy tissue. The ST-PRM and graded-threshold PRM (GT-PRM) methods were applied to functional imaging scans of 5 patients treated for hepatocellular carcinoma. Pre and post-radiotherapy arterial blood flow maps (ABF) were generated from CT-perfusion scans of each patient. ABF maps were rigidly registered based on aligning tumour centres of mass. ST-PRM and GT-PRM analyses were then performed on overlapping tumour regions within the registered ABF maps. Main findings: The ST-PRMs contained many disconnected clusters of voxels classified as having a significant change in function. While this may be useful to predict treatment response, it may pose challenges for identifying boost volumes or for informing dose-painting by numbers strategies. The GT-PRMs included all of the same information as ST-PRMs but also visualized the full tumour functional change distribution. Heterogeneous clusters in the ST-PRMs often became more connected in the GT-PRMs by voxels with similar functional changes. Conclusions: GT-PRMs provided additional information which helped to visualize relationships between significant functional changes identified by ST-PRMs. This may enhance ST-PRM utility for guiding adaptive dose painting.

  15. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential process that separates an image into regions with similar characteristics or features, transforming the image for better analysis and evaluation; an important benefit is the identification of regions of interest in an image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to its upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm merges pixels with similar neighbors based on an identified threshold, an approach that leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function for the clustering process of the Fast Scanning algorithm. The function uses the gray values of the image's pixels and their variance: pixel levels above the threshold are converted into intensity values between 0 and 1, and other values are set to zero. The proposed enhanced Fast Scanning algorithm is applied to images of public and private transportation in Iraq, and is evaluated by comparing its output with that of the standard Fast Scanning algorithm. The results show that the proposed algorithm is faster than the standard Fast Scanning algorithm.

  16. A Demons algorithm for image registration with locally adaptive regularization.

    PubMed

    Cahill, Nathan D; Noble, J Alison; Hawkes, David J

    2009-01-01

    Thirion's Demons is a popular algorithm for nonrigid image registration because of its linear computational complexity and ease of implementation. It approximately solves the diffusion registration problem by successively estimating force vectors that drive the deformation toward alignment and smoothing the force vectors by Gaussian convolution. In this article, we show how the Demons algorithm can be generalized to allow image-driven locally adaptive regularization in a manner that preserves both the linear complexity and ease of implementation of the original Demons algorithm. We show that the proposed algorithm exhibits lower target registration error and requires less computational effort than the original Demons algorithm on the registration of serial chest CT scans of patients with lung nodules.
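
    For reference, the classic globally smoothed Demons iteration can be sketched in a few lines; the paper's contribution would replace the fixed Gaussian sigma below with an image-driven, locally varying regularizer.

      import numpy as np
      from scipy.ndimage import gaussian_filter, map_coordinates

      def demons_2d(fixed, moving, iters=50, sigma=1.5):
          """Classic Demons in 2-D: force vectors from Thirion's demons
          formula, smoothed each iteration by Gaussian convolution."""
          fy, fx = np.gradient(fixed)
          disp = np.zeros((2,) + fixed.shape)
          grid = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
          for _ in range(iters):
              coords = grid + disp
              warped = map_coordinates(moving, coords, order=1, mode="nearest")
              diff = warped - fixed
              denom = fx ** 2 + fy ** 2 + diff ** 2
              denom[denom == 0] = 1.0
              disp[0] -= diff * fy / denom       # demons force, y component
              disp[1] -= diff * fx / denom       # demons force, x component
              disp[0] = gaussian_filter(disp[0], sigma)
              disp[1] = gaussian_filter(disp[1], sigma)
          return disp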

  17. Performance study of LMS based adaptive algorithms for unknown system identification

    NASA Astrophysics Data System (ADS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-01

    Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. Main objective of this comparative study is to observe the effects of fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.
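
    The LMS and NLMS updates compared in the study can be sketched in a single routine for the ASI setting (FIR system plus output noise); the filter length and step sizes below are arbitrary choices.

      import numpy as np

      def lms(x, d, taps=8, mu=0.01, normalized=False, eps=1e-8):
          """(N)LMS adaptive filter for system identification: x is the input
          signal, d the noisy output of the unknown system; returns the final
          weight estimate and the error history."""
          w = np.zeros(taps)
          err = np.zeros(len(x))
          for n in range(taps - 1, len(x)):
              u = x[n - taps + 1:n + 1][::-1]   # regressor (most recent first)
              e = d[n] - w @ u
              step = mu / (eps + u @ u) if normalized else mu
              w += step * e * u                 # stochastic-gradient update
              err[n] = e
          return w, err

      # Usage: identify a random 8-tap FIR system from noisy observations.
      rng = np.random.default_rng(0)
      h = rng.standard_normal(8)
      x = rng.standard_normal(5000)
      d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
      w_lms, e1 = lms(x, d, mu=0.02)
      w_nlms, e2 = lms(x, d, mu=0.5, normalized=True)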

  18. Performance study of LMS based adaptive algorithms for unknown system identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Javed, Shazia; Ahmad, Noor Atinah

    Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. Main objective of this comparative study is to observe the effects of fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.

  19. Formulation and implementation of nonstationary adaptive estimation algorithm with applications to air-data reconstruction

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.

    1985-01-01

    The dynamics model and data sources used to perform air-data reconstruction are discussed, as well as the Kalman filter. The need for adaptive determination of the noise statistics of the process is indicated. The filter innovations are presented as a means of developing the adaptive criterion, which is based on the true mean and covariance of the filter innovations. A method for the numerical approximation of the mean and covariance of the filter innovations is presented. The algorithm as developed is applied to air-data reconstruction for the space shuttle, and data obtained from the third landing are presented. To verify the performance of the adaptive algorithm, the reconstruction is also performed using a constant covariance Kalman filter. The results of the reconstructions are compared, and the adaptive algorithm exhibits better performance.

  20. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Zheng, E-mail: 19994035@sina.com; Wang, Jun; Zhou, Bihua

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under the noiseless and noise conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  1. Adaptive Load-Balancing Algorithms Using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In a distributed-computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Dam and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three novel SBN-based load-balancing algorithms, and implement them on an SP2. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that these algorithms are very effective in balancing system load while minimizing processor idle time. They also compare favorably with several other existing load-balancing techniques. Additional experiments performed with real data demonstrate that the SBN approach is effective in adaptive computational science and engineering applications where dynamic load balancing is extremely crucial.

  2. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable with quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations.

  3. An adaptive design for updating the threshold value of a continuous biomarker

    PubMed Central

    Spencer, Amy V.; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian

    2017-01-01

    Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker ‘positive’ and ‘negative’ is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that ‘no population subset exists in which the novel treatment has a desirable response rate’ to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. PMID:27417407

  4. Adaptive mechanism-based congestion control for networked systems

    NASA Astrophysics Data System (ADS)

    Liu, Zhi; Zhang, Yun; Chen, C. L. Philip

    2013-03-01

    To assure communication quality in network systems with heavy traffic and limited bandwidth, a new ATRED (adaptive thresholds random early detection) congestion control algorithm is proposed for congestion avoidance and resource management in network systems. Unlike traditional AQM (active queue management) algorithms, the control parameters of ATRED are not configured statically but are dynamically adjusted by an adaptive mechanism. By integrating the adaptive strategy, ATRED alleviates the tuning difficulty of RED (random early detection), controls queue management better, and achieves more robust performance than RED under varying network conditions. Furthermore, a dynamic transmission control protocol-AQM control system using the ATRED controller is introduced for systematic analysis. It is proved that the stability of the network system can be guaranteed when the adaptive mechanism is properly designed. Simulation studies show that the proposed ATRED algorithm performs well in varying network environments, is superior to the RED and Gentle-RED algorithms, and provides more reliable service under varying network conditions.
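
    A toy sketch of RED with self-adjusting thresholds, in the spirit of ATRED but not its exact control law: the min/max thresholds slide so the averaged queue stays near a target operating point; all constants are illustrative.

      import random

      class AdaptiveRED:
          """RED early-drop with thresholds adapted to the observed average
          queue length."""
          def __init__(self, min_th=5.0, max_th=15.0, w=0.002, p_max=0.1,
                       target=10.0):
              self.min_th, self.max_th = min_th, max_th
              self.w, self.p_max, self.target = w, p_max, target
              self.avg = 0.0

          def on_arrival(self, queue_len):
              """Return True if the arriving packet should be dropped."""
              self.avg = (1 - self.w) * self.avg + self.w * queue_len  # EWMA
              # Slide thresholds toward keeping avg near the target.
              if self.avg > self.target:
                  self.min_th = max(1.0, self.min_th * 0.99)
                  self.max_th = max(self.min_th + 1, self.max_th * 0.99)
              else:
                  self.min_th *= 1.01
                  self.max_th = max(self.min_th + 1, self.max_th * 1.01)
              if self.avg < self.min_th:
                  return False
              if self.avg >= self.max_th:
                  return True
              p = self.p_max * (self.avg - self.min_th) / (self.max_th - self.min_th)
              return random.random() < p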

  5. Face verification with balanced thresholds.

    PubMed

    Yan, Shuicheng; Xu, Dong; Tang, Xiaoou

    2007-01-01

    The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.

  6. Thresholds for conservation and management: structured decision making as a conceptual framework

    USGS Publications Warehouse

    Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.

    2014-01-01

    changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.

  7. Fully implicit moving mesh adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2005-10-01

    In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former is best dealt with by fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter requires grid adaptivity for efficiency. Moving-mesh grid adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and of considerably difficult numerical treatment. Not surprisingly, fully coupled, implicit approaches where the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. A crucial element is the development of an effective multilevel treatment of the grid equation [L. Chacón and G. Lapenta, "A fully implicit, nonlinear adaptive grid strategy," J. Comput. Phys., accepted (2005)]. We will show that such an approach is competitive vs. uniform grids both from the accuracy (due to adaptivity) and the efficiency standpoints. Results for a variety of models in 1D and 2D geometries, including nonlinear diffusion, radiation-diffusion, Burgers equation, and gas dynamics, will be presented.

  8. Designing an Algorithm for Cancerous Tissue Segmentation Using Adaptive K-means Clustering and Discrete Wavelet Transform.

    PubMed

    Rezaee, Kh; Haddadnia, J

    2013-09-01

    Breast cancer is currently one of the leading causes of death among women worldwide. The diagnosis and separation of cancerous tumors in mammographic images require accuracy, experience and time, and it has always posed itself as a major challenge to radiologists and physicians. This paper proposes a new algorithm which draws on discrete wavelet transform and adaptive K-means techniques to transform the medical images, implement tumor estimation and detect breast cancer tumors in mammograms in early stages; it also allows rapid processing of the input data. In the first step, after designing a filter, the discrete wavelet transform is applied to the input images and the approximation coefficients of the scaling components are constructed. Then, the different parts of the image are classified over a continuous spectrum. In the next step, by using an adaptive K-means algorithm for initialization and a smart choice of the number of clusters, the appropriate threshold is selected. Finally, the suspicious cancerous mass is separated by implementing image processing techniques. We received 120 mammographic images in LJPEG format, which had been scanned in gray-scale with 50 micron pixel size, 3% noise and 20% INU, from clinical data taken from two medical databases (mini-MIAS and DDSM). The proposed algorithm detected tumors at an acceptable level with an average accuracy of 92.32% and sensitivity of 90.24%. Also, the Kappa coefficient was approximately 0.85, which proved the suitable reliability of the system performance. The exact positioning of the cancerous tumors allows the radiologist to determine the stage of disease progression and suggest an appropriate treatment in accordance with the tumor growth. The low PPV and high NPV of the system are a warranty of the system, and both clinical specialists and patients can trust its output.
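
    A rough sketch of the pipeline under stated assumptions: the DWT approximation band is clustered by a plain 1-D k-means (the paper's smart initialization and cluster-count selection are replaced by random seeding and a fixed k), and the brightest cluster is kept as the suspicious mass.

      import numpy as np
      import pywt

      def kmeans_1d(values, k, iters=20, seed=0):
          """Plain 1-D k-means on intensity values."""
          rng = np.random.default_rng(seed)
          centers = rng.choice(values, size=k, replace=False).astype(float)
          for _ in range(iters):
              labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
              for j in range(k):
                  if np.any(labels == j):
                      centers[j] = values[labels == j].mean()
          return centers, labels

      def segment_mass(img, k=4, wavelet="db2"):
          """Cluster the DWT approximation band and keep the brightest cluster
          as the suspicious region."""
          approx, _ = pywt.dwt2(img.astype(float), wavelet)
          centers, labels = kmeans_1d(approx.ravel(), k)
          brightest = np.argmax(centers)
          mask = (labels == brightest).reshape(approx.shape)
          return mask, approx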

  9. Designing an Algorithm for Cancerous Tissue Segmentation Using Adaptive K-means Clustering and Discrete Wavelet Transform

    PubMed Central

    Rezaee, Kh.; Haddadnia, J.

    2013-01-01

    Background: Breast cancer is currently one of the leading causes of death among women worldwide. The diagnosis and separation of cancerous tumors in mammographic images require accuracy, experience and time, and this has always posed a major challenge to radiologists and physicians. Objective: This paper proposes a new algorithm which draws on discrete wavelet transform and adaptive K-means techniques to process medical images, estimate tumors, and detect breast cancer tumors in mammograms at early stages. It also allows rapid processing of the input data. Method: In the first step, after designing a filter, the discrete wavelet transform is applied to the input images and the approximate coefficients of the scaling components are constructed. Then, the different parts of the image are classified on a continuous spectrum. In the next step, using an adaptive K-means algorithm for initialization and a smart choice of the number of clusters, the appropriate threshold is selected. Finally, the suspicious cancerous mass is separated by implementing image processing techniques. Results: We received 120 mammographic images in LJPEG format, which had been scanned in gray scale at 50 micron resolution with 3% noise and 20% INU, from clinical data taken from two medical databases (mini-MIAS and DDSM). The proposed algorithm detected tumors at an acceptable level, with an average accuracy of 92.32% and sensitivity of 90.24%. Also, the Kappa coefficient was approximately 0.85, which indicates suitable reliability of the system performance. Conclusion: The exact positioning of cancerous tumors allows the radiologist to determine the stage of disease progression and suggest an appropriate treatment in accordance with the tumor growth. The low PPV and high NPV of the system are a warranty of its performance, and both clinical specialists and patients can trust its output. PMID:25505753
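
    As a minimal illustration of the two-step pipeline described above, the sketch below computes a level-1 Haar approximation (standing in for the paper's DWT filter design) and then runs a simple 1-D K-means on the approximation intensities to derive a segmentation threshold. All function names and constants are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: Haar approximation + 1-D K-means threshold selection.
import numpy as np

def haar_approximation(img):
    """Level-1 Haar approximation coefficients (2x2 block averages)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(float)
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def kmeans_threshold(values, k=3, iters=50):
    """1-D K-means on intensities; the threshold is the midpoint between the
    two brightest cluster centres (masses appear bright in mammograms)."""
    centres = np.quantile(values, np.linspace(0.05, 0.95, k))  # spread-out init
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = values[labels == j].mean()
    centres = np.sort(centres)
    return 0.5 * (centres[-1] + centres[-2])

img = np.random.rand(128, 128)        # stand-in for a mammogram
approx = haar_approximation(img)
t = kmeans_threshold(approx.ravel())
mask = approx >= t                    # suspicious (bright) regions
```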

  10. Impedance computed tomography using an adaptive smoothing coefficient algorithm.

    PubMed

    Suzuki, A; Uchiyama, A

    2001-01-01

    In impedance computed tomography, a fixed-coefficient regularization algorithm has frequently been used to mitigate the ill-conditioning of the Newton-Raphson algorithm. However, a large amount of experimental data and a long computation time are needed to determine a good smoothing coefficient, because it must be chosen manually from a number of candidates and is held constant across iterations. Thus, the fixed-coefficient regularization algorithm sometimes distorts the information or fails to have any effect. In this paper, a new adaptive smoothing coefficient algorithm is proposed. This algorithm automatically calculates the smoothing coefficient from the eigenvalues of the ill-conditioned matrix, so effective images can be obtained within a short computation time. The smoothing coefficient is also adjusted automatically using information related to the real resistivity distribution and the data collection method. In our impedance system, we have reconstructed the resistivity distributions of two phantoms using this algorithm. As a result, this algorithm needs only one-fifth the computation time of the fixed-coefficient regularization algorithm. The comparison also shows that the image is obtained more rapidly, making the method applicable to real-time monitoring of blood vessels.
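
    The core idea, one regularized Newton-Raphson step whose smoothing coefficient is computed from the spectrum of the ill-conditioned normal matrix rather than fixed by hand, can be sketched as follows. The particular scaling rule (largest eigenvalue divided by a target condition number) is our assumption for illustration; the paper derives its own eigenvalue-based formula.

```python
# Minimal sketch of an eigenvalue-driven adaptive regularization step.
import numpy as np

def adaptive_newton_step(J, residual, R, target_cond=1e4):
    """J: Jacobian, residual: data misfit, R: smoothing (regularization) matrix."""
    H = J.T @ J                          # Gauss-Newton normal matrix (symmetric)
    eigvals = np.linalg.eigvalsh(H)
    lam = eigvals.max() / target_cond    # adaptive smoothing coefficient (assumed rule)
    delta = np.linalg.solve(H + lam * R, J.T @ residual)
    return delta, lam
```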

  11. An adaptive bit synchronization algorithm under time-varying environment.

    NASA Technical Reports Server (NTRS)

    Chow, L. R.; Owen, H. A., Jr.; Wang, P. P.

    1973-01-01

    This paper presents an adaptive estimation algorithm for bit synchronization, assuming that the parameters of the incoming data process are time-varying. Experimental results show that this synchronizer is workable, whether judged by the amount of data required or by the speed of convergence.

  12. Influence of Injury Risk Thresholds on the Performance of an Algorithm to Predict Crashes with Serious Injuries

    PubMed Central

    Bahouth, George; Digges, Kennerly; Schulman, Carl

    2012-01-01

    This paper presents methods to estimate crash injury risk based on crash characteristics captured by some passenger vehicles equipped with Advanced Automatic Crash Notification technology. The resulting injury risk estimates could be used within an algorithm to optimize rescue care. Regression analysis was applied to the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS) to determine how variations in a specific injury risk threshold would influence the accuracy of predicting crashes with serious injuries. The recommended thresholds for classifying crashes with severe injuries are 0.10 for frontal crashes and 0.05 for side crashes. The regression analysis of NASS/CDS indicates that these thresholds will provide sensitivity above 0.67 while maintaining a positive predictive value in the range of 0.20. PMID:23169132
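
    A small synthetic sketch of the threshold trade-off analysed above: given model-predicted injury risks and ground-truth outcomes, sweep the classification threshold and report sensitivity and positive predictive value. The data below are simulated stand-ins, not NASS/CDS.

```python
# Sensitivity/PPV at candidate risk thresholds, on synthetic data.
import numpy as np

def classify_metrics(risk, serious, threshold):
    pred = risk >= threshold
    tp = np.sum(pred & serious)
    sens = tp / max(serious.sum(), 1)   # fraction of serious crashes flagged
    ppv = tp / max(pred.sum(), 1)       # fraction of alerts that are serious
    return sens, ppv

rng = np.random.default_rng(0)
risk = rng.beta(1, 8, size=5000)        # synthetic predicted injury risks
serious = rng.random(5000) < risk       # outcomes consistent with those risks
for t in (0.05, 0.10, 0.20):
    print(t, classify_metrics(risk, serious, t))
```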

  13. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.

  14. Control algorithms and applications of the wavefront sensorless adaptive optics

    NASA Astrophysics Data System (ADS)

    Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen

    2017-10-01

    Compared with the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for WFSless AO were divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms commonly treats the performance metric as a function of the control parameters and then uses a certain control algorithm to improve the performance metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms and control algorithms based on geometrical optics. After a brief description of the above typical control algorithms, hybrid methods combining model-free with model-based control algorithms are summarized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free-space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques and imaging of extended objects.

  15. Orion MPCV Touchdown Detection Threshold Development and Testing

    NASA Technical Reports Server (NTRS)

    Daum, Jared; Gay, Robert

    2013-01-01

    A robust method of detecting Orion Multi-Purpose Crew Vehicle (MPCV) splashdown is necessary to ensure crew and hardware safety during descent and after touchdown. The proposed method uses a triple redundant system to inhibit Reaction Control System (RCS) thruster firings, detach parachute risers from the vehicle, and transition to the post-landing segment of the Flight Software (FSW). An in-depth trade study was completed to determine optimal characteristics of the touchdown detection method, resulting in an algorithm that monitors filtered, lever-arm corrected, 200 Hz Inertial Measurement Unit (IMU) vehicle acceleration magnitude data against a tunable threshold using persistence counter logic. Following the design of the algorithm, high fidelity environment and vehicle simulations, coupled with the actual vehicle FSW, were used to tune the acceleration threshold and persistence counter value to achieve adequate performance in detecting touchdown and sufficient safety margin against early detection while descending under parachutes. An analytical approach including Kriging and adaptive sampling allowed a sufficient number of finite element analysis (FEA) impact simulations to be completed using minimal computation time. The combination of a persistence counter of 10 and an acceleration threshold of approximately 57.3 ft/s² resulted in an impact performance factor of safety (FOS) of 1.0 and a safety FOS of approximately 2.6 for touchdown declaration. An RCS termination acceleration threshold of approximately 53.1 ft/s² with a persistence counter of 10 resulted in an increased impact performance FOS of 1.2 at the expense of a lowered under-parachutes safety factor of 2.2. The resulting tuned algorithm was then tested on data from eight Capsule Parachute Assembly System (CPAS) flight tests, showing an experimental minimum safety FOS of 6.1. The formulated touchdown detection algorithm will be flown in the Orion MPCV FSW during Exploration Flight Test 1.
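
    The persistence-counter logic itself is straightforward to sketch: touchdown is declared only after the acceleration magnitude exceeds the tunable threshold for a set number of consecutive 200 Hz samples. The threshold and counter values below follow the abstract; the IMU filtering and lever-arm correction are omitted.

```python
# Minimal sketch of threshold-plus-persistence touchdown declaration.
THRESHOLD = 57.3    # ft/s^2, touchdown declaration threshold (from the abstract)
PERSISTENCE = 10    # consecutive 200 Hz samples required (from the abstract)

def detect_touchdown(accel_magnitudes):
    """Return the sample index at which touchdown is declared, or None."""
    count = 0
    for i, a in enumerate(accel_magnitudes):
        count = count + 1 if a >= THRESHOLD else 0  # reset on any sub-threshold sample
        if count >= PERSISTENCE:
            return i
    return None
```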

  16. CHAMP: a locally adaptive unmixing-based hyperspectral anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Crist, Eric P.; Thelen, Brian J.; Carrara, David A.

    1998-10-01

    Anomaly detection offers a means by which to identify potentially important objects in a scene without prior knowledge of their spectral signatures. As such, this approach is less sensitive to variations in target class composition, atmospheric and illumination conditions, and sensor gain settings than would be a spectral matched filter or similar algorithm. The best existing anomaly detectors generally fall into one of two categories: those based on local Gaussian statistics, and those based on linear mixing models. Unmixing-based approaches better represent the real distribution of data in a scene, but are typically derived and applied on a global or scene-wide basis. Locally adaptive approaches allow detection of more subtle anomalies by accommodating the spatial non-homogeneity of background classes in a typical scene, but provide a poorer representation of the true underlying background distribution. The CHAMP algorithm combines the best attributes of both approaches, applying a linear-mixing-model approach in a spatially adaptive manner. The algorithm itself, and test results on simulated and actual hyperspectral image data, are presented in this paper.

  17. An adaptive design for updating the threshold value of a continuous biomarker.

    PubMed

    Spencer, Amy V; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian

    2016-11-30

    Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker 'positive' and 'negative' groups is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that 'no population subset exists in which the novel treatment has a desirable response rate' to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  18. Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description

    USGS Publications Warehouse

    Schmidt, Gail; Jenkerson, Calli B.; Masek, Jeffrey; Vermote, Eric; Gao, Feng

    2013-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm, which was implemented by the USGS.

  19. An adaptive displacement estimation algorithm for improved reconstruction of thermal strain.

    PubMed

    Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M; Tillman, Bryan; Leers, Steven A; Kim, Kang

    2015-01-01

    Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas' estimator and time-shift estimators such as normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas' estimator is limited by phase-wrapping and NXcorr performs poorly when the SNR is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas' estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas' estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas' estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI showed that the adaptive displacement estimator was less biased than either Loupas' estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7 to 350% and the spatial accuracy by 1.2 to 23.0% (P < 0.001). An ex vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and resulted in improved strain reconstruction.

  20. An Adaptive Displacement Estimation Algorithm for Improved Reconstruction of Thermal Strain

    PubMed Central

    Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M.; Tillman, Bryan; Leers, Steven A.; Kim, Kang

    2014-01-01

    Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas’ estimator and time-shift estimators like normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas’ estimator is limited by phase-wrapping and NXcorr performs poorly when the signal-to-noise ratio (SNR) is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas’ estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex-vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas’ estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas’ estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI using Field II showed that the adaptive displacement estimator was less biased than either Loupas’ estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7–350% and the spatial accuracy by 1.2–23.0% (p < 0.001). An ex-vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and resulted in improved strain reconstruction. PMID:25585398
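
    The switching rule suggested by the simulation findings can be sketched as below: prefer NXcorr for large displacements at high SNR (where it is less biased and phase wrapping threatens the phase-shift estimate), and Loupas' estimate otherwise (lower variance at low SNR). The function and argument names are placeholders; the paper's actual decision logic is richer than this two-way branch.

```python
# Hedged sketch of an adaptive chooser between two displacement estimates.
def adaptive_displacement(loupas_est, nxcorr_est, disp_magnitude,
                          snr_db, wavelength):
    """Pick the estimator suited to the operating regime (illustrative rule)."""
    if disp_magnitude > wavelength / 8 and snr_db > 25.5:
        return nxcorr_est   # less biased for large displacements at high SNR
    return loupas_est       # lower variance in the low-SNR regime
```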

  1. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction.

    PubMed

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-13

    The problems of neural network-based nonuniformity correction algorithms for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through a noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to that of the tested deghosting methods.
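
    A hedged sketch of this kind of gated learning rate follows: the per-pixel update rate of the correction network is set to zero on pixels flagged as edges by a noise-scaled gradient threshold, and on pixels whose value barely changed since the previous frame (a temporal gate, since static scene content is what burns in as ghosts). The constants and the noise estimator are illustrative, not the paper's.

```python
# Illustrative per-pixel learning-rate map for NN-based nonuniformity correction.
import numpy as np

def learning_rate_map(frame, prev_frame, base_lr=0.05, k=3.0, gate=2.0):
    gy, gx = np.gradient(frame.astype(float))
    grad = np.hypot(gx, gy)
    # Robust noise scale (median absolute deviation) drives the adaptive threshold.
    sigma = np.median(np.abs(grad - np.median(grad))) / 0.6745
    edge = grad > k * sigma                      # adaptive spatial edge threshold
    static = np.abs(frame - prev_frame) < gate   # temporal gate: little scene motion
    return np.where(edge | static, 0.0, base_lr) # freeze ghost-prone updates
```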

  2. Adaptivity and smart algorithms for fluid-structure interaction

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley

    1990-01-01

    This paper reviews new approaches in CFD which have the potential for significantly increasing current capabilities of modeling complex flow phenomena and of treating difficult problems in fluid-structure interaction. These approaches are based on the notions of adaptive methods and smart algorithms, which use instantaneous measures of the quality and other features of the numerical flowfields as a basis for making changes in the structure of the computational grid and of the algorithms designed to function on the grid. The application of these new techniques to several problem classes is addressed, including problems with moving boundaries, fluid-structure interaction in high-speed turbine flows, flow in domains with receding boundaries, and related problems.

  3. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    NASA Astrophysics Data System (ADS)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implementing real-time signal processing algorithms for general surveillance radar based on NVIDIA graphics processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as the CUDA basic linear algebra subroutines and the CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without requiring much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with CPU performance in terms of processing acceleration. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense-and-avoid radar, and aerospace surveillance radar.

  4. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGES

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capability of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and of obtaining a sample that accurately and efficiently represents the posterior with a limited number of forward simulations.

  5. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lin, Guang, E-mail: guanglin@purdue.edu

    2015-08-01

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capability of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and of obtaining a sample that accurately and efficiently represents the posterior with a limited number of forward simulations.
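
    A simplified 1-D version of the adaptive importance sampling loop can be sketched as follows: draw from a Gaussian-mixture proposal, weight the draws by target/proposal, and refit the mixture to the weighted sample. The PC-expansion surrogate is omitted, and the bimodal target below is a synthetic stand-in for a real posterior.

```python
# Toy adaptive importance sampling with a two-component Gaussian-mixture proposal.
import numpy as np

rng = np.random.default_rng(1)

def target(x):  # unnormalized bimodal density (stand-in for the true posterior)
    return np.exp(-0.5 * ((x + 3) / 0.7) ** 2) + np.exp(-0.5 * ((x - 2) / 0.5) ** 2)

def gm_pdf(x, w, mu, sig):
    comp = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return comp.sum(axis=1)

w, mu, sig = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([2.0, 2.0])
for _ in range(20):
    comp = rng.choice(2, size=2000, p=w)        # pick mixture components
    x = rng.normal(mu[comp], sig[comp])         # sample the proposal
    iw = target(x) / gm_pdf(x, w, mu, sig)      # importance weights
    resp = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
    resp /= resp.sum(axis=1, keepdims=True)     # component responsibilities
    ww = iw[:, None] * resp                     # weighted-EM style refit
    w = ww.sum(axis=0) / ww.sum()
    mu = (ww * x[:, None]).sum(axis=0) / ww.sum(axis=0)
    sig = np.sqrt((ww * (x[:, None] - mu) ** 2).sum(axis=0) / ww.sum(axis=0)) + 1e-6
print(w, mu, sig)  # the mixture should settle near the two target modes
```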

  6. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to a family of adaptive filter algorithms. This method uses prior knowledge of the channel impulse response statistics; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.

  7. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm

    PubMed Central

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the refractivity-from-clutter (RFC) technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic adaptive parameter operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the dynamic adaptive parameter adjustment process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data were used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938

  8. Adaptive re-tracking algorithm for retrieval of water level variations and wave heights from satellite altimetry data for middle-sized inland water bodies

    NASA Astrophysics Data System (ADS)

    Troitskaya, Yuliya; Lebedev, Sergey; Soustova, Irina; Rybushkina, Galina; Papko, Vladislav; Baidakov, Georgy; Panyutin, Andrey

    One of the recent applications of satellite altimetry, originally designed for measurements of the sea level [1], is the remote investigation of the water level of inland waters: lakes, rivers and reservoirs [2-7]. The altimetry data re-tracking algorithms developed for open ocean conditions (e.g., Ocean-1,2) [1] often cannot be used in these cases, since the radar return is significantly contaminated by reflection from land. The problem of minimizing errors in water level retrieval for inland waters from altimetry measurements can be resolved by re-tracking satellite altimetry data. Special re-tracking algorithms have been actively developed for re-processing altimetry data in the coastal zone, where reflection from land strongly affects echo shapes: threshold re-tracking, beta re-tracking and improved threshold re-tracking were developed in [9-11]. The latest development in this field is the PISTACH product [12], in which re-tracking is based on a classification of typical telemetric waveform shapes in coastal zones and inland water bodies. In this paper a novel method of regional adaptive re-tracking is considered, based on constructing a theoretical model describing the formation of telemetric waveforms by reflection from a piecewise-constant model surface corresponding to the geography of the region. It was proposed in [13, 14], where an algorithm for assessing the water level in inland water bodies and in the coastal zone of the ocean with an error of about 10-15 cm was constructed. The algorithm includes four consecutive steps: constructing a local piecewise model of the reflecting surface in the neighbourhood of the reservoir; solving the direct problem by calculating the reflected waveforms within the framework of the model; imposing restrictions and validity criteria for the algorithm based on waveform modelling; and solving the inverse problem by retrieving a tracking point.

  9. Dynamic Multiple-Threshold Call Admission Control Based on Optimized Genetic Algorithm in Wireless/Mobile Networks

    NASA Astrophysics Data System (ADS)

    Wang, Shengling; Cui, Yong; Koodli, Rajeev; Hou, Yibin; Huang, Zhangqin

    Due to the dynamics of topology and resources, Call Admission Control (CAC) plays a significant role in increasing the resource utilization ratio and guaranteeing users' QoS requirements in wireless/mobile networks. In this paper, a dynamic multi-threshold CAC scheme is proposed to serve multiple service classes in a wireless/mobile network. The thresholds are renewed at the beginning of each time interval to react to changing mobility rates and network load. To find suitable thresholds, a reward-penalty model is designed, which assigns different priorities to different service classes and call types through different reward/penalty policies according to network load and average call arrival rate. To speed up CAC, an Optimized Genetic Algorithm (OGA) is presented, whose components, such as encoding, population initialization, fitness function and mutation, are all optimized with respect to the traits of the CAC problem. Simulations demonstrate that the proposed CAC scheme outperforms similar schemes, showing that the optimization is effective. Finally, the simulations show the efficiency of OGA.

  10. SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM

    EPA Science Inventory

    A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable of promoting grid node clustering ...

  11. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situation and ambient noise type with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in environmental situation. The average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers for the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  12. An adaptive tracker for ShipIR/NTCS

    NASA Astrophysics Data System (ADS)

    Ramaswamy, Srinivasan; Vaitekunas, David A.

    2015-05-01

    A key component in any image-based tracking system is the adaptive tracking algorithm used to segment the image into potential targets, rank-and-select the best candidate target, and the gating of the selected target to further improve tracker performance. This paper will describe a new adaptive tracker algorithm added to the naval threat countermeasure simulator (NTCS) of the NATO-standard ship signature model (ShipIR). The new adaptive tracking algorithm is an optional feature used with any of the existing internal NTCS or user-defined seeker algorithms (e.g., binary centroid, intensity centroid, and threshold intensity centroid). The algorithm segments the detected pixels into clusters, and the smallest set of clusters that meet the detection criterion is obtained by using a knapsack algorithm to identify the set of clusters that should not be used. The rectangular area containing the chosen clusters defines an inner boundary, from which a weighted centroid is calculated as the aim-point. A track-gate is then positioned around the clusters, taking into account the rate of change of the bounding area and compensating for any gimbal displacement. A sequence of scenarios is used to test the new tracking algorithm on a generic unclassified DDG ShipIR model, with and without flares, and demonstrate how some of the key seeker signals are impacted by both the ship and flare intrinsic signatures.

  13. A fuzzy optimal threshold technique for medical images

    NASA Astrophysics Data System (ADS)

    Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.

    2012-01-01

    A new fuzzy-based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures and compared with various existing algorithms, performing better than each of them.

  14. Adaptive semantic tag mining from heterogeneous clinical research texts.

    PubMed

    Hao, T; Weng, C

    2015-01-01

    To develop an adaptive approach to mine frequent semantic tags (FSTs) from heterogeneous clinical research texts. We developed a "plug-n-play" framework that integrates replaceable unsupervised kernel algorithms with formatting, functional, and utility wrappers for FST mining. Temporal information identification and semantic equivalence detection were two example functional wrappers. We first compared this approach's recall and efficiency for mining FSTs from ClinicalTrials.gov with those of a recently published tag-mining algorithm. Then we assessed this approach's adaptability to two other types of clinical research texts, clinical data requests and clinical trial protocols, by comparing the prevalence trends of FSTs across the three texts. Our approach increased the average recall and speed by 12.8% and 47.02%, respectively, over the baseline when mining FSTs from ClinicalTrials.gov, and maintained an overlap in relevant FSTs with the baseline ranging between 76.9% and 100% for varying FST frequency thresholds. The FSTs saturated when the data size reached 200 documents. Consistent trends in the prevalence of FSTs were observed across the three texts as the data size or frequency threshold changed. This paper contributes an adaptive tag-mining framework that is scalable and adaptable without sacrificing recall. This component-based architectural design can potentially be generalized to improve the adaptability of other clinical text mining methods.

  15. Noise-shaping gradient descent-based online adaptation algorithms for digital calibration of analog circuits.

    PubMed

    Chakrabartty, Shantanu; Shaga, Ravi K; Aono, Kenji

    2013-04-01

    Analog circuits that are calibrated using digital-to-analog converters (DACs) use a digital signal processor-based algorithm for real-time adaptation and programming of system parameters. In this paper, we first show that this conventional framework for adaptation yields suboptimal calibration properties because of artifacts introduced by quantization noise. We then propose a novel online stochastic optimization algorithm called noise-shaping or ΣΔ gradient descent, which can shape the quantization noise out of the frequency regions spanning the parameter adaptation trajectories. As a result, the proposed algorithms demonstrate superior parameter search properties compared to floating-point gradient methods and better convergence properties than conventional quantized gradient methods. In the second part of this paper, we apply the ΣΔ gradient descent algorithm to two examples of real-time digital calibration: 1) balancing and tracking of bias currents, and 2) frequency calibration of a band-pass Gm-C biquad filter biased in weak inversion. For each of these examples, the circuits have been prototyped in a 0.5-μm complementary metal-oxide-semiconductor process, and we demonstrate that the proposed algorithm is able to find the optimal solution even in the presence of spurious local minima, which are introduced by the nonlinear and non-monotonic response of calibration DACs.
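
    The essence of ΣΔ gradient descent, error feedback around the quantizer so that quantization noise is pushed out of the slowly varying band occupied by the parameter trajectory, fits in a few lines. The quadratic loss below is a stand-in for the actual circuit calibration metric, and the step sizes are illustrative.

```python
# Compact sketch of noise-shaped (sigma-delta) quantized gradient descent.
import numpy as np

def sigma_delta_gd(grad_fn, theta0, lr=0.1, lsb=0.05, steps=200):
    theta = np.asarray(theta0, dtype=float)
    err = np.zeros_like(theta)               # accumulated quantization error
    for _ in range(steps):
        update = -lr * grad_fn(theta) + err  # feed back previous error (noise shaping)
        q = lsb * np.round(update / lsb)     # quantize to the DAC resolution
        err = update - q                     # error carried to the next step
        theta = theta + q
    return theta

grad = lambda th: 2 * (th - np.array([0.37, -1.22]))  # minimize ||theta - t*||^2
print(sigma_delta_gd(grad, [0.0, 0.0]))               # converges near the target
```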

  16. Image segmentation algorithm based on improved PCNN

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui

    2017-11-01

    A modified simplified pulse coupled neural network (PCNN) model is proposed in this article, based on the simplified PCNN. Some work has been done to enrich this model, such as imposing restrictions on the inputs and improving the linking inputs and internal activity of the PCNN. A self-adaptive method for setting the linking coefficient and the threshold decay time constant is also proposed. Finally, we implemented an image segmentation algorithm based on this simplified PCNN model and particle swarm optimization (PSO) and tested it on five pictures. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and Otsu methods.
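
    For reference, a minimal simplified-PCNN iteration of the general kind modified here: neurons fire when their internal activity (stimulus modulated by linking from neighbouring pulses) exceeds a decaying dynamic threshold. The linking coefficient and decay constant are fixed below, whereas the paper sets such parameters self-adaptively (with PSO); all values are illustrative.

```python
# Minimal simplified-PCNN segmentation sketch (fixed, illustrative parameters).
import numpy as np

def simplified_pcnn(img, beta=0.2, v_theta=20.0, alpha=0.3, iters=10):
    S = img.astype(float) / img.max()    # normalized stimulus (feeding input)
    Y = np.zeros_like(S)                 # pulse output
    theta = np.ones_like(S)              # dynamic threshold
    # 4-neighbour linking sum; np.roll gives periodic boundaries (fine for a sketch).
    link = lambda A: (np.roll(A, 1, 0) + np.roll(A, -1, 0) +
                      np.roll(A, 1, 1) + np.roll(A, -1, 1))
    for _ in range(iters):
        U = S * (1.0 + beta * link(Y))           # internal activity
        Y = (U > theta).astype(float)            # fire where activity beats threshold
        theta = np.exp(-alpha) * theta + v_theta * Y  # decay, then raise at fired pixels
    return Y
```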

  17. Comparison of an adaptive local thresholding method on CBCT and µCT endodontic images

    NASA Astrophysics Data System (ADS)

    Michetti, Jérôme; Basarab, Adrian; Diemer, Franck; Kouame, Denis

    2018-01-01

    Root canal segmentation on cone beam computed tomography (CBCT) images is difficult because of the noise level, resolution limitations, beam hardening and dental morphological variations. An image processing framework, based on an adaptive local threshold method, was evaluated on CBCT images acquired from extracted teeth. A comparison with high quality segmented endodontic images on micro computed tomography (µCT) images acquired from the same teeth was carried out using a dedicated registration process. Each segmented tooth was evaluated according to its volume and to root canal cross-sections through the area and Feret’s diameter. The proposed method is shown to overcome the limitations of CBCT and to provide an automated and adaptive complete endodontic segmentation. Despite a slight underestimation (-4.08%), the local threshold segmentation method based on edge detection was shown to be fast and accurate. Strong correlations between CBCT and µCT segmentations were found for both the root canal area and diameter (0.98 and 0.88, respectively). Our findings suggest that combining CBCT imaging with this image processing framework may benefit experimental endodontology and teaching, and could represent a first development step towards the clinical use of endodontic CBCT segmentation during pulp cavity treatment.
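
    A generic adaptive local threshold of the family evaluated above can be sketched with an integral image: each pixel is compared against the mean of its local window minus a small offset. The paper's edge-detection-based refinement is not reproduced here, and the window size and offset are illustrative assumptions.

```python
# Generic local-mean adaptive thresholding via an integral image (2-D slice).
import numpy as np

def local_mean_threshold(img, win=15, offset=0.02):
    img = img.astype(float)
    pad = win // 2
    p = np.pad(img, pad + 1, mode="edge")
    ii = p.cumsum(0).cumsum(1)                    # integral image of the padded slice
    h, w = img.shape
    s = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
         - ii[win:win + h, :w] + ii[:h, :w])      # window sums for every pixel
    local_mean = s / (win * win)
    return img > local_mean - offset              # binary canal/background mask
```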

  18. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm.

    PubMed

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis.
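
    An illustrative (not the authors') way to add an adaptive constraint to K-means is to penalize assignment to over-populated clusters in the objective, discouraging the large variation in class sizes described above. The penalty form and its weight lam below are assumptions for the sketch.

```python
# K-means with an illustrative adaptive cluster-size penalty.
import numpy as np

def constrained_kmeans(X, k, lam=0.5, iters=30, seed=0):
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        sizes = np.bincount(labels, minlength=k).astype(float)
        # Penalty grows with a cluster's deviation above the average size.
        penalty = lam * (sizes - len(X) / k) / (len(X) / k)
        labels = np.argmin(d2 + penalty[None, :], axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels, centres
```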

  19. Improved relocatable over-the-horizon radar detection and tracking using the maximum likelihood adaptive neural system algorithm

    NASA Astrophysics Data System (ADS)

    Perlovsky, Leonid I.; Webb, Virgil H.; Bradley, Scott R.; Hansen, Christopher A.

    1998-07-01

    An advanced detection and tracking system is being developed for the U.S. Navy's Relocatable Over-the-Horizon Radar (ROTHR) to provide improved tracking performance against small aircraft typically used in drug-smuggling activities. The development is based on the Maximum Likelihood Adaptive Neural System (MLANS), a model-based neural network that combines advantages of neural network and model-based algorithmic approaches. The objective of the MLANS tracker development effort is to address user requirements for increased detection and tracking capability in clutter and improved track position, heading, and speed accuracy. The MLANS tracker is expected to outperform other approaches to detection and tracking for the following reasons. It incorporates adaptive internal models of target return signals, target tracks and maneuvers, and clutter signals, which leads to concurrent clutter suppression, detection, and tracking (track-before-detect). It is not combinatorial and thus does not require any thresholding or peak picking and can track in low signal-to-noise conditions. It incorporates superresolution spectrum estimation techniques exceeding the performance of conventional maximum likelihood and maximum entropy methods. The unique spectrum estimation method is based on the Einsteinian interpretation of the ROTHR received energy spectrum as a probability density of signal frequency. The MLANS neural architecture and learning mechanism are founded on spectrum models and maximization of the "Einsteinian" likelihood, allowing knowledge of the physical behavior of both targets and clutter to be injected into the tracker algorithms. The paper describes the addressed requirements and expected improvements, theoretical foundations, engineering methodology, and results of the development effort to date.

  20. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    NASA Astrophysics Data System (ADS)

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetrability of neutrons through tube walls constructed of metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, a neighborhood mutation operator is used to ensure the continuity of the reconstructed object, and the adaptive crossover probability Pc and mutation probability Pm are improved to help the adaptive genetic algorithm (AGA) reach the global optimum. The reconstruction results for projection data obtained from Monte Carlo simulation indicate that the comprehensive performance of the IAGA-SC algorithm exceeds that of the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It shows particular advantages in restoring simply connected flow regimes and the shape of the object. In addition, a CT experiment on two-phase flow phantoms was conducted at an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.
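
    Fitness-adaptive crossover and mutation probabilities of the general kind used in AGA-type algorithms can be sketched as follows: above-average individuals receive smaller Pc/Pm (so good solutions are preserved), below-average ones larger (so poor solutions are explored). The exact improvement proposed in the paper differs; the bounds here are illustrative.

```python
# Illustrative fitness-adaptive crossover/mutation rates for a maximizing GA.
def adaptive_rates(f, f_avg, f_max, pc_hi=0.9, pc_lo=0.6, pm_hi=0.1, pm_lo=0.01):
    """f: fitness of the individual (or the fitter parent for crossover)."""
    if f_max == f_avg or f < f_avg:      # below average: strongest operators
        return pc_hi, pm_hi
    frac = (f_max - f) / (f_max - f_avg) # 0 at the best individual, 1 at the average
    pc = pc_lo + (pc_hi - pc_lo) * frac
    pm = pm_lo + (pm_hi - pm_lo) * frac
    return pc, pm
```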

  1. Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1981-01-01

    A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.

  2. An improved finger-vein recognition algorithm based on template matching

    NASA Astrophysics Data System (ADS)

    Liu, Yueyue; Di, Si; Jin, Jian; Huang, Daoping

    2016-10-01

    Finger-vein recognition has become one of the most popular biometric identification methods, and research on recognition algorithms has always been the key issue in this field. So far, many applicable algorithms have been developed. However, some problems remain in practice, such as variance in the finger position, which may lead to image distortion and shifting; in addition, matching parameters determined from experience during the identification process may reduce the adaptability of an algorithm. Focusing on the above problems, this paper proposes an improved finger-vein recognition algorithm based on template matching. To enhance the robustness of the algorithm to image distortion, the least squares error method is adopted to correct oblique fingers. During feature extraction, a local adaptive threshold method is adopted. As regards the matching scores, we optimized the translation preferences as well as the matching distance between the input images and registered images on the basis of the Naoto Miura algorithm. Experimental results indicate that the proposed method effectively improves robustness under finger shifting and rotation conditions.

  3. New developments in supra-threshold perimetry.

    PubMed

    Henson, David B; Artes, Paul H

    2002-09-01

    To describe a series of recent enhancements to supra-threshold perimetry. Computer simulations were used to develop an improved algorithm (HEART) for the setting of the supra-threshold test intensity at the beginning of a field test, and to evaluate the relationship between various pass/fail criteria and the test's performance (sensitivity and specificity) and how they compare with modern threshold perimetry. Data were collected in optometric practices to evaluate HEART and to assess how the patient's response times can be analysed to detect false positive response errors in visual field test results. The HEART algorithm shows improved performance (reduced between-eye differences) over current algorithms. A pass/fail criterion of '3 stimuli seen of 3-5 presentations' at each test location reduces test/retest variability and combines high sensitivity and specificity. A large percentage of false positive responses can be detected by comparing their latencies to the average response time of a patient. Optimised supra-threshold visual field tests can perform as well as modern threshold techniques. Such tests may be easier to perform for novice patients, compared with the more demanding threshold tests.

  4. Point estimation following two-stage adaptive threshold enrichment clinical trials.

    PubMed

    Kimani, Peter K; Todd, Susan; Renfro, Lindsay A; Stallard, Nigel

    2018-05-31

    Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on the control of type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages, where in stage 1, patients are recruited in the full population. Stage 1 outcome data are then used to perform interim analysis to decide whether the trial continues to stage 2 with the full population or a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate bias and subtract it from the naive estimate. We have recommended one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios based on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator where the estimator used depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  5. An adaptive SVSF-SLAM algorithm to improve the success and solving the UGVs cooperation problem

    NASA Astrophysics Data System (ADS)

    Demim, Fethi; Nemra, Abdelkrim; Louadj, Kahina; Hamerlain, Mustapha; Bazoula, Abdelouahab

    2018-05-01

    This paper aims to present a Decentralised Cooperative Simultaneous Localization and Mapping (DCSLAM) solution based on 2D laser data using an Adaptive Covariance Intersection (ACI). The ACI-DCSLAM algorithm is validated on a swarm of Unmanned Ground Vehicles (UGVs) receiving features to estimate the position and covariance of shared features before adding them to the global map. With the proposed solution, a group of UGVs is able to construct a large reliable map and localise themselves within this map without any user intervention. The most popular solutions to this problem are EKF-SLAM, nonlinear H-infinity SLAM and FastSLAM. The first suffers from two important problems: poor consistency caused by linearization and the calculation of the Jacobian. The second, the H-infinity filter, is very promising because it makes no assumptions about noise characteristics, while the last is not suitable for real-time implementation. Therefore, a new alternative solution based on the smooth variable structure filter (SVSF) is adopted. A cooperative adaptive SVSF-SLAM algorithm is proposed in this paper to solve the UGV SLAM problem. Our main contribution consists in adapting the SVSF filter to solve the decentralised cooperative SLAM problem for multiple UGVs. The algorithms developed in this paper were implemented using two Pioneer mobile robots equipped with 2D laser telemetry sensors. Good results are obtained by the cooperative adaptive SVSF-SLAM algorithm compared to the cooperative EKF/H-infinity SLAM algorithms, especially when the noise is colored or affected by a variable bias. Simulation results confirm and show the efficiency of the proposed algorithm, which is more robust, stable and adapted to real-time applications.

  6. Real time optimization algorithm for wavefront sensorless adaptive optics OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Verstraete, Hans R. G. W.; Heisler, Morgan; Ju, Myeong Jin; Wahl, Daniel J.; Bliek, Laurens; Kalkman, Jeroen; Bonora, Stefano; Sarunic, Marinko V.; Verhaegen, Michel; Jian, Yifan

    2017-02-01

    Optical Coherence Tomography (OCT) has revolutionized modern ophthalmology, providing depth resolved images of the retinal layers in a system that is suited to a clinical environment. A limitation of the performance and utilization of the OCT systems has been the lateral resolution. Through the combination of wavefront sensorless adaptive optics with dual variable optical elements, we present a compact lens based OCT system that is capable of imaging the photoreceptor mosaic. We utilized a commercially available variable focal length lens to correct for a wide range of defocus commonly found in patient eyes, and a multi-actuator adaptive lens after linearization of the hysteresis in the piezoelectric actuators for aberration correction to obtain near diffraction limited imaging at the retina. A parallel processing computational platform permitted real-time image acquisition and display. The Data-based Online Nonlinear Extremum seeker (DONE) algorithm was used for real time optimization of the wavefront sensorless adaptive optics OCT, and the performance was compared with a coordinate search algorithm. Cross sectional images of the retinal layers and en face images of the cone photoreceptor mosaic acquired in vivo from research volunteers before and after WSAO optimization are presented. Applying the DONE algorithm in vivo for wavefront sensorless AO-OCT demonstrates that the DONE algorithm succeeds in drastically improving the signal while achieving a computational time of 1 ms per iteration, making it applicable for high speed real time applications.

  7. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.

    PubMed

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-06-01

    In femtosecond laser ophthalmic surgery, tissue dissection is achieved by photodisruption based on laser-induced optical breakdown. To minimize collateral damage to the eye, laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile, which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as an eye model and determined the breakdown threshold in single-pulse plasma transmission loss measurements. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wavefront error of only one third of the wavelength, the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kHz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.

  8. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics

    PubMed Central

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-01-01

    In femtosecond laser ophthalmic surgery, tissue dissection is achieved by photodisruption based on laser-induced optical breakdown. To minimize collateral damage to the eye, laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile, which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as an eye model and determined the breakdown threshold in single-pulse plasma transmission loss measurements. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wavefront error of only one third of the wavelength, the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kHz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery. PMID:23761849

  9. Adaptive optics compensation of orbital angular momentum beams with a modified Gerchberg-Saxton-based phase retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Chang, Huan; Yin, Xiao-li; Cui, Xiao-zhou; Zhang, Zhi-chao; Ma, Jian-xin; Wu, Guo-hua; Zhang, Li-jia; Xin, Xiang-jun

    2017-12-01

    Practical orbital angular momentum (OAM)-based free-space optical (FSO) communications commonly experience serious performance degradation and crosstalk due to atmospheric turbulence. In this paper, we propose a wave-front sensorless adaptive optics (WSAO) system with a modified Gerchberg-Saxton (GS)-based phase retrieval algorithm to correct distorted OAM beams. We use the spatial phase perturbation (SPP) GS algorithm with a distorted probe Gaussian beam as the only input. The principle and parameter selections of the algorithm are analyzed, and the performance of the algorithm is discussed. The simulation results show that the proposed adaptive optics (AO) system can significantly compensate for distorted OAM beams in single-channel or multiplexed OAM systems, which provides new insights into adaptive correction systems using OAM beams.
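
    The classic Gerchberg-Saxton iteration that the paper's SPP variant builds on can be sketched in a few lines: amplitudes are enforced in the input and Fourier planes while the phase is retained. This is the textbook GS loop under illustrative assumptions, not the authors' modified algorithm.

```python
import numpy as np

def gerchberg_saxton(amp_in, amp_out, iters=50):
    """Classic Gerchberg-Saxton: recover a phase that maps a known input
    amplitude to a known far-field amplitude by iterating FFT constraints."""
    phase = np.random.uniform(-np.pi, np.pi, amp_in.shape)
    for _ in range(iters):
        far = np.fft.fft2(amp_in * np.exp(1j * phase))
        far = amp_out * np.exp(1j * np.angle(far))   # impose output amplitude
        near = np.fft.ifft2(far)
        phase = np.angle(near)                       # keep phase, impose input amplitude
    return phase

# Example: 64x64 uniform input beam, Gaussian target far-field amplitude.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
amp_in = np.ones((n, n))
amp_out = np.fft.fftshift(np.exp(-(x ** 2 + y ** 2) / (2 * 8.0 ** 2)))
phi = gerchberg_saxton(amp_in, amp_out)
```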

  10. Adaptive control and noise suppression by a variable-gain gradient algorithm

    NASA Technical Reports Server (NTRS)

    Merhav, S. J.; Mehta, R. S.

    1987-01-01

    An adaptive control system based on normalized LMS filters is investigated. The finite impulse response of the nonparametric controller is adaptively estimated using a given reference model. Specifically, the following issues are addressed: The stability of the closed-loop system is analyzed and heuristically established. Next, the adaptation process is studied for piecewise constant plant parameters. It is shown that by introducing a variable gain in the gradient algorithm, a substantial reduction in the LMS adaptation rate can be achieved. Finally, process noise at the plant output generally causes a biased estimate of the controller. By introducing a noise suppression scheme, this bias can be substantially reduced and the response of the adapted system becomes very close to that of the reference model. Extensive computer simulations validate these assertions and demonstrate that the system can rapidly adapt to random jumps in plant parameters.
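
    A minimal sketch of the underlying normalized LMS identification step is shown below; the normalization term acts as an input-power-dependent (variable) gain. The specific variable-gain schedule and noise suppression scheme of the paper are not reproduced.

```python
import numpy as np

def nlms_identify(x, d, taps=8, mu=0.5, eps=1e-6):
    """Normalized LMS: adapt FIR weights w so that w @ x_window tracks d.
    The effective step mu / (eps + ||x||^2) varies with the input power."""
    w = np.zeros(taps)
    y = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        xw = x[n - taps + 1:n + 1][::-1]       # x[n], x[n-1], ..., x[n-taps+1]
        y[n] = w @ xw
        e = d[n] - y[n]
        w += (mu / (eps + xw @ xw)) * e * xw   # normalized (variable-gain) update
    return w, y

# Identify an unknown 8-tap plant from noisy observations.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
plant = rng.standard_normal(8)
d = np.convolve(x, plant)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, _ = nlms_identify(x, d)
```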

  11. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction

    PubMed Central

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-01

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods. PMID:29342857
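
    A simplified stand-in for this family of methods is sketched below: a per-pixel gain/offset correction adapted by LMS toward a local spatial mean, with the learning rate zeroed on strong edges to limit ghosting. The edge gate here is a crude surrogate for the paper's adaptive-threshold and temporal-gate rule.

```python
import numpy as np

def nuc_step(frame, gain, offset, eta=0.05, edge_thresh=10.0):
    """One LMS step of neural-network-style nonuniformity correction: the
    desired image is the 4-neighbor spatial mean of the corrected frame,
    and the learning rate is zeroed where the local gradient is strong."""
    corrected = gain * frame + offset
    pad = np.pad(corrected, 1, mode='edge')
    desired = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
               pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    gy, gx = np.gradient(corrected)
    rate = np.where(np.hypot(gx, gy) < edge_thresh, eta, 0.0)
    err = corrected - desired
    gain = gain - rate * err * frame           # LMS update of per-pixel gain
    offset = offset - rate * err               # and offset
    return corrected, gain, offset

# Toy usage: a drifting scene with fixed-pattern noise.
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.0, 100.0, 64), (64, 1))
g_true = 1.0 + 0.1 * rng.standard_normal((64, 64))
o_true = 5.0 * rng.standard_normal((64, 64))
gain, offset = np.ones((64, 64)), np.zeros((64, 64))
for t in range(100):
    frame = g_true * np.roll(scene, t, axis=1) + o_true
    corrected, gain, offset = nuc_step(frame, gain, offset)
```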

  12. A neural algorithm for the non-uniform and adaptive sampling of biomedical data.

    PubMed

    Mesin, Luca

    2016-04-01

    Body sensors are finding increasing applications in self-monitoring for health care and in the remote surveillance of sensitive people. The physiological data to be sampled can be non-stationary, with bursts of high amplitude and frequency content providing most of the information. Such data could be sampled efficiently with a non-uniform schedule that increases the sampling rate only during activity bursts. A real-time, adaptive algorithm is proposed to select the sampling rate, in order to reduce the number of measured samples while still recording the main information. The algorithm is based on a neural network which predicts the subsequent samples and their uncertainties, requiring a measurement only when the risk of the prediction is larger than a selectable threshold. Four examples of application to biomedical data are discussed: electromyogram, electrocardiogram, electroencephalogram, and body acceleration. Sampling rates are reduced below the Nyquist limit while still preserving an accurate representation of the data and of their power spectral densities (PSD). For example, sampling at 60% of the Nyquist frequency, the percentage average rectified errors in estimating the signals are on the order of 10% and the PSD is fairly represented up to the highest frequencies. The method outperforms both uniform sampling and compressive sensing applied to the same data. The discussed method makes it possible to go beyond the Nyquist limit while preserving the information content of non-stationary biomedical signals. It could find applications in body sensor networks to lower the number of wireless communications (saving sensor power) and to reduce memory occupation.
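
    The measure-only-when-risky idea can be illustrated with a much cruder predictor than the paper's neural network: hold the last measured value, let an uncertainty estimate grow over time, and sample only when it crosses a threshold. All constants below are illustrative.

```python
import numpy as np

def adaptive_sample(signal, risk_growth=0.01, risk_thresh=0.05, alpha=0.5):
    """Non-uniform sampling driven by predicted risk: uncertainty grows while
    holding the last prediction and is reset by each measurement's observed
    error (a crude stand-in for the paper's neural predictor)."""
    taken, risk, estimate = [0], 0.0, signal[0]
    recon = [signal[0]]
    for n in range(1, len(signal)):
        risk += risk_growth                    # uncertainty grows over time
        if risk > risk_thresh:                 # too risky: take a measurement
            err = abs(signal[n] - estimate)
            estimate = signal[n]
            risk = alpha * err                 # reset risk from observed error
            taken.append(n)
        recon.append(estimate)
    return np.array(recon), taken

# Bursty toy signal: sampling concentrates where the signal changes fast.
t = np.linspace(0, 1, 2000)
sig = np.sin(2 * np.pi * 3 * t) + (t > 0.5) * np.sin(2 * np.pi * 40 * t)
recon, idx = adaptive_sample(sig)
```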

  13. Sequential Insertion Heuristic with Adaptive Bee Colony Optimisation Algorithm for Vehicle Routing Problem with Time Windows

    PubMed Central

    Jawarneh, Sana; Abdullah, Salwani

    2015-01-01

    This paper presents a bee colony optimisation (BCO) algorithm to tackle the vehicle routing problem with time windows (VRPTW). The VRPTW involves finding an ideal set of routes for a fleet of vehicles serving a defined number of customers. The BCO algorithm is a population-based algorithm that mimics the social communication patterns of honeybees in solving problems. The performance of the BCO algorithm is dependent on its parameters, so an online (self-adaptive) parameter tuning strategy is used to improve its effectiveness and robustness. Compared with the basic BCO, the adaptive BCO performs better. Diversification is crucial to the performance of a population-based algorithm, but the initial population in the BCO algorithm is generated using a greedy heuristic, which provides insufficient diversification. Therefore the ways in which a sequential insertion heuristic (SIH) for the initial population drives the population toward improved solutions are examined. Experimental comparisons indicate that the proposed adaptive BCO-SIH algorithm works well across all instances and is able to obtain 11 best results in comparison with the best-known results in the literature when tested on Solomon's 56 VRPTW 100-customer instances. A statistical test also shows that there is a significant difference between the results. PMID:26132158

  14. Fully implicit moving mesh adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Serazio, C.; Chacon, L.; Lapenta, G.

    2006-10-01

    In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former are best handled by fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter require grid adaptivity for efficiency. Moving-mesh grid adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and considerably difficult to treat numerically. Not surprisingly, fully coupled, implicit approaches where the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. Crucial elements are the development of an effective multilevel treatment of the grid equation, and a robust, rigorous error estimator. For the latter, we explore the effectiveness of a coarse grid correction error estimator, which faithfully reproduces spatial truncation errors for conservative equations. We show that the moving mesh approach is competitive with uniform grids both in accuracy (due to adaptivity) and efficiency. Results for a variety of models in 1D and 2D geometries are presented. L. Chacón, G. Lapenta, J. Comput. Phys., 212 (2), 703 (2006); G. Lapenta, L. Chacón, J. Comput. Phys., accepted (2006)

  15. Identification of robust adaptation gene regulatory network parameters using an improved particle swarm optimization algorithm.

    PubMed

    Huang, X N; Ren, H P

    2016-05-13

    Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment; it means the system responds to an input stimulus rapidly and then returns to its pre-stimulus steady state in a timely manner. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on a GRN is a multi-variable, multi-objective, multi-peak optimization problem for which it is difficult to acquire satisfactory, let alone high-quality, solutions. A new best-neighbor particle swarm optimization algorithm is proposed to implement this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population. A particle crossover operation and an elitist preservation strategy are also used in the proposed algorithm. The simulation results revealed that the proposed algorithm could identify multiple solutions in a single run. Moreover, it demonstrated superior performance compared to previous methods in the sense of detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for providing guidance in designing GRNs with superior robust adaptation.
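
    Latin hypercube initialization, used here to seed the swarm, can be sketched compactly: each dimension is divided into as many strata as there are particles, and every particle occupies a distinct stratum in every dimension. Bounds and sizes below are illustrative.

```python
import numpy as np

def latin_hypercube(n_points, bounds, rng=None):
    """Latin hypercube sample: each of the n_points rows occupies a distinct
    stratum of every dimension, giving better coverage than i.i.d. uniform
    initialization of a particle swarm."""
    rng = rng or np.random.default_rng()
    dim = len(bounds)
    strata = np.tile(np.arange(n_points), (dim, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n_points, dim))) / n_points
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# 30 particles over 12 Michaelis-Menten parameters, each in [0.001, 10].
swarm = latin_hypercube(30, [(0.001, 10.0)] * 12)
```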

  16. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.

  17. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  18. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm

    PubMed Central

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895
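
    The flavor of a size-aware assignment step can be sketched as follows: the cost of joining a cluster is its squared distance plus a penalty proportional to the cluster's current size. This soft penalty is an illustrative stand-in, not the paper's exact adaptive constraint term.

```python
import numpy as np

def constrained_kmeans(X, k, lam=1.0, iters=50, rng=None):
    """K-means with a soft size constraint: the assignment cost of cluster j
    is squared distance plus lam * (current size of j), discouraging clusters
    from growing much larger than the rest."""
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    labels = np.empty(len(X), dtype=int)
    for _ in range(iters):
        sizes = np.zeros(k)
        for i in rng.permutation(len(X)):      # sequential, size-aware assignment
            cost = ((X[i] - centers) ** 2).sum(axis=1) + lam * sizes
            labels[i] = np.argmin(cost)
            sizes[labels[i]] += 1
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Cluster 200 noisy 2D points into 4 size-balanced groups.
X = np.random.default_rng(1).standard_normal((200, 2))
labels, centers = constrained_kmeans(X, k=4)
```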

  19. Noise reduction algorithm with the soft thresholding based on the Shannon entropy and bone-conduction speech cross- correlation bands.

    PubMed

    Na, Sung Dae; Wei, Qun; Seong, Ki Woong; Cho, Jin Ho; Kim, Myoung Nam

    2018-01-01

    Conventional methods of speech enhancement, noise reduction, and voice activity detection are based on the suppression of noise or non-speech components of the target air-conduction signals. However, air-conducted speech is hard to differentiate from babble or white noise signals. To overcome this problem, the proposed algorithm uses bone-conduction speech signals and soft thresholding based on the Shannon entropy principle and the cross-correlation of air- and bone-conduction signals. A new algorithm for speech detection and noise reduction is proposed, which makes use of the Shannon entropy principle and cross-correlation with the bone-conduction speech signals to threshold the wavelet packet coefficients of the noisy speech. Each threshold is generated by the entropy and cross-correlation approaches in the bands obtained by wavelet packet decomposition. The method was evaluated in MATLAB simulations using objective quality measures (PESQ, RMSE, correlation, and SNR), which show that the proposed method reduces the noise efficiently. To verify the method's feasibility, we compared the air- and bone-conduction speech signals and their spectra after processing by the proposed method. The results confirm the high performance of the proposed method, which makes it well suited to future applications in communication devices, noisy environments, construction, and military operations.
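
    The soft-threshold operator at the core of such schemes, together with a toy entropy-derived per-band threshold, can be sketched as follows; the entropy weighting here is a simplified stand-in for the paper's combined entropy/cross-correlation rule.

```python
import numpy as np

def soft_threshold(coeffs, thresh):
    """Standard soft-thresholding: shrink coefficients toward zero by thresh
    and zero out anything smaller in magnitude."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresh, 0.0)

def band_threshold(band, base=1.0):
    """Toy per-band threshold scaled by the normalized Shannon entropy of the
    band's coefficient magnitudes (illustrative, not the paper's exact rule)."""
    p = np.abs(band) / (np.sum(np.abs(band)) + 1e-12)
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return base * entropy / np.log2(len(band))   # normalized to [0, base]

# A sparse "speech" band buried in noise.
rng = np.random.default_rng(1)
band = rng.standard_normal(256) * 0.1
band[::32] += 3.0
cleaned = soft_threshold(band, band_threshold(band))
```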

  20. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

    This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.

  1. A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm

    PubMed Central

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off between petrol and electricity for optimal driving efficiency. Comparison between calculated results and published data shows that the efficiency achieved by the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974

  2. An Adaptive Reputation-Based Algorithm for Grid Virtual Organization Formation

    NASA Astrophysics Data System (ADS)

    Cui, Yongrui; Li, Mingchu; Ren, Yizhi; Sakurai, Kouichi

    A novel adaptive reputation-based virtual organization formation is proposed. It restrains bad performers effectively based on the consideration of the global experience of the evaluator and evaluates the direct trust relation between two grid nodes accurately by consulting the previous trust value rationally. It also improves the reputation evaluation process of the PathTrust model by taking account of the inter-organizational trust relationship and combining it with direct and recommended trust in a weighted way, which makes the algorithm more robust against collusion attacks. Additionally, the proposed algorithm considers the perspective of the VO creator and takes required VO services as one of the most important fine-grained evaluation criteria, which makes the algorithm more suitable for constructing VOs in grid environments that include autonomous organizations. Simulation results show that our algorithm restrains bad performers and resists fake-transaction and bad-mouthing attacks effectively. It provides a clear advantage in the design of a VO infrastructure.

  3. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed, and an obtainable distortion-rate function is derived for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
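
    The classic high-rate bit-allocation formula that underlies such schemes assigns bits in proportion to the log of each coefficient's variance relative to the geometric mean. A minimal sketch, with a simple proportional redistribution after clipping negative allocations (a simplification of exact water-filling; not the dissertation's algorithm):

```python
import numpy as np

def optimal_bits(variances, total_bits):
    """Classic high-rate allocation: b_i = B/N + 0.5*log2(var_i / gm), where
    gm is the geometric mean of the variances.  Negative allocations are
    clipped and the remaining budget is rescaled proportionally."""
    n = len(variances)
    gm = np.exp(np.mean(np.log(variances)))
    b = total_bits / n + 0.5 * np.log2(variances / gm)
    b = np.maximum(b, 0.0)
    b *= total_bits / b.sum()                  # renormalize after clipping
    return b

# 8 transform coefficients with decaying variances, 24 bits to spend.
bits = optimal_bits(np.array([16.0, 8, 4, 2, 1, 0.5, 0.25, 0.125]), 24)
```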

  4. An Adaptive Cultural Algorithm with Improved Quantum-behaved Particle Swarm Optimization for Sonar Image Detection.

    PubMed

    Wang, Xingmei; Hao, Wenqian; Li, Qiming

    2017-12-18

    This paper proposes an adaptive cultural algorithm with improved quantum-behaved particle swarm optimization (ACA-IQPSO) to detect underwater sonar images. In the population space, to improve the searching ability of particles, the iteration count and the fitness values of particles are used as factors to adaptively adjust the contraction-expansion coefficient of the quantum-behaved particle swarm optimization (QPSO) algorithm. The improved quantum-behaved particle swarm optimization algorithm (IQPSO) lets particles adjust their behaviour according to their quality. In the belief space, a new update strategy is adopted to update cultural individuals according to the idea of the update strategy in the shuffled frog leaping algorithm (SFLA). Moreover, to enhance the utilization of information in the population space and belief space, the accept and influence functions are redesigned in the new communication protocol. The experimental results show that ACA-IQPSO can obtain good cluster centres according to the grey-level distribution information of underwater sonar images, and accurately complete underwater object detection. Compared with other algorithms, the proposed ACA-IQPSO has good effectiveness, excellent adaptability, a powerful searching ability and high convergence efficiency. Meanwhile, the experimental results on benchmark functions further demonstrate that the proposed ACA-IQPSO has better searching ability, convergence efficiency and stability.

  5. An Adaptive Numeric Predictor-corrector Guidance Algorithm for Atmospheric Entry Vehicles. M.S. Thesis - MIT, Cambridge

    NASA Technical Reports Server (NTRS)

    Spratlin, Kenneth Milton

    1987-01-01

    An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles which utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented. The achievable operational footprint for expected worst-case dispersions is presented. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.

  6. Optimizing Retransmission Threshold in Wireless Sensor Networks

    PubMed Central

    Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang

    2016-01-01

    The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold; they simply set the same retransmission threshold for all sensor nodes in advance. That method did not take link quality and delay requirements into account, which decreases the probability of a packet traversing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summation of the probabilities of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ · max{u_i : 1 ≤ i ≤ n}), where u_i is the given upper bound on the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound on the transmission delay of the delivery path. If Δ is not polynomially bounded, to reduce the time complexity, a linear programming-based (1 + p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms perform well. PMID:27171092

  7. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated

  8. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    PubMed

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two
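
    The core FISTA recursion, independent of the OS-SART subproblem used in the paper, can be sketched on a small l1-regularized least-squares problem: a gradient step on the smooth term, soft-thresholding as the proximal step, and a Nesterov extrapolation that yields the accelerated convergence rate. All sizes below are illustrative.

```python
import numpy as np

def fista_lasso(A, b, lam, iters=200):
    """Minimal FISTA for min_x 0.5||Ax - b||^2 + lam*||x||_1: gradient step
    on the smooth part, soft-thresholding as the l1 proximal step, and
    Nesterov momentum giving the accelerated O(1/k^2) rate."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        z = y - grad / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum/extrapolation
        x, t = x_new, t_new
    return x

# Sparse recovery toy problem.
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = 1.0
x_hat = fista_lasso(A, A @ x_true, lam=0.1)
```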

  9. Adaptive geodesic transform for segmentation of vertebrae on CT images

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Shu, Liao; Hermosillo, Gerardo; Zhan, Yiqiang

    2014-03-01

    Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. This is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone. Thus, simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While several other algorithms such as level sets may be used for segmentation, any algorithm that is clinically deployable has to work in under a few seconds. To address these dual challenges we present here a new algorithm based on the geodesic distance transform that is capable of segmenting the spinal vertebrae in under one second. To achieve this we extend the theory of the geodesic distance transforms proposed in [1] to incorporate high-level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or may be automatically generated by another algorithm. We incorporate information 'learnt' using a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept which can be applied to the segmentation of other organs as well.
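
    A minimal Dijkstra-style geodesic distance transform conveys the idea: step costs mix spatial distance with an alpha-weighted intensity difference, so distances grow rapidly across strong edges. The single fixed alpha below stands in for the paper's adaptive, anatomy-informed weighting.

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, alpha=10.0):
    """Dijkstra-style geodesic distance transform on a 2D image: the cost of
    stepping between neighbors mixes unit spatial distance with alpha times
    the intensity change, so distances grow quickly across strong edges."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = [(0.0, s) for s in seeds]
    heapq.heapify(heap)
    for _, (r, c) in heap:
        dist[r, c] = 0.0
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                step = 1.0 + alpha * abs(image[nr, nc] - image[r, c])
                if d + step < dist[nr, nc]:
                    dist[nr, nc] = d + step
                    heapq.heappush(heap, (d + step, (nr, nc)))
    return dist

# Seed inside a bright structure; the transform stays small within it.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
d = geodesic_distance(img, seeds=[(30, 30)])
```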

  10. Robustness of continuous-time adaptive control algorithms in the presence of unmodeled dynamics

    NASA Technical Reports Server (NTRS)

    Rohrs, C. E.; Valavani, L.; Athans, M.; Stein, G.

    1985-01-01

    This paper examines the robustness properties of existing adaptive control algorithms to unmodeled plant high-frequency dynamics and unmeasurable output disturbances. It is demonstrated that there exist two infinite-gain operators in the nonlinear dynamic system which determines the time-evolution of output and parameter errors. The pragmatic implication of the existence of such infinite-gain operators is that: (1) sinusoidal reference inputs at specific frequencies and/or (2) sinusoidal output disturbances at any frequency (including dc) can cause the loop gain to increase without bound, thereby exciting the unmodeled high-frequency dynamics and yielding an unstable control system. Hence, it is concluded that existing adaptive control algorithms, as they are presented in the literature referenced in this paper, cannot be used with confidence in practical designs where the plant contains unmodeled dynamics, because instability is likely to result. Further understanding is required to ascertain how the currently implemented adaptive systems differ from the theoretical systems studied here and how further theoretical development can improve the robustness of adaptive controllers.

  11. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin

    2007-12-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that end, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  12. Viral Diversity Threshold for Adaptive Immunity in Prokaryotes

    PubMed Central

    Weinberger, Ariel D.; Wolf, Yuri I.; Lobkovsky, Alexander E.; Gilmore, Michael S.; Koonin, Eugene V.

    2012-01-01

    Bacteria and archaea face continual onslaughts of rapidly diversifying viruses and plasmids. Many prokaryotes maintain adaptive immune systems known as clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated genes (Cas). CRISPR-Cas systems are genomic sensors that serially acquire viral and plasmid DNA fragments (spacers) that are utilized to target and cleave matching viral and plasmid DNA in subsequent genomic invasions, offering critical immunological memory. Only 50% of sequenced bacteria possess CRISPR-Cas immunity, in contrast to over 90% of sequenced archaea. To probe why half of bacteria lack CRISPR-Cas immunity, we combined comparative genomics and mathematical modeling. Analysis of hundreds of diverse prokaryotic genomes shows that CRISPR-Cas systems are substantially more prevalent in thermophiles than in mesophiles. With sequenced bacteria disproportionately mesophilic and sequenced archaea mostly thermophilic, the presence of CRISPR-Cas appears to depend more on environmental temperature than on bacterial-archaeal taxonomy. Mutation rates are typically severalfold higher in mesophilic prokaryotes than in thermophilic prokaryotes. To quantitatively test whether accelerated viral mutation leads microbes to lose CRISPR-Cas systems, we developed a stochastic model of virus-CRISPR coevolution. The model competes CRISPR-Cas-positive (CRISPR-Cas+) prokaryotes against CRISPR-Cas-negative (CRISPR-Cas−) prokaryotes, continually weighing the antiviral benefits conferred by CRISPR-Cas immunity against its fitness costs. Tracking this cost-benefit analysis across parameter space reveals viral mutation rate thresholds beyond which CRISPR-Cas cannot provide sufficient immunity and is purged from host populations. These results offer a simple, testable viral diversity hypothesis to explain why mesophilic bacteria disproportionately lack CRISPR-Cas immunity. More generally, fundamental limits on the adaptability of biological

  13. The scotopic threshold response of the dark-adapted electroretinogram of the mouse.

    PubMed

    Saszik, Shannon M; Robson, John G; Frishman, Laura J

    2002-09-15

    The most sensitive response in the dark-adapted electroretinogram (ERG), the scotopic threshold response (STR), which originates from the proximal retina, has been identified in several mammals including humans, but previously not in the mouse. The current study established the presence and assessed the nature of the mouse STR. ERGs were recorded from adult wild-type C57/BL6 mice anaesthetized with ketamine (70 mg kg^-1) and xylazine (7 mg kg^-1). Recordings were between DTL fibres placed under contact lenses on the two eyes. Monocular test stimuli were brief flashes (λmax 462 nm; -6.1 to +1.8 log scotopic troland seconds (sc td s)) under fully dark-adapted conditions and in the presence of steady adapting backgrounds (-3.2 to -1.7 log sc td). For the weakest test stimuli, ERGs consisted of a slow negative potential maximal approximately 200 ms after the flash, with a small positive potential preceding it. The negative wave resembled the STR of other species. As intensity was increased, the negative potential saturated but the positive potential (maximal approximately 110 ms) continued to grow as the b-wave. For stimuli that saturated the b-wave, the a-wave emerged. For stimulus strengths up to those at which the a-wave emerged, ERG amplitudes measured at fixed times after the flash (110 and 200 ms) were fitted with a model assuming an initially linear rise of response amplitude with intensity, followed by saturation of five components of declining sensitivity: a negative STR (nSTR), a positive STR (pSTR), a positive scotopic response (pSR), PII (the bipolar cell component) and PIII (the photoreceptor component). The nSTR and pSTR were approximately 3 times more sensitive than the pSR, which was approximately 7 times more sensitive than PII. The sensitive positive components dominated the b-wave up to >5% of its saturated amplitude. Pharmacological agents that suppress proximal retinal activity (e.g. GABA) minimized the pSTR, nSTR and pSR, essentially

  14. Adaptive Swarm Balancing Algorithms for rare-event prediction in imbalanced healthcare data

    PubMed Central

    Wong, Raymond K.; Mohammed, Sabah; Fiaidhi, Jinan; Sung, Yunsick

    2017-01-01

    Clinical data analysis and forecasting have made substantial contributions to disease control, prevention and detection. However, such data usually suffer from highly imbalanced class distributions. In this paper, we aim to formulate effective methods to rebalance binary imbalanced datasets in which the positive samples are the minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and the bat algorithm, and apply them to empower the effect of the synthetic minority over-sampling technique (SMOTE) in pre-processing the datasets. One approach is to process the full dataset as a whole. The other is to split up the dataset and adaptively process it one segment at a time. The experimental results reported in this paper reveal that the performance improvements obtained by the former method do not scale to larger data sizes, where it becomes invalid. The latter methods, which we call Adaptive Swarm Balancing Algorithms, lead to significant efficiency and effectiveness improvements on large datasets. We also find the latter approach more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE. The proposed methods lead to more credible classifier performance and shorter run times compared to the brute-force method. PMID:28753613
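
    Plain SMOTE, whose two key parameters (the number of synthetic samples and the neighborhood size k) are what the swarm search tunes, can be sketched as follows; the defaults below are illustrative.

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE: each synthetic sample is a random interpolation between
    a minority point and one of its k nearest minority neighbors."""
    rng = rng or np.random.default_rng(0)
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]       # k nearest minority neighbors
    synth = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        a = rng.integers(len(X_min))
        b = nbrs[a, rng.integers(k)]
        synth[i] = X_min[a] + rng.random() * (X_min[b] - X_min[a])
    return synth

# Oversample a 20-point minority class with 40 synthetic samples.
X_min = np.random.default_rng(3).standard_normal((20, 4))
X_aug = np.vstack([X_min, smote(X_min, 40)])
```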

  15. Analysis of Online DBA Algorithm with Adaptive Sleep Cycle in WDM EPON

    NASA Astrophysics Data System (ADS)

    Pajčin, Bojan; Matavulj, Petar; Radivojević, Mirjana

    2018-05-01

    In order to manage Quality of Service (QoS) and energy efficiency in the optical access network, an online Dynamic Bandwidth Allocation (DBA) algorithm with adaptive sleep cycle is presented. This DBA algorithm has the ability to allocate an additional bandwidth to the end user within a single sleep cycle whose duration changes depending on the current buffers occupancy. The purpose of this DBA algorithm is to tune the duration of the sleep cycle depending on the network load in order to provide service to the end user without violating strict QoS requests in all network operating conditions.

  16. A Region Tracking-Based Vehicle Detection Algorithm in Nighttime Traffic Scenes

    PubMed Central

    Wang, Jianqiang; Sun, Xiaoyan; Guo, Junbin

    2013-01-01

    The preceding-vehicle detection technique in nighttime traffic scenes is an important part of advanced driver assistance systems (ADAS). This paper proposes a region tracking-based vehicle detection algorithm via image processing techniques. First, the brightness of the taillights during nighttime is used as the typical feature, and we use an existing global detection algorithm to detect and pair the taillights. When the vehicle is detected, a time series analysis model is introduced to predict vehicle positions and the possible region (PR) of the vehicle in the next frame. Then, the vehicle is only detected in the PR. This reduces the detection time and avoids false pairing between bright spots in the PR and bright spots outside the PR. Additionally, we present a threshold-updating method to make the thresholds adaptive. Finally, experimental studies are provided to demonstrate the application and substantiate the superiority of the proposed algorithm. The results show that the proposed algorithm can simultaneously reduce both the false negative detection rate and the false positive detection rate.

  17. An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints

    PubMed Central

    Sung, Jinmo; Jeong, Bongju

    2014-01-01

    The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments. PMID:24701158

  18. An adaptive evolutionary algorithm for traveling salesman problem with precedence constraints.

    PubMed

    Sung, Jinmo; Jeong, Bongju

    2014-01-01

    The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments.

  19. Wavefront sensorless adaptive optics OCT with the DONE algorithm for in vivo human retinal imaging [Invited].

    PubMed

    Verstraete, Hans R G W; Heisler, Morgan; Ju, Myeong Jin; Wahl, Daniel; Bliek, Laurens; Kalkman, Jeroen; Bonora, Stefano; Jian, Yifan; Verhaegen, Michel; Sarunic, Marinko V

    2017-04-01

    In this report, which is an international collaboration of OCT, adaptive optics, and control research, we demonstrate the Data-based Online Nonlinear Extremum-seeker (DONE) algorithm to guide the image based optimization for wavefront sensorless adaptive optics (WFSL-AO) OCT for in vivo human retinal imaging. The ocular aberrations were corrected using a multi-actuator adaptive lens after linearization of the hysteresis in the piezoelectric actuators. The DONE algorithm succeeded in drastically improving image quality and the OCT signal intensity, up to a factor seven, while achieving a computational time of 1 ms per iteration, making it applicable for many high speed applications. We demonstrate the correction of five aberrations using 70 iterations of the DONE algorithm performed over 2.8 s of continuous volumetric OCT acquisition. Data acquired from an imaging phantom and in vivo from human research volunteers are presented.

  20. Properties of perimetric threshold estimates from Full Threshold, SITA Standard, and SITA Fast strategies.

    PubMed

    Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C

    2002-08-01

    To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest (approximately 3 dB) for midrange sensitivities (approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in

  1. Threshold-adaptive canny operator based on cross-zero points

    NASA Astrophysics Data System (ADS)

    Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu

    2018-03-01

    Canny edge detection [1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before edges are separated from the background. Usually, two static values chosen from developer experience are used as the thresholds [2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
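
    For comparison, a widely used automatic thresholding heuristic sets the two Canny thresholds a fixed fraction below and above the image's median intensity. The sketch below uses that median rule (assuming OpenCV is available); it is not the cross-zero interpolation proposed in the paper.

```python
import cv2
import numpy as np

def auto_canny(gray, sigma=0.33):
    """Automatic Canny thresholding via the common median heuristic: the
    low/high thresholds sit a fraction below/above the median intensity
    of the (8-bit grayscale) image."""
    v = float(np.median(gray))
    low = int(max(0, (1.0 - sigma) * v))
    high = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(gray, low, high)

# Usage: edges = auto_canny(cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE))
```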

  2. SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM. (R827028)

    EPA Science Inventory

    A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher-order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable o...

  3. Analysis of convergence of an evolutionary algorithm with self-adaptation using a stochastic Lyapunov function.

    PubMed

    Semenov, Mikhail A; Terkel, Dmitri A

    2003-01-01

    This paper analyses the convergence of evolutionary algorithms using a technique which is based on a stochastic Lyapunov function and developed within martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function, and control parameters, responsible for the variation of fitness parameters. Although both parameter types mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte-Carlo simulations.

  4. Investigation of Adaptive-threshold Approaches for Determining Area-Time Integrals from Satellite Infrared Data to Estimate Convective Rain Volumes

    NASA Technical Reports Server (NTRS)

    Smith, Paul L.; VonderHaar, Thomas H.

    1996-01-01

    The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach has explored two avenues: (1) choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.

  5. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  6. Multiple-Threshold Event Detection and Other Enhancements to the Virtual Seismologist (VS) Earthquake Early Warning Algorithm

    NASA Astrophysics Data System (ADS)

    Fischer, M.; Caprio, M.; Cua, G. B.; Heaton, T. H.; Clinton, J. F.; Wiemer, S.

    2009-12-01

    The Virtual Seismologist (VS) algorithm is a Bayesian approach to earthquake early warning (EEW) being implemented by the Swiss Seismological Service at ETH Zurich. The application of Bayes’ theorem in earthquake early warning states that the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the ongoing earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS algorithm was one of three EEW algorithms involved in the California Integrated Seismic Network (CISN) real-time EEW testing and performance evaluation effort. Its compelling real-time performance in California over the last three years has led to its inclusion in the new USGS-funded effort to develop key components of CISN ShakeAlert, a prototype EEW system that could potentially be implemented in California. A significant portion of VS code development was supported by the SAFER EEW project in Europe. We discuss recent enhancements to the VS EEW algorithm. We developed and continue to test a multiple-threshold event detection scheme, which uses different association/location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to be declared an event to reduce false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and its requirement of at least 4 picks, as implemented by the Binder Earthworm phase associator) to a hybrid on-site/regional approach capable of providing a continuously evolving stream of EEW
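
    The amplitude-dependent pick requirement can be sketched as follows (a toy illustration; the threshold values and the two-tier split are our assumptions, not the VS implementation):

        def picks_required(peak_amp, high=0.05, low=0.005):
            # Hypothetical amplitude tiers: very strong P picks allow a
            # single-station declaration, weaker picks need more confirming
            # stations to keep the false-alarm rate down.
            if peak_amp >= high:
                return 1
            if peak_amp >= low:
                return 2
            return 4  # fall back to traditional 4-pick association

        def declare_event(peak_amps):
            # peak_amps: peak amplitudes of the associated P picks so far
            return len(peak_amps) >= picks_required(max(peak_amps))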

  7. EMD self-adaptive selecting relevant modes algorithm for FBG spectrum signal

    NASA Astrophysics Data System (ADS)

    Chen, Yong; Wu, Chun-ting; Liu, Huan-lin

    2017-07-01

    Noise may reduce the demodulation accuracy of the fiber Bragg grating (FBG) sensing signal and thus degrade the quality of sensing detection, so recovery of the signal from noisy observations is necessary. In this paper, a precise self-adaptive algorithm for selecting relevant modes is proposed to remove the noise from the signal. Empirical mode decomposition (EMD) is first used to decompose the signal into a set of modes. Pseudo-mode cancellation is introduced to identify and eliminate false modes, and then the mutual information (MI) of partial modes is calculated. MI is used to estimate the critical point between the high- and low-frequency components. Simulation results show that the proposed algorithm estimates the critical point more accurately than traditional algorithms for the FBG spectral signal. Moreover, compared with similar algorithms, the proposed algorithm improves the signal-to-noise ratio of the signal by more than 10 dB and increases the correlation coefficient by 0.5, demonstrating a better de-noising effect.
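
    A minimal sketch of EMD-based mode selection (assuming the PyEMD package is available; the adjacent-mode MI rule used to locate the critical point is our simplification, not necessarily the paper's exact criterion):

        import numpy as np
        from PyEMD import EMD  # pip install EMD-signal (assumed dependency)

        def mutual_info(a, b, bins=64):
            # Histogram-based mutual information between two 1-D signals
            pxy, _, _ = np.histogram2d(a, b, bins=bins)
            pxy /= pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

        def denoise(signal):
            imfs = EMD()(signal)  # modes ordered from high to low frequency
            if len(imfs) < 2:
                return signal
            mi = [mutual_info(imfs[i], imfs[i + 1]) for i in range(len(imfs) - 1)]
            k = int(np.argmax(mi)) + 1       # assumed critical-point rule
            return imfs[k:].sum(axis=0)      # keep the low-frequency (relevant) modes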

  8. Wavefront sensorless adaptive optics OCT with the DONE algorithm for in vivo human retinal imaging [Invited]

    PubMed Central

    Verstraete, Hans R. G. W.; Heisler, Morgan; Ju, Myeong Jin; Wahl, Daniel; Bliek, Laurens; Kalkman, Jeroen; Bonora, Stefano; Jian, Yifan; Verhaegen, Michel; Sarunic, Marinko V.

    2017-01-01

    In this report, which is an international collaboration of OCT, adaptive optics, and control research, we demonstrate the Data-based Online Nonlinear Extremum-seeker (DONE) algorithm to guide the image-based optimization for wavefront sensorless adaptive optics (WFSL-AO) OCT for in vivo human retinal imaging. The ocular aberrations were corrected using a multi-actuator adaptive lens after linearization of the hysteresis in the piezoelectric actuators. The DONE algorithm succeeded in drastically improving image quality and the OCT signal intensity, up to a factor of seven, while achieving a computational time of 1 ms per iteration, making it applicable for many high speed applications. We demonstrate the correction of five aberrations using 70 iterations of the DONE algorithm performed over 2.8 s of continuous volumetric OCT acquisition. Data acquired from an imaging phantom and in vivo from human research volunteers are presented. PMID:28736670

  9. Research on adaptive optics image restoration algorithm based on improved joint maximum a posteriori method

    NASA Astrophysics Data System (ADS)

    Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying

    2018-03-01

    In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame adaptive optics (AO) images based on a Gaussian image noise model. To begin with, combining the observing conditions and AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we build up iterative solution formulas for the AO image based on our proposed algorithm, addressing the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration effects, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The research results have certain application value for actual AO image restoration.

  10. DTFP-Growth: Dynamic Threshold-Based FP-Growth Rule Mining Algorithm Through Integrating Gene Expression, Methylation, and Protein-Protein Interaction Profiles.

    PubMed

    Mallik, Saurav; Bhadra, Tapas; Mukherji, Ayan

    2018-04-01

    Association rule mining is an important technique for identifying interesting relationships between gene pairs in a biological data set. Earlier methods basically work on a single biological data set, and, in most cases, a single minimum support cutoff is applied globally, i.e., across all genesets/itemsets. To overcome this limitation, in this paper, we propose a dynamic threshold-based FP-growth rule mining algorithm that integrates gene expression, methylation, and protein-protein interaction profiles based on weighted shortest distance to find novel associations among different pairs of genes in multi-view data sets. For this purpose, we introduce three new thresholds, namely, Distance-based Variable/Dynamic Supports (DVS), Distance-based Variable Confidences (DVC), and Distance-based Variable Lifts (DVL), for each rule by integrating the co-expression, co-methylation, and protein-protein interactions existing in the multi-omics data set. We develop the proposed algorithm utilizing these three novel multiple-threshold measures. In the proposed algorithm, the values of DVS, DVC, and DVL are computed for each rule separately, and subsequently it is verified whether the support, confidence, and lift of each evolved rule are greater than or equal to the corresponding individual DVS, DVC, and DVL values, respectively. If all three conditions hold for a rule, the rule is treated as a resultant rule. One of the major advantages of the proposed method compared with other related state-of-the-art methods is that it considers both the quantitative and interactive significance among all pairwise genes belonging to each rule. Moreover, the proposed method generates fewer rules, takes less running time, and provides greater biological significance for the resultant top-ranking rules compared to previous methods.
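
    The per-rule test reduces to a three-way threshold check (a schematic sketch; computing DVS/DVC/DVL from weighted shortest distances in the multi-omics networks is paper-specific and not reproduced here):

        def filter_rules(rules):
            # Each rule is assumed to be a dict such as
            # {'support': .., 'confidence': .., 'lift': .., 'dvs': .., 'dvc': .., 'dvl': ..}
            # and is kept only if it clears its own dynamic thresholds.
            return [r for r in rules
                    if r['support'] >= r['dvs']
                    and r['confidence'] >= r['dvc']
                    and r['lift'] >= r['dvl']]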

  11. Experimental Evaluation of a Braille-Reading-Inspired Finger Motion Adaptive Algorithm.

    PubMed

    Ulusoy, Melda; Sipahi, Rifat

    2016-01-01

    Braille reading is a complex process involving intricate finger-motion patterns and finger-rubbing actions across Braille letters for the stimulation of appropriate nerves. Although Braille reading is performed by smoothly moving the finger from left-to-right, research shows that even fluent reading requires right-to-left movements of the finger, known as "reversals". Reversals are crucial as they not only enhance stimulation of nerves for correctly reading the letters, but they also allow one to re-read the letters that were missed in the first pass. Moreover, it is known that reversals can be performed as often as in every sentence and can start at any location in a sentence. Here, we report experimental results on the feasibility of an algorithm that can render a machine able to automatically adapt to reversal gestures of one's finger. Through Braille-reading-analogous tasks, the algorithm was tested with thirty sighted subjects who volunteered in the study. We find that the finger motion adaptive algorithm (FMAA) is useful in achieving cooperation between the human finger and the machine. In the presence of FMAA, subjects' performance metrics associated with the tasks improved significantly, as supported by statistical analysis. In light of these encouraging results, preliminary experiments were carried out with five blind subjects with the aim of putting the algorithm to the test. Results obtained from carefully designed experiments showed that subjects' Braille reading accuracy in the presence of FMAA was more favorable than when FMAA was turned off. Utilization of FMAA in future-generation Braille reading devices thus holds strong promise.

  12. Experimental Evaluation of a Braille-Reading-Inspired Finger Motion Adaptive Algorithm

    PubMed Central

    2016-01-01

    Braille reading is a complex process involving intricate finger-motion patterns and finger-rubbing actions across Braille letters for the stimulation of appropriate nerves. Although Braille reading is performed by smoothly moving the finger from left-to-right, research shows that even fluent reading requires right-to-left movements of the finger, known as “reversals”. Reversals are crucial as they not only enhance stimulation of nerves for correctly reading the letters, but they also allow one to re-read the letters that were missed in the first pass. Moreover, it is known that reversals can be performed as often as in every sentence and can start at any location in a sentence. Here, we report experimental results on the feasibility of an algorithm that can render a machine able to automatically adapt to reversal gestures of one’s finger. Through Braille-reading-analogous tasks, the algorithm was tested with thirty sighted subjects who volunteered in the study. We find that the finger motion adaptive algorithm (FMAA) is useful in achieving cooperation between the human finger and the machine. In the presence of FMAA, subjects’ performance metrics associated with the tasks improved significantly, as supported by statistical analysis. In light of these encouraging results, preliminary experiments were carried out with five blind subjects with the aim of putting the algorithm to the test. Results obtained from carefully designed experiments showed that subjects’ Braille reading accuracy in the presence of FMAA was more favorable than when FMAA was turned off. Utilization of FMAA in future-generation Braille reading devices thus holds strong promise. PMID:26849058

  13. Angular-contact ball-bearing internal load estimation algorithm using runtime adaptive relaxation

    NASA Astrophysics Data System (ADS)

    Medina, H.; Mutu, R.

    2017-07-01

    An algorithm to estimate internal loads for single-row angular-contact ball bearings due to externally applied thrust loads and high operating speeds is presented. A new runtime adaptive relaxation procedure and blending function are proposed which ensure algorithm stability while also reducing the number of iterations needed to reach convergence, leading to an average reduction in computation time of approximately 80%. The model is validated against a 218 angular-contact bearing and shows excellent agreement with published results.

  14. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
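
    A compact numpy sketch of the alternating POCS/TV scheme described above; tying the TV step size to the magnitude of the preceding POCS update mimics the automatic-tuning idea, but the specific heuristics (the 0.2 factor, iteration counts) are our assumptions, not the paper's rules:

        import numpy as np

        def tv_grad(img, eps=1e-8):
            # Gradient of smoothed isotropic total variation of a 2-D image
            gx = np.diff(img, axis=0, append=img[-1:, :])
            gy = np.diff(img, axis=1, append=img[:, -1:])
            mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
            div_x = np.diff(gx / mag, axis=0, prepend=np.zeros((1, img.shape[1])))
            div_y = np.diff(gy / mag, axis=1, prepend=np.zeros((img.shape[0], 1)))
            return -(div_x + div_y)

        def two_stage_recon(A, b, shape, iters=50, n_tv=10):
            # A: dense system matrix, b: projection data, shape: image shape
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                x_old = x.copy()
                for i in range(A.shape[0]):                  # ART sweep (POCS, data fidelity)
                    ai = A[i]
                    x += (b[i] - ai @ x) / (ai @ ai + 1e-12) * ai
                x = np.maximum(x, 0)                         # non-negativity constraint (POCS)
                dp = np.linalg.norm(x - x_old)               # size of the POCS update
                img = x.reshape(shape)
                for _ in range(n_tv):                        # TV steepest descent
                    g = tv_grad(img)
                    img -= 0.2 * dp * g / (np.linalg.norm(g) + 1e-12)
                x = img.ravel()
            return x.reshape(shape)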

  15. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and
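
    For reference, the standard FISTA iteration that the proposed variants build on, written as a generic l1-regularized least-squares sketch with a soft-thresholding prox (the paper's variants replace the gradient step with OS-SART subproblems; this generic form is our illustration):

        import numpy as np

        def fista(A, b, lam, iters=100):
            # Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
            for _ in range(iters):
                g = A.T @ (A @ y - b)                                   # gradient step
                x_new = y - g / L
                x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - lam / L, 0.0)  # prox
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))        # momentum schedule
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)           # extrapolation
                x, t = x_new, t_new
            return x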

  16. Should the parameters of a BCI translation algorithm be continually adapted?

    PubMed

    McFarland, Dennis J; Sarnacki, William A; Wolpaw, Jonathan R

    2011-07-15

    People with or without motor disabilities can learn to control sensorimotor rhythms (SMRs) recorded from the scalp to move a computer cursor in one or more dimensions or can use the P300 event-related potential as a control signal to make discrete selections. Data collected from individuals using an SMR-based or P300-based BCI were evaluated offline to estimate the impact on performance of continually adapting the parameters of the translation algorithm during BCI operation. The performance of the SMR-based BCI was enhanced by adaptive updating of the feature weights or adaptive normalization of the features. In contrast, P300 performance did not benefit from either of these procedures. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. A General, Adaptive, Roadmap-Based Algorithm for Protein Motion Computation.

    PubMed

    Molloy, Kevin; Shehu, Amarda

    2016-03-01

    Precious information on protein function can be extracted from a detailed characterization of protein equilibrium dynamics. This remains elusive in wet and dry laboratories, as function-modulating transitions of a protein between functionally-relevant, thermodynamically-stable and meta-stable structural states often span disparate time scales. In this paper we propose a novel, robotics-inspired algorithm that circumvents time-scale challenges by drawing analogies between protein motion and robot motion. The algorithm adapts the popular roadmap-based framework in robot motion computation to handle the more complex protein conformation space and its underlying rugged energy surface. Given known structures representing stable and meta-stable states of a protein, the algorithm yields a time- and energy-prioritized list of transition paths between the structures, with each path represented as a series of conformations. The algorithm balances computational resources between a global search aimed at obtaining a global view of the network of protein conformations and their connectivity and a detailed local search focused on realizing such connections with physically-realistic models. Promising results are presented on a variety of proteins that demonstrate the general utility of the algorithm and its capability to improve the state of the art without employing system-specific insight.

  18. An improved cooperative adaptive cruise control (CACC) algorithm considering invalid communication

    NASA Astrophysics Data System (ADS)

    Wang, Pangwei; Wang, Yunpeng; Yu, Guizhen; Tang, Tieqiao

    2014-05-01

    For the Cooperative Adaptive Cruise Control (CACC) algorithm, existing research mainly focuses on how inter-vehicle communication can be used to develop the CACC controller and on the influence of communication delays and actuator lags on string stability. However, whether string stability can be guaranteed when inter-vehicle communication is partially invalid has hardly been considered. This paper presents an improved CACC algorithm based on sliding mode control theory and analyses the range of CACC controller parameters that maintains string stability. A dynamic model of vehicle spacing deviation in a platoon is then established, and the string stability conditions under the improved CACC are analyzed. Unlike traditional CACC algorithms, the proposed algorithm can ensure the functionality of the CACC system even if inter-vehicle communication is partially invalid. Finally, this paper establishes a platoon of five vehicles to simulate the improved CACC algorithm in MATLAB/Simulink, and the simulation results demonstrate that the improved CACC algorithm can maintain the string stability of a CACC platoon by adjusting the controller parameters and enlarging the spacing to prevent accidents. With guaranteed string stability, the proposed CACC algorithm can prevent oscillation of vehicle spacing and reduce chain collision accidents under real-world circumstances.

  19. Adaptive Cross-correlation Algorithm and Experiment of Extended Scene Shack-Hartmann Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Morgan, Rhonda M.; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.

    2007-01-01

    We have developed a new adaptive cross-correlation (ACC) algorithm to estimate with high accuracy shifts as large as several pixels between two extended-scene images captured by a Shack-Hartmann wavefront sensor (SH-WFS). It determines the positions of all of the extended-scene image cells relative to a reference cell using an FFT-based iterative image-shifting algorithm, and it works with both point-source spot images and extended-scene images. We have also set up a testbed for extended-scene SH-WFS and tested the ACC algorithm with measured data from both point-source and extended-scene images. In this paper we describe our algorithm and present our experimental results.
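
    The core FFT-based iterative shift estimation can be sketched as follows (our illustration of the general technique, not the authors' code; the parabolic sub-pixel interpolation is a common choice we assume here):

        import numpy as np

        def _subpixel(cm1, c0, cp1):
            # Parabolic interpolation around the correlation peak
            d = cm1 - 2.0 * c0 + cp1
            return 0.0 if d == 0 else 0.5 * (cm1 - cp1) / d

        def estimate_shift(ref, img, iters=4):
            # Iteratively estimate the (sy, sx) shift of img relative to ref:
            # cross-correlate via FFT, locate the peak, back-shift img, repeat.
            total = np.zeros(2)
            F_ref_conj = np.conj(np.fft.fft2(ref))
            ky = np.fft.fftfreq(img.shape[0])[:, None]
            kx = np.fft.fftfreq(img.shape[1])[None, :]
            for _ in range(iters):
                c = np.fft.ifft2(F_ref_conj * np.fft.fft2(img)).real
                iy, ix = np.unravel_index(np.argmax(c), c.shape)
                sy = iy - c.shape[0] * (iy > c.shape[0] // 2)   # unwrap to signed shift
                sx = ix - c.shape[1] * (ix > c.shape[1] // 2)
                sy += _subpixel(c[iy - 1, ix], c[iy, ix], c[(iy + 1) % c.shape[0], ix])
                sx += _subpixel(c[iy, ix - 1], c[iy, ix], c[iy, (ix + 1) % c.shape[1]])
                total += (sy, sx)
                shift = np.exp(2j * np.pi * (ky * sy + kx * sx))
                img = np.fft.ifft2(np.fft.fft2(img) * shift).real  # remove the found shift
            return total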

  20. An implicit adaptation algorithm for a linear model reference control system

    NASA Technical Reports Server (NTRS)

    Mabius, L.; Kaufman, H.

    1975-01-01

    This paper presents a stable implicit adaptation algorithm for model reference control. The constraints for stability are found using Lyapunov's second method and do not depend on perfect model following between the system and the reference model. Methods are proposed for satisfying these constraints without estimating the parameters on which the constraints depend.

  1. Path Planning Algorithms for the Adaptive Sensor Fleet

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Hosler, Jeff

    2005-01-01

    The Adaptive Sensor Fleet (ASF) is a general purpose fleet management and planning system being developed by NASA in coordination with NOAA. The current mission of ASF is to provide the capability for autonomous cooperative survey and sampling of dynamic oceanographic phenomena such as current systems and algae blooms. Each ASF vessel is a software model that represents a real world platform that carries a variety of sensors. The OASIS platform will provide the first physical vessel, outfitted with the systems and payloads necessary to execute the oceanographic observations described in this paper. The ASF architecture is being designed for extensibility to accommodate heterogeneous fleet elements, and is not limited to using the OASIS platform to acquire data. This paper describes the path planning algorithms developed for the acquisition phase of a typical ASF task. Given a polygonal target region to be surveyed, the region is subdivided according to the number of vessels in the fleet. The subdivision algorithm seeks a solution in which all subregions have equal area and minimum mean radius. Once the subregions are defined, a dynamic programming method is used to find a minimum-time path for each vessel from its initial position to its assigned region. This path plan includes the effects of water currents as well as avoidance of known obstacles. A fleet-level planning algorithm then shuffles the individual vessel assignments to find the overall solution which puts all vessels in their assigned regions in the minimum time. This shuffle algorithm may be described as a process of elimination on the sorted list of permutations of a cost matrix. All these path planning algorithms are facilitated by discretizing the region of interest onto a hexagonal tiling.
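
    The fleet-level shuffle step can be illustrated by a brute-force search over assignment permutations (our sketch; the paper's process-of-elimination over a sorted cost matrix is a more efficient equivalent, but exhaustive search is fine for small fleets):

        from itertools import permutations

        def best_assignment(cost):
            # cost[i][j]: travel time for vessel i to reach subregion j.
            # Minimize the fleet completion time, i.e. the slowest arrival.
            n = len(cost)
            best, best_t = None, float('inf')
            for perm in permutations(range(n)):
                t = max(cost[i][perm[i]] for i in range(n))
                if t < best_t:
                    best, best_t = perm, t
            return best, best_t

        # Vessel 0 -> region 1, vessel 1 -> region 0; completion time 2
        print(best_assignment([[3, 1], [2, 5]]))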

  2. Twelve automated thresholding methods for segmentation of PET images: a phantom study.

    PubMed

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M

    2012-06-21

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering, or non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
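
    Ridler's clustering-based method, one of the two best performers here, reduces to the classic iterative intermeans rule; a generic sketch applied to voxel intensities (our illustration, not the study's implementation):

        import numpy as np

        def ridler_threshold(values, tol=0.5):
            # values: 1-D array of voxel intensities, e.g. img.ravel().
            # Repeatedly set the threshold to the mean of the two class means.
            t = values.mean()
            while True:
                lo, hi = values[values <= t], values[values > t]
                if lo.size == 0 or hi.size == 0:
                    return t
                t_new = 0.5 * (lo.mean() + hi.mean())
                if abs(t_new - t) < tol:
                    return t_new
                t = t_new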

  3. Twelve automated thresholding methods for segmentation of PET images: a phantom study

    NASA Astrophysics Data System (ADS)

    Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.

    2012-06-01

    Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering, or non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.

  4. Sensitivity and Specificity of Swedish Interactive Threshold Algorithm and Standard Full Threshold Perimetry in Primary Open-angle Glaucoma.

    PubMed

    Bamdad, Shahram; Beigi, Vahid; Sedaghat, Mohammad Reza

    2017-01-01

    Perimetry is one of the mainstays of glaucoma diagnosis and treatment. Various strategies offer different accuracies in glaucoma testing. Our aim was to determine and compare the diagnostic sensitivity and specificity of the Swedish Interactive Threshold Algorithm (SITA) Fast and Standard Full Threshold (SFT) strategies of the Humphrey Field Analyzer (HFA) in identifying patients with visual field defects in glaucoma. This prospective observational case series was conducted in a university-based eye hospital. A total of 37 eyes of 20 patients with glaucoma were evaluated using the central 30-2 program with both the SITA Fast and SFT strategies. Each strategy was performed in each session, four times over a 2-week period. Data were analyzed using Student's t-test, analysis of variance, and the chi-square test. The SITA Fast and SFT strategies had a similar sensitivity of 93.3%. The specificity of the SITA Fast and SFT strategies was 57.4% and 71.4%, respectively. The mean duration of SFT tests was 14.6 minutes, and that of SITA Fast tests was 5.45 minutes (a statistically significant 62.5% reduction). In gray scale plots, the visual field defect was less deep in SITA Fast than in SFT; however, more points had significant defects (p < 0.5% and p < 1%) in pattern deviation plots in SITA Fast than in SFT; these differences were not clinically significant. In conclusion, the SITA Fast strategy showed sensitivity comparable to the SFT strategy for the detection of glaucoma, yet with reduced specificity; however, the shorter test duration makes it a more acceptable choice in many clinical situations, especially for children, the elderly, and those with musculoskeletal diseases.

  5. Real time tracking by LOPF algorithm with mixture model

    NASA Astrophysics Data System (ADS)

    Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo

    2007-11-01

    A new particle filter, the Local Optimum Particle Filter (LOPF), is presented for tracking objects accurately and steadily in visual sequences in real time, which is a challenging task in the computer vision field. To use the particles efficiently, we first apply the Sobel operator to extract the profile of the object. Then, we employ a new local optimum algorithm to auto-initialize a certain number of particles from these edge points as particle centres. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can pay more attention to the more important optimum candidates and reduce unnecessary calculation on negligible ones; in addition, we can mitigate the conventional degeneracy phenomenon and decrease the computational costs. Moreover, the threshold is a key factor that strongly affects the results, so we adopt an adaptive threshold selection method to get the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. Here, we use both the contour cue to select the particles and the color cue to describe the targets in a mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm when compared with that of the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.

  6. Downscaling Land Surface Temperature in Complex Regions by Using Multiple Scale Factors with Adaptive Thresholds

    PubMed Central

    Yang, Yingbao; Li, Xiaolong; Pan, Xin; Zhang, Yong; Cao, Chen

    2017-01-01

    Many downscaling algorithms have been proposed to address the issue of the coarse resolution of land surface temperature (LST) derived from available satellite-borne sensors. However, few studies have focused on improving LST downscaling in urban areas with several mixed surface types. In this study, LST was downscaled by a multiple linear regression model between LST and multiple scale factors in mixed areas with three or four surface types. The correlation coefficients (CCs) between LST and the scale factors were used to assess the importance of the scale factors within a moving window, and CC thresholds determined which factors participated in the fitting of the regression equation. The proposed downscaling approach, which involves an adaptive selection of the scale factors, was evaluated using the LST derived from four Landsat 8 thermal images of Nanjing City in different seasons. Results of the visual and quantitative analyses show that the proposed approach achieves relatively satisfactory downscaling results on 11 August, with a coefficient of determination and root-mean-square error of 0.87 and 1.13 °C, respectively. Relative to other approaches, our approach shows similar accuracy and is applicable in all seasons. The best (worst) applicability occurred in regions of vegetation (water). Thus, the approach is an efficient and reliable LST downscaling method. Future tasks include reliable LST downscaling in challenging regions and the application of our model at middle and low spatial resolutions. PMID:28368301

  7. Dynamic game balancing implementation using adaptive algorithm in mobile-based Safari Indonesia game

    NASA Astrophysics Data System (ADS)

    Yuniarti, Anny; Nata Wardanie, Novita; Kuswardayan, Imam

    2018-03-01

    In developing a game there is one method that should be applied to maintain the interest of players, namely dynamic game balancing. Dynamic game balancing is a process to match a player's playing style with the behaviour, attributes, and game environment. This study applies dynamic game balancing using an adaptive algorithm in a scrolling-shooter game called Safari Indonesia, developed using Unity. A game of this type features a fighter aircraft character trying to defend itself from persistent enemy attacks. This classic genre was chosen for implementing adaptive algorithms because it has sufficiently complex attributes to be developed using dynamic game balancing. Tests conducted by distributing questionnaires to a number of players indicate that this method managed to reduce frustration and increase the pleasure factor in playing.

  8. A parallel adaptive quantum genetic algorithm for the controllability of arbitrary networks.

    PubMed

    Li, Yuhong; Gong, Guanghong; Li, Ni

    2018-01-01

    In this paper, we propose a novel algorithm, the parallel adaptive quantum genetic algorithm, which can rapidly determine the minimum set of control nodes of arbitrary networks with both control nodes and state nodes. The corresponding network can be fully controlled with the obtained control scheme. We transformed the network controllability issue into a combinatorial optimization problem based on the Popov-Belevitch-Hautus rank condition. A set of canonical networks and a list of real-world networks were used in experiments. Comparison results demonstrated that the algorithm is better suited to optimizing the controllability of networks, especially larger ones. We subsequently demonstrated that there are links between the optimal control nodes and some network statistical characteristics. The proposed algorithm provides an effective approach to improving the controllability optimization of large networks, even extra-large networks with hundreds of thousands of nodes.
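
    The feasibility test at the heart of this formulation is the Popov-Belevitch-Hautus (PBH) rank condition; a candidate set of control nodes, encoded as the columns of B, is acceptable only if the following standard check passes (a generic numpy sketch, not the paper's parallel implementation):

        import numpy as np

        def is_controllable(A, B, tol=1e-9):
            # PBH test: (A, B) is controllable iff rank([A - lam*I, B]) = n
            # for every eigenvalue lam of the state matrix A.
            n = A.shape[0]
            for lam in np.linalg.eigvals(A):
                M = np.hstack([A - lam * np.eye(n), B])
                if np.linalg.matrix_rank(M, tol) < n:
                    return False
            return True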

  9. Bilevel thresholding of sliced image of sludge floc.

    PubMed

    Chu, C P; Lee, D J

    2004-02-15

    This work examined the feasibility of employing various thresholding algorithms to determine the optimal bilevel thresholding value for estimating the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from the images depends on the bilevel thresholding value. According to an evaluation on luminescence-inverted images and fractal curves (the quadratic Koch curve and the Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and was chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing sludge floc structure.

  10. A Controlled Study of the Effectiveness of an Adaptive Closed-Loop Algorithm to Minimize Corticosteroid-Induced Stress Hyperglycemia in Type 1 Diabetes

    PubMed Central

    Youssef, Joseph El; Castle, Jessica R; Branigan, Deborah L; Massoud, Ryan G; Breen, Matthew E; Jacobs, Peter G; Bequette, B Wayne; Ward, W Kenneth

    2011-01-01

    To be effective in type 1 diabetes, algorithms must be able to limit hyperglycemic excursions resulting from medical and emotional stress. We tested an algorithm that estimates insulin sensitivity at regular intervals and continually adjusts gain factors of a fading memory proportional-derivative (FMPD) algorithm. In order to assess whether the algorithm could appropriately adapt and limit the degree of hyperglycemia, we administered oral hydrocortisone repeatedly to create insulin resistance. We compared this indirect adaptive proportional-derivative (APD) algorithm to the FMPD algorithm, which used fixed gain parameters. Each subject with type 1 diabetes (n = 14) was studied on two occasions, each for 33 h. The APD algorithm consistently identified a fall in insulin sensitivity after hydrocortisone. The gain factors and insulin infusion rates were appropriately increased, leading to satisfactory glycemic control after adaptation (premeal glucose on day 2, 148 ± 6 mg/dl). After sufficient time was allowed for adaptation, the late postprandial glucose increment was significantly lower than when measured shortly after the onset of the steroid effect. In addition, during the controlled comparison, glycemia was significantly lower with the APD algorithm than with the FMPD algorithm. No increase in hypoglycemic frequency was found in the APD-only arm. An afferent system of duplicate amperometric sensors demonstrated a high degree of accuracy; the mean absolute relative difference of the sensor used to control the algorithm was 9.6 ± 0.5%. We conclude that an adaptive algorithm that frequently estimates insulin sensitivity and adjusts gain factors is capable of minimizing corticosteroid-induced stress hyperglycemia. PMID:22226248
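
    In spirit, the adaptation amounts to scaling proportional-derivative gains by the inverse of the estimated insulin sensitivity (a toy sketch; all gains, units, the target, and the 1/sensitivity scaling rule are illustrative assumptions, not the published controller):

        def apd_rate(glucose, dglucose_dt, sensitivity, basal=1.0,
                     kp=0.005, kd=0.05, target=115.0):
            # glucose in mg/dl, dglucose_dt in mg/dl/min; returns units/h.
            # As estimated insulin sensitivity falls (resistance rises),
            # the PD gains are scaled up to keep glycemia in range.
            scale = 1.0 / max(sensitivity, 0.1)
            rate = basal + scale * (kp * (glucose - target) + kd * dglucose_dt)
            return max(rate, 0.0)   # insulin delivery cannot be negative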

  11. Clinical evaluation of pacemaker automatic capture management and atrioventricular interval extension algorithm.

    PubMed

    Chen, Ke-ping; Xu, Geng; Wu, Shulin; Tang, Baopeng; Wang, Li; Zhang, Shu

    2013-03-01

    The present study aimed to assess the accuracy of automatic atrial and ventricular capture management (ACM and VCM) in determining pacing thresholds and the performance of a second-generation automatic atrioventricular (AV) interval extension algorithm for reducing unnecessary ventricular pacing. A total of 398 patients at 32 centres who received an EnPulse dual-chamber pacing/dual-chamber adaptive rate pacing pacemaker (Medtronic, Minneapolis, MN, USA) were enrolled. The last amplitude thresholds as measured by ACM and VCM prior to the 6-month follow-up were compared with manually measured thresholds. Device diagnostics were used to evaluate ACM and VCM and the percentage of ventricular pacing with and without the AV extension algorithm. Modelling was performed to assess longevity gains relating to the use of automaticity features. Atrial and ventricular capture management performed accurately and reliably provided complete capture management in 97% of studied patients. The AV interval extension algorithm reduced the median per cent of right ventricular pacing in patients with sinus node dysfunction from 99.7 to 1.5% at 6-month follow-up and in patients with intermittent AV block (excluding persistent 3° AV block) from 99.9 to 50.2%. On the basis of validated modelling, estimated device longevity could potentially be extended by 1.9 years through the use of the capture management and AV interval extension features. Both ACM and VCM features reliably measured thresholds in nearly all patients; the AV extension algorithm significantly reduced ventricular pacing; and the use of pacemaker automaticity features potentially extends device longevity.

  12. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring.

    PubMed

    Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de

    2017-11-05

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving power efficiency while ensuring the accuracy of the sampled data. The developed algorithm is evaluated using two distinct key parameters, which are dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset to compare with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior, saving 5.31% more battery energy.
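
    A data-driven sampling-interval rule of this kind can be sketched as follows (a rule of thumb we assume for illustration, not the paper's exact DDASA update; intervals are in minutes):

        def next_interval(history, dt_min=5.0, dt_max=60.0, sens=2.0):
            # Sample faster when the parameter (e.g., DO or turbidity)
            # changes quickly, slower when it is steady, to save energy.
            if len(history) < 2:
                return dt_min
            rate = abs(history[-1] - history[-2])   # recent rate of change
            dt = dt_max / (1.0 + sens * rate)
            return min(max(dt, dt_min), dt_max)     # clamp to safe bounds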

  13. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring

    PubMed Central

    Shu, Tongxin; Xia, Min; Chen, Jiahong; de Silva, Clarence

    2017-01-01

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving power efficiency while ensuring the accuracy of the sampled data. The developed algorithm is evaluated using two distinct key parameters, which are dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset to compare with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior, saving 5.31% more battery energy. PMID:29113087

  14. An Adaptive Cross-Correlation Algorithm for Extended-Scene Shack-Hartmann Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.

    2007-01-01

    This viewgraph presentation reviews the adaptive cross-correlation (ACC) algorithm for extended-scene Shack-Hartmann wavefront (WF) sensing. A Shack-Hartmann sensor places a lenslet array at a plane conjugate to the WF error source, and each sub-aperture lenslet samples the WF in the corresponding patch of the WF. A description of the ACC algorithm is included. The ACC has several benefits, among them: it requires only about four image-shifting iterations to achieve 0.01-pixel accuracy, and it is insensitive to both background light and noise, making it much more robust than centroiding.

  15. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    NASA Astrophysics Data System (ADS)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays often suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously affects the imaging quality of a thermal imager, especially for small-target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison with several previously published methods. The algorithm not only effectively corrects common FPN such as stripes, but also has an obvious advantage over current methods in detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm on the FPGA has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.

  16. Adaptive and accelerated tracking-learning-detection

    NASA Astrophysics Data System (ADS)

    Guo, Pengyu; Li, Xin; Ding, Shaowen; Tian, Zunhua; Zhang, Xiaohu

    2013-08-01

    An improved online long-term visual tracking algorithm, named adaptive and accelerated TLD (AA-TLD), based on the novel Tracking-Learning-Detection (TLD) framework, is introduced in this paper. The improvement focuses on two aspects. One is adaptation, which removes the dependence on pre-defined scanning grids by generating the scale space online. The other is efficiency, which uses algorithm-level acceleration, such as scale prediction with an auto-regression moving-average (ARMA) model that learns the object motion to narrow the detector's search range, and a fixed number of positive and negative samples that ensures a constant retrieval time, as well as CPU and GPU parallel technology for hardware acceleration. In addition, to obtain a better effect, some of TLD's details are redesigned: results are integrated with a weight that includes both the normalized correlation coefficient and the scale size, and distance-metric thresholds are adjusted online. A contrastive experiment on success rate, center-location error, and execution time shows a performance and efficiency upgrade over the state-of-the-art TLD on partial TLD datasets and Shenzhou IX return-capsule image sequences. The algorithm can be used in video surveillance to meet the need for real-time video tracking.

  17. Research on AHP decision algorithms based on BP algorithm

    NASA Astrophysics Data System (ADS)

    Ma, Ning; Guan, Jianhe

    2017-10-01

    Decision making is the thinking activity by which people choose or judge, and scientific decision making has always been a hot issue in research. The Analytic Hierarchy Process (AHP) is a simple and practical multi-criteria, multi-objective decision-making method that combines quantitative and qualitative analysis and can express and calculate subjective judgments in numerical form. In decision analysis using the AHP method, the rationality of the pairwise judgment matrix has a great influence on the decision result. However, in dealing with real problems, the judgment matrix produced by pairwise comparison is often inconsistent, that is, it does not meet the consistency requirement. The BP neural network algorithm is an adaptive nonlinear dynamic system with powerful collective computing and learning abilities; it can refine the data by constantly modifying the weights and thresholds of the network to minimize the mean square error. In this paper, the BP algorithm is used to address the consistency of the AHP judgment matrix.
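
    The consistency test that the corrected judgment matrix must satisfy is Saaty's consistency ratio, sketched below (a standard computation; the BP training loop that adjusts the matrix is omitted here):

        import numpy as np

        # Saaty's random consistency indices for matrix orders 1..9
        RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45]

        def consistency_ratio(J):
            # CR = CI / RI with CI = (lambda_max - n) / (n - 1); a judgment
            # matrix is conventionally acceptable when CR < 0.1.
            n = J.shape[0]
            if n < 3:
                return 0.0  # 1x1 and 2x2 matrices are always consistent
            lam_max = np.max(np.linalg.eigvals(J).real)
            ci = (lam_max - n) / (n - 1)
            return ci / RI[n - 1]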

  18. Wireless rake-receiver using adaptive filter with a family of partial update algorithms in noise cancellation applications

    NASA Astrophysics Data System (ADS)

    Fayadh, Rashid A.; Malek, F.; Fadhil, Hilal A.; Aldhaibani, Jaafar A.; Salman, M. K.; Abdullah, Farah Salwani

    2015-05-01

    For high-data-rate propagation in wireless ultra-wideband (UWB) communication systems, inter-symbol interference (ISI), multiple-access interference (MAI), and multiple-user interference (MUI) degrade the performance of the wireless system. In this paper, a rake receiver is presented with the signal spread by the direct-sequence spread-spectrum (DS-SS) technique. The adaptive rake-receiver structure is shown with the receiver tap weights adjusted using the least mean squares (LMS), normalized least mean squares (NLMS), and affine projection (APA) algorithms to support weak signals through noise cancellation and to mitigate the interferences. To reduce the computational complexity of these algorithms, the well-known approach of partial-update (PU) adaptive filtering was employed in the proposed system, with algorithms such as sequential-partial, periodic-partial, M-max-partial, and selective-partial updates (SPU). The simulation results of bit error rate (BER) versus signal-to-noise ratio (SNR) show that the partial-update algorithms have nearly comparable performance with the full-update adaptive filters. Furthermore, SPU performs close to full-NLMS and full-APA, while M-max performs close to the full-LMS update algorithm.
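
    The M-max partial-update idea can be sketched on top of a plain NLMS tap adaptation (a generic illustration under assumed parameters, not the authors' receiver code):

        import numpy as np

        def mmax_nlms(x, d, taps=16, mu=0.5, m=4, eps=1e-6):
            # x: received samples, d: desired (training) signal, same length.
            # Only the m taps with the largest input magnitudes are updated
            # each step, cutting the per-sample update cost.
            w = np.zeros(taps)
            y = np.zeros(len(d))
            for n in range(taps, len(d)):
                u = x[n - taps:n][::-1]               # regressor, most recent first
                y[n] = w @ u
                e = d[n] - y[n]
                idx = np.argsort(np.abs(u))[-m:]      # m largest-magnitude inputs
                w[idx] += mu * e * u[idx] / (u @ u + eps)
            return w, y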

  19. Influence of aging on thermal and vibratory thresholds of quantitative sensory testing.

    PubMed

    Lin, Yea-Huey; Hsieh, Song-Chou; Chao, Chi-Chao; Chang, Yang-Chyuan; Hsieh, Sung-Tsang

    2005-09-01

    Quantitative sensory testing has become a common approach to evaluate thermal and vibratory thresholds in various types of neuropathies. To understand the effect of aging on sensory perception, we measured warm, cold, and vibratory thresholds by performing quantitative sensory testing on a population of 484 normal subjects (175 males and 309 females), aged 48.61 +/- 14.10 (range 20-86) years. Sensory thresholds of the hand and foot were measured with two algorithms: the method of limits (Limits) and the method of level (Level). Thresholds measured by Limits are reaction-time-dependent, while those measured by Level are independent of reaction time. In addition, we explored (1) the correlations of thresholds between these two algorithms, (2) the effect of age on differences in thresholds between algorithms, and (3) differences in sensory thresholds between the two test sites. Age was consistently and significantly correlated with sensory thresholds of all tested modalities measured by both algorithms on multivariate regression analysis compared with other factors, including gender, body height, body weight, and body mass index. When thresholds were plotted against age, slopes differed between sensory thresholds of the hand and those of the foot: for the foot, slopes were steeper compared with those for the hand for each sensory modality. Sensory thresholds of both test sites measured by Level were highly correlated with those measured by Limits, and thresholds measured by Limits were higher than those measured by Level. Differences in sensory thresholds between the two algorithms were also correlated with age: thresholds of the foot were higher than those of the hand for each sensory modality. This difference in thresholds (measured with both Level and Limits) between the hand and foot was also correlated with age. These findings suggest that age is the most significant factor in determining sensory thresholds compared with the other factors of gender and

  20. Thresholds for Coral Bleaching: Are Synergistic Factors and Shifting Thresholds Changing the Landscape for Management? (Invited)

    NASA Astrophysics Data System (ADS)

    Eakin, C.; Donner, S. D.; Logan, C. A.; Gledhill, D. K.; Liu, G.; Heron, S. F.; Christensen, T.; Rauenzahn, J.; Morgan, J.; Parker, B. A.; Hoegh-Guldberg, O.; Skirving, W. J.; Strong, A. E.

    2010-12-01

    As carbon dioxide rises in the atmosphere, climate change and ocean acidification are modifying important physical and chemical parameters in the oceans with resulting impacts on coral reef ecosystems. Rising CO2 is warming the world’s oceans and causing corals to bleach, with both alarming frequency and severity. The frequent return of stressful temperatures has already resulted in major damage to many of the world’s coral reefs and is expected to continue in the foreseeable future. Warmer oceans also have contributed to a rise in coral infectious diseases. Both bleaching and infectious disease can result in coral mortality and threaten one of the most diverse ecosystems on Earth and the important ecosystem services they provide. Additionally, ocean acidification from rising CO2 is reducing the availability of carbonate ions needed by corals to build their skeletons and perhaps depressing the threshold for bleaching. While thresholds vary among species and locations, it is clear that corals around the world are already experiencing anomalous temperatures that are too high, too often, and that warming is exceeding the rate at which corals can adapt. This is despite a complex adaptive capacity that involves both the coral host and the zooxanthellae, including changes in the relative abundance of the latter in their coral hosts. The safe upper limit for atmospheric CO2 is probably somewhere below 350 ppm, a level we passed decades ago, and for temperature is a sustained global temperature increase of less than 1.5°C above pre-industrial levels. How much can corals acclimate and/or adapt to these unprecedentedly fast-changing environmental conditions? Any change in the threshold for coral bleaching as the result of acclimation and/or adaptation may help corals to survive in the future, but adaptation to one stress may be maladaptive to another. There also is evidence that ocean acidification and nutrient enrichment modify this threshold. What do shifting thresholds mean

  1. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    PubMed

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and to prevent it from falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved entropy minimum algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy was 10% higher than that obtained with the improved entropy minimum algorithm. This algorithm can be applied to the correction of the MR image bias field.
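
    A minimal sketch of PSO with an adaptively adjusted inertia weight (the premature-convergence indicator and the weight schedule below are illustrative choices, not the paper's exact formulas):

        import numpy as np

        def adaptive_pso(f, dim, n=30, iters=200, seed=None):
            rng = np.random.default_rng(seed)
            x = rng.uniform(-1, 1, (n, dim)); v = np.zeros((n, dim))
            p = x.copy()
            pf = np.apply_along_axis(f, 1, x)          # personal-best fitness
            for _ in range(iters):
                fit = np.apply_along_axis(f, 1, x)
                better = fit < pf
                p[better], pf[better] = x[better], fit[better]
                g = p[np.argmin(pf)]                   # global best
                # Premature-convergence indicator: normalized fitness spread
                spread = fit.std() / (abs(fit.mean()) + 1e-12)
                # Clustered swarm (small spread) -> larger inertia to escape
                w = 0.4 + 0.5 * np.exp(-spread)
                r1, r2 = rng.random((2, n, dim))
                v = np.clip(w * v + 2.0 * r1 * (p - x) + 2.0 * r2 * (g - x), -1, 1)
                x = x + v
            g = p[np.argmin(pf)]
            return g, float(f(g))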

  2. STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations

    NASA Technical Reports Server (NTRS)

    Shah, S. N.

    1981-01-01

    The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100, and why these adaptations yielded an efficient STAR program, is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.

  3. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacon, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid --FAC-- algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).

  4. ADART: an adaptive algebraic reconstruction algorithm for discrete tomography.

    PubMed

    Maestre-Deusto, F Javier; Scavello, Giovanni; Pizarro, Joaquín; Galindo, Pedro L

    2011-08-01

    In this paper we propose an algorithm based on the Discrete Algebraic Reconstruction Technique (DART) which is capable of computing high quality reconstructions from substantially fewer projections than required for conventional continuous tomography. Adaptive DART (ADART) goes a step further than DART in reducing the number of unknowns of the associated linear system, achieving a significant reduction in the pixel error rate of reconstructed objects. The proposed methodology automatically adapts the border definition criterion at each iteration, resulting in a reduction of the number of pixels belonging to the border, and consequently of the number of unknowns in the general algebraic reconstruction linear system to be solved, with this reduction being especially important at the final stage of the iterative process. Experimental results show that reconstruction errors are considerably reduced using ADART when compared to the original DART, in both clean and noisy environments.

  5. Processing of fetal heart rate through non-invasive adaptive system based on recursive least squares algorithm

    NASA Astrophysics Data System (ADS)

    Fajkus, Marcel; Nedoma, Jan; Martinek, Radek; Vasinek, Vladimir

    2017-10-01

    In this article, we describe an innovative non-invasive method of Fetal Phonocardiography (fPCG) using fiber-optic sensors and an adaptive algorithm for the measurement of fetal heart rate (fHR). Conventional PCG is based on non-invasive scanning of acoustic signals by means of a microphone placed on the thorax; for fPCG, the microphone is placed on the maternal abdomen. Our solution is based on patent-pending non-invasive scanning of acoustic signals by means of a fiber-optic interferometer. Fiber-optic sensors are immune to technical artifacts such as electromagnetic interference (EMI), so they can be used in situations where conventional EFM methods cannot, e.g. during Magnetic Resonance Imaging (MRI) examination or delivery in water. The adaptive evaluation system is based on the Recursive Least Squares (RLS) algorithm. Based on real measurements performed on five volunteers with their written consent, we created a simplified dynamic signal model of the distribution of heartbeat sounds (HS) through the human body. This model allows us to verify the RLS algorithm of the proposed adaptive system. The functionality of the proposed non-invasive adaptive system was verified with objective parameters such as Sensitivity (S+) and Signal to Noise Ratio (SNR).
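
    The RLS core of such an adaptive canceller can be sketched as follows (Python; the signal roles, filter order, and forgetting factor are illustrative assumptions, not the paper's configuration):

    ```python
    import numpy as np

    def rls_canceller(d, x, order=8, lam=0.99, delta=100.0):
        """Textbook RLS adaptive filter. Here d is the sensor (primary) signal
        and x a reference input; the returned a priori error e is taken as the
        enhanced output."""
        w = np.zeros(order)
        P = np.eye(order) * delta          # inverse correlation matrix estimate
        e = np.zeros(len(d))
        for n in range(order, len(d)):
            u = x[n - order:n][::-1]       # most recent reference samples first
            k = P @ u / (lam + u @ P @ u)  # gain vector
            e[n] = d[n] - w @ u            # a priori estimation error
            w = w + k * e[n]               # coefficient update
            P = (P - np.outer(k, u @ P)) / lam
        return e
    ```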

  6. An improved self-adaptive ant colony algorithm based on genetic strategy for the traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Wang, Pan; Zhang, Yi; Yan, Dong

    2018-05-01

    Ant Colony Algorithm (ACA) is a powerful and effective algorithm for solving combinatorial optimization problems, and it has been used successfully on the traveling salesman problem (TSP). However, it tends to converge prematurely to non-global optima, and its computation time is long. To overcome those shortcomings, a new method is presented: an improved self-adaptive Ant Colony Algorithm based on a genetic strategy. The proposed method adopts an adaptive strategy to adjust the parameters dynamically, and applies new crossover and inversion operations drawn from the genetic strategy. We also report experiments using well-known instances from TSPLIB. The experimental results show that the proposed method outperforms the basic Ant Colony Algorithm and several improved ACA variants in both solution quality and convergence time. The numerical results also show that the proposed optimization method can achieve results close to the currently best known solutions.

  7. On an adaptive preconditioned Crank-Nicolson MCMC algorithm for infinite dimensional Bayesian inference

    NASA Astrophysics Data System (ADS)

    Hu, Zixi; Yao, Zhewei; Li, Jinglai

    2017-03-01

    Many scientific and engineering problems require performing Bayesian inference for unknowns of infinite dimension. In such problems, many standard Markov Chain Monte Carlo (MCMC) algorithms become arbitrarily slow under mesh refinement, which is referred to as being dimension dependent. To this end, a family of dimension-independent MCMC algorithms, known as the preconditioned Crank-Nicolson (pCN) methods, was proposed to sample infinite dimensional parameters. In this work we develop an adaptive version of the pCN algorithm, in which the covariance operator of the proposal distribution is adjusted based on the sampling history to improve simulation efficiency. We show that the proposed algorithm satisfies an important ergodicity condition under some mild assumptions. Finally we provide numerical examples to demonstrate the performance of the proposed method.
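
    For orientation, a minimal (non-adaptive) pCN sampler looks like the Python sketch below; the paper's adaptive variant additionally tunes the proposal covariance from the chain history, which is omitted here:

    ```python
    import numpy as np

    def pcn_mcmc(log_like, u0, C_sqrt, beta=0.2, n_iter=5000):
        """Preconditioned Crank-Nicolson sampler for a Gaussian prior N(0, C),
        where C_sqrt maps white noise to a prior draw."""
        u, ll = u0.copy(), log_like(u0)
        samples = []
        for _ in range(n_iter):
            xi = C_sqrt @ np.random.randn(len(u))
            v = np.sqrt(1.0 - beta ** 2) * u + beta * xi   # pCN proposal
            ll_v = log_like(v)
            # The pCN acceptance ratio involves only the likelihood; the
            # Gaussian prior terms cancel exactly, which is what makes the
            # method robust under mesh refinement.
            if np.log(np.random.rand()) < ll_v - ll:
                u, ll = v, ll_v
            samples.append(u.copy())
        return np.array(samples)
    ```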

  8. A comparison of two adaptive algorithms for the control of active engine mounts

    NASA Astrophysics Data System (ADS)

    Hillis, A. J.; Harrison, A. J. L.; Stoten, D. P.

    2005-08-01

    This paper describes work conducted in order to control automotive active engine mounts, consisting of a conventional passive mount and an internal electromagnetic actuator. Active engine mounts seek to cancel the oscillatory forces generated by the rotation of out-of-balance masses within the engine. The actuator generates a force dependent on a control signal from an algorithm implemented with a real-time DSP. The filtered-x least-mean-square (FXLMS) adaptive filter is used as a benchmark for comparison with a new implementation of the error-driven minimal controller synthesis (Er-MCSI) adaptive controller. Both algorithms are applied to an active mount fitted to a saloon car equipped with a four-cylinder turbo-diesel engine, and have no a priori knowledge of the system dynamics. The steady-state and transient performance of the two algorithms are compared and the relative merits of the two approaches are discussed. The Er-MCSI strategy offers significant computational advantages as it requires no cancellation path modelling. The Er-MCSI controller is found to perform in a fashion similar to the FXLMS filter—typically reducing chassis vibration by 50-90% under normal driving conditions.
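
    For reference, a single-channel filtered-x LMS loop can be sketched in Python as below; signal names, path models, and step size are illustrative assumptions, and the paper's multichannel real-time DSP controller and the Er-MCSI comparison are not reproduced:

    ```python
    import numpy as np

    def fxlms_anc(x, d, s, s_hat, order=32, mu=1e-4):
        """x: reference correlated with the engine disturbance; d: disturbance
        at the error sensor; s, s_hat: true and estimated secondary-path
        impulse responses (assumes len(s_hat) <= order)."""
        w = np.zeros(order)
        x_buf = np.zeros(order)            # reference history, newest first
        y_buf = np.zeros(len(s))           # actuator-output history
        xf_buf = np.zeros(order)           # filtered-reference history
        e = np.zeros(len(x))
        for n in range(len(x)):
            x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
            y = w @ x_buf                                  # anti-noise output
            y_buf = np.roll(y_buf, 1); y_buf[0] = y
            e[n] = d[n] + s @ y_buf                        # residual at sensor
            xf = s_hat @ x_buf[:len(s_hat)]                # x filtered by s_hat
            xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf
            w -= mu * e[n] * xf_buf                        # FXLMS gradient step
        return e
    ```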

  9. An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects.

    PubMed

    Kim, Jinkwon; Min, Se Dong; Lee, Myoungho

    2011-06-27

    Numerous studies have been conducted regarding heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to achieve robust performance, as biosignals vary considerably among individuals. Various methods have been proposed to reduce the differences coming from personal characteristics, but these expand the differences caused by arrhythmia. In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation using dedicated wavelets matched to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. A principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier. A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed by physicians.

  10. An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects

    PubMed Central

    2011-01-01

    Background Numerous studies have been conducted regarding heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to achieve robust performance, as biosignals vary considerably among individuals. Various methods have been proposed to reduce the differences coming from personal characteristics, but these expand the differences caused by arrhythmia. Methods In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation using dedicated wavelets matched to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. A principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier. Results A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. Conclusions The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed by physicians. PMID:21707989

  11. A community detection algorithm based on structural similarity

    NASA Astrophysics Data System (ADS)

    Guo, Xuchao; Hao, Xia; Liu, Yaqiong; Zhang, Li; Wang, Lu

    2017-09-01

    In order to further improve the efficiency and accuracy of community detection, a new algorithm named SSTCA (community detection algorithm based on structural similarity with threshold) is proposed. In this algorithm, structural similarities are taken as the weights of edges, and a threshold k is used to remove edges whose weights fall below it, improving computational efficiency. The proposed algorithm was tested on Zachary's network, the Dolphins social network, and the Football dataset, and compared with the GN and SSNCA algorithms. The results show that the new algorithm is more accurate than the others on dense networks, and that its operating efficiency is clearly improved.
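
    The weighting-and-pruning step can be illustrated with the Python sketch below; the SCAN-style similarity definition and the threshold value are illustrative assumptions rather than the paper's exact formulation:

    ```python
    import numpy as np
    import networkx as nx

    def structural_similarity(G, u, v):
        # Overlap of the closed neighbourhoods of two adjacent nodes,
        # normalized by the geometric mean of their sizes.
        Nu, Nv = set(G[u]) | {u}, set(G[v]) | {v}
        return len(Nu & Nv) / np.sqrt(len(Nu) * len(Nv))

    def prune_by_threshold(G, k=0.5):
        """Weight every edge by structural similarity computed on the original
        graph, then drop edges below the threshold k before clustering."""
        H = G.copy()
        sims = {(u, v): structural_similarity(G, u, v) for u, v in G.edges()}
        for (u, v), w in sims.items():
            if w < k:
                H.remove_edge(u, v)
            else:
                H[u][v]["weight"] = w
        return H
    ```

    For instance, prune_by_threshold(nx.karate_club_graph()) reproduces this kind of preprocessing on the Zachary network before any community detection step.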

  12. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-08-12

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for any number of classical participants not less than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.

  13. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU.

    PubMed

    Xu, Hailong; Cui, Xiaowei; Lu, Mingquan

    2016-03-11

    Nowadays, software-defined radio (SDR) has become a common approach to evaluating new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming, accelerated by a Graphics Processing Unit (GPU), are documented. The testbed distinguishes itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real time in either an adaptive nulling or a beamforming mode. To take full advantage of the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and the anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications.

  14. An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU

    PubMed Central

    Xu, Hailong; Cui, Xiaowei; Lu, Mingquan

    2016-01-01

    Nowadays, software-defined radio (SDR) has become a common approach to evaluating new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming, accelerated by a Graphics Processing Unit (GPU), are documented. The testbed distinguishes itself as a feature-rich and extendible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real time in either an adaptive nulling or a beamforming mode. To take full advantage of the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and the anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications. PMID:26978363

  15. Performance Evaluation of Multichannel Adaptive Algorithms for Local Active Noise Control

    NASA Astrophysics Data System (ADS)

    DE DIEGO, M.; GONZALEZ, A.

    2001-07-01

    This paper deals with the development of a multichannel active noise control (ANC) system inside an enclosed space. The purpose is to design a practical system that works well in local ANC applications. Moreover, the algorithm implemented in the adaptive controller should be robust and of low computational complexity, and it should generate a uniform, usefully sized zone of quiet that allows the head motion of a person seated inside a car. Experiments were carried out under semi-anechoic and listening room conditions to verify the successful implementation of the multichannel system. The developed prototype consists of an array of up to four microphones used as error sensors mounted on the headrest of a seat placed inside the enclosure. One loudspeaker was used as the single primary source and two secondary sources were placed facing the seat. The aim of this multichannel system is to reduce the sound pressure levels in an area around the error sensors, following a local control strategy. With this technique, the cancellation points are not only the error sensor positions but an area around them, which is measured using a monitoring microphone. Different multichannel adaptive algorithms for ANC have been analyzed and their performance verified. Multiple-error algorithms are used to cancel different types of primary noise (engine noise and random noise) in several configurations (up to a four-channel system). As alternatives to the multiple error LMS algorithm (the multichannel version of the filtered-X LMS algorithm, MELMS), the least maximum mean squares (LMMS) algorithm and the scanning error-LMS algorithm have been developed in this work to reduce computational complexity and achieve a more uniform residual field. The ANC algorithms were programmed on a digital signal processing board equipped with a TMS320C40 floating point DSP processor. Measurements concerning real-time experiments on local noise reduction in two

  16. Searching for signposts: Adaptive planning thresholds in long-term water supply projections for the Western U.S.

    NASA Astrophysics Data System (ADS)

    Robinson, B.; Herman, J. D.

    2017-12-01

    Long-term water supply planning is challenged by highly uncertain streamflow projections across climate models and emissions scenarios. Recent studies have devised infrastructure and policy responses that can withstand or adapt to an ensemble of scenarios, particularly those outside the envelope of historical variability. An important aspect of this process is whether the proposed thresholds for adaptation (i.e., observations that trigger a response) truly represent a trend toward future change. Here we propose an approach to connect observations of annual mean streamflow with long-term projections by filtering GCM-based streamflow ensembles. Visualizations are developed to investigate whether observed changes in mean annual streamflow can be linked to projected changes in end-of-century mean and variance relative to the full ensemble. A key focus is identifying thresholds that point to significant long-term changes in the distribution of streamflow (+/- 20% or greater) as early as possible. The analysis is performed on 87 sites in the Western United States, using streamflow ensembles through 2100 from a recent study by the U.S. Bureau of Reclamation. Results focus on three primary questions: (1) how many years of observed data are needed to identify the most extreme scenarios, and by what year can they be identified? (2) are these features different between sites? and (3) using this analysis, do observed flows to date at each site point to significant long-term changes? This study addresses the challenge of severe uncertainty in long-term streamflow projections by identifying key thresholds that can be observed to support water supply planning.

  17. Novel Near-Lossless Compression Algorithm for Medical Sequence Images with Adaptive Block-Based Spatial Prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2016-12-01

    To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, a novel near-lossless compression algorithm based on adaptive spatial prediction is proposed in this paper for medical sequence images intended for diagnostic use. The proposed method employs adaptive, block-size-based spatial prediction to predict blocks directly in the spatial domain, and applies a Lossless Hadamard Transform before quantization to improve the quality of reconstructed images. The block-based prediction breaks the pixel-neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.

  18. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
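
    The Poisson maximum-likelihood core of such a method is the classical multi-frame Richardson-Lucy iteration, sketched below in Python; the paper's regularization, variance-based frame selection, and PSF estimation steps are not shown:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def multiframe_rl(frames, psfs, n_iter=30, eps=1e-12):
        """Multi-frame Richardson-Lucy deconvolution: the multiplicative update
        that maximizes the joint Poisson log likelihood over all frames."""
        x = np.mean(frames, axis=0).clip(min=eps)   # initial object estimate
        for _ in range(n_iter):
            ratio_sum = np.zeros_like(x)
            for y, h in zip(frames, psfs):
                pred = fftconvolve(x, h, mode="same").clip(min=eps)
                # Adjoint of convolution with h is convolution with flipped h.
                ratio_sum += fftconvolve(y / pred, h[::-1, ::-1], mode="same")
            x *= ratio_sum / len(frames)            # multiplicative ML update
        return x
    ```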

  19. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  20. A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification

    NASA Astrophysics Data System (ADS)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.

    MRS signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), have demonstrated good performance, but not without drawbacks already discussed by the authors. On the other hand, a preliminary application of Genetic Algorithms (GAs) has already been reported in the literature by the authors regarding the peak detection problem encountered in MRS quantification using the Voigt line shape model. This paper investigates a novel constrained genetic algorithm involving a generic, adaptively defined fitness function, which extends the simple genetic algorithm methodology to the case of noisy signals. The applicability of this new algorithm is scrutinized through experimentation on artificial MRS signals interleaved with noise, regarding its signal fitting capabilities. Although extensive experiments with real-world MRS signals are necessary, the performance shown here illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.

  1. Assessing and Adapting LiDAR-Derived Pit-Free Canopy Height Model Algorithm for Sites with Varying Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Scholl, V.; Hulslander, D.; Goulden, T.; Wasser, L. A.

    2015-12-01

    Spatial and temporal monitoring of vegetation structure is important to the ecological community. Airborne Light Detection and Ranging (LiDAR) systems are used to efficiently survey large forested areas. From LiDAR data, three-dimensional models of forests called canopy height models (CHMs) are generated and used to estimate tree height. A common problem associated with CHMs is data pits, where LiDAR pulses penetrate the top of the canopy, leading to an underestimation of vegetation height. The National Ecological Observatory Network (NEON) currently implements an algorithm to reduce data pit frequency, which requires two height threshold parameters: the increment size and the range ceiling. CHMs are produced at a series of height increments up to a height range ceiling and combined to produce a CHM with reduced pits (referred to as a "pit-free" CHM). The current implementation uses static values for the height increment and ceiling (5 and 15 meters, respectively). To facilitate the generation of accurate pit-free CHMs across diverse NEON sites with varying vegetation structure, the impacts of adjusting the height threshold parameters were investigated through the development of an algorithm that dynamically selects the height increment and ceiling. A series of pit-free CHMs was generated using three height range ceilings and four height increment values for three ecologically different sites. Height threshold parameters were found to change CHM-derived tree heights by up to 36% compared to the original CHMs. The extent of the parameters' influence on modelled tree heights was greater than expected, which will be considered during future CHM data product development at NEON. Figure: (A) aerial image of Harvard National Forest; (B) standard CHM containing pits, appearing as black speckles; (C) pit-free CHM created with the static algorithm implementation; (D) pit-free CHM created by varying the height threshold ceiling up to 82 m and the increment to 1 m.
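
    The combining step of the pit-free construction, and a dynamic choice of thresholds, can be sketched in Python as follows; rasterizing each partial CHM from the point cloud is omitted, and the dynamic parameter rule is our own illustrative assumption:

    ```python
    import numpy as np

    def pit_free_chm(partial_chms):
        """Combine partial CHMs, each rasterized from only the LiDAR returns
        above one height threshold, by taking the per-pixel maximum; this is
        the usual pit-free construction."""
        return np.nanmax(np.stack(partial_chms), axis=0)

    def dynamic_thresholds(max_canopy_height, increment=1.0):
        # Illustrative dynamic parameter choice: let the height ceiling track
        # the site's tallest vegetation instead of using a fixed 15 m ceiling.
        return np.arange(0.0, max_canopy_height + increment, increment)
    ```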

  2. A systematic review of gait analysis methods based on inertial sensors and adaptive algorithms.

    PubMed

    Caldas, Rafael; Mundt, Marion; Potthast, Wolfgang; Buarque de Lima Neto, Fernando; Markert, Bernd

    2017-09-01

    Conventional methods of assessing human gait are either expensive or too complex to be applied regularly in clinical practice. To reduce the cost and simplify the evaluation, inertial sensors and adaptive algorithms have been utilized, respectively. This paper aims to summarize studies that applied adaptive, also called artificial intelligence (AI), algorithms to gait analysis based on inertial sensor data, verifying whether they can support clinical evaluation. Articles were identified through searches of the main databases, covering the period from 1968 to October 2016. We identified 22 studies that met the inclusion criteria. The included papers were analyzed with respect to their data acquisition and processing methods using specific questionnaires. Concerning data acquisition, the mean score is 6.1±1.62, which implies that 13 of the 22 papers failed to report relevant outcomes. The quality assessment of the AI algorithms yields an above-average rating (8.2±1.84). AI algorithms therefore seem able to support gait analysis based on inertial sensor data. Further research, however, is necessary to enhance and standardize their application in patients, since most of the studies used distinct methods to evaluate healthy subjects.

  3. Active control of impulsive noise with symmetric α-stable distribution based on an improved step-size normalized adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yali; Zhang, Qizhi; Yin, Yixin

    2015-05-01

    In this paper, active control of impulsive noise with a symmetric α-stable (SαS) distribution is studied. A general step-size-normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on an analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm requires neither parameter selection and threshold estimation nor cost-function selection and complex gradient computation. Computer simulations suggest that the proposed algorithm is effective at attenuating SαS impulsive noise, and the proposed algorithm has been implemented in an experimental ANC system. Experimental results show that the proposed scheme performs well for SαS impulsive noise attenuation.
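
    One way to picture a Gaussian step-size normalization is the short Python sketch below; the exact weighting in the paper may differ, and the returned value would replace the fixed step size in an FxLMS-style loop like the one sketched earlier:

    ```python
    import numpy as np

    def gaussian_step_size(mu0, x_n, sigma):
        # Large (likely impulsive) samples shrink the effective step size,
        # bounding the update without explicit thresholding.
        return mu0 * np.exp(-x_n ** 2 / (2.0 * sigma ** 2))
    ```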

  4. Implementation and performance evaluation of acoustic denoising algorithms for UAV

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ahmed Sony Kamal

    Unmanned Aerial Vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise and effectively classifying the target audio signal are still major challenges. The main goal of this thesis is to remove the UAV's background noise by means of acoustic denoising techniques. Existing denoising algorithms, such as adaptive Least Mean Square (LMS), Wavelet Denoising, Time-Frequency Block Thresholding, and the Wiener Filter, were implemented and their performance evaluated. The denoising algorithms were evaluated on average Signal to Noise Ratio (SNR), Segmental SNR (SSNR), Log Likelihood Ratio (LLR), and Log Spectral Distance (LSD) metrics. To evaluate the effectiveness of the denoising algorithms on the classification of target audio, we implemented Support Vector Machine (SVM) and Naive Bayes classification algorithms. Simulation results demonstrate that the LMS and Discrete Wavelet Transform (DWT) denoising algorithms offered superior performance compared to the other algorithms. Finally, we implemented the LMS and DWT algorithms on a DSP board for hardware evaluation. Experimental results showed that the LMS algorithm's performance is more robust than DWT's across noise types for classifying target audio signals.
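
    Two of the evaluation metrics are straightforward to compute; a Python sketch (the per-segment clamping range is a common convention, assumed here rather than taken from the thesis):

    ```python
    import numpy as np

    def snr_db(clean, denoised):
        # Global SNR of the denoised signal against the clean reference.
        noise = clean - denoised
        return 10.0 * np.log10(np.sum(clean ** 2) / (np.sum(noise ** 2) + 1e-12))

    def segmental_snr_db(clean, denoised, frame=256):
        # Mean of frame-wise SNRs; each segment is commonly clamped to
        # [-10, 35] dB so silent frames do not dominate the average.
        vals = []
        for i in range(0, len(clean) - frame + 1, frame):
            c = clean[i:i + frame]
            n = c - denoised[i:i + frame]
            vals.append(10.0 * np.log10(np.sum(c ** 2) / (np.sum(n ** 2) + 1e-12)))
        return float(np.mean(np.clip(vals, -10.0, 35.0)))
    ```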

  5. Fast adaptive diamond search algorithm for block-matching motion estimation using spatial correlation

    NASA Astrophysics Data System (ADS)

    Park, Sang-Gon; Jeong, Dong-Seok

    2000-12-01

    In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block-matching motion estimation. Many fast motion estimation algorithms reduce computational complexity via the UESA (Unimodal Error Surface Assumption), under which the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) have made use of the fact that global minimum points in real-world video sequences are centered at the position of zero motion. But these BMAs, especially for large motions, are easily trapped in local minima and yield poor matching accuracy. We therefore propose a new motion estimation algorithm that uses the spatial correlation among neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). Computer simulation shows that the proposed algorithm has almost the same computational complexity as DS (Diamond Search) but improves PSNR. Moreover, the proposed algorithm gives almost the same PSNR as FS (Full Search), even for large motions, with half the computational load.

  6. Demonstration of the use of ADAPT to derive predictive maintenance algorithms for the KSC central heat plant

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.

    1972-01-01

    The Avco Data Analysis and Prediction Techniques (ADAPT) were employed to determine laws capable of detecting failures in a heat plant up to three days in advance of the occurrence of the failure. The projected performance of the algorithms was a detection probability of 90% with a false alarm rate on the order of 1 per year, for a sample rate of 1 per day with each detection followed by 3 hourly samplings. This performance was verified on 173 independent test cases. The program also demonstrated diagnostic algorithms and the ability to predict the time of failure to approximately plus or minus 8 hours up to three days in advance of the failure. The ADAPT programs produce simple algorithms which have the unique possibility of a relatively low-cost updating procedure. The algorithms were implemented on general purpose computers at Kennedy Space Center and tested against current data.

  7. Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fijany, A.; Milman, M.; Redding, D.

    1994-12-31

    In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem, since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other Fast Poisson Solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.

  8. Flight Testing of the Space Launch System (SLS) Adaptive Augmenting Control (AAC) Algorithm on an F/A-18

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.; VanZwieten, Tannen S.; Hanson, Curtis E.; Wall, John H.; Miller, Chris J.; Gilligan, Eric T.; Orr, Jeb S.

    2014-01-01

    The Marshall Space Flight Center (MSFC) Flight Mechanics and Analysis Division developed an adaptive augmenting control (AAC) algorithm for launch vehicles that improves robustness and performance on an as-needed basis by adapting a classical control algorithm to unexpected environments or variations in vehicle dynamics. This was baselined as part of the Space Launch System (SLS) flight control system. The NASA Engineering and Safety Center (NESC) was asked to partner with the SLS Program and the Space Technology Mission Directorate (STMD) Game Changing Development Program (GCDP) to flight test the AAC algorithm on a manned aircraft that can achieve a high level of dynamic similarity to a launch vehicle and raise the technology readiness of the algorithm early in the program. This document reports the outcome of the NESC assessment.

  9. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
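
    The nonlinear gain idea can be pictured with a tiny Python sketch; the coefficients below are illustrative (chosen so the mapping stays monotone on the envelope), not the report's tuned values:

    ```python
    def nonlinear_gain(u, p1=1.0, p3=-1.0/3.0, u_max=1.0):
        """Odd third-order polynomial cueing gain y = p1*u + p3*u**3: large
        cues near zero input, saturating growth toward the motion limit.
        With p3 >= -p1/3 the mapping is monotone on [-u_max, u_max]."""
        u = max(-u_max, min(u_max, u))     # saturate the input first
        return p1 * u + p3 * u ** 3
    ```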

  10. Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.

    2003-02-01

    Multi-conjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse matrix implementation of minimum variance wavefront reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.

  11. RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing

    NASA Astrophysics Data System (ADS)

    Gui, Guan; Xu, Li; Adachi, Fumiyuki

    2014-12-01

    Nonlinear sparse sensing (NSS) techniques have been adopted for realizing compressive sensing in many applications such as radar imaging. Unlike NSS, in this paper we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters: the reweighting factor, the regularization parameter, and the initial step size. First, based on the independence assumption, the Cramer-Rao lower bound (CRLB) is derived for performance comparison. In addition, a reweighting factor selection method is proposed for achieving robust estimation performance. Finally, to verify the algorithm, Monte Carlo based computer simulations are given to show that the ASS achieves much better mean square error (MSE) performance than the NSS.
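
    A single coefficient update of this family of algorithms can be sketched in Python as below; the normalization term and the constants follow one common form of RZA-NLMF and may differ in detail from the paper's:

    ```python
    import numpy as np

    def rza_nlmf_step(w, x, d, mu=0.5, rho=5e-4, eps_w=10.0, eps_n=1e-8):
        """One RZA-NLMF update for sparse estimation. x: input regressor
        vector, d: desired output sample, w: current coefficient estimate."""
        e = d - w @ x                                 # instantaneous error
        norm = (x @ x) * (e * e) + eps_n              # fourth-order normalizer
        w = w + mu * (e ** 3) * x / norm              # NLMF gradient step
        # Reweighted zero attractor: shrinks small taps strongly while
        # leaving large taps nearly untouched, promoting sparsity.
        w = w - rho * np.sign(w) / (1.0 + eps_w * np.abs(w))
        return w
    ```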

  12. Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda; Pemmaraju, Surya

    1992-01-01

    Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture in which fuzzy learning rules have been embedded is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length rate error without a priori knowledge of their membership functions and familiarity with the behavior of the Tethered Satellite System.

  13. The effect of different exercise protocols and regression-based algorithms on the assessment of the anaerobic threshold.

    PubMed

    Zuniga, Jorge M; Housh, Terry J; Camic, Clayton L; Bergstrom, Haley C; Schmidt, Richard J; Johnson, Glen O

    2014-09-01

    The purpose of this study was to examine the effect of ramp and step incremental cycle ergometer tests on the assessment of the anaerobic threshold (AT) using 3 different computerized regression-based algorithms. Thirteen healthy adults (mean [SD] age = 23.4 [3.3] years; body mass = 71.7 [11.1] kg) visited the laboratory on separate occasions. Two-way repeated measures analyses of variance with appropriate follow-up procedures were used to analyze the data. The step protocol resulted in greater mean values across algorithms than the ramp protocol for the V̇O2 (step = 1.7 [0.6] L·min⁻¹ and ramp = 1.5 [0.4] L·min⁻¹) and heart rate (HR) (step = 133 [21] b·min⁻¹ and ramp = 124 [15] b·min⁻¹) at the AT. There were no significant mean differences, however, in power outputs at the AT between the step (115.2 [44.3] W) and the ramp (112.2 [31.2] W) protocols. Furthermore, there were no significant mean differences for V̇O2, HR, or power output across protocols among the 3 computerized regression-based algorithms used to estimate the AT. The current findings suggest that the protocol selection, but not the choice of regression-based algorithm, can affect the assessment of V̇O2 and HR at the AT.
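
    A generic regression-based breakpoint search, in the spirit of the V-slope family of AT algorithms, can be sketched in Python as follows; the study's three computerized algorithms differ in their details, which are not reproduced here:

    ```python
    import numpy as np

    def breakpoint_threshold(x, y, min_pts=5):
        """Pick the split point minimizing the pooled residual error of two
        linear segments fitted to (x, y), e.g. VO2 vs. VCO2 data."""
        best_i, best_sse = None, np.inf
        for i in range(min_pts, len(x) - min_pts):
            sse = 0.0
            for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
                coeffs = np.polyfit(xs, ys, 1)            # straight-line fit
                sse += np.sum((ys - np.polyval(coeffs, xs)) ** 2)
            if sse < best_sse:
                best_i, best_sse = i, sse
        return x[best_i]    # e.g. the VO2 value at the estimated threshold
    ```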

  14. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

    A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.

  15. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms.

  16. Experiment on a three-beam adaptive array for EHF frequency-hopped signals using a fast algorithm, phase-D

    NASA Astrophysics Data System (ADS)

    Yen, J. L.; Kremer, P.; Amin, N.; Fung, J.

    1989-05-01

    The Department of National Defence (Canada) has been conducting studies into multi-beam adaptive arrays for extremely high frequency (EHF) frequency-hopped signals. A three-beam 43 GHz adaptive antenna and a beam control processor are under development. An interactive software package for the operation of the array, capable of applying different control algorithms, is being written. A maximum signal-to-jammer-plus-noise ratio (SJNR) criterion was found to provide superior performance in preventing degradation of user signals in the presence of nearby jammers. A new fast algorithm using a modified conjugate gradient approach was found to be a very efficient way to implement anti-jamming arrays based on the maximum SJNR criterion. The present study was intended to refine and simplify this algorithm and to implement it on an experimental array for real-time evaluation of anti-jamming performance. A three-beam adaptive array was used. A simulation package was used to evaluate multi-beam systems using more than three beams and different user-jammer scenarios. An attempt to further reduce the computational burden through continued analysis of the maximum SJNR criterion met with limited success. A method to acquire and track an incoming laser beam is also proposed.

  17. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    PubMed Central

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for any number of classical participants not less than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908

  18. A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters

    PubMed Central

    Wang, Zhihao; Yi, Jing

    2016-01-01

    To address the shortcoming of the fuzzy c-means (FCM) algorithm that the number of clusters must be known in advance, this paper proposes a new self-adaptive method to determine the optimal number of clusters. First, a density-based algorithm was put forward. The algorithm, according to the characteristics of the dataset, automatically determines the possible maximum number of clusters (instead of using the empirical rule √n) and obtains optimal initial cluster centroids, addressing the limitation of FCM that randomly selected cluster centroids can lead convergence to a local minimum. Second, by introducing a penalty function, this paper proposes a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensures that as the number of clusters approaches the number of objects in the dataset, the value of the validity index does not monotonically decrease toward zero, so that the estimate of the optimal number of clusters does not lose robustness or discriminating power. Then, based on these studies, a self-adaptive FCM algorithm is put forward to estimate the optimal number of clusters by an iterative trial-and-error process. Finally, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determines the optimal number of clusters, but also reduces the number of FCM iterations while giving stable clustering results. PMID:28042291
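
    The trial-and-error structure of such a method can be sketched in Python; the sketch below scores candidate cluster counts with the classical Xie-Beni index rather than the paper's penalized index, and all constants are illustrative:

    ```python
    import numpy as np

    def fcm(X, c, m=2.0, n_iter=100, tol=1e-5, seed=None):
        """Plain fuzzy c-means: alternate centroid and membership updates."""
        rng = np.random.default_rng(seed)
        U = rng.random((c, len(X))); U /= U.sum(axis=0)
        for _ in range(n_iter):
            Um = U ** m
            V = Um @ X / Um.sum(axis=1, keepdims=True)        # centroids
            D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
            U_new = 1.0 / (D ** (2 / (m - 1)) * np.sum(D ** (-2 / (m - 1)), axis=0))
            if np.abs(U_new - U).max() < tol:
                return U_new, V
            U = U_new
        return U, V

    def xie_beni(X, U, V, m=2.0):
        # Compactness divided by separation: lower values indicate better
        # partitions, so we minimize this index over candidate c.
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)
        compact = np.sum((U ** m) * D ** 2)
        sep = min(np.sum((V[i] - V[j]) ** 2)
                  for i in range(len(V)) for j in range(len(V)) if i != j)
        return compact / (len(X) * sep)

    def best_num_clusters(X):
        """Trial-and-error search over c up to the common sqrt(n) rule."""
        c_max = max(2, int(np.sqrt(len(X))))
        scores = {}
        for c in range(2, c_max + 1):
            U, V = fcm(X, c, seed=0)
            scores[c] = xie_beni(X, U, V)
        return min(scores, key=scores.get)
    ```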

  19. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit

    PubMed Central

    Xu, Liangliang; Xu, Nengxiong

    2017-01-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points’ spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version without profiting from the shared memory and the tiled version taking advantage of the shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, on both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in the computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the obtained highest speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available. PMID:28989754
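
    The adaptive-power idea behind AIDW can be illustrated with a serial Python sketch; the binned mapping from a nearest-neighbour statistic to candidate powers is a simplification of the published scheme, which interpolates the power smoothly:

    ```python
    import numpy as np

    def aidw_interpolate(pts, vals, query, k=12, alphas=(1.0, 2.0, 3.0)):
        """Inverse distance weighting whose decay power is chosen per query
        point from the local spatial pattern of the data."""
        d = np.linalg.norm(pts - query, axis=1)
        idx = np.argsort(d)[:k]                    # k nearest data points
        d_k = d[idx] + 1e-12
        # Observed vs expected mean nearest-neighbour distance (R statistic).
        area = np.ptp(pts[:, 0]) * np.ptp(pts[:, 1])
        r_expected = 0.5 / np.sqrt(len(pts) / area)
        ratio = d_k.mean() / r_expected
        # Clustered pattern (small ratio) -> low power; regular -> high power.
        bin_idx = min(len(alphas) - 1, int(ratio * len(alphas) / 2.0))
        w = 1.0 / d_k ** alphas[bin_idx]
        return np.sum(w * vals[idx]) / np.sum(w)
    ```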

  20. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit.

    PubMed

    Mei, Gang; Xu, Liangliang; Xu, Nengxiong

    2017-09-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points' spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version without profiting from the shared memory and the tiled version taking advantage of the shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, on both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in the computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the obtained highest speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available.

  1. EEG/ERP adaptive noise canceller design with controlled search space (CSS) approach in cuckoo and other optimization algorithms.

    PubMed

    Ahirwal, M K; Kumar, Anil; Singh, G K

    2013-01-01

    This paper explores the use of adaptive filtering with swarm intelligence/evolutionary techniques in the field of electroencephalogram/event-related potential (EEG/ERP) noise cancellation and extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, with their variants, are implemented to design an optimized adaptive noise canceler. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancelers with traditional algorithms such as the least-mean-square, normalized least-mean-square, and recursive least-mean-square algorithms are also implemented for comparison. ERP signals such as simulated visual evoked potentials, real visual evoked potentials, and real sensorimotor evoked potentials are used, due to their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 sec and 1.73E-01, respectively. Traditional algorithms require negligible computation time but are unable to preserve the ERP shape as well, with an average computational time and shape measure of 1.41E-02 sec and 2.60E+00, respectively.

  2. Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2008-01-01

    Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy the shift, as large as several pixels, between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images and extended-scene images. We have demonstrated previously, based on measured images, that the ACC algorithm can determine image shifts with an accuracy as high as 0.01 pixel for shifts as large as 3 pixels, and yields similar results for both point-source spot images and extended-scene images. The shift estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended-scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for ACC-algorithm-based extended-scene Shack-Hartmann sensors.
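
    The abstract does not reproduce the ACC iteration itself; the sketch below shows the standard single-pass FFT-based cross-correlation with parabolic sub-pixel refinement that such algorithms iterate. It is a generic illustration, not the reported 0.01-pixel ACC implementation.

        import numpy as np

        def fft_shift_estimate(cell, ref):
            """One FFT cross-correlation pass: integer shift from the
            correlation peak, refined to sub-pixel accuracy by a 1-D
            parabolic fit along each axis."""
            F = np.fft.fft2(cell) * np.conj(np.fft.fft2(ref))
            corr = np.real(np.fft.ifft2(F))
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            shift = []
            for ax, p in enumerate(peak):
                step = np.eye(2, dtype=int)[ax]
                c0 = corr[peak]
                cm = corr[tuple(np.subtract(peak, step) % corr.shape)]
                cp = corr[tuple(np.add(peak, step) % corr.shape)]
                denom = cm - 2 * c0 + cp
                frac = 0.5 * (cm - cp) / denom if denom != 0 else 0.0
                s = p + frac
                if s > corr.shape[ax] / 2:        # wrap to signed shift
                    s -= corr.shape[ax]
                shift.append(s)
            return tuple(shift)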

  3. Using pixel intensity as a self-regulating threshold for deterministic image sampling in Milano Retinex: the T-Rex algorithm

    NASA Astrophysics Data System (ADS)

    Lecca, Michela; Modena, Carla Maria; Rizzi, Alessandro

    2018-01-01

    Milano Retinexes are spatial color algorithms, part of the Retinex family, usually employed for image enhancement. They modify the color of each pixel taking into account the surrounding colors and their positions, in this way capturing the local spatial color distribution relevant to image enhancement. We present T-Rex (from the words threshold and Retinex), an implementation of Milano Retinex whose main novelty is the use of the pixel intensity as a self-regulating threshold to deterministically sample local color information. The experiments, carried out on real-world pictures, show that T-Rex's image enhancement performance is in line with that of the Milano Retinex family: T-Rex increases the brightness, the contrast, and the flatness of the channel distributions of the input image, making the content of pictures acquired under difficult light conditions more intelligible.

  4. Six weeks of a polarized training-intensity distribution leads to greater physiological and performance adaptations than a threshold model in trained cyclists.

    PubMed

    Neal, Craig M; Hunter, Angus M; Brennan, Lorraine; O'Sullivan, Aifric; Hamilton, D Lee; De Vito, Giuseppe; Galloway, Stuart D R

    2013-02-15

    This study was undertaken to investigate physiological adaptation with two endurance-training periods differing in intensity distribution. In a randomized crossover fashion, separated by 4 wk of detraining, 12 male cyclists completed two 6-wk training periods: 1) a polarized model [6.4 (±1.4 SD) h/wk; 80%, 0%, and 20% of training time in low-, moderate-, and high-intensity zones, respectively]; and 2) a threshold model [7.5 (±2.0 SD) h/wk; 57%, 43%, and 0% training-intensity distribution]. Before and after each training period, following 2 days of diet and exercise control, fasted skeletal muscle biopsies were obtained for mitochondrial enzyme activity and monocarboxylate transporter (MCT) 1 and 4 expression, and morning first-void urine samples were collected for NMR spectroscopy-based metabolomics analysis. Endurance performance (40-km time trial), incremental exercise, peak power output (PPO), and high-intensity exercise capacity (95% maximal work rate to exhaustion) were also assessed. Endurance performance, PPO, lactate threshold (LT), MCT4, and high-intensity exercise capacity all increased over both training periods. Improvements were greater following the polarized than the threshold model for PPO [mean (±SE) change of 8 (±2)% vs. 3 (±1)%, P < 0.05], LT [9 (±3)% vs. 2 (±4)%, P < 0.05], and high-intensity exercise capacity [85 (±14)% vs. 37 (±14)%, P < 0.05]. No changes in mitochondrial enzyme activities or MCT1 were observed following training. A significant multilevel partial least squares-discriminant analysis model was obtained for the threshold model but not the polarized model in the metabolomics analysis. A polarized training distribution results in greater systemic adaptation over 6 wk in already well-trained cyclists. Markers of muscle metabolic adaptation are largely unchanged, but metabolomics markers suggest different cellular metabolic stress that requires further investigation.

  5. Robust fundamental frequency estimation in sustained vowels: Detailed algorithmic comparisons and information fusion with adaptive Kalman filtering

    PubMed Central

    Tsanas, Athanasios; Zañartu, Matías; Little, Max A.; Fox, Cynthia; Ramig, Lorraine O.; Clifford, Gari D.

    2014-01-01

    There has been consistent interest among speech signal processing researchers in the accurate estimation of the fundamental frequency (F0) of speech signals. This study examines ten F0 estimation algorithms (some well-established and some proposed more recently) to determine which of these algorithms is, on average, better able to estimate F0 in the sustained vowel /a/. Moreover, a robust method for adaptively weighting the estimates of individual F0 estimation algorithms based on quality and performance measures is proposed, using an adaptive Kalman filter (KF) framework. The accuracy of the algorithms is validated using (a) a database of 117 synthetic realistic phonations obtained using a sophisticated physiological model of speech production and (b) a database of 65 recordings of human phonations where the glottal cycles are calculated from electroglottograph signals. On average, the sawtooth waveform inspired pitch estimator and the nearly defect-free algorithms provided the best individual F0 estimates, and the proposed KF approach resulted in a ∼16% improvement in accuracy over the best single F0 estimation algorithm. These findings may be useful in speech signal processing applications where sustained vowels are used to assess vocal quality, when very accurate F0 estimation is required. PMID:24815269
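
    A scalar Kalman-filter fusion of several per-frame F0 estimates can be sketched compactly. The random-walk process model and the per-algorithm measurement variances below are illustrative assumptions; the study derives its weights from specific quality and performance measures.

        import numpy as np

        def kf_fuse_f0(estimates, meas_vars, q=4.0):
            """Scalar Kalman filter fusing multiple per-frame F0 estimates.
            'estimates' is (frames, algorithms); 'meas_vars' holds one noise
            variance per algorithm per frame. A random-walk state model with
            process variance q is assumed."""
            frames, _ = estimates.shape
            x, P = estimates[0].mean(), 100.0        # crude initialization
            fused = np.empty(frames)
            for t in range(frames):
                P += q                               # predict (random walk)
                for z, r in zip(estimates[t], meas_vars[t]):
                    K = P / (P + r)                  # sequential scalar updates
                    x += K * (z - x)
                    P *= (1 - K)
                fused[t] = x
            return fused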

  6. Design of infrasound-detection system via adaptive LMSTDE algorithm

    NASA Technical Reports Server (NTRS)

    Khalaf, C. S.; Stoughton, J. W.

    1984-01-01

    A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasounds. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. A LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.

  7. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    NASA Astrophysics Data System (ADS)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, from it, the refined edge map. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. Saliency-weighted foreground and background histograms are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
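
    The final interpolation step lends itself to a short illustration. The sketch below expands a grid of per-block thresholds into a per-pixel threshold image by separable linear interpolation; block-center placement is an assumption, and the per-block threshold computation described above is taken as given.

        import numpy as np

        def pixel_threshold_map(block_thresholds, image_shape):
            """Expand a coarse grid of per-block thresholds to a per-pixel
            threshold image by separable linear interpolation. Thresholds
            are assumed to sit at block centers."""
            bh, bw = block_thresholds.shape
            H, W = image_shape
            ys = (np.arange(bh) + 0.5) * H / bh      # block-center coordinates
            xs = (np.arange(bw) + 0.5) * W / bw
            cols = np.arange(W) + 0.5
            rows = np.arange(H) + 0.5
            # interpolate along x for each block row, then along y
            tmp = np.stack([np.interp(cols, xs, row) for row in block_thresholds])
            full = np.stack([np.interp(rows, ys, col) for col in tmp.T], axis=1)
            return full

        # foreground mask (polarity depends on the stain/background):
        # mask = image > pixel_threshold_map(block_T, image.shape)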

  8. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases the encoding complexity for improving its coding efficiency. Due to the limited computational capability of handheld devices, complexity constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme that is chosen through offline statistics is developed at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.

  9. Finite element analysis and genetic algorithm optimization design for the actuator placement on a large adaptive structure

    NASA Astrophysics Data System (ADS)

    Sheng, Lizeng

    The dissertation focuses on one of the major research needs in the area of adaptive/intelligent/smart structures, the development and application of finite element analysis and genetic algorithms for optimal design of large-scale adaptive structures. We first review some basic concepts in finite element method and genetic algorithms, along with the research on smart structures. Then we propose a solution methodology for solving a critical problem in the design of a next generation of large-scale adaptive structures---optimal placements of a large number of actuators to control thermal deformations. After briefly reviewing the three most frequently used general approaches to derive a finite element formulation, the dissertation presents techniques associated with general shell finite element analysis using flat triangular laminated composite elements. The element used here has three nodes and eighteen degrees of freedom and is obtained by combining a triangular membrane element and a triangular plate bending element. The element includes the coupling effect between membrane deformation and bending deformation. The membrane element is derived from the linear strain triangular element using Cook's transformation. The discrete Kirchhoff triangular (DKT) element is used as the plate bending element. For completeness, a complete derivation of the DKT is presented. Geometrically nonlinear finite element formulation is derived for the analysis of adaptive structures under the combined thermal and electrical loads. Next, we solve the optimization problems of placing a large number of piezoelectric actuators to control thermal distortions in a large mirror in the presence of four different thermal loads. We then extend this to a multi-objective optimization problem of determining only one set of piezoelectric actuator locations that can be used to control the deformation in the same mirror under the action of any one of the four thermal loads. A series of genetic algorithms

  10. Treatment thresholds for osteoporosis and reimbursability criteria: perspectives associated with fracture risk-assessment tools.

    PubMed

    Adami, Silvano; Bertoldo, Francesco; Gatti, Davide; Minisola, Giovanni; Rossini, Maurizio; Sinigaglia, Luigi; Varenna, Massimo

    2013-09-01

    The definition of osteoporosis was based for several years on bone mineral density values, which were used by most guidelines for defining treatment thresholds. The availability of tools for the estimation of fracture risk, such as FRAX™ or its adapted Italian version, DeFRA, is providing a way to grade osteoporosis severity. By applying these new tools, the criteria identified in Italy for treatment reimbursability (e.g., "Nota 79") are confirmed as extremely conservative. The new fracture risk-assessment tools provide continuous risk values that can be used by health authorities (or "payers") for identifying treatment thresholds. FRAX estimates the risk for "major osteoporotic fractures," which are not counted in registered fracture trials. Here, we elaborate an algorithm to convert vertebral and nonvertebral fractures to the "major fractures" of FRAX, and this allows a cost-effectiveness assessment for each drug.

  11. Using patient-specific phantoms to evaluate deformable image registration algorithms for adaptive radiation therapy

    PubMed Central

    Stanley, Nick; Glide-Hurst, Carri; Kim, Jinkoo; Adams, Jeffrey; Li, Shunshan; Wen, Ning; Chetty, Indrin J.; Zhong, Hualiang

    2014-01-01

    The quality of adaptive treatment planning depends on the accuracy of its underlying deformable image registration (DIR). The purpose of this study is to evaluate the performance of two DIR algorithms, B-spline–based deformable multipass (DMP) and deformable demons (Demons), implemented in a commercial software package. Evaluations were conducted using both computational and physical deformable phantoms. Based on a finite element method (FEM), a total of 11 computational models were developed from a set of CT images acquired from four lung and one prostate cancer patients. FEM generated displacement vector fields (DVF) were used to construct the lung and prostate image phantoms. Based on a fast-Fourier transform technique, image noise power spectrum was incorporated into the prostate image phantoms to create simulated CBCT images. The FEM-DVF served as a gold standard for verification of the two registration algorithms performed on these phantoms. The registration algorithms were also evaluated at the homologous points quantified in the CT images of a physical lung phantom. The results indicated that the mean errors of the DMP algorithm were in the range of 1.0 ~ 3.1 mm for the computational phantoms and 1.9 mm for the physical lung phantom. For the computational prostate phantoms, the corresponding mean error was 1.0–1.9 mm in the prostate, 1.9–2.4 mm in the rectum, and 1.8–2.1 mm over the entire patient body. Sinusoidal errors induced by B-spline interpolations were observed in all the displacement profiles of the DMP registrations. Regions of large displacements were observed to have more registration errors. Patient-specific FEM models have been developed to evaluate the DIR algorithms implemented in the commercial software package. It has been found that the accuracy of these algorithms is patient-dependent and related to various factors including tissue deformation magnitudes and image intensity gradients across the regions of interest. This may

  12. Experiment on a three-beam adaptive array for EHF frequency-hopped signals using a fast algorithm, phase E

    NASA Astrophysics Data System (ADS)

    Yen, J. L.; Kremer, P.; Fung, J.

    1990-05-01

    The Department of National Defence (Canada) has been conducting studies into multi-beam adaptive arrays for extremely high frequency (EHF) frequency-hopped signals. A three-beam 43 GHz adaptive antenna and a beam control processor are under development. An interactive software package for the operation of the array, capable of applying different control algorithms, is being written. A maximum signal-to-jammer-plus-noise ratio (SJNR) criterion has been found to provide superior performance in preventing degradation of user signals in the presence of nearby jammers. A new fast algorithm using a modified conjugate gradient approach has been found to be a very efficient way to implement anti-jamming arrays based on the maximum SJNR criterion. The present study was intended to refine and simplify this algorithm and to implement it on an experimental array for real-time evaluation of anti-jamming performance. A three-beam adaptive array was used. A simulation package was used in the evaluation of multi-beam systems using more than three beams and different user-jammer scenarios. An attempt to further reduce the computational burden through further analysis of maximum SJNR met with limited success. The investigation of a new angle detector for spatial tracking in heterodyne laser space communications was completed.

  13. Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.

    2003-01-01

    A method for defining thresholds for vibration-based algorithms that provides the minimum number of false alarms while maintaining sensitivity to gear damage was developed. This analysis focused on two vibration-based gear damage detection algorithms, FM4 and MSA. The method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig. The thresholds were defined based on damage progression during tests with damage. The thresholds' false-alarm rates were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined in test rigs can be used to define thresholds in flight to correctly classify the transmission operation as normal.
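
    A common way to realize such a scheme is to set the threshold at a quantile of the damage-free baseline distribution corresponding to an acceptable false-alarm rate. The sketch below illustrates that generic idea; it is not the paper's exact procedure, which also used damage-progression data.

        import numpy as np

        def set_threshold(baseline_values, false_alarm_rate=0.001):
            """One-sided detection threshold from damage-free baseline data,
            taken as the (1 - false_alarm_rate) empirical quantile."""
            return np.quantile(baseline_values, 1.0 - false_alarm_rate)

        def classify(metric_values, threshold):
            """Flag samples whose condition indicator (e.g., FM4) exceeds
            the threshold."""
            return metric_values > threshold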

  14. Multi-color space threshold segmentation and self-learning k-NN algorithm for surge test EUT status identification

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Liu, Gui-xiong

    2016-09-01

    The identification of targets varies in different surge tests. A multi-color space threshold segmentation and self-learning k-nearest neighbor (k-NN) algorithm for equipment-under-test (EUT) status identification was proposed, because the earlier feature-matching approach to status identification had to be trained on new patterns before every test. First, the color space in which to segment (L*a*b*, hue saturation lightness (HSL), or hue saturation value (HSV)) is selected according to the image's ratios of high-luminance and white-luminance points. Second, an unknown sample S_r is classified by the k-NN algorithm with training set T_z, using a feature vector formed from the number of pixels, eccentricity ratio, compactness ratio, and Euler number. Last, when the classification confidence coefficient equals k, S_r is added to the pre-training set T_z'; once T_z' saturates, the training set grows from T_z to T_z+1. On nine series of illuminant, indicator-light, screen, and disturbance samples (21600 frames in total), the algorithm achieved a 98.65% identification accuracy and autonomously enlarged the training set from T_0 to T_5 using five groups of samples.
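
    The self-learning rule can be illustrated with a short sketch: a sample is classified by majority vote, and a unanimous vote (confidence coefficient equal to k) promotes it into a pre-training pool that is merged once saturated. The pool-capacity constant below is an assumption.

        import numpy as np

        def knn_self_learning(train_X, train_y, sample, k=5, pool=None, pool_cap=50):
            """k-NN vote; a unanimous vote adds the sample to a pre-training
            pool, which is merged into the training set once it saturates.
            Pool handling is a simplified reading of the record."""
            d = np.linalg.norm(train_X - sample, axis=1)
            nn = np.argsort(d)[:k]
            labels, counts = np.unique(train_y[nn], return_counts=True)
            label = labels[np.argmax(counts)]
            if pool is not None and counts.max() == k:      # unanimous vote
                pool.append((sample, label))
                if len(pool) >= pool_cap:                   # pool saturated
                    train_X = np.vstack([train_X] + [s for s, _ in pool])
                    train_y = np.concatenate([train_y, [l for _, l in pool]])
                    pool.clear()
            return label, train_X, train_y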

  15. A new adaptive algorithm for automated feature extraction in exponentially damped signals for health monitoring of smart structures

    NASA Astrophysics Data System (ADS)

    Qarib, Hossein; Adeli, Hojjat

    2015-12-01

    In this paper, the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative 3-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and the genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately, and it further estimates the damping exponents. The proposed adaptive filtration method does not include any frequency-domain manipulation. Consequently, the time-domain signal is not affected as a result of frequency-domain and inverse transformations.

  16. Space Object Maneuver Detection Algorithms Using TLE Data

    NASA Astrophysics Data System (ADS)

    Pittelkau, M.

    2016-09-01

    An important aspect of Space Situational Awareness (SSA) is detection of deliberate and accidental orbit changes of space objects. Although space surveillance systems detect orbit maneuvers within their tracking algorithms, maneuver data are not readily disseminated for general use. However, two-line element (TLE) data is available and can be used to detect maneuvers of space objects. This work is an attempt to improve upon existing TLE-based maneuver detection algorithms. Three adaptive maneuver detection algorithms are developed and evaluated: The first is a fading-memory Kalman filter, which is equivalent to the sliding-window least-squares polynomial fit, but computationally more efficient and adaptive to the noise in the TLE data. The second algorithm is based on a sample cumulative distribution function (CDF) computed from a histogram of the magnitude-squared |ΔV|^2 of the change-in-velocity vectors (ΔV), which are computed from the TLE data. A maneuver detection threshold is computed from the median estimated from the CDF, or from the CDF and a specified probability of false alarm. The third algorithm is a median filter. The median filter is the simplest of a class of nonlinear filters called order statistics filters, which is within the theory of robust statistics. The output of the median filter is practically insensitive to outliers, or large maneuvers. The median of the |ΔV|^2 data is proportional to the variance of the ΔV, so the variance is estimated from the output of the median filter. A maneuver is detected when the input data exceeds a constant times the estimated variance.
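
    The third (median-filter) detector is simple enough to sketch. In the version below, a sample is flagged when its |ΔV|^2 value exceeds a constant times the local running median; the window length and the constant c are illustrative tuning values, not figures from the paper.

        import numpy as np

        def detect_maneuvers(dv_sq, window=15, c=9.0):
            """Median-filter maneuver detector over a series of |dV|^2 values
            derived from successive TLEs. The running median is robust to the
            maneuvers themselves; since the median of |dV|^2 scales with the
            variance of dV, a sample is flagged when it exceeds c times the
            local median."""
            n = len(dv_sq)
            half = window // 2
            flags = np.zeros(n, dtype=bool)
            for i in range(n):
                lo, hi = max(0, i - half), min(n, i + half + 1)
                med = np.median(dv_sq[lo:hi])
                flags[i] = dv_sq[i] > c * med
            return flags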

  17. Texture orientation-based algorithm for detecting infrared maritime targets.

    PubMed

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutters, such as ocean waves, clouds or sea fog, usually have high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. This algorithm first extracts suspected targets by analyzing the intersubband correlation between horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that compared with traditional algorithms, this algorithm can suppress background clutter much better and realize better single-frame detection for infrared maritime targets. Besides, in order to further guarantee accurate target extraction, the pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of this proposed strategy are strongly supported by experimental data acquired under different environmental conditions.

  18. Modified artificial fish school algorithm for free space optical communication with sensor-less adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cao, Jingtai; Zhao, Xiaohui; Li, Zhaokun; Liu, Wei; Gu, Haijun

    2017-11-01

    The performance of free space optical (FSO) communication systems is severely limited by atmospheric turbulence. Adaptive optics (AO) is an effective method for overcoming atmospheric disturbance, and for the strong scintillation effect in particular, the sensor-less AO system plays a major role in compensation. In this paper, a modified artificial fish school (MAFS) algorithm is proposed to compensate the aberrations in the sensor-less AO system. Both static and dynamic aberration compensation are analyzed, and the performance of FSO communication before and after aberration compensation is compared. In addition, the MAFS algorithm is compared with the artificial fish school (AFS) algorithm, the stochastic parallel gradient descent (SPGD) algorithm and the simulated annealing (SA) algorithm. It is shown that the MAFS algorithm has a higher convergence speed than the SPGD and SA algorithms, and reaches a better convergence value than the AFS, SPGD and SA algorithms. The sensor-less AO system with the MAFS algorithm effectively increases the coupling efficiency at the receiving terminal with fewer iterations. In conclusion, the MAFS algorithm has great significance for sensor-less AO systems compensating atmospheric turbulence in FSO communication systems.

  19. DARK ADAPTATION IN DINEUTES

    PubMed Central

    Clark, Leonard B.

    1938-01-01

    The level of dark adaptation of the whirligig beetle can be measured in terms of the threshold intensity calling forth a response. The course of dark adaptation was determined at levels of light adaptation of 6.5, 91.6, and 6100 foot-candles. All data can be fitted by the same curve. This indicates that dark adaptation follows parts of the same course irrespective of the level of light adaptation. The intensity of the adapting light determines the level at which dark adaptation will begin. The relation between log aI_0 (instantaneous threshold) and the log of the adapting light intensity is linear over the range studied. PMID:19873056

  20. FPGA implementation of ICA algorithm for blind signal separation and adaptive noise canceling.

    PubMed

    Kim, Chang-Min; Park, Hyung-Min; Kim, Taesu; Choi, Yoon-Kyung; Lee, Soo-Young

    2003-01-01

    A field-programmable gate array (FPGA) implementation of an independent component analysis (ICA) algorithm is reported for blind signal separation (BSS) and adaptive noise canceling (ANC) in real time. In order to provide the enormous computing power required by ICA-based algorithms with multipath reverberation, a special digital processor was designed and implemented in an FPGA. The chip design fully utilizes a modular concept, and several chips may be put together for complex applications with a large number of noise sources. Experimental results with a fabricated test board are reported for ANC only, BSS only, and simultaneous ANC/BSS, demonstrating successful speech enhancement in real environments in real time.

  1. An adaptive multi-level simulation algorithm for stochastic biological systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Although potentially more efficient computationally, the system statistics they generate suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We
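
    For reference, the exact (direct-method) Gillespie algorithm that the multi-level scheme builds on fits in a few lines of Python. This is a textbook sketch with user-supplied stoichiometry and propensity function, not the adaptive multi-level code itself.

        import numpy as np

        def gillespie(x0, stoich, rates, propensity, t_end,
                      rng=np.random.default_rng()):
            """Direct-method Gillespie SSA: sample the time to the next
            reaction from an exponential with rate a0, then pick the firing
            reaction in proportion to its propensity. 'propensity(x, rates)'
            returns the vector a(x); 'stoich' has one state-change row per
            reaction."""
            t, x = 0.0, np.array(x0, dtype=float)
            times, states = [t], [x.copy()]
            while t < t_end:
                a = propensity(x, rates)
                a0 = a.sum()
                if a0 <= 0:
                    break                      # no reaction can fire
                t += rng.exponential(1.0 / a0)
                j = rng.choice(len(a), p=a / a0)
                x += stoich[j]
                times.append(t); states.append(x.copy())
            return np.array(times), np.array(states)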

  2. QRS Detection Algorithm for Telehealth Electrocardiogram Recordings.

    PubMed

    Khamis, Heba; Weiss, Robert; Xie, Yang; Chang, Chan-Wei; Lovell, Nigel H; Redmond, Stephen J

    2016-07-01

    QRS detection algorithms are needed to analyze electrocardiogram (ECG) recordings generated in telehealth environments. However, the numerous published QRS detectors focus on clean clinical data. Here, a "UNSW" QRS detection algorithm is described that is suitable for clinical ECG and also poorer quality telehealth ECG. The UNSW algorithm generates a feature signal containing information about ECG amplitude and derivative, which is filtered according to its frequency content and an adaptive threshold is applied. The algorithm was tested on clinical and telehealth ECG and the QRS detection performance is compared to the Pan-Tompkins (PT) and Gutiérrez-Rivas (GR) algorithms. For the MIT-BIH Arrhythmia database (virtually artifact free, clinical ECG), the overall sensitivity (Se) and positive predictivity (+P) of the UNSW algorithm were >99%, which was comparable to PT and GR. When applied to the MIT-BIH noise stress test database (clinical ECG with added calibrated noise) after artifact masking, all three algorithms had overall Se >99%, and the UNSW algorithm had higher +P (98%, p < 0.05) than PT and GR. For 250 telehealth ECG records (unsupervised recordings; dry metal electrodes), the UNSW algorithm had 98% Se and 95% +P, which was superior to PT (+P: p < 0.001) and GR (Se and +P: p < 0.001). This is the first study to describe a QRS detection algorithm for telehealth data and evaluate it on clinical and telehealth ECG with superior results to published algorithms. The UNSW algorithm could be used to manage increasing telehealth ECG analysis workloads.
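
    While the UNSW feature signal and constants are not given in the abstract, the adaptive-threshold stage of such QRS detectors follows a familiar pattern: the threshold decays between beats and is re-armed at a fraction of each accepted peak. A generic sketch, with illustrative constants:

        import numpy as np

        def adaptive_qrs_detect(feature, fs, decay=0.997, frac=0.5, refractory=0.25):
            """Generic adaptive-threshold peak picker: the threshold decays
            exponentially and is re-armed to a fraction of each accepted
            peak; a refractory period suppresses double detections.
            Constants are illustrative, not the UNSW algorithm's values."""
            thr = frac * feature[:int(2 * fs)].max()   # bootstrap from first 2 s
            hold = int(refractory * fs)
            last = -hold
            beats = []
            for n in range(1, len(feature) - 1):
                thr *= decay
                is_peak = feature[n] >= feature[n - 1] and feature[n] > feature[n + 1]
                if is_peak and feature[n] > thr and n - last >= hold:
                    beats.append(n)
                    last = n
                    thr = frac * feature[n]            # re-arm threshold
            return np.array(beats)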

  3. Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Lifang; Zhao, Yao; Ni, Rongrong; Li, Ting

    2010-12-01

    We propose a novel steganographic method in JPEG images with high performance. First, we propose an improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Second, in order to minimize visual degradation of the stego image, we shuffle the bit order of the message based on chaos, whose parameters are selected by the genetic algorithm. Shuffling the message's bit order provides a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving histogram characteristics and providing high capacity.

  4. CISN ShakeAlert: Faster Warning Information Through Multiple Threshold Event Detection in the Virtual Seismologist (VS) Early Warning Algorithm

    NASA Astrophysics Data System (ADS)

    Cua, G. B.; Fischer, M.; Caprio, M.; Heaton, T. H.; Cisn Earthquake Early Warning Project Team

    2010-12-01

    The Virtual Seismologist (VS) earthquake early warning (EEW) algorithm is one of 3 EEW approaches being incorporated into the California Integrated Seismic Network (CISN) ShakeAlert system, a prototype EEW system that could potentially be implemented in California. The VS algorithm, implemented by the Swiss Seismological Service at ETH Zurich, is a Bayesian approach to EEW, wherein the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the ongoing earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS codes have been running in real-time at the Southern California Seismic Network since July 2008, and at the Northern California Seismic Network since February 2009. We discuss recent enhancements to the VS EEW algorithm that are being integrated into CISN ShakeAlert. We developed and continue to test a multiple-threshold event detection scheme, which uses different association / location approaches depending on the peak amplitudes associated with an incoming P pick. With this scheme, an event with sufficiently high initial amplitudes can be declared on the basis of a single station, maximizing warning times for damaging events for which EEW is most relevant. Smaller, non-damaging events, which will have lower initial amplitudes, will require more picks to initiate an event declaration, with the goal of reducing false alarms. This transforms the VS codes from a regional EEW approach reliant on traditional location estimation (and the requirement of at least 4 picks as implemented by the Binder Earthworm phase associator) into an on-site/regional approach capable of providing a continuously evolving stream of EEW information starting from the first P-detection. Real-time and offline analysis on Swiss and California waveform datasets indicate that the

  5. A dynamic fuzzy genetic algorithm for natural image segmentation using adaptive mean shift

    NASA Astrophysics Data System (ADS)

    Arfan Jaffar, M.

    2017-01-01

    In this paper, a colour image segmentation approach based on the hybridisation of adaptive mean shift (AMS), fuzzy c-means and genetic algorithms (GAs) is presented. Image segmentation is the perceptual partitioning of pixels based on some likeness measure. A GA with fuzzy behaviour is adapted to maximise the fuzzy separation and minimise the global compactness among the clusters or segments in spatial fuzzy c-means (sFCM). It adds diversity to the search process to find the global optimum. A simple fusion method has been used to combine the clusters to overcome the problem of over-segmentation. The results show that our technique outperforms state-of-the-art methods.

  6. Implementation of a rapid correction algorithm for adaptive optics using a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.

    2016-09-01

    Adaptive optics relies on the accuracy and speed of a wavefront sensor in order to provide quick corrections to distortions in the optical system. In weaker cases of atmospheric turbulence, often encountered in astronomical fields, a traditional Shack-Hartmann sensor has proved to be very effective. However, in cases of stronger atmospheric turbulence, often encountered near the surface of the Earth, atmospheric turbulence no longer solely causes small tilts in the wavefront. Instead, lasers passing through strong or "deep" atmospheric turbulence encounter beam breakup, which results in interference effects and discontinuities in the incoming wavefront. In these situations, a Shack-Hartmann sensor can no longer effectively determine the shape of the incoming wavefront. We propose a wavefront reconstruction and correction algorithm based around the plenoptic sensor. The plenoptic sensor's design allows it to match and exceed the wavefront sensing capabilities of a Shack-Hartmann sensor for our application. Novel wavefront reconstruction algorithms can take advantage of the plenoptic sensor to provide the rapid wavefront reconstruction necessary for correcting turbulence in real time. To test the integrity of the plenoptic sensor and its reconstruction algorithms, we use artificially generated turbulence in a lab-scale environment to simulate the structure and speed of outdoor atmospheric turbulence. By analyzing the performance of our system with and without the closed-loop plenoptic sensor adaptive optics system, we show that the plenoptic sensor is effective in mitigating real-time lab-generated atmospheric turbulence.

  7. Coloring geographical threshold graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan; Percus, Allon; Muller, Tobias

    We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ = (ln n / ln ln n)(1 + o(1)). Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)^2, and specify the constant C.

  8. Stable Extraction of Threshold Voltage Using Transconductance Change Method for CMOS Modeling, Simulation and Characterization

    NASA Astrophysics Data System (ADS)

    Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook

    2004-04-01

    We proposed a stable extraction algorithm for the threshold voltage using the transconductance change method, obtained by optimizing the node interval. With the algorithm, noise-free g_m2 (= dg_m/dV_GS) profiles can be extracted within one-percent error, which leads to a more physically meaningful threshold voltage calculated by the transconductance change method. The extracted threshold voltage predicts the gate-to-source voltage at which the surface potential is within kT/q of φ_s = 2φ_f + V_SB. Our algorithm makes the transconductance change method more practical by overcoming its noise problem. This threshold voltage extraction algorithm yields the threshold roll-off behavior of nanoscale metal-oxide-semiconductor field-effect transistors (MOSFETs) accurately and makes it possible to calculate the surface potential φ_s at any other point on the drain-to-source current (I_DS) versus gate-to-source voltage (V_GS) curve. It will provide a useful analysis tool in the field of device modeling, simulation and characterization.
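
    The transconductance change method itself reduces to two numerical derivatives. The sketch below extracts V_T at the peak of g_m2 using central differences with a node interval 'step'; the paper's contribution, optimizing that interval to keep g_m2 noise-free, is not reproduced here.

        import numpy as np

        def vt_transconductance_change(vgs, ids, step=1):
            """Transconductance-change threshold extraction: V_T is taken at
            the peak of g_m2 = d(g_m)/dV_GS, with g_m = dI_DS/dV_GS, both
            computed by central differences with node interval 'step'."""
            gm = (ids[2 * step:] - ids[:-2 * step]) / (vgs[2 * step:] - vgs[:-2 * step])
            v_gm = vgs[step:-step]                 # voltages at g_m samples
            gm2 = (gm[2 * step:] - gm[:-2 * step]) / (v_gm[2 * step:] - v_gm[:-2 * step])
            v_gm2 = v_gm[step:-step]               # voltages at g_m2 samples
            return v_gm2[np.argmax(gm2)]           # V_T at the g_m2 peak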

  9. A novel clinical decision support system using improved adaptive genetic algorithm for the assessment of fetal well-being.

    PubMed

    Ravindran, Sindhu; Jambek, Asral Bahari; Muthusamy, Hariharan; Neoh, Siew-Chin

    2015-01-01

    A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset through an Improved Adaptive Genetic Algorithm (IAGA) and Extreme Learning Machine (ELM). IAGA employs a new scaling technique (called sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. This search algorithm also utilizes three different fitness functions (two single-objective fitness functions and a multi-objective fitness function) to assess its performance. The classification results show that a promising classification accuracy of 94% is obtained with an optimal feature subset using IAGA. The classification results are also compared with those of other feature reduction techniques to substantiate its exhaustive search towards the global optimum. Besides, five other benchmark datasets are used to gauge the strength of the proposed IAGA algorithm.

  10. Predicting missing values in a home care database using an adaptive uncertainty rule method.

    PubMed

    Konias, S; Gogou, G; Bamidis, P D; Vlahavas, I; Maglaveras, N

    2005-01-01

    Contemporary literature illustrates an abundance of adaptive algorithms for mining association rules. However, most literature is unable to deal with the peculiarities, such as missing values and dynamic data creation, that are frequently encountered in fields like medicine. This paper proposes an uncertainty rule method that uses an adaptive threshold for filling missing values in newly added records. A new approach for mining uncertainty rules and filling missing values is proposed, which is in turn particularly suitable for dynamic databases, like the ones used in home care systems. In this study, a new data mining method named FiMV (Filling Missing Values) is illustrated based on the mined uncertainty rules. Uncertainty rules have quite a similar structure to association rules and are extracted by an algorithm proposed in previous work, namely AURG (Adaptive Uncertainty Rule Generation). The main target was to implement an appropriate method for recovering missing values in a dynamic database, where new records are continuously added, without needing to specify any kind of thresholds beforehand. The method was applied to a home care monitoring system database. Randomly, multiple missing values for each record's attributes (rate 5-20% by 5% increments) were introduced in the initial dataset. FiMV demonstrated 100% completion rates with over 90% success in each case, while usual approaches, where all records with missing values are ignored or thresholds are required, experienced significantly reduced completion and success rates. It is concluded that the proposed method is appropriate for the data-cleaning step of the Knowledge Discovery process in databases. The latter, containing much significance for the output efficiency of any data mining technique, can improve the quality of the mined information.

  11. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3'-Diaminobenzidine & Haematoxylin.

    PubMed

    Korzynska, Anna; Roszkowiak, Lukasz; Lopez, Carlos; Bosch, Ramon; Witkowski, Lukasz; Lejeune, Marylene

    2013-03-25

    The comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and some other parameters of the following adaptive threshold methods of segmentation are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extraction of the brown colour information from artificial images synthesized from counterpart experimentally captured images. This paper demonstrates the usefulness of the microscopic image synthesis method in evaluating and comparing image processing results. The analysis of this broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB makes it possible to select method-image pairs that are most efficient under various criteria, e.g., accuracy and precision in area detection or accuracy in the number of objects detected. The comparison shows that the White, Bernsen and Sauvola methods give better results than the remaining methods for all types of monochromatic images. All three methods segment the immunopositive nuclei with overall mean accuracies of 0.9952, 0.9942 and 0.9944, respectively. However, the best results are achieved for the monochromatic image whose intensity encodes the brown colour map constructed by the colour deconvolution algorithm. The specificity for the Bernsen and White methods is 1, with sensitivities of 0.74 for White and 0.91 for Bernsen, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, the Sauvola method's selected objects are segmented without
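
    Several of the best-performing methods in this comparison have compact published formulas. The sketch below implements the Sauvola threshold T = m(1 + k(s/R - 1)) over a local window (Niblack's T = m + k*s differs only in the marked line); the window size and parameter values are typical defaults, not the study's settings.

        import numpy as np

        def sauvola_threshold(img, w=15, k=0.5, R=128.0):
            """Sauvola local threshold from the mean m and standard deviation
            s in a w x w window. Brute-force windows for clarity; integral
            images are the usual fast path."""
            H, W = img.shape
            half = w // 2
            T = np.empty((H, W), dtype=float)
            padded = np.pad(img.astype(float), half, mode='reflect')
            for y in range(H):
                for x in range(W):
                    win = padded[y:y + w, x:x + w]
                    m, s = win.mean(), win.std()
                    T[y, x] = m * (1.0 + k * (s / R - 1.0))  # Niblack: m + k*s
            # Polarity depends on whether objects are darker or brighter
            # than the background (nuclei in stained sections are dark).
            return img > T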

  12. Testing for a slope-based decoupling algorithm in a woofer-tweeter adaptive optics system.

    PubMed

    Cheng, Tao; Liu, WenJin; Yang, KangJian; He, Xin; Yang, Ping; Xu, Bing

    2018-05-01

    It is well known that using two or more deformable mirrors (DMs) can improve the compensation ability of an adaptive optics (AO) system. However, to keep an AO system stable, the correlation between the multiple DMs must be suppressed during the correction. In this paper, we propose a slope-based decoupling algorithm to simultaneously control multiple DMs. In order to examine the validity and practicality of this algorithm, a typical woofer-tweeter (W-T) AO system was set up. For the W-T system, a theoretical model was simulated, and the results indicated that the presented algorithm can selectively make the woofer and tweeter correct aberrations of different spatial frequencies and suppress the cross coupling between the dual DMs. At the same time, the experimental results for the W-T AO system were consistent with the simulation, demonstrating in practice that this algorithm is practical for an AO system with dual DMs.

  13. A novel adaptive, real-time algorithm to detect gait events from wearable sensors.

    PubMed

    Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona

    2015-05-01

    A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC), as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS), as minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved with respect to other published methods. The algorithm robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait trainings and/or assistive devices.
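
    The event definitions quoted above translate directly into extrema detection. The offline sketch below finds IC at minima of the shank flexion/extension angle and EC/MS at minima/maxima of the angular velocity; the fixed minimum inter-event spacing is an illustrative stand-in for the algorithm's adaptive, real-time windowing and step-by-step updates.

        import numpy as np

        def gait_events(angle, gyro, fs, min_step=0.4):
            """Offline illustration of the event definitions above: IC at
            local minima of the flexion/extension angle, EC at local minima
            and MS at local maxima of the angular velocity."""
            gap = int(min_step * fs)

            def local_extrema(x, sign):
                idx = [n for n in range(1, len(x) - 1)
                       if sign * x[n] > sign * x[n - 1]
                       and sign * x[n] >= sign * x[n + 1]]
                picked = []
                for n in idx:                      # enforce minimum spacing
                    if not picked or n - picked[-1] >= gap:
                        picked.append(n)
                return np.array(picked)

            ic = local_extrema(angle, -1)          # minima of angle
            ec = local_extrema(gyro, -1)           # minima of angular velocity
            ms = local_extrema(gyro, +1)           # maxima of angular velocity
            return ic, ec, ms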

  14. Optimization of IBF parameters based on adaptive tool-path algorithm

    NASA Astrophysics Data System (ADS)

    Deng, Wen Hui; Chen, Xian Hua; Jin, Hui Liang; Zhong, Bo; Hou, Jin; Li, An Qi

    2018-03-01

    As a kind of Computer Controlled Optical Surfacing (CCOS) technology, Ion Beam Figuring (IBF) has obvious advantages in the control of surface accuracy, surface roughness and subsurface damage. The superiority and characteristics of IBF in optical component processing are analyzed from the point of view of the removal mechanism. To obtain a more effective and automatic tool path carrying dwell-time information, a novel algorithm is proposed in this paper. Based on the removal functions measured on our IBF equipment and the adaptive tool path, optimized parameters are obtained by analyzing the residual error that would be created in the polishing process. A Φ600 mm plane reflector element was used as a simulation example. The simulation result shows that after four combined processing runs, the surface accuracy PV (peak-valley) value and RMS (root mean square) value were reduced from 110.22 nm and 13.998 nm to 4.81 nm and 0.495 nm, respectively, over 98% of the aperture. The result shows that the algorithm and the optimized parameters provide a good theoretical basis for high-precision IBF processing.

  15. An adaptive approach to the physical annealing strategy for simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2013-02-01

    A new and reasonable method for adaptive implementation of simulated annealing (SA) is studied on two types of random traveling salesman problems. The idea is based on the previous finding on the search characteristics of the threshold algorithms, that is, the primary role of the relaxation dynamics in their finite-time optimization process. It is shown that the effective temperature for optimization can be predicted from the system's behavior analogous to the stabilization phenomenon occurring in the heating process starting from a quenched solution. The subsequent slow cooling near the predicted point draws out the inherent optimizing ability of finite-time SA in more straightforward manner than the conventional adaptive approach.

  16. SART-Type Half-Threshold Filtering Approach for CT Reconstruction

    PubMed Central

    YU, HENGYONG; WANG, GE

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve the sparsity constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering. PMID:25530928

  17. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    PubMed

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve the sparsity constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1/2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering.
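
    The overall structure of such a reconstruction loop, a SART update followed by a thresholding step, can be sketched briefly. The soft-threshold used below is a stand-in for the papers' analytic half-threshold (ℓ1/2) operator, and for simplicity the regularization is applied to the image itself rather than through the pseudoinverse DGT developed in the papers.

        import numpy as np

        def sart_thresholded(A, b, x0, n_iters=50, lam=0.1, relax=1.0):
            """Skeleton of a SART-type iteration with a thresholding step.
            A is the system matrix, b the measured projections; the
            soft-threshold line stands in for the half-threshold operator."""
            x = x0.astype(float).copy()
            row_sums = np.asarray(A.sum(axis=1)).ravel() + 1e-12
            col_sums = np.asarray(A.sum(axis=0)).ravel() + 1e-12
            for _ in range(n_iters):
                # SART update: back-project row-normalized residuals
                r = (b - A @ x) / row_sums
                x += relax * (A.T @ r) / col_sums
                # regularization step (soft-threshold stand-in)
                x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
            return x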

  18. Using patient‐specific phantoms to evaluate deformable image registration algorithms for adaptive radiation therapy

    PubMed Central

    Stanley, Nick; Glide‐Hurst, Carri; Kim, Jinkoo; Adams, Jeffrey; Li, Shunshan; Wen, Ning; Chetty, Indrin J

    2013-01-01

    The quality of adaptive treatment planning depends on the accuracy of its underlying deformable image registration (DIR). The purpose of this study is to evaluate the performance of two DIR algorithms, B-spline-based deformable multipass (DMP) and deformable demons (Demons), implemented in a commercial software package. Evaluations were conducted using both computational and physical deformable phantoms. Based on a finite element method (FEM), a total of 11 computational models were developed from a set of CT images acquired from four lung and one prostate cancer patients. FEM generated displacement vector fields (DVF) were used to construct the lung and prostate image phantoms. Based on a fast-Fourier transform technique, image noise power spectrum was incorporated into the prostate image phantoms to create simulated CBCT images. The FEM-DVF served as a gold standard for verification of the two registration algorithms performed on these phantoms. The registration algorithms were also evaluated at the homologous points quantified in the CT images of a physical lung phantom. The results indicated that the mean errors of the DMP algorithm were in the range of 1.0 ~ 3.1 mm for the computational phantoms and 1.9 mm for the physical lung phantom. For the computational prostate phantoms, the corresponding mean error was 1.0–1.9 mm in the prostate, 1.9–2.4 mm in the rectum, and 1.8–2.1 mm over the entire patient body. Sinusoidal errors induced by B-spline interpolations were observed in all the displacement profiles of the DMP registrations. Regions of large displacements were observed to have more registration errors. Patient-specific FEM models have been developed to evaluate the DIR algorithms implemented in the commercial software package. It has been found that the accuracy of these algorithms is patient-dependent and related to various factors including tissue deformation magnitudes and image intensity gradients across the regions of interest. This

  19. Automatic luminous reflections detector using global threshold with increased luminosity contrast in images

    NASA Astrophysics Data System (ADS)

    Silva, Ricardo Petri; Naozuka, Gustavo Taiji; Mastelini, Saulo Martiello; Felinto, Alan Salvany

    2018-01-01

    The incidence of luminous reflections (LR) in captured images can interfere with the color of the affected regions. These regions tend to oversaturate, becoming whitish and, consequently, losing the original color information of the scene. Decision processes that employ images acquired from digital cameras, such as real-time video surgery and facial and ocular recognition, can be impaired by LR incidence. This work proposes an algorithm called contrast enhancement of potential LR regions, a preprocessing step that increases the contrast of potential LR regions in order to improve the performance of automatic LR detectors. In addition, three automatic detectors were compared with and without our preprocessing method. The first is the Chang-Tseng threshold, a technique already consolidated in the literature. We propose two automatic detectors called adapted histogram peak and global threshold. We employed four performance metrics to evaluate the detectors, namely accuracy, precision, exactitude, and root mean square error. The exactitude metric is introduced in this work and is computed against a manually defined reference model. The global threshold detector combined with our preprocessing method presented the best results, with an average exactitude rate of 82.47%.
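
    As a rough illustration of the two-step idea described above (boosting the luminosity contrast of potential LR regions, then applying one global threshold), here is a toy sketch; the gain and threshold values are placeholders of ours, not parameters from the paper.

    ```python
    import numpy as np

    def detect_reflections(gray, gain=1.5, thresh=0.9):
        """Toy LR detector: stretch luminosity contrast about the mean so that
        over-saturated regions stand out, then apply a single global threshold.
        `gain` and `thresh` are illustrative values, not the paper's."""
        g = gray.astype(float) / 255.0
        stretched = np.clip((g - g.mean()) * gain + g.mean(), 0.0, 1.0)
        return stretched >= thresh  # boolean mask of candidate LR pixels
    ```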

  20. A hybrid skull-stripping algorithm based on adaptive balloon snake models

    NASA Astrophysics Data System (ADS)

    Liu, Hung-Ting; Sheu, Tony W. H.; Chang, Herng-Hua

    2013-02-01

    Skull-stripping is one of the most important preprocessing steps in neuroimage analysis. We propose a hybrid algorithm based on an adaptive balloon snake model to handle this challenging task. The proposed framework consists of two stages: first, the fuzzy possibilistic c-means (FPCM) algorithm is used for voxel clustering, which provides a labeled image for the snake contour initialization. In the second stage, the contour is initialized outside the brain surface based on the FPCM result and evolves under the guidance of the balloon snake model, which drives the contour with an adaptive inward normal force to capture the boundary of the brain. The similarity indices indicate that our method outperformed the BSE and BET methods in skull-stripping the MR image volumes in the IBSR data set. Experimental results show the effectiveness of this new scheme and its potential for a wide variety of skull-stripping tasks.

  1. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    USGS Publications Warehouse

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.

  2. A novel evaluation of two related and two independent algorithms for eye movement classification during reading.

    PubMed

    Friedman, Lee; Rigas, Ioannis; Abdulin, Evgeny; Komogortsev, Oleg V

    2018-05-15

    Nyström and Holmqvist have published a method for the classification of eye movements during reading (ONH) (Nyström & Holmqvist, 2010). When we applied this algorithm to our data, the results were not satisfactory, so we modified the algorithm (now the MNH) to better classify our data. The changes included: (1) reducing the amount of signal filtering, (2) excluding a new type of noise, (3) removing several adaptive thresholds and replacing them with fixed thresholds, (4) changing the way that the start and end of each saccade was determined, (5) employing a new algorithm for detecting PSOs, and (6) allowing a fixation period to either begin or end with noise. A new method for the evaluation of classification algorithms is presented. It was designed to provide comprehensive feedback to an algorithm developer, in a time-efficient manner, about the types and numbers of classification errors that an algorithm produces. This evaluation was conducted by three expert raters independently, across 20 randomly chosen recordings, each classified by both algorithms. The MNH made many fewer errors in determining when saccades start and end, and it also detected some fixations and saccades that the ONH did not. The MNH fails to detect very small saccades. We also evaluated two additional algorithms: the EyeLink Parser and a more current, machine-learning-based algorithm. The EyeLink Parser tended to find more saccades that ended too early than did the other methods, and we found numerous problems with the output of the machine-learning-based algorithm.

  3. The Limits to Adaptation; A Systems Approach

    EPA Science Inventory

    The Limits to Adaptation: A Systems Approach. The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering...

  4. AIDA: an adaptive image deconvolution algorithm with application to multi-frame and three-dimensional data

    PubMed Central

    Hom, Erik F. Y.; Marchis, Franck; Lee, Timothy K.; Haase, Sebastian; Agard, David A.; Sedat, John W.

    2011-01-01

    We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy. PMID:17491626

  5. Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images

    PubMed Central

    Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu

    2013-01-01

    With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has a reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior over existing thresholding methods. PMID:23525856

  6. Angular dependence of multiangle dynamic light scattering for particle size distribution inversion using a self-adapting regularization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min

    2018-04-01

    The multiangle dynamic light scattering (MDLS) technique can better estimate particle size distributions (PSDs) than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and self-adaptively resolves several key issues, including the choice of the weighting coefficients, the inversion range, and the optimal inversion method between two regularization algorithms, for estimating the PSD from MDLS measurements. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed. The dependence of the results on the number and range of measurement angles was analyzed in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions are presented to demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.

  7. Performance comparison of two resolution modeling PET reconstruction algorithms in terms of physical figures of merit used in quantitative imaging.

    PubMed

    Matheoud, R; Ferrando, O; Valzano, S; Lizio, D; Sacchetti, G; Ciarmiello, A; Foppiano, F; Brambilla, M

    2015-07-01

    Resolution modeling (RM) of PET systems has been introduced in iterative reconstruction algorithms for oncologic PET. The RM recovers the loss of resolution and reduces the associated partial volume effect. While these methods improve observer performance, particularly in the detection of small and faint lesions, their impact on quantification accuracy still requires thorough investigation. The aim of this study was to characterize the performance of RM algorithms under controlled conditions simulating a typical (18)F-FDG oncologic study, using an anthropomorphic phantom and selected physical figures of merit used for image quantification. Measurements were performed on Biograph HiREZ (B_HiREZ) and Discovery 710 (D_710) PET/CT scanners, and reconstructions were performed using the standard iterative reconstructions and the RM algorithms associated with each scanner: TrueX and SharpIR, respectively. RM determined a significant improvement in contrast recovery for small targets (≤17 mm diameter) only for the D_710 scanner. The maximum standardized uptake value (SUVmax) increased when RM was applied on both scanners. The SUVmax of small targets was on average lower with the B_HiREZ than with the D_710. SharpIR improved the accuracy of SUVmax determination, whilst TrueX showed an overestimation of SUVmax for sphere dimensions greater than 22 mm. The goodness of fit of adaptive threshold algorithms worsened significantly when RM algorithms were employed on both scanners. Differences in general quantitative performance were observed for the PET scanners analyzed. Segmentation of PET images using adaptive threshold algorithms should not be undertaken in conjunction with RM reconstructions. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  8. Object tracking algorithm based on the color histogram probability distribution

    NASA Astrophysics Data System (ADS)

    Li, Ning; Lu, Tongwei; Zhang, Yanduo

    2018-04-01

    In order to resolve tracking failures that result from target occlusion, to suppress false locks onto background objects similar to the target, and to reduce the influence of light intensity, this paper corrects the update center of the target in the HSV and YCbCr color channels and continuously updates a self-adaptive image threshold for target detection. Clustering the initial obstacles gives a rough range, which shortens the threshold range and maximizes target detection. To improve the accuracy of the detector, a Kalman filter is added to estimate the target state area. A direction predictor based on a Markov model is added to realize target state estimation under background color interference and to enhance the ability of the detector to distinguish similar objects. The experimental results show that the improved algorithm is more accurate and processes frames faster.
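
    Since the abstract is compact, the following OpenCV sketch shows only the core of tracking with a color histogram probability distribution: a hue histogram of the target is back-projected onto each frame, and a mean-shift search follows the probability mass. The video path and the initial window are placeholders, and the paper's Kalman filter and Markov direction predictor are not reproduced here.

    ```python
    import cv2

    # Hue-histogram back-projection tracker (illustrative; the file name and
    # initial window are placeholders, not values from the paper).
    cap = cv2.VideoCapture("video.avi")
    ok, frame = cap.read()
    x, y, w, h = 200, 150, 60, 80                      # initial target window
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)  # probability map
        _, (x, y, w, h) = cv2.meanShift(prob, (x, y, w, h), criteria)  # follow the mass
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    ```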

  9. An unbiased adaptive sampling algorithm for the exploration of RNA mutational landscapes under evolutionary pressure.

    PubMed

    Waldispühl, Jérôme; Ponty, Yann

    2011-11-01

    The analysis of the relationship between sequences and structures (i.e., how mutations affect structures and reciprocally how structures influence mutations) is essential to decipher the principles driving molecular evolution, to infer the origins of genetic diseases, and to develop bioengineering applications such as the design of artificial molecules. Because their structures can be predicted from the sequence data only, RNA molecules provide a good framework to study this sequence-structure relationship. We recently introduced a suite of algorithms called RNAmutants which allows a complete exploration of RNA sequence-structure maps in polynomial time and space. Formally, RNAmutants takes an input sequence (or seed) to compute the Boltzmann-weighted ensembles of mutants with exactly k mutations, and samples mutations from these ensembles. However, this approach suffers from major limitations. Indeed, since the Boltzmann probabilities of the mutations depend on the free energy of the structures, RNAmutants has difficulty sampling mutant sequences with low G+C-contents. In this article, we introduce an unbiased adaptive sampling algorithm that enables RNAmutants to sample regions of the mutational landscape poorly covered by classical algorithms. We applied these methods to sample mutations with low G+C-contents. These adaptive sampling techniques can be easily adapted to explore other regions of the sequence and structural landscapes which are difficult to sample. Importantly, these algorithms come at a minimal computational cost. We demonstrate the insights offered by these techniques on studies of complete RNA sequence-structure maps of sizes up to 40 nucleotides. Our results indicate that the G+C-content has a strong influence on the size and shape of the evolutionary accessible sequence and structural spaces. In particular, we show that low G+C-contents favor the appearance of internal loops and thus possibly the synthesis of tertiary structure motifs. On

  10. A Robust Step Detection Algorithm and Walking Distance Estimation Based on Daily Wrist Activity Recognition Using a Smart Band.

    PubMed

    Trong Bui, Duong; Nguyen, Nhan Duc; Jeong, Gu-Min

    2018-06-25

    Human activity recognition and pedestrian dead reckoning are interesting fields because of their important utility in daily-life healthcare. Currently, these fields face many challenges, one of which is the lack of a robust, high-performance algorithm. This paper proposes a new method to implement a robust step detection and adaptive distance estimation algorithm based on the classification of five daily wrist activities during walking at various speeds using a smart band. The key idea is that a non-parametric adaptive distance estimator is performed after two activity classifiers and a robust step detector. In this study, two classifiers perform two phases of recognizing five wrist activities during walking. Then, a robust step detection algorithm, which is integrated with an adaptive threshold and a peak and valley correction algorithm, is applied to the classified activities to detect the walking steps. In addition, the misclassified activities are fed back to the previous layer. Finally, three adaptive distance estimators, which are based on a non-parametric model of the average walking speed, calculate the length of each stride. The experimental results show that the average classification accuracy is about 99%, and the accuracy of the step detection is 98.7%. The error of the estimated distance is 2.2–4.2% depending on the type of wrist activity.
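
    A minimal sketch of the kind of adaptive-threshold step detector the paper builds on: the peak threshold tracks the local mean and standard deviation of the accelerometer magnitude, so it adapts to walking speed. The window length and scale factor are assumptions of ours; the activity classifiers, peak/valley correction, and distance estimators of the paper are omitted.

    ```python
    import numpy as np

    def count_steps(acc_mag, fs=50, win_s=1.0, k=0.5):
        """Toy adaptive-threshold step counter: within a trailing window the
        peak threshold tracks mean + k*std of the accelerometer magnitude,
        so it rises during brisk walking and falls during slow walking.
        `fs`, `win_s`, and `k` are illustrative, not the paper's values."""
        win = int(win_s * fs)
        steps, above = 0, False
        for i in range(len(acc_mag)):
            seg = acc_mag[max(0, i - win):i + 1]
            thr = seg.mean() + k * seg.std()
            if acc_mag[i] > thr and not above:
                steps += 1          # rising crossing of the adaptive threshold
                above = True
            elif acc_mag[i] < thr:
                above = False
        return steps
    ```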

  11. Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model.

    PubMed

    Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai

    2017-02-08

    Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences.
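
    To make the threshold-based portion of the multiphase model concrete, here is a toy detector for the free-fall and impact phases; the thresholds (in units of g) and the time window are illustrative assumptions of ours, and the paper's knowledge-based phase analysis, including the rest phase, is not reproduced.

    ```python
    import numpy as np

    def detect_fall(acc_mag, fs=100, free_thr=0.6, impact_thr=2.5, window_s=1.0):
        """Toy two-phase threshold detector: a free-fall dip (|a| well below
        1 g) followed within `window_s` seconds by an impact spike (|a| well
        above 1 g). All parameter values are illustrative assumptions."""
        win = int(window_s * fs)
        for i in np.where(acc_mag < free_thr)[0]:
            if np.any(acc_mag[i:i + win] > impact_thr):
                return True  # candidate fall: free fall then impact
        return False
    ```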

  12. Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model

    PubMed Central

    Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai

    2017-01-01

    Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences. PMID:28208694

  13. Performance comparisons on spatial lattice algorithm and direct matrix inverse method with application to adaptive arrays processing

    NASA Technical Reports Server (NTRS)

    An, S. H.; Yao, K.

    1986-01-01

    The lattice algorithm has been employed in numerous adaptive filtering applications such as speech analysis/synthesis, noise canceling, spectral analysis, and channel equalization. In this paper its application to adaptive-array processing is discussed. The advantages are a fast convergence rate as well as computational accuracy independent of the noise and interference conditions. The results produced by this technique are compared to those obtained by the direct matrix inverse method.

  14. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3’-Diaminobenzidine & Haematoxylin

    PubMed Central

    2013-01-01

    The comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity, and some other parameters of the following adaptive threshold methods of segmentation are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extraction of the brown colour information from artificial images synthesized based on counterpart experimentally captured images. This paper presents the usefulness of the microscopic image synthesis method in the evaluation and comparison of image processing results. A thorough analysis of a broad range of adaptive threshold methods applied to: (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the ’brown component’ extracted from RGB makes it possible to select method-image pairs for which a given method is most efficient according to various criteria, e.g. accuracy and precision in area detection or accuracy in the number of objects detected. The comparison shows that the results of the White, the Bernsen and the Sauvola methods are better than those of the rest of the methods for all types of monochromatic images. All three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively, when treated totally. However, the best results are achieved for the monochromatic image whose intensity represents the brown colour map constructed by the colour deconvolution algorithm. The specificity in the cases of the Bernsen and the White methods is 1, and the sensitivities are 0.74 for the White and 0.91 for the Bernsen method, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, the Sauvola method selected objects are segmented without
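
    For reference, the Niblack and Sauvola rules evaluated above have simple closed forms, T = m + k·s and T = m·(1 + k·(s/R − 1)) respectively, where m and s are the local mean and standard deviation. The sketch below implements them with box-filtered statistics; the window size and the k and R values are typical literature settings, not the ones tuned in this study.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def _local_stats(img, w):
        """Local mean and standard deviation over a w x w window (box filter)."""
        f = img.astype(float)
        m = uniform_filter(f, w)
        s = np.sqrt(np.maximum(uniform_filter(f * f, w) - m * m, 0.0))
        return m, s

    def niblack(img, w=25, k=-0.2):
        """Niblack: T = m + k*s; pixels above T are taken as foreground here
        (for dark objects on a light background, invert the comparison)."""
        m, s = _local_stats(img, w)
        return img > m + k * s

    def sauvola(img, w=25, k=0.5, R=128.0):
        """Sauvola: T = m * (1 + k*(s/R - 1)); R is the dynamic range of s."""
        m, s = _local_stats(img, w)
        return img > m * (1.0 + k * (s / R - 1.0))
    ```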

  15. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    NASA Astrophysics Data System (ADS)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good-quality electrocardiogram (ECG) signals are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be mixed with various noises, such as baseline wander, power line interference, and electromagnetic interference, during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has proven to be an effective tool for discarding noise from corrupted signals. A new compromising threshold function, a sigmoid function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages for the noise reduction of ECG signals. It overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising is shown to be more efficient than existing algorithms in ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the waves of the ECG signals after denoising, including the P, Q, R, and S waves, coincide with those of the original ECG signals when employing the newly proposed method.
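
    The sketch below contrasts hard and soft thresholding with one plausible sigmoid-based compromise; the exact function of the paper may differ, but this form exhibits the two properties discussed, continuity at ±T and shrinkage that decays for large coefficients, which removes the fixed bias of soft thresholding.

    ```python
    import numpy as np

    def hard_thresh(w, T):
        """Hard thresholding: keep |w| > T unchanged (discontinuous at +/-T)."""
        return w * (np.abs(w) > T)

    def soft_thresh(w, T):
        """Soft thresholding: continuous, but biased by T for large |w|."""
        return np.sign(w) * np.maximum(np.abs(w) - T, 0.0)

    def sigmoid_thresh(w, T, alpha=2.0):
        """One sigmoid-based compromise (illustrative form, not necessarily the
        paper's): matches soft thresholding at |w| = T, so it is continuous,
        while the shrinkage 2T/(1 + exp(alpha*(|w|-T))) vanishes as |w| grows."""
        shrink = 2.0 * T / (1.0 + np.exp(alpha * (np.abs(w) - T)))
        return np.where(np.abs(w) > T, np.sign(w) * (np.abs(w) - shrink), 0.0)
    ```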

  16. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    NASA Astrophysics Data System (ADS)

    Huang, Yu

    Solar energy has become one of the major renewable energy options owing to its huge abundance and accessibility. Due to its intermittent nature, there is a high demand for Maximum Power Point Tracking (MPPT) techniques when a Photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at relatively practical circumstances. First, a practical PV system model is studied, determining the series and shunt resistances that are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed variable, deploying input impedance conversion to achieve operating voltage adjustment. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy, and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low insolation conditions, and continuous insolation variation.
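
    A minimal sketch of one adaptive-step perturb-and-observe iteration with the duty ratio of a boost converter as the perturbed variable, as in the thesis; the |dP/dV| scaling, the gain, and the step bounds are assumptions of ours, and the sign convention depends on the converter topology.

    ```python
    def po_step(v, i, state, k=0.02, d_min=0.005, d_max=0.05):
        """One adaptive-step P&O iteration (illustrative sketch). The duty-ratio
        step scales with |dP/dV|: large on the steep flanks of the P-V curve,
        small and bounded near the maximum power point."""
        p = v * i
        dp, dv = p - state["p"], v - state["v"]
        step = min(max(k * abs(dp / dv), d_min), d_max) if dv else d_min
        # hill climbing: move the operating voltage in the direction that
        # raised power; for a boost converter a larger duty ratio lowers the
        # PV-side voltage, hence the minus sign below
        v_dir = 1 if (dp > 0) == (dv > 0) else -1
        state["duty"] = min(max(state["duty"] - v_dir * step, 0.0), 1.0)
        state["p"], state["v"] = p, v
        return state["duty"]

    # usage: state = {"p": 0.0, "v": 0.0, "duty": 0.5}; call once per sampling period
    ```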

  17. An adaptive compensation algorithm for temperature drift of micro-electro-mechanical systems gyroscopes using a strong tracking Kalman filter.

    PubMed

    Feng, Yibo; Li, Xisheng; Zhang, Xiaojuan

    2015-05-13

    We present an adaptive algorithm for a system integrating micro-electro-mechanical systems (MEMS) gyroscopes and a compass to eliminate the influence of the environment, compensate for the temperature drift precisely, and improve the accuracy of the MEMS gyroscope. We use a simplified drift model with changing but appropriate model parameters to implement this algorithm. The model of MEMS gyroscope temperature drift is constructed mostly on the basis of the temperature sensitivity of the gyroscope. As the state variables of a strong tracking Kalman filter (STKF), the parameters of the temperature drift model can be calculated to adapt to the environment with the support of the compass. These parameters change intelligently with the environment to maintain the precision of the MEMS gyroscope under changing temperatures. The heading error is less than 0.6° in the static temperature experiment, and is kept in the range from -2° to 5° in the dynamic outdoor experiment. This demonstrates that the proposed algorithm exhibits strong adaptability to changing temperatures and performs significantly better than KF and MLR at compensating for the temperature drift of a gyroscope and eliminating the influence of temperature variation.

  18. A collaborative approach to developing an electronic health record phenotyping algorithm for drug-induced liver injury

    PubMed Central

    Overby, Casey Lynnette; Pathak, Jyotishman; Gottesman, Omri; Haerian, Krystl; Perotte, Adler; Murphy, Sean; Bruce, Kevin; Johnson, Stephanie; Talwalkar, Jayant; Shen, Yufeng; Ellis, Steve; Kullo, Iftikhar; Chute, Christopher; Friedman, Carol; Bottinger, Erwin; Hripcsak, George; Weng, Chunhua

    2013-01-01

    Objective To describe a collaborative approach for developing an electronic health record (EHR) phenotyping algorithm for drug-induced liver injury (DILI). Methods We analyzed types and causes of differences in DILI case definitions provided by two institutions—Columbia University and Mayo Clinic; harmonized two EHR phenotyping algorithms; and assessed the performance, measured by sensitivity, specificity, positive predictive value, and negative predictive value, of the resulting algorithm at three institutions except that sensitivity was measured only at Columbia University. Results Although these sites had the same case definition, their phenotyping methods differed by selection of liver injury diagnoses, inclusion of drugs cited in DILI cases, laboratory tests assessed, laboratory thresholds for liver injury, exclusion criteria, and approaches to validating phenotypes. We reached consensus on a DILI phenotyping algorithm and implemented it at three institutions. The algorithm was adapted locally to account for differences in populations and data access. Implementations collectively yielded 117 algorithm-selected cases and 23 confirmed true positive cases. Discussion Phenotyping for rare conditions benefits significantly from pooling data across institutions. Despite the heterogeneity of EHRs and varied algorithm implementations, we demonstrated the portability of this algorithm across three institutions. The performance of this algorithm for identifying DILI was comparable with other computerized approaches to identify adverse drug events. Conclusions Phenotyping algorithms developed for rare and complex conditions are likely to require adaptive implementation at multiple institutions. Better approaches are also needed to share algorithms. Early agreement on goals, data sources, and validation methods may improve the portability of the algorithms. PMID:23837993

  19. An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures

    PubMed Central

    Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf

    2016-01-01

    Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements, and an important element named the pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences designed by Enzymer with the results obtained from the state-of-the-art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally, by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer. PMID:27499762

  20. [A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].

    PubMed

    Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong

    2011-10-01

    Due to the problem that the convex cone analysis (CCA) method can only extract a limited number of endmembers from multispectral imagery, this paper proposes a new endmember extraction method using spatially adaptive spectral feature analysis of multispectral remote sensing images, based on spatial clustering and imagery slicing. First, in order to remove spatial and spectral redundancies, the principal component analysis (PCA) algorithm is used to lower the dimensionality of the multispectral data. Second, the iterative self-organizing data analysis technique algorithm (ISODATA) is used for image clustering based on the spectral similarity of pixels. Then, through post-clustering processing and the merging of small clusters, the whole image is divided into several blocks (tiles). Lastly, according to the complexity of each image block's landscape and analysis of the scatter diagrams, the number of endmembers can be determined, and the hourglass algorithm is then used to extract them. An endmember extraction experiment on TM multispectral imagery showed that the method can extract endmember spectra from multispectral imagery effectively. Moreover, the method resolves the limitation on the number of endmembers and improves the accuracy of endmember extraction. The method provides a new way to extract endmembers from multispectral images.

  1. SASS: A symmetry adapted stochastic search algorithm exploiting site symmetry

    NASA Astrophysics Data System (ADS)

    Wheeler, Steven E.; Schleyer, Paul v. R.; Schaefer, Henry F.

    2007-03-01

    A simple symmetry adapted search algorithm (SASS) exploiting point group symmetry increases the efficiency of systematic explorations of complex quantum mechanical potential energy surfaces. In contrast to previously described stochastic approaches, which do not employ symmetry, candidate structures are generated within simple point groups, such as C2, Cs, and C2v. This facilitates efficient sampling of the (3N-6)-dimensional configuration space and increases the speed and effectiveness of quantum chemical geometry optimizations. Pople's concept of framework groups [J. Am. Chem. Soc. 102, 4615 (1980)] is used to partition the configuration space into structures spanning all possible distributions of sets of symmetry equivalent atoms. This provides an efficient means of computing all structures of a given symmetry with minimum redundancy. This approach also is advantageous for generating initial structures for global optimizations via genetic algorithm and other stochastic global search techniques. Application of the SASS method is illustrated by locating 14 low-lying stationary points on the cc-pwCVDZ ROCCSD(T) potential energy surface of Li5H2. The global minimum structure is identified, along with many unique, nonintuitive, energetically favorable isomers.

  2. Adaptive twisting sliding mode algorithm for hypersonic reentry vehicle attitude control based on finite-time observer.

    PubMed

    Guo, Zongyi; Chang, Jing; Guo, Jianguo; Zhou, Jun

    2018-06-01

    This paper focuses on adaptive twisting sliding mode control for the attitude tracking of Hypersonic Reentry Vehicles (HRVs). The HRV attitude tracking model is transformed into error dynamics in a matched structure, whereas an unmeasurable state is redefined by lumping the existing unmatched disturbance with the angular rate. Hence, an adaptive finite-time observer is used to estimate the unknown state. Then, an adaptive twisting algorithm is proposed for systems subject to disturbances with unknown bounds. The stability of the proposed observer-based adaptive twisting approach is guaranteed, and the case of noisy measurement is analyzed. Also, the developed control law avoids the aggressive chattering phenomenon of existing adaptive twisting approaches because the adaptive gains decrease close to the disturbance once the trajectories reach the sliding surface. Finally, numerical simulations of the attitude control of the HRV are conducted to verify the effectiveness and benefit of the proposed approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.

    PubMed

    Vikhe, P S; Thool, V R

    2016-04-01

    Detection of masses in mammograms for early diagnosis of breast cancer is a significant task in the reduction of the mortality rate. However, in some cases, screening for masses is difficult for radiologists, due to variations in contrast, fuzzy edges and noisy mammograms. Masses and micro-calcifications are the distinctive signs for the diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation for detection using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images with 90.9 and 91 % True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I) from two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique gives improved diagnosis for early breast cancer detection.

  4. The absolute threshold of cone vision

    PubMed Central

    Koeing, Darran; Hofer, Heidi

    2013-01-01

    We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115

  5. Image segmentation for uranium isotopic analysis by SIMS: Combined adaptive thresholding and marker controlled watershed approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willingham, David G.; Naes, Benjamin E.; Heasler, Patrick G.

    A novel approach to particle identification and particle isotope ratio determination has been developed for nuclear safeguard applications. This particle search approach combines an adaptive thresholding algorithm and marker-controlled watershed segmentation (MCWS) transform, which improves the secondary ion mass spectrometry (SIMS) isotopic analysis of uranium containing particle populations for nuclear safeguards applications. The Niblack assisted MCWS approach (a.k.a. SEEKER) developed for this work has improved the identification of isotopically unique uranium particles under conditions that have historically presented significant challenges for SIMS image data processing techniques. Particles obtained from five NIST uranium certified reference materials (CRM U129A, U015, U150, U500 and U850) were successfully identified in regions of SIMS image data 1) where a high variability in image intensity existed, 2) where particles were touching or were in close proximity to one another and/or 3) where the magnitude of ion signal for a given region was count limited. Analysis of the isotopic distributions of uranium containing particles identified by SEEKER showed four distinct, accurately identified 235U enrichment distributions, corresponding to the NIST certified 235U/238U isotope ratios for CRM U129A/U015 (not statistically differentiated), U150, U500 and U850. Additionally, comparison of the minor uranium isotope (234U, 235U and 236U) atom percent values verified that, even in the absence of high precision isotope ratio measurements, SEEKER could be used to segment isotopically unique uranium particles from SIMS image data. Although demonstrated specifically for SIMS analysis of uranium containing particles for nuclear safeguards, SEEKER has application in addressing a broad set of image processing challenges.

  6. Is there a minimum intensity threshold for resistance training-induced hypertrophic adaptations?

    PubMed

    Schoenfeld, Brad J

    2013-12-01

    In humans, regimented resistance training has been shown to promote substantial increases in skeletal muscle mass. With respect to traditional resistance training methods, the prevailing opinion is that an intensity of greater than ~60 % of 1 repetition maximum (RM) is necessary to elicit significant increases in muscular size. It has been surmised that this is the minimum threshold required to activate the complete spectrum of fiber types, particularly those associated with the largest motor units. There is emerging evidence, however, that low-intensity resistance training performed with blood flow restriction (BFR) can promote marked increases in muscle hypertrophy, in many cases equal to that of traditional high-intensity exercise. The anabolic effects of such occlusion-based training have been attributed to increased levels of metabolic stress that mediate hypertrophy at least in part by enhancing recruitment of high-threshold motor units. Recently, several researchers have put forth the theory that low-intensity exercise (≤50 % 1RM) performed without BFR can promote increases in muscle size equal, or perhaps even superior, to that at higher intensities, provided training is carried out to volitional muscular failure. Proponents of the theory postulate that fatiguing contractions at light loads is simply a milder form of BFR and thus ultimately results in maximal muscle fiber recruitment. Current research indicates that low-load exercise can indeed promote increases in muscle growth in untrained subjects, and that these gains may be functionally, metabolically, and/or aesthetically meaningful. However, whether hypertrophic adaptations can equal that achieved with higher intensity resistance exercise (>60 % 1RM) remains to be determined. Furthermore, it is not clear as to what, if any, hypertrophic effects are seen with low-intensity exercise in well-trained subjects as experimental studies on the topic in this population are lacking. Practical

  7. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography.

    PubMed

    Treiber, O; Wanninger, F; Führ, H; Panzer, W; Regulla, D; Winkler, G

    2003-02-21

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  8. An adaptive algorithm for the detection of microcalcifications in simulated low-dose mammography

    NASA Astrophysics Data System (ADS)

    Treiber, O.; Wanninger, F.; Führ, H.; Panzer, W.; Regulla, D.; Winkler, G.

    2003-02-01

    This paper uses the task of microcalcification detection as a benchmark problem to assess the potential for dose reduction in x-ray mammography. We present the results of a newly developed algorithm for detection of microcalcifications as a case study for a typical commercial film-screen system (Kodak Min-R 2000/2190). The first part of the paper deals with the simulation of dose reduction for film-screen mammography based on a physical model of the imaging process. Use of a more sensitive film-screen system is expected to result in additional smoothing of the image. We introduce two different models of that behaviour, called moderate and strong smoothing. We then present an adaptive, model-based microcalcification detection algorithm. Comparing detection results with ground-truth images obtained under the supervision of an expert radiologist allows us to establish the soundness of the detection algorithm. We measure the performance on the dose-reduced images in order to assess the loss of information due to dose reduction. It turns out that the smoothing behaviour has a strong influence on detection rates. For moderate smoothing, a dose reduction by 25% has no serious influence on the detection results, whereas a dose reduction by 50% already entails a marked deterioration of the performance. Strong smoothing generally leads to an unacceptable loss of image quality. The test results emphasize the impact of the more sensitive film-screen system and its characteristics on the problem of assessing the potential for dose reduction in film-screen mammography. The general approach presented in the paper can be adapted to fully digital mammography.

  9. Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach

    PubMed Central

    Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei

    2016-01-01

    Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on “one-time” release of a static dataset and do not adequately address the increasing need of releasing series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
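
    A simplified sketch of the fixed-threshold variant (DSFT) as described above: a Laplace-noised histogram is published only when the current snapshot is far enough from the last released one, otherwise the previous release is reused. The L1 distance, unit sensitivity, and the use of the raw (rather than a noisy) distance test are simplifying assumptions of ours.

    ```python
    import numpy as np

    def dsft_release(snapshots, eps_per_release, dist_thresh):
        """Sketch of DSFT: publish a fresh Laplace-noised histogram only when
        the current snapshot differs from the last released snapshot by more
        than `dist_thresh` (L1 distance, sensitivity assumed 1); otherwise
        re-publish the previous release and spend no additional budget."""
        releases, last_raw, noisy = [], None, None
        for h in snapshots:
            if last_raw is None or np.abs(h - last_raw).sum() > dist_thresh:
                noisy = h + np.random.laplace(0.0, 1.0 / eps_per_release, size=h.shape)
                last_raw = h
            releases.append(noisy)
        return releases
    ```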

  10. Adaptive Fault Detection on Liquid Propulsion Systems with Virtual Sensors: Algorithms and Architectures

    NASA Technical Reports Server (NTRS)

    Matthews, Bryan L.; Srivastava, Ashok N.

    2010-01-01

    Prior to the launch of STS-119 NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm and discusses results on a comprehensive set of Shuttle missions and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
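
    The abstract describes an ensemble whose prediction spread serves as the uncertainty estimate; the toy sketch below mirrors that structure with bootstrap-trained regressors and a residual test. The model family, ensemble size, and fault rule are assumptions of ours, not NASA's implementation.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def fit_virtual_sensor(X, y, n_models=10, seed=0):
        """Toy virtual sensor: regressors trained on bootstrap resamples
        estimate one sensor reading from correlated sensors and control
        signals."""
        rng = np.random.default_rng(seed)
        models = []
        for _ in range(n_models):
            idx = rng.integers(0, len(X), len(X))
            models.append(GradientBoostingRegressor().fit(X[idx], y[idx]))
        return models

    def flag_faults(models, X, y, k=4.0):
        """Flag samples where the measured value departs from the ensemble
        mean by more than k ensemble standard deviations (the uncertainty
        estimate)."""
        preds = np.stack([m.predict(X) for m in models])  # (n_models, n_samples)
        mu, sigma = preds.mean(axis=0), preds.std(axis=0) + 1e-9
        return np.abs(y - mu) > k * sigma
    ```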

  11. Adaptive threshold hunting for the effects of transcranial direct current stimulation on primary motor cortex inhibition.

    PubMed

    Mooney, Ronan A; Cirillo, John; Byblow, Winston D

    2018-06-01

    Primary motor cortex excitability can be modulated by anodal and cathodal transcranial direct current stimulation (tDCS). These neuromodulatory effects may, in part, be dependent on modulation within gamma-aminobutyric acid (GABA)-mediated inhibitory networks. GABAergic function can be quantified non-invasively using adaptive threshold hunting paired-pulse transcranial magnetic stimulation (TMS). Previous studies have used TMS with posterior-anterior (PA) induced current to assess tDCS effects on inhibition. However, TMS with anterior-posterior (AP) induced current in the brain provides a more robust measure of GABA-mediated inhibition. The aim of the present study was to assess the modulation of corticomotor excitability and inhibition after anodal and cathodal tDCS using TMS with PA- and AP-induced current. In 16 young adults (26 ± 1 years), we investigated the response to anodal, cathodal, and sham tDCS in a repeated-measures double-blinded crossover design. Adaptive threshold hunting paired-pulse TMS with PA- and AP-induced current was used to examine separate interneuronal populations within M1 and their influence on corticomotor excitability and short- and long-interval inhibition (SICI and LICI) for up to 60 min after tDCS. Unexpectedly, cathodal tDCS increased corticomotor excitability assessed with AP (P = 0.047) but not PA stimulation (P = 0.74). SICI AP was reduced after anodal tDCS compared with sham (P = 0.040). Pearson's correlations indicated that SICI AP and LICI AP modulation was associated with corticomotor excitability after anodal (P = 0.027) and cathodal tDCS (P = 0.042). The after-effects of tDCS on corticomotor excitability may depend on the direction of the TMS-induced current used to make assessments, and on modulation within GABA-mediated inhibitory circuits.

  12. Adaptive phase extraction: incorporating the Gabor transform in the matching pursuit algorithm.

    PubMed

    Wacker, Matthias; Witte, Herbert

    2011-10-01

    Short-time Fourier transform (STFT), Gabor transform (GT), wavelet transform (WT), and the Wigner-Ville distribution (WVD) are just some examples of time-frequency analysis methods which are frequently applied in biomedical signal analysis. However, all of these methods have their individual drawbacks. The STFT, GT, and WT have a time-frequency resolution that is determined by algorithm parameters and the WVD is contaminated by cross terms. In 1993, Mallat and Zhang introduced the matching pursuit (MP) algorithm that decomposes a signal into a sum of atoms and uses a cross-term free pseudo-WVD to generate a data-adaptive power distribution in the time-frequency space. Thus, it solved some of the problems of the GT and WT but lacks phase information that is crucial e.g., for synchronization analysis. We introduce a new time-frequency analysis method that combines the MP with a pseudo-GT. Therefore, the signal is decomposed into a set of Gabor atoms. Afterward, each atom is analyzed with a Gabor analysis, where the time-domain gaussian window of the analysis matches that of the specific atom envelope. A superposition of the single time-frequency planes gives the final result. This is the first time that a complete analysis of the complex time-frequency plane can be performed in a fully data-adaptive and frequency-selective manner. We demonstrate the capabilities of our approach on a simulation and on real-life magnetoencephalogram data.

  13. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    NASA Astrophysics Data System (ADS)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    2017-08-01

    Molecular dynamics simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules, but they are limited by the time scale barrier. That is, we may not obtain properties efficiently because we need to run microseconds or longer simulations using femtosecond time steps. To overcome this time scale barrier, we can use the weighted ensemble (WE) method, a powerful enhanced sampling method that efficiently samples thermodynamic and kinetic properties. However, the WE method requires an appropriate partitioning of phase space into discrete macrostates, which can be problematic when we have a high-dimensional collective space or when little is known a priori about the molecular system. Hence, we developed a new WE-based method, called the "Concurrent Adaptive Sampling (CAS) algorithm," to tackle these issues. The CAS algorithm is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and adaptive macrostates to enhance the sampling in the high-dimensional space. This is especially useful for systems in which we do not know what the right reaction coordinates are, in which case we can use many collective variables to sample conformations and pathways. In addition, a clustering technique based on the committor function is used to accelerate sampling the slowest process in the molecular system. In this paper, we introduce the new method and show results from two-dimensional models and bio-molecules, specifically penta-alanine and a triazine trimer.

  14. Text grouping in patent analysis using adaptive K-means clustering algorithm

    NASA Astrophysics Data System (ADS)

    Shanie, Tiara; Suprijadi, Jadi; Zulhanif

    2017-03-01

    Patents are a form of intellectual property. Patent analysis is needed to understand the development of technology in each country and worldwide. This study uses patent documents about green tea retrieved from the Espacenet server. Patent documents related to tea technology are numerous, which makes information retrieval (IR) difficult for users. It is therefore necessary to categorize the documents into groups according to the related terms they contain. This study applies statistical text mining to green tea patent title data in two phases: a data preparation stage based on text mining methods and a data analysis stage based on statistics. The statistical analysis uses a cluster analysis algorithm, the Adaptive K-Means Clustering Algorithm. Results from this study show that, based on the maximum silhouette value, the titles form 87 clusters associated with fifteen terms that can be utilized for information retrieval.
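
    The clustering step can be sketched with scikit-learn: run K-means over a range of k and keep the solution with the maximum mean silhouette value, which is how the abstract reports arriving at 87 clusters. The vectorization details below (TF-IDF on titles) are an assumption for illustration, not necessarily the study's preprocessing.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        def cluster_titles(titles, k_range=range(2, 10)):
            """Pick k by maximum mean silhouette, as in the adaptive K-means step."""
            X = TfidfVectorizer(stop_words="english").fit_transform(titles)
            best = None
            for k in k_range:
                labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
                score = silhouette_score(X, labels)
                if best is None or score > best[0]:
                    best = (score, k, labels)
            return best  # (silhouette, k, labels)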

  15. Quick fuzzy backpropagation algorithm.

    PubMed

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and FBP algorithms are defined and proved for: (1) single output neural networks in the case of training patterns with different targets; and (2) multiple output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared with the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in areas such as adaptive and adaptable interactive systems and data mining.

  16. Adaptive threshold determination for efficient channel sensing in cognitive radio network using mobile sensors

    NASA Astrophysics Data System (ADS)

    Morshed, M. N.; Khatun, S.; Kamarudin, L. M.; Aljunid, S. A.; Ahmad, R. B.; Zakaria, A.; Fakir, M. M.

    2017-03-01

    Spectrum saturation is a major issue in wireless communication systems all over the world. Huge numbers of users join the existing fixed frequency bands each day, but the bandwidth is not increasing. This demands efficient and intelligent use of spectrum, and Cognitive Radio (CR) is the best candidate to provide it. Spectrum sensing of a wireless heterogeneous network is fundamental to detecting the presence of primary users' signals in CR networks. In order to protect primary users (PUs) from harmful interference, the spectrum sensing scheme is required to perform well even in low signal-to-noise ratio (SNR) environments. Meanwhile, the sensing period is usually required to be short enough so that secondary (unlicensed) users (SUs) can fully utilize the available spectrum. CR networks can be designed to manage the radio spectrum more efficiently by utilizing the spectrum holes in the primary users' licensed frequency bands. In this paper, we propose an adaptive threshold detection method to detect the presence of a PU signal using the free space path loss (FSPL) model in a 2.4 GHz WLAN network. The model is designed for mobile sensors embedded in smartphones. The mobile sensors act as SUs while the existing WLAN network (channels) works as the PU. The theoretical results show that the desired threshold detection range of the mobile sensors mainly depends on the noise floor level of the location in consideration.
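
    A minimal sketch of the idea, assuming the standard free-space path loss formula and a hypothetical safety margin: the threshold for declaring the PU present is tied to the expected received power at a given distance and to the local noise floor, which the abstract identifies as the dominant factor. Function names and the margin are assumptions for illustration.

        import math

        def fspl_db(distance_m, freq_hz):
            """Free-space path loss in dB: 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
            c = 3.0e8
            return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
                    + 20 * math.log10(4 * math.pi / c))

        def detection_threshold_dbm(tx_power_dbm, distance_m, noise_floor_dbm,
                                    freq_hz=2.4e9, margin_db=3.0):
            """Declare the PU present if the received power clears this threshold:
            the expected power at this distance, but never closer than a margin
            above the local noise floor."""
            expected_rx = tx_power_dbm - fspl_db(distance_m, freq_hz)
            return max(expected_rx - margin_db, noise_floor_dbm + margin_db)

        # e.g. a 20 dBm WLAN access point sensed 30 m away over a -95 dBm noise floor
        thr = detection_threshold_dbm(20.0, 30.0, -95.0)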

  17. Typical performance of approximation algorithms for NP-hard problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-11-01

    Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
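
    Of the three algorithms, leaf removal is simple enough to sketch directly: repeatedly take a degree-one vertex, place its unique neighbour in the cover, and delete both. Below the typical-performance threshold the leftover "core" is empty with high probability and the cover found is optimal. The dictionary-based graph representation is an illustrative choice.

        def leaf_removal_cover(adj):
            """Leaf-removal heuristic for minimum vertex cover.

            adj: dict mapping vertex -> set of neighbours (modified in place).
            Returns (cover, core), where `core` is the leafless remainder on
            which the algorithm gets stuck (empty below the threshold).
            """
            cover = set()
            leaves = [v for v, nb in adj.items() if len(nb) == 1]
            while leaves:
                v = leaves.pop()
                if v not in adj or len(adj[v]) != 1:
                    continue  # stale entry: v was removed or its degree changed
                (u,) = adj[v]
                cover.add(u)
                for w in adj.pop(u):          # delete u and all its edges
                    if w != v:
                        adj[w].discard(u)
                        if len(adj[w]) == 1:
                            leaves.append(w)
                del adj[v]
            core = {v for v, nb in adj.items() if nb}
            return cover, core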

  18. Cool, warm, and heat-pain detection thresholds: testing methods and inferences about anatomic distribution of receptors.

    PubMed

    Dyck, P J; Zimmerman, I; Gillen, D A; Johnson, D; Karnes, J L; O'Brien, P C

    1993-08-01

    We recently found that vibratory detection threshold is greatly influenced by the algorithm of testing. Here, we study the influence of stimulus characteristics and the algorithm of testing and estimating threshold on cool (CDT), warm (WDT), and heat-pain (HPDT) detection thresholds. We show that continuously decreasing (for CDT) or increasing (for WDT) thermode temperature to the point at which cooling or warming is perceived and signaled by depressing a response key ("appearance" threshold) overestimates threshold at rapid rates of thermal change. The mean of the appearance and disappearance thresholds also does not perform well for insensitive sites and patients. Pyramidal (or flat-topped pyramidal) stimuli ranging in magnitude, in 25 steps, from near skin temperature to 9 °C for 10 seconds (for CDT), from near skin temperature to 45 °C for 10 seconds (for WDT), and from near skin temperature to 49 °C for 10 seconds (for HPDT) provide ideal stimuli for use in several algorithms of testing and estimating threshold. Near threshold, only the initial direction of thermal change from skin temperature is perceived, and not its return to baseline. Use of steps of stimulus intensity allows the subject or patient to take the needed time to decide whether the stimulus was felt or not (in 4, 2, and 1 stepping algorithms), or whether it occurred in stimulus interval 1 or 2 (in two-alternative forced-choice testing). Thermal thresholds were generally significantly lower with a large (10 cm²) than with a small (2.7 cm²) thermode. (ABSTRACT TRUNCATED AT 250 WORDS)
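
    The 4-2-1 stepping idea mentioned above can be sketched as a staircase that halves its step after each response reversal. This is a generic illustration with hypothetical parameters and a hypothetical respond() callback, not the study's exact protocol.

        def stepping_threshold(respond, levels, start=12, steps=(4, 2, 1), max_trials=60):
            """4-2-1 stepping estimate on a graded stimulus ladder.

            respond(level_value) -> True if the subject felt the stimulus.
            Step down after 'felt', up after 'not felt'; shrink the step at each
            response reversal and average the final reversal levels.
            """
            idx, step_i = start, 0
            last = respond(levels[idx])
            reversals = []
            for _ in range(max_trials):
                idx += -steps[step_i] if last else steps[step_i]
                idx = max(0, min(len(levels) - 1, idx))
                felt = respond(levels[idx])
                if felt != last:                       # a reversal: shrink the step
                    reversals.append(levels[idx])
                    if step_i + 1 < len(steps):
                        step_i += 1
                    elif len(reversals) >= 4:          # enough unit-step reversals
                        return sum(reversals[-4:]) / 4
                last = felt
            return levels[idx]                         # fallback if no stable estimate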

  19. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
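
    A minimal generational GA, with tournament selection, one-point crossover, and bitwise mutation, illustrates the basic concepts the overview introduces; all parameter values here are arbitrary illustrative choices.

        import random

        def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                              p_crossover=0.9, p_mutation=0.01):
            """Minimal generational GA over bit strings with tournament selection."""
            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(generations):
                scored = [(fitness(ind), ind) for ind in pop]
                def select():  # tournament of two: survival of the fitter
                    return max(random.sample(scored, 2))[1]
                nxt = []
                while len(nxt) < pop_size:
                    a, b = select()[:], select()[:]
                    if random.random() < p_crossover:          # one-point crossover
                        cut = random.randrange(1, n_bits)
                        a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
                    nxt += [[bit ^ (random.random() < p_mutation) for bit in ind]
                            for ind in (a, b)]                 # bitwise mutation
                pop = nxt[:pop_size]
            return max((fitness(ind), ind) for ind in pop)[1]

        # maximize the number of ones ("one-max") as a toy fitness function
        best = genetic_algorithm(sum)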

  20. An Adaptive Impedance Matching Network with Closed Loop Control Algorithm for Inductive Wireless Power Transfer

    PubMed Central

    Miao, Zhidong; Liu, Dake

    2017-01-01

    For an inductive wireless power transfer (IWPT) system, maintaining a reasonable power transfer efficiency and a stable output power are the two most challenging design issues, especially when the coil distance varies. To solve these issues, this paper presents a novel adaptive impedance matching network (IMN) for IWPT systems. In our adaptive IMN IWPT system, the IMN is automatically reconfigured to keep matching with the coils and to adjust the output power in response to coil distance variation. A closed loop control algorithm is used to change the capacitors continually, which can compensate mismatches and adjust output power simultaneously. The proposed adaptive IMN IWPT system operates at 125 kHz, delivering 2 W to the load. Compared with the series resonant IWPT system and the fixed IMN IWPT system, the power transfer efficiency of our system increases by up to 31.79% and 60%, respectively, when the coupling coefficient varies in a large range from 0.05 to 0.8 for 2 W output power. PMID:28763011

  1. An Adaptive Impedance Matching Network with Closed Loop Control Algorithm for Inductive Wireless Power Transfer.

    PubMed

    Miao, Zhidong; Liu, Dake; Gong, Chen

    2017-08-01

    For an inductive wireless power transfer (IWPT) system, maintaining a reasonable power transfer efficiency and a stable output power are the two most challenging design issues, especially when the coil distance varies. To solve these issues, this paper presents a novel adaptive impedance matching network (IMN) for IWPT systems. In our adaptive IMN IWPT system, the IMN is automatically reconfigured to keep matching with the coils and to adjust the output power in response to coil distance variation. A closed loop control algorithm is used to change the capacitors continually, which can compensate mismatches and adjust output power simultaneously. The proposed adaptive IMN IWPT system operates at 125 kHz, delivering 2 W to the load. Compared with the series resonant IWPT system and the fixed IMN IWPT system, the power transfer efficiency of our system increases by up to 31.79% and 60%, respectively, when the coupling coefficient varies in a large range from 0.05 to 0.8 for 2 W output power.

  2. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors.

    PubMed

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-03-28

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA.

  3. An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors

    PubMed Central

    Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel

    2016-01-01

    Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA. PMID:27043559
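
    The core of the energy-aware extension, stretching the sampling interval as stored and harvested energy drop, can be sketched as below. The scaling law and all parameter names are assumptions for illustration; the paper's actual adaptation rule may differ.

        def next_sampling_interval(base_interval_s, battery_level, harvest_rate_w,
                                   consume_rate_w, min_level=0.2, max_slowdown=8.0):
            """Stretch the sampling interval when energy is scarce.

            battery_level: state of charge in [0, 1]. Returns the interval in
            seconds before the next (power-hungry) sensor reading.
            """
            net = harvest_rate_w - consume_rate_w
            slowdown = 1.0
            if battery_level < min_level:
                slowdown *= min_level / max(battery_level, 1e-3)   # stretch near empty
            if net < 0:
                slowdown *= 1.0 + min(-net / consume_rate_w, 1.0)  # stretch on deficit
            return base_interval_s * min(slowdown, max_slowdown)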

  4. Performance evaluation of GPU parallelization, space-time adaptive algorithms, and their combination for simulating cardiac electrophysiology.

    PubMed

    Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo

    2018-02-01

    The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, two different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU; multicore, GPU, and space adaptivity; multicore, GPU, space adaptivity, and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, 3D simulations on a ventricular mouse mesh (i.e., complex geometry), and sinus-rhythm and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy.

  5. The Limits to Adaptation: A Systems Approach

    EPA Science Inventory

    The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering parameters), resource constraints (expressed th...

  6. An adaptive Kalman filter technique for context-aware heart rate monitoring.

    PubMed

    Xu, Min; Goldfain, Albert; Dellostritto, Jim; Iyengar, Satish

    2012-01-01

    Traditional physiological monitoring systems convert a person's vital sign waveforms, such as heart rate, respiration rate and blood pressure, into meaningful information by comparing the instant reading with a preset threshold or a baseline, without considering the person's contextual information. It would be beneficial to incorporate contextual data, such as the person's activity status, with the physiological data in order to obtain a more accurate representation of a person's physiological status. In this paper, we propose an algorithm based on an adaptive Kalman filter that describes the heart rate response with respect to different activity levels. This is a step toward our final goal of intelligently detecting abnormalities in a person's vital signs. Experimental results are provided to demonstrate the feasibility of the algorithm.
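
    A scalar Kalman filter whose process noise grows with accelerometer-derived activity level captures the gist of the approach: during exercise the filter trusts fresh measurements more and tracks rapid heart-rate changes faster. All parameter values below are illustrative assumptions, not the paper's tuned model.

        def track_heart_rate(measurements, activity, r=25.0, q_rest=0.5, q_gain=10.0):
            """Scalar Kalman filter for HR with activity-adapted process noise.

            measurements: noisy HR readings (bpm); activity: values in [0, 1]
            derived from an accelerometer. Larger activity => larger Q => the
            filter adapts faster to exercise-driven HR changes.
            """
            x, p = measurements[0], r          # initial state and variance
            estimates = []
            for z, a in zip(measurements, activity):
                q = q_rest + q_gain * a        # context-aware process noise
                p = p + q                      # predict (random-walk HR model)
                k = p / (p + r)                # Kalman gain
                x = x + k * (z - x)            # update with the innovation
                p = (1 - k) * p
                estimates.append(x)
            return estimates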

  7. Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan

    1997-01-01

    A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous FTP a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.

  8. Variable is better than invariable: sparse VSS-NLMS algorithms with application to adaptive MIMO channel estimation.

    PubMed

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms were applied to adaptive sparse channel estimation (ASCE). It is well known that step-size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods can incur estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated for MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, several selected simulation results are shown to prove that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics.

  9. Variable Is Better Than Invariable: Sparse VSS-NLMS Algorithms with Application to Adaptive MIMO Channel Estimation

    PubMed Central

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms were applied to adaptive sparse channel estimation (ASCE). It is well known that step-size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods can incur estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated for MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, several selected simulation results are shown to prove that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286
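
    One common way to realize a sparse VSS-NLMS, shown here only as a hedged illustration of the idea rather than the authors' exact update, is to drive the step size from the smoothed error power and add a zero-attracting penalty to the NLMS update:

        import numpy as np

        def za_vss_nlms(x, d, n_taps, mu_max=1.0, rho=1e-4, eps=1e-8, alpha=0.95):
            """Zero-attracting NLMS with an error-power-driven variable step size.

            x: input sequence, d: desired sequence. The step size shrinks as the
            smoothed error power falls, trading fast initial convergence for low
            steady-state misadjustment; the rho*sign(w) term attracts small taps
            to zero (the sparsity penalty).
            """
            w = np.zeros(n_taps)
            sigma_e = 1.0                      # smoothed error power
            for n in range(n_taps, len(x)):
                u = x[n - n_taps:n][::-1]      # regressor, most recent sample first
                e = d[n] - w @ u
                sigma_e = alpha * sigma_e + (1 - alpha) * e * e
                mu = mu_max * sigma_e / (sigma_e + 1.0)   # variable step in (0, mu_max)
                w += mu * e * u / (u @ u + eps) - rho * np.sign(w)
            return w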

  10. Impairment of retinal increment thresholds in Huntington's disease.

    PubMed

    Paulus, W; Schwarz, G; Werner, A; Lange, H; Bayer, A; Hofschuster, M; Müller, N; Zrenner, E

    1993-10-01

    We have investigated detection thresholds for a foveal blue test light using a Maxwellian view system in 61 normal subjects, 19 patients with Huntington's chorea, 14 patients with Tourette's syndrome, and 20 patients with schizophrenia. Ten measurements were made: The blue test light (1 degree diameter, 500 msec duration) was presented either superimposed on a yellow adaptation field (5 degree diameter) or 500 msec after switching off this field (transient tritanopia effect). In both cases five different background intensities were presented. The only abnormality found was in patients with Huntington's chorea. During adaptation these patients' thresholds are significantly higher than normal (p < 0.005). No change was found in the transient tritanopia effect. Huntington's disease causes degeneration of several different transmitter systems in the brain. Increment threshold testing allows for noninvasive investigation of patients and confirms the involvement of the retina in the degenerative process in Huntington's chorea.

  11. Variable-Threshold Threshold Elements,

    DTIC Science & Technology

    A threshold element is a mathematical model of certain types of logic gates and of a biological neuron. Much work has been done on the subject of threshold elements with fixed thresholds; this study concerns itself with elements in which the threshold may be varied: variable-threshold threshold elements. Physical realizations include resistor-transistor elements, in which the threshold is simply a voltage. Variation of the threshold causes the

  12. An adaptive guidance algorithm for an aerodynamically assisted orbital plane change maneuver. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Blissit, J. A.

    1986-01-01

    Using analysis results from the post trajectory optimization program, an adaptive guidance algorithm is developed to compensate for density, aerodynamic and thrust perturbations during an atmospheric orbital plane change maneuver. The maneuver offers increased mission flexibility along with potential fuel savings for future reentry vehicles. Although designed to guide a proposed NASA Entry Research Vehicle, the algorithm is sufficiently generic for a range of future entry vehicles. The plane change analysis provides insight suggesting a straightforward algorithm based on an optimized nominal command profile. Bank angle, angle of attack, and engine thrust level, ignition and cutoff times are modulated to adjust the vehicle's trajectory to achieve the desired end-conditions. A performance evaluation of the scheme demonstrates a capability to guide to within 0.05 degrees of the desired plane change and five nautical miles of the desired apogee altitude while maintaining heating constraints. The algorithm is tested under off-nominal conditions of ±30% density biases, two density profile models, ±15% aerodynamic uncertainty, and a 33% thrust loss, and for various combinations of these conditions.

  13. Policy Tree Optimization for Adaptive Management of Water Resources Systems

    NASA Astrophysics Data System (ADS)

    Herman, J. D.; Giuliani, M.

    2016-12-01

    Water resources systems must cope with irreducible uncertainty in supply and demand, requiring policy alternatives capable of adapting to a range of possible future scenarios. Recent studies have developed adaptive policies based on "signposts" or "tipping points", which are threshold values of indicator variables that signal a change in policy. However, there remains a need for a general method to optimize the choice of indicators and their threshold values in a way that is easily interpretable for decision makers. Here we propose a conceptual framework and computational algorithm to design adaptive policies as a tree structure (i.e., a hierarchical set of logical rules) using a simulation-optimization approach based on genetic programming. We demonstrate the approach using Folsom Reservoir, California as a case study, in which operating policies must balance the risk of both floods and droughts. Given a set of feature variables, such as reservoir level, inflow observations and forecasts, and time of year, the resulting policy defines the conditions under which flood control and water supply hedging operations should be triggered. Importantly, the tree-based rule sets are easy to interpret for decision making, and can be compared to historical operating policies to understand the adaptations needed under possible climate change scenarios. Several remaining challenges are discussed, including the empirical convergence properties of the method, and extensions to irreversible decisions such as infrastructure. Policy tree optimization, and corresponding open-source software, provide a generalizable, interpretable approach to designing adaptive policies under uncertainty for water resources systems.

  14. Smoothness within ruggedness: the role of neutrality in adaptation.

    PubMed Central

    Huynen, M A; Stadler, P F; Fontana, W

    1996-01-01

    RNA secondary structure folding algorithms predict the existence of connected networks of RNA sequences with identical structure. On such networks, evolving populations split into subpopulations, which diffuse independently in sequence space. This demands a distinction between two mutation thresholds: one at which genotypic information is lost and one at which phenotypic information is lost. In between, diffusion enables the search of vast areas in genotype space while still preserving the dominant phenotype. By this dynamic the success of phenotypic adaptation becomes much less sensitive to the initial conditions in genotype space. PMID:8552647

  15. Enhancing artificial bee colony algorithm with self-adaptive searching strategy and artificial immune network operators for global optimization.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    The artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in a local optimum when handling functions with a narrow curving valley or a high eccentric ellipse, or complex multimodal functions. We therefore propose an enhanced ABC algorithm called EABC, which introduces a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. The simulation results on a suite of unimodal and multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments.

  16. Enhancing Artificial Bee Colony Algorithm with Self-Adaptive Searching Strategy and Artificial Immune Network Operators for Global Optimization

    PubMed Central

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    The artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in a local optimum when handling functions with a narrow curving valley or a high eccentric ellipse, or complex multimodal functions. We therefore propose an enhanced ABC algorithm called EABC, which introduces a self-adaptive searching strategy and artificial immune network operators to improve exploitation and exploration. The simulation results on a suite of unimodal and multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023

  17. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation

    NASA Astrophysics Data System (ADS)

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    Objective. The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations still remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials and chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted to the recorded data from marmosets. Further, a computer simulation confirmed the reliability of the algorithm. Main results. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.

  18. Accurate motor mapping in awake common marmosets using micro-electrocorticographical stimulation and stochastic threshold estimation.

    PubMed

    Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi

    2018-06-01

    The motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes has recently been adopted for animal studies. However, several technical limitations still remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on motor thresholds, which were stochastically estimated from motor evoked potentials and chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted to the recorded data from marmosets. Further, a computer simulation confirmed the reliability of the algorithm. The computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled estimation of the motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
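
    The maximum-likelihood threshold-hunting idea can be sketched as follows: model the probability of evoking an MEP as a logistic function of stimulus intensity, and after each trial stimulate at the current maximum-likelihood threshold estimate. The grid, slope, and starting intensity below are illustrative assumptions, not the study's fitted values.

        import numpy as np

        def ml_threshold_hunt(stimulate, n_trials=20, slope=0.07):
            """Maximum-likelihood threshold hunting (sketch).

            stimulate(intensity) -> True if an MEP was evoked. The response
            probability is modelled as a fixed-slope logistic in (intensity -
            threshold); each trial updates the log-likelihood over a grid of
            candidate thresholds.
            """
            grid = np.arange(20.0, 91.0, 0.5)          # candidate thresholds (% output)
            loglik = np.zeros_like(grid)
            intensity = 45.0                           # starting intensity
            for _ in range(n_trials):
                resp = stimulate(intensity)
                p = 1.0 / (1.0 + np.exp(-slope * (intensity - grid)))
                p = np.clip(p, 1e-9, 1 - 1e-9)
                loglik += np.log(p if resp else 1.0 - p)
                intensity = grid[np.argmax(loglik)]    # stimulate at the ML estimate
            return intensity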

  19. Strategies to overcome photobleaching in algorithm-based adaptive optics for nonlinear in-vivo imaging.

    PubMed

    Caroline Müllenbroich, M; McGhee, Ewan J; Wright, Amanda J; Anderson, Kurt I; Mathieson, Keith

    2014-01-01

    We have developed a nonlinear adaptive optics microscope utilizing a deformable membrane mirror (DMM) and demonstrated its use in compensating for system- and sample-induced aberrations. The optimum shape of the DMM was determined with a random search algorithm optimizing either two-photon fluorescence or second-harmonic signals as merit factors. We present here several strategies to overcome photobleaching issues associated with lengthy optimization routines by adapting the search algorithm and the experimental methodology. Optimizations were performed on extrinsic fluorescent dyes, fluorescent beads loaded into organotypic tissue cultures, and the intrinsic second-harmonic signal of these cultures. We validate the approach of using these preoptimized mirror shapes to compile a robust look-up table that can be applied for imaging over several days and through a variety of tissues. In this way, the photon exposure of the fluorescent cells under investigation is limited to imaging. Using our look-up table approach, we show signal intensity improvement factors ranging from 1.7 to 4.1 in organotypic tissue cultures and freshly excised mouse tissue. Imaging zebrafish in vivo, we demonstrate signal improvement by a factor of 2. This methodology is easily reproducible and could be applied to many photon-starved experiments, for example fluorescence lifetime imaging, or when photobleaching is a concern.
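
    The optimization loop itself is a plain random search over actuator values, sketched below with hypothetical names and ranges; capping the number of merit-function evaluations is one of the levers against photobleaching, alongside the look-up table reuse described in the abstract.

        import random

        def random_search_dmm(measure_signal, n_actuators=37, iters=200, sigma=0.1):
            """Random search over deformable-mirror actuator values.

            measure_signal(shape) -> merit value (e.g. two-photon fluorescence).
            Perturb the current best shape and keep the perturbation only if the
            merit improves; `iters` caps the photon exposure of the sample.
            """
            best = [0.0] * n_actuators
            best_val = measure_signal(best)
            for _ in range(iters):
                trial = [v + random.gauss(0.0, sigma) for v in best]
                trial = [min(1.0, max(-1.0, v)) for v in trial]   # stay in actuator range
                val = measure_signal(trial)
                if val > best_val:
                    best, best_val = trial, val
            return best, best_val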

  20. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Parts of the received codeword and the relevant columns in the parity-check matrix can be punctured to reduce the calculation complexity by adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.

  1. Threshold secret sharing scheme based on phase-shifting interferometry.

    PubMed

    Deng, Xiaopeng; Shi, Zhengang; Wen, Wei

    2016-11-01

    We propose a new method for secret image sharing with a (3,N) threshold scheme based on phase-shifting interferometry. The secret image, which is multiplied by an encryption key in advance, is first encrypted using the Fourier transform. Then, the encoded image is shared into N shadow images based on the recording principle of phase-shifting interferometry. Based on the reconstruction principle of phase-shifting interferometry, any three or more shadow images can retrieve the secret image, while any two or fewer shadow images cannot obtain any information about the secret image. Thus, a (3,N) threshold secret sharing scheme can be implemented. Compared with our previously reported method, the algorithm of this paper is suited not only to binary images but also to gray-scale images. Moreover, the proposed algorithm can obtain a larger threshold value t. Simulation results are presented to demonstrate the feasibility of the proposed method.

  2. Wavelet tree structure based speckle noise removal for optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Yuan, Xin; Liu, Xuan; Liu, Yang

    2018-02-01

    We report a new speckle noise removal algorithm for optical coherence tomography (OCT). Though wavelet-domain thresholding algorithms have demonstrated clear advantages in suppressing noise magnitude and preserving image sharpness in OCT, the wavelet tree structure has not been investigated in previous applications. In this work, we propose an adaptive wavelet thresholding algorithm that exploits the tree structure of wavelet coefficients to remove the speckle noise in OCT images. The threshold for each wavelet band is adaptively selected following a special rule to retain the structure of the image across different wavelet layers. Our results demonstrate that the proposed algorithm outperforms conventional wavelet thresholding, with significant advantages in preserving image features.
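
    A per-band adaptive soft-thresholding pipeline in PyWavelets conveys the flavor of wavelet-domain despeckling. The BayesShrink-style band thresholds below are a standard stand-in; the paper's tree-structure rule for linking coefficients across layers is not reproduced here.

        import numpy as np
        import pywt

        def adaptive_wavelet_denoise(img, wavelet="db4", level=3):
            """Per-band adaptive soft thresholding (BayesShrink-style sketch).

            The noise level is estimated from the finest diagonal band; each
            detail band then gets its own threshold, so coarse structure is
            preserved while speckle-dominated bands are shrunk harder.
            """
            coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # robust noise estimate
            out = [coeffs[0]]
            for cH, cV, cD in coeffs[1:]:
                bands = []
                for band in (cH, cV, cD):
                    sig = np.sqrt(max(band.var() - sigma ** 2, 1e-12))
                    thr = sigma ** 2 / sig                         # per-band threshold
                    bands.append(pywt.threshold(band, thr, mode="soft"))
                out.append(tuple(bands))
            return pywt.waverec2(out, wavelet)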

  3. An adaptive scale factor based MPPT algorithm for changing solar irradiation levels in outer space

    NASA Astrophysics Data System (ADS)

    Kwan, Trevor Hocksun; Wu, Xiaofeng

    2017-03-01

    Maximum power point tracking (MPPT) techniques are popularly used for maximizing the output of solar panels by continuously tracking the maximum power point (MPP) of their P-V curves, which depends on both the panel temperature and the input insolation. Various MPPT algorithms have been studied in the literature, including perturb and observe (P&O), hill climbing, incremental conductance, fuzzy logic control, and neural networks. This paper presents an algorithm which improves the MPP tracking performance by adaptively scaling the DC-DC converter duty cycle. The principle of the proposed algorithm is to detect oscillation by checking the sign (i.e., direction) of the duty cycle perturbation between the current and previous time steps. If the signs differ, an oscillation is clearly present, and the DC-DC converter duty cycle perturbation is subsequently scaled down by a constant factor. By repeating this process, the steady-state oscillations become negligibly small, which allows for a smooth steady-state MPP response. To verify the proposed MPPT algorithm, a simulation involving irradiance levels typically encountered in outer space is conducted. Simulation and experimental results prove that the proposed algorithm is fast and stable in comparison not only to conventional fixed-step counterparts, but also to previous variable step size algorithms.
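
    A sketch of the adaptive-scale P&O loop described above: on each sign flip of the duty-cycle perturbation, which indicates oscillation around the MPP, the step is scaled down by a constant factor. Hardware access is abstracted behind hypothetical read_power/set_duty callbacks, and all parameter values are illustrative.

        def adaptive_step_mppt(read_power, set_duty, duty=0.5, step=0.02,
                               scale=0.5, min_step=1e-4, iters=500):
            """P&O MPPT with oscillation-detecting step scaling.

            When the sign of the duty-cycle perturbation flips between
            consecutive steps, the step is multiplied by `scale`, shrinking
            steady-state ripple while keeping the initial tracking fast.
            """
            set_duty(duty)
            last_power, last_dir = read_power(), +1
            for _ in range(iters):
                duty = min(0.95, max(0.05, duty + last_dir * step))
                set_duty(duty)
                power = read_power()
                direction = last_dir if power > last_power else -last_dir
                if direction != last_dir:                 # sign flip => oscillation
                    step = max(step * scale, min_step)
                last_power, last_dir = power, direction
            return duty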

  4. A star recognition method based on the Adaptive Ant Colony algorithm for star sensors.

    PubMed

    Quan, Wei; Fang, Jiancheng

    2010-01-01

    A new star recognition method based on the Adaptive Ant Colony (AAC) algorithm has been developed to increase the star recognition speed and success rate for star sensors. This method draws circles, with the center of each one being a bright star point and the radius being a special angular distance, and uses the parallel processing ability of the AAC algorithm to calculate the angular distance of any pair of star points in the circle. The angular distance of two star points in the circle is treated as the path of the AAC algorithm, and the path optimization feature of the AAC is employed to search for the optimal (shortest) path in the circle. This optimal path is used to recognize the stellar map and enhance the recognition success rate and speed. The experimental results show that when the position error is about 50″, the identification success rate of this method is 98%, while that of the Delaunay identification method is only 94%. The identification time of this method is up to 50 ms.

  5. An Adaptive Method for Switching between Pedestrian/Car Indoor Positioning Algorithms based on Multilayer Time Sequences

    PubMed Central

    Gu, Zhining; Guo, Wei; Li, Chaoyang; Zhu, Xinyan; Guo, Tao

    2018-01-01

    Pedestrian dead reckoning (PDR) positioning algorithms can be used to obtain a target's location only for movement with step features and not for driving, for which the trilateral Bluetooth indoor positioning method can be used. In this study, to obtain the precise locations of different states (pedestrian/car) using the corresponding positioning algorithms, we propose an adaptive method for switching between the PDR and car indoor positioning algorithms based on multilayer time sequences (MTSs). MTSs, which consider the behavior context, comprise two main aspects: filtering of noisy data in small-scale time sequences and using a state chain to reduce the time delay of algorithm switching in large-scale time sequences. The proposed method can be expected to realize the recognition of stationary, walking, driving, or other states; switch to the correct indoor positioning algorithm; and improve the accuracy of localization compared to using a single positioning algorithm. Our experiments show that the recognition of static, walking, driving, and other states improves by 5.5%, 45.47%, 26.23%, and 21% on average, respectively, compared with a convolutional neural network (CNN) method. The time delay decreases by approximately 0.5–8.5 s for the transition between states and by approximately 24 s for the entire process. PMID:29495503

  6. Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization

    NASA Astrophysics Data System (ADS)

    Li, Li

    2018-03-01

    In order to extract the target from a complex background more quickly and accurately, and to further improve the detection of defects, a method of dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization was proposed. Firstly, the method of single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the formulae of Arimoto entropy dual-threshold selection were calculated recursively to eliminate redundant computation and reduce the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map. The fast search for two optimal thresholds was achieved using the improved bee colony optimization algorithm, accelerating the search markedly. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation, and multi-threshold segmentation using reciprocal gray entropy, the proposed method can segment the target more quickly and accurately, with superior segmentation effect. It proves to be a fast and effective method for image segmentation.
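
    The tent-map chaotic sequence that replaces uniform random draws in the local search phase is one line of arithmetic; a sketch with an illustrative jitter range follows (the mapping of chaotic values onto threshold perturbations is an assumption, not the paper's exact scheme).

        def tent_map_sequence(n, x0=0.37, mu=1.99):
            """Chaotic sequence from the tent map, used to drive local search.

            x_{k+1} = mu * x_k         if x_k < 0.5
                    = mu * (1 - x_k)   otherwise, with mu near 2 for full chaos.
            The values ergodically cover (0, 1), which is often argued to give
            better search diversity than uniform pseudo-random draws.
            """
            seq, x = [], x0
            for _ in range(n):
                x = mu * x if x < 0.5 else mu * (1.0 - x)
                seq.append(x)
            return seq

        # e.g. perturb a candidate threshold pair (t1, t2) with chaotic offsets
        chaos = tent_map_sequence(2)
        t1, t2 = 80, 160
        t1_new = int(t1 + (chaos[0] - 0.5) * 20)   # hypothetical +/-10 grey-level jitter
        t2_new = int(t2 + (chaos[1] - 0.5) * 20)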

  7. A comparative evaluation of adaptive noise cancellation algorithms for minimizing motion artifacts in a forehead-mounted wearable pulse oximeter.

    PubMed

    Comtois, Gary; Mendelson, Yitzhak; Ramuka, Piyush

    2007-01-01

    Wearable physiological monitoring using a pulse oximeter would enable field medics to monitor multiple injuries simultaneously, thereby prioritizing medical intervention when resources are limited. However, a primary factor limiting the accuracy of pulse oximetry is poor signal-to-noise ratio, since photoplethysmographic (PPG) signals, from which arterial oxygen saturation (SpO2) and heart rate (HR) measurements are derived, are compromised by movement artifacts. This study was undertaken to quantify SpO2 and HR errors induced by certain motion artifacts utilizing accelerometry-based adaptive noise cancellation (ANC). Since the fingers are generally more vulnerable to motion artifacts, measurements were performed using a custom forehead-mounted wearable pulse oximeter developed for real-time remote physiological monitoring and triage applications. This study revealed that processing motion-corrupted PPG signals with least mean squares (LMS) and recursive least squares (RLS) algorithms can be effective in reducing SpO2 and HR errors during jogging, but the degree of improvement depends on filter order. Although both algorithms produced similar improvements, implementing the adaptive LMS algorithm is advantageous since it requires significantly fewer operations.
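
    A minimal LMS noise canceller with the accelerometer as the reference input illustrates the structure being compared (the RLS variant differs only in the weight update). The filter order and step size below are illustrative; the study's point is precisely that the best order must be chosen empirically.

        import numpy as np

        def lms_anc(primary, reference, n_taps=8, mu=0.01):
            """LMS adaptive noise cancellation for motion-corrupted PPG.

            primary: PPG + motion artifact; reference: accelerometer signal
            correlated with the artifact. The filter learns the artifact path;
            the error e = primary - y is the cleaned PPG. Higher filter order
            can cancel more structured motion but costs more operations.
            """
            w = np.zeros(n_taps)
            cleaned = np.zeros(len(primary))
            for n in range(n_taps, len(primary)):
                u = reference[n - n_taps:n][::-1]
                y = w @ u                      # estimated artifact
                e = primary[n] - y             # cleaned sample
                w += 2 * mu * e * u            # LMS weight update
                cleaned[n] = e
            return cleaned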

  8. Application of image recognition algorithms for statistical description of nano- and microstructured surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mărăscu, V.; Dinescu, G.

    In this paper we propose a statistical approach for describing the self-assembling of sub-micronic polystyrene beads on silicon surfaces, as well as the evolution of surface topography due to plasma treatments. Algorithms for image recognition are used in conjunction with Scanning Electron Microscopy (SEM) imaging of surfaces. In a first step, greyscale images of the surface covered by the polystyrene beads are obtained. Further, an adaptive thresholding method was applied to obtain binary images. The next step consisted in automatic identification of polystyrene bead dimensions, using the Hough transform algorithm, according to bead radius. In order to analyze the uniformity of the self-assembled polystyrene beads, the squared modulus of the 2-dimensional Fast Fourier Transform (2-D FFT) was applied. By combining these algorithms we obtain a powerful and fast statistical tool for the analysis of micro- and nanomaterials with aspect features regularly distributed on the surface upon SEM examination.

  9. Hybrid Artificial Root Foraging Optimizer Based Multilevel Threshold for Image Segmentation

    PubMed Central

    Liu, Yang; Liu, Junfei

    2016-01-01

    This paper proposes a new plant-inspired optimization algorithm for multilevel threshold image segmentation, namely, the hybrid artificial root foraging optimizer (HARFO), which essentially mimics iterative root foraging behaviors. In this algorithm the new growth operators of branching, regrowing, and shrinkage are initially designed to optimize continuous space search by combining root-to-root communication and a coevolution mechanism. With the auxin-regulated scheme, the various root growth operators are guided systematically. With root-to-root communication, individuals exchange information in different efficient topologies, which essentially improves the exploration ability. With the coevolution mechanism, the hierarchical spatial population driven by the evolutionary pressure of multiple subpopulations is structured, which ensures that the diversity of the root population is well maintained. The comparative results on a suite of benchmarks show the superiority of the proposed algorithm. Finally, the proposed HARFO algorithm is applied to the complex image segmentation problem based on multilevel thresholds. Computational results of this approach on a set of tested images show the outperformance of the proposed algorithm in terms of optimization accuracy and computational efficiency. PMID:27725826

  10. Hybrid Artificial Root Foraging Optimizer Based Multilevel Threshold for Image Segmentation.

    PubMed

    Liu, Yang; Liu, Junfei; Tian, Liwei; Ma, Lianbo

    2016-01-01

    This paper proposes a new plant-inspired optimization algorithm for multilevel threshold image segmentation, namely, the hybrid artificial root foraging optimizer (HARFO), which essentially mimics iterative root foraging behaviors. In this algorithm the new growth operators of branching, regrowing, and shrinkage are initially designed to optimize continuous space search by combining root-to-root communication and a coevolution mechanism. With the auxin-regulated scheme, the various root growth operators are guided systematically. With root-to-root communication, individuals exchange information in different efficient topologies, which essentially improves the exploration ability. With the coevolution mechanism, the hierarchical spatial population driven by the evolutionary pressure of multiple subpopulations is structured, which ensures that the diversity of the root population is well maintained. The comparative results on a suite of benchmarks show the superiority of the proposed algorithm. Finally, the proposed HARFO algorithm is applied to the complex image segmentation problem based on multilevel thresholds. Computational results of this approach on a set of tested images show the outperformance of the proposed algorithm in terms of optimization accuracy and computational efficiency.

  11. A comparison of accuracy of fall detection algorithms (threshold-based vs. machine learning) using waist-mounted tri-axial accelerometer signals from a comprehensive set of falls and non-fall trials.

    PubMed

    Aziz, Omar; Musngi, Magnus; Park, Edward J; Mori, Greg; Robinovitch, Stephen N

    2017-01-01

    Falls are the leading cause of injury-related morbidity and mortality among older adults. Over 90% of hip and wrist fractures and 60% of traumatic brain injuries in older adults are due to falls. Another serious consequence of falls among older adults is the 'long lie' experienced by individuals who are unable to get up and remain on the ground for an extended period of time after a fall. Considerable research has been conducted over the past decade on the design of wearable sensor systems that can automatically detect falls and send an alert to care providers to reduce the frequency and severity of long lies. While most systems described to date incorporate threshold-based algorithms, machine learning algorithms may offer increased accuracy in detecting falls. In the current study, we compared the accuracy of these two approaches in detecting falls by conducting a comprehensive set of falling experiments with 10 young participants. Participants wore waist-mounted tri-axial accelerometers and simulated the most common causes of falls observed in older adults, along with near-falls and activities of daily living. The overall performance of five machine learning algorithms was greater than the performance of five threshold-based algorithms described in the literature, with support vector machines providing the highest combination of sensitivity and specificity.
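
    A representative threshold-based detector of the kind the machine-learning classifiers were benchmarked against: flag an impact spike in the acceleration magnitude followed shortly by a near-1 g "lying still" window. All threshold values and window lengths below are illustrative assumptions, not those of any specific published algorithm.

        import numpy as np

        G = 9.81

        def threshold_fall_detector(acc, fs, impact_g=3.0, still_g=0.15, still_s=1.0):
            """Two-stage threshold fall detector on waist acceleration.

            acc: (N, 3) accelerometer samples in m/s^2, fs: sampling rate in Hz.
            Flags a fall when the magnitude spikes above `impact_g` and is
            followed within 2 s by a quiet window close to 1 g (lying still).
            """
            mag = np.linalg.norm(acc, axis=1) / G          # magnitude in g
            win = int(still_s * fs)
            for i in np.flatnonzero(mag > impact_g):
                lo, hi = i + int(0.5 * fs), i + int(2.0 * fs)
                post = mag[lo:hi]
                if len(post) >= win and np.all(np.abs(post[:win] - 1.0) < still_g):
                    return True, i / fs                    # fall detected, time in s
            return False, None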

  12. Developing an Enhanced Lightning Jump Algorithm for Operational Use

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Overall Goals: 1. Build on the lightning jump framework set through previous studies. 2. Understand what typically occurs in nonsevere convection with respect to increases in lightning. 3. Ultimately develop a lightning jump algorithm for use on the Geostationary Lightning Mapper (GLM). Four lightning jump algorithm configurations were developed (2σ, 3σ, Threshold 10, and Threshold 8), and five algorithms were tested on a population of 47 nonsevere and 38 severe thunderstorms. Results indicate that the 2σ algorithm performed best over the entire thunderstorm sample set, with a POD of 87%, a FAR of 35%, a CSI of 59%, and an HSS of 75%.
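
    A sketch of the 2σ configuration's core test: compute the time rate of change of the total flash rate (DFRDT) and declare a jump when the newest value exceeds the mean plus two standard deviations of the recent history. The window lengths here are illustrative, not the exact operational settings.

        import numpy as np

        def two_sigma_jump(flash_counts, dt_min=2.0):
            """2-sigma lightning jump test on a storm's flash-count series.

            flash_counts: total flashes per dt_min-minute period. A jump is
            declared when the latest DFRDT exceeds the mean + 2 standard
            deviations of the preceding DFRDT values.
            """
            rate = np.asarray(flash_counts, float) / dt_min     # flashes per minute
            dfrdt = np.diff(rate) / dt_min
            if len(dfrdt) < 6:
                return False
            history, latest = dfrdt[-6:-1], dfrdt[-1]
            return latest > history.mean() + 2.0 * history.std()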

  13. Can adaptive threshold-based metabolic tumor volume (MTV) and lean body mass corrected standard uptake value (SUL) predict prognosis in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy?

    PubMed

    Akagunduz, Ozlem Ozkaya; Savas, Recep; Yalman, Deniz; Kocacelebi, Kenan; Esassolak, Mustafa

    2015-11-01

    To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Pretreatment PET/CT scans of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax, which were found to have a predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Forty-six patients had a complete response, 15 had a partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of the 46 complete responders, 10 had local recurrence, and of the 16 partial or non-responders, 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had predictive value for local recurrence/progression (p=0.030). ROC curve analysis revealed a cut-off value of 14.00 mL for

  14. An improved adaptive interpolation clock recovery loop based on phase splitting algorithm for coherent optical communication system

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Zhang, Qi; Wang, Yong-jun; Tian, Qing-hua; Tian, Feng; Mao, Ya-ya

    2018-01-01

    Traditional clock recovery schemes achieve timing adjustment by digital interpolation, thereby recovering the sampling sequence. Building on this, an improved clock recovery architecture with joint channel equalization for coherent optical communication systems is presented in this paper. The loop differs from traditional clock recovery. In order to reduce the interpolation error caused by distortion in the frequency domain of the interpolator and to suppress the spectral mirroring generated by the sampling rate change, the proposed algorithm performs joint equalization, improves the original interpolator in the loop with adaptive filtering, and applies error compensation to the original signals according to the equalized pre-filtered signals. The signals are then adaptively interpolated through the feedback loop. Furthermore, the phase splitting timing recovery algorithm is adopted in this paper: the timing error is calculated according to the improved algorithm when there is no transition between adjacent symbols, making the calculated timing error more accurate. Meanwhile, a carrier coarse synchronization module is placed before timing recovery to eliminate larger frequency offset interference, which effectively adjusts the sampling clock phase. Simulation results show that the timing error is greatly reduced after the loop is changed. Based on the phase splitting algorithm, the BER and MSE are better than those of the unmodified architecture. In the fiber channel, using an MQAM modulation format after 100 km of single-mode fiber transmission, the algorithm shows better clock performance under different ROF (roll-off factor) values, especially as the ROF tends to 0. When SNR values are less than 8, the BER reaches the 10⁻² to 10⁻¹ range. The proposed timing recovery is thus more suitable for situations with low SNR values.

  15. Combination of Adaptive Feedback Cancellation and Binaural Adaptive Filtering in Hearing Aids

    NASA Astrophysics Data System (ADS)

    Lombard, Anthony; Reindl, Klaus; Kellermann, Walter

    2009-12-01

    We study a system combining adaptive feedback cancellation and adaptive filtering connecting inputs from both ears for signal enhancement in hearing aids. For the first time, such a binaural system is analyzed in terms of system stability, convergence of the algorithms, and possible interaction effects. As major outcomes of this study, a new stability condition adapted to the considered binaural scenario is presented, some already existing and commonly used feedback cancellation performance measures for the unilateral case are adapted to the binaural case, and possible interaction effects between the algorithms are identified. For illustration purposes, a blind source separation algorithm has been chosen as an example for adaptive binaural spatial filtering. Experimental results for binaural hearing aids confirm the theoretical findings and the validity of the new measures.

  16. A multi-SNP association test for complex diseases incorporating an optimal P-value threshold algorithm in nuclear families.

    PubMed

    Wang, Yi-Ting; Sung, Pei-Yuan; Lin, Peng-Lin; Yu, Ya-Wen; Chung, Ren-Hua

    2015-05-15

    Genome-wide association studies (GWAS) have become a common approach to identifying single nucleotide polymorphisms (SNPs) associated with complex diseases. As complex diseases are caused by the joint effects of multiple genes, while the effect of each individual gene or SNP is modest, a method considering the joint effects of multiple SNPs can be more powerful than testing individual SNPs. The multi-SNP analysis aims to test association based on a SNP set, usually defined based on biological knowledge such as gene or pathway, which may contain only a portion of SNPs with effects on the disease. Therefore, a challenge for the multi-SNP analysis is how to effectively select a subset of SNPs with promising association signals from the SNP set. We developed the Optimal P-value Threshold Pedigree Disequilibrium Test (OPTPDT). The OPTPDT uses general nuclear families. A variable p-value threshold algorithm is used to determine an optimal p-value threshold for selecting a subset of SNPs. A permutation procedure is used to assess the significance of the test. We used simulations to verify that the OPTPDT has correct type I error rates. Our power studies showed that the OPTPDT can be more powerful than the set-based test in PLINK, the multi-SNP FBAT test, and the p-value based test GATES. We applied the OPTPDT to a family-based autism GWAS dataset for gene-based association analysis and identified MACROD2-AS1 with genome-wide significance (p-value = 2.5×10⁻⁶). Our simulation results suggested that the OPTPDT is a valid and powerful test. The OPTPDT will be helpful for gene-based or pathway association analysis. The method is ideal for the secondary analysis of existing GWAS datasets, which may identify a set of SNPs with joint effects on the disease.
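
    The variable p-value threshold idea can be sketched generically: for each candidate threshold, the SNPs whose single-marker p-values fall below it are combined into one statistic, and the maximized statistic is assessed by permutation to account for the selection. This is not the OPTPDT code; the combination statistic, threshold grid, and sign-flip permutation are illustrative assumptions.

      import numpy as np
      from scipy.stats import norm

      def optimal_threshold_stat(z, thresholds=(0.01, 0.05, 0.1, 0.5, 1.0)):
          """Best combined statistic over a grid of p-value thresholds."""
          pvals = 2 * norm.sf(np.abs(z))
          best = 0.0
          for t in thresholds:
              sel = pvals <= t
              if sel.any():                   # mean chi-square of selected SNPs
                  best = max(best, float(np.mean(z[sel] ** 2)))
          return best

      def permutation_pvalue(z, n_perm=1000, seed=0):
          """Significance of the optimized statistic via sign-flip permutations."""
          rng = np.random.default_rng(seed)
          observed = optimal_threshold_stat(z)
          null = [optimal_threshold_stat(rng.choice([-1, 1], size=z.size) * z)
                  for _ in range(n_perm)]
          return (1 + sum(s >= observed for s in null)) / (n_perm + 1)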

  17. Adaptive Reception for Underwater Communications

    DTIC Science & Technology

    2011-06-01

    Experimental results prove the effectiveness of the receiver. SUBJECT TERMS: underwater acoustic communications, adaptive algorithms, Kalman filter... the update algorithm design and the value of the spatial diversity are addressed. In this research, an adaptive multichannel equalizer made up of a... for the time-varying nature of the channel is to use an Adaptive Decision Feedback Equalizer based on either the RLS or LMS algorithm. Although this

  18. ECG signal performance de-noising assessment based on threshold tuning of dual-tree wavelet transform.

    PubMed

    El B'charri, Oussama; Latif, Rachid; Elmansouri, Khalifa; Abenaou, Abdenbi; Jenkal, Wissam

    2017-02-07

    Since the electrocardiogram (ECG) signal has a low frequency and a weak amplitude, it is sensitive to miscellaneous mixed noises, which may reduce diagnostic accuracy and hinder the physician's correct decisions on patients. The dual-tree wavelet transform (DT-WT) is one of the most recent enhanced versions of the discrete wavelet transform. However, threshold tuning of this method for noise removal from ECG signals has not been investigated yet. In this work, we provide a comprehensive study on the impact of the choice of threshold algorithm, threshold value, and the appropriate wavelet decomposition level on ECG signal de-noising performance. A set of simulations is performed on both synthetic and real ECG signals. First, the synthetic ECG signal is used to observe the algorithm response. The evaluation results for the synthetic ECG signal corrupted by various types of noise showed that the modified unified threshold and the wavelet hyperbolic threshold de-noising methods perform better under realistic and colored noises. The tuned threshold is then used on real ECG signals from the MIT-BIH database. The results show that the proposed method achieves higher performance than the ordinary dual-tree wavelet transform for all kinds of noise removal from the ECG signal. The simulation results indicate that the algorithm is robust to all kinds of noise with varying degrees of input noise, providing a high-quality clean signal. Moreover, the algorithm is quite simple and can be used in real-time ECG monitoring.
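
    The dual-tree transform needs a dedicated library, but the threshold-tuning step studied here can be illustrated with an ordinary discrete wavelet transform: decompose, threshold the detail coefficients, reconstruct. The wavelet, level, and universal-threshold rule below are common defaults, not the paper's tuned values.

      import numpy as np
      import pywt

      def wavelet_denoise(ecg, wavelet="db4", level=4, mode="soft"):
          """De-noise a 1-D ECG signal by wavelet coefficient thresholding."""
          coeffs = pywt.wavedec(ecg, wavelet, level=level)
          # Noise level estimated from the finest detail band (universal threshold)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thresh = sigma * np.sqrt(2 * np.log(len(ecg)))
          coeffs[1:] = [pywt.threshold(c, thresh, mode=mode) for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[:len(ecg)]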

  19. Directional hearing aid using hybrid adaptive beamformer (HAB) and binaural ITE array

    NASA Astrophysics Data System (ADS)

    Shaw, Scott T.; Larow, Andy J.; Gibian, Gary L.; Sherlock, Laguinn P.; Schulein, Robert

    2002-05-01

    A directional hearing aid algorithm called the Hybrid Adaptive Beamformer (HAB), developed for NIH/NIA, can be applied to many different microphone array configurations. In this project the HAB algorithm was applied to a new array employing in-the-ear microphones at each ear (HAB-ITE), to see if previous HAB performance could be achieved with a more cosmetically acceptable package. With diotic output, the average benefit in threshold SNR was 10.9 dB for three hard-of-hearing (HoH) and 11.7 dB for five normal-hearing subjects. These results are slightly better than previous results of equivalent tests with a 3-in. array. With an innovative binaural fitting, a small benefit beyond that provided by diotic adaptive beamforming was observed: 12.5 dB for HoH and 13.3 dB for normal-hearing subjects, a 1.6 dB improvement over the diotic presentation. Subjectively, the binaural fitting preserved binaural hearing abilities, giving the user a sense of space and providing left-right localization. Thus the goal of creating an adaptive beamformer that simultaneously provides excellent noise reduction and binaural hearing was achieved. Further work remains before the HAB-ITE can be incorporated into a real product: optimizing binaural adaptive beamforming, and integrating the concept with other technologies to produce a viable product prototype. [Work supported by NIH/NIDCD.]

  20. Evidence Accumulator or Decision Threshold – Which Cortical Mechanism are We Observing?

    PubMed Central

    Simen, Patrick

    2012-01-01

    Most psychological models of perceptual decision making are of the accumulation-to-threshold variety. The neural basis of accumulation in parietal and prefrontal cortex is therefore a topic of great interest in neuroscience. In contrast, threshold mechanisms have received less attention, and their neural basis has usually been sought in subcortical structures. Here I analyze a model of a decision threshold that can be implemented in the same cortical areas as evidence accumulators, and whose behavior bears on two open questions in decision neuroscience: (1) When ramping activity is observed in a brain region during decision making, does it reflect evidence accumulation? (2) Are changes in speed-accuracy tradeoffs and response biases more likely to be achieved by changes in thresholds, or in accumulation rates and starting points? The analysis suggests that task-modulated ramping activity, by itself, is weak evidence that a brain area mediates evidence accumulation as opposed to threshold readout; and that signs of modulated accumulation are as likely to indicate threshold adaptation as adaptation of starting points and accumulation rates. These conclusions imply that how thresholds are modeled can dramatically impact accumulator-based interpretations of this data. PMID:22737136
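
    The accumulation-to-threshold picture discussed here is easy to make concrete: evidence drifts toward one of two bounds, and raising the bound (the threshold) trades speed for accuracy. A generic drift-diffusion sketch with illustrative parameter values:

      import numpy as np

      def ddm_trial(drift=0.3, threshold=1.0, start=0.0, dt=0.001, noise=1.0,
                    rng=None):
          """Simulate one drift-diffusion trial; return (choice, reaction time)."""
          rng = rng or np.random.default_rng()
          x, t = start, 0.0
          while abs(x) < threshold:
              x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return (1 if x > 0 else 0), t

      rng = np.random.default_rng(1)
      for bound in (0.5, 1.0, 1.5):            # threshold modulation
          trials = [ddm_trial(threshold=bound, rng=rng) for _ in range(500)]
          acc = np.mean([c for c, _ in trials])
          rt = np.mean([t for _, t in trials])
          print(f"threshold={bound}: accuracy={acc:.2f}, mean RT={rt:.2f}s")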

  1. Testing of Lagrange multiplier damped least-squares control algorithm for woofer-tweeter adaptive optics

    PubMed Central

    Zou, Weiyao; Burns, Stephen A.

    2012-01-01

    A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. PMID:22441462
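
    A minimal sketch of the damped least-squares step described here, with the damping factor set from the eigenvalue spectrum as the abstract reports. Taking the median eigenvalue of the normal-equations matrix is one reading of that rule and is an assumption, as are the placeholder influence matrix and wavefront vector.

      import numpy as np

      def damped_ls_command(A, w):
          """Damped least-squares actuator command minimizing ||A a - w||.

          A : (n_measurements, n_actuators) influence matrix of one DM
          w : measured wavefront error vector
          """
          AtA = A.T @ A
          rho = np.median(np.linalg.eigvalsh(AtA))   # damping factor (assumed form)
          return np.linalg.solve(AtA + rho * np.eye(AtA.shape[0]), A.T @ w)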

  2. Lidar detection algorithm for time and range anomalies.

    PubMed

    Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G

    2007-10-10

    A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly, where the question "is a target (aerosol cloud) present at range R within time t₁ to t₂" is addressed, and for range anomaly, where the question "is a target present at time t within ranges R₁ and R₂" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the one-over-range-squared dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO₂ lidar measurements of the bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis niger, BG) and Pantoea agglomerans (formerly known as Erwinia herbicola, Eh) are shown and discussed.
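
    The two-hypothesis Gaussian-mixture step can be sketched with a two-component mixture fitted to the detection scores; the detection threshold is the score at which the posterior of the high-mean (anomaly) component passes 0.5. The scikit-learn fit below is an illustrative substitute for the authors' expectation-maximization implementation.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def detection_threshold(scores):
          """Score above which the anomaly component is the more likely one."""
          s = np.asarray(scores, dtype=float).reshape(-1, 1)
          gmm = GaussianMixture(n_components=2, random_state=0).fit(s)
          hi = int(np.argmax(gmm.means_.ravel()))       # anomaly component
          grid = np.linspace(s.min(), s.max(), 1000).reshape(-1, 1)
          post = gmm.predict_proba(grid)[:, hi]
          return float(grid[np.argmax(post > 0.5), 0])  # first crossing of 0.5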

  3. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing including spectral filtering, CFAR and knowledge-based acceptance testing is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single scan performance with a nominal real-time delay of less than one second between illumination and display.

  4. Policy tree optimization for adaptive management of water resources systems

    NASA Astrophysics Data System (ADS)

    Herman, Jonathan; Giuliani, Matteo

    2017-04-01

    Water resources systems must cope with irreducible uncertainty in supply and demand, requiring policy alternatives capable of adapting to a range of possible future scenarios. Recent studies have developed adaptive policies based on "signposts" or "tipping points" that suggest the need to update the policy. However, there remains a need for a general method to optimize the choice of the signposts to be used and their threshold values. This work contributes a general framework and computational algorithm to design adaptive policies as a tree structure (i.e., a hierarchical set of logical rules) using a simulation-optimization approach based on genetic programming. Given a set of feature variables (e.g., reservoir level, inflow observations, inflow forecasts), the resulting policy defines both the optimal reservoir operations and the conditions under which such operations should be triggered. We demonstrate the approach using Folsom Reservoir (California) as a case study, in which operating policies must balance the risks of both floods and droughts. Numerical results show that the tree-based policies outperform those designed via Dynamic Programming. In addition, they display good adaptive capacity to the changing climate, successfully adapting the reservoir operations across a large set of uncertain climate scenarios.

  5. Automatic threshold selection for multi-class open set recognition

    NASA Astrophysics Data System (ADS)

    Scherreik, Matthew; Rigling, Brian

    2017-05-01

    Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifier often has two core components. The first component is a base classifier which estimates the most likely class of a given example. The second component consists of open set logic which estimates whether the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion: a candidate label is first estimated by the base classifier, and the true membership of the example in the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes which were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
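
    The feed-forward structure is straightforward to sketch: a probability-calibrated base classifier proposes a candidate label, and an open-set threshold on the class posterior decides whether to accept it or declare the example unknown. The per-class threshold dictionary stands in for the output of the iterative selection algorithm and is an assumption, as is the integer -1 "unknown" label.

      import numpy as np
      from sklearn.svm import SVC   # e.g., SVC(probability=True) as base classifier

      def open_set_predict(clf, X, thresholds):
          """Feed-forward open-set decision; -1 marks rejection as unknown.

          clf        : fitted classifier exposing predict_proba
          thresholds : dict mapping class label -> posterior acceptance threshold
          """
          proba = clf.predict_proba(X)
          candidates = clf.classes_[np.argmax(proba, axis=1)]
          cuts = np.array([thresholds[c] for c in candidates])
          return np.where(proba.max(axis=1) >= cuts, candidates, -1)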

  6. A localization algorithm of adaptively determining the ROI of the reference circle in image

    NASA Astrophysics Data System (ADS)

    Xu, Zeen; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen

    2018-03-01

    Aiming at accurately positioning detection probes underwater, this paper proposes a method based on computer vision that can effectively solve this problem. The idea is as follows. First, because the shape of a heat tube appears similar to a circle in the image, we find a circle whose physical location is well known and set it as the reference circle. Second, we calculate the pixel offset between the reference circle and the probes in the picture and adjust the steering gear according to this offset. As a result, we can accurately measure the physical distance between the probes and the heat tubes under test, and thus know the precise location of the probes underwater. However, choosing the reference circle in the image is a difficult problem. In this paper, we propose an algorithm that adaptively determines the region of the reference circle: within this region there is only one circle, and that circle is the reference circle. Test results show that the accuracy of extracting the reference circle from the whole picture without using the ROI (region of interest) of the reference circle is only 58.76%, while the proposed algorithm achieves 95.88%. The experimental results indicate that the proposed algorithm can effectively improve the efficiency of tube detection.
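
    The reference-circle step can be illustrated with a standard circle detector restricted to the adaptively chosen region. The Hough transform below is an assumed stand-in for the paper's detector, and all parameters are illustrative; the ROI is expected to contain exactly one circle.

      import cv2
      import numpy as np

      def find_reference_circle(gray, roi):
          """Detect the single reference circle inside an ROI of an 8-bit image.

          roi : (x, y, w, h) region assumed to contain exactly one circle.
          """
          x, y, w, h = roi
          patch = cv2.medianBlur(gray[y:y + h, x:x + w], 5)
          circles = cv2.HoughCircles(patch, cv2.HOUGH_GRADIENT, dp=1,
                                     minDist=w,            # at most one detection
                                     param1=100, param2=30,
                                     minRadius=w // 8, maxRadius=w // 2)
          if circles is None:
              return None
          cx, cy, r = circles[0, 0]
          return (x + cx, y + cy, r)   # center and radius in full-image coordinates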

  7. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, delay-multiply-and-sum (DMAS), was introduced, which has lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvements in full-width-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with other beamformers.
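
    The DMAS algebra referred to here (pairwise multiplication of delay-aligned channel signals before summation) can be sketched directly; the signed square root keeps the products dimensionally comparable to DAS. Delays are assumed to be applied already.

      import numpy as np

      def dmas_sample(delayed, t):
          """Delay-multiply-and-sum output at time index t.

          delayed : (n_channels, n_samples) array of delay-aligned RF signals
          """
          s = delayed[:, t]
          sr = np.sign(s) * np.sqrt(np.abs(s))   # signed square root
          total = 0.0
          for i in range(len(sr)):
              for j in range(i + 1, len(sr)):    # all distinct channel pairs
                  total += sr[i] * sr[j]
          return total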

  8. Improvements to the ShipIR/NTCS adaptive track gate algorithm and 3D flare particle model

    NASA Astrophysics Data System (ADS)

    Ramaswamy, Srinivasan; Vaitekunas, David A.; Gunter, Willem H.; February, Faith J.

    2017-05-01

    A key component in any image-based tracking system is the adaptive tracking algorithm used to segment the image into potential targets, rank-and-select the best candidate target, and gate the selected target to further improve tracker performance. Similarly, a key component in any soft-kill response to an incoming guided missile is the flare/chaff decoy used to distract or seduce the seeker homing system away from the naval platform. This paper describes the recent improvements to the naval threat countermeasure simulator (NTCS) of the NATO-standard ship signature model (ShipIR). Efforts to analyse and match the 3D flare particle model against actual IR measurements of the Chemring TALOS IR round resulted in further refinement of the 3D flare particle distribution. The changes in the flare model characteristics were significant enough to require an overhaul to the adaptive track gate (ATG) algorithm in the way it detects the presence of flare decoys and reacquires the target after flare separation. A series of test scenarios are used to demonstrate the impact of the new flare and ATG on IR tactics simulation.

  9. Validation of elastic registration algorithms based on adaptive irregular grids for medical applications

    NASA Astrophysics Data System (ADS)

    Franz, Astrid; Carlsen, Ingwer C.; Renisch, Steffen; Wischmann, Hans-Aloys

    2006-03-01

    Elastic registration of medical images is an active field of current research. Registration algorithms have to be validated in order to show that they fulfill the requirements of a particular clinical application. Furthermore, validation strategies compare the performance of different registration algorithms and can hence judge which algorithm is best suited for a target application. In the literature, validation strategies for rigid registration algorithms have been analyzed. For a known ground truth they assess the displacement error at a few landmarks, which is not sufficient for elastic transformations described by a huge number of parameters. Hence we consider the displacement error averaged over all pixels in the whole image or in a region-of-interest of clinical relevance. Using artificially, but realistically deformed images of the application domain, we use this quality measure to analyze an elastic registration based on transformations defined on adaptive irregular grids for the following clinical applications: Magnetic Resonance (MR) images of freely moving joints for orthopedic investigations, thoracic Computed Tomography (CT) images for the detection of pulmonary embolisms, and transmission images as used for the attenuation correction and registration of independently acquired Positron Emission Tomography (PET) and CT images. The definition of a region-of-interest allows to restrict the analysis of the registration accuracy to clinically relevant image areas. The behaviour of the displacement error as a function of the number of transformation control points and their placement can be used for identifying the best strategy for the initial placement of the control points.

  10. Detecting wood surface defects with fusion algorithm of visual saliency and local threshold segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng

    2018-04-01

    This paper presents a new method for wood defect detection that can solve the over-segmentation problem existing in local threshold segmentation methods. The method effectively takes advantage of both visual saliency and local threshold segmentation. Firstly, defect areas are coarsely located by using the spectral residual method to calculate their global visual saliency. Then, threshold segmentation with the maximum inter-class variance (Otsu) method is adopted for positioning and segmenting the wood surface defects precisely around the coarsely located areas. Lastly, we use mathematical morphology to process the binary images after segmentation, which reduces the noise and removes small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains ideal segmentation results and is superior to existing segmentation methods based on edge detection, Otsu and threshold segmentation.
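
    The two stages named here can be sketched compactly: a spectral residual saliency map coarsely locates defect regions, and Otsu's maximum inter-class variance threshold segments around them, followed by a morphological clean-up. OpenCV is an illustrative implementation choice, and the kernel sizes are assumptions.

      import cv2
      import numpy as np

      def spectral_residual_saliency(gray):
          """Spectral residual saliency map of an 8-bit grayscale image."""
          f = np.fft.fft2(gray.astype(np.float64))
          log_amp = np.log1p(np.abs(f))
          phase = np.angle(f)
          residual = log_amp - cv2.blur(log_amp, (3, 3))   # remove average spectrum
          sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
          sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
          return cv2.normalize(sal, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

      def segment_defects(gray):
          sal = spectral_residual_saliency(gray)
          _, coarse = cv2.threshold(sal, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          masked = cv2.bitwise_and(gray, gray, mask=coarse)  # salient areas only
          _, fine = cv2.threshold(masked, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          kernel = np.ones((3, 3), np.uint8)
          return cv2.morphologyEx(fine, cv2.MORPH_OPEN, kernel)  # drop small blobs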

  11. Investigation of Diesel’s Residual Noise on Predictive Vehicles Noise Cancelling using LMS Adaptive Algorithm

    NASA Astrophysics Data System (ADS)

    Arttini Dwi Prasetyowati, Sri; Susanto, Adhi; Widihastuti, Ida

    2017-04-01

    Every noise problem requires a different solution. In this research, the noise that must be cancelled comes from the roadway. The Least Mean Square (LMS) adaptive algorithm is one of the algorithms that can be used to cancel that noise. Residual noise always appears and cannot be erased completely. This research aims to characterize the residual noise of the vehicle noise prediction and analyze it so that it no longer appears as a problem. The LMS algorithm was used to predict the vehicle noise and minimize the error. The distribution of the residual noise was observed to determine its specific character. The statistics of the residual noise are close to a normal distribution with μ = 0.0435 and σ = 1.13, and the autocorrelation of the residual noise forms an impulse. In conclusion, the residual noise is insignificant.

  12. Setting objective thresholds for rare event detection in flow cytometry

    PubMed Central

    Richards, Adam J.; Staats, Janet; Enzor, Jennifer; McKinnon, Katherine; Frelinger, Jacob; Denny, Thomas N.; Weinhold, Kent J.; Chan, Cliburn

    2014-01-01

    The accurate identification of rare antigen-specific cytokine-positive cells from peripheral blood mononuclear cells (PBMC) after antigenic stimulation in an intracellular staining (ICS) flow cytometry assay is challenging, as cytokine-positive events may be fairly diffusely distributed and lack an obvious separation from the negative population. Traditionally, the approach by flow operators has been to manually set a positivity threshold to partition events into cytokine-positive and cytokine-negative. This approach suffers from subjectivity and inconsistency across different flow operators. The use of statistical clustering methods does not remove the need to find an objective threshold between positive and negative events, since consistent identification of rare event subsets is highly challenging for automated algorithms, especially when there is distributional overlap between the positive and negative events ("smear"). We present a new approach, based on the Fβ measure, that is similar to manual thresholding in providing a hard cutoff, but has the advantage of being determined objectively. The performance of this algorithm is compared with results obtained by expert visual gating. Several ICS data sets from the External Quality Assurance Program Oversight Laboratory (EQAPOL) proficiency program were used to make the comparisons. We first show that visually determined thresholds are difficult to reproduce and pose a problem when comparing results across operators or laboratories, as do commonly employed clustering algorithms. In contrast, a single parameterization of the Fβ method performs consistently across different centers, samples, and instruments because it optimizes the precision/recall tradeoff by using both negative and positive controls. PMID:24727143
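
    The Fβ idea reduces to a one-dimensional sweep: score every candidate cutoff against events of known class (positive and negative controls) and keep the cutoff that maximizes Fβ, with β tuning the precision/recall trade-off. The grid and labels below are illustrative.

      import numpy as np

      def f_beta_threshold(values, labels, beta=1.0, n_grid=200):
          """Cutoff on `values` maximizing the F-beta score.

          values : 1-D fluorescence intensities for all events
          labels : 1 for known positive events, 0 for known negative controls
          """
          values, labels = np.asarray(values), np.asarray(labels)
          best_t, best_f = None, -1.0
          for t in np.linspace(values.min(), values.max(), n_grid):
              pred = values >= t
              tp = np.sum(pred & (labels == 1))
              fp = np.sum(pred & (labels == 0))
              fn = np.sum(~pred & (labels == 1))
              denom = (1 + beta**2) * tp + beta**2 * fn + fp
              f = (1 + beta**2) * tp / denom if denom else 0.0
              if f > best_f:
                  best_t, best_f = float(t), float(f)
          return best_t, best_f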

  13. Simulation for noise cancellation using LMS adaptive filter

    NASA Astrophysics Data System (ADS)

    Lee, Jia-Haw; Ooi, Lu-Ean; Ko, Ying-Hao; Teoh, Choe-Yung

    2017-06-01

    In this paper, the fundamental algorithm of noise cancellation, the Least Mean Square (LMS) algorithm, is studied and enhanced with an adaptive filter. A simulation of noise cancellation using the LMS adaptive filter algorithm is developed. The noise-corrupted speech signal and the engine noise signal are used as inputs for the LMS adaptive filter algorithm. The filtered signal is compared to the original noise-free speech signal in order to highlight the level of attenuation of the noise signal. The result shows that the noise signal is successfully cancelled by the developed adaptive filter. The difference between the noise-free speech signal and the filtered signal is calculated, and the outcome implies that the filtered signal approaches the noise-free speech signal as the adaptive filtering proceeds. The frequency range of the noise successfully cancelled by the LMS adaptive filter algorithm is determined by performing a Fast Fourier Transform (FFT) on the signals. The LMS adaptive filter algorithm shows significant noise cancellation in the lower frequency range.
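
    A minimal NumPy sketch of the simulated canceller: a reference noise input is adaptively filtered to match the noise in the primary (speech plus noise) channel, and the error output is the de-noised speech estimate. Filter length and step size are illustrative.

      import numpy as np

      def lms_cancel(primary, reference, n_taps=32, mu=0.01):
          """LMS adaptive noise cancellation; returns the cleaned-speech estimate.

          primary   : speech corrupted by noise
          reference : noise-only signal correlated with the corrupting noise
          """
          w = np.zeros(n_taps)                     # adaptive filter weights
          e = np.zeros(len(primary))
          for n in range(n_taps, len(primary)):
              x = reference[n - n_taps:n][::-1]    # most recent samples first
              y = w @ x                            # noise estimate
              e[n] = primary[n] - y                # cleaned output (error signal)
              w += 2 * mu * e[n] * x               # LMS weight update
          return e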

  14. Testing of Lagrange multiplier damped least-squares control algorithm for woofer-tweeter adaptive optics.

    PubMed

    Zou, Weiyao; Burns, Stephen A

    2012-03-20

    A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. © 2012 Optical Society of America

  15. Band-pass filtering algorithms for adaptive control of compressor pre-stall modes in aircraft gas-turbine engine

    NASA Astrophysics Data System (ADS)

    Kuznetsova, T. A.

    2018-05-01

    Methods for increasing the adaptability of gas-turbine aircraft engines (GTE) to interference, based on enhancing the capabilities of their automatic control systems (ACS), are analyzed. Flow pulsations in the suction and discharge lines of the compressor, which may cause stall, are considered as the interference. An algorithmic solution to the problem of controlling GTE pre-stall modes, adapted to the stability boundary, is proposed. The aim of the study is to develop band-pass filtering algorithms that provide the detection functions for compressor pre-stall modes in the GTE ACS. The characteristic feature of the pre-stall effect is an increase of the pressure pulsation amplitude over the impeller at multiples of the rotor frequencies. The method is based on a band-pass filter combining low-pass and high-pass digital filters. The impulse response of the high-pass filter is determined from a known low-pass filter impulse response by spectral inversion. The resulting transfer function of the second-order band-pass filter (BPF) corresponds to a stable system. Two circuit implementations of the BPF are synthesized. The designed band-pass filtering algorithms were tested in the MATLAB environment. Comparative analysis of the amplitude-frequency responses of the proposed implementations allows choosing the BPF scheme providing the best quality of filtration. The BPF reaction to a periodic sinusoidal signal, simulating the experimentally obtained pressure pulsation function in the pre-stall mode, was considered. The results of the model experiment demonstrated the effectiveness of applying band-pass filtering algorithms as part of the ACS to identify the pre-stall mode of the compressor by detecting the pressure fluctuation peaks that characterize the compressor's approach to the stability boundary.
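
    A band-pass filter of the kind analyzed here, combining low-pass and high-pass behaviour, can be sketched with a second-order Butterworth design in SciPy; the sampling rate and passband below are placeholders for a rotor-harmonic band, and the alarm level is an assumption.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def bandpass(signal, fs, f_lo, f_hi, order=2):
          """Second-order Butterworth band-pass filtering of a pressure signal."""
          b, a = butter(order, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
          return filtfilt(b, a, signal)            # zero-phase filtering

      # Detect pre-stall pulsation growth as passband amplitude exceeding a limit:
      fs = 10_000.0
      t = np.arange(0, 1.0, 1 / fs)
      pressure = np.sin(2 * np.pi * 1200 * t) * np.linspace(0.1, 1.0, t.size)
      envelope = np.abs(bandpass(pressure, fs, 1100, 1300))
      print("pre-stall alarm:", bool(envelope.max() > 0.5))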

  16. Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions

    NASA Astrophysics Data System (ADS)

    Teal, Paul D.; Eccles, Craig

    2015-04-01

    The two most successful methods for estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of the matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
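
    The shrinkage-thresholding family mentioned last can be written in a few lines: FISTA minimizes 0.5*||Kx - b||^2 + lam*||x||_1 by alternating a gradient step with a soft-threshold, here clamped non-negative as is natural for relaxation distributions. All problem data are placeholders.

      import numpy as np

      def fista(K, b, lam=0.1, n_iter=200):
          """FISTA for min_x 0.5*||K x - b||^2 + lam*||x||_1 with x >= 0."""
          L = np.linalg.norm(K, 2) ** 2        # Lipschitz constant of the gradient
          x = np.zeros(K.shape[1])
          y, t = x.copy(), 1.0
          for _ in range(n_iter):
              grad = K.T @ (K @ y - b)
              x_new = np.maximum(y - grad / L - lam / L, 0.0)  # prox step
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
              y = x_new + ((t - 1) / t_new) * (x_new - x)      # momentum step
              x, t = x_new, t_new
          return x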

  17. Pitch-Learning Algorithm For Speech Encoders

    NASA Technical Reports Server (NTRS)

    Bhaskar, B. R. Udaya

    1988-01-01

    Adaptive algorithm detects and corrects errors in sequence of estimates of pitch period of speech. Algorithm operates in conjunction with techniques used to estimate pitch period. Used in such parametric and hybrid speech coders as linear predictive coders and adaptive predictive coders.

  18. Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.

    PubMed

    Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga

    2015-11-01

    Positive signal-to-noise ratios (SNRs) characterize listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated, in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing, as well as 24 older hearing-impaired listeners took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners conducted one of two adaptive methods which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and estimation strategy for thresholds resulted in a practical method measuring the time compression for 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability compared to younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.

  19. Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique

    NASA Astrophysics Data System (ADS)

    Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.

    2017-12-01

    Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.

  20. Estimation of pulse rate from ambulatory PPG using ensemble empirical mode decomposition and adaptive thresholding.

    PubMed

    Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina

    2017-07-01

    A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
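
    The final adaptive-thresholding stage can be illustrated with SciPy's peak finder, where the height constraint adapts to the local signal statistics and the spacing constraint encodes a plausible pulse-rate range. The constants are illustrative, not the paper's tuned rules.

      import numpy as np
      from scipy.signal import find_peaks

      def pulse_rates(ppg, fs, win_s=10.0):
          """Window-wise pulse-rate estimates (bpm) from a PPG trace."""
          rates, win = [], int(win_s * fs)
          for start in range(0, len(ppg) - win, win):
              seg = ppg[start:start + win]
              height = seg.mean() + 0.5 * seg.std()   # adapts to local amplitude
              distance = int(0.33 * fs)               # refractory gap (<=180 bpm)
              peaks, _ = find_peaks(seg, height=height, distance=distance)
              if len(peaks) > 1:
                  rates.append(60.0 * fs / np.mean(np.diff(peaks)))
          return rates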

  1. An interactive adaptive remeshing algorithm for the two-dimensional Euler equations

    NASA Technical Reports Server (NTRS)

    Slack, David C.; Walters, Robert W.; Lohner, R.

    1990-01-01

    An interactive adaptive remeshing algorithm utilizing a frontal grid generator and a variety of time integration schemes for the two-dimensional Euler equations on unstructured meshes is presented. Several device dependent interactive graphics interfaces have been developed along with a device independent DI-3000 interface which can be employed on any computer that has the supporting software including the Cray-2 supercomputers Voyager and Navier. The time integration methods available include: an explicit four stage Runge-Kutta and a fully implicit LU decomposition. A cell-centered finite volume upwind scheme utilizing Roe's approximate Riemann solver is developed. To obtain higher order accurate results a monotone linear reconstruction procedure proposed by Barth is utilized. Results for flow over a transonic circular arc and flow through a supersonic nozzle are examined.

  2. Robust adaptive 3-D segmentation of vessel laminae from fluorescence confocal microscope images and parallel GPU implementation.

    PubMed

    Narayanaswamy, Arunachalam; Dwarakapuram, Saritha; Bjornsson, Christopher S; Cutler, Barbara M; Shain, William; Roysam, Badrinath

    2010-03-01

    This paper presents robust 3-D algorithms to segment vasculature that is imaged by labeling laminae, rather than the lumenal volume. The signal is weak, sparse, noisy, nonuniform, low-contrast, and exhibits gaps and spectral artifacts, so adaptive thresholding and Hessian filtering based methods are not effective. The structure deviates from a tubular geometry, so tracing algorithms are not effective. We propose a four-step approach. The first step detects candidate voxels using a robust hypothesis test based on a model that assumes Poisson noise and locally planar geometry. The second step performs an adaptive region growth to extract weakly labeled and fine vessels while rejecting spectral artifacts. To enable interactive visualization and estimation of features such as statistical confidence, local curvature, local thickness, and local normal, the third step constructs an accurate mesh representation using marching tetrahedra, volume-preserving smoothing, and adaptive decimation algorithms. To enable topological analysis and efficient validation, the final step estimates vessel centerlines using a ray casting and vote accumulation algorithm. Our algorithm lends itself to parallel processing, and yielded an 8× speedup on a graphics processor (GPU). On synthetic data, our meshes had average error per face (EPF) values of 0.1-1.6 voxels per mesh face for peak signal-to-noise ratios from 110 to 28 dB. Separately, when the mesh was decimated to less than 1% of its original size, the EPF was less than 1 voxel per face. When validated on real datasets, the average recall and precision values were found to be 94.66% and 94.84%, respectively.

  3. AutoNRT: an automated system that measures ECAP thresholds with the Nucleus Freedom cochlear implant via machine intelligence.

    PubMed

    Botros, Andrew; van Dijk, Bas; Killian, Matthijs

    2007-05-01

    AutoNRT is an automated system that measures electrically evoked compound action potential (ECAP) thresholds from the auditory nerve with the Nucleus Freedom cochlear implant. ECAP thresholds along the electrode array are useful in objectively fitting cochlear implant systems for individual use. This paper provides the first detailed description of the AutoNRT algorithm and its expert systems, and reports the clinical success of AutoNRT to date. AutoNRT determines thresholds by visual detection, using two decision tree expert systems that automatically recognise ECAPs. The expert systems are guided by a dataset of 5393 neural response measurements. The algorithm approaches threshold from lower stimulus levels, ensuring recipient safety during postoperative measurements. Intraoperative measurements use the same algorithm but proceed faster by beginning at stimulus levels much closer to threshold. When searching for ECAPs, AutoNRT uses a highly specific expert system (specificity of 99% during training, 96% during testing; sensitivity of 91% during training, 89% during testing). Once ECAPs are established, AutoNRT uses an unbiased expert system to determine an accurate threshold. Throughout the execution of the algorithm, recording parameters (such as implant amplifier gain) are automatically optimised when needed. In a study that included 29 intraoperative and 29 postoperative subjects (a total of 418 electrodes), AutoNRT determined a threshold in 93% of cases where a human expert also determined a threshold. When compared to the median threshold of multiple human observers on 77 randomly selected electrodes, AutoNRT performed as accurately as the 'average' clinician. AutoNRT has demonstrated a high success rate and a level of performance that is comparable with human experts. It has been used in many clinics worldwide throughout the clinical trial and commercial launch of Nucleus Custom Sound Suite, significantly streamlining the clinical procedures associated with

  4. An innovative iterative thresholding algorithm for tumour segmentation and volumetric quantification on SPECT images: Monte Carlo-based methodology and validation.

    PubMed

    Pacilio, M; Basile, C; Shcherbinin, S; Caselli, F; Ventroni, G; Aragno, D; Mango, L; Santini, E

    2011-06-01

    Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging play an important role in the segmentation of functioning parts of organs or tumours, but an accurate and reproducible delineation is still a challenging task. In this work, an innovative iterative thresholding method for tumour segmentation has been proposed and implemented for a SPECT system. This method, which is based on experimental threshold-volume calibrations, also implements the recovery coefficients (RC) of the imaging system, so it has been called the recovering iterative thresholding method (RIThM). The possibility of employing Monte Carlo (MC) simulations for system calibration was also investigated. The RIThM is an iterative algorithm coded in MATLAB: after an initial rough estimate of the volume of interest, the following calculations are repeated: (i) the corresponding source-to-background ratio (SBR) is measured and corrected by means of the RC curve; (ii) the threshold corresponding to the amended SBR value and the volume estimate is then found using threshold-volume data; (iii) a new volume estimate is obtained by image thresholding. The process goes on until convergence. The RIThM was implemented for an Infinia Hawkeye 4 (GE Healthcare) SPECT/CT system, using a Jaszczak phantom and several test objects. Two MC codes were tested to simulate the calibration images: SIMIND and SimSet. For validation, test images consisting of hot spheres and some anatomical structures of the Zubal head phantom were simulated with the SIMIND code. Additional test objects (flasks and vials) were also imaged experimentally. Finally, the RIThM was applied to evaluate three cases of brain metastases and two cases of high-grade gliomas. Comparing experimental thresholds and those obtained by MC simulations, a maximum difference of about 4% was found, within the errors (±2% and ±5%, for volumes ≥5 ml or <5 ml, respectively). Also for the RC data, the comparison showed
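
    The iteration (i)-(iii) can be sketched as a loop over two calibration curves; both callables, the background estimate, and the direction of the RC correction are assumptions standing in for the phantom-derived calibrations described above.

      import numpy as np

      def rithm(image, background, voxel_ml, rc_curve, threshold_lut,
                tol=0.02, max_iter=50):
          """Recovering-iterative-thresholding-style loop (illustrative).

          rc_curve      : callable volume -> recovery coefficient (calibration)
          threshold_lut : callable (sbr, volume) -> threshold fraction of max
          """
          thr = 0.5                                  # rough initial threshold
          volume = voxel_ml * np.count_nonzero(image > thr * image.max())
          for _ in range(max_iter):
              sbr = (image.max() / background) / rc_curve(volume)  # corrected SBR
              thr = threshold_lut(sbr, volume)
              new_volume = voxel_ml * np.count_nonzero(image > thr * image.max())
              if abs(new_volume - volume) <= tol * max(volume, 1e-9):  # converged
                  return new_volume, thr
              volume = new_volume
          return volume, thr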

  5. Improve threshold segmentation using features extraction to automatic lung delimitation.

    PubMed

    França, Cleunio; Vasconcelos, Germano; Diniz, Paula; Melo, Pedro; Diniz, Jéssica; Novaes, Magdala

    2013-01-01

    With the consolidation of PACS and RIS systems, the development of algorithms for tissue segmentation and disease detection has evolved intensely in recent years. These algorithms have advanced to improve their accuracy and specificity; however, there is still some way to go until these algorithms achieve satisfactory error rates and reduced processing times for use in daily diagnosis. The objective of this study is to propose an algorithm for lung segmentation in X-ray computed tomography images using feature extraction, such as centroid and orientation measures, to improve basic threshold segmentation. As a result we found an accuracy of 85.5%.

  6. Genetic Algorithm-Guided, Adaptive Model Order Reduction of Flexible Aircrafts

    NASA Technical Reports Server (NTRS)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter; Brenner, Martin J.

    2017-01-01

    This paper presents a methodology for automated model order reduction (MOR) of flexible aircrafts to construct linear parameter-varying (LPV) reduced order models (ROM) for aeroservoelasticity (ASE) analysis and control synthesis in broad flight parameter space. The novelty includes utilization of genetic algorithms (GAs) to automatically determine the states for reduction while minimizing the trial-and-error process and heuristics requirement to perform MOR; balanced truncation for unstable systems to achieve locally optimal realization of the full model; congruence transformation for "weak" fulfillment of state consistency across the entire flight parameter space; and ROM interpolation based on adaptive grid refinement to generate a globally functional LPV ASE ROM. The methodology is applied to the X-56A MUTT model currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that X-56A ROM with less than one-seventh the number of states relative to the original model is able to accurately predict system response among all input-output channels for pitch, roll, and ASE control at various flight conditions. The GA-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The adaptive refinement allows selective addition of the grid points in the parameter space where flight dynamics varies dramatically to enhance interpolation accuracy without over-burdening controller synthesis and onboard memory efforts downstream. The present MOR framework can be used by control engineers for robust ASE controller synthesis and novel vehicle design.

  7. A high speed implementation of the random decrement algorithm

    NASA Technical Reports Server (NTRS)

    Kiraly, L. J.

    1982-01-01

    The algorithm is useful for measuring net system damping levels in stochastic processes and for the development of equivalent linearized system response models. The algorithm works by summing together all subrecords which occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high-speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contribution of each data point to the random decrement signature is calculated only once and in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold crossing frequency of 5000 Hz can be processed and a stably averaged signature presented in real time.
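
    The summing procedure is compact to state offline: every time the record crosses the trigger level upward, the following fixed-length subrecord (2^n samples in the hardware version) is added into a running average. A minimal sketch:

      import numpy as np

      def random_decrement(x, threshold, n_samples=1024):
          """Random decrement signature: average of fixed-length subrecords
          starting at each upward crossing of the threshold level."""
          starts = np.where((x[:-1] < threshold) & (x[1:] >= threshold))[0] + 1
          starts = starts[starts + n_samples <= len(x)]
          if starts.size == 0:
              raise ValueError("no threshold crossings found")
          segs = np.stack([x[s:s + n_samples] for s in starts])
          return segs.mean(axis=0)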

  8. Threshold-based segmentation of fluorescent and chromogenic images of microglia, astrocytes and oligodendrocytes in FIJI.

    PubMed

    Healy, Sinead; McMahon, Jill; Owens, Peter; Dockery, Peter; FitzGerald, Una

    2018-02-01

    Image segmentation is often imperfect, particularly in complex image sets such as z-stack micrographs of slice cultures, and there is a need for sufficient detail on the parameters used in quantitative image analysis to allow independent repeatability and appraisal. For the first time, we have critically evaluated, quantified and validated the performance of different segmentation methodologies using z-stack images of ex vivo glial cells. The BioVoxxel toolbox plugin, available in FIJI, was used to measure the relative quality, accuracy, specificity and sensitivity of 16 global and 9 local automatic thresholding algorithms. Automatic thresholding yields improved binary representations of glial cells compared with the conventional user-chosen single-threshold approach for confocal z-stacks acquired from ex vivo slice cultures. The performance of threshold algorithms varies considerably in quality, specificity, accuracy and sensitivity, with entropy-based thresholds scoring highest for fluorescent staining. We have used the BioVoxxel toolbox to correctly and consistently select the best automated threshold algorithm to segment z-projected images of ex vivo glial cells for downstream digital image analysis and to define segmentation quality. The automated OLIG2 cell count was validated using stereology. As image segmentation and feature extraction can quite critically affect the performance of successive steps in the image analysis workflow, it is becoming increasingly necessary to consider the quality of digital segmentation methodologies. Here, we have applied, validated and extended an existing performance-check methodology in the BioVoxxel toolbox to z-projected images of ex vivo glial cells. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Adaptive spatial filtering improves speech reception in noise while preserving binaural cues.

    PubMed

    Bissmeyer, Susan R S; Goldsworthy, Raymond L

    2017-09-01

    Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise while not introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments. Results indicate that the algorithm also improved lateralization thresholds for the anechoic environment while not affecting lateralization thresholds for the reverberant environments. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving binaural cues used to lateralize sound.

  10. 3D GGO candidate extraction in lung CT images using multilevel thresholding on supervoxels

    NASA Astrophysics Data System (ADS)

    Huang, Shan; Liu, Xiabi; Han, Guanghui; Zhao, Xinming; Zhao, Yanfeng; Zhou, Chunwu

    2018-02-01

    The early detection of ground glass opacity (GGO) is of great importance since GGOs are more likely to be malignant than solid nodules. However, the detection of GGOs is a difficult task in lung cancer screening. This paper proposes a novel GGO candidate extraction method, which performs multilevel thresholding on supervoxels in 3D lung CT images. Firstly, we segment the lung parenchyma based on the Otsu algorithm. Secondly, voxels which are adjacent in 3D discrete space and share similar grayscale values are clustered into supervoxels; this procedure is used to enhance GGOs and reduce computational complexity. Thirdly, the Hessian matrix is used to emphasize focal GGO candidates. Lastly, an improved adaptive multilevel thresholding method is applied to the segmented clusters to extract GGO candidates. The proposed method was evaluated on a set of 19 lung CT scans containing 166 GGO lesions from the Lung CT Imaging Signs (LISS) database. The experimental results show that our proposed GGO candidate extraction method is effective, with a sensitivity of 100% and 26.3 false positives per scan (665 GGO candidates: 499 non-GGO regions and 166 GGO regions). It can handle both focal GGOs and diffuse GGOs.

  11. Adaptive thresholding image series from fluorescence confocal scanning laser microscope using orientation intensity profiles

    NASA Astrophysics Data System (ADS)

    Feng, Judy J.; Ip, Horace H.; Cheng, Shuk H.

    2004-05-01

    Many grey-level thresholding methods based on histograms or other statistical information about the image of interest, such as maximum entropy, have been proposed in the past. However, most methods based on statistical analysis of the images take little account of the morphological characteristics of the objects of interest, which can sometimes provide very important indications to help find the optimum threshold, especially for organisms with special texture morphologies such as vasculature or neural networks in medical imaging. In this paper, we propose a novel method for thresholding fluorescent vasculature image series recorded with a confocal scanning laser microscope. After extracting the basic orientation of the vessel slice inside a sub-region partitioned from the images, we analyze the intensity profiles perpendicular to the vessel orientation to obtain a reasonable initial threshold for each region. The threshold values of the regions near the one of interest, in both the x-y and optical directions, are then referenced to obtain the final thresholds of the region, which makes the whole stack of images look more continuous. The resulting images are characterized by suppression of both noise and non-interest tissues conglutinated to vessels, while improving vessel connectivity and edge definition. The value of the method for thresholding fluorescence images of biological objects is demonstrated by a comparison with the results of 3D vascular reconstruction.

  12. Comparison of 30-2 Standard and Fast programs of Swedish Interactive Threshold Algorithm of Humphrey Field Analyzer for perimetry in patients with intracranial tumors.

    PubMed

    Singh, Manav Deep; Jain, Kanika

    2017-11-01

    To find out whether 30-2 Swedish Interactive Threshold Algorithm (SITA) Fast is comparable to 30-2 SITA Standard as a tool for perimetry among patients with intracranial tumors. This was a prospective cross-sectional study involving 80 patients aged ≥18 years with imaging-proven intracranial tumors and visual acuity better than 20/60. The patients underwent multiple visual field examinations using the two algorithms until consistent and repeatable results were obtained. A total of 140 eyes of 80 patients were analyzed. Almost 60% of patients undergoing perimetry with SITA Standard required two or more sessions to obtain consistent results, whereas the same could be obtained in the first session itself in 81.42% of patients with SITA Fast. Of 140 eyes, 70 had recordable field defects and the rest had no defects as detected by either of the two algorithms. Mean deviation (MD) (P = 0.56), pattern standard deviation (PSD) (P = 0.22), visual field index (P = 0.83), and the number of depressed points at P < 5%, 2%, 1%, and 0.5% on MD and PSD probability plots showed no statistically significant difference between the two algorithms. The Bland-Altman test showed that considerable variability existed between the two algorithms. Perimetry performed with the SITA Standard and SITA Fast algorithms of the Humphrey Field Analyzer gives comparable results among patients with intracranial tumors. Being more time efficient and having a shorter learning curve, SITA Fast may be recommended as a standard test for the purpose of perimetry among these patients.

  13. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    PubMed

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a newer algorithm, delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes than DAS. To improve the resolution of DMAS, a beamformer is introduced that combines minimum variance (MV) adaptive beamforming with DMAS, called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation yields multiple terms representing a DAS algebra, and it is proposed to use the MV adaptive beamformer in place of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at a depth of 45 mm, MVB-DMAS yields about 31, 18, and 8 dB of sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative simulation results show that MVB-DMAS improves full-width-half-maximum by about 96%, 94%, and 45% and signal-to-noise ratio by about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at a depth of 33 mm in the experimental images, MVB-DMAS yields about 20 dB of sidelobe reduction in comparison with the other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
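
    To make the DAS/DMAS distinction concrete, here is a toy sketch of the two combining rules applied to one pixel's already-delayed aperture samples. It follows the standard DMAS formulation (pairwise products with a signed square root); the MV weighting of the paper is not reproduced, and all names and data are invented.

```python
# DAS vs. DMAS combining of time-aligned channel samples (toy sketch).
import numpy as np

def das(x):
    # Plain delay-and-sum: add the aligned samples.
    return x.sum()

def dmas(x):
    # Delay-multiply-and-sum: signed square roots of pairwise products,
    # which preserves dimensionality and suppresses uncorrelated noise.
    y = 0.0
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            p = x[i] * x[j]
            y += np.sign(p) * np.sqrt(abs(p))
    return y

x = np.array([0.9, 1.1, 1.0, 0.95])  # toy aligned samples for one pixel
print(das(x), dmas(x))
```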

  14. Appropriate threshold levels of cardiac beat-to-beat variation in semi-automatic analysis of equine ECG recordings.

    PubMed

    Flethøj, Mette; Kanters, Jørgen K; Pedersen, Philip J; Haugaard, Maria M; Carstensen, Helena; Olsen, Lisbeth H; Buhl, Rikke

    2016-11-28

    Although premature beats are a matter of concern in horses, the interpretation of equine ECG recordings is complicated by a lack of standardized analysis criteria and a limited knowledge of the normal beat-to-beat variation of equine cardiac rhythm. The purpose of this study was to determine the appropriate threshold levels of maximum acceptable deviation of RR intervals in equine ECG analysis, and to evaluate a novel two-step timing algorithm by quantifying the frequency of arrhythmias in a cohort of healthy adult endurance horses. Beat-to-beat variation differed considerably with heart rate (HR), and an adaptable model consisting of three different HR ranges with separate threshold levels of maximum acceptable RR deviation was consequently defined. For resting HRs <60 beats/min (bpm) the threshold level of RR deviation was set at 20%, for HRs in the intermediate range between 60 and 100 bpm the threshold was 10%, and for exercising HRs >100 bpm, the threshold level was 4%. Supraventricular premature beats represented the most prevalent arrhythmia category with varying frequencies in seven horses at rest (median 7, range 2-86) and six horses during exercise (median 2, range 1-24). Beat-to-beat variation of equine cardiac rhythm varies according to HR, and threshold levels in equine ECG analysis should be adjusted accordingly. Standardization of the analysis criteria will enable comparisons of studies and follow-up examinations of patients. A small number of supraventricular premature beats appears to be a normal finding in endurance horses. Further studies are required to validate the findings and determine the clinical significance of premature beats in horses.
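
    The heart-rate-adaptive limits reported above translate directly into a small decision rule. The sketch below transcribes the published threshold levels (20%, 10%, 4%); the function names and the deviation test around them are our own illustration, not the study's software.

```python
# HR-adaptive RR-deviation limits from the study (function names ours).
def rr_deviation_limit(heart_rate_bpm):
    """Maximum acceptable RR-interval deviation, as a fraction."""
    if heart_rate_bpm < 60:       # resting HR
        return 0.20
    elif heart_rate_bpm <= 100:   # intermediate HR
        return 0.10
    else:                         # exercising HR
        return 0.04

def is_deviant_beat(rr_current_s, rr_previous_s, heart_rate_bpm):
    deviation = abs(rr_current_s - rr_previous_s) / rr_previous_s
    return deviation > rr_deviation_limit(heart_rate_bpm)

print(is_deviant_beat(0.7, 1.0, 55))   # True: 30% deviation at resting HR
```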

  15. Adapting iterative algorithms for solving large sparse linear systems for efficient use on the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Kincaid, D. R.; Young, D. M.

    1984-01-01

    Adapting and designing mathematical software to achieve optimum performance on the CYBER 205 is discussed. Comments and observations are made in light of recent work done on modifying the ITPACK software package and on writing new software for vector supercomputers. The goal was to develop very efficient vector algorithms and software for solving large sparse linear systems using iterative methods.

  16. Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li

    2017-12-01

    To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic signal denoising method is proposed. Firstly, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; the function is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method filters the noise of chaotic signals well and recovers the intrinsic chaotic characteristics of the original signal. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
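
    For readers who want to experiment with the general idea, the sketch below performs level-dependent ("hierarchical") soft thresholding on an ordinary discrete wavelet decomposition. It is only a stand-in: the paper's method operates on the synchrosqueezed transform with a SURE-derived threshold function, whereas this example uses PyWavelets, the universal threshold, and a per-level noise estimate.

```python
# Level-dependent soft thresholding with a plain DWT (stand-in sketch).
import numpy as np
import pywt

def hierarchical_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    out = [coeffs[0]]                                 # keep the approximation
    for detail in coeffs[1:]:
        sigma = np.median(np.abs(detail)) / 0.6745    # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(len(detail)))  # per-level threshold
        out.append(pywt.threshold(detail, thr, mode="soft"))
    return pywt.waverec(out, wavelet)

t = np.linspace(0.0, 1.0, 1024)
noisy = np.sin(2 * np.pi * 12 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
denoised = hierarchical_denoise(noisy)
```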

  17. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition; the full potential of MPD++ may be utilized to produce considerable performance gains while extracting only slightly less energy than the classical algorithm.
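
    The core loop is short enough to sketch. Below is a toy matching pursuit with the kind of correlation threshold the abstract describes for pruning insignificant atoms; the dictionary shape, parameter values, and names are all illustrative assumptions, not the MPD++ code.

```python
# Toy matching pursuit with a correlation stopping threshold.
import numpy as np

def matching_pursuit(signal, dictionary, max_iters=50, corr_threshold=0.05):
    # dictionary: (n_atoms, n_samples), rows assumed unit-norm.
    residual = signal.astype(float).copy()
    atoms, weights = [], []
    for _ in range(max_iters):
        corr = dictionary @ residual            # cross-correlate with atoms
        k = int(np.argmax(np.abs(corr)))
        if abs(corr[k]) < corr_threshold:       # prune insignificant atoms
            break
        atoms.append(k)
        weights.append(corr[k])
        residual -= corr[k] * dictionary[k]     # subtract the best-fit atom
    return atoms, weights, residual
```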

  18. An Adaptive Mesh Algorithm: Mesh Structure and Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scannapieco, Anthony J.

    2016-06-21

    The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years, three types of AMR schemes have evolved: block, patch, and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists, at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally sparse.

  19. MO-DE-207A-12: Toward Patient-Specific 4DCT Reconstruction Using Adaptive Velocity Binning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, E.D.; Glide-Hurst, C.; Wayne State University, Detroit, MI

    2016-06-15

    Purpose: While 4DCT provides organ/tumor motion information, it often samples data over 10-20 breathing cycles. For patients presenting with compromised pulmonary function, breathing patterns can change over the acquisition time, potentially leading to tumor delineation discrepancies. This work introduces a novel adaptive velocity-modulated binning (AVB) 4DCT algorithm that modulates the reconstruction based on the respiratory waveform, yielding a patient-specific 4DCT solution. Methods: AVB was implemented in a research reconstruction configuration. After filtering the respiratory waveform, the algorithm examines data neighboring a phase reconstruction point, and the temporal gate is widened until the difference between the reconstruction point and the waveform exceeds a threshold value, defined as a percent difference between maximum and minimum waveform amplitude. The algorithm only affects reconstruction if the gate width exceeds a set minimum temporal width required for accurate reconstruction. A sensitivity experiment over threshold values (0.5, 1, 5, 10, and 12%) was conducted to examine the interplay between threshold, signal-to-noise ratio (SNR), and image sharpness for phantom and several patient 4DCT cases using ten-phase reconstructions. Individual phase reconstructions were examined. Subtraction images and regions of interest were compared to quantify changes in SNR. Results: AVB increased signal in reconstructed 4DCT slices for respiratory waveforms that met the prescribed criteria. For the end-exhale phases, where the respiratory velocity is low, patient data revealed that a threshold of 0.5% increased SNR in the AVB reconstructions. For intermediate breathing phases, threshold values had to exceed 10% to produce appreciable changes in CT intensity with AVB. AVB reconstructions exhibited appreciably higher SNR and reduced noise in regions of interest that were photon deprived, such as the liver. Conclusion: We demonstrated that patient
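
    The gate-widening rule lends itself to a compact sketch. The code below grows a temporal gate around a reconstruction point until the waveform amplitude deviates by more than a percentage of the waveform's amplitude range, and falls back to the single point if the gate is too narrow; the parameter values, names, and fallback behavior are our assumptions, not the research configuration's code.

```python
# Hedged sketch of threshold-based temporal gate widening.
import numpy as np

def adaptive_gate(waveform, center, threshold_pct, min_width=3):
    limit = threshold_pct / 100.0 * (waveform.max() - waveform.min())
    lo = hi = center
    # Widen while neighbors stay within `limit` of the center amplitude.
    while lo > 0 and abs(waveform[lo - 1] - waveform[center]) < limit:
        lo -= 1
    while hi < len(waveform) - 1 and abs(waveform[hi + 1] - waveform[center]) < limit:
        hi += 1
    # Only use the widened gate if it meets the minimum temporal width.
    return (lo, hi) if (hi - lo + 1) >= min_width else (center, center)
```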

  20. Fast and automatic algorithm for optic disc extraction in retinal images using principle-component-analysis-based preprocessing and curvelet transform.

    PubMed

    Shahbeig, Saleh; Pourghassem, Hossein

    2013-01-01

    Optic disc or optic nerve (ON) head extraction in retinal images has widespread applications in retinal disease diagnosis and human identification in biometric systems. This paper introduces a fast and automatic algorithm for detecting and extracting the ON region accurately from retinal images without the use of blood-vessel information. In this algorithm, to compensate for destructive changes in illumination and to enhance the contrast of the retinal images, we estimate the background illumination and apply an adaptive correction function to the curvelet transform coefficients of the retinal images; in other words, we eliminate the confounding factors and pave the way for extracting the ON region exactly. Then, we detect the ON region from the retinal images using morphology operators based on geodesic conversions, by applying a proper adaptive correction function to the reconstructed image's curvelet transform coefficients and a novel, powerful criterion. Finally, using local thresholding on the detected area of the retinal images, we extract the ON region. The proposed algorithm is evaluated on available images of the DRIVE and STARE databases. The experimental results indicate that the proposed algorithm obtains accuracy rates of 100% and 97.53% for ON extraction on the DRIVE and STARE databases, respectively.

  1. Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.

  2. Development and evaluation of a data-adaptive alerting algorithm for univariate temporal biosurveillance data.

    PubMed

    Elbert, Yevgeniy; Burkom, Howard S

    2009-11-20

    This paper discusses further advances in making robust predictions with the Holt-Winters forecasts for a variety of syndromic time series behaviors and introduces a control-chart detection approach based on these forecasts. Using three collections of time series data, we compare biosurveillance alerting methods with quantified measures of forecast agreement, signal sensitivity, and time-to-detect. The study presents practical rules for initialization and parameterization of biosurveillance time series. Several outbreak scenarios are used for detection comparison. We derive an alerting algorithm from forecasts using Holt-Winters-generalized smoothing for prospective application to daily syndromic time series. The derived algorithm is compared with simple control-chart adaptations and to more computationally intensive regression modeling methods. The comparisons are conducted on background data from both authentic and simulated data streams. Both types of background data include time series that vary widely by both mean value and cyclic or seasonal behavior. Plausible, simulated signals are added to the background data for detection performance testing at signal strengths calculated to be neither too easy nor too hard to separate the compared methods. Results show that both the sensitivity and the timeliness of the Holt-Winters-based algorithm proved to be comparable or superior to that of the more traditional prediction methods used for syndromic surveillance.
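
    A minimal version of the forecast-based control chart described here can be built with standard tooling. The sketch below fits additive Holt-Winters smoothing to a daily count series and flags days whose one-step-ahead residual exceeds a z-scaled limit; the library choice (statsmodels), seasonal period, and control limit are illustrative assumptions, not the paper's parameterization.

```python
# Holt-Winters forecast residuals as a control-chart alerting rule (sketch).
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def holt_winters_alerts(counts, season=7, z=3.0):
    fit = ExponentialSmoothing(
        counts, trend="add", seasonal="add", seasonal_periods=season
    ).fit()
    resid = counts - fit.fittedvalues       # one-step-ahead forecast errors
    limit = z * np.std(resid)
    return np.where(resid > limit)[0]       # indices of alerting days

rng = np.random.default_rng(3)
baseline = 50 + 10 * np.sin(2 * np.pi * np.arange(120) / 7)
counts = rng.poisson(baseline).astype(float)
counts[100:103] += 40                        # injected outbreak-like signal
print(holt_winters_alerts(counts))
```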

  3. An Ultra-Low Power Turning Angle Based Biomedical Signal Compression Engine with Adaptive Threshold Tuning.

    PubMed

    Zhou, Jun; Wang, Chao

    2017-08-06

    Intelligent sensing is drastically changing our everyday life, including healthcare, through biomedical signal monitoring, collection, and analytics. However, long-term healthcare monitoring generates a tremendous data volume and demands significant wireless transmission power, which imposes a big challenge for wearable healthcare sensors usually powered by batteries. Efficient compression engine design that reduces the wireless transmission data rate with ultra-low power consumption is essential for wearable miniaturized healthcare sensor systems. This paper presents an ultra-low power biomedical signal compression engine for healthcare data sensing and analytics in the era of big data and sensor intelligence. It extracts the feature points of the biomedical signal by window-based turning angle detection. The proposed approach has low complexity and thus low power consumption while achieving a large compression ratio (CR) and good quality of the reconstructed signal. A near-threshold design technique is adopted to further reduce the power consumption at the circuit level. Besides, the angle threshold for compression can be adaptively tuned according to the error between the original and reconstructed signals, to address the variation of signal characteristics from person to person or from channel to channel and meet the required signal quality with optimal CR. For demonstration, the proposed biomedical compression engine has been used and evaluated for ECG compression. It achieves an average CR of 71.08% and a percentage root-mean-square difference (PRD) of 5.87% while consuming only 39 nW. Compared to several state-of-the-art ECG compression engines, the proposed design has significantly lower power consumption while achieving similar CR and PRD, making it suitable for long-term wearable miniaturized sensor systems that sense and collect healthcare data for remote analytics.
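
    The turning-angle idea is easy to demonstrate in software: keep only samples where the waveform bends sharply, and reconstruct the rest by interpolation. The sketch below is a toy version with an adjustable angle threshold, the knob the engine tunes adaptively; the window handling, units, and names are our simplifications, not the engine's design.

```python
# Toy turning-angle feature extraction for signal compression.
import numpy as np

def turning_angle_compress(x, angle_threshold=0.2, dt=1.0):
    kept = [0]                                    # always keep the first sample
    for i in range(1, len(x) - 1):
        v1 = np.array([dt, x[i] - x[i - 1]])      # incoming segment
        v2 = np.array([dt, x[i + 1] - x[i]])      # outgoing segment
        cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        if np.arccos(np.clip(cosang, -1.0, 1.0)) > angle_threshold:
            kept.append(i)                        # sharp turn: a feature point
    kept.append(len(x) - 1)                       # always keep the last sample
    return np.array(kept)

t = np.linspace(0.0, 1.0, 400)
ecg_like = np.sin(2 * np.pi * 3 * t) ** 9         # toy spiky waveform
idx = turning_angle_compress(ecg_like, angle_threshold=0.1)
print(f"kept {len(idx)} of {len(ecg_like)} samples")
```

    Raising the angle threshold increases the compression ratio at the cost of reconstruction error, which is exactly the trade-off the adaptive tuning loop manages.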

  4. An Adaptable Power System with Software Control Algorithm

    NASA Technical Reports Server (NTRS)

    Castell, Karen; Bay, Mike; Hernandez-Pellerano, Amri; Ha, Kong

    1998-01-01

    A low cost, flexible and modular spacecraft power system design was developed in response to a call for an architecture that could accommodate multiple missions in the small to medium load range. Three upcoming satellites will use this design, with one launch date in 1999 and two in the year 2000. The design consists of modular hardware that can be scaled up or down, without additional cost, to suit missions in the 200 to 600 Watt orbital average load range. The design will be applied to satellite orbits that are circular, polar elliptical and a libration point orbit. Mission unique adaptations are accomplished in software and firmware. In designing this advanced, adaptable power system, the major goals were reduction in weight volume and cost. This power system design represents reductions in weight of 78 percent, volume of 86 percent and cost of 65 percent from previous comparable systems. The efforts to miniaturize the electronics without sacrificing performance has created streamlined power electronics with control functions residing in the system microprocessor. The power system design can handle any battery size up to 50 Amp-hour and any battery technology. The three current implementations will use both nickel cadmium and nickel hydrogen batteries ranging in size from 21 to 50 Amp-hours. Multiple batteries can be used by adding another battery module. Any solar cell technology can be used and various array layouts can be incorporated with no change in Power System Electronics (PSE) hardware. Other features of the design are the standardized interfaces between cards and subsystems and immunity to radiation effects up to 30 krad Total Ionizing Dose (TID) and 35 Mev/cm(exp 2)-kg for Single Event Effects (SEE). The control algorithm for the power system resides in a radiation-hardened microprocessor. A table driven software design allows for flexibility in mission specific requirements. By storing critical power system constants in memory, modifying the system

  5. Adaptable Binary Programs

    DTIC Science & Technology

    1994-04-01

    a variation of Ziv-Lempel compression [ZL77]. We found that using a standard compression algorithm rather than semantic compression allowed simplified ...mentation. In Proceedings of the Conference on Programming Language Design and Implementation, 1993. [ZL77] J. Ziv and A. Lempel. A universal algorithm ...required by adaptable binaries. Our ABS stores adaptable binary information using the conventional binary symbol table and compresses this data using

  6. A graph based algorithm for adaptable dynamic airspace configuration for NextGen

    NASA Astrophysics Data System (ADS)

    Savai, Mehernaz P.

    The National Airspace System (NAS) is a complicated large-scale aviation network, consisting of many static sectors wherein each sector is controlled by one or more controllers. The main purpose of the NAS is to enable safe and prompt air travel in the U.S. However, such a static configuration of sectors will not be able to handle the continued growth of air travel, which is projected to more than double current traffic by 2025. Under the initiative of the Next Generation Air Transportation System (NextGen), the main objective of Adaptable Dynamic Airspace Configuration (ADAC) is that the sectors should adapt to changing traffic so as to reduce the controller workload variance over time while increasing throughput. Changes in resectorization should be such that there is a minimal increase in the exchange of air traffic among controllers. The benefit of a new design (improvement in workload balance, etc.) should sufficiently exceed the transition cost in order to justify a change. This leads to the analysis of the concept of transition workload, which is the cost associated with a transition from one sectorization to another. Given two airspace configurations, a transition workload metric that considers the air traffic as well as the geometry of the airspace is proposed. A solution to reduce this transition workload is also discussed. The algorithm is specifically designed to be implemented for the Dynamic Airspace Configuration (DAC) Algorithm. A graph model which accurately represents the air route structure and air traffic in the NAS is used to formulate the airspace configuration problem. In addition, a multilevel graph partitioning algorithm is developed for Dynamic Airspace Configuration which partitions the graph model of airspace under user-defined constraints and hence provides the user more flexibility and control over the partitions. In terms of air traffic management, vertices represent airports and waypoints. Some of the major

  7. Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.

    PubMed

    Lee, Wen-Chung; Wu, Yun-Chun

    2016-01-01

    The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performance of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for their whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.

  8. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.

  9. Intelligent Use of CFAR Algorithms

    DTIC Science & Technology

    1993-05-01

    the reference windows can raise the threshold too high in many CFAR algorithms and result in masking of targets. GCMLD is a modification of CMLD that... (Interim report RL-TR-93-75, Kaman Sciences Corporation, May 1993; contract F30602-91-C-0017, covering Jan 92 - Sep 92.)
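
    For context on what these variants modify, a textbook cell-averaging CFAR (CA-CFAR) detector is sketched below: the threshold adapts to a local noise estimate taken from reference cells around the cell under test. This is the baseline technique, not the report's GCMLD/CMLD code, and all parameter values are illustrative.

```python
# Textbook cell-averaging CFAR sketch (baseline, not GCMLD/CMLD).
import numpy as np

def ca_cfar(power, num_ref=16, num_guard=2, pfa=1e-4):
    half = num_ref // 2
    # Scale factor giving the desired false-alarm rate for CA-CFAR.
    alpha = num_ref * (pfa ** (-1.0 / num_ref) - 1.0)
    detections = []
    for i in range(half + num_guard, len(power) - half - num_guard):
        lead = power[i - num_guard - half : i - num_guard]
        lag = power[i + num_guard + 1 : i + num_guard + 1 + half]
        noise = np.concatenate([lead, lag]).mean()   # local noise estimate
        if power[i] > alpha * noise:
            detections.append(i)
    return detections

rng = np.random.default_rng(4)
power = rng.exponential(1.0, size=500)
power[250] += 30.0                                   # injected target
print(ca_cfar(power))
```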

  10. An algorithm for simulating fracture of cohesive-frictional materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nukala, Phani K; Sampath, Rahul S; Barai, Pallab

    Fracture of disordered frictional granular materials is dominated by interfacial failure response that is characterized by de-cohesion followed by frictional sliding response. To capture such an interfacial failure response, we introduce a cohesive-friction random fuse model (CFRFM), wherein the cohesive response of the interface is represented by a linear stress-strain response until a failure threshold, which is then followed by a constant response at a threshold lower than the initial failure threshold to represent the interfacial frictional sliding mechanism. This paper presents an efficient algorithm for simulating fracture of such disordered frictional granular materials using the CFRFM. We note that, when applied to perfectly plastic disordered materials, our algorithm is both theoretically and numerically equivalent to the traditional tangent algorithm (Roux and Hansen 1992 J. Physique II 2 1007) used for such simulations. However, the algorithm is general and is capable of modeling discontinuous interfacial response. Our numerical simulations using the algorithm indicate that the local and global roughness exponents (ζ_loc and ζ, respectively) of the fracture surface are equal to each other, and the two-dimensional crack roughness exponent is estimated to be ζ_loc = ζ = 0.69 ± 0.03.

  11. Psychophysical Measurement of Rod and Cone Thresholds in Stargardt Disease with Full-Field Stimuli

    PubMed Central

    Collison, Frederick T.; Fishman, Gerald A.; McAnany, J. Jason; Zernant, Jana; Allikmets, Rando

    2014-01-01

    Purpose To investigate psychophysical thresholds in Stargardt disease with the full-field stimulus test (FST). Methods Visual acuity (VA), spectral-domain optical coherence tomography (SD-OCT), full-field electroretinogram (ERG), and FST measurements were made in one eye of 24 patients with Stargardt disease. Dark-adapted rod FST thresholds were measured with short-wavelength stimuli, and cone FST thresholds were obtained from the cone plateau phase of dark adaptation using long-wavelength stimuli. Correlation coefficients were calculated for FST thresholds versus macular thickness, VA and ERG amplitudes. Results Stargardt patient FST cone thresholds correlated significantly with VA, macular thickness, and ERG cone-response amplitudes (all P<0.01). The patients’ FST rod thresholds correlated with ERG rod-response amplitudes (P<0.01), but not macular thickness (P=0.05). All Stargardt disease patients with flecks confined to the macula and most of the patients with flecks extending outside of the macula had normal FST thresholds. All patients with extramacular atrophic changes had elevated FST cone thresholds and most had elevated FST rod thresholds. Conclusion FST rod and cone threshold elevation in Stargardt disease patients correlated well with measures of structure and function, as well as ophthalmoscopic retinal appearance. FST appears to be a useful tool for assessing rod and cone function in Stargardt disease. PMID:24695063

  12. The Random-Threshold Generalized Unfolding Model and Its Application of Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien

    2013-01-01

    The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…

  13. Real time algorithms for sharp wave ripple detection.

    PubMed

    Sethi, Ankit; Kemere, Caleb

    2014-01-01

    Neural activity during sharp wave ripples (SWR), short bursts of coordinated oscillatory activity in the CA1 region of the rodent hippocampus, is implicated in a variety of memory functions from consolidation to recall. Detection of these events in an algorithmic framework has thus far relied on simple thresholding techniques with heuristically derived parameters. This study is an investigation into testing and improving the current methods for detection of SWR events in neural recordings. We propose and profile methods to reduce latency in ripple detection. The proposed algorithms are tested on simulated ripple data. The findings show that simple real-time algorithms can improve upon existing power-thresholding methods and can detect ripple activity with latencies in the range of 10-20 ms.
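
    The baseline power-thresholding detector the paper builds on can be sketched in a few lines: band-pass the LFP in the ripple band, take the analytic envelope, and threshold at the mean plus a multiple of the standard deviation. Band edges, sampling rate, and the threshold multiplier below are typical values from the literature, not the paper's settings.

```python
# Baseline power-threshold ripple detector (typical parameters, not the paper's).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_ripples(lfp, fs=1500.0, band=(150.0, 250.0), k=3.0):
    nyq = fs / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, lfp)
    envelope = np.abs(hilbert(filtered))        # instantaneous ripple power
    threshold = envelope.mean() + k * envelope.std()
    return envelope > threshold                 # boolean mask of ripple samples
```

    Note that the zero-phase filtering and global statistics here are non-causal, which is precisely the kind of latency constraint the paper's real-time variants must avoid.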

  14. Novel medical image enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
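
    The alpha-trimmed mean filter that anchors the first algorithm is simple to state: sort each neighborhood, discard the extremes, and average the remainder. A 1D toy version is sketched below; the window size and trim count are arbitrary, and the paper's sharpening stage around the filter is not reproduced.

```python
# 1D alpha-trimmed mean filter (toy sketch of the building block).
import numpy as np

def alpha_trimmed_mean(signal, window=5, trim=2):
    # `trim` is the total number of extreme samples dropped per window.
    half = window // 2
    padded = np.pad(np.asarray(signal, dtype=float), half, mode="edge")
    out = np.empty(len(signal))
    for i in range(len(signal)):
        w = np.sort(padded[i : i + window])
        out[i] = w[trim // 2 : window - trim // 2].mean()
    return out

noisy = [1.0, 1.1, 9.0, 1.0, 0.9, 1.2, -7.0, 1.0]   # impulsive noise
print(alpha_trimmed_mean(noisy))
```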

  15. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.; Marino, J. T., Jr.

    1974-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.

  16. Threshold detection in an on-off binary communications channel with atmospheric scintillation

    NASA Technical Reports Server (NTRS)

    Webb, W. E.

    1975-01-01

    The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for non-optimum threshold detection systems were also investigated.
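
    These two companion reports concern the same underlying problem, which admits a small worked example: with Poisson photon counts, the optimum count threshold is the one minimizing the average of the miss and false-alarm probabilities. The sketch below ignores scintillation (the reports' central complication) and uses invented rates, so it illustrates only the turbulence-free baseline.

```python
# Optimum count threshold for on-off keying with Poisson statistics.
from scipy.stats import poisson

def optimal_count_threshold(rate_on, rate_off, max_count=200):
    best_t, best_pe = 0, 1.0
    for t in range(max_count):
        # Bit-error probability for equally likely symbols:
        # miss = P(count <= t | on), false alarm = P(count > t | off).
        pe = 0.5 * poisson.cdf(t, rate_on) + 0.5 * poisson.sf(t, rate_off)
        if pe < best_pe:
            best_t, best_pe = t, pe
    return best_t, best_pe

print(optimal_count_threshold(rate_on=50.0, rate_off=5.0))
```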

  17. Woofer-tweeter adaptive optics scanning laser ophthalmoscopic imaging based on Lagrange-multiplier damped least-squares algorithm.

    PubMed

    Zou, Weiyao; Qi, Xiaofeng; Burns, Stephen A

    2011-07-01

    We implemented a Lagrange-multiplier (LM)-based damped least-squares (DLS) control algorithm in a woofer-tweeter dual deformable-mirror (DM) adaptive optics scanning laser ophthalmoscope (AOSLO). The algorithm uses data from a single Shack-Hartmann wavefront sensor to simultaneously correct large-amplitude low-order aberrations by a woofer DM and small-amplitude higher-order aberrations by a tweeter DM. We measured the in vivo performance of high resolution retinal imaging with the dual DM AOSLO. We compared the simultaneous LM-based DLS dual DM controller with both single DM controller, and a successive dual DM controller. We evaluated performance using both wavefront (RMS) and image quality metrics including brightness and power spectrum. The simultaneous LM-based dual DM AO can consistently provide near diffraction-limited in vivo routine imaging of human retina.
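
    The damped least-squares core of such a controller reduces to a regularized normal-equation solve. The sketch below shows only that generic step, mapping sensor measurements to actuator commands through a damped pseudo-inverse; the paper's Lagrange-multiplier weighting across two mirrors is more elaborate, and the matrices here are random stand-ins.

```python
# Generic damped least-squares (DLS) control step (stand-in sketch).
import numpy as np

def dls_command(A, b, damping=0.1):
    """Solve (A^T A + damping*I) x = A^T b for actuator commands x."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + damping * np.eye(n), A.T @ b)

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 12))     # toy influence matrix: commands -> slopes
b = rng.normal(size=40)           # toy measured wavefront slopes
x = dls_command(A, b)
print(x.shape)
```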

  18. The successively temporal error concealment algorithm using error-adaptive block matching principle

    NASA Astrophysics Data System (ADS)

    Lee, Yu-Hsuan; Wu, Tsai-Hsing; Chen, Chao-Chyun

    2014-09-01

    Generally, temporal error concealment (TEC) adopts the blocks around the corrupted block (CB) as the search pattern to find the best-match block in the previous frame. Once the CB is recovered, it is referred to as the recovered block (RB). Although the RB can serve as the search pattern to find the best-match block of another CB, the RB is not identical to its original block (OB), and the error between the RB and its OB limits the performance of TEC. The successively temporal error concealment (STEC) algorithm is proposed to alleviate this error. The STEC procedure consists of tier-1 and tier-2. Tier-1 divides a corrupted macroblock into four corrupted 8 × 8 blocks and generates a recovering order for them. The corrupted 8 × 8 block in first place of the recovering order is recovered in tier-1, and the remaining 8 × 8 CBs are recovered in tier-2 along the recovering order. In tier-2, the error-adaptive block matching principle (EA-BMP) is proposed for using the RB as the search pattern to recover the remaining corrupted 8 × 8 blocks. The proposed STEC outperforms sophisticated TEC algorithms by at least 0.3 dB in average PSNR at a packet error rate of 20%.
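
    All TEC variants rest on block matching of the kind sketched below: exhaustively search a window in the previous frame for the block with minimum sum of absolute differences (SAD) relative to the search pattern. The error-adaptive weighting of EA-BMP is not reproduced here; the window size and names are our choices.

```python
# Minimal SAD block matching over a search window in the previous frame.
import numpy as np

def best_match(prev_frame, pattern, top_left, search=8):
    h, w = pattern.shape
    y0, x0 = top_left
    best_sad, best_pos = np.inf, top_left
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > prev_frame.shape[0] or x + w > prev_frame.shape[1]:
                continue
            cand = prev_frame[y:y + h, x:x + w].astype(float)
            sad = np.abs(cand - pattern).sum()   # sum of absolute differences
            if sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos, best_sad
```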

  19. Rainfall Estimation over the Nile Basin using an Adapted Version of the SCaMPR Algorithm

    NASA Astrophysics Data System (ADS)

    Habib, E. H.; Kuligowski, R. J.; Elshamy, M. E.; Ali, M. A.; Haile, A.; Amin, D.; Eldin, A.

    2011-12-01

    Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). This study reports on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application over the Nile Basin. The algorithm uses a set of rainfall predictors from multi-spectral Infrared cloud top observations and self-calibrates them to a set of predictands from Microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels recently available to NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as SSM/I, SSMIS, AMSU, AMSR-E, and TMI. The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration with continuous updates of coefficients with newly coming MW rain rates, and calibration using static coefficients that are derived from IR-MW data from past observations. We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms (e.g., 'Tropical Rainfall Measuring Mission (TRMM) and other sources' (TRMM-3B42) product, and the National Oceanographic and Atmospheric Administration Climate Prediction Center (NOAA
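
    The two-step structure described here (rain/no-rain discrimination, then rain-rate regression) can be mocked up with standard tools. The sketch below uses scikit-learn linear discriminant analysis and ordinary least squares on synthetic predictors; SCaMPR's actual predictors, calibration windows, and stepwise selection are not reproduced.

```python
# Two-step rain retrieval mock-up: LDA for rain/no-rain, then regression.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))                   # synthetic IR-derived predictors
raining = X[:, 0] + 0.5 * X[:, 1] > 0.0         # synthetic MW rain/no-rain truth
rate = np.where(raining, np.exp(X[:, 2]), 0.0)  # synthetic MW rain rates

clf = LinearDiscriminantAnalysis().fit(X, raining)
reg = LinearRegression().fit(X[raining], rate[raining])

is_rain = clf.predict(X)                            # step 1: rain/no-rain separation
rain_rate = np.where(is_rain, reg.predict(X), 0.0)  # step 2: rate estimation
```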

  20. Absolute auditory threshold: testing the absolute.

    PubMed

    Heil, Peter; Matysiak, Artur

    2017-11-02

    The mechanisms underlying the detection of sounds in quiet, one of the simplest tasks for auditory systems, are debated. Several models proposed to explain the threshold for sounds in quiet and its dependence on sound parameters include a minimum sound intensity ('hard threshold'), below which sound has no effect on the ear. Also, many models are based on the assumption that threshold is mediated by integration of a neural response proportional to sound intensity. Here, we test these ideas. Using an adaptive forced choice procedure, we obtained thresholds of 95 normal-hearing human ears for 18 tones (3.125 kHz carrier) in quiet, each with a different temporal amplitude envelope. Grand-mean thresholds and standard deviations were well described by a probabilistic model according to which sensory events are generated by a Poisson point process with a low rate in the absence, and higher, time-varying rates in the presence, of stimulation. The subject actively evaluates the process and bases the decision on the number of events observed. The sound-driven rate of events is proportional to the temporal amplitude envelope of the bandpass-filtered sound raised to an exponent. We find no evidence for a hard threshold: When the model is extended to include such a threshold, the fit does not improve. Furthermore, we find an exponent of 3, consistent with our previous studies and further challenging models that are based on the assumption of the integration of a neural response that, at threshold sound levels, is directly proportional to sound amplitude or intensity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.