Science.gov

Sample records for adaptive threshold methods

  1. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  2. Developing Bayesian adaptive methods for estimating sensitivity thresholds (d') in Yes-No and forced-choice tasks.

    PubMed

    Lesmes, Luis A; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A; Albright, Thomas D

    2015-01-01

    Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold, the signal intensity corresponding to a pre-defined sensitivity level (d' = 1), in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks, namely (1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection, the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes fewer) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10-0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods.

  3. Developing Bayesian adaptive methods for estimating sensitivity thresholds (d′) in Yes-No and forced-choice tasks

    PubMed Central

    Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.

    2015-01-01

    Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes fewer) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798

  4. An adaptive threshold method for improving astrometry of space debris CCD images

    NASA Astrophysics Data System (ADS)

    Sun, Rong-yu; Zhao, Chang-yin

    2014-06-01

    Optical surveys are a main technique for observing space debris, and precisely measuring the positions of debris objects is of great importance. Owing to several factors, e.g. the viewing angle of the object relative to the observer and the shape and attitude of the object, the observed characteristics of low-Earth-orbit space debris vary distinctly. In optical CCD images the size and brightness of observed objects vary, so it is difficult to choose the threshold used for centroid measurement and precise astrometry. Traditionally, a constant, empirically chosen threshold is used in data reduction, which is clearly unsuitable for space debris. Here we offer a solution for determining the threshold. Our method assumes that the PSF (point spread function) is Gaussian and estimates the signal flux by a direct two-dimensional Gaussian fit; a cubic spline interpolation is then performed to divide each initial pixel into several sub-pixels; finally, the threshold is determined from the estimated signal flux, and the sub-pixels above the threshold are selected to estimate the centroid. A trial observation of the fast-spinning satellite Ajisai was made, and the CCD frames obtained were used to test our algorithm. The calibration precision for various thresholds is obtained by comparing the observed equatorial positions with reference positions derived from the precise ephemeris of the satellite. The results indicate that our method reduces the total measurement error and works effectively in improving the centering precision of space debris images.
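
    A minimal Python sketch of the pipeline this abstract describes: fit a 2-D Gaussian PSF to estimate the signal flux, resample the pixel grid with a cubic spline, threshold the sub-pixels, and take their centre of gravity. The abstract does not spell out how the flux estimate maps to a threshold, so the fraction `frac` of the fitted amplitude above the fitted background is an illustrative assumption.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.interpolate import RectBivariateSpline

    def gauss2d(xy, A, x0, y0, sx, sy, b):
        x, y = xy
        return (A * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                             + (y - y0) ** 2 / (2 * sy ** 2))) + b).ravel()

    def subpixel_centroid(img, subdiv=4, frac=0.1):
        ny, nx = img.shape
        y, x = np.mgrid[0:ny, 0:nx]
        p0 = (img.max() - np.median(img), nx / 2, ny / 2, 2.0, 2.0, np.median(img))
        (A, x0, y0, sx, sy, b), _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
        spline = RectBivariateSpline(np.arange(ny), np.arange(nx), img)  # cubic by default
        yf = np.linspace(0, ny - 1, ny * subdiv)
        xf = np.linspace(0, nx - 1, nx * subdiv)
        fine = spline(yf, xf)                               # sub-pixel image
        w = np.where(fine > b + frac * A, fine - b, 0.0)    # flux-based threshold
        return (w * xf[None, :]).sum() / w.sum(), (w * yf[:, None]).sum() / w.sum()
    ```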

  5. Adaptive thresholding of digital subtraction angiography images

    NASA Astrophysics Data System (ADS)

    Sang, Nong; Li, Heng; Peng, Weixue; Zhang, Tianxu

    2005-10-01

    In clinical practice, digital subtraction angiography (DSA) is a powerful technique for the visualization of blood vessels in the human body. Blood vessel segmentation is a central problem in 3D vascular reconstruction. In this paper, we propose a new adaptive thresholding method for the segmentation of DSA images. Each pixel of a DSA image is declared to be a vessel or background point according to a threshold and a few local characteristic limits derived from information in the pixel's neighborhood window. The size of the neighborhood window is set according to a priori knowledge of the diameter of the vessels, to ensure that each window definitely contains background. Experiments on cerebral DSA images show that the proposed method yields better results than global thresholding methods and several other local thresholding methods.
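
    A minimal sketch of the windowed vessel/background decision described above, assuming (as is typical for DSA display) that vessels are darker than the local background; the window is sized from the expected maximum vessel diameter so that it always contains some background. The constant k and the diameter are illustrative, not the paper's values.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def segment_vessels(img, max_vessel_diameter_px=15, k=1.5):
        img = img.astype(float)
        w = 2 * max_vessel_diameter_px + 1    # window surely spans some background
        m = uniform_filter(img, w)            # local mean
        s = np.sqrt(np.maximum(uniform_filter(img ** 2, w) - m ** 2, 0.0))
        return img < m - k * s                # vessel pixels sit below local statistics
    ```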

  6. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3’-Diaminobenzidine&Haematoxylin

    PubMed Central

    2013-01-01

    This paper describes a comparative study of several segmentation methods applied to digital images of follicular lymphoma cancer tissue sections. The sensitivity, specificity, and other parameters of the following adaptive threshold segmentation methods are calculated: the Niblack, Sauvola, White, Bernsen, Yasuda, and Palumbo methods. The methods are applied to three types of images constructed by extracting the brown colour information from artificial images synthesized from counterpart experimentally captured images. The paper demonstrates the usefulness of the microscopic image synthesis method in evaluating and comparing image processing results. A thorough analysis of this broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution, and (3) the 'brown component' extracted from RGB allows the selection of pairs of method and image type for which a given method is most efficient under various criteria, e.g. accuracy and precision in area detection, or accuracy in the number of objects detected. The comparison shows that the White, Bernsen, and Sauvola methods give better results than the remaining methods for all types of monochromatic images. Taken overall, the three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively. The best results, however, are achieved for the monochromatic image whose intensity represents the brown colour map constructed by the colour deconvolution algorithm. The specificity of both the Bernsen and the White methods is 1, with sensitivities of 0.74 for White and 0.91 for Bernsen, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, objects selected by the Sauvola method are segmented without…
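
    For reference, minimal sketches of three of the compared local-threshold rules in their standard textbook forms; the window size w and the constants k and R are conventional illustrative values, not the settings used in the study.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter, minimum_filter, maximum_filter

    def local_mean_std(img, w):
        img = img.astype(float)
        m = uniform_filter(img, w)
        s2 = uniform_filter(img ** 2, w) - m ** 2
        return m, np.sqrt(np.maximum(s2, 0.0))

    def niblack_threshold(img, w=25, k=-0.2):
        m, s = local_mean_std(img, w)
        return m + k * s                                  # T = m + k*s

    def sauvola_threshold(img, w=25, k=0.5, R=128.0):
        m, s = local_mean_std(img, w)
        return m * (1.0 + k * (s / R - 1.0))              # T = m*(1 + k*(s/R - 1))

    def bernsen_threshold(img, w=25):
        lo = minimum_filter(img, w).astype(float)
        hi = maximum_filter(img, w).astype(float)
        return (lo + hi) / 2.0            # local midrange (contrast check omitted)

    # dark (stained) objects lie below the local threshold surface:
    # binary = img < sauvola_threshold(img)
    ```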

  7. Improved visual background extractor using an adaptive distance threshold

    NASA Astrophysics Data System (ADS)

    Han, Guang; Wang, Jinkuan; Cai, Xi

    2014-11-01

    Camouflage is a challenging issue in moving object detection. Even the recent and advanced background subtraction technique, visual background extractor (ViBe), cannot effectively deal with it. To better handle camouflage according to the perception characteristics of the human visual system (HVS) in terms of minimum change of intensity under a certain background illumination, we propose an improved ViBe method using an adaptive distance threshold, named IViBe for short. Different from the original ViBe using a fixed distance threshold for background matching, our approach adaptively sets a distance threshold for each background sample based on its intensity. Through analyzing the performance of the HVS in discriminating intensity changes, we determine a reasonable ratio between the intensity of a background sample and its corresponding distance threshold. We also analyze the impacts of our adaptive threshold together with an update mechanism on detection results. Experimental results demonstrate that our method outperforms ViBe even when the foreground and background share similar intensities. Furthermore, in a scenario where foreground objects are motionless for several frames, our IViBe not only reduces the initial false negatives, but also suppresses the diffusion of misclassification caused by those false negatives serving as erroneous background seeds, and hence shows an improved performance compared to ViBe.
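
    A minimal sketch of the core change the abstract describes relative to ViBe: each background sample gets its own matching radius, proportional to the sample's intensity (a Weber-law-like rule from the HVS analysis). The ratio, floor, and required match count are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    def matches_background(pixel, samples, ratio=0.07, r_min=10.0, min_matches=2):
        # samples: intensities of the stored background samples for this pixel
        radii = np.maximum(ratio * samples, r_min)   # per-sample adaptive radius
        return np.count_nonzero(np.abs(pixel - samples) < radii) >= min_matches
    ```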

  8. An Adaptive Threshold in Mammalian Neocortical Evolution

    PubMed Central

    Kalinka, Alex T.; Tomancak, Pavel; Huttner, Wieland B.

    2014-01-01

    Expansion of the neocortex is a hallmark of human evolution. However, determining which adaptive mechanisms facilitated its expansion remains an open question. Here we show, using the gyrencephaly index (GI) and other physiological and life-history data for 102 mammalian species, that gyrencephaly is an ancestral mammalian trait. We find that variation in GI does not evolve linearly across species, but that mammals constitute two principal groups above and below a GI threshold value of 1.5, approximately equal to 10^9 neurons, which may be characterized by distinct constellations of physiological and life-history traits. By integrating data on neurogenic period, neuroepithelial founder pool size, cell-cycle length, progenitor-type abundances, and cortical neuron number into discrete mathematical models, we identify symmetric proliferative divisions of basal progenitors in the subventricular zone of the developing neocortex as evolutionarily necessary for generating a 14-fold increase in daily prenatal neuron production, traversal of the GI threshold, and thus establishment of two principal groups. We conclude that, despite considerable neuroanatomical differences, changes in the length of the neurogenic period alone, rather than any novel neurogenic progenitor lineage, are sufficient to explain differences in neuron number and neocortical size between species within the same principal group. PMID:25405475

  9. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding

    PubMed Central

    Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-01-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent of whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states, i.e. decoding from different states is less state-dependent in the adaptive threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information. PMID:27304526

  10. Adaptive thresholding for reliable topological inference in single subject fMRI analysis.

    PubMed

    Gorgolewski, Krzysztof J; Storkey, Amos J; Bastin, Mark E; Pernet, Cyril R

    2012-01-01

    Single subject fMRI has proved to be a useful tool for mapping functional areas in clinical procedures such as tumor resection. Using fMRI data, clinicians assess the risk, plan and execute such procedures based on thresholded statistical maps. However, because current thresholding methods were developed mainly in the context of cognitive neuroscience group studies, most single subject fMRI maps are thresholded manually to satisfy specific criteria related to single subject analyses. Here, we propose a new adaptive thresholding method which combines Gamma-Gaussian mixture modeling with topological thresholding to improve cluster delineation. In a series of simulations we show that by adapting to the signal and noise properties, the new method performs well in terms of the total number of errors but also in terms of the trade-off between false-negative and false-positive cluster error rates. Similarly, simulations show that adaptive thresholding performs better than fixed thresholding in terms of over- and underestimation of the true activation border (i.e., higher spatial accuracy). Finally, through simulations and a motor test-retest study on 10 volunteer subjects, we show that adaptive thresholding improves reliability, mainly by accounting for the global signal variance. This in turn increases the likelihood that the true activation pattern can be determined, offering an automatic yet flexible way to threshold single subject fMRI maps. PMID:22936908

  11. Methods for automatic trigger threshold adjustment

    DOEpatents

    Welch, Benjamin J; Partridge, Michael E

    2014-03-18

    Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time-based or counter-based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
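
    A minimal sketch of the patent's idea, under illustrative names: the thresholds ride on a periodically re-measured quiescent level, and a qualification-width counter must fill before a trigger fires.

    ```python
    class DriftCompensatedTrigger:
        def __init__(self, high_offset, low_offset, qual_width):
            self.high_offset = high_offset   # offsets relative to quiescent level
            self.low_offset = low_offset
            self.qual_width = qual_width     # required consecutive criterion hits
            self.quiescent = 0.0
            self._hits = 0

        def rebaseline(self, quiet_samples):
            # re-measure the quiescent level; both thresholds shift with the drift
            self.quiescent = sum(quiet_samples) / len(quiet_samples)

        def update(self, sample):
            hi = self.quiescent + self.high_offset
            lo = self.quiescent + self.low_offset
            self._hits = self._hits + 1 if (sample > hi or sample < lo) else 0
            return self._hits >= self.qual_width   # True -> start recording event
    ```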

  12. Methods for threshold determination in multiplexed assays

    SciTech Connect

    Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J

    2014-06-24

    Methods for determination of threshold values of signatures comprised in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established and a threshold for that signature is determined as a point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results together with a method for determination of a desired limit of detection of a signature in an assay are also described.
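
    A minimal sketch of the threshold rule, with the empirical distribution of known-negative samples standing in for the fitted probability density: the threshold is the point where the false-positive-rate curve crosses the chosen criterion, i.e. the (1 - criterion) quantile of the negatives.

    ```python
    import numpy as np

    def threshold_for_fpr(negative_values, fpr_criterion=0.01):
        # the FPR curve FPR(t) = P(negative > t) falls to the criterion exactly
        # at the (1 - criterion) quantile of the negative-sample distribution
        return np.quantile(negative_values, 1.0 - fpr_criterion)

    rng = np.random.default_rng(0)
    negatives = rng.normal(100.0, 15.0, size=5000)   # simulated negative samples
    print(threshold_for_fpr(negatives))              # ~ mean + 2.33 * sigma here
    ```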

  13. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    NASA Astrophysics Data System (ADS)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.

  14. Adaptive threshold harvesting and the suppression of transients.

    PubMed

    Segura, Juan; Hilker, Frank M; Franco, Daniel

    2016-04-21

    Fluctuations in population size are in many cases undesirable, as they can induce outbreaks and extinctions or impede the optimal management of populations. We propose the strategy of adaptive threshold harvesting (ATH) to control fluctuations in population size. In this strategy, the population is harvested whenever population size has grown beyond a certain proportion in comparison to the previous generation. Taking such population increases into account, ATH intervenes also at smaller population sizes than the strategy of threshold harvesting. Moreover, ATH is the harvesting version of adaptive limiter control (ALC) that has recently been shown to stabilize population oscillations in both experiments and theoretical studies. We find that ATH has similar stabilization properties as ALC and thus offers itself as a harvesting alternative for the control of pests, exploitation of biological resources, or when restocking interventions required from ALC are unfeasible. We present numerical simulations of ATH to illustrate its performance in the presence of noise, lattice effect, and Allee effect. In addition, we propose an adjustment to both ATH and ALC that restricts interventions when control seems unnecessary, i.e. when population size is too small or too large, respectively. This adjustment cancels prolonged transients. PMID:26854876
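
    A minimal sketch of ATH on an illustrative population model: whenever the next generation exceeds the previous one by more than a chosen proportion, the excess is harvested. The Ricker map and all parameter values are stand-ins, not the paper's models.

    ```python
    import numpy as np

    def ricker(x, r=3.0):
        return x * np.exp(r * (1.0 - x))

    def simulate_ath(x0=0.5, c=0.2, steps=100):
        xs = [x0]
        for _ in range(steps):
            grown = ricker(xs[-1])
            cap = (1.0 + c) * xs[-1]       # adaptive threshold from last generation
            xs.append(min(grown, cap))     # harvest anything above the cap
        return xs

    print(simulate_ath()[-5:])             # last few generations under ATH
    ```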

  15. Adaptive Algebraic Multigrid Methods

    SciTech Connect

    Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J

    2004-04-09

    Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.

  16. Baseline Adaptive Wavelet Thresholding Technique for sEMG Denoising

    NASA Astrophysics Data System (ADS)

    Bartolomeo, L.; Zecca, M.; Sessa, S.; Lin, Z.; Mukaeda, Y.; Ishii, H.; Takanishi, Atsuo

    2011-06-01

    The surface electromyography (sEMG) signal is affected by different sources of noise: current technology is considerably robust to power-line interference and cable-motion artifacts, but there are still many limitations with baseline and movement-artifact noise. In particular, these noise sources have frequency spectra that overlap the low-frequency components of the sEMG spectrum; a standard all-bandwidth filter could therefore remove important information. Wavelet denoising has been demonstrated to be a powerful way of removing white Gaussian noise from biological signals. In this paper we introduce a new technique for denoising the sEMG signal: using the baseline of the signal recorded before the task, we estimate the thresholds to apply in the wavelet thresholding procedure. Experiments were performed on ten healthy subjects, with electrodes placed on the Extensor Carpi Ulnaris and Triceps Brachii of the right arm, while the subjects performed a flexion and extension of the right wrist. An Inertial Measurement Unit, developed in our group, was used to recognize the movements of the hand, in order to segment the exercise and the pre-task baseline. Finally, we show better performance of the proposed method, compared with the standard Donoho technique, in terms of noise cancellation and signal distortion, quantified by a newly suggested indicator of denoising quality.
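
    A minimal sketch of the baseline-driven idea, assuming PyWavelets and a universal-threshold rule per detail level estimated from the pre-task baseline; the wavelet, decomposition depth, and exact threshold rule are illustrative, not the authors' settings.

    ```python
    import numpy as np
    import pywt

    def baseline_wavelet_denoise(signal, baseline, wavelet="db4", level=5):
        base_coeffs = pywt.wavedec(baseline, wavelet, level=level)
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        denoised = [coeffs[0]]                      # keep approximation untouched
        for c_sig, c_base in zip(coeffs[1:], base_coeffs[1:]):
            # per-level universal threshold, scaled by the baseline's spread
            thr = np.sqrt(2.0 * np.log(len(c_base))) * np.std(c_base)
            denoised.append(pywt.threshold(c_sig, thr, mode="soft"))
        return pywt.waverec(denoised, wavelet)[: len(signal)]
    ```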

  17. Adaptations to training at the individual anaerobic threshold.

    PubMed

    Keith, S P; Jacobs, I; McLellan, T M

    1992-01-01

    The individual anaerobic threshold (Th(an)) is the highest metabolic rate at which blood lactate concentrations can be maintained at a steady-state during prolonged exercise. The purpose of this study was to test the hypothesis that training at the Th(an) would cause a greater change in indicators of training adaptation than would training "around" the Th(an). Three groups of subjects were evaluated before, and again after 4 and 8 weeks of training: a control group, a group which trained continuously for 30 min at the Th(an) intensity (SS), and a group (NSS) which divided the 30 min of training into 7.5-min blocks at intensities which alternated between being below the Th(an) [Th(an) -30% of the difference between Th(an) and maximal oxygen consumption (VO2max)] and above the Th(an) (Th(an) +30% of the difference between Th(an) and VO2max). The VO2max increased significantly from 4.06 to 4.27 l.min-1 in SS and from 3.89 to 4.06 l.min-1 in NSS. The power output (W) at Th(an) increased from 70.5 to 79.8% VO2max in SS and from 71.1 to 80.7% VO2max in NSS. The magnitude of change in VO2max, W at Th(an), % VO2max at Th(an) and in exercise time to exhaustion at the pretraining Th(an) was similar in both trained groups. Vastus lateralis citrate synthase and 3-hydroxyacyl-CoA-dehydrogenase activities increased to the same extent in both trained groups. While all of these training-induced adaptations were statistically significant (P < 0.05), there were no significant changes in any of these variables for the control subjects.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:1425631

  18. Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo

    PubMed Central

    Fontaine, Bertrand; Peña, José Luis; Brette, Romain

    2014-01-01

    Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in the spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397

  19. Research of adaptive threshold edge detection algorithm based on statistics Canny operator

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Wang, Huaisuo; Huang, Hua

    2015-12-01

    The traditional Canny operator cannot determine an optimal threshold across different scenes. Building on this observation, an improved Canny edge detection algorithm based on an adaptive threshold is proposed. Experimental results on test images indicate that the improved algorithm selects a reasonable threshold and achieves better accuracy and precision in edge detection.
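
    The abstract does not state which statistics drive the thresholds; a widely used adaptive rule in the same spirit derives Canny's two thresholds from the image median, sketched here with OpenCV. The 0.33 spread is a conventional illustrative choice, not the paper's.

    ```python
    import cv2
    import numpy as np

    def auto_canny(img_gray, sigma=0.33):
        v = float(np.median(img_gray))            # scene-adaptive statistic
        lower = int(max(0, (1.0 - sigma) * v))
        upper = int(min(255, (1.0 + sigma) * v))
        return cv2.Canny(img_gray, lower, upper)
    ```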

  1. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, these methods do not give good image quality, since their thresholds cannot modify and remove many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
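
    For context, a minimal 1-D sketch of the NeighShrink-style rule these methods build on: each detail coefficient is scaled by max(0, 1 - lambda^2 / S^2), where S^2 is the energy of its neighbours in a small window and lambda is Donoho's universal threshold. The 3-coefficient window is an illustrative choice; image denoising applies the same rule over 2-D subband windows.

    ```python
    import numpy as np

    def neighshrink(detail, noise_sigma, win=3):
        n = len(detail)
        lam2 = 2.0 * noise_sigma ** 2 * np.log(n)   # universal threshold, squared
        out = np.empty_like(detail)
        half = win // 2
        for i in range(n):
            s2 = np.sum(detail[max(0, i - half): i + half + 1] ** 2)
            out[i] = detail[i] * max(0.0, 1.0 - lam2 / s2) if s2 > 0 else 0.0
        return out
    ```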

  2. An adaptive threshold based image processing technique for improved glaucoma detection and classification.

    PubMed

    Issac, Ashish; Partha Sarathi, M; Dutta, Malay Kishore

    2015-11-01

    Glaucoma is an optic neuropathy and one of the main causes of permanent blindness worldwide. This paper presents an automatic image-processing-based method for detecting glaucoma from digital fundus images. In the proposed work, discriminatory parameters of glaucoma infection, such as the cup-to-disc ratio (CDR), neuro-retinal rim (NRR) area, and blood vessels in different regions of the optic disc, are used as features and fed as inputs to learning algorithms for glaucoma diagnosis. These features, which change discriminatively with the occurrence of glaucoma, are strategically used for training the classifiers to improve the accuracy of identification. The segmentation of the optic disc and cup is based on an adaptive threshold of the pixel intensities lying in the optic nerve head region. Unlike existing methods, the proposed algorithm derives its adaptive threshold from local features of the fundus image, making the segmentation of the optic cup and disc invariant to image quality and noise content, which may find wider acceptability. The experimental results indicate that such features are more significant than the statistical or textural features considered in existing works. The proposed work achieves an accuracy of 94.11% with a sensitivity of 100%. A comparison with existing methods indicates that the proposed approach has improved accuracy in classifying glaucoma from a digital fundus image, which may be considered clinically significant.

  3. Reinforcement learning for adaptive threshold control of restorative brain-computer interfaces: a Bayesian simulation.

    PubMed

    Bauer, Robert; Gharabaghi, Alireza

    2015-01-01

    Restorative brain-computer interfaces (BCI) are increasingly used to provide feedback of neuronal states in a bid to normalize pathological brain activity and achieve behavioral gains. However, patients and healthy subjects alike often show a large variability, or even inability, of brain self-regulation for BCI control, known as BCI illiteracy. Although current co-adaptive algorithms are powerful for assistive BCIs, their inherent class switching clashes with the operant conditioning goal of restorative BCIs. Moreover, due to the treatment rationale, the classifier of restorative BCIs usually has a constrained feature space, thus limiting the possibility of classifier adaptation. In this context, we applied a Bayesian model of neurofeedback and reinforcement learning for different threshold selection strategies to study the impact of threshold adaptation of a linear classifier on optimizing restorative BCIs. For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. We could thus show that threshold adaptation can improve reinforcement learning, particularly in cases of BCI illiteracy. Finally, on the basis of information theory, we provided an explanation for the achieved benefits of adaptive threshold setting. PMID:25729347

  4. Reinforcement learning for adaptive threshold control of restorative brain-computer interfaces: a Bayesian simulation

    PubMed Central

    Bauer, Robert; Gharabaghi, Alireza

    2015-01-01

    Restorative brain-computer interfaces (BCI) are increasingly used to provide feedback of neuronal states in a bid to normalize pathological brain activity and achieve behavioral gains. However, patients and healthy subjects alike often show a large variability, or even inability, of brain self-regulation for BCI control, known as BCI illiteracy. Although current co-adaptive algorithms are powerful for assistive BCIs, their inherent class switching clashes with the operant conditioning goal of restorative BCIs. Moreover, due to the treatment rationale, the classifier of restorative BCIs usually has a constrained feature space, thus limiting the possibility of classifier adaptation. In this context, we applied a Bayesian model of neurofeedback and reinforcement learning for different threshold selection strategies to study the impact of threshold adaptation of a linear classifier on optimizing restorative BCIs. For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. We could thus show that threshold adaptation can improve reinforcement learning, particularly in cases of BCI illiteracy. Finally, on the basis of information theory, we provided an explanation for the achieved benefits of adaptive threshold setting. PMID:25729347

  5. Positive–negative corresponding normalized ghost imaging based on an adaptive threshold

    NASA Astrophysics Data System (ADS)

    Li, G. L.; Zhao, Y.; Yang, Z. H.; Liu, X.

    2016-11-01

    Ghost imaging (GI) technology has attracted increasing attention as a new imaging technique in recent years. However, the signal-to-noise ratio (SNR) of GI with pseudo-thermal light needs to be improved before it meets engineering application demands. We therefore propose a new scheme called positive–negative correspondence normalized GI based on an adaptive threshold (PCNGI-AT) to achieve good performance with a smaller amount of data. This work exploits the advantages of both normalized GI (NGI) and positive–negative correspondence GI (P–NCGI). The correctness and feasibility of the scheme are proved in theory, after which we design an adaptive threshold selection method in which the parameter of the object-signal selection condition is replaced by the normalized value. The simulation and experimental results reveal that the SNR of the proposed scheme is better than that of time-correspondence differential GI (TCDGI), while avoiding the calculation of the correlation matrix and reducing the amount of data used. The method proposed will make GI far more practical in engineering applications.

  6. Motion Estimation Based on Mutual Information and Adaptive Multi-Scale Thresholding.

    PubMed

    Xu, Rui; Taubman, David; Naman, Aous Thabit

    2016-03-01

    This paper proposes a new method of calculating a matching metric for motion estimation. The proposed method splits the information in the source images into multiple scale and orientation subbands, reduces the subband values to a binary representation via an adaptive thresholding algorithm, and uses mutual information to model the similarity of corresponding square windows in each image. A moving window strategy is applied to recover a dense estimated motion field whose properties are explored. The proposed matching metric is a sum of mutual information scores across space, scale, and orientation. This facilitates the exploitation of information diversity in the source images. Experimental comparisons are performed amongst several related approaches, revealing that the proposed matching metric is better able to exploit information diversity, generating more accurate motion fields.

  7. Cooperative Spectrum Sensing with Multiple Antennas Using Adaptive Double-Threshold Based Energy Detector in Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Bagwari, A.; Tomar, G. S.

    2014-04-01

    In cognitive radio networks, spectrum sensing is used to sense unused spectrum in an opportunistic manner. In this paper, a multiple-antenna-based energy detector utilizing an adaptive double threshold for spectrum sensing is proposed, which enhances detection performance and also overcomes the sensing failure problem. The detection threshold is made adaptive to fluctuations of the received signal power in each local detector of a cognitive radio (CR) user. Numerical results show that by using multiple antennas at the CRs it is possible to significantly improve detection performance at very low signal-to-noise ratio (SNR). Further, the scheme is analyzed in conjunction with cooperative spectrum sensing (CSS), where the CRs use selection combining of the decision statistics obtained by the adaptive double-threshold energy detector to make a binary decision on the presence or absence of a primary user. The decision of each CR is forwarded over error-free orthogonal channels to the fusion centre, which makes the final decision about a spectrum hole. CSS with the multiple-antenna energy detector and adaptive double threshold is found to improve detection performance by around 26.8% compared to the hierarchical-with-quantization method at -12 dB SNR, under the condition that a small number of sensing nodes are used in spectrum sensing.
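
    A minimal sketch of a double-threshold energy decision in the spirit of the scheme above: confident decisions outside the two thresholds, with the ambiguous band forwarded for cooperative fusion. The threshold scaling (roughly mean plus three standard deviations of the noise-only statistic, widened by a margin) is an illustrative assumption, not the paper's derivation.

    ```python
    import numpy as np

    def double_threshold_decision(samples, noise_power, margin=0.2):
        # test statistic: received energy normalised by the noise-power estimate
        t = np.mean(np.abs(samples) ** 2) / noise_power
        lam = 1.0 + 3.0 / np.sqrt(len(samples))   # single-threshold baseline
        lo, hi = (1.0 - margin) * lam, (1.0 + margin) * lam
        if t < lo:
            return 0          # confident: primary user absent (H0)
        if t > hi:
            return 1          # confident: primary user present (H1)
        return None           # ambiguous band: forward statistic for fusion
    ```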

  8. Reinforcement learning by Hebbian synapses with adaptive thresholds.

    PubMed

    Pennartz, C M

    1997-11-01

    A central problem in learning theory is how the vertebrate brain processes reinforcing stimuli in order to master complex sensorimotor tasks. This problem belongs to the domain of supervised learning, in which errors in the response of a neural network serve as the basis for modification of synaptic connectivity in the network and thereby train it on a computational task. The model presented here shows how a reinforcing feedback can modify synapses in a neuronal network according to the principles of Hebbian learning. The reinforcing feedback steers synapses towards long-term potentiation or depression by critically influencing the rise in postsynaptic calcium, in accordance with findings on synaptic plasticity in mammalian brain. An important feature of the model is the dependence of modification thresholds on the previous history of reinforcing feedback processed by the network. The learning algorithm trained networks successfully on a task in which a population vector in the motor output was required to match a sensory stimulus vector presented shortly before. In another task, networks were trained to compute coordinate transformations by combining different visual inputs. The model continued to behave well when simplified units were replaced by single-compartment neurons equipped with several conductances and operating in continuous time. This novel form of reinforcement learning incorporates essential properties of Hebbian synaptic plasticity and thereby shows that supervised learning can be accomplished by a learning rule similar to those used in physiologically plausible models of unsupervised learning. The model can be crudely correlated to the anatomy and electrophysiology of the amygdala, prefrontal and cingulate cortex and has predictive implications for further experiments on synaptic plasticity and learning processes mediated by these areas.

  9. Optimum threshold selection method of centroid computation for Gaussian spot

    NASA Astrophysics Data System (ADS)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2015-10-01

    Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Centre of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. To improve the accuracy, thresholding before centroid computation is unavoidable, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different signal-to-noise ratio (SNR) conditions. Two optimum threshold selection methods are introduced: TmCoG, which uses m% of the maximum intensity of the spot as the threshold, and TkCoG, which uses μn + κσn as the threshold, where μn and σn are the mean and standard deviation of the background noise. First, the impact of each method on the detection error under various SNR conditions is simulated to establish how the value of k or m should be decided. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
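
    Minimal sketches of the two thresholded centre-of-gravity rules being compared, as defined in the abstract: TmCoG thresholds at m% of the spot's peak, TkCoG at mu_n + kappa * sigma_n of the background noise. Subtracting the threshold and clipping at zero before the CoG is one common variant, used here for illustration; the parameter values are placeholders.

    ```python
    import numpy as np

    def thresholded_cog(img, thr):
        w = np.clip(img - thr, 0.0, None)      # subtract threshold, clip at zero
        ys, xs = np.indices(img.shape)
        total = w.sum()
        return (xs * w).sum() / total, (ys * w).sum() / total

    def tm_cog(img, m=0.2):
        return thresholded_cog(img, m * img.max())           # TmCoG: m% of peak

    def tk_cog(img, mu_n, sigma_n, kappa=3.0):
        return thresholded_cog(img, mu_n + kappa * sigma_n)  # TkCoG
    ```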

  10. Implementation for temporal noise identification using adaptive threshold of infrared imaging system

    NASA Astrophysics Data System (ADS)

    Lim, Inok

    2007-10-01

    Bad pixels are spatial or temporal noise arising from dead pixels, with fixed signal levels, or blinking pixels, with variable signal levels that go beyond the bounds of normal pixel levels at a given temperature. Because bad pixels act as false targets in an infrared tracking system, they must be corrected. With increasing array size, the main contribution to the number of bad pixels is fixed pattern noise (FPN), and it is relatively simple to establish whether FPN is present by analyzing accumulated frames. Temporal noise, however, requires a more complex computation, such as the frame-to-frame standard deviation. In both cases it is very important to establish the threshold levels for identification at variable operating temperatures. In this paper, we propose a more efficient data analysis method and a temporal noise identification method using an adaptive threshold for an infrared imaging system, and implement the hardware to identify and replace bad pixels. The result is confirmed visually with bad-pixel map images.

  11. Unipolar terminal-attractor based neural associative memory with adaptive threshold

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)

    1993-01-01

    A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.

  12. Unipolar Terminal-Attractor Based Neural Associative Memory with Adaptive Threshold

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)

    1996-01-01

    A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner-product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.

  13. Adapting to a changing environment: non-obvious thresholds in multi-scale systems.

    PubMed

    Perryman, Clare; Wieczorek, Sebastian

    2014-10-01

    Many natural and technological systems fail to adapt to changing external conditions and move to a different state if the conditions vary too fast. Such 'non-adiabatic' processes are ubiquitous, but little understood. We identify these processes with a new nonlinear phenomenon-an intricate threshold where a forced system fails to adiabatically follow a changing stable state. In systems with multiple time scales, we derive existence conditions that show such thresholds to be generic, but non-obvious, meaning they cannot be captured by traditional stability theory. Rather, the phenomenon can be analysed using concepts from modern singular perturbation theory: folded singularities and canard trajectories, including composite canards. Thus, non-obvious thresholds should explain the failure to adapt to a changing environment in a wide range of multi-scale systems including: tipping points in the climate system, regime shifts in ecosystems, excitability in nerve cells, adaptation failure in regulatory genes and adiabatic switching in technology. PMID:25294963

  14. Adapting to a changing environment: non-obvious thresholds in multi-scale systems

    PubMed Central

    Perryman, Clare; Wieczorek, Sebastian

    2014-01-01

    Many natural and technological systems fail to adapt to changing external conditions and move to a different state if the conditions vary too fast. Such ‘non-adiabatic’ processes are ubiquitous, but little understood. We identify these processes with a new nonlinear phenomenon—an intricate threshold where a forced system fails to adiabatically follow a changing stable state. In systems with multiple time scales, we derive existence conditions that show such thresholds to be generic, but non-obvious, meaning they cannot be captured by traditional stability theory. Rather, the phenomenon can be analysed using concepts from modern singular perturbation theory: folded singularities and canard trajectories, including composite canards. Thus, non-obvious thresholds should explain the failure to adapt to a changing environment in a wide range of multi-scale systems including: tipping points in the climate system, regime shifts in ecosystems, excitability in nerve cells, adaptation failure in regulatory genes and adiabatic switching in technology. PMID:25294963

  15. Adaptive Threshold Neural Spike Detector Using Stationary Wavelet Transform in CMOS.

    PubMed

    Yang, Yuning; Boling, C Sam; Kamboh, Awais M; Mason, Andrew J

    2015-11-01

    Spike detection is an essential first step in the analysis of neural recordings. Detection at the frontend eases the bandwidth requirement for wireless data transfer of multichannel recordings to extra-cranial processing units. In this work, a low power digital integrated spike detector based on the lifting stationary wavelet transform is presented and developed. By monitoring the standard deviation of wavelet coefficients, the proposed detector can adaptively set a threshold value online for each channel independently without requiring user intervention. A prototype 16-channel spike detector was designed and tested in an FPGA. The method enables spike detection with nearly 90% accuracy even when the signal-to-noise ratio is as low as 2. The design was mapped to 130 nm CMOS technology and shown to occupy 0.014 mm² of area and dissipate 1.7 μW of power per channel, making it suitable for implantable multichannel neural recording systems. PMID:25955990
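
    A floating-point sketch of the thresholding idea (the chip itself uses a fixed-point, lifting-based SWT): the per-channel detection threshold tracks a multiple of the standard deviation of stationary-wavelet detail coefficients. The wavelet, level, and multiplier are illustrative assumptions.

    ```python
    import numpy as np
    import pywt

    def detect_spikes(x, wavelet="haar", level=2, k=4.0):
        coeffs = pywt.swt(x, wavelet, level=level)   # list of (cA, cD) pairs
        d = coeffs[0][1]                             # detail band, coarsest level
        thr = k * d.std()                            # adaptive per-channel threshold
        return np.flatnonzero(np.abs(d) > thr)       # candidate spike sample indices

    x = np.random.default_rng(1).normal(size=1024)   # length multiple of 2**level
    x[500:505] += 5.0                                # injected spike-like transient
    print(detect_spikes(x))
    ```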

  16. Olfactory Detection Thresholds and Adaptation in Adults with Autism Spectrum Condition

    ERIC Educational Resources Information Center

    Tavassoli, T.; Baron-Cohen, S.

    2012-01-01

    Sensory issues have been widely reported in Autism Spectrum Conditions (ASC). Since olfaction is one of the least investigated senses in ASC, the current studies explore olfactory detection thresholds and adaptation to olfactory stimuli in adults with ASC. 80 participants took part, 38 (18 females, 20 males) with ASC and 42 control participants…

  17. Severe Obesity Shifts Metabolic Thresholds but Does Not Attenuate Aerobic Training Adaptations in Zucker Rats

    PubMed Central

    Rosa, Thiago S.; Simões, Herbert G.; Rogero, Marcelo M.; Moraes, Milton R.; Denadai, Benedito S.; Arida, Ricardo M.; Andrade, Marília S.; Silva, Bruno M.

    2016-01-01

    Severe obesity affects metabolism with potential to influence the lactate and glycemic response to different exercise intensities in untrained and trained rats. Here we evaluated metabolic thresholds and maximal aerobic capacity in rats with severe obesity and lean counterparts at pre- and post-training. Zucker rats (obese: n = 10, lean: n = 10) were submitted to constant treadmill bouts, to determine the maximal lactate steady state, and an incremental treadmill test, to determine the lactate threshold, glycemic threshold and maximal velocity at pre and post 8 weeks of treadmill training. Velocities of the lactate threshold and glycemic threshold agreed with the maximal lactate steady state velocity on most comparisons. The maximal lactate steady state velocity occurred at higher percentage of the maximal velocity in Zucker rats at pre-training than the percentage commonly reported and used for training prescription for other rat strains (i.e., 60%) (obese = 78 ± 9% and lean = 68 ± 5%, P < 0.05 vs. 60%). The maximal lactate steady state velocity and maximal velocity were lower in the obese group at pre-training (P < 0.05 vs. lean), increased in both groups at post-training (P < 0.05 vs. pre), but were still lower in the obese group at post-training (P < 0.05 vs. lean). Training-induced increase in maximal lactate steady state, lactate threshold and glycemic threshold velocities was similar between groups (P > 0.05), whereas increase in maximal velocity was greater in the obese group (P < 0.05 vs. lean). In conclusion, lactate threshold, glycemic threshold and maximal lactate steady state occurred at similar exercise intensity in Zucker rats at pre- and post-training. Severe obesity shifted metabolic thresholds to higher exercise intensity at pre-training, but did not attenuate submaximal and maximal aerobic training adaptations. PMID:27148063

  18. Method For Model-Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1990-01-01

    Relatively simple method of model-reference adaptive control (MRAC) developed from two prior classes of MRAC techniques: signal-synthesis method and parameter-adaptation method. Incorporated into unified theory, which yields more general adaptation scheme.

  1. A Threshold-Adaptive Reputation System on Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Tsai, Hsiao-Chien; Lo, Nai-Wei; Wu, Tzong-Chen

    In recent years, huge potential benefits from novel applications in mobile ad hoc networks (MANET) have been discussed extensively. However, without robust security mechanisms and systems to provide a safety shell over the MANET infrastructure, MANET applications can be vulnerable and easily harmed by malicious attackers. In order to detect misbehaved message routing and identify malicious attackers in a MANET, schemes based on the reputation concept have shown their advantages in this area in terms of good scalability and a simple threshold-based detection strategy. We observed that previous reputation schemes generally use predefined thresholds which do not take into account the effect of behavior dynamics between nodes over a period of time. In this paper, we propose a Threshold-Adaptive Reputation System (TARS) to overcome the shortcomings of the static threshold strategy and improve overall MANET performance under misbehaved-routing attack. A fuzzy-based inference engine is introduced to evaluate the trustworthiness of a node's one-hop neighbors. Malicious nodes whose trust values are lower than the adaptive threshold are detected and filtered out by their honest neighbors during the trustworthiness evaluation process. Network simulation results show that TARS outperforms the other compared schemes under security attacks in most cases and at the same time reduces the decrease of total packet delivery ratio by 67% in comparison with a MANET without a reputation system.

  2. Low-Threshold Active Teaching Methods for Mathematic Instruction

    ERIC Educational Resources Information Center

    Marotta, Sebastian M.; Hargis, Jace

    2011-01-01

    In this article, we present a large list of low-threshold active teaching methods categorized so the instructor can efficiently access and target the deployment of conceptually based lessons. The categories include teaching strategies for lecture on large and small class sizes; student action individually, in pairs, and groups; games; interaction…

  3. Adaptive thresholding of chest temporal subtraction images in computer-aided diagnosis of pathologic change

    NASA Astrophysics Data System (ADS)

    Harrison, Melanie; Looper, Jared; Armato, Samuel G.

    2016-03-01

    Radiologists frequently use chest radiographs acquired at different times to diagnose a patient by identifying regions of change. Temporal subtraction (TS) images are formed when a computer warps a radiographic image to register and then subtract one image from the other, accentuating regions of change. The purpose of this study was to create a computer-aided diagnostic (CAD) system to threshold chest TS images and identify candidate regions of pathologic change. Each thresholding technique created two different candidate regions: light and dark. Light regions have a high gray-level mean, while dark regions have a low gray-level mean; areas with no change appear as medium-gray pixels. Ten different thresholding techniques were examined and compared. By thresholding light and dark candidate regions separately, the number of properly thresholded regions improved. The thresholding of light and dark regions separately produced fewer overall candidate regions that included more regions of actual pathologic change than global thresholding of the image. Overall, the moment-preserving method produced the best results for light regions, while the normal distribution method produced the best results for dark regions. Separation of light and dark candidate regions by thresholding shows potential as the first step in creating a CAD system to detect pathologic change in chest TS images.
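
    The light/dark split described above lends itself to a compact illustration. Below is a minimal sketch, not the study's actual CAD pipeline, that thresholds a temporal-subtraction image into light and dark candidate masks separately; the percentile-based cut-offs stand in for whichever of the ten thresholding techniques is preferred and are purely illustrative assumptions.

```python
import numpy as np

def light_dark_candidates(ts_image, light_pct=95.0, dark_pct=5.0):
    """Split a temporal-subtraction image into light and dark candidate
    masks. Pixels near medium gray represent 'no change'; candidates for
    pathologic change are the unusually bright (light) and unusually dark
    (dark) pixels. Percentile cut-offs stand in for a preferred technique
    (e.g., moment-preserving for light, normal-distribution for dark)."""
    light_thr = np.percentile(ts_image, light_pct)   # high gray-level cut
    dark_thr = np.percentile(ts_image, dark_pct)     # low gray-level cut
    return ts_image >= light_thr, ts_image <= dark_thr

# Toy example: a medium-gray subtraction image with one bright patch and
# one dark patch standing in for regions of change between exams.
rng = np.random.default_rng(0)
img = rng.normal(128.0, 3.0, size=(64, 64))
img[10:20, 10:20] += 40.0   # region that brightened
img[40:50, 40:50] -= 40.0   # region that darkened
light, dark = light_dark_candidates(img)
print(light.sum(), dark.sum())  # counts of light/dark candidate pixels
```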

  4. An adaptive level set method

    SciTech Connect

    Milne, R.B.

    1995-12-01

    This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.

  5. Future temperature in southwest Asia projected to exceed a threshold for human adaptability

    NASA Astrophysics Data System (ADS)

    Pal, Jeremy S.; Eltahir, Elfatih A. B.

    2016-02-01

    A human body may be able to adapt to extremes of dry-bulb temperature (commonly referred to as simply temperature) through perspiration and associated evaporative cooling, provided that the wet-bulb temperature (a combined measure of temperature and humidity, or degree of 'mugginess') remains below a threshold of 35 °C. This threshold defines a limit of survivability for a fit human under well-ventilated outdoor conditions and is lower for most people. We project, using an ensemble of high-resolution regional climate model simulations, that extremes of wet-bulb temperature in the region around the Arabian Gulf are likely to approach and exceed this critical threshold under the business-as-usual scenario of future greenhouse gas concentrations. Our results expose a specific regional hotspot where climate change, in the absence of significant mitigation, is likely to severely impact human habitability in the future.
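
    The 35 °C limit can be checked against weather observations with an empirical wet-bulb approximation. The sketch below uses Stull's (2011) regression for wet-bulb temperature from dry-bulb temperature and relative humidity; it is an approximation valid roughly for 5-99% RH at sea-level pressure, and it is unrelated to the paper's climate-model methodology.

```python
import math

def wet_bulb_stull(temp_c, rh_pct):
    """Approximate wet-bulb temperature (deg C) from air temperature
    (deg C) and relative humidity (%), using the Stull (2011) empirical
    fit. Valid roughly for RH 5-99% and T -20..50 deg C at sea level."""
    t, rh = temp_c, rh_pct
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh) - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# Two hot, humid scenarios checked against the 35 deg C threshold.
for t, rh in [(46.0, 30.0), (42.0, 50.0)]:
    tw = wet_bulb_stull(t, rh)
    print(f"T={t} C, RH={rh}% -> Tw={tw:.1f} C, exceeds 35 C: {tw > 35.0}")
```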

  6. Shape anomaly detection under strong measurement noise: An analytical approach to adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.

    2015-10-01

    We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
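
    The detection logic is easy to reproduce numerically. The sketch below estimates the null distribution of the cosine similarity between a reference shape and its noise-corrupted observations by Monte Carlo, in place of the paper's closed-form expression, and sets the anomaly threshold at a low quantile of that distribution; the noise level, quantile, and test shapes are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def adaptive_threshold(reference, noise_std, alpha=0.01, n_sim=10000,
                       rng=None):
    """Monte Carlo stand-in for the analytical threshold: simulate the
    reference shape plus white noise at the estimated noise level and
    take the alpha-quantile of the resulting similarity scores. Observed
    shapes scoring below this threshold are flagged as anomalous."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = reference + rng.normal(0.0, noise_std,
                                   size=(n_sim, reference.size))
    sims = noisy @ reference / (np.linalg.norm(noisy, axis=1)
                                * np.linalg.norm(reference))
    return float(np.quantile(sims, alpha))

rng = np.random.default_rng(1)
ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))   # reference cycle shape
thr = adaptive_threshold(ref, noise_std=0.5, rng=rng)

normal_obs = ref + rng.normal(0.0, 0.5, ref.shape)               # same shape
anomal_obs = np.roll(ref, 60) + rng.normal(0.0, 0.5, ref.shape)  # distorted
for name, obs in [("normal", normal_obs), ("anomalous", anomal_obs)]:
    s = cosine_similarity(ref, obs)
    print(f"{name}: similarity = {s:.3f}, flagged = {s < thr}")
```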

  7. Watershed safety and quality control by safety threshold method

    NASA Astrophysics Data System (ADS)

    Da-Wei Tsai, David; Mengjung Chou, Caroline; Ramaraj, Rameshprabu; Liu, Wen-Cheng; Honglay Chen, Paris

    2014-05-01

    Taiwan has been identified by the IPCC and the World Bank as one of the most disaster-prone countries. On such an exceptional and perilous island, we launched strategic research on land-use management for catastrophe prevention and environmental protection. This study applied watershed management by the "Safety Threshold Method" to restore watersheds and to prevent disasters and pollution on the island. For deluge prevention, this study applied a restoration strategy to reduce total runoff to an amount equivalent to 59.4% of the annual infiltration. For sediment management, safety threshold management could reduce the sediment load below the equilibrium of the natural sediment cycle. On water quality issues, the best strategies exhibited significant total load reductions of 10% in carbon (BOD5), 15% in nitrogen (nitrate) and 9% in phosphorus (TP). We found that the water quality could meet the BOD target with a 50% peak reduction under management. All the simulations demonstrated that the safety threshold method was helpful for keeping loadings within the safe range for disasters and environmental quality. Moreover, the historical data for the whole island showed that past deforestation policy and misguided economic projects were the prime culprits. Consequently, this study presents a practical method to manage both disasters and pollution at the watershed scale through land-use management.

  8. Impact of sub and supra-threshold adaptation currents in networks of spiking neurons.

    PubMed

    Colliaux, David; Yger, Pierre; Kaneko, Kunihiko

    2015-12-01

    Neuronal adaptation is the intrinsic capacity of the brain to change, by various mechanisms, its dynamical responses as a function of the context. Such a phenomenon, widely observed in vivo and in vitro, is known to be crucial in homeostatic regulation of activity and in gain control. The effects of adaptation have already been studied at the single-cell level, resulting from either voltage- or calcium-gated channels, both activated by spiking activity and modulating the dynamical responses of the neurons. In this study, by disentangling those effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional role of those two distinct components of adaptation in neuronal activity at various scales, starting from single-cell responses up to recurrent network dynamics, and under stationary or non-stationary stimulations. The effects of slow currents on collective dynamics, like modulation of population oscillations and reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks.

  9. Impact of sub and supra-threshold adaptation currents in networks of spiking neurons.

    PubMed

    Colliaux, David; Yger, Pierre; Kaneko, Kunihiko

    2015-12-01

    Neuronal adaptation is the intrinsic capacity of the brain to change, by various mechanisms, its dynamical responses as a function of the context. Such a phenomenon, widely observed in vivo and in vitro, is known to be crucial in homeostatic regulation of activity and in gain control. The effects of adaptation have already been studied at the single-cell level, resulting from either voltage- or calcium-gated channels, both activated by spiking activity and modulating the dynamical responses of the neurons. In this study, by disentangling those effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional role of those two distinct components of adaptation in neuronal activity at various scales, starting from single-cell responses up to recurrent network dynamics, and under stationary or non-stationary stimulations. The effects of slow currents on collective dynamics, like modulation of population oscillations and reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks. PMID:26400658
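
    The sub-/supra-threshold decomposition can be made concrete with a leaky integrate-and-fire neuron carrying an adaptation current: the linear (sub-threshold) part couples the adaptation variable to the membrane voltage, while the non-linear (supra-threshold) part increments it at every spike. The sketch below is a generic single-cell illustration with made-up parameters, not the paper's network model.

```python
import numpy as np

def adaptive_lif(i_ext, dt=0.1, tau_m=20.0, tau_w=200.0, a=0.2, b=0.5,
                 v_rest=0.0, v_thresh=10.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron with an adaptation current w.
    a : linear sub-threshold coupling of w to the membrane voltage
    b : non-linear supra-threshold (spike-triggered) increment of w
    All values are illustrative, not fitted to any dataset."""
    v, w = v_rest, 0.0
    spikes = []
    for i, cur in enumerate(i_ext):
        dv = (-(v - v_rest) - w + cur) / tau_m
        dw = (a * (v - v_rest) - w) / tau_w   # linear sub-threshold part
        v += dt * dv
        w += dt * dw
        if v >= v_thresh:                     # non-linear supra-threshold part
            spikes.append(i * dt)
            v = v_reset
            w += b                            # spike-triggered adaptation
    return np.array(spikes)

# Step current: firing starts fast, then slows as w accumulates, i.e.,
# spike-frequency adaptation.
times = adaptive_lif(np.full(20000, 15.0))    # 2 s at dt = 0.1 ms
isi = np.diff(times)
print(f"first ISI: {isi[0]:.1f} ms, last ISI: {isi[-1]:.1f} ms")
```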

  10. Variable threshold method for ECG R-peak detection.

    PubMed

    Kew, Hsein-Ping; Jeong, Do-Un

    2011-10-01

    In this paper, a wearable belt-type ECG electrode, worn around the chest to measure real-time ECG, is produced in order to minimize the inconvenience of wearing it. The ECG signal is detected using a potential-measurement instrumentation system. The measured ECG signal is transmitted to a personal computer via an ultra-low-power wireless data communication unit using a Zigbee-compatible wireless sensor node. ECG signals carry a great deal of clinical information for a cardiologist, and R-peak detection in the ECG is especially important. R-peak detection generally uses a fixed threshold value. Errors arise in peak detection when the baseline changes due to motion artifacts and when the signal amplitude changes. A preprocessing stage that includes differentiation and a Hilbert transform is used as the signal preprocessing algorithm. Thereafter, a variable threshold method is used to detect the R-peaks, which is more accurate and efficient than a fixed-threshold method. R-peak detection using the MIT-BIH databases and Long-Term Real-Time ECG is performed in this research in order to evaluate the performance.
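
    A common shape for such a detector, differentiation, a Hilbert-transform envelope, and a threshold that tracks recent peak amplitudes, is sketched below. This is an illustrative variant, not the authors' exact algorithm; the constants and the synthetic test signal are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def detect_r_peaks(ecg, fs, alpha=0.6, refractory_s=0.25):
    """Variable-threshold R-peak detector (an illustrative variant, not
    the paper's exact algorithm). Steps: differentiate to emphasize QRS
    slopes, take the Hilbert envelope, then accept local envelope maxima
    exceeding a threshold that tracks the mean of recent peak amplitudes."""
    diff = np.diff(ecg, prepend=ecg[0])        # slope emphasizes the QRS
    env = np.abs(hilbert(diff))                # smooth positive envelope
    refractory = int(refractory_s * fs)        # min distance between beats
    thr = alpha * env[: int(2 * fs)].max()     # bootstrap from first 2 s
    peaks, recent = [], []
    i = 1
    while i < len(env) - 1:
        if env[i] > thr and env[i] >= env[i - 1] and env[i] >= env[i + 1]:
            peaks.append(i)
            recent = (recent + [env[i]])[-8:]  # last 8 peak amplitudes
            thr = alpha * np.mean(recent)      # variable threshold update
            i += refractory                    # skip the refractory period
        else:
            i += 1
    return np.array(peaks)

# Synthetic test: 1 Hz Gaussian "R peaks" plus baseline wander and noise.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
ecg = 0.3 * np.sin(2 * np.pi * 0.3 * t) + 0.02 * rng.normal(size=t.size)
for beat in np.arange(0.5, 10, 1.0):
    ecg += np.exp(-0.5 * ((t - beat) / 0.01) ** 2)
peaks = detect_r_peaks(ecg, fs)
print(len(peaks), "beats at", np.round(peaks / fs, 2))
```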

  11. Objectivity and validity of EMG method in estimating anaerobic threshold.

    PubMed

    Kang, S-K; Kim, J; Kwon, M; Eom, H

    2014-08-01

    The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by the ventilatory threshold (VT) and by muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. 69 untrained male university students who nevertheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the AT point-time estimates computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with the values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option for estimating the AT point. Also, the proposed computing procedure implemented in Matlab for the analysis of EMG signals appeared to be valid and reliable, as it produced nearly identical values and high correlations with the VT estimates. PMID:24988194

  12. Comparison of different automatic adaptive threshold selection techniques for estimating discharge from river width

    NASA Astrophysics Data System (ADS)

    Elmi, Omid; Javad Tourian, Mohammad; Sneeuw, Nico

    2015-04-01

    River discharge monitoring is critical for, e.g., water resource planning, climate change studies and hazard monitoring. River discharge has been measured at in situ gauges for more than a century. Despite various attempts, some basins are still ungauged. Moreover, a reduction in the number of worldwide gauging stations increases the interest in employing remote sensing data for river discharge monitoring. Finding an empirical relationship between simultaneous in situ measurements of discharge and river widths derived from satellite imagery has been introduced as a straightforward remote sensing alternative. Classifying water and land in an image is the primary task in defining the river width. Water appears dark in the near-infrared and infrared bands of satellite images; as a result, low values in the histogram usually represent water. Accordingly, applying a threshold to the image histogram to separate it into two different classes is one of the most efficient techniques for building a water mask. Despite its simple definition, finding the appropriate threshold value in each image is the most critical issue. The threshold varies due to changes in the water level, river extent, atmosphere, sunlight radiation and onboard calibration of the satellite over time. These complexities in water body classification are the main source of error in river width estimation. In this study, we look for the most efficient adaptive threshold algorithm for estimating river discharge. To do this, all cloud-free MODIS images coincident with the in situ measurements are collected. Next, a number of automatic threshold selection techniques are employed to generate different dynamic water masks. Then, for each of them, a separate empirical relationship between river widths and discharge measurements is determined; the rating-curve fit is sketched after this paragraph. Through these empirical relationships, we estimate river discharge at the gauge and then validate our results against in situ measurements and also
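
    Downstream of the water mask, the method hinges on an empirical width-discharge relationship. A common choice, assumed here rather than taken from the paper, is a power-law rating curve Q = a * W**b fitted by least squares in log-log space:

```python
import numpy as np

def fit_rating_curve(widths_m, discharges_m3s):
    """Fit the power-law rating curve Q = a * W**b by linear least squares
    on log-transformed data; returns (a, b) and a predictor function."""
    b, log_a = np.polyfit(np.log(widths_m), np.log(discharges_m3s), 1)
    a = np.exp(log_a)
    return a, b, lambda w: a * np.asarray(w) ** b

# Synthetic calibration data: satellite-derived widths vs. in situ
# discharge, with multiplicative scatter standing in for mask errors.
rng = np.random.default_rng(3)
width = np.linspace(80.0, 400.0, 30)                               # m
discharge = 0.004 * width ** 2.2 * rng.lognormal(0.0, 0.1, width.size)
a, b, predict = fit_rating_curve(width, discharge)
print(f"Q = {a:.4f} * W^{b:.2f}; Q(250 m) ~ {float(predict(250.0)):.0f} m^3/s")
```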

  13. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations seen with standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
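
    For readers unfamiliar with the baseline being modified, a minimal classical MRAC loop for a scalar first-order plant is sketched below, using the Lyapunov-rule adaptive laws. It illustrates standard model-reference adaptation and the role of the adaptive gain, not Nguyen's optimal control modification; all plant and gain values are illustrative.

```python
import numpy as np

def mrac_first_order(a=1.0, b=3.0, am=-4.0, bm=4.0, gamma=2.0,
                     dt=0.001, t_end=20.0):
    """Classical MRAC for the scalar plant dx/dt = a*x + b*u (a, b unknown
    to the controller; sign(b) > 0 assumed). Reference model:
    dxm/dt = am*xm + bm*r. Control law u = kx*x + kr*r with Lyapunov-rule
    updates dkx/dt = -gamma*e*x and dkr/dt = -gamma*e*r, where e = x - xm.
    A larger gamma adapts faster but, in real systems, can excite the
    high-frequency oscillations that the optimal control modification
    is designed to suppress."""
    n = int(t_end / dt)
    x = xm = kx = kr = 0.0
    errs = np.empty(n)
    for i in range(n):
        t = i * dt
        r = 1.0 if (t % 4.0) < 2.0 else -1.0   # square-wave reference
        u = kx * x + kr * r
        e = x - xm
        x += dt * (a * x + b * u)              # open-loop-unstable plant
        xm += dt * (am * xm + bm * r)          # stable reference model
        kx += dt * (-gamma * e * x)            # adaptive gain updates
        kr += dt * (-gamma * e * r)
        errs[i] = e
    return errs

errs = mrac_first_order()
print(f"max |e| in first 2 s: {np.abs(errs[:2000]).max():.3f}, "
      f"in last 2 s: {np.abs(errs[-2000:]).max():.4f}")
```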

  14. Methods of scaling threshold color difference using printed samples

    NASA Astrophysics Data System (ADS)

    Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier

    2012-01-01

    A series of printed samples on a semi-gloss paper substrate, with color differences of threshold magnitude, were prepared for scaling the visual color difference and evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the different color differences were scaled against these Z-scores. The visual color differences thus obtained were checked with the STRESS factor. The results indicated that only the scales changed, while the relative scales between pairs in the data were preserved.

  15. Impact of slow K(+) currents on spike generation can be described by an adaptive threshold model.

    PubMed

    Kobayashi, Ryota; Kitano, Katsunori

    2016-06-01

    A neuron that is stimulated by rectangular current injections initially responds with a high firing rate, followed by a decrease in the firing rate. This phenomenon is called spike-frequency adaptation and is usually mediated by slow K(+) currents, such as the M-type K(+) current (I M ) or the Ca(2+)-activated K(+) current (I AHP ). It is not clear how the detailed biophysical mechanisms regulate spike generation in a cortical neuron. In this study, we investigated the impact of slow K(+) currents on the spike generation mechanism by reducing a detailed conductance-based neuron model. We showed that the detailed model can be reduced to a multi-timescale adaptive threshold model, and derived formulae that describe the relationship between the slow K(+) current parameters and the reduced model parameters. Our analysis of the reduced model suggests that slow K(+) currents have a differential effect on noise tolerance in neural coding. PMID:27085337
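
    The reduced model has a compact form: a non-resetting leaky membrane plus a spike threshold that jumps at each spike and relaxes on several timescales. The sketch below follows the spirit of such multi-timescale adaptive threshold models with two timescales; the parameters are illustrative, not the paper's fitted reduction.

```python
import numpy as np

def mat_neuron(i_ext, dt=0.1, tau_m=10.0, r_m=1.0, omega=5.0,
               alphas=(3.0, 1.0), taus=(10.0, 200.0), refractory=2.0):
    """Two-timescale adaptive-threshold neuron (illustrative parameters).
    The membrane potential follows passive leaky dynamics and is never
    reset; instead each spike adds jumps alpha_j to the threshold, which
    decay with time constants tau_j (one fast, one slow; the slow
    component plays the role of slow K+ currents such as I_M or I_AHP)."""
    alphas = np.asarray(alphas)
    taus = np.asarray(taus)
    v = 0.0
    h = np.zeros_like(alphas)       # threshold kernel state per timescale
    last_spike = -np.inf
    spikes = []
    for i, cur in enumerate(i_ext):
        t = i * dt
        v += dt * (-v + r_m * cur) / tau_m   # passive membrane, no reset
        h -= dt * h / taus                   # exponential threshold decay
        if v >= omega + h.sum() and (t - last_spike) >= refractory:
            spikes.append(t)
            h = h + alphas                   # threshold jumps at the spike
            last_spike = t
    return np.array(spikes)

# Constant step current: the slow threshold component accumulates, so the
# inter-spike intervals lengthen, i.e., spike-frequency adaptation arises
# from the threshold dynamics alone.
spikes = mat_neuron(np.full(30000, 8.0))
isi = np.diff(spikes)
print(f"first ISI: {isi[0]:.1f} ms, last ISI: {isi[-1]:.1f} ms")
```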

  16. Fine tuning of the threshold of T cell selection by the Nck adapters.

    PubMed

    Roy, Edwige; Togbe, Dieudonnée; Holdorf, Amy; Trubetskoy, Dmitry; Nabti, Sabrina; Küblbeck, Günter; Schmitt, Sabine; Kopp-Schneider, Annette; Leithäuser, Frank; Möller, Peter; Bladt, Friedhelm; Hämmerling, Günter J; Arnold, Bernd; Pawson, Tony; Tafuri, Anna

    2010-12-15

    Thymic selection shapes the T cell repertoire to ensure maximal antigenic coverage against pathogens while preventing autoimmunity. Recognition of self-peptides in the context of peptide-MHC complexes by the TCR is central to this process, which remains partially understood at the molecular level. In this study we provide genetic evidence that the Nck adapter proteins are essential for thymic selection. In vivo Nck deletion resulted in a reduction of the thymic cellularity, defective positive selection of low-avidity T cells, and impaired deletion of thymocytes engaged by low-potency stimuli. Nck-deficient thymocytes were characterized by reduced ERK activation, particularly pronounced in mature single positive thymocytes. Taken together, our findings identify a crucial role for the Nck adapters in enhancing TCR signal strength, thereby fine-tuning the threshold of thymocyte selection and shaping the preimmune T cell repertoire.

  17. Fine tuning of the threshold of T cell selection by the Nck adapters.

    PubMed

    Roy, Edwige; Togbe, Dieudonnée; Holdorf, Amy; Trubetskoy, Dmitry; Nabti, Sabrina; Küblbeck, Günter; Schmitt, Sabine; Kopp-Schneider, Annette; Leithäuser, Frank; Möller, Peter; Bladt, Friedhelm; Hämmerling, Günter J; Arnold, Bernd; Pawson, Tony; Tafuri, Anna

    2010-12-15

    Thymic selection shapes the T cell repertoire to ensure maximal antigenic coverage against pathogens while preventing autoimmunity. Recognition of self-peptides in the context of peptide-MHC complexes by the TCR is central to this process, which remains partially understood at the molecular level. In this study we provide genetic evidence that the Nck adapter proteins are essential for thymic selection. In vivo Nck deletion resulted in a reduction of the thymic cellularity, defective positive selection of low-avidity T cells, and impaired deletion of thymocytes engaged by low-potency stimuli. Nck-deficient thymocytes were characterized by reduced ERK activation, particularly pronounced in mature single positive thymocytes. Taken together, our findings identify a crucial role for the Nck adapters in enhancing TCR signal strength, thereby fine-tuning the threshold of thymocyte selection and shaping the preimmune T cell repertoire. PMID:21078909

  18. A novel method for determining target detection thresholds

    NASA Astrophysics Data System (ADS)

    Grossman, S.

    2015-05-01

    Target detection is the act of isolating objects of interest from the surrounding clutter, generally using some form of test to include objects in the found class. However, the method of determining the threshold is often overlooked, relying on manual determination through either empirical observation or guesswork. The question remains: how does an analyst identify the detection threshold that will produce the optimum results? This work proposes the concept of a target detection sweet spot where the missed-detection probability curve crosses the false-detection curve; this represents the point at which missed detections are traded for false detections in order to effect positive or negative changes in the detection probability. ROC curves are used to characterize detection probabilities and false alarm rates based on empirically derived data. The work identifies the relationship between the empirically derived results and the first-moment statistic of the histogram of the pixel target value data, and then proposes a new method of applying the histogram results in an automated fashion to predict the target detection sweet spot at which to begin automated target detection.
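
    The sweet spot is simply the operating point where the missed-detection and false-detection probability curves cross. Given empirical score samples for target and clutter pixels, it can be located by a direct sweep, as in the illustrative sketch below (the Gaussian score distributions are assumptions standing in for empirically derived data):

```python
import numpy as np

def detection_sweet_spot(target_scores, clutter_scores, n_grid=1000):
    """Sweep thresholds and return the one where the missed-detection
    probability curve crosses the false-detection probability curve
    (the equal-error operating point described in the abstract)."""
    lo = min(target_scores.min(), clutter_scores.min())
    hi = max(target_scores.max(), clutter_scores.max())
    thresholds = np.linspace(lo, hi, n_grid)
    # P(miss): target score below threshold; P(fa): clutter score above it.
    p_miss = np.array([(target_scores < t).mean() for t in thresholds])
    p_fa = np.array([(clutter_scores >= t).mean() for t in thresholds])
    k = np.argmin(np.abs(p_miss - p_fa))       # crossing point
    return thresholds[k], p_miss[k], p_fa[k]

# Emulated empirical scores: two overlapping Gaussian populations.
rng = np.random.default_rng(4)
target = rng.normal(3.0, 1.0, 5000)            # pixel target values
clutter = rng.normal(0.0, 1.0, 50000)          # background clutter values
thr, pm, pf = detection_sweet_spot(target, clutter)
print(f"sweet spot at threshold {thr:.2f}: P(miss)={pm:.3f}, P(fa)={pf:.3f}")
```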

  19. Survival thresholds and mortality rates in adaptive dynamics: conciliating deterministic and stochastic simulations.

    PubMed

    Perthame, Benoît; Gauduchon, Mathias

    2010-09-01

    Deterministic population models for adaptive dynamics are derived mathematically from individual-centred stochastic models in the limit of large populations. However, it is common that numerical simulations of the two models fit poorly and give rather different behaviours in terms of evolution speeds and branching patterns. Stochastic simulations involve an extinction phenomenon operating through demographic stochasticity when the number of individual 'units' is small. Focusing on the class of integro-differential adaptive models, we include a similar notion in the deterministic formulations, a survival threshold, which allows phenotypical traits in the population to vanish when represented by few 'individuals'. Based on numerical simulations, we show that the survival threshold changes the solution drastically: (i) the evolution speed is much slower, (ii) the branching patterns are reduced continuously and (iii) these patterns are comparable to those obtained with stochastic simulations. The rescaled models can also be analysed theoretically. One can recover the concentration phenomena on well-separated Dirac masses through the constrained Hamilton-Jacobi equation in the limit of small mutations and large observation times. PMID:19734200

  20. Detection of fiducial points in ECG waves using iteration based adaptive thresholds.

    PubMed

    Wonjune Kang; Kyunguen Byun; Hong-Goo Kang

    2015-08-01

    This paper presents an algorithm for the detection of fiducial points in electrocardiogram (ECG) waves using iteration based adaptive thresholds. By setting the search range of the processing frame to the interval between two consecutive R peaks, the peaks of T and P waves are used as reference salient points (RSPs) to detect the fiducial points. The RSPs are selected from candidates whose slope variation factors are larger than iteratively defined adaptive thresholds. Considering the fact that the number of RSPs varies depending on whether the ECG wave is normal or not, the proposed algorithm proceeds with a different methodology for determining fiducial points based on the number of detected RSPs. Testing was performed using twelve records from the MIT-BIH Arrhythmia Database that were manually marked for comparison with the estimated locations of the fiducial points. The means of absolute distances between the true locations and the points estimated by the algorithm are 12.2 ms and 7.9 ms for the starting points of P and Q waves, and 9.3 ms and 13.9 ms for the ending points of S and T waves. Since the computational complexity of the proposed algorithm is very low, it is feasible for use in mobile devices. PMID:26736854

  1. Simple method for model reference adaptive control

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1989-01-01

    A simple method is presented for combined signal synthesis and parameter adaptation within the framework of model reference adaptive control theory. The results are obtained using a simple derivation based on an improved Liapunov function.

  2. Hearing threshold estimation by auditory steady-state responses with narrow-band chirps and adaptive stimulus patterns: implementation in clinical routine.

    PubMed

    Seidel, David Ulrich; Flemming, Tobias Angelo; Park, Jonas Jae-Hyun; Remmert, Stephan

    2015-01-01

    Objective hearing threshold estimation by auditory steady-state responses (ASSR) can be accelerated by the use of narrow-band chirps and adaptive stimulus patterns. This modification has been examined in only a few clinical studies. In this study, clinical data are validated and extended, and the applicability of the method in routine audiological diagnostics is examined. In 60 patients (normal hearing and hearing impaired), ASSR and pure tone audiometry (PTA) thresholds were compared. ASSR were evoked by binaural multi-frequency narrow-band chirps with adaptive stimulus patterns. The precision and required testing time for hearing threshold estimation were determined. The average differences between ASSR and PTA thresholds were 18, 12, 17 and 19 dB for normal hearing (PTA ≤ 20 dB) and 5, 9, 9 and 11 dB for hearing impaired (PTA > 20 dB) at the frequencies of 500, 1,000, 2,000 and 4,000 Hz, respectively, and the differences were significant at all frequencies with the exception of 1 kHz. Correlation coefficients between ASSR and PTA thresholds were 0.36, 0.47, 0.54 and 0.51 for normal hearing and 0.73, 0.74, 0.72 and 0.71 for hearing impaired at 500, 1,000, 2,000 and 4,000 Hz, respectively. Mean ASSR testing time was 33 ± 8 min. In conclusion, ASSR with narrow-band chirps and adaptive stimulus patterns is an efficient method for objective frequency-specific hearing threshold estimation. The precision of threshold estimation is most limited for mild hearing loss at 500 Hz. The required testing time is acceptable for application in everyday clinical routine. PMID:24305781

  3. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures.

    PubMed

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Tian, Yun; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of variable vessel shapes and the high complexity of vessel geometry. This study proposes a new active contour model (ACM), implemented by the level-set method, for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. A global threshold representing the lower gray boundary of the target object by maximum intensity projection (MIP) is defined in the first region term, and it is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method not only achieves a better Dice Similarity Coefficient than the global-threshold-based method and the localized hybrid level-set method but also extracts whole cerebral vessel trees, including the thin vessels. PMID:27597878

  4. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures

    PubMed Central

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of variable vessel shapes and the high complexity of vessel geometry. This study proposes a new active contour model (ACM), implemented by the level-set method, for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. A global threshold representing the lower gray boundary of the target object by maximum intensity projection (MIP) is defined in the first region term, and it is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method not only achieves a better Dice Similarity Coefficient than the global-threshold-based method and the localized hybrid level-set method but also extracts whole cerebral vessel trees, including the thin vessels.

  5. An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures

    PubMed Central

    Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Duan, Fuqing; Pan, Yutong

    2016-01-01

    Cerebral vessel segmentation is essential for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of variable vessel shapes and the high complexity of vessel geometry. This study proposes a new active contour model (ACM), implemented by the level-set method, for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. A global threshold representing the lower gray boundary of the target object by maximum intensity projection (MIP) is defined in the first region term, and it is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method not only achieves a better Dice Similarity Coefficient than the global-threshold-based method and the localized hybrid level-set method but also extracts whole cerebral vessel trees, including the thin vessels. PMID:27597878

  6. Adaptive windowed range-constrained Otsu method using local information

    NASA Astrophysics Data System (ADS)

    Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie

    2016-01-01

    An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reason why traditional thresholding methods do not perform well in the segmentation of complicated images is analyzed, and the influences of global and local thresholding on image segmentation are compared. Second, two methods that can adaptively change the size of the local window according to local information are proposed, and their characteristics are analyzed. Specifically, information on the number of edge pixels in the local window of the binarized variance image is employed to adaptively change the local window size. Finally, the superiority of the proposed method over other methods such as the range-constrained Otsu, the active contour model, the double Otsu, Bradley's method, and distance-regularized level-set evolution is demonstrated. Experiments validate that the proposed method keeps more details and achieves a much more satisfying area overlap measure compared with the other conventional methods.
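
    For reference, the plain global Otsu criterion that these windowed, range-constrained variants build on picks the gray level maximizing the between-class variance of the histogram. A minimal NumPy implementation is sketched below (the baseline method only, not the proposed adaptive windowed variant):

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Global Otsu threshold: choose t maximizing the between-class
    variance sigma_b^2(t) = w0(t) * w1(t) * (mu0(t) - mu1(t))**2
    computed from the image histogram."""
    hist, bin_edges = np.histogram(image, bins=n_bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    w0 = np.cumsum(p)                     # class-0 (background) weight
    w1 = 1.0 - w0                         # class-1 (foreground) weight
    mu_cum = np.cumsum(p * centers)       # cumulative first moment
    mu_total = mu_cum[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = mu_cum / w0
        mu1 = (mu_total - mu_cum) / w1
        sigma_b2 = w0 * w1 * (mu0 - mu1) ** 2
    k = np.nanargmax(sigma_b2)            # NaNs occur where a class is empty
    return centers[k]

# Bimodal test image: dark background plus a brighter square.
rng = np.random.default_rng(5)
img = rng.normal(60.0, 10.0, (128, 128))
img[32:96, 32:96] = rng.normal(160.0, 12.0, (64, 64))
t = otsu_threshold(img)
print(f"Otsu threshold ~ {t:.1f}; foreground fraction {(img > t).mean():.2f}")
```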

  7. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing among any number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908

  8. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing among any number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.

  9. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    PubMed Central

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing among any number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908

  10. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    NASA Astrophysics Data System (ADS)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-08-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing among any number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
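
    The classical ingredient of these schemes, threshold secret sharing via Lagrange interpolation polynomials, is easy to demonstrate on its own. Below is a standard Shamir-style (t, n) sharing over a prime field; the quantum layer, the OAM pump, and the m-bonacci coding are not reproduced, and the field size and secret are illustrative.

```python
import random

PRIME = 2 ** 61 - 1   # a Mersenne prime large enough for toy secrets

def make_shares(secret, t, n, prime=PRIME):
    """Split `secret` into n shares, any t of which recover it. A random
    degree-(t-1) polynomial f with f(0) = secret is evaluated at
    x = 1..n; each share is the point (x, f(x)) mod prime."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod prime
            acc = (acc * x + c) % prime
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 from any t shares.
    Uses pow(d, -1, prime) for the modular inverse (Python 3.8+)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % prime       # factor (0 - xj)
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
print(recover_secret(shares[:3]))   # any 3 of 5 shares -> 123456789
print(recover_secret(shares[2:]))   # a different subset also works
```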

  11. Wavelet-based acoustic emission detection method with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl

    2000-06-01

    Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of wavelet-based analysis of the stress-wave flight-test dataset, assessing machinery failure detection against background operational noise.

  12. A fast and efficient adaptive threshold rate control scheme for remote sensing images.

    PubMed

    Chen, Xiao; Xu, Xiaoqing

    2012-01-01

    The JPEG2000 image compression standard is ideal for processing remote sensing images. However, its algorithm is complex and it requires large amounts of memory, making it difficult to adapt to the limited transmission and storage resources necessary for remote sensing images. In the present study, an improved rate control algorithm for remote sensing images is proposed. The required code blocks are sorted in descending order according to their numbers of bit planes prior to entropy coding. An adaptive threshold computed from the combination of the minimum number of bit planes, the minimum rate-distortion slope and the compression ratio is used to truncate passes of each code block during Tier-1 encoding. This routine avoids the encoding of all code passes and improves the coding efficiency. The simulation results show that the computational cost and working buffer memory size of the proposed algorithm reach only 18.13 and 7.81%, respectively, of the same parameters in the post-compression rate-distortion algorithm, while the peak signal-to-noise ratio across the images remains almost the same. The proposed algorithm not only greatly reduces the code complexity and buffer requirements but also maintains the image quality.

  13. [An adaptive thresholding segmentation method for urinary sediment images].

    PubMed

    Li, Yongming; Zeng, Xiaoping; Qin, Jian; Han, Liang

    2009-02-01

    This paper proposes a new method for segmenting complicated defocused urinary sediment images. The main points of the method are: (1) using wavelet transforms and morphology to remove the effect of defocusing and perform a first segmentation, (2) applying adaptive threshold processing to the subimages obtained after wavelet processing, and (3) using a 'peel-off' algorithm to handle the segmentation of overlapping cells. The experimental results showed that this method is not affected by defocusing and makes good use of many kinds of image characteristics. The new method can therefore achieve very precise segmentation; it is effective for defocused urinary sediment image segmentation.

  14. Odor threshold prediction by means of the Monte Carlo method.

    PubMed

    Toropov, Andrey A; Toropova, Alla P; Cappellini, Luigi; Benfenati, Emilio; Davoli, Enrico

    2016-11-01

    A large set of organic compounds (n=906) has been used as a basis to build up a model for the odor threshold (mg/m(3)). The statistical characteristics of the best model are the following: n=523, r(2)=0.647, RMSE=1.18 (training set); n=191, r(2)=0.610, RMSE=1.03, (calibration set); and n=192, r(2)=0.686, RMSE=1.06 (validation set). A mechanistic interpretation of the model is presented as the lists of statistical promoters of the increase and decrease in the odor threshold.

  15. Odor threshold prediction by means of the Monte Carlo method.

    PubMed

    Toropov, Andrey A; Toropova, Alla P; Cappellini, Luigi; Benfenati, Emilio; Davoli, Enrico

    2016-11-01

    A large set of organic compounds (n=906) has been used as a basis to build up a model for the odor threshold (mg/m(3)). The statistical characteristics of the best model are the following: n=523, r(2)=0.647, RMSE=1.18 (training set); n=191, r(2)=0.610, RMSE=1.03, (calibration set); and n=192, r(2)=0.686, RMSE=1.06 (validation set). A mechanistic interpretation of the model is presented as the lists of statistical promoters of the increase and decrease in the odor threshold. PMID:27500544

  16. An adaptive unsupervised hyperspectral classification method based on Gaussian distribution

    NASA Astrophysics Data System (ADS)

    Yue, Jiang; Wu, Jing-wei; Zhang, Yi; Bai, Lian-fa

    2014-11-01

    In order to achieve adaptive unsupervised clustering with high precision, a method that uses Gaussian distributions to fit both the inter-class similarity and the noise distribution is proposed in this paper; the automatic segmentation threshold is then determined from the fitting result. First, based on the similarity measure of the spectral curves, the method assumes that target and background are both Gaussian-distributed; the distribution characteristics are obtained by fitting the similarity measures of minimum related windows and center pixels with a Gaussian function, and the adaptive threshold is thereby obtained. Second, the pixel minimum related windows are used to merge adjacent similar pixels into picture-blocks, which completes the dimensionality reduction and realizes the unsupervised classification. AVIRIS data and a set of hyperspectral data we acquired are used to evaluate the performance of the proposed method. Experimental results show that the proposed algorithm is not only adaptive but also outperforms K-MEANS and ISODATA in classification accuracy, edge recognition and robustness.

  17. Presepsin (sCD14-ST) in emergency department: the need for adapted threshold values?

    PubMed

    Chenevier-Gobeaux, Camille; Trabattoni, Eloise; Roelens, Marie; Borderie, Didier; Claessens, Yann-Erick

    2014-01-01

    Presepsin is elevated in patients developing infections and increases in a severity-dependent manner. We aimed to evaluate circulating values of this new biomarker in a population free of any acute infectious disorder. We recruited 144 consecutive patients presenting at the emergency department (ED) without acute infection or acute/unstable disorder, and 54 healthy participants. Plasma presepsin concentrations were measured on the PATHFAST point-of-care analyzer. The 95th percentile of presepsin values in the ED population was 750 ng/L. Presepsin was significantly increased in patients aged ≥70 years vs. younger patients (470 [380-601] ng/L vs. 300 [201-457] ng/L, p<0.001). The prevalence of elevated presepsin values was increased in patients in comparison to controls (80% vs. 13%, p<0.001), and in patients aged ≥70 years in comparison to younger patients (87% vs. 47%, p<0.001). Presepsin concentrations were significantly increased in patients with kidney dysfunction. Aging was an independent predictor of an elevated presepsin value. In conclusion, presepsin concentrations increase with age and kidney dysfunction. Therefore, interpretation of presepsin concentrations might be altered in the elderly or in patients with impaired renal function. Adapted thresholds are needed for specific populations.

  18. Multichannel spike detector with an adaptive threshold based on a Sigma-delta control loop.

    PubMed

    Gagnon-Turcotte, G; Gosselin, B

    2015-08-01

    In this paper, we present a digital spike detector using an adaptive threshold which is suitable for real-time processing of 32 electrophysiological channels in parallel. The new scheme is based on a Sigma-delta control loop that precisely estimates the standard deviation of the noise amplitude of the input signal to optimize the detection rate. Additionally, thanks to a robust algorithm, it does not depend on the amplitude of the input signal. The spike detector is implemented inside a Spartan-6 FPGA using few resources, only FPGA basic logic blocks, and uses a low clock frequency under 6 MHz for minimal power consumption. We present a comparison showing that the proposed system can compete with dedicated off-line spike detection software. The whole system achieves up to 100% true positive detection rate for SNRs down to 5 dB, while achieving a 62.3% true positive detection rate for an SNR as low as -2 dB at a 150 AP/s firing rate. PMID:26737934
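
    The Sigma-delta idea can be illustrated in software: a 1-bit feedback loop nudges a noise-level estimate up or down so that a fixed fraction of samples exceeds it, and the detection threshold is a multiple of that estimate. The sketch below is an illustrative reconstruction of that principle, not the paper's FPGA design; all constants are assumptions.

```python
import numpy as np

def sigma_delta_detector(signal, target_frac=0.32, step=0.002, k=4.5,
                         burn_in=2000):
    """Spike detector with a Sigma-delta-style adaptive threshold.
    A 1-bit feedback loop adjusts a noise-level estimate so that roughly
    `target_frac` of samples exceed it; with target_frac ~ 0.32 the
    estimate settles near one standard deviation of zero-mean Gaussian
    noise. Samples above k * estimate are flagged as spikes; the first
    `burn_in` samples are skipped while the estimate converges."""
    est = 0.0
    spikes = []
    for i, x in enumerate(signal):
        mag = abs(x)
        # Sigma-delta update: the +/- increments balance in the long run
        # exactly when P(|x| > est) equals target_frac.
        est += step * ((1.0 - target_frac) if mag > est else -target_frac)
        if i >= burn_in and mag > k * est:
            spikes.append(i)
    return spikes, est

# Gaussian noise (std = 1) with occasional large action potentials.
rng = np.random.default_rng(6)
x = rng.normal(0.0, 1.0, 100000)
ap_times = rng.choice(np.arange(5000, 100000), size=40, replace=False)
x[ap_times] += 8.0
spikes, est = sigma_delta_detector(x)
print(f"noise-std estimate ~ {est:.2f}, detected {len(spikes)} events")
```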

  19. A comparison of two methods for measuring thermal thresholds in diabetic neuropathy.

    PubMed Central

    Levy, D; Abraham, R; Reid, G

    1989-01-01

    Thermal thresholds can be measured psychophysically using either the method of limits or a forced-choice method. We have compared the two methods in 367 diabetic patients, 128 with symptomatic neuropathy. The Sensortek method was chosen as the forced-choice device and the Somedic modification of the Marstock method as the method of limits. Cooling and heat pain thresholds were also measured using the Marstock method. Somedic thermal thresholds increase with age in normal subjects, but not to a clinically significant degree. In diabetics the Marstock warm threshold increased by 0.8 degrees C/decade, the Sensortek threshold by 0.1 degrees C/decade. Both methods had a high coefficient of variation in normal subjects (Sensortek 29%, Marstock warm 14%, cool 42%). The prevalence of abnormal thresholds was similar for both methods (28-32%), though Marstock heat pain thresholds were less frequently abnormal (18%). Only 15-18% of patients had abnormal results in both tests. Sensortek thresholds were significantly lower on repeat testing, and all thresholds were higher in symptomatic patients. Both methods are suitable for clinical thermal testing, though the method of limits is quicker. In screening studies the choice of a suitable apparatus need not be determined by the psychophysical basis of the test. PMID:2795077

  20. Variational method for adaptive grid generation

    SciTech Connect

    Brackbill, J.U.

    1983-01-01

    A variational method for generating adaptive meshes is described. Functionals measuring smoothness, skewness, orientation, and the Jacobian are minimized to generate a mapping from a rectilinear domain in natural coordinate to an arbitrary domain in physical coordinates. From the mapping, a mesh is easily constructed. In using the method to adaptively zone computational problems, as few as one third the number of mesh points are required in each coordinate direction compared with a uniformly zoned mesh.

  1. Restrictive Stochastic Item Selection Methods in Cognitive Diagnostic Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Chun; Chang, Hua-Hua; Huebner, Alan

    2011-01-01

    This paper proposes two new item selection methods for cognitive diagnostic computerized adaptive testing: the restrictive progressive method and the restrictive threshold method. They are built upon the posterior weighted Kullback-Leibler (KL) information index but include additional stochastic components either in the item selection index or in…

  2. The Random-Threshold Generalized Unfolding Model and Its Application of Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien

    2013-01-01

    The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…

  3. Simulation of mid-infrared clutter rejection. 1: One-dimensional LMS spatial filter and adaptive threshold algorithms.

    PubMed

    Longmire, M S; Milton, A F; Takken, E H

    1982-11-01

    Several 1-D signal processing techniques have been evaluated by simulation with a digital computer using high-spatial-resolution (0.15 mrad) noise data gathered from back-lit clouds and uniform sky with a scanning data collection system operating in the 4.0-4.8-microm spectral band. Two ordinary bandpass filters and a least-mean-square (LMS) spatial filter were evaluated in combination with a fixed or adaptive threshold algorithm. The combination of a 1-D LMS filter and a 1-D adaptive threshold sensor was shown to reject extreme cloud clutter effectively and to provide nearly equal signal detection in a clear and cluttered sky, at least in systems whose NEI (noise equivalent irradiance) exceeds 1.5 x 10(-13) W/cm(2) and whose spatial resolution is better than 0.15 x 0.36 mrad. A summary gives highlights of the work, key numerical results, and conclusions.
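
    The LMS spatial filter at the heart of such schemes fits in a few lines. The sketch below is a generic 1-D LMS predictor, not the paper's exact filter: it learns to predict slowly varying clutter from neighboring samples, so the prediction residual suppresses the background while leaving point-target spikes; the tap count, step size, and synthetic scan line are assumptions.

```python
import numpy as np

def lms_clutter_filter(x, n_taps=8, mu=0.005):
    """1-D least-mean-square (LMS) adaptive predictor. Each sample is
    predicted from the previous n_taps samples and the weights adapt as
    w += mu * e * u. Correlated clutter is predictable and is cancelled
    in the error signal e; an uncorrelated point target is not."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for i in range(n_taps, len(x)):
        u = x[i - n_taps:i][::-1]    # most recent samples first
        y = w @ u                    # prediction of x[i] from the past
        e[i] = x[i] - y              # residual: clutter-rejected output
        w += mu * e[i] * u           # LMS weight update
    return e

# Synthetic scan line: smooth cloud clutter + small point target + noise.
rng = np.random.default_rng(7)
n = 4000
x = 5.0 * np.sin(2.0 * np.pi * np.arange(n) / 400.0)   # slow background
x += 0.05 * rng.normal(size=n)
x[2500] += 1.0                                         # point target
e = lms_clutter_filter(x)
print(f"residual rms: {e[1000:].std():.3f}, target residual: {e[2500]:.2f}")
```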

  4. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other human-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of the GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window width-fitting. First, the white noise is filtered from the measured data using the wavelet threshold method. Then, the data are segmented using data windows whose step lengths follow even logarithmic intervals. The data polluted by electromagnetic noise are identified within each window based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Eventually, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitting results. Thus the non-stationary electromagnetic noise can be effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that stationary white noise and non-stationary electromagnetic noise in the GREATEM signal can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
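
    The stationary-white-noise stage of such an algorithm is standard wavelet soft-thresholding. A minimal sketch using PyWavelets is shown below, with the median-based noise estimate and the universal threshold; the exponential window width-fitting stage for non-stationary noise is not reproduced, and the wavelet choice and test signal are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=5):
    """Soft wavelet thresholding for stationary white noise. The noise
    std is estimated from the finest detail coefficients (median/0.6745)
    and the universal threshold sigma*sqrt(2*ln N) is applied to all
    detail levels. Only the white-noise stage is sketched here."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Synthetic decaying transient (a stand-in for a TEM decay curve) in noise.
rng = np.random.default_rng(8)
t = np.linspace(0.0, 1.0, 4096)
clean = np.exp(-8.0 * t)
noisy = clean + 0.05 * rng.normal(size=t.size)
recovered = wavelet_denoise(noisy)
print(f"noise rms in: {np.std(noisy - clean):.3f}, "
      f"out: {np.std(recovered - clean):.3f}")
```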

  5. Evaluation of Maryland abutment scour equation through selected threshold velocity methods

    USGS Publications Warehouse

    Benedict, S.T.

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared to the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.

  6. Segmentation of White Blood Cell from Acute Lymphoblastic Leukemia Images Using Dual-Threshold Method.

    PubMed

    Li, Yan; Zhu, Rui; Mi, Lei; Cao, Yihui; Yao, Di

    2016-01-01

    We propose a dual-threshold method based on a strategic combination of the RGB and HSV color spaces for white blood cell (WBC) segmentation. The proposed method consists of three main parts: preprocessing, threshold segmentation, and postprocessing. In the preprocessing part, we obtain two images for further processing: one contrast-stretched gray image and one H-component image from the transformed HSV color space. In the threshold segmentation part, a dual-threshold method is proposed to improve on conventional single-threshold approaches, and a golden section search method is used to determine the optimal thresholds. In the postprocessing part, mathematical morphology and median filtering are utilized to denoise and remove incomplete WBCs. The proposed method was tested on segmenting the lymphoblasts in a public Acute Lymphoblastic Leukemia (ALL) image dataset. The results show that the performance of the proposed method is better than the single-threshold approach performed independently in RGB or HSV color space, and the overall single-WBC segmentation accuracy reaches 97.85%, showing good prospects for subsequent lymphoblast classification and ALL diagnosis. PMID:27313659

  7. Threshold selection for classification of MR brain images by clustering method

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita

    2015-12-01

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels of the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation of threshold selection. Our method does not use the well-known binarization methods. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis classes. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method on two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (i.e., the area of white objects in the binary image) has been determined; these pixel counts are the objects in the clustering operation. The optimum threshold values obtained are T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
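
    The feature-extraction step (white-pixel counts per threshold feeding a dendrogram-based clustering) could be sketched as follows; the images list, the threshold grid, and Ward linkage are assumptions for illustration.

        import numpy as np
        from scipy.cluster.hierarchy import dendrogram, linkage

        def white_areas(images, thresholds):
            """Count white pixels per image and per threshold after binarization."""
            return np.array([[(img > t).sum() for t in thresholds] for img in images])

        # images: list of 2-D grayscale arrays (hypothetical input)
        # features = white_areas(images, thresholds=[30, 80])
        # Z = linkage(features, method="ward")  # dendrogram(Z) shows how well the
        #                                       # healthy and MS groups separate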

  8. Threshold selection for classification of MR brain images by clustering method

    SciTech Connect

    Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita

    2015-12-07

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels of the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation of threshold selection. Our method does not use the well-known binarization methods. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis classes. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method on two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (i.e., the area of white objects in the binary image) has been determined; these pixel counts are the objects in the clustering operation. The optimum threshold values obtained are T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.

  9. Adaptive sequential methods for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Walker, Ernest

    2013-06-01

    In this paper, we propose new sequential methods for detecting port-scan attackers, which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. Moreover, our method guarantees that the maximum amount of observational time is bounded. In contrast to the previously most effective method, the Threshold Random Walk algorithm, which is explicit and analytical in nature, our proposed algorithm involves parameters that are determined by numerical methods. We have introduced computational techniques, such as iterative minimax optimization, for quick determination of the parameters of the new detection algorithm. A framework of multi-valued decisions for detecting portscanners and DoS attacks is also proposed.

  10. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
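
    For intuition, here is a real-valued, scalar-Lipschitz sketch of iterative soft thresholding with momentum and gradient-based adaptive restart; BARISTA itself replaces the single Lipschitz constant L with B1-based majorizing matrices, which this simplified example omits.

        import numpy as np

        def soft(v, s):
            """Soft thresholding: the proximal operator of the l1 norm."""
            return np.sign(v) * np.maximum(np.abs(v) - s, 0.0)

        def fista_adaptive_restart(A, b, lam, L, n_iter=200):
            """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with momentum and restart."""
            x = np.zeros(A.shape[1])
            z, t = x.copy(), 1.0
            for _ in range(n_iter):
                x_new = soft(z - A.T @ (A @ z - b) / L, lam / L)  # majorize-minimize step
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                if np.dot(z - x_new, x_new - x) > 0.0:  # momentum points uphill: restart it
                    z, t_new = x_new.copy(), 1.0
                else:
                    z = x_new + ((t - 1.0) / t_new) * (x_new - x)
                x, t = x_new, t_new
            return x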

  11. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  12. Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars

    NASA Astrophysics Data System (ADS)

    Ruml, Mirjana; Vuković, Ana; Milatović, Dragan

    2010-07-01

    The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a greater number of apricot ( Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods to determine the threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD and two methods for calculating daily mean air temperature were tested to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) gave very good results. The limitations of the widely used method (1) and methods (5) and (6), which generally performed worst, are discussed in the paper.
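
    Method (7) can be sketched as a scan over candidate base temperatures, choosing the one whose GDD-based predictions minimize the RMSE against observed dates; the input layout and the candidate grid below are assumptions.

        import numpy as np

        def gdd(tmean, base):
            """Daily growing degree-days above a lower threshold (negatives clipped)."""
            return np.maximum(tmean - base, 0.0)

        def best_base_temperature(years_tmean, observed_days, bases=np.arange(-6.0, 8.0, 0.1)):
            """years_tmean: one array of daily mean temperatures per year;
            observed_days: observed day index of the stage in each year (0-based)."""
            best, best_rmse = None, np.inf
            for base in bases:
                # GDD requirement = mean accumulated GDD at the observed dates
                req = np.mean([gdd(t[:d], base).sum()
                               for t, d in zip(years_tmean, observed_days)])
                pred = [np.searchsorted(np.cumsum(gdd(t, base)), req)
                        for t in years_tmean]
                rmse = np.sqrt(np.mean((np.asarray(pred) - np.asarray(observed_days)) ** 2))
                if rmse < best_rmse:
                    best, best_rmse = base, rmse
            return best, best_rmse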

  13. Domain adaptive boosting method and its applications

    NASA Astrophysics Data System (ADS)

    Geng, Jie; Miao, Zhenjiang

    2015-03-01

    Differences in data distributions widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, a decrease in performance caused by the domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach with extensions to cover the domain differences between the source and target domains. This approach contains two main stages: source-domain clustering and source-domain sample selection. By iteratively adding the selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend for multisource adaptation. We implement this method on three computer vision systems: the skin detection model in single images, the video concept detection model, and the object classification model. In the experiments, we compare the performances of several commonly used methods and the proposed DAB. Under most situations, the DAB is superior.

  14. Structured adaptive grid generation using algebraic methods

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.

    1993-01-01

    The accuracy of a numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow-field simulations without the use of an excessively fine, computationally expensive grid. Grid-adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach, where a function which contains a measure of grid smoothness, orthogonality, and volume variation is minimized by using a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm where the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring the points in the large-error regions to attract other points and points in the low-error regions to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three-step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow-field solution. The second, and probably the most crucial, step is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and last step is to re-evaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
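
    The equidistribution step at the heart of the algebraic approach is easy to state in one dimension: move the points so that each cell carries the same integral of the weight function. A minimal sketch (strictly positive weights assumed):

        import numpy as np

        def equidistribute(x, w):
            """Redistribute 1-D grid points x so that the weight w (one positive
            value per point) is equally distributed between neighboring points."""
            # Cumulative integral of w by the trapezoidal rule
            W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
            targets = np.linspace(0.0, W[-1], len(x))  # equal weight per cell
            return np.interp(targets, W, x)            # invert W(x) at the targets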

  15. Irregular seismic data reconstruction based on exponential threshold model of POCS method

    NASA Astrophysics Data System (ADS)

    Gao, Jian-Jun; Chen, Xiao-Hong; Li, Jing-Ye; Liu, Guo-Chang; Ma, Jian

    2010-09-01

    Irregular seismic data causes problems with multi-trace processing algorithms and degrades processing quality. We introduce the Projection onto Convex Sets (POCS) based image restoration method into the seismic data reconstruction field to interpolate irregularly missing traces. For entire dead traces, we transfer the POCS iteration reconstruction process from the time to frequency domain to save computational cost because forward and reverse Fourier time transforms are not needed. In each iteration, the selection threshold parameter is important for reconstruction efficiency. In this paper, we designed two types of threshold models to reconstruct irregularly missing seismic data. The experimental results show that an exponential threshold can greatly reduce iterations and improve reconstruction efficiency compared to a linear threshold for the same reconstruction result. We also analyze the antinoise and anti-alias ability of the POCS reconstruction method. Finally, theoretical model tests and real data examples indicate that the proposed method is efficient and applicable.
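
    A compact sketch of a POCS iteration with an exponentially decaying threshold; for simplicity it uses a 2-D FFT as the sparsifying transform over the whole gather, whereas the paper works trace-wise in the frequency domain for entirely dead traces. The decay endpoints are assumptions.

        import numpy as np

        def pocs_reconstruct(data, mask, n_iter=50):
            """Interpolate missing traces: data has zeros where mask is False."""
            tmax = np.abs(np.fft.fft2(data)).max()
            tmin = 1e-3 * tmax
            model = data.copy()
            for k in range(n_iter):
                # Exponential threshold schedule from tmax down to tmin
                tau = tmax * (tmin / tmax) ** (k / max(n_iter - 1, 1))
                spec = np.fft.fft2(model)
                spec[np.abs(spec) < tau] = 0.0    # keep only strong coefficients
                model = np.real(np.fft.ifft2(spec))
                model[mask] = data[mask]          # reinsert the observed samples
            return model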

  16. A method to determine activation thresholds in fMRI paradigms.

    PubMed

    Arndt, S; Gold, S; Cizadlo, T; Zheng, J; Ehrhardt, J C; Flaum, M

    1997-08-01

    Determining meaningful activation thresholds in functional magnetic resonance imaging (fMRI) paradigms is complicated by several factors. These include the time-series nature of the data, the influence of physiological rhythms (e.g. respiration) and vacillations introduced by the experimental design (e.g. cueing). We present an empirical threshold for each subject and each fMRI experiment that takes these factors into account. The method requires an additional fMRI data set as similar to the experimental paradigm as possible without dichotomously varying the experimental task of interest. A letter fluency task was used to illustrate this method. This technique differs from classical methods since the Pearson correlation probability values tabulated from statistical theory are not used. Rather each subject defines his or her own set of threshold probability values for correlations. It is against these empirical thresholds, not Pearson's, that an experimental fMRI correlation is assessed.
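
    In essence, the control run supplies a per-subject null distribution of correlation values, and the experimental map is thresholded at a quantile of that distribution. A hedged sketch (array names and shapes are assumptions):

        import numpy as np

        def empirical_r_threshold(null_ts, reference, alpha=0.05):
            """null_ts: (n_voxels, n_timepoints) control data with no task contrast;
            reference: model response time course. Returns the (1 - alpha)
            quantile of the null distribution of Pearson correlations."""
            ref = (reference - reference.mean()) / reference.std()
            ts = null_ts - null_ts.mean(axis=1, keepdims=True)
            r = (ts @ ref) / (np.linalg.norm(ts, axis=1) * np.linalg.norm(ref) + 1e-12)
            return np.quantile(r, 1.0 - alpha)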

  17. Exploring a Proposed WHO Method to Determine Thresholds for Seasonal Influenza Surveillance

    PubMed Central

    Tay, Ee Laine; Grant, Kristina; Kirk, Martyn; Mounts, Anthony; Kelly, Heath

    2013-01-01

    Introduction: Health authorities find thresholds useful to gauge the start and severity of influenza seasons. We explored a method for deriving thresholds proposed in an influenza surveillance manual published by the World Health Organization (WHO). Methods: For 2002-2011, we analysed two routine influenza-like-illness (ILI) datasets, general practice sentinel surveillance and a locum medical service sentinel surveillance, plus laboratory data and hospital admissions for influenza. For each sentinel dataset, we created two composite variables from the product of weekly ILI data and the relevant laboratory data, indicating the proportion of tested specimens that were positive. For all datasets, including the composite datasets, we aligned data on the median week of peak influenza or ILI activity and assigned three threshold levels: a seasonal threshold, determined by inspection, and two intensity thresholds termed average and alert thresholds, determined by calculations of means, medians, confidence intervals (CI) and percentiles. From the thresholds, we compared the seasonal onset, end and intensity across all datasets from 2002-2011. Correlation between datasets was assessed using the mean correlation coefficient. Results: The median week of peak activity was week 34 for all datasets, except hospital data (week 35). Means and medians were comparable and the 90% upper CIs were similar to the 95th percentiles. Comparison of thresholds revealed variations in defining the start of a season but good agreement in describing the end and intensity of influenza seasons, except in hospital admissions data after the pandemic year of 2009. The composite variables improved the agreement between the ILI and other datasets. Datasets were well correlated, with mean correlation coefficients of >0.75 for a range of combinations. Conclusions: Thresholds for influenza surveillance are easily derived from historical surveillance and laboratory data using the approach proposed by WHO. Use

  18. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  19. An adaptive selective frequency damping method

    NASA Astrophysics Data System (ADS)

    Jordi, Bastien; Cotter, Colin; Sherwin, Spencer

    2015-03-01

    The selective frequency damping (SFD) method is used to obtain unstable steady-state solutions of dynamical systems. The stability of this method is governed by two parameters, namely the control coefficient and the filter width. Convergence is not guaranteed for an arbitrary choice of these parameters. Even when the method does converge, the time necessary to reach a steady-state solution may be very long. We present an adaptive SFD method. We show that by modifying the control coefficient and the filter width throughout the solver execution, we can reach an optimum convergence rate. This method is based on successive approximations of the dominant eigenvalue of the flow studied. We design a one-dimensional model to select SFD parameters that enable us to control the evolution of the least stable eigenvalue of the system. These parameters are then used for the application of the SFD method to the multi-dimensional flow problem. We apply this adaptive method to a set of classical test cases of computational fluid dynamics and show that the steady-state solutions obtained are similar to what can be found in the literature. Then we apply it to a specific vortex-dominated flow (of interest for the automotive industry) whose stability had never been studied before. Funding: Seventh Framework Programme of the European Commission - ANADE project under Grant Contract PITN-GA-289428.

  20. Reliability of a Simple Method for Determining Salt Taste Detection and Recognition Thresholds.

    PubMed

    Giguère, Jean-François; Piovesana, Paula de Moura; Proulx-Belhumeur, Alexandra; Doré, Michel; Sampaio, Karina de Lemos; Gallani, Maria-Cecilia

    2016-03-01

    The aim of this study was to assess the reliability of a rapid analytical method to determine salt taste detection and recognition thresholds based on the ASTM E679 method. Reliability was evaluated according to the criterion of temporal stability with a 1-week interval test-retest, with 29 participants. Thresholds were assessed by using the 3-AFC technique with 15 ascending concentrations of salt solution (1-292 mM, 1.5-fold steps) and estimated by 2 approaches: individual (geometric means) and group (graphical) thresholds. The proportion of agreement between the test and retest results was estimated using intraclass correlation coefficients. The detection and recognition thresholds calculated by the geometric mean were 2.8 and 18.6 mM at session 1 and 2.3 and 14.5 mM at session 2; according to the graphical approach, they were 2.7 and 18.6 mM at session 1 and 1.7 and 16.3 mM at session 2. The proportion of agreement between test and retest for the detection and recognition thresholds was 0.430 (95% CI: 0.080-0.680) and 0.660 (95% CI: 0.400-0.830). This fast and simple method to assess salt taste detection and recognition thresholds demonstrated satisfactory evidence of reliability, and it could be useful for large population studies. PMID:26733539

  1. The Isolation, Primacy, and Recency Effects Predicted by an Adaptive LTD/LTP Threshold in Postsynaptic Cells.

    PubMed

    Sikström, Sverker

    2006-03-01

    An item that stands out (is isolated) from its context is better remembered than an item consistent with the context. This isolation effect cannot be accounted for by increased attention, because it occurs when the isolated item is presented as the first item, or by impoverished memory of nonisolated items, because the isolated item is better remembered than a control list consisting of equally different items. The isolation effect is seldom experimentally or theoretically related to the primacy or the recency effects, that is, the improved performance on the first few and last items, respectively, of the serial position curve. The primacy effect cannot easily be accounted for by rehearsal in short-term memory because it occurs when rehearsal is eliminated. This article suggests that the primacy, the recency, and the isolation effects can be accounted for by experience-dependent synaptic plasticity in neural cells. Neurological empirical data suggest that the threshold that determines whether cells will show long-term potentiation (LTP) or long-term depression (LTD) varies as a function of recent postsynaptic activity and that synaptic plasticity is bounded. By implementing an adaptive LTP-LTD threshold in an artificial neural network, the various aspects of the isolation, the primacy, and the recency effects are accounted for, whereas none of these phenomena are accounted for if the threshold is constant. This theory suggests a possible link between the cognitive and the neurological levels.
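
    The sliding LTP/LTD threshold described here is qualitatively the BCM rule; a minimal BCM-style update follows, where the learning rates, weight bounds, and activity trace are illustrative assumptions, not the article's exact network.

        import numpy as np

        def bcm_step(w, x, theta, lr=0.01, tau=0.1):
            """One plasticity step: potentiate when the postsynaptic response y
            exceeds the threshold theta (LTP), depress when below it (LTD)."""
            y = float(w @ x)                                      # postsynaptic activity
            w = np.clip(w + lr * x * y * (y - theta), 0.0, 1.0)   # bounded plasticity
            theta = theta + tau * (y ** 2 - theta)                # threshold tracks recent y^2
            return w, theta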

  2. Ensemble transform sensitivity method for adaptive observations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan

    2016-01-01

    The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.

  3. Informativeness of the threshold adaptation and fatigue for assessment of the auditory health risk.

    PubMed

    Tzaneva, L

    1997-09-01

    The management and control of modern automated production underscore the elevated role of the auditory system as one of the distant sensory communication systems. The present study reports on the auditory adaptation and fatigue at the end of the working shift of 385 operators at control boards in the "Kremikovtzi" State Company. Monitoring of the dynamic changes shows significant aggravation of auditory fatigue, with moderate to significant disturbance of the adaptation-recovery processes. The analysis established a significant positive correlation between the changes in auditory fatigue and the duration of service. The frequencies of the speech range are preserved for a long time. The elevated auditory fatigue is observed in the injured hearing band at 4,000 Hz, followed by 6,000 Hz, with continuous spread to the middle frequencies of the speech band (2,000 and 1,000 Hz). The results of the study of the adaptation-recovery processes are characterised by statistically significant reliability, consistent direction, and reproducibility, and can be applied as informative criteria for assessment of the auditory health risk.

  4. Using threshold segmentation methods to measure dynamic vasodilatation in a series of optical images

    NASA Astrophysics Data System (ADS)

    Chen, Shangbin; Li, Pengcheng; Zeng, Shaoqun; Luo, Qingming

    2005-03-01

    Intrinsic optical signals imaging (IOSI) is a novel technique for functional neuroimaging in vivo, especially in the study of cortical spreading depression (CSD). At 550 nm wavelength, the optical images during CSD showed significant vasodilatation of some small arteries on the surface of the cortex of rats. In order to quantify the change in the arteries' diameter, two kinds of threshold segmentation methods were applied: the isodata algorithm and Otsu's thresholding. First, we set up a simple model to show that segmentation of the vessel in a rectangular region can be equivalently used to describe the diameter change. The two methods automatically select appropriate thresholds for segmentation, so they are suitable for extracting the dynamic vasodilatation from a series of optical images by computer. Compared with the traditional method, the new methods were more robust and performed better. Using these methods, we found that the vasodilatation during one CSD episode can be distinguished as two processes: a small vasodilatation preceding the large one that has been commonly reported. The hemodynamic character during CSD deserves further study, and the methods can be easily applied to other optical imaging experiments whenever vascular dynamics are concerned.
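
    The isodata (Ridler-Calvard) rule mentioned above iterates the threshold to the midpoint of the two class means; a minimal sketch:

        import numpy as np

        def isodata_threshold(image, tol=0.5):
            """Iterate t to the midpoint of the means of the two classes it defines."""
            t = float(image.mean())
            while True:
                lo, hi = image[image <= t], image[image > t]
                if lo.size == 0 or hi.size == 0:
                    return t
                t_new = 0.5 * (lo.mean() + hi.mean())
                if abs(t_new - t) < tol:
                    return t_new
                t = t_new

        # Applied frame by frame to a rectangular region across the artery, the count
        # of sub-threshold (dark at 550 nm) pixels per row tracks the vessel diameter.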

  5. A Simple Method to Predict Threshold Shear Velocity in the Field

    NASA Astrophysics Data System (ADS)

    Li, J.; Okin, G. S.; Herrick, J. E.; Miller, M. E.; Munson, S. M.; Belnap, J.

    2009-12-01

    A very important parameter in predicting wind erosion is the threshold shear velocity, which is the minimal shear velocity required to initiate deflation of soil particles. Modeling and wind-tunnel measurements are the primary methods for predicting threshold shear velocity. However, most models have limited applications in the presence of roughness elements, and running a wind tunnel in the field is labor-intensive and time-consuming. Soil crust (both physical and biological) is known to be a crucial factor affecting soil stability and threshold shear velocity. In this report, a simple and portable field method was tested at multiple locations in Utah for the estimation of threshold shear velocity. This method includes measuring the size of holes (length and width) induced by shooting a “bullet ball” or “BB” gun, and applying a pocket penetrometer and a torvane to the soil surface in the field. In the first stage of the experiment, a conventional wind tunnel was run in combination with the BB gun, penetrometer, and torvane in field conditions for a range of soil textures. Results from both the BB gun and the penetrometer applied at 45 degrees to the ground were significantly correlated with the threshold shear velocity obtained using the wind tunnel (R2=0.70, P<0.001). In the second stage, the BB gun and penetrometer method was applied to a series of sites with BSNE wind erosion monitors and known horizontal sediment fluxes. Our results showed that a combination of BB gun and penetrometer is able to provide good predictions of threshold shear velocity in the presence of vegetation under different soil physical and biological conditions.

  6. Adaptive Accommodation Control Method for Complex Assembly

    NASA Astrophysics Data System (ADS)

    Kang, Sungchul; Kim, Munsang; Park, Shinsuk

    Robotic systems have been used to automate assembly tasks in manufacturing and in teleoperation. Conventional robotic systems, however, have been ineffective in controlling contact force in the multiple contact states of complex assembly that involves interactions between complex-shaped parts. Unlike robots, humans excel at complex assembly tasks by utilizing their intrinsic impedance, force and torque sensation, and tactile contact cues. By examining human behavior in assembling complex parts, this study proposes a novel geometry-independent control method for robotic assembly using an adaptive accommodation (or damping) algorithm. Two important conditions for complex assembly, target approachability and bounded contact force, can be met by the proposed control scheme. It generates target-approachable motion that leads the object to move closer to a desired target position, while contact force is kept under a predetermined value. Experimental results from complex assembly tests have confirmed the feasibility and applicability of the proposed method.

  7. Adaptive method with intercessory feedback control for an intelligent agent

    DOEpatents

    Goldsmith, Steven Y.

    2004-06-22

    An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.

  8. Real time algorithm invariant to natural lighting with LBP techniques through an adaptive thresholding implemented in GPU processors

    NASA Astrophysics Data System (ADS)

    Orjuela-Vargas, S. A.; Triana-Martinez, J.; Yañez, J. P.; Philips, W.

    2014-03-01

    Video analysis in real time requires fast and efficient algorithms to extract relevant information from a considerable number of frames per second, commonly 25. Furthermore, robust algorithms for outdoor visual scenes must retrieve corresponding features throughout the day, where one challenge is coping with lighting changes. Currently, Local Binary Pattern (LBP) techniques are widely used for extracting features due to their robustness to illumination changes and their low implementation requirements. We propose to compute an automatic threshold based on the distribution of the intensity residuals resulting from the pairwise comparisons used in LBP techniques. The intensity-residual distribution can be modelled by a Generalized Gaussian Distribution (GGD). In this paper we compute the adaptive threshold using the parameters of the GGD. We present a CUDA implementation of our proposed algorithm, using the LBPSYM technique. Our approach is tested on videos of four different urban scenes with moving traffic, captured during day and night. The extracted features can be used in a further step to determine patterns, identify objects, or detect background. However, further research must be conducted on blur correction, since night scenes are commonly blurred due to artificial lighting.

  9. Influence of threshold value in the use of statistical methods for groundwater vulnerability assessment.

    PubMed

    Masetti, Marco; Sterlacchini, Simone; Ballabio, Cristiano; Sorichetta, Alessandro; Poli, Simone

    2009-06-01

    Statistical techniques can be used in groundwater pollution problems to determine the relationships among observed contamination (impacted wells representing occurrences of what has to be predicted), the environmental factors that may influence it, and the potential contamination sources. Determination of a threshold concentration to discriminate between impacted and non-impacted wells represents a key issue in the application of these techniques. In this work, the effects on groundwater vulnerability assessment by statistical methods due to the use of different threshold values have been evaluated. The study area (Province of Milan, northern Italy) is about 2,000 km², and groundwater nitrate concentration is constantly monitored by a network of about 300 wells. Along with different predictor factors, three different threshold values of nitrate concentration have been considered to perform the vulnerability assessment of the shallow unconfined aquifer. The likelihood ratio model has been chosen to analyze the spatial distribution of the vulnerable areas. The reliability of the three final vulnerability maps has been tested, showing that all maps identify a general positive trend relating mean nitrate concentration in the wells and the vulnerability classes the same wells belong to. Then, using the kappa coefficient, the influence of the different threshold values has been evaluated by comparing the spatial distribution of the resulting vulnerability classes in each map. The use of different thresholds does not determine different vulnerability assessments if results are analyzed on a broad scale, even if the smallest threshold value gives the poorest performance in terms of reliability. On the contrary, the spatial distribution of a detailed vulnerability assessment is strongly influenced by the selected threshold used to identify the occurrences, suggesting that there is a strong relationship among the number of identified occurrences, the scale of the maps representing the predictor

  10. Adapting implicit methods to parallel processors

    SciTech Connect

    Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.

    1994-12-31

    When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g., larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed-memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems, where it is common to take the computational domain and distribute the grid points over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor. In order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this will result in idle processors during part of the computation, and as the number of idle processors increases, the effective speedup gained by using a parallel processor decreases.

  11. Automation of a center pivot using the temperature-time-threshold method of irrigation scheduling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A center pivot was completely automated using the temperature-time-threshold (TTT) method of irrigation scheduling. An array of infrared thermometers was mounted on the center pivot and these were used to remotely determine the crop leaf temperature as an indicator of crop water stress. We describ...

  12. A simple method to estimate threshold friction velocity of wind erosion in the field

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nearly all wind erosion models require the specification of threshold friction velocity (TFV). Yet determining TFV of wind erosion in field conditions is difficult as it depends on both soil characteristics and distribution of vegetation or other roughness elements. While several reliable methods ha...

  13. Linearly-Constrained Adaptive Signal Processing Methods

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.

    1988-01-01

    In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), ..., xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term 'mean-square difference' is a quadratic measure such as statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which will guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest and in which time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single-channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, which is an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
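
    The LMS gradient-approximation recursion mentioned above is short enough to state outright; the step size and filter length below are illustrative.

        import numpy as np

        def lms(x, d, L=8, mu=0.01):
            """LMS adaptive filter: x input samples, d desired signal, L taps."""
            w = np.zeros(L)              # weight vector W, driven toward Wopt
            y = np.zeros(len(x))
            for n in range(L, len(x)):
                xn = x[n - L:n][::-1]    # current data vector X(n)
                y[n] = w @ xn            # estimate of d(n)
                e = d[n] - y[n]          # estimation error
                w += 2.0 * mu * e * xn   # stochastic gradient step
            return w, y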

  14. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-11-18

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  15. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo

    2014-04-15

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  16. Direct comparison of two statistical methods for determination of evoked-potential thresholds

    NASA Astrophysics Data System (ADS)

    Langford, Ted L.; Patterson, James H., Jr.

    1994-07-01

    Several statistical procedures have been proposed as objective methods for determining evoked-potential thresholds. Data have been presented to support each of the methods, but there have not been direct comparisons using the same data. The goal of the present study was to evaluate correlation and variance ratio statistics using common data. A secondary goal was to evaluate the utility of a derived potential for determining thresholds. Chronic, bipolar electrodes were stereotaxically implanted in the inferior colliculi of six chinchillas. Evoked potentials were obtained at 0.25, 0.5, 1.0, 2.0, 4.0 and 8.0 kHz using 12-ms tone bursts and 12-ms tone bursts superimposed on 120-ms pedestal tones which were of the same frequency as the bursts, but lower in amplitude by 15 dB. Alternate responses were averaged in blocks of 200 to 4000 depending on the size of the response. Correlations were calculated for the pairs of averages. A response was deemed present if the correlation coefficient reached the 0.05 level of significance in 4000 or fewer averages. Threshold was defined as the mean of the level at which the correlation was significant and a level 5 dB below that at which it was not. Variance ratios were calculated as described by Elberling and Don (1984) using the same data. Averaged tone burst and tone burst-plus pedestal data were differenced and the resulting waveforms subjected to the same statistical analyses described above. All analyses yielded thresholds which were essentially the same as those obtained using behavioral methods. When the difference between stimulus durations is taken into account, however, evoked-potential methods produced lower thresholds than behavioral methods.

  17. Online Adaptive Replanning Method for Prostate Radiotherapy

    SciTech Connect

    Ahunbay, Ergun E.; Peng Cheng; Holmes, Shannon; Godley, Andrew; Lawton, Colleen; Li, X. Allen

    2010-08-01

    Purpose: To report the application of an adaptive replanning technique for prostate cancer radiotherapy (RT), consisting of two steps: (1) segment aperture morphing (SAM), and (2) segment weight optimization (SWO), to account for interfraction variations. Methods and Materials: The new 'SAM+SWO' scheme was retroactively applied to the daily CT images acquired for 10 prostate cancer patients on a linear accelerator and CT-on-Rails combination during the course of RT. Doses generated by the SAM+SWO scheme based on the daily CT images were compared with doses generated after patient repositioning using the current planning target volume (PTV) margin (5 mm, 3 mm toward rectum) and a reduced margin (2 mm), along with full reoptimization scans based on the daily CT images to evaluate dosimetry benefits. Results: For all cases studied, the online replanning method provided significantly better target coverage when compared with repositioning with reduced PTV (13% increase in minimum prostate dose) and improved organ sparing when compared with repositioning with regular PTV (13% decrease in the generalized equivalent uniform dose of rectum). The time required to complete the online replanning process was 6 ± 2 minutes. Conclusion: The proposed online replanning method can be used to account for interfraction variations for prostate RT with a practically acceptable time frame (5-10 min) and with significant dosimetric benefits. On the basis of this study, the developed online replanning scheme is being implemented in the clinic for prostate RT.

  18. [A cloud detection algorithm for MODIS images combining Kmeans clustering and multi-spectral threshold method].

    PubMed

    Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei

    2011-04-01

    An improved cloud detection method for MODIS images combining K-means clustering and a multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are initially categorized into two major classes by the K-means method. The first class includes clouds, smoke, and snow, and the second class includes vegetation, water, and land. Then a multi-spectral threshold detection is applied to the first class to eliminate interference such as smoke and snow. The method was tested with MODIS data at different times under different underlying-surface conditions. Visual inspection of the algorithm's performance showed that it can effectively detect small areas of cloud pixels and exclude interference from the underlying surface, which provides a good foundation for a subsequent fire detection approach.
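
    A two-stage sketch of this kind of mask, assuming scikit-learn and a (rows, cols, bands) reflectance/brightness-temperature array; the band indices and threshold values are placeholders, not the paper's calibrated tests.

        import numpy as np
        from sklearn.cluster import KMeans

        def cloud_mask(bands):
            pixels = bands.reshape(-1, bands.shape[-1])
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(pixels)
            # Stage 1: the brighter cluster holds the cloud/smoke/snow candidates
            bright_id = np.argmax([pixels[labels == k, 0].mean() for k in (0, 1)])
            candidate = labels == bright_id
            # Stage 2: multi-spectral tests prune smoke and snow (placeholder bands)
            refl_vis, bt_11um = pixels[:, 0], pixels[:, 1]
            cloud = candidate & (refl_vis > 0.3) & (bt_11um < 285.0)
            return cloud.reshape(bands.shape[:2])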

  19. Research on biochemical spectrum denoising based on a novel wavelet threshold function and an improved translation-invariance method

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Zeng, Lvming; Huang, Zhen; Huang, Shuanggen

    2008-12-01

    In this paper, an improved wavelet threshold denoising method combined with translation invariance (TI) is adopted to remove the noise present in biochemical spectra. Meanwhile, a novel wavelet threshold function and an optimal threshold determination algorithm are proposed. The new function is continuous and differentiable to high order; it can overcome the oscillation phenomena generated by the classical threshold functions and decrease the error of the reconstructed spectrum. It is therefore superior to frequency-domain filtering methods, the soft- and hard-threshold functions proposed by D.L. Donoho, and the semisoft-threshold function proposed by Gao, among others. The experimental results show that the improved TI wavelet threshold (TI-WT) denoising method can effectively eliminate the pseudo-Gibbs phenomena generated by the traditional wavelet thresholding method. At the same time, the improved wavelet threshold function and the TI-WT method yield a lower root-mean-square error (RMSE) and a higher signal-to-noise ratio (SNR) than frequency-domain filtering and classical soft- and hard-threshold denoising, with the SNR increasing from 17.3200 to 32.5609 and the RMSE decreasing from 4.0244 to 0.6257. Moreover, the improved denoising method not only makes the spectrum smooth, but also effectively preserves the edge characteristics of the original spectrum.
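
    For reference, the classical hard and soft threshold functions, together with one possible continuous, smoothly differentiable compromise (the exact improved function of the paper is not reproduced here):

        import numpy as np

        def hard_thresh(w, t):
            return np.where(np.abs(w) > t, w, 0.0)

        def soft_thresh(w, t):
            return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

        def smooth_thresh(w, t, k=10.0):
            """Continuous and differentiable: attenuates small coefficients and
            approaches the identity for |w| >> t (illustrative form only)."""
            return w * (1.0 - np.exp(-k * (np.abs(w) / t) ** 2))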

  20. Adaptive method for real-time gait phase detection based on ground contact forces.

    PubMed

    Yu, Lie; Zheng, Jianbin; Wang, Yang; Song, Zhengge; Zhan, Enqi

    2015-01-01

    A novel method is presented to detect real-time gait phases based on ground contact forces (GCFs) measured by force-sensitive resistors (FSRs). The traditional threshold method (TM) sets a threshold to divide the GCFs into on-ground and off-ground statuses. However, TM is neither adaptive nor real-time: the threshold setting is based on body weight or on the maximum and minimum GCFs in the gait cycles, so different walking conditions require different thresholds, and the maximum and minimum GCFs are only obtainable after data processing. Therefore, this paper proposes a proportion method (PM) that calculates the sums and proportions of the GCFs obtained from the FSRs. A gait analysis is then implemented by the proposed gait phase detection algorithm (GPDA). Finally, the PM reliability is determined by comparing the detection results between PM and TM. Experimental results demonstrate that the proposed PM is highly reliable under all walking conditions. In addition, PM can be utilized to analyze gait phases in real time. Finally, PM exhibits strong adaptability to different walking conditions.
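
    One plausible reading of the proportion idea, as a sketch: each sensor's share of the summed GCF decides its on-ground status, so no absolute (weight-dependent) threshold is needed. The sensor layout, fraction, and phase labels are assumptions, not the published GPDA.

        def gait_phase(gcf, frac=0.05):
            """gcf: single-sample force readings, e.g. {"heel": 5.2, "toe": 0.1}."""
            total = sum(gcf.values()) + 1e-9
            heel_on = gcf["heel"] / total > frac   # proportion, not absolute force
            toe_on = gcf["toe"] / total > frac
            if heel_on and toe_on:
                return "foot-flat"
            if heel_on:
                return "heel-strike"
            if toe_on:
                return "heel-off"
            return "swing"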

  1. A multi-threshold sampling method for TOF PET signal processing

    SciTech Connect

    Kim, Heejong; Kao, Chien-Min; Xie, Q.; Chen, Chin-Tu; Zhou, L.; Tang, F.; Frisch, Henry; Moses, William W.; Choong, Woon-Seng

    2009-02-02

    As an approach to realizing all-digital data acquisition for positron emission tomography (PET), we have previously proposed and studied a multi-threshold sampling method to generate samples of a PET event waveform with respect to a few user-defined amplitudes. In this sampling scheme, one can extract both the energy and timing information for an event. In this paper, we report our prototype implementation of this sampling method and the performance results obtained with this prototype. The prototype consists of two multi-threshold discriminator boards and a time-to-digital converter (TDC) board. Each of the multi-threshold discriminator boards takes one input and provides up to 8 threshold levels, which can be defined by users, for sampling the input signal. The TDC board employs the CERN HPTDC chip that determines the digitized times of the leading and falling edges of the discriminator output pulses. We connect our prototype electronics to the outputs of two Hamamatsu R9800 photomultiplier tubes (PMTs) that are individually coupled to a 6.25 x 6.25 x 25 mm³ LSO crystal. By analyzing waveform samples generated by using four thresholds, we obtain a coincidence timing resolution of about 340 ps and an ≈18% energy resolution at 511 keV. We are also able to estimate the decay-time constant from the resulting samples and obtain a mean value of 44 ns with an ≈9 ns FWHM. In comparison, using digitized waveforms obtained at a 20 GSps sampling rate for the same LSO/PMT modules we obtain ≈300 ps coincidence timing resolution, ≈14% energy resolution at 511 keV, and ≈5 ns FWHM for the estimated decay-time constant. Details of the results on the timing and energy resolutions by using the multi-threshold method indicate that it is a promising approach for implementing digital PET data acquisition.
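
    As a sketch of how timing can be recovered from such samples: fit the leading-edge (time, threshold-amplitude) pairs with a line and extrapolate to zero amplitude. The pulse-model details used for energy estimation are omitted.

        import numpy as np

        def leading_edge_time(times, levels):
            """times: leading-edge crossing times for each threshold;
            levels: the corresponding user-defined threshold amplitudes."""
            slope, intercept = np.polyfit(times, levels, 1)
            return -intercept / slope   # time at which the fitted edge crosses zero

        # Energy can likewise be estimated by fitting a pulse model (e.g. a
        # one-sided exponential with the LSO decay constant) to all crossing pairs.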

  2. Critical review and hydrologic application of threshold detection methods for the generalized Pareto (GP) distribution

    NASA Astrophysics Data System (ADS)

    Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto

    2016-04-01

    Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and goodness-of-fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, each with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape-parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e. on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the
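
    A minimal version of the GoF strategy, assuming SciPy: scan candidate thresholds upward and return the lowest u at which a GP fit to the excesses passes a Kolmogorov-Smirnov test (a rough screen, since the test ignores that the parameters were fitted from the same data).

        import numpy as np
        from scipy import stats

        def gp_threshold(daily_rain, candidates=np.arange(1.0, 30.0, 0.5), alpha=0.05):
            for u in candidates:
                excess = daily_rain[daily_rain > u] - u
                if excess.size < 50:                       # too few excesses to fit
                    break
                c, loc, scale = stats.genpareto.fit(excess, floc=0.0)
                _, pval = stats.kstest(excess, "genpareto", args=(c, loc, scale))
                if pval > alpha:                           # GP model not rejected
                    return u
            return None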

  3. Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis.

    PubMed Central

    Sieracki, M E; Reichenbach, S E; Webb, K L

    1989-01-01

    The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and

  4. Fast Hearing-Threshold Estimation Using Multiple Auditory Steady-State Responses with Narrow-Band Chirps and Adaptive Stimulus Patterns

    PubMed Central

    Mühler, Roland; Mentzel, Katrin; Verhey, Jesko

    2012-01-01

    This paper describes the estimation of hearing thresholds in normal-hearing and hearing-impaired subjects on the basis of multiple-frequency auditory steady-state responses (ASSRs). The ASSR was measured using two new techniques: (i) adaptive stimulus patterns and (ii) narrow-band chirp stimuli. ASSR thresholds in 16 normal-hearing and 16 hearing-impaired adults were obtained simultaneously at both ears at 500, 1000, 2000, and 4000 Hz, using a multiple-frequency stimulus built up of four one-octave-wide narrow-band chirps with a repetition rate of 40 Hz. A statistical test in the frequency domain was used to detect the response. The recording of the steady-state responses was controlled in eight independent recording channels with an adaptive, semiautomatic algorithm. The average differences between the behavioural hearing thresholds and the ASSR threshold estimate were 10, 8, 13, and 15 dB for test frequencies of 500, 1000, 2000, and 4000 Hz, respectively. The average overall test duration of 18.6 minutes for the threshold estimations at the four frequencies and both ears demonstrates the benefit of an adaptive recording algorithm and the efficiency of optimised narrow-band chirp stimuli. PMID:22619622

  5. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.

    PubMed

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-06-01

    In femtosecond laser ophthalmic surgery, tissue dissection is achieved by photodisruption based on laser-induced optical breakdown. In order to minimize collateral damage to the eye, laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile, which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as an eye model and determined the breakdown threshold in single-pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wavefront error of only one third of the wavelength, the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.

  6. Adaptive numerical methods for partial differential equations

    SciTech Connect

    Colella, P.

    1995-07-01

    This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.

  7. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-01

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
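
    A simplified Python sketch of the segment-statistics idea behind SFT (an illustration of the concept, not the published algorithm; the tile size, the background rule, and the factor k are assumptions): tile the image, treat the least-variable tiles as background, and derive the signal threshold from the background statistics.

      import numpy as np

      def segment_and_fit_threshold(image, tile=16, k=3.0):
          # Collect per-tile statistics.
          h, w = image.shape
          means, stds = [], []
          for i in range(0, h - tile + 1, tile):
              for j in range(0, w - tile + 1, tile):
                  block = image[i:i + tile, j:j + tile]
                  means.append(block.mean())
                  stds.append(block.std())
          means, stds = np.array(means), np.array(stds)
          # Background tiles: the half with the smallest variability.
          bg = stds <= np.median(stds)
          threshold = means[bg].mean() + k * stds[bg].mean()
          return image >= threshold   # boolean signal mask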

  8. Threshold-free method for three-dimensional segmentation of organelles

    NASA Astrophysics Data System (ADS)

    Chan, Yee-Hung M.; Marshall, Wallace F.

    2012-03-01

    An ongoing challenge in the field of cell biology is how to quantify the size and shape of organelles within cells. Automated image analysis methods often utilize thresholding for segmentation, but the calculated surface of objects depends sensitively on the exact threshold value chosen, and this problem is generally worse at the upper and lower z boundaries because of the anisotropy of the point spread function. We present here a threshold-independent method for extracting the three-dimensional surface of vacuoles in budding yeast whose limiting membranes are labeled with a fluorescent fusion protein. These organelles typically exist as a clustered set of 1-10 sphere-like compartments. Vacuole compartments and center points are identified manually within z-stacks taken using a spinning disk confocal microscope. A set of rays is defined originating from each center point and radiating outwards in random directions. Intensity profiles are calculated at coordinates along these rays, and intensity maxima are taken as the points where the rays cross the limiting membrane of the vacuole. These points are then fit with a weighted sum of basis functions to define the surface of the vacuole, from which parameters such as volume and surface area are calculated. This method is able to determine the volume and surface area of spherical beads (0.96 to 2 micron diameter) with less than 10% error, and validation using model convolution methods produces similar results. Thus, this method provides an accurate, automated method for measuring the size and morphology of organelles and can be generalized to measure cells and other objects on biologically relevant length-scales.
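
    A minimal Python sketch of the ray-casting step, assuming a 3-D numpy z-stack and scipy for interpolation (the function name and parameters are illustrative, and the weighted basis-function surface fit is omitted): rays radiate from the chosen center, intensity profiles are sampled along them, and the radius of the intensity maximum on each ray is taken as the membrane crossing.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def membrane_radii(stack, center, n_rays=200, r_max=30.0, n_samples=120):
          # stack is indexed (z, y, x); center is given in the same order.
          rng = np.random.default_rng(0)
          v = rng.normal(size=(n_rays, 3))            # random directions
          v /= np.linalg.norm(v, axis=1, keepdims=True)
          radii = np.linspace(0.0, r_max, n_samples)
          # Sample coordinates of every point on every ray.
          pts = (np.asarray(center, float)[:, None, None]
                 + v.T[:, :, None] * radii[None, None, :])
          prof = map_coordinates(stack, pts.reshape(3, -1), order=1)
          prof = prof.reshape(n_rays, n_samples)
          # Radius of the intensity maximum = membrane crossing per ray.
          return radii[np.argmax(prof, axis=1)]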

  9. Air and Bone Conduction Click and Tone-burst Auditory Brainstem Thresholds using Kalman Adaptive Processing in Non-sedated Normal Hearing Infants

    PubMed Central

    Elsayed, Alaaeldin M.; Hunter, Lisa L.; Keefe, Douglas H.; Feeney, M. Patrick; Brown, David K.; Meinzen-Derr, Jareen K.; Baroch, Kelly; Sullivan-Mahoney, Maureen; Francis, Kara; Schaid, Leigh G.

    2015-01-01

    Objective To study normative thresholds and latencies for click and tone-burst auditory brainstem response (TB-ABR) for air and bone conduction in normal infants and those discharged from neonatal intensive care units (NICU), who passed newborn hearing screening and follow-up DPOAE. An evoked potential system (Vivosonic Integrity™) that incorporates Bluetooth electrical isolation and Kalman-weighted adaptive processing to improve signal-to-noise ratios was employed for this study. Results were compared with other published data. Research Design One hundred forty-five infants who passed two-stage hearing screening with transient-evoked otoacoustic emission (OAE) or automated ABR were assessed with clicks at 70 dB nHL and threshold TB-ABR. Tone-bursts at frequencies between 500 and 4000 Hz were employed for air and bone conduction ABR testing using a specified staircase threshold search to establish threshold levels and Wave V peak latencies. Results Median air conduction hearing thresholds using TB-ABR ranged from 0 to 20 dB nHL, depending on stimulus frequency. Median bone conduction thresholds were 10 dB nHL across all frequencies, and median air-bone gaps were 0 dB across all frequencies. There was no significant threshold difference between left and right ears and no significant relationship between thresholds and hearing loss risk factors, ethnicity, or gender. Older age was related to decreased latency for air conduction. Compared to previous studies, mean air conduction thresholds were found at slightly lower (better) levels, while bone conduction levels were better at 2000 Hz and higher at 500 Hz. Latency values were longer at 500 Hz than in previous studies using other instrumentation. Sleep state did not affect air or bone conduction thresholds. Conclusions This study demonstrated slightly better Wave V thresholds for air conduction than previous infant studies. The differences found in the current study, while statistically significant, were within the test

  10. Non-parametric permutation thresholding for adaptive nonlinear beamformer analysis on MEG revealed oscillatory neuronal dynamics in human brain.

    PubMed

    Ishii, Ryouhei; Canuet, Leonides; Aoki, Yasunori; Ikeda, Shunichiro; Hata, Masahiro; Iwase, Masao; Takeda, Masatoshi

    2013-01-01

    The adaptive nonlinear beamformer technique for analyzing magnetoencephalography (MEG) data has proved to be a powerful tool for both brain research and clinical applications. A general method of analyzing multiple-subject data with a formal statistical treatment for the group data has been developed and applied to various types of MEG data. Our latest application of this method was to the frontal midline theta rhythm (Fmθ), which indicates focused attention and appears widely distributed over medial prefrontal areas in EEG recordings. To precisely localize cortical generators of the magnetic counterpart of Fmθ and identify cortical sources and underlying neural activity associated with mental calculation processing (i.e., arithmetic subtraction), we applied an adaptive nonlinear beamformer and permutation analysis to MEG data. The results indicated that Fmθ is generated in the dorsal anterior cingulate and adjacent medial prefrontal cortex. Gamma event-related synchronization serves as an index of activation in right parietal regions subserving mental subtraction associated with basic numerical processing and number-based spatial attention. Gamma desynchronization appeared in the right lateral prefrontal cortex, likely representing a mechanism to interrupt neural activity that can interfere with the ongoing cognitive task. We suggest that the combination of adaptive nonlinear beamforming and permutation analysis on MEG data is a powerful tool to reveal oscillatory neuronal dynamics in the human brain. PMID:24110810

  11. Principles and Methods of Adapted Physical Education.

    ERIC Educational Resources Information Center

    Arnheim, Daniel D.; And Others

    Programs in adapted physical education are presented, preceded by a background of services for the handicapped, the psychosocial implications of disability, and the growth and development of the handicapped. Elements of conducting programs discussed are organization and administration, class organization, facilities, exercise programs…

  12. Adaptive method for electron bunch profile prediction

    SciTech Connect

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using MATLAB and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for the prediction of bunch profiles, as well as other beam parameters whose precise control is important for the plasma wakefield acceleration experiments being explored at FACET. © 2015 authors. Published by the American Physical Society.

  13. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using a solution-adaptive finite element method for linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of validating the application of the adaptive finite element methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  14. A simple method to estimate threshold friction velocity of wind erosion in the field

    NASA Astrophysics Data System (ADS)

    Li, Junran; Okin, Gregory S.; Herrick, Jeffrey E.; Belnap, Jayne; Munson, Seth M.; Miller, Mark E.

    2010-05-01

    This study provides a fast and easy-to-apply method to estimate the threshold friction velocity (TFV) of wind erosion in the field. Wind tunnel experiments and a variety of ground measurements including air gun, pocket penetrometer, torvane, and roughness chain were conducted in Moab, Utah and cross-validated in the Mojave Desert, California. Patterns between TFV and ground measurements were examined to identify the optimum method for estimating TFV. The results show that TFVs were best predicted using the air gun and penetrometer measurements in the Moab sites. This empirical method, however, systematically underestimated TFVs in the Mojave Desert sites. Further analysis showed that TFVs in the Mojave sites can be satisfactorily estimated with a correction for rock cover, which is presumably the main cause of the underestimation of TFVs. The proposed method may also be applied to estimate TFVs in environments where other non-erodible elements such as postharvest residuals are found.
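
    A hedged Python sketch of this kind of empirical calibration, assuming paired field measurements are available (the linear model and the multiplicative rock-cover correction are assumptions for illustration; the paper does not specify this exact functional form).

      import numpy as np

      def fit_tfv_model(air_gun, penetrometer, tfv_measured):
          # Least-squares fit of TFV against the two field measurements.
          air_gun = np.asarray(air_gun, float)
          penetrometer = np.asarray(penetrometer, float)
          X = np.column_stack([np.ones_like(air_gun), air_gun, penetrometer])
          coef, *_ = np.linalg.lstsq(X, np.asarray(tfv_measured, float),
                                     rcond=None)
          return coef

      def predict_tfv(coef, air_gun, penetrometer, rock_cover_frac=0.0, k=1.0):
          # Optional rock-cover correction (linear form is an assumption).
          base = coef[0] + coef[1] * air_gun + coef[2] * penetrometer
          return base * (1.0 + k * rock_cover_frac)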

  15. Outlier Measures and Norming Methods for Computerized Adaptive Tests.

    ERIC Educational Resources Information Center

    Bradlow, Eric T.; Weiss, Robert E.

    2001-01-01

    Compares four methods that map outlier statistics to a familiar probability scale (a "P" value). Explores these methods in the context of computerized adaptive test data from a 1995 nationally administered computerized examination for professionals in the medical industry. (SLD)

  16. Assessing Adaptive Instructional Design Tools and Methods in ADAPT[IT].

    ERIC Educational Resources Information Center

    Eseryel, Deniz; Spector, J. Michael

    ADAPT[IT] (Advanced Design Approach for Personalized Training - Interactive Tools) is a European project within the Information Society Technologies program that is providing design methods and tools to guide a training designer according to the latest cognitive science and standardization principles. ADAPT[IT] addresses users in two significantly…

  17. A Classification Method of Inquiry E-mails for Describing FAQ with Automatic Setting Mechanism of Judgment Thresholds

    NASA Astrophysics Data System (ADS)

    Tsuda, Yuki; Akiyoshi, Masanori; Samejima, Masaki; Oka, Hironori

    In this paper the authors propose a classification method of inquiry e-mails for describing FAQ (Frequently Asked Questions), together with an automatic setting mechanism for the judgment thresholds. In this method, a dictionary used for the classification of inquiries is generated and updated automatically from statistical information about characteristic words in clusters, and inquiries are classified correctly into the proper cluster by using the dictionary. Threshold values are set automatically by using this statistical information.
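
    A minimal Python sketch of the idea, assuming TF-IDF features from scikit-learn (the paper's dictionary construction is replaced here by cluster centroids, and the "mean minus k standard deviations" rule for the automatic threshold is an assumption for illustration).

      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      def build_classifier(clustered_mails, k=1.0):
          # clustered_mails: dict mapping FAQ cluster name -> list of e-mails.
          vec = TfidfVectorizer()
          vec.fit([t for texts in clustered_mails.values() for t in texts])
          centroids, thresholds = {}, {}
          for name, texts in clustered_mails.items():
              X = vec.transform(texts)
              c = np.asarray(X.mean(axis=0))
              sims = cosine_similarity(X, c).ravel()
              # Automatic threshold from within-cluster similarity statistics.
              centroids[name] = c
              thresholds[name] = sims.mean() - k * sims.std()
          return vec, centroids, thresholds

      def classify(mail, vec, centroids, thresholds):
          x = vec.transform([mail])
          best = max(centroids,
                     key=lambda n: cosine_similarity(x, centroids[n])[0, 0])
          sim = cosine_similarity(x, centroids[best])[0, 0]
          # Below the cluster's own threshold -> treat as a new inquiry type.
          return best if sim >= thresholds[best] else None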

  1. Adaptive computational methods for aerothermal heating analysis

    NASA Technical Reports Server (NTRS)

    Price, John M.; Oden, J. Tinsley

    1988-01-01

    The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations, with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high-speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme is presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.

  2. An adaptive pseudospectral method for discontinuous problems

    NASA Technical Reports Server (NTRS)

    Augenbaum, Jeffrey M.

    1988-01-01

    The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic PDEs by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.

  3. Adaptable radiation monitoring system and method

    DOEpatents

    Archer, Daniel E.; Beauchamp, Brock R.; Mauger, G. Joseph; Nelson, Karl E.; Mercer, Michael B.; Pletcher, David C.; Riot, Vincent J.; Schek, James L.; Knapp, David A.

    2006-06-20

    A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma radiation and coupled to a multichannel analyzer (MCA) capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.

  4. Simultaneous seismic data interpolation and denoising with a new adaptive method based on dreamlet transform

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Wu, Ru-Shan; Chen, Xiaohong; Li, Jingye

    2015-05-01

    Interpolation and random noise removal are prerequisites for multichannel techniques because irregularity and random noise in the observed data can degrade their performance. The Projection Onto Convex Sets (POCS) method handles seismic data interpolation well when the data's signal-to-noise ratio (SNR) is high, but it has difficulty in noisy situations because it reinserts the noisy observed seismic data in each iteration. The weighted POCS method can weaken the noise effects, but its performance depends on the choice of weight factors and is still unsatisfactory. Thus, a new weighted POCS method is derived through the Iterative Hard Threshold (IHT) view, and, in order to eliminate random noise, a new adaptive method is proposed to achieve simultaneous seismic data interpolation and denoising based on the dreamlet transform. The performances of the POCS method, the weighted POCS method, and the proposed method are compared for simultaneous seismic data interpolation and denoising, and the comparison demonstrates the validity of the proposed method. The recovered SNRs confirm that the proposed adaptive method is the most effective among the three methods. Numerical examples on synthetic and real data demonstrate the validity of the proposed adaptive method.
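
    A minimal Python sketch of POCS-style interpolation with iterative hard thresholding, using a 2-D FFT as a stand-in for the dreamlet transform of the paper (the decaying percentile threshold schedule is an assumption for illustration).

      import numpy as np

      def pocs_interpolate(data, mask, n_iter=50, p_max=99.0, p_min=50.0):
          # data: section with zeros at dead traces; mask: 1 observed, 0 missing.
          x = data.copy()
          for it in range(n_iter):
              coef = np.fft.fft2(x)
              mag = np.abs(coef)
              # Threshold decays from a high to a low percentile of |coef|.
              p = p_max + (p_min - p_max) * it / max(n_iter - 1, 1)
              coef[mag < np.percentile(mag, p)] = 0.0   # hard threshold
              x = np.real(np.fft.ifft2(coef))
              x = mask * data + (1 - mask) * x          # reinsert observations
          return x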

  5. Moving and adaptive grid methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Trepanier, Jean-Yves; Camarero, Ricardo

    1995-01-01

    This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.

  6. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.

  7. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  8. Determining electrically evoked compound action potential thresholds: A comparison of computer versus human analysis methods

    PubMed Central

    Glassman, E. Katelyn; Hughes, Michelle L.

    2012-01-01

    Objectives Current cochlear implants (CIs) have telemetry capabilities for measuring the electrically evoked compound action potential (ECAP). Neural Response Telemetry (NRT™; Cochlear) and Neural Response Imaging (NRI; Advanced Bionics [AB]) can measure ECAP responses across a range of stimulus levels to obtain an amplitude growth function. Software-specific algorithms automatically mark the leading negative peak, N1, and the following positive peak/plateau, P2, and apply linear regression to estimate ECAP threshold. Alternatively, clinicians may apply expert judgments to modify the peak markers placed by the software algorithms, and/or use visual detection to identify the lowest level yielding a measurable ECAP response. The goals of this study were to: (1) assess the variability between human and computer decisions for (a) marking N1 and P2, and (b) determination of linear regression threshold (LRT) and visual detection threshold (VDT); and (2) compare LRT and VDT methods within and across human and computer decision methods. Design ECAP amplitude growth functions were measured for three electrodes in each of 20 ears (10 Cochlear Nucleus® 24RE/CI512, and 10 AB CII/90K). LRT, defined as the current level yielding an ECAP with zero amplitude, was calculated for both computer- (C-LRT) and human-picked peaks (H-LRT). VDT, defined as the lowest level resulting in a measurable ECAP response, was also calculated for both computer- (C-VDT) and human-picked peaks (H-VDT). Because NRI assigns peak markers to all waveforms but does not include waveforms with amplitudes less than 20 μV in its regression calculation, C-VDT for AB subjects was defined as the lowest current level yielding an amplitude ≥20 μV. Results Overall, there were significant correlations between human and computer decisions for peak-marker placement, LRT, and VDT for both manufacturers (r = 0.78 to 1.00, p < 0.001). For Cochlear devices, LRT and VDT correlated equally well for both computer- and
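
    A hedged Python sketch of the two threshold definitions compared in the study, assuming an amplitude growth function has already been measured (the function name and the 20 microvolt floor applied here to both devices are illustrative).

      import numpy as np

      def ecap_thresholds(levels, amplitudes, floor_uv=20.0):
          # levels: stimulus current levels; amplitudes: N1-P2 amplitudes (uV).
          levels = np.asarray(levels, float)
          amplitudes = np.asarray(amplitudes, float)
          # LRT: level at which the regression line crosses zero amplitude.
          slope, intercept = np.polyfit(levels, amplitudes, 1)
          lrt = -intercept / slope
          # VDT-style threshold: lowest level with a measurable response.
          measurable = levels[amplitudes >= floor_uv]
          vdt = measurable.min() if measurable.size else np.nan
          return lrt, vdt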

  9. Investigation of Adaptive-threshold Approaches for Determining Area-Time Integrals from Satellite Infrared Data to Estimate Convective Rain Volumes

    NASA Technical Reports Server (NTRS)

    Smith, Paul L.; VonderHaar, Thomas H.

    1996-01-01

    The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) one attempt involved choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) another attempt involved a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.
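
    A minimal Python sketch of the screening analysis for a fixed offset, assuming per-cluster IR frame sequences and radar rain volumes are available (the base threshold, pixel area, and frame interval below are illustrative assumptions).

      import numpy as np

      def ati(ir_frames, threshold_k, pixel_area_km2, dt_hours):
          # Area-time integral: cold-cloud area integrated over the sequence.
          areas = [(f <= threshold_k).sum() * pixel_area_km2 for f in ir_frames]
          return sum(areas) * dt_hours

      def best_fixed_offset(cases, base_k=235.0, offsets=np.arange(-15.0, 16.0)):
          # cases: list of (ir_frames, rain_volume) pairs, one per cloud cluster.
          volumes = np.array([v for _, v in cases])
          best_off, best_r = None, -np.inf
          for off in offsets:
              atis = np.array([ati(f, base_k + off, 16.0, 0.5)
                               for f, _ in cases])
              r = np.corrcoef(atis, volumes)[0, 1]
              if r > best_r:
                  best_off, best_r = off, r
          return best_off, best_r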

  10. Identification of nonlinear optical systems using adaptive kernel methods

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Changjiang; Zhang, Haoran; Feng, Genliang; Xu, Xiuling

    2005-12-01

    An identification approach for nonlinear optical dynamic systems, based on adaptive kernel methods which are a modified version of the least squares support vector machine (LS-SVM), is presented in order to obtain the reference dynamic model for solving real-time applications such as adaptive signal processing of optical systems. The feasibility of this approach is demonstrated in computer simulation by identifying a Bragg acousto-optical bistable system. Unlike artificial neural networks, the adaptive kernel methods possess prominent advantages: overfitting is unlikely to occur when the structural risk minimization criterion is employed, and the global optimal solution can be uniquely obtained because training is performed through the solution of a set of linear equations. The adaptive kernel methods also remain effective for nonlinear optical systems when a system parameter varies. The method is robust with respect to noise, and it constitutes another powerful tool for the identification of nonlinear optical systems.

  11. Assessing threshold values for eutrophication management using Bayesian method in Yuqiao Reservoir, North China.

    PubMed

    Li, Xue; Xu, Yuan; Zhao, Gang; Shi, Chunli; Wang, Zhong-Liang; Wang, Yuqiu

    2015-04-01

    The eutrophication problem of a drinking water source is directly related to the security of the urban water supply, and phosphorus has been proved to be an important element for the water quality of most northern hemisphere lakes and reservoirs. In this paper, 15 years of monitoring records (1990∼2004) of Yuqiao Reservoir were used to model the changing trend of total phosphorus (TP), analyze the uncertainty of nutrient parameters, and estimate the threshold for eutrophication management at a specific water quality goal by applying a Bayesian method through a chemical material balance (CMB) model. The results revealed that Yuqiao Reservoir is a P-controlled water ecosystem, and that the concentration of TP in the reservoir was significantly correlated with the TP loading concentration, the hydraulic retention coefficient, and the bottom-water dissolved oxygen concentration. With the water quality goal for TP in the reservoir set to 0.05 mg L(-1) (the third level of the national surface water standard for reservoirs according to GB3838-2002), management measures could be taken to improve water quality in the reservoir by controlling the highest inflow phosphorus concentration (0.15∼0.21 mg L(-1)) and the lowest DO concentration (3.76∼5.59 mg L(-1)) to the threshold. An inverse method was applied to evaluate the joint management measures, and the results revealed that controlling the lowest dissolved oxygen concentration and adjusting the inflow and outflow of the reservoir is a valuable measure to avoid eutrophication.

  12. Adaptive upscaling with the dual mesh method

    SciTech Connect

    Guerillot, D.; Verdiere, S.

    1997-08-01

    The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity of considering different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases, because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.

  13. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time-step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in the accurate prediction of damage levels and failure time.

  14. Space Systems - Safety and Compatibility of Materials - Method to Determine the Flammability Thresholds of Materials

    NASA Technical Reports Server (NTRS)

    Hirsch, David

    2009-01-01

    Spacecraft fire safety emphasizes fire prevention, which is achieved primarily through the use of fire-resistant materials. Materials selection for spacecraft is based on conventional flammability acceptance tests, along with prescribed quantity limitations and configuration control for items that are non-pass or questionable. ISO 14624-1 and -2 are the major methods used to evaluate flammability of polymeric materials intended for use in the habitable environments of spacecraft. The methods are upward flame-propagation tests initiated in static environments and using a well-defined igniter flame at the bottom of the sample. The tests are conducted in the most severe flaming combustion environment expected in the spacecraft. The pass/fail test logic of ISO 14624-1 and -2 does not allow a quantitative comparison with reduced gravity or microgravity test results; therefore their use is limited, and possibilities for in-depth theoretical analyses and realistic estimates of spacecraft fire extinguishment requirements are practically eliminated. To better understand the applicability of laboratory test data to actual spacecraft environments, a modified ISO 14624 protocol has been proposed that, as an alternative to qualifying materials as pass/fail in the worst-expected environments, measures the actual upward flammability limit for the material. A working group established by NASA to provide recommendations for exploration spacecraft internal atmospheres realized the importance of correlating laboratory data with real-life environments and recommended that NASA develop a flammability threshold test method. The working group indicated that, for the Constellation Program, the flammability threshold information will allow NASA to identify materials with increased flammability risk from oxygen concentration and total pressure changes, minimize potential impacts, and allow for the development of sound requirements for new spacecraft and extravehicular landers and habitats.

  15. Adaptive Transmission Control Method for Communication-Broadcasting Integrated Services

    NASA Astrophysics Data System (ADS)

    Koto, Hideyuki; Furuya, Hiroki; Nakamura, Hajime

    This paper proposes an adaptive transmission control method for massive and intensive telecommunication traffic generated by communication-broadcasting integrated services. The proposed method adaptively controls data transmissions from viewers depending on the congestion states, so that severe congestion can be effectively avoided. Furthermore, it utilizes the broadcasting channel which is not only scalable, but also reliable for controlling the responses from vast numbers of viewers. The performance of the proposed method is evaluated through experiments on a test bed where approximately one million viewers are emulated. The obtained results quantitatively demonstrate the performance of the proposed method and its effectiveness under massive and intensive traffic conditions.

  16. An auto-adaptive background subtraction method for Raman spectra

    NASA Astrophysics Data System (ADS)

    Xie, Yi; Yang, Lidong; Sun, Xilong; Wu, Dewen; Chen, Qizhen; Zeng, Yongming; Liu, Guokun

    2016-05-01

    Background subtraction is a crucial step in the preprocessing of Raman spectra. Usually, manual tuning of the background subtraction method's parameters is necessary for efficient removal of the background, which makes the quality of the resulting spectrum dependent on the operator's experience. In order to avoid such artificial bias, we propose an auto-adaptive background subtraction method that requires no parameter adjustment. The main procedure is: (1) select the local minima of the spectrum while preserving major peaks, (2) apply an interpolation scheme to estimate the background, and (3) design an iteration scheme to improve the adaptability of the background subtraction. Both simulated data and measured Raman spectra have been used to evaluate the proposed method. Compared with the backgrounds obtained from three widely applied methods (the polynomial, Baek's, and airPLS methods), the auto-adaptive method meets the demands of practical applications in terms of efficiency and accuracy.
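
    A hedged Python sketch of the local-minima/interpolation/iteration procedure, assuming numpy and scipy (the minima window, the monotone interpolant, and the fixed iteration count are assumptions; the published method sets these adaptively).

      import numpy as np
      from scipy.signal import argrelmin
      from scipy.interpolate import PchipInterpolator

      def auto_background(wavenumber, intensity, n_iter=10):
          # Assumes wavenumber is strictly increasing.
          bg = intensity.astype(float).copy()
          for _ in range(n_iter):
              idx = argrelmin(bg, order=5)[0]
              # Keep the endpoints so the interpolant spans the whole axis.
              idx = np.unique(np.concatenate(([0], idx, [len(bg) - 1])))
              f = PchipInterpolator(wavenumber[idx], bg[idx])
              # Each pass strips peaks while following the smooth baseline.
              bg = np.minimum(bg, f(wavenumber))
          return intensity - bg, bg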

  17. Standardised method of determining vibratory perception thresholds for diagnosis and screening in neurological investigation.

    PubMed Central

    Goldberg, J M; Lindblom, U

    1979-01-01

    Vibration threshold determinations were made by means of an electromagnetic vibrator at three sites (carpal, tibial, and tarsal), which were primarily selected for examining patients with polyneuropathy. Because of the vast variation demonstrated for both vibrator output and tissue damping, the thresholds were expressed in terms of the amplitude of stimulator movement measured by means of an accelerometer, instead of the applied voltage which is commonly used. Statistical analysis revealed a higher power of discrimination for amplitude measurements at all three stimulus sites. Digital read-out gave the best statistical result and was also most practical. Reference values obtained from 110 healthy males, 10 to 74 years of age, were highly correlated with age for both upper and lower extremities. The variance of the vibration perception threshold was less than that of the disappearance threshold, and determination of the perception threshold alone may be sufficient in most cases. PMID:501379

  18. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDEs) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAGs using the stability results. Using these estimates, the error can be controlled on CAGs. Thus, the solution can be computed efficiently on CAGs within a given error tolerance. Computational results for time-dependent linear problems in one and two space dimensions are presented.

  19. A new threshold selection method for peaks over threshold for nonstationary time series

    NASA Astrophysics Data System (ADS)

    Zhou, C. R.; Chen, Y. F.; Gu, S. H.; Huang, Q.; Yuan, J. C.; Yu, S. N.

    2016-08-01

    In the context of global climate change, human activities dramatically damage the consistency of hydrological time series. Peak Over Threshold (POT) series have become an alternative to the traditional Annual Maximum series, but they are still underutilized due to their complexity. Most of the literature on POT has tended to employ only one threshold, regardless of the nonstationarity of the whole series. Obviously, it is unwise to ignore the fact that a hydrological time series may no longer be a stationary stochastic process. Hence, in this paper, we take the daily runoff time series of the Yichang gauge station on the Yangtze River in China as an example and try to shed light on the selection of the threshold given the nonstationarity of the time series. The Mann-Kendall test is applied to detect the change points; then, different thresholds are assigned to the sub-series according to the change points. Comparing the goodness-of-fit of the series with one threshold and with several thresholds clearly shows that the series that employs different thresholds performs much better than the one that fixes a single threshold during the selection of the peak events.
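
    A minimal Python sketch of the split-then-threshold idea, with a rank-based Pettitt test standing in for the paper's Mann-Kendall change-point detection and a high quantile of each sub-series as its POT threshold (both substitutions are assumptions for illustration).

      import numpy as np

      def pettitt_change_point(x):
          # Index that most likely splits the series into two differently
          # distributed parts (rank-based shift detection).
          x = np.asarray(x, float)
          n = len(x)
          r = np.argsort(np.argsort(x)) + 1               # ranks 1..n
          u = 2 * np.cumsum(r)[:-1] - np.arange(1, n) * (n + 1)
          return int(np.argmax(np.abs(u))) + 1            # split point

      def segment_pot_thresholds(series, q=0.95):
          # One POT threshold per sub-series, here its q-quantile.
          series = np.asarray(series, float)
          cp = pettitt_change_point(series)
          return [(0, cp, np.quantile(series[:cp], q)),
                  (cp, len(series), np.quantile(series[cp:], q))]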

  20. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    NASA Astrophysics Data System (ADS)

    Bo, Wurigen; Shashkov, Mikhail

    2015-10-01

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35,34,6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. In the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way. In the current paper we present a new adaptive ReALE method, A-ReALE, that is based on the following design principles. First, a monitor function (or error indicator) based on the Hessian of some flow parameter(s) is utilized. Second, an equi-distribution principle for the monitor function is used as a criterion for adapting the mesh. Third, a centroidal Voronoi tessellation is used to adapt the mesh. Fourth, we scale the monitor function to avoid very small and large cells and then smooth it to permit the use of theoretical results related to weighted centroidal Voronoi tessellation. In the A-ReALE method, both the number of cells and their locations are allowed to change at the rezone stage on each time step. The number of generators at each time step is chosen to guarantee the required spatial resolution in regions where the monitor function reaches its maximum value. We present all details required for implementation of the new adaptive A-ReALE method and demonstrate its performance in comparison with the standard ReALE method on a series of numerical examples.

  1. Stepwise Threshold Clustering: A New Method for Genotyping MHC Loci Using Next-Generation Sequencing Technology

    PubMed Central

    Stutz, William E.; Bolnick, Daniel I.

    2014-01-01

    Genes of the vertebrate major histocompatibility complex (MHC) are of great interest to biologists because of their important role in immunity and disease, and their extremely high levels of genetic diversity. Next generation sequencing (NGS) technologies are quickly becoming the method of choice for high-throughput genotyping of multi-locus templates like MHC in non-model organisms. Previous approaches to genotyping MHC genes using NGS technologies suffer from two problems: 1) a "gray zone" where low frequency alleles and high frequency artifacts can be difficult to disentangle and 2) a similar sequence problem, where very similar alleles can be difficult to distinguish as two distinct alleles. Here we present a new method for genotyping MHC loci, Stepwise Threshold Clustering (STC), that addresses these problems by taking full advantage of the increase in sequence data provided by NGS technologies. Unlike previous approaches for genotyping MHC with NGS data that attempt to classify individual sequences as alleles or artifacts, STC uses a quasi-Dirichlet clustering algorithm to cluster similar sequences at increasing levels of sequence similarity. By applying frequency and similarity based criteria to clusters rather than individual sequences, STC is able to successfully identify clusters of sequences that correspond to individual or similar alleles present in the genomes of individual samples. Furthermore, STC does not require duplicate runs of all samples, increasing the number of samples that can be genotyped in a given project. We show how the STC method works using a single sample library. We then apply STC to 295 threespine stickleback (Gasterosteus aculeatus) samples from four populations and show that neighboring populations differ significantly in MHC allele pools. We show that STC is a reliable, accurate, efficient, and flexible method for genotyping MHC that will be of use to biologists interested in a variety of downstream applications. PMID

  2. Threshold detection for the generalized Pareto distribution: Review of representative methods and application to the NOAA NCDC daily rainfall database

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Mamalakis, Antonios; Puliga, Michelangelo; Deidda, Roberto

    2016-04-01

    In extreme excess modeling, one fits a generalized Pareto (GP) distribution to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as nonparametric methods that are intended to locate the changing point between extreme and nonextreme regions of the data, graphical methods where one studies the dependence of GP-related metrics on the threshold level u, and Goodness-of-Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GP distribution model is applicable. Here we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 over-centennial daily rainfall records from the NOAA-NCDC database. We find that nonparametric methods are generally not reliable, while methods that are based on GP asymptotic properties lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e., on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on preasymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results.
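
    A hedged Python sketch of a GoF-based threshold search of the kind reviewed, assuming scipy's generalized Pareto fit and a Kolmogorov-Smirnov test (the KS test with estimated parameters is only approximate, and the candidate grid is an assumption).

      import numpy as np
      from scipy import stats

      def lowest_gof_threshold(rain, candidates=None, alpha=0.05, min_excess=50):
          # Lowest threshold u whose GP fit to the excesses is not rejected.
          rain = np.asarray(rain, float)
          if candidates is None:
              candidates = np.quantile(rain[rain > 0],
                                       np.linspace(0.5, 0.98, 25))
          for u in candidates:
              exc = rain[rain > u] - u
              if exc.size < min_excess:
                  break                   # higher thresholds have even fewer
              c, loc, scale = stats.genpareto.fit(exc, floc=0.0)
              p = stats.kstest(exc, 'genpareto', args=(c, loc, scale)).pvalue
              if p > alpha:
                  return u, c, scale, p
          return None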

  3. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.

  4. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation.

    PubMed

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-01-01

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM, and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is
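
    For reference, a minimal Python sketch of the classical Otsu criterion that the paper improves upon (the paper's improvement itself is not reproduced here): the threshold maximizes the between-class variance of the histogram split.

      import numpy as np

      def otsu_threshold(image, nbins=256):
          hist, edges = np.histogram(image.ravel(), bins=nbins)
          p = hist.astype(float) / hist.sum()
          centers = 0.5 * (edges[:-1] + edges[1:])
          w0 = np.cumsum(p)                    # class-0 probability
          m = np.cumsum(p * centers)           # cumulative mean
          mg = m[-1]                           # global mean
          w1 = 1.0 - w0
          valid = (w0 > 0) & (w1 > 0)
          between = np.zeros_like(w0)
          between[valid] = ((mg * w0[valid] - m[valid]) ** 2
                            / (w0[valid] * w1[valid]))
          return centers[np.argmax(between)]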

  6. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation.

    PubMed

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-07-22

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimates of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is
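
    As a rough illustration of the pipeline described in this record, the sketch below combines a global Otsu threshold with contour detection and computes one centroid per large region as a candidate point landmark. It uses plain Otsu thresholding from OpenCV rather than the authors' improved Otsu TSM, and the input file name and area cutoff are hypothetical.

        import cv2

        img = cv2.imread("sonar_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical FLS/SSS frame
        img = cv2.GaussianBlur(img, (5, 5), 0)                     # suppress speckle noise

        # Global Otsu threshold (a stand-in for the paper's improved Otsu TSM)
        _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # Contour detection, then one centroid (point landmark) per sufficiently large region
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        landmarks = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 100:  # hypothetical minimum-area cutoff
                landmarks.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

    The resulting centroid list is what an AEKF-SLAM back end would ingest as point-landmark observations.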

  7. Automated Analysis of Human Sperm Number and Concentration (Oligospermia) Using Otsu Threshold Method and Labelling

    NASA Astrophysics Data System (ADS)

    Susrama, I. G.; Purnama, K. E.; Purnomo, M. H.

    2016-01-01

    Oligospermia is a male fertility issue defined as a low sperm concentration in the ejaculate. Normally the sperm concentration is 20-120 million/ml, while oligospermia patients have a sperm concentration of less than 20 million/ml. Sperm tests are done in the fertility laboratory to determine oligospermia by checking fresh sperm according to the 2010 WHO standards [9]. The sperm are viewed under a microscope using an improved Neubauer counting chamber, and the number of sperm is counted manually. To automate this count, this research built an automated system to analyse and count the sperm concentration, called Automated Analysis of Sperm Concentration Counters (A2SC2), using Otsu threshold segmentation and morphological processing. The data used were fresh sperm samples from 10 people, analysed directly in the laboratory. The test results using the A2SC2 method obtained an accuracy of 91%. Thus, in this study, A2SC2 can be used to calculate the number and concentration of sperm automatically.
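
    A minimal sketch of the counting stage (Otsu threshold, morphological clean-up, connected-component labelling) is given below with scikit-image. It is not the authors' A2SC2 system; the file name, size cutoff, and chamber volume are hypothetical.

        import numpy as np
        from skimage import io, filters, measure, morphology

        field = io.imread("sperm_field.png", as_gray=True)    # hypothetical micrograph
        mask = field > filters.threshold_otsu(field)          # assume bright sperm heads
        mask = morphology.remove_small_objects(mask, min_size=20)
        mask = morphology.binary_opening(mask)                # morphological clean-up

        labels = measure.label(mask)                          # connected-component labelling
        count = labels.max()                                  # one label per detected sperm

        # Concentration from counting-chamber geometry (hypothetical chamber volume)
        chamber_volume_ml = 1e-4
        concentration = count / chamber_volume_ml             # sperm per ml
        print(count, concentration)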

  8. A method for measurement of the bubble formation threshold in biological liquids.

    PubMed

    Bjorno, L; Kornum, L O; Krag, P; Nielsen, C H; Paulev, P E

    1977-06-01

    Liquid under pressure is saturated with a given gas, such as argon, nitrogen, or air, by circulation through a column of gas exchangers. A sample of the gas-saturated liquid is isolated in a test chamber, the volume of which can be increased by means of a moving piston. The piston motion is cyclical with a variable frequency. Pressure in the test chamber is measured by means of a capacitive pressure pick-up. When the volume increase of the gas-saturated liquid in the test chamber is compensated for by the development of gas phase bubbles, the pressure decrease will stop; the recording device will show a pressure plateau, or a dip in the pressure-time course, depending on the velocity of the growth of the bubbles. The bubble formation threshold was independent of the frequency of the piston movement within frequency limits from 1 Hz down to 10^-3 Hz. Most experiments were carried out at a single frequency of 0.5 Hz. This new method appears to have advantages over previous ones.

  9. Adaptive multiscale model reduction with Generalized Multiscale Finite Element Methods

    NASA Astrophysics Data System (ADS)

    Chung, Eric; Efendiev, Yalchin; Hou, Thomas Y.

    2016-09-01

    In this paper, we discuss a general multiscale model reduction framework based on multiscale finite element methods. We give a brief overview of related multiscale methods. Due to page limitations, the overview focuses on a few related methods and is not intended to be comprehensive. We present a general adaptive multiscale model reduction framework, the Generalized Multiscale Finite Element Method. Besides the method's basic outline, we discuss some important ingredients needed for the method's success. We also discuss several applications. The proposed method allows one to perform local model reduction in the presence of high contrast and without scale separation.

  10. Numerical simulation on the adaptation of forms in trabecular bone to mechanical disuse and basic multi-cellular unit activation threshold at menopause

    NASA Astrophysics Data System (ADS)

    Gong, He; Fan, Yubo; Zhang, Ming

    2008-04-01

    The objective of this paper is to identify the effects of mechanical disuse and the basic multi-cellular unit (BMU) activation threshold on the form of trabecular bone during menopause. A bone adaptation model with mechanical-biological factors at the BMU level was integrated with finite element analysis to simulate the changes of trabecular bone structure during menopause. Mechanical disuse and changes in the BMU activation threshold were applied to the model for the period from 4 years before to 4 years after menopause. The changes in bone volume fraction, trabecular thickness and fractal dimension of the trabecular structures were used to quantify the changes of trabecular bone in three different cases associated with mechanical disuse and the BMU activation threshold. It was found that the changes in the simulated bone volume fraction were highly correlated and consistent with clinical data, that the trabecular thickness reduced significantly during menopause and was highly linearly correlated with the bone volume fraction, and that the trend in the fractal dimension of the simulated trabecular structure corresponded with clinical observations. The numerical simulation in this paper may help to better understand the relationship between bone morphology and the mechanical as well as the biological environment, and can provide a quantitative computational model and methodology for the numerical simulation of bone structural morphological changes caused by the mechanical and/or biological environment.

  11. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    SciTech Connect

    Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.

    1998-12-10

    OAK-B135 Final Report: Symposium on Adaptive Methods for Partial Differential Equations. Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely, it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  12. Non-parametric bootstrapping method for measuring the temporal discrimination threshold for movement disorders

    NASA Astrophysics Data System (ADS)

    Butler, John S.; Molloy, Anna; Williams, Laura; Kimmich, Okka; Quinlivan, Brendan; O'Riordan, Sean; Hutchinson, Michael; Reilly, Richard B.

    2015-08-01

    Objective. Recent studies have proposed that the temporal discrimination threshold (TDT), the shortest detectable time period between two stimuli, is a possible endophenotype for adult onset idiopathic isolated focal dystonia (AOIFD). Patients with AOIFD, the third most common movement disorder, and their first-degree relatives have been shown to have abnormal visual and tactile TDTs. For this reason it is important to fully characterize each participant’s data. To date the TDT has only been reported as a single value. Approach. Here, we fit individual participant data with a cumulative Gaussian to extract the mean and standard deviation of the distribution. The mean represents the point of subjective equality (PSE), the inter-stimulus interval at which participants are equally likely to respond that two stimuli are one stimulus (synchronous) or two different stimuli (asynchronous). The standard deviation represents the just noticeable difference (JND) which is how sensitive participants are to changes in temporal asynchrony around the PSE. We extended this method by submitting the data to a non-parametric bootstrapped analysis to get 95% confidence intervals on individual participant data. Main results. Both the JND and PSE correlate with the TDT value but are independent of each other. Hence this suggests that they represent different facets of the TDT. Furthermore, we divided groups by age and compared the TDT, PSE, and JND values. The analysis revealed a statistical difference for the PSE which was only trending for the TDT. Significance. The analysis method will enable deeper analysis of the TDT to leverage subtle differences within and between control and patient groups, not apparent in the standard TDT measure.
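
    The core of this analysis, fitting a cumulative Gaussian whose mean gives the PSE and whose standard deviation gives the JND, and bootstrapping confidence intervals non-parametrically, can be sketched as follows. The response data are invented for illustration, and scipy is used rather than the authors' exact pipeline.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        # Inter-stimulus intervals (ms) and proportion of "asynchronous" responses (hypothetical)
        isi = np.array([0, 15, 30, 45, 60, 90, 120], float)
        n_trials = 40
        p_async = np.array([0.05, 0.10, 0.25, 0.50, 0.70, 0.90, 0.97])

        def cum_gauss(x, pse, jnd):
            return norm.cdf(x, loc=pse, scale=jnd)

        popt, _ = curve_fit(cum_gauss, isi, p_async, p0=[45.0, 15.0],
                            bounds=([0.0, 1e-3], [200.0, 100.0]))

        # Non-parametric bootstrap: resample binomial responses at each ISI
        rng = np.random.default_rng(0)
        boot = []
        for _ in range(2000):
            resampled = rng.binomial(n_trials, p_async) / n_trials
            try:
                b, _ = curve_fit(cum_gauss, isi, resampled, p0=popt,
                                 bounds=([0.0, 1e-3], [200.0, 100.0]))
                boot.append(b)
            except Exception:
                continue
        boot = np.array(boot)
        ci_pse = np.percentile(boot[:, 0], [2.5, 97.5])   # 95% CI on the PSE
        ci_jnd = np.percentile(boot[:, 1], [2.5, 97.5])   # 95% CI on the JND
        print(popt, ci_pse, ci_jnd)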

  13. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2004-01-28

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  14. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2002-10-19

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  15. A fourth order accurate adaptive mesh refinement method for Poisson's equation

    SciTech Connect

    Barad, Michael; Colella, Phillip

    2004-08-20

    We present a block-structured adaptive mesh refinement (AMR) method for computing solutions to Poisson's equation in two and three dimensions. It is based on a conservative, finite-volume formulation of the classical Mehrstellen methods. This is combined with finite volume AMR discretizations to obtain a method that is fourth-order accurate in solution error, and with easily verifiable solvability conditions for Neumann and periodic boundary conditions.

  16. A prior-knowledge-based threshold segmentation method of forward-looking sonar images for underwater linear object detection

    NASA Astrophysics Data System (ADS)

    Liu, Lixin; Bian, Hongyu; Yagi, Shin-ichi; Yang, Xiaodong

    2016-07-01

    Raw sonar images may not be used for underwater detection or recognition directly, because disturbances such as grating lobes and multi-path propagation affect the gray-level distribution of sonar images and cause phantom echoes. To search for a more robust segmentation method with a reasonable computational cost, a prior-knowledge-based threshold segmentation method for underwater linear object detection is discussed. The possibility of guiding the segmentation threshold evolution of forward-looking sonar images using prior knowledge is verified by experiment. During the threshold evolution, the collinear relation of two lines that correspond to double peaks in the voting space of the edge image is used as the criterion of termination. The interaction is reflected in the sense that the Hough transform provides the basis of the collinear relation of lines, while the binary image generated from the current threshold provides the input to the Hough transform. The experimental results show that the proposed method maintains a good tradeoff between segmentation quality and computational time in comparison with conventional segmentation methods. The proposed method also lends itself to further processing for unsupervised underwater visual understanding.
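
    The threshold-evolution idea can be sketched as below: lower the threshold stepwise and stop when the Hough transform of the current binary image yields two strong peaks whose lines are nearly collinear. Near-equality of the two strongest peaks is a crude proxy for the paper's collinearity criterion, and the file name, starting threshold, step, and tolerances are hypothetical.

        import cv2
        import numpy as np

        img = cv2.imread("fls_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical FLS image

        def segment_with_line_prior(img, t_start=200, t_step=5):
            """Evolve the threshold until two near-collinear Hough peaks appear."""
            for t in range(t_start, 0, -t_step):
                _, binary = cv2.threshold(img, t, 255, cv2.THRESH_BINARY)
                edges = cv2.Canny(binary, 50, 150)
                lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
                if lines is not None and len(lines) >= 2:
                    (r1, th1), (r2, th2) = lines[0][0], lines[1][0]
                    if abs(th1 - th2) < 0.05 and abs(r1 - r2) < 10:  # near-collinear peaks
                        return binary, t
            return None, None

        binary, threshold = segment_with_line_prior(img)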

  17. Wavelet methods in multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Helin, T.; Yudytskiy, M.

    2013-08-01

    Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitations of atmospheric turbulence. In future adaptive optics modalities, such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on the locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory.
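
    The MAP reconstruction step has the generic form of a regularized least-squares solve, which can be sketched with a matrix-free conjugate-gradient iteration. The operator A, the diagonal prior precision W, and the dimensions below are hypothetical stand-ins, not the wavelet-based operators of the paper.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        # A maps (wavelet-domain) turbulence coefficients to sensor measurements;
        # both A and the prior are random stand-ins for illustration only.
        n = 512
        rng = np.random.default_rng(1)
        A = rng.standard_normal((256, n)) / np.sqrt(n)
        W = np.ones(n)                  # diagonal prior precision (hypothetical)
        y = rng.standard_normal(256)    # stand-in measurement vector
        lam = 0.1                       # regularization weight

        def matvec(x):
            # Normal-equations operator of the MAP problem: (A^T A + lam * W) x
            return A.T @ (A @ x) + lam * W * x

        op = LinearOperator((n, n), matvec=matvec, dtype=np.float64)
        x_map, info = cg(op, A.T @ y)   # conjugate-gradient MAP solve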

  18. A Conditional Exposure Control Method for Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Nering, Michael L.; Roussos, Louis A.

    2009-01-01

    In computerized adaptive testing (CAT), ensuring the security of test items is a crucial practical consideration. A common approach to reducing item theft is to define maximum item exposure rates, i.e., to limit the proportion of examinees to whom a given item can be administered. Numerous methods for controlling exposure rates have been proposed…

  19. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…

  20. Anaerobic threshold assessment through the ventilatory method during roller-ski skating testing: right or wrong?

    PubMed

    Fabre, Nicolas; Bortolan, Lorenzo; Pellegrini, Barbara; Zerbini, Livio; Mourot, Laurent; Schena, Federico

    2012-02-01

    This study aimed at questioning the validity of the ventilatory method to determine the anaerobic threshold (respiratory compensation point [RCP]) during an incremental roller-ski skating test to exhaustion. Nine elite cross-country skiers were evaluated. The skiers carried out an incremental roller-ski test on a treadmill with the V2 skating technique. Ventilatory parameters were collected continuously, breath by breath, with a portable gas exchange measurement system. The poling signal was obtained using instrumented ski poles. For each stage, ventilatory and poling signals were synchronized and averaged. The poor interobserver reliability coefficient for the time at RCP confirmed the great difficulty experienced by the 3 blinded reviewers in determining the RCP. Moreover, the reviewers agreed that it was impossible to determine the RCP in 4 of the 9 skiers. There was no significant difference between breathing frequency (Bf) and poling frequency (Pf) during the last 8 stages. However, it seems that the differences observed during the first stages arose from the use of either a strictly 1:1 or a 1:2 Bf-to-Pf ratio while the exercise intensity was still moderate. So, even where there were significant differences between the frequencies, the Bf remained strictly subordinate to the Pf during the entire test. In the same way, the normalized tidal volume and peak poling force curves were superposable. These findings showed that when the upper body is mainly involved in the propulsion, the determinants of ventilation are strictly dependent on the poling pattern during an incremental test to exhaustion. Thus, during roller-ski skating, the determination of the RCP must be used cautiously because it depends too heavily on mechanical factors.

  1. Application of a Threshold Method to the TRMM Radar for the Estimation of Space-Time Rain Rate Statistics

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Jones, Jeffrey A.

    1997-01-01

    One of the TRMM radar products of interest is the monthly-averaged rain rate over 5 x 5 degree cells. Clearly, the most direct way of calculating these and similar statistics is to compute them from the individual estimates made over the instantaneous field of view of the instrument (4.3 km horizontal resolution). An alternative approach is the use of a threshold method. It has been established that over sufficiently large regions the fractional area above a rain rate threshold and the area-average rain rate are well correlated for particular choices of the threshold [e.g., Kedem et al., 1990]. A straightforward application of this method to the TRMM data would consist of the conversion of the individual reflectivity factors to rain rates, followed by a calculation of the fraction of these that exceed a particular threshold. Previous results indicate that for thresholds near or at 5 mm/h, the correlation between this fractional area and the area-average rain rate is high. There are several drawbacks to this approach, however. At the TRMM radar frequency of 13.8 GHz the signal suffers attenuation, so the negative bias of the high-resolution rain rate estimates will increase as the path attenuation increases. To establish a quantitative relationship between fractional area and area-average rain rate, an independent means of calculating the area-average rain rate is needed, such as an array of rain gauges. This type of calibration procedure, however, is difficult for a spaceborne radar such as TRMM. Estimating a statistic other than the mean of the distribution requires, in general, a different choice of threshold and a different set of tuning parameters.
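
    The fractional-area relationship at the heart of the threshold method can be illustrated on synthetic rain fields: compute the fraction of pixels exceeding a 5 mm/h threshold and relate the area-average rain rate to it. The field statistics below are invented for illustration only.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic ensemble of high-resolution rain-rate fields (mm/h)
        fields = rng.lognormal(mean=0.0, sigma=1.2, size=(500, 64, 64))
        fields[rng.random(fields.shape) < 0.7] = 0.0   # most pixels are rain-free

        tau = 5.0                                       # mm/h threshold
        frac_area = (fields > tau).mean(axis=(1, 2))    # fractional area above threshold
        area_mean = fields.mean(axis=(1, 2))            # area-average rain rate

        # Correlation and through-origin regression slope underpinning the method
        r = np.corrcoef(frac_area, area_mean)[0, 1]
        slope = (frac_area * area_mean).mean() / (frac_area ** 2).mean()
        print(r, slope)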

  2. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    NASA Astrophysics Data System (ADS)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer-measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that the DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.

  3. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    SciTech Connect

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  4. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGES

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  5. Solving Chemical Master Equations by an Adaptive Wavelet Method

    SciTech Connect

    Jahnke, Tobias; Galan, Steffen

    2008-09-01

    Solving chemical master equations is notoriously difficult due to the tremendous number of degrees of freedom. We present a new numerical method which efficiently reduces the size of the problem in an adaptive way. The method is based on a sparse wavelet representation and an algorithm which, in each time step, detects the essential degrees of freedom required to approximate the solution up to the desired accuracy.

  6. Workshop on adaptive grid methods for fusion plasmas

    SciTech Connect

    Wiley, J.C.

    1995-07-01

    The author describes a general `hp` finite element method with adaptive grids. The code was based on the work of Oden et al. The term `hp` refers to the method of spatial refinement (h) in conjunction with the order of the polynomials used as part of the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.

  7. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  8. On the Use of Adaptive Wavelet-based Methods for Ocean Modeling and Data Assimilation Problems

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Yousuff Hussaini, M.; Souopgui, Innocent

    2014-05-01

    Latest advancements in parallel wavelet-based numerical methodologies for the solution of partial differential equations, combined with the unique properties of wavelet analysis to unambiguously identify and isolate localized dynamically dominant flow structures, make it feasible to start developing integrated approaches for ocean modeling and data assimilation problems that take advantage of temporally and spatially varying meshes. In this talk the Parallel Adaptive Wavelet Collocation Method with spatially and temporally varying thresholding is presented and the feasibility/potential advantages of its use for ocean modeling are discussed. The second half of the talk focuses on the recently developed Simultaneous Space-time Adaptive approach that addresses one of the main challenges of variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by concurrently solving forward and adjoint problems in the entire space-time domain on a near optimal adaptive computational mesh that automatically adapts to spatio-temporal structures of the solution. The compressed space-time form of the solution eliminates the need to save or recompute the forward solution for every time slice, as is typically done in traditional time-marching variational data assimilation approaches. The simultaneous spatio-temporal discretization of both the forward and the adjoint problems makes it possible to solve both of them concurrently on the same space-time adaptive computational mesh, reducing the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The simultaneous space-time adaptive approach of variational data assimilation is demonstrated for the advection-diffusion problem in 1D-t and 2D-t dimensions.

  9. ICASE/LaRC Workshop on Adaptive Grid Methods

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)

    1995-01-01

    Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.

  10. Organic thin film devices with stabilized threshold voltage and mobility, and method for preparing the devices

    DOEpatents

    Nastasi, Michael Anthony; Wang, Yongqiang; Fraboni, Beatrice; Cosseddu, Piero; Bonfiglio, Annalisa

    2013-06-11

    Organic thin film devices that included an organic thin film subjected to a selected dose of ions at a selected energy exhibited a stabilized mobility (μ) and threshold voltage (VT), a decrease in contact resistance (RC), and an extended operational lifetime that did not degrade after 2000 hours of operation in air.

  11. A new EC-PC threshold estimation method for in vivo neural spike detection

    NASA Astrophysics Data System (ADS)

    Yang, Zhi; Liu, Wentai; Keshtkaran, Mohammad Reza; Zhou, Yin; Xu, Jian; Pikov, Victor; Guan, Cuntai; Lian, Yong

    2012-08-01

    This paper models in vivo neural signals and noise for extracellular spike detection. Although the recorded data approximately follow a Gaussian distribution, they clearly deviate from white Gaussian noise due to neuronal synchronization and the sparse distribution of spike energy. Our study predicts the coexistence of two components embedded in neural data dynamics, one in the exponential form (noise) and the other in the power form (neural spikes). The prediction of the two components has been confirmed in experiments on in vivo sequences recorded from the hippocampus, cortex surface, and spinal cord; both acute and long-term recordings; and sleep and awake states. These two components are further used as references for threshold estimation. Different from the conventional wisdom of setting a threshold at 3×RMS, the estimated threshold exhibits a significant variation. When our algorithm was tested on synthesized sequences with different signal-to-noise ratios and on/off firing dynamics, the inferred threshold statistics tracked the benchmarks well. We envision that this work may be applied to a wide range of experiments as a front-end data analysis tool.
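
    The paper's EC-PC decomposition is not reproduced here, but the contrast it draws with the conventional 3×RMS rule can be illustrated with a common robust alternative, the median-based noise estimate, on a synthetic trace. All signal parameters below are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic trace: Gaussian background noise plus sparse spike events
        x = rng.standard_normal(30000)
        spike_times = rng.choice(30000, size=60, replace=False)
        x[spike_times] += rng.uniform(5, 9, size=60)

        # Conventional threshold: 3 x RMS (inflated by the spikes themselves)
        thr_rms = 3.0 * np.sqrt(np.mean(x ** 2))

        # Robust alternative (not the paper's EC-PC estimator): median-based sigma
        sigma = np.median(np.abs(x)) / 0.6745
        thr_robust = 3.0 * sigma
        print(thr_rms, thr_robust)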

  12. Methods for Assessing Item, Step, and Threshold Invariance in Polytomous Items Following the Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Myers, Nicholas D.; Wolfe, Edward W.

    2008-01-01

    Measurement invariance in the partial credit model (PCM) can be conceptualized in several different but compatible ways. In this article the authors distinguish between three forms of measurement invariance in the PCM: step invariance, item invariance, and threshold invariance. Approaches for modeling these three forms of invariance are proposed,…

  13. An Adaptive Cross-Architecture Combination Method for Graph Traversal

    SciTech Connect

    You, Yang; Song, Shuaiwen; Kerbyson, Darren J.

    2014-06-18

    Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis that predicts an optimal switching point for the combination method at runtime, with an overhead of less than 0.1% of the BFS execution time.
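
    A minimal sketch of a hybrid top-down/bottom-up BFS follows. It switches on a fixed frontier-size fraction (the classic direction-optimizing heuristic) rather than the paper's regression-predicted switching point; the graph and the alpha value are hypothetical.

        def hybrid_bfs(adj, src, alpha=0.05):
            """Top-down/bottom-up BFS; goes bottom-up when the frontier exceeds
            a fixed fraction of the vertices (a simple stand-in for the paper's
            regression-predicted switching point)."""
            n = len(adj)
            dist = {src: 0}
            frontier = {src}
            level = 0
            while frontier:
                level += 1
                if len(frontier) < alpha * n:      # top-down: expand the frontier
                    nxt = {w for v in frontier for w in adj[v] if w not in dist}
                else:                               # bottom-up: unvisited vertices look back
                    nxt = {v for v in adj if v not in dist
                           and any(u in frontier for u in adj[v])}
                for v in nxt:
                    dist[v] = level
                frontier = nxt
            return dist

        # Tiny example graph (undirected, as an adjacency dict)
        adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
        print(hybrid_bfs(adj, 0))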

  14. An adaptive technique for multiscale approximate entropy (MAEbin) threshold (r) selection: application to heart rate variability (HRV) and systolic blood pressure variability (SBPV) under postural stress.

    PubMed

    Singh, Amritpal; Saini, Barjinder Singh; Singh, Dilbag

    2016-06-01

    Multiscale approximate entropy (MAE) is used to quantify the complexity of a time series as a function of the time scale τ. Selection of the approximate entropy (ApEn) tolerance threshold 'r' is based on either (1) arbitrary selection in the recommended range (0.1-0.25) times the standard deviation of the time series, (2) finding the maximum ApEn (ApEnmax), i.e., the point where self-matches start to prevail over other matches, and choosing the corresponding 'r' (rmax) as the threshold, or (3) computing rchon by empirically finding the relation between rmax, the SD1/SD2 ratio and N using curve fitting, where SD1 and SD2 are the short-term and long-term variability of a time series, respectively. None of these methods is a gold standard for the selection of 'r'. In our previous study [1], an adaptive procedure for the selection of 'r' was proposed for approximate entropy (ApEn). In this paper, this is extended to multiple time scales using MAEbin and multiscale cross-MAEbin (XMAEbin). We applied this to simulations, i.e. 50 realizations (n = 50) of random number series, fractional Brownian motion (fBm) and MIX (P) [1] series of data length N = 300, and to short-term recordings of HRV and SBPV performed under postural stress from supine to standing. MAEbin and XMAEbin analysis was performed on laboratory-recorded data of 50 healthy young subjects experiencing postural stress from supine to upright. The study showed that (i) ApEnbin of HRV is higher than that of SBPV in the supine position but lower than that of SBPV in the upright position, (ii) ApEnbin of HRV decreases from supine, i.e. 1.7324 ± 0.112 (mean ± SD), to upright, i.e. 1.4916 ± 0.108, due to vagal inhibition, (iii) ApEnbin of SBPV increases from supine, i.e. 1.5535 ± 0.098, to upright, i.e. 1.6241 ± 0.101, due to sympathetic activation, (iv) the individual and cross complexities of RRi and systolic blood pressure (SBP) series depend on the time scale under consideration, and (v) XMAEbin calculated using ApEnmax is correlated with cross-MAE calculated using ApEn (0.1-0.26) in steps of 0
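
    For reference, the quantity being tuned can be made concrete with a compact approximate-entropy implementation using the conventional r = 0.2 × SD tolerance (the arbitrary choice the adaptive procedure seeks to replace), together with the coarse-graining used for multiscale analysis. The RR series below is a random stand-in, not physiological data.

        import numpy as np

        def apen(x, m=2, r_frac=0.2):
            """Approximate entropy with the conventional tolerance r = r_frac * SD."""
            x = np.asarray(x, float)
            r = r_frac * x.std()
            def phi(m):
                emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
                d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
                c = (d <= r).mean(axis=1)   # fraction of templates within tolerance
                return np.log(c).mean()
            return phi(m) - phi(m + 1)

        def coarse_grain(x, tau):
            """Non-overlapping averages of length tau, as used for scale tau."""
            n = len(x) // tau
            return np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)

        rr = np.random.default_rng(0).standard_normal(300)   # stand-in RR series
        print([apen(coarse_grain(rr, t)) for t in (1, 2, 3)])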

  15. Adaptive neural network nonlinear control for BTT missile based on the differential geometry method

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Wang, Yongji; Xu, Jiangsheng

    2007-11-01

    A new nonlinear control strategy incorporating the differential geometry method with adaptive neural networks is presented for the nonlinear coupled system of a Bank-to-Turn (BTT) missile in the reentry phase. The basic control law is designed using the differential geometry feedback linearization method, and online learning neural networks are used to compensate for the system errors due to aerodynamic parameter errors and external disturbance, in view of the arbitrary nonlinear mapping and rapid online learning ability of multi-layer neural networks. The online weight and threshold tuning rules are deduced from the tracking error performance functions by the Levenberg-Marquardt algorithm, which makes the learning process faster and more stable. The six-degree-of-freedom simulation results show that the attitude angles can track the desired trajectory precisely. This means that the proposed strategy effectively enhances the stability, the tracking performance and the robustness of the control system.

  16. An Adaptive Derivative-based Method for Function Approximation

    SciTech Connect

    Tong, C

    2008-10-22

    To alleviate the high computational cost of large-scale multi-physics simulations to study the relationships between the model parameters and the outputs of interest, response surfaces are often used in place of the exact functional relationships. This report explores a method for response surface construction using adaptive sampling guided by derivative information at each selected sample point. This method is especially suitable for applications that can readily provide added information such as gradients and Hessian with respect to the input parameters under study. When higher order terms (third and above) in the Taylor series are negligible, the approximation error for this method can be controlled. We present details of the adaptive algorithm and numerical results on a few test problems.

  17. Adaptive IMEX schemes for high-order unstructured methods

    NASA Astrophysics Data System (ADS)

    Vermeire, Brian C.; Nadarajah, Siva

    2015-01-01

    We present an adaptive implicit-explicit (IMEX) method for use with high-order unstructured schemes. The proposed method makes use of the Gerschgorin theorem to conservatively estimate the influence of each individual degree of freedom on the spectral radius of the discretization. This information is used to split the system into implicit and explicit regions, adapting to unsteady features in the flow. We dynamically repartition the domain to balance the number of implicit and explicit elements per core. As a consequence, we are able to achieve an even load balance for each implicit/explicit stage of the IMEX scheme. We investigate linear advection-diffusion, isentropic vortex advection, unsteady laminar flow over an SD7003 airfoil, and turbulent flow over a circular cylinder. Results show that the proposed method consistently yields a stable discretization, and maintains the theoretical order of accuracy of the high-order spatial schemes.
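
    The Gerschgorin-based splitting can be sketched on a toy operator: bound each degree of freedom's contribution to the spectral radius by its absolute row sum and mark DOFs whose bound violates an explicit stability limit for implicit treatment. The 1D advection-diffusion matrix, mesh, and time step below are hypothetical, not the solver of the paper.

        import numpy as np

        # Toy 1D advection-diffusion discretization with a locally refined region
        n = 200
        dx = np.full(n, 1.0); dx[80:120] = 0.05          # refined (stiff) cells
        a, nu = 1.0, 1e-3

        # Assemble a simple upwind + central-difference operator (periodic)
        L = np.zeros((n, n))
        for i in range(n):
            im, ip = (i - 1) % n, (i + 1) % n
            L[i, i]  = -a / dx[i] - 2 * nu / dx[i] ** 2
            L[i, im] =  a / dx[i] +     nu / dx[i] ** 2
            L[i, ip] =               nu / dx[i] ** 2

        # Gerschgorin bound per DOF: |diagonal| + off-diagonal radius = abs row sum
        radius = np.abs(L).sum(axis=1)

        dt = 0.05
        implicit = radius * dt > 1.0     # stiff DOFs -> implicit treatment
        print(implicit.sum(), "implicit DOFs of", n)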

  18. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    SciTech Connect

    Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron

    1998-12-08

    Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely, it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  19. Is heart rate variability a feasible method to determine anaerobic threshold in progressive resistance exercise in coronary artery disease?

    PubMed Central

    Sperling, Milena P. R.; Simões, Rodrigo P.; Caruso, Flávia C. R.; Mendes, Renata G.; Arena, Ross; Borghi-Silva, Audrey

    2016-01-01

    Background: Recent studies have shown that the magnitude of the metabolic and autonomic responses during progressive resistance exercise (PRE) is associated with the determination of the anaerobic threshold (AT). AT is an important parameter to determine intensity in dynamic exercise. Objectives: To investigate the metabolic and cardiac autonomic responses during dynamic resistance exercise in patients with Coronary Artery Disease (CAD). Method: Twenty men (age = 63±7 years) with CAD [Left Ventricular Ejection Fraction (LVEF) = 60±10%] underwent a PRE protocol on a leg press until maximal exertion. The protocol began at 10% of the One Repetition Maximum Test (1-RM), with subsequent increases of 10% until maximal exhaustion. Heart Rate Variability (HRV) indices from Poincaré plots (SD1, SD2, SD1/SD2) and the time domain (rMSSD and RMSM), and blood lactate, were determined at rest and during PRE. Results: Significant alterations in HRV and blood lactate were observed starting at 30% of 1-RM (p<0.05). Bland-Altman plots revealed a consistent agreement between the blood lactate threshold (LT) and the rMSSD threshold (rMSSDT), and between LT and the SD1 threshold (SD1T). Relative values of 1-RM at LT, rMSSDT and SD1T did not differ (29±5% vs 28±5% vs 29±5%, respectively). Conclusion: HRV during PRE could be a feasible noninvasive method of determining the AT in CAD patients to plan intensities during cardiac rehabilitation. PMID:27556384

  20. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    PubMed

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

    A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. We decomposed noisy ECG signals with the EEMD and calculated a series of intrinsic mode functions (IMFs). Then we selected IMFs and reconstructed them to realize the de-noising of the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used for evaluating the performance of the proposed method, in contrast with de-noising methods based on EEMD alone and on the wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of the ECG features did not attenuate. In conclusion, the method discussed in this paper can de-noise the ECG while keeping the characteristics of the original ECG signal. PMID:25219236

  1. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    PubMed

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

    A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. We decomposed noisy ECG signals with the EEMD and calculated a series of intrinsic mode functions (IMFs). Then we selected IMFs and reconstructed them to realize the de-noising of the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used for evaluating the performance of the proposed method, in contrast with de-noising methods based on EEMD alone and on the wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of the ECG features did not attenuate. In conclusion, the method discussed in this paper can de-noise the ECG while keeping the characteristics of the original ECG signal.
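
    A sketch of the wavelet stage is given below with PyWavelets, using a soft/hard-compromise threshold function as one common form of "improved threshold function" (not necessarily the paper's exact definition). The EEMD stage is omitted and the test signal is synthetic.

        import numpy as np
        import pywt

        def improved_threshold(c, thr, alpha=0.5):
            """Compromise between hard and soft thresholding: coefficients above
            the threshold are shrunk by alpha * thr, the rest are zeroed."""
            out = np.zeros_like(c)
            keep = np.abs(c) > thr
            out[keep] = np.sign(c[keep]) * (np.abs(c[keep]) - alpha * thr)
            return out

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 1024)
        clean = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) / 0.05) ** 2)  # crude QRS-like pulse
        noisy = clean + 0.2 * rng.standard_normal(t.size)

        coeffs = pywt.wavedec(noisy, "db4", level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
        thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
        denoised = pywt.waverec(
            [coeffs[0]] + [improved_threshold(c, thr) for c in coeffs[1:]], "db4")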

  2. Comparison of Threshold Detection Methods for the Generalized Pareto Distribution (GPD): Application to the NOAA-NCDC Daily Rainfall Dataset

    NASA Astrophysics Data System (ADS)

    Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas

    2015-04-01

    One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: a) non-parametric methods that locate the change point between the extreme and non-extreme regions of the data, b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the change point between the extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General
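
    A minimal example of the graphical class of methods (b) is to fit a GPD to excesses over a range of candidate thresholds and inspect the stability of the estimated shape parameter. The rainfall sample below is synthetic, not the NOAA-NCDC data.

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(0)
        rain = rng.gamma(shape=0.4, scale=8.0, size=20000)   # synthetic wet-day rainfall (mm)

        # Shape-parameter stability across candidate thresholds (a graphical diagnostic)
        for u in np.percentile(rain, [80, 85, 90, 95, 98]):
            excess = rain[rain > u] - u
            xi, loc, beta = genpareto.fit(excess, floc=0.0)  # fit GPD to the excesses
            print(f"u = {u:6.2f} mm  n = {excess.size:5d}  shape = {xi:+.3f}  scale = {beta:.2f}")

    A roughly constant shape estimate over a range of u is the usual indication that the GPD model has become applicable.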

  3. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

    Numerical solution of partial differential equations requires appropriate meshes, efficient solvers and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an inaccessible CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider the convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is the construction of a tensor metric from hierarchical edge

  4. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. The method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction in the number of elements used and in the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel

  5. An improved filtering method based on EEMD and wavelet-threshold for modal parameter identification of hydraulic structure

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Lian, Jijian; Liu, Fang

    2016-02-01

    Modal parameter identification is a core issue in the health monitoring and damage detection of hydraulic structures. The parameters are mainly obtained from the measured vibrational response under ambient excitation. However, the response signal is mixed with noise and interference signals, which mask the structural vibration information, so the parameters cannot be identified. This paper proposes an improved filtering method based on ensemble empirical mode decomposition (EEMD) and the wavelet threshold method. A 'noise index' is presented to estimate the noise level of the components decomposed by the EEMD, and this index is related to the wavelet threshold calculation. In addition, the improved filtering method, combined with an eigensystem realization algorithm (ERA) and singular entropy (SE), is applied to the operational modal identification of a roof-overflow powerhouse with a bulb tubular unit.

  6. Space-time adaptive numerical methods for geophysical applications.

    PubMed

    Castro, C E; Käser, M; Toro, E F

    2009-11-28

    In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen locally adaptive such that the solution is evolved explicitly in time by an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost. PMID:19840984

  7. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2003-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  8. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  9. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  10. Robust flicker evaluation method for low power adaptive dimming LCDs

    NASA Astrophysics Data System (ADS)

    Kim, Seul-Ki; Song, Seok-Jeong; Nam, Hyoungsik

    2015-05-01

    This paper describes a robust dimming-flicker evaluation method for adaptive dimming algorithms in low-power liquid crystal displays (LCDs). While previous methods use sum-of-squared-difference (SSD) values that do not exclude the image sequence information, the proposed modified SSD (mSSD) values capture only the dimming flicker effects by making use of differential images. The proposed scheme is verified for eight dimming configurations of two dimming level selection methods and four temporal filters over three test videos. Furthermore, a new figure of merit is introduced that covers the dimming flicker as well as image quality and power consumption.
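
    The abstract does not give the mSSD formula, but one plausible reading is sketched below in Python: the differential image (dimmed output minus undimmed input) isolates the dimming effect, so frame-to-frame changes in the differential images reflect dimming flicker rather than scene motion. The function name and the exact metric are assumptions, not the authors' definition.

    ```python
    import numpy as np

    def mssd(original_frames, dimmed_frames):
        """Hypothetical modified-SSD flicker metric (illustrative only).

        The differential image (dimmed output minus undimmed input)
        removes the scene content, so successive differences between
        differential images capture dimming flicker alone.
        """
        diffs = [d.astype(float) - o.astype(float)
                 for o, d in zip(original_frames, dimmed_frames)]
        return sum(np.mean((b - a) ** 2) for a, b in zip(diffs, diffs[1:]))
    ```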

  11. New method for measuring the laser-induced damage threshold of optical thin film

    NASA Astrophysics Data System (ADS)

    Su, Jun-hong; Wang, Hong; Xi, Ying-xue

    2012-10-01

    The laser-induced damage threshold (LIDT) of a thin film is the maximum laser intensity the film can withstand; the film is damaged when the incident laser intensity exceeds the LIDT. In this paper, an experimental platform with measurement operator interfaces and control procedures, implemented in Visual Basic (VB), is built according to ISO 11254-1. To obtain more accurate results than manual measurement, the software system controls the hardware devices through widgets on the operator interfaces. According to the sample characteristics, critical parameters of the LIDT measurement system, such as spot diameter, damage threshold region, and critical damage pixel number, are set on the man-machine interface, enabling automated LIDT measurement. The LIDT is then obtained from the experimental data by automatically fitting the damage curve.
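
    The abstract does not specify the fitting procedure; the sketch below shows the standard ISO 11254-style damage-probability extrapolation that such platforms typically automate: the damage probability measured at each fluence level is fitted with a line, and the LIDT is taken as the fluence where the fit crosses zero probability. Function and variable names are illustrative, not the paper's code.

    ```python
    import numpy as np

    def lidt_from_damage_curve(fluence, damaged, shots):
        """Estimate the LIDT by extrapolating damage probability to zero.

        fluence : tested fluence levels (e.g., J/cm^2)
        damaged : number of damaged sites at each level
        shots   : number of irradiated sites at each level
        Assumes at least two levels show nonzero damage probability.
        """
        fluence = np.asarray(fluence, float)
        p = np.asarray(damaged, float) / np.asarray(shots, float)
        mask = p > 0                      # fit only levels showing damage
        slope, intercept = np.polyfit(fluence[mask], p[mask], 1)
        return -intercept / slope         # fluence where the fit crosses p = 0
    ```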

  12. Efficient method for calculations of ro-vibrational states in triatomic molecules near dissociation threshold: Application to ozone

    NASA Astrophysics Data System (ADS)

    Teplukhin, Alexander; Babikov, Dmitri

    2016-09-01

    A method for calculations of rotational-vibrational states of triatomic molecules up to the dissociation threshold (and scattering resonances above it) is devised that combines hyperspherical coordinates, a sequential diagonalization-truncation procedure, optimized-grid DVR, and a complex absorbing potential. The efficiency and accuracy of the method and new code are tested by computing the spectrum of ozone up to the dissociation threshold, using two different potential energy surfaces. In both cases good agreement with results of previous studies is obtained for the lower-energy states localized in the deep (~10 000 cm-1) covalent well. The upper part of the bound-state spectrum, within 600 cm-1 below the dissociation threshold, is also computed and analyzed in detail. It is found that long progressions of symmetric-stretching and bending states (up to 8 and 11 quanta, respectively) survive up to the dissociation threshold and even above it, whereas excitations of the asymmetric-stretching overtones couple to the local vibration modes, making assignments difficult. Within 140 cm-1 below the dissociation threshold, large-amplitude vibrational states of a floppy complex O⋯O2 are formed over the shallow van der Waals plateau. These are assigned using two local modes: the rocking-motion and the dissociative-motion progressions, up to 6 quanta in each, both with frequency ~20 cm-1. Many of these plateau states are mixed with states of the covalent well. Interestingly, excitation of the rocking motion helps keep these states localized within the plateau region by raising the effective barrier.

  13. Experimental and Finite Element Modeling of Near-Threshold Fatigue Crack Growth for the K-Decreasing Test Method

    NASA Technical Reports Server (NTRS)

    Smith, Stephen W.; Seshadri, Banavara R.; Newman, John A.

    2015-01-01

    The experimental methods used to determine near-threshold fatigue crack growth rate data are prescribed in ASTM standard E647. To produce near-threshold data at a constant stress ratio (R), the applied stress-intensity factor (K) is decreased as the crack grows, based on a specified K-gradient. Consequently, as the fatigue crack growth rate threshold is approached and the crack-tip opening displacement decreases, remote crack wake contact may occur between the plastically deformed crack wake surfaces; this shields the growing crack tip, reducing the crack-tip driving force and yielding non-representative crack growth rate data. If such data are used to predict the life of a component, the evaluation could yield highly non-conservative predictions. Although this anomalous behavior has been shown to be affected by K-gradient, starting K level, residual stresses, environmentally assisted cracking, specimen geometry, and material type, the specifications within the standard to avoid this effect are limited to a maximum fatigue crack growth rate and a suggestion for the K-gradient value. This paper provides parallel experimental and computational simulations of the K-decreasing method for two materials (an aluminum alloy, AA 2024-T3, and a titanium alloy, Ti 6-2-2-2-2) to aid in establishing a clear understanding of appropriate testing requirements. These simulations investigate the effects of K-gradient, the maximum value of applied stress-intensity factor, and material type. A material-independent term is developed to guide the selection of appropriate test conditions for most engineering alloys. With the use of such a term, near-threshold fatigue crack growth rate tests can be performed at accelerated rates, near-threshold data can be acquired in days instead of weeks without having to establish testing criteria through trial and error, and these data can be acquired for most engineering materials, even those produced in relatively small product forms.

  14. An adaptive locally optimal method detecting weak deterministic signals

    NASA Astrophysics Data System (ADS)

    Wang, C. H.

    1983-10-01

    A new method for detecting weak signals in interference and clutter in radar systems is presented. The detector that uses this method is adaptive to an environment varying with time, locally optimal for detecting targets, and maintains a constant false-alarm rate (CFAR) as the statistics of the interference and clutter vary with time. The CFAR loss is small, and the detector is also simple in structure. The statistical equivalent transfer characteristic of a rank quantizer, which can be used as part of an adaptive locally most powerful (ALMP) detector, is obtained. It is shown that the distribution-free Doppler processor of Dillard (1974) is not only a nonparametric detector but also an ALMP detector under certain conditions.

  15. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.

  16. New multispectral MRI data fusion technique for white matter lesion segmentation: method and comparison with thresholding in FLAIR images

    PubMed Central

    Ferguson, Karen J.; Chappell, Francesca M.; Wardlaw, Joanna M.

    2010-01-01

    Objective Brain tissue segmentation by conventional threshold-based techniques may have limited accuracy and repeatability in older subjects. We present a new multispectral magnetic resonance (MR) image analysis approach for segmenting normal and abnormal brain tissue, including white matter lesions (WMLs). Methods We modulated two 1.5T MR sequences in the red/green colour space and calculated the tissue volumes using minimum variance quantisation. We tested it on 14 subjects, mean age 73.3 ± 10 years, representing the full range of WMLs and atrophy. We compared the results of WML segmentation with those using FLAIR-derived thresholds, examined the effect of sampling location, WML amount and field inhomogeneities, and tested observer reliability and accuracy. Results FLAIR-derived thresholds were significantly affected by the location used to derive the threshold (P = 0.0004) and by WML volume (P = 0.0003), and had higher intra-rater variability than the multispectral technique (mean difference ± SD: 759 ± 733 versus 69 ± 326 voxels respectively). The multispectral technique misclassified 16 times fewer WMLs. Conclusion Initial testing suggests that the multispectral technique is highly reproducible and accurate with the potential to be applied to routinely collected clinical MRI data. Electronic supplementary material The online version of this article (doi:10.1007/s00330-010-1718-6) contains supplementary material, which is available to authorized users. PMID:20157814

  17. Method and apparatus for telemetry adaptive bandwidth compression

    NASA Astrophysics Data System (ADS)

    Graham, Olin L.

    1987-07-01

    Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field and range rate information from the sensor are then sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sampler, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.

  18. A Diffusion Synthetic Acceleration Method for Block Adaptive Mesh Refinement.

    SciTech Connect

    Ward, R. C.; Baker, R. S.; Morel, J. E.

    2005-01-01

    A prototype two-dimensional Diffusion Synthetic Acceleration (DSA) method on a Block-based Adaptive Mesh Refinement (BAMR) transport mesh has been developed. The Block-Adaptive Mesh Refinement Diffusion Synthetic Acceleration (BAMR-DSA) method was tested in the PARallel TIme-Dependent SN (PARTISN) deterministic transport code. The BAMR-DSA equations are derived by differencing the DSA equation using a vertex-centered diffusion discretization that is diamond-like and may be characterized as 'partially' consistent. The derivation of a diffusion discretization that is fully consistent with diamond transport differencing on a BAMR mesh does not appear to be possible. However, despite being partially consistent, the BAMR-DSA method is effective for many applications. The BAMR-DSA solver was implemented and tested in two dimensions for rectangular (XY) and cylindrical (RZ) geometries. Testing results confirm that a partially consistent BAMR-DSA method will introduce instabilities for extreme cases, e.g., scattering ratios approaching 1.0 with optically thick cells, but for most realistic problems the BAMR-DSA method provides effective acceleration. The initial implementation, which stored the BAMR-DSA equations in a full matrix and solved them by LU decomposition, has been extended to include Compressed Sparse Row (CSR) storage and a Conjugate Gradient (CG) solver, which provide significantly more efficient storage and faster solution.

  19. A New Online Calibration Method for Multidimensional Computerized Adaptive Testing.

    PubMed

    Chen, Ping; Wang, Chun

    2016-09-01

    Multidimensional-Method A (M-Method A) has been proposed as an efficient and effective online calibration method for multidimensional computerized adaptive testing (MCAT) (Chen & Xin, Paper presented at the 78th Meeting of the Psychometric Society, Arnhem, The Netherlands, 2013). However, a key assumption of M-Method A is that it treats person parameter estimates as their true values; thus, this method might yield erroneous item calibration when person parameter estimates contain non-ignorable measurement errors. To improve the performance of M-Method A, this paper proposes a new MCAT online calibration method, namely, the full functional MLE-M-Method A (FFMLE-M-Method A). This new method combines the full functional MLE (Jones & Jin in Psychometrika 59:59-75, 1994; Stefanski & Carroll in Annals of Statistics 13:1335-1351, 1985) with the original M-Method A in an effort to correct for the estimation error of the ability vector that might otherwise adversely affect the precision of item calibration. Two correction schemes are also proposed for implementing the new method. A simulation study shows that the new method generates more accurate item parameter estimates than the original M-Method A in almost all conditions. PMID:26608960

  20. Sub-Volumetric Classification and Visualization of Emphysema Using a Multi-Threshold Method and Neural Network

    NASA Astrophysics Data System (ADS)

    Tan, Kok Liang; Tanaka, Toshiyuki; Nakamura, Hidetoshi; Shirahata, Toru; Sugiura, Hiroaki

    Chronic Obstructive Pulmonary Disease is a disease in which the airways and tiny air sacs (alveoli) inside the lung are partially obstructed or destroyed. Emphysema is what occurs as more and more of the walls between air sacs are destroyed. The goal of this paper is to produce a more practical emphysema-quantification algorithm that correlates better with the parameters of pulmonary function tests than classical methods do. The use of threshold values ranging from approximately -900 Hounsfield Units to -990 Hounsfield Units for extracting emphysema from CT has been reported in many papers. From our experiments, we observe that a threshold which is optimal for a particular CT data set might not be optimal for other CT data sets, owing to subtle radiographic variations in the CT images. Consequently, we propose a multi-threshold method that utilizes ten thresholds between and including -900 Hounsfield Units and -990 Hounsfield Units for identifying the different potential emphysematous regions in the lung. Subsequently, we divide the lung into eight sub-volumes. From each sub-volume, we calculate the ratio of voxels with intensity below each threshold; the ratios for the ten thresholds are employed as the features for classifying the sub-volumes into four emphysema severity classes. A neural network is used as the classifier and is trained on 80 training sub-volumes. The performance of the classifier is assessed by classifying 248 test sub-volumes of the lung obtained from 31 subjects. Actual diagnoses of the sub-volumes are hand-annotated and consensus-classified by radiologists. The four-class classification accuracy of the proposed method is 89.82%. The sub-volumetric classification results produced in this study encompass not only the information of emphysema severity but also the distribution of emphysema severity from the top to the bottom of the lung. We hypothesize that besides emphysema severity, the
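
    The ten-threshold feature extraction described above is concrete enough to sketch. The following Python fragment computes, for one sub-volume of Hounsfield Unit values, the fraction of voxels at or below each of the ten thresholds; the function name and the use of NumPy are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def emphysema_features(subvolume_hu):
        """Ten-threshold feature vector for one lung sub-volume.

        Thresholds run from -900 HU to -990 HU inclusive in steps of -10;
        each feature is the fraction of voxels at or below that threshold.
        """
        thresholds = np.linspace(-900, -990, 10)
        voxels = np.asarray(subvolume_hu, float).ravel()
        return np.array([(voxels <= t).mean() for t in thresholds])
    ```

    The resulting 10-element vectors would then be fed to the neural-network classifier, one vector per sub-volume.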

  1. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift.

    PubMed

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from shorter waiting times. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) were proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method named the exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS) is proposed. It is composed of three successful components: (i) an exponential wavelet transform, (ii) an iterative shrinkage-thresholding algorithm, and (iii) a random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the lowest mean absolute error, the lowest mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068

  2. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    PubMed Central

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from shorter waiting times. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) were proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method named the exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS) is proposed. It is composed of three successful components: (i) an exponential wavelet transform, (ii) an iterative shrinkage-thresholding algorithm, and (iii) a random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the lowest mean absolute error, the lowest mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
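
    The shrinkage-thresholding core named in both records is the standard ISTA iteration, sketched generically below. EWISTARS additionally wraps an exponential wavelet transform and a random shift around this core, which are not shown here; the function names and the fixed step size are assumptions.

    ```python
    import numpy as np

    def soft_threshold(x, lam):
        """Proximal operator of the l1 norm (the 'shrinkage' step)."""
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def ista(A, y, lam, n_iter=200):
        """Generic ISTA for min_x ||Ax - y||^2 + lam * ||x||_1."""
        step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)  # conservative step size
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = 2 * A.T @ (A @ x - y)              # gradient of the data term
            x = soft_threshold(x - step * grad, step * lam)
        return x
    ```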

  3. Adaptive density partitioning technique in the auxiliary plane wave method

    NASA Astrophysics Data System (ADS)

    Kurashige, Yuki; Nakajima, Takahito; Hirao, Kimihiko

    2006-01-01

    We have developed an adaptive density partitioning technique (ADPT) in the auxiliary plane wave method, in which a part of the density is expanded in plane waves, for the fast evaluation of the Coulomb matrix. Our partitioning is based on error estimations and allows us to control the accuracy and efficiency. Moreover, we can drastically reduce the core Gaussian products that are left in the Gaussian representation (whose analytical integrals are the bottleneck in this method). For the taxol molecule with the 6-31G** basis, the core Gaussian products accounted for only 5% at submicrohartree accuracy.

  4. Identification of Molecular Fingerprints in Human Heat Pain Thresholds by Use of an Interactive Mixture Model R Toolbox (AdaptGauss).

    PubMed

    Ultsch, Alfred; Thrun, Michael C; Hansen-Goos, Onno; Lötsch, Jörn

    2015-10-28

    Biomedical data obtained during cell experiments, laboratory animal research, or human studies often display a complex distribution. Statistical identification of subgroups in research data poses an analytical challenge. Here we introduce an interactive R-based bioinformatics tool, called "AdaptGauss". It enables valid identification of a biologically meaningful multimodal structure in the data by fitting a Gaussian mixture model (GMM) to the data. The interface allows a supervised selection of the number of subgroups. This enables the expectation-maximization (EM) algorithm to fit more complex GMMs than usually obtained with a noninteractive approach. Interactively fitting a GMM to heat pain threshold data acquired from human volunteers revealed a distribution pattern with four Gaussian modes located at temperatures of 32.3, 37.2, 41.4, and 45.4 °C. Noninteractive fitting was unable to identify a meaningful data structure. The obtained results are compatible with known activity temperatures of different TRP ion channels, suggesting the mechanistic contribution of different heat sensors to the perception of thermal pain. Thus, sophisticated analysis of the modal structure of biomedical data provides a basis for the mechanistic interpretation of the observations. As it may reflect the involvement of different TRP thermosensory ion channels, the analysis provides a starting point for hypothesis-driven laboratory experiments.
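
    AdaptGauss itself is an interactive R package; as a rough non-interactive analogue, the sketch below fits a four-component GMM by EM in Python with scikit-learn. The synthetic data, the component count, and all names are assumptions chosen only to mirror the four modes reported above.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic stand-in for heat-pain-threshold data (degrees C), with
    # four modes near the values reported in the paper.
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(m, 1.0, 200)
                           for m in (32.3, 37.2, 41.4, 45.4)]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=4, random_state=0).fit(data)
    print(np.sort(gmm.means_.ravel()))   # recovered mode locations
    ```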

  5. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.

  6. Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems

    NASA Technical Reports Server (NTRS)

    Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.

    1979-01-01

    The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.

  7. Threshold-Voltage-Shift Compensation and Suppression Method Using Hydrogenated Amorphous Silicon Thin-Film Transistors for Large Active Matrix Organic Light-Emitting Diode Displays

    NASA Astrophysics Data System (ADS)

    Oh, Kyonghwan; Kwon, Oh-Kyong

    2012-03-01

    A threshold-voltage-shift compensation and suppression method for active matrix organic light-emitting diode (AMOLED) displays fabricated using a hydrogenated amorphous silicon thin-film transistor (TFT) backplane is proposed. The proposed method compensates for the threshold voltage variation of TFTs due to different threshold voltage shifts during emission time and extends the lifetime of the AMOLED panel. Measurement results show that the error range of emission current is from -1.1 to +1.7% when the threshold voltage of TFTs varies from 1.2 to 3.0 V.

  8. An adaptive Tikhonov regularization method for fluorescence molecular tomography.

    PubMed

    Cao, Xu; Zhang, Bin; Wang, Xin; Liu, Fei; Liu, Ke; Luo, Jianwen; Bai, Jing

    2013-08-01

    The high degree of absorption and scattering of photons propagating through biological tissues makes fluorescence molecular tomography (FMT) reconstruction a severely ill-posed problem, and the reconstructed result is susceptible to noise in the measurements. To obtain a reasonable solution, Tikhonov regularization (TR) is generally employed to solve the inverse problem of FMT. However, with a fixed regularization parameter, the Tikhonov solutions suffer from low resolution. In this work, an adaptive Tikhonov regularization (ATR) method is presented. Considering that large regularization parameters smooth the solution at low spatial resolution, while small regularization parameters sharpen the solution at a high noise level, the ATR method adaptively updates spatially varying regularization parameters during the iteration process and uses them to penalize the solutions. The ATR method can adequately sharpen the feasible region containing fluorescent probes and smooth the region without them, without resorting to any complementary a priori information. Phantom experiments are performed to verify the feasibility of the proposed method. The results demonstrate that the proposed method can improve the spatial resolution and reduce the noise of FMT reconstruction at the same time.
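
    The update rule is not given in the abstract, so the sketch below illustrates the general idea with an assumed rule: each voxel's regularization weight shrinks where the current estimate is strong and grows back toward its base value where the estimate is weak. It is a toy linear-system version, not the paper's formulation.

    ```python
    import numpy as np

    def atr_reconstruct(A, y, n_iter=20, lam0=1.0, eps=1e-6):
        """Toy adaptive Tikhonov regularization for y = A x (sketch).

        lam starts uniform; after each solve it is reduced where |x| is
        large (sharpening probe regions) and restored toward lam0 where
        |x| is small (smoothing the background).
        """
        x = np.zeros(A.shape[1])
        lam = np.full(A.shape[1], lam0)
        for _ in range(n_iter):
            x = np.linalg.solve(A.T @ A + np.diag(lam), A.T @ y)
            x = np.clip(x, 0.0, None)           # fluorophore density is nonnegative
            lam = lam0 * eps / (x ** 2 + eps)   # small lam where x is large
        return x
    ```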

  9. Automated microcalcification detection in mammograms using statistical variable-box-threshold filter method

    NASA Astrophysics Data System (ADS)

    Wilson, Mark; Mitra, Sunanda; Roberson, Glenn H.; Shieh, Yao-Yang

    1997-10-01

    Currently, early detection of breast cancer is primarily accomplished by mammography, and suspicious findings may lead to a decision to perform a biopsy. Digital enhancement and pattern recognition techniques may aid in early detection of patterns such as microcalcification clusters indicating the onset of DCIS (ductal carcinoma in situ), which accounts for 20% of all mammographically detected breast cancers and can be treated when detected early. These individual calcifications are hard to detect due to size and shape variability and inhomogeneous background texture. Our study addresses only the early detection of microcalcifications, which allows the radiologist to interpret the x-ray findings in a computer-aided, enhanced form more easily than by evaluating the x-ray film directly. We present an algorithm which locates microcalcifications based on the local grayscale variability of tissue structures and on image statistics. Threshold filters with lower and upper bounds computed from the image statistics of the entire image and of selected subimages were designed to enhance the entire image. This enhanced image was used as the initial image for identifying the microcalcifications based on variable-box threshold filters at different resolutions. The test images came from the Texas Tech University Health Sciences Center and the MIAS mammographic database, which are classified into various categories including microcalcifications. Classification of other types of abnormalities in mammograms based on their characteristic features is addressed in later studies.
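
    The abstract does not give the exact bounds, so the following Python fragment is an illustrative stand-in for a variable-box threshold filter: it flags pixels exceeding a mean-plus-k-standard-deviations bound computed over a sliding box, with the box size playing the role of the resolution parameter. All names and the particular bound are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def box_threshold_candidates(img, box=32, k=3.0):
        """Flag bright outliers against locally estimated statistics.

        Local mean and standard deviation are computed over a sliding
        box of side `box`; pixels above mean + k*std are candidate
        microcalcifications at that resolution.
        """
        img = img.astype(float)
        mean = uniform_filter(img, box)
        sq_mean = uniform_filter(img ** 2, box)
        std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
        return img > mean + k * std
    ```

    Running the filter at several box sizes and combining the masks approximates the multi-resolution step described above.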

  10. Planetary gearbox fault diagnosis using an adaptive stochastic resonance method

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia

    2013-07-01

    Planetary gearboxes are widely used in aerospace, automotive, and heavy industry applications due to their large transmission ratio, strong load-bearing capacity, and high transmission efficiency. Tough operating conditions with heavy duty and intensive impact loads may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include the selection of sensitive measurement locations, the investigation of vibration transmission paths, and weak feature extraction; one of them is how to effectively discover the weak characteristics in noisy signals of faulty components in planetary gearboxes. To address this issue, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms and adaptively realizes the optimal stochastic resonance system matching the input signals. Using the ASR method, the noise may be weakened and weak characteristics highlighted, so the faults can be diagnosed accurately. A planetary gearbox test rig was established, and experiments with sun gear faults, including a chipped tooth and a missing tooth, were conducted. The vibration signals were collected under load at various motor speeds. The proposed method is used to process the collected signals, and the results of feature extraction and fault diagnosis demonstrate its effectiveness.

  11. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.

  12. The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering

    NASA Astrophysics Data System (ADS)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2016-04-01

    Earthquake declustering is an essential part of almost any statistical analysis of the spatial and temporal properties of seismic activity, with usual applications comprising probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its spatial variation. Various methods have been developed by other researchers to address this issue, ranging in complexity from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. An adaptive search algorithm for data point clusters is adopted: it uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focussing on a strong correlation along the rupture plane, and the search space is adjusted with respect to directional properties. In the case of rapid subsequent ruptures, like the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure is applied to disassemble subsequent ruptures which may have been grouped into an individual cluster, using near-field searches, support vector machines, and temporal splitting. The steering parameters of the search behaviour are linked to local earthquake properties like the magnitude of completeness, earthquake density, and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand. As a result of the cluster identification process, each event in

  13. A method to quantify infectious airborne pathogens at concentrations below the threshold of quantification by culture

    PubMed Central

    Cutler, Timothy D.; Wang, Chong; Hoff, Steven J.; Zimmerman, Jeffrey J.

    2013-01-01

    In aerobiology, dose-response studies are used to estimate the risk of infection to a susceptible host presented by exposure to a specific dose of an airborne pathogen. In the research setting, host- and pathogen-specific factors that affect the dose-response continuum can be accounted for by experimental design, but the requirement to precisely determine the dose of infectious pathogen to which the host was exposed is often challenging. By definition, quantification of viable airborne pathogens is based on the culture of micro-organisms, but some airborne pathogens are transmissible at concentrations below the threshold of quantification by culture. In this paper we present an approach to the calculation of exposure dose at microbiologically unquantifiable levels using an application of the “continuous-stirred tank reactor (CSTR) model” and the validation of this approach using rhodamine B dye as a surrogate for aerosolized microbial pathogens in a dynamic aerosol toroid (DAT). PMID:24082399
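
    The CSTR (continuous-stirred tank reactor) model referenced above is a standard well-mixed mass balance; a minimal sketch of how it yields an exposure dose below the culturable limit is given below. The parameter names and the constant-source assumption are illustrative, not the authors' exact setup for the dynamic aerosol toroid.

    ```python
    import numpy as np

    def cstr_concentration(t, S, Q, V, C0=0.0):
        """Well-mixed chamber mass balance dC/dt = S/V - (Q/V) C, solved
        analytically for a constant source S (units/min), airflow Q
        (m^3/min), and chamber volume V (m^3)."""
        k = Q / V
        return S / Q + (C0 - S / Q) * np.exp(-k * np.asarray(t, float))

    def inhaled_dose(times, S, Q, V, breathing_rate):
        """Exposure dose = breathing rate (m^3/min) times the time
        integral of concentration, via the trapezoid rule."""
        return breathing_rate * np.trapz(cstr_concentration(times, S, Q, V), times)
    ```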

  14. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675

  15. An adaptive pseudo-spectral method for reaction diffusion problems

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Matkowsky, B. J.; Gottlieb, D.; Minkoff, M.

    1989-01-01

    The spectral interpolation error was considered for both the Chebyshev pseudo-spectral and Galerkin approximations. A family of functionals I_r(u) was developed, with the property that the maximum norm of the error is bounded by I_r(u)/J^r, where r is an integer and J is the degree of the polynomial approximation. These functionals are used in the adaptive procedure whereby the problem is dynamically transformed to minimize I_r(u). The number of collocation points is then chosen to maintain a prescribed error bound. The method is illustrated by various examples from combustion problems in one and two dimensions.

  16. An adaptive pseudo-spectral method for reaction diffusion problems

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Gottlieb, D.; Matkowsky, B. J.; Minkoff, M.

    1987-01-01

    The spectral interpolation error was considered for both the Chebyshev pseudo-spectral and Galerkin approximations. A family of functionals I_r(u) was developed, with the property that the maximum norm of the error is bounded by I_r(u)/J^r, where r is an integer and J is the degree of the polynomial approximation. These functionals are used in the adaptive procedure whereby the problem is dynamically transformed to minimize I_r(u). The number of collocation points is then chosen to maintain a prescribed error bound. The method is illustrated by various examples from combustion problems in one and two dimensions.

  17. A multilevel adaptive projection method for unsteady incompressible flow

    NASA Technical Reports Server (NTRS)

    Howell, Louis H.

    1993-01-01

    There are two main requirements for practical simulation of unsteady flow at high Reynolds number: the algorithm must accurately propagate discontinuous flow fields without excessive artificial viscosity, and it must have some adaptive capability to concentrate computational effort where it is most needed. We satisfy the first of these requirements with a second-order Godunov method similar to those used for high-speed flows with shocks, and the second with a grid-based refinement scheme which avoids some of the drawbacks associated with unstructured meshes. These two features of our algorithm place certain constraints on the projection method used to enforce incompressibility. Velocities are cell-based, leading to a Laplacian stencil for the projection which decouples adjacent grid points. We discuss features of the multigrid and multilevel iteration schemes required for solution of the resulting decoupled problem. Variable-density flows require use of a modified projection operator--we have found a multigrid method for this modified projection that successfully handles density jumps of thousands to one. Numerical results are shown for the 2D adaptive and 3D variable-density algorithms.

  18. A flexible genome-wide bootstrap method that accounts for ranking and threshold-selection bias in GWAS interpretation and replication study design.

    PubMed

    Faye, Laura L; Sun, Lei; Dimitromanolakis, Apostolos; Bull, Shelley B

    2011-07-10

    The phenomenon known as the winner's curse is a form of selection bias that affects estimates of genetic association. In genome-wide association studies (GWAS) the bias is exacerbated by the use of stringent selection thresholds and ranking over hundreds of thousands of single nucleotide polymorphisms (SNPs). We develop an improved multi-locus bootstrap point estimate and confidence interval, which accounts for both ranking- and threshold-selection bias in the presence of genome-wide SNP linkage disequilibrium structure. The bootstrap method easily adapts to various study designs and alternative test statistics as well as complex SNP selection criteria. The latter is demonstrated by our application to the Wellcome Trust Case Control Consortium findings, in which the selection criterion was the minimum of the p-values for the additive and genotypic genetic effect models. In contrast, existing likelihood-based bias-reduced estimators account for the selection criterion applied to an SNP as if it were the only one tested, and so are more simple computationally, but do not address ranking across SNPs. Our simulation studies show that the bootstrap bias-reduced estimates are usually closer to the true genetic effect than the likelihood estimates and are less variable with a narrower confidence interval. Replication study sample size requirements computed from the bootstrap bias-reduced estimates are adequate 75-90 per cent of the time compared to 53-60 per cent of the time for the likelihood method. The bootstrap methods are implemented in a user-friendly package able to provide point and interval estimation for both binary and quantitative phenotypes in large-scale GWAS.

  19. The modified Dmax method is reliable to predict the second ventilatory threshold in elite cross-country skiers.

    PubMed

    Fabre, Nicolas; Balestreri, Filippo; Pellegrini, Barbara; Schena, Federico

    2010-06-01

    This study was designed to evaluate, in elite cross-country skiers, the capacity of the DMAX lactate threshold method and its modified version (DMAX MOD) to accurately predict the second ventilatory threshold (VT2). Twenty-three elite cross-country skiers carried out an incremental roller-ski test on a motorized treadmill. Ventilation, heart rate (HR), and gas exchange were continuously recorded during the test. Blood was sampled at the end of each 3-minute work stage for lactate concentration measurements. The VT2 was determined individually by visual analysis. The DMAX and DMAX MOD points, along with the 4 mmol.L(-1) fixed lactate concentration value (4 mM), were determined by a computerized program. Paired t tests showed nonsignificant differences between HR at VT2 and HR at DMAX MOD, between HR at VT2 and HR at 4 mM, and between HR at DMAX MOD and HR at 4 mM. HR at DMAX was significantly lower than HR at VT2, at DMAX MOD, and at 4 mM (p<0.001). HR at VT2 was strongly correlated with HR at 4 mM (r=0.93, p<0.001), with HR at DMAX (r=0.97, p<0.001), and especially with HR at DMAX MOD (r=0.99, p<0.001). Bland-Altman plots showed that HR at DMAX underestimated HR at VT2, and that the DMAX method, and particularly the DMAX MOD method, had smaller limits of agreement than the 4 mM method. Our results show that the DMAX MOD lactate threshold measurement is extremely accurate for predicting VT2 in elite cross-country skiers.
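
    The Dmax construction is well defined geometrically and easy to sketch: fit a polynomial to the lactate curve and take the point with the maximum perpendicular distance from the straight line joining the first and last measurements. In the commonly used modified version, the first point is replaced by the stage preceding the first rise in lactate of at least 0.4 mmol/L; the sketch below implements the basic version, with all names assumed.

    ```python
    import numpy as np

    def dmax_threshold(intensity, lactate):
        """Exercise intensity at the Dmax lactate threshold.

        A 3rd-order polynomial is fitted to the lactate curve; the
        threshold is the point of maximum perpendicular distance from
        the chord joining the first and last measurements.
        """
        intensity = np.asarray(intensity, float)
        lactate = np.asarray(lactate, float)
        coeffs = np.polyfit(intensity, lactate, 3)
        x = np.linspace(intensity[0], intensity[-1], 500)
        y = np.polyval(coeffs, x)
        x0, y0, x1, y1 = intensity[0], lactate[0], intensity[-1], lactate[-1]
        d = np.abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0)
        d /= np.hypot(x1 - x0, y1 - y0)
        return x[np.argmax(d)]
    ```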

  20. An adaptive stepsize method for the chemical Langevin equation.

    PubMed

    Ilie, Silvana; Teslya, Alexandra

    2012-05-14

    Mathematical and computational modeling are key tools in analyzing important biological processes in cells and living organisms. In particular, stochastic models are essential to accurately describe the cellular dynamics, when the assumption of the thermodynamic limit can no longer be applied. However, stochastic models are computationally much more challenging than the traditional deterministic models. Moreover, many biochemical systems arising in applications have multiple time-scales, which lead to mathematical stiffness. In this paper we investigate the numerical solution of a stochastic continuous model of well-stirred biochemical systems, the chemical Langevin equation. The chemical Langevin equation is a stochastic differential equation with multiplicative, non-commutative noise. We propose an adaptive stepsize algorithm for approximating the solution of models of biochemical systems in the Langevin regime, with small noise, based on estimates of the local error. The underlying numerical method is the Milstein scheme. The proposed adaptive method is tested on several examples arising in applications and it is shown to have improved efficiency and accuracy compared to the existing fixed stepsize schemes.
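
    The paper's local error estimator is not described in the abstract; the sketch below shows one generic way to pair the Milstein scheme with adaptive step control for a scalar SDE dX = f(X)dt + g(X)dW, using step-doubling on a shared Brownian increment. It is illustrative only, not the authors' algorithm.

    ```python
    import numpy as np

    def milstein_step(x, f, g, gp, h, dW):
        """One Milstein step; gp is the derivative g'(x)."""
        return x + f(x) * h + g(x) * dW + 0.5 * g(x) * gp(x) * (dW ** 2 - h)

    def adaptive_milstein(f, g, gp, x0, t_end, h=1e-2, tol=1e-3, seed=0):
        """Integrate to t_end, comparing one step of size h against two
        half-steps driven by the same Brownian path to estimate the
        local error, and adjusting h accordingly."""
        rng = np.random.default_rng(seed)
        t, x = 0.0, x0
        while t < t_end:
            h = min(h, t_end - t)
            dW1 = rng.normal(0.0, np.sqrt(h / 2))
            dW2 = rng.normal(0.0, np.sqrt(h / 2))
            coarse = milstein_step(x, f, g, gp, h, dW1 + dW2)
            fine = milstein_step(milstein_step(x, f, g, gp, h / 2, dW1),
                                 f, g, gp, h / 2, dW2)
            err = abs(fine - coarse)
            if err <= tol or h <= 1e-10:   # accept the (more accurate) fine value
                t, x = t + h, fine
                if err < tol / 4:
                    h *= 1.5               # error comfortably small: grow the step
            else:
                h /= 2                     # reject and retry with a smaller step
        return x
    ```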

  1. An adaptive PCA fusion method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Guo, Qing; Li, An; Zhang, Hongqun; Feng, Zhongkui

    2014-10-01

    The principal component analysis (PCA) method is a popular fusion method, used for its efficiency and its strong improvement of spatial resolution. However, spectral distortion is often found in PCA. In this paper, we propose an adaptive PCA method to enhance the spectral quality of the fused image. The amount of spatial detail from the panchromatic (PAN) image injected into each band of the multi-spectral (MS) image is appropriately determined by a weighting matrix, which is defined by the edges of the PAN image, the edges of the MS image, and the proportions between MS bands. To demonstrate the effectiveness of the proposed method, both qualitative visual and quantitative analyses are presented. The correlation coefficient (CC), the spectral discrepancy (SPD), and the spectral angle mapper (SAM) are used to measure the spectral quality of each fused band image. The Q index is calculated to evaluate the global spectral quality of all the fused bands as a whole. The spatial quality is evaluated by the average gradient (AG) and the standard deviation (STD). Experimental results show that the proposed method improves the spectral quality considerably compared with the original PCA method while maintaining its high spatial quality.
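
    For reference, the baseline PCA fusion that the paper adapts can be sketched as follows: transform the MS bands to principal components, substitute the histogram-matched PAN image for the first component, and invert the transform. The adaptive weighting matrix of the paper is not reproduced here; all names are illustrative, and the PAN image is assumed co-registered at the MS resolution.

    ```python
    import numpy as np

    def pca_fusion(ms, pan):
        """Baseline PCA pan-sharpening (the method the paper adapts).

        ms  : (H, W, B) multispectral stack
        pan : (H, W) panchromatic image, co-registered with ms
        """
        H, W, B = ms.shape
        X = ms.reshape(-1, B).astype(float)
        mean = X.mean(axis=0)
        Xc = X - mean
        # Principal components via eigendecomposition of the band covariance.
        vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        vecs = vecs[:, np.argsort(vals)[::-1]]
        pcs = Xc @ vecs
        # Match PAN to the first PC's statistics before substitution.
        p = pan.reshape(-1).astype(float)
        p = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
        pcs[:, 0] = p
        return (pcs @ vecs.T + mean).reshape(H, W, B)
    ```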

  2. A fast, robust, and simple implicit method for adaptive time-stepping on adaptive mesh-refinement grids

    NASA Astrophysics Data System (ADS)

    Commerçon, B.; Debout, V.; Teyssier, R.

    2014-03-01

    Context. Implicit solvers present strong limitations when used on supercomputing facilities, in particular for adaptive mesh-refinement codes. Aims: We present a new method for implicit adaptive time-stepping on adaptive mesh-refinement grids. We implement it in the radiation-hydrodynamics solver we designed for the RAMSES code for astrophysical purposes, and more particularly for protostellar collapse. Methods: We briefly recall the radiation-hydrodynamics equations and the adaptive time-stepping methodology used for hydrodynamical solvers. We then introduce the different types of boundary conditions (Dirichlet, Neumann, and Robin) that are used at the interfaces between levels and present our implementation of the new method in the RAMSES code. The method is tested against classical diffusion and radiation-hydrodynamics tests, after which we present an application to protostellar collapse. Results: We show that using Dirichlet boundary conditions at level interfaces is a good compromise between robustness and accuracy and that it can be used in structure formation calculations. The gain in computational time over our former single-time-step method ranges from factors of 5 to 50, depending on the level of adaptive time-stepping and on the problem. We successfully compare the old and new methods for protostellar collapse calculations that involve highly nonlinear physics. Conclusions: We have developed a simple but robust method for adaptive time-stepping of implicit schemes on adaptive mesh-refinement grids. It can be applied to a wide variety of physical problems that involve diffusion processes.

  3. Fast and adaptive method for SAR superresolution imaging based on point scattering model and optimal basis selection.

    PubMed

    Wang, Zheng-ming; Wang, Wei-wei

    2009-07-01

    A novel fast and adaptive method for synthetic aperture radar (SAR) superresolution imaging is developed. Based on the point scattering model in the phase history domain, a dictionary is constructed so that the superresolution imaging process can be converted into a problem of sparse parameter estimation. The approximate orthogonality of this dictionary is established by theoretical derivation and experimental verification. Based on the orthogonality of the dictionary, we propose a fast algorithm for basis selection. Meanwhile, a threshold for obtaining the number and positions of the scattering centers is determined automatically from the inner-product curves of the bases and the observed data, and the sensitivity of the estimation performance to this threshold is analyzed. To reduce the burden of computation and memory, a simplified superresolution imaging process is designed according to the characteristics of the imaging parameters. The experimental results on simulated images and an MSTAR image illustrate the validity of this method and its robustness at high noise levels. Compared with the traditional regularization method with a sparsity constraint, the proposed method has lower computational complexity and better adaptability.

  4. A Spectral Adaptive Mesh Refinement Method for the Burgers equation

    NASA Astrophysics Data System (ADS)

    Nasr Azadani, Leila; Staples, Anne

    2013-03-01

    Adaptive mesh refinement (AMR) is a powerful technique in computational fluid dynamics (CFD). Many CFD problems have a wide range of scales which vary in time and space. In order to resolve all the scales numerically, high grid resolutions are required: the smaller the scales, the higher the resolution should be. However, small scales are usually formed in a small portion of the domain or during a limited period of time. AMR is an efficient method for solving these types of problems, allowing high grid resolution where and when it is needed and minimizing memory and CPU time. Here we formulate a spectral version of AMR in order to accelerate simulations of a 1D model for isotropic homogeneous turbulence, the Burgers equation, as a first test of this method. Using pseudo-spectral methods, we apply AMR in Fourier space. The spectral AMR (SAMR) method presented here is applied to the Burgers equation, and the results are compared with those obtained using standard solution methods on a fine mesh.
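
    As context for the test problem, a minimal pseudo-spectral step for the viscous Burgers equation u_t + u u_x = nu u_xx on a periodic domain is sketched below (forward Euler in time for brevity, no dealiasing). The Fourier-space adaptivity of the SAMR method is not shown; names and the time integrator are assumptions.

    ```python
    import numpy as np

    def burgers_spectral_step(u, dt, nu, L=2 * np.pi):
        """One explicit pseudo-spectral time step of u_t + u u_x = nu u_xx.

        Derivatives are evaluated in Fourier space; the nonlinear term
        u * u_x is formed on the grid (the pseudo-spectral approach).
        """
        n = u.size
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
        u_hat = np.fft.fft(u)
        ux = np.real(np.fft.ifft(1j * k * u_hat))
        uxx = np.real(np.fft.ifft(-(k ** 2) * u_hat))
        return u + dt * (-u * ux + nu * uxx)
    ```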

  5. Robust image registration using adaptive coherent point drift method

    NASA Astrophysics Data System (ADS)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

    The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, it considers only the global spatial structure of the point sets, without other forms of attribute information. The simplification of the mixing parameters to equal values and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed that automatically determines the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and automatically determined within the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. The experimental results on optical images and remote sensing images show that the proposed method can significantly improve the matching performance.

  6. The Formative Method for Adapting Psychotherapy (FMAP): A community-based developmental approach to culturally adapting therapy

    PubMed Central

    Hwang, Wei-Chin

    2010-01-01

    How do we culturally adapt psychotherapy for ethnic minorities? Although there has been growing interest in doing so, few therapy adaptation frameworks have been developed. The majority of these frameworks take a top-down theoretical approach to adapting psychotherapy. The purpose of this paper is to introduce a community-based developmental approach to modifying psychotherapy for ethnic minorities. The Formative Method for Adapting Psychotherapy (FMAP) is a bottom-up approach that involves collaborating with consumers to generate and support ideas for therapy adaptation. It involves five phases that target developing, testing, and reformulating therapy modifications: (a) generating knowledge and collaborating with stakeholders, (b) integrating the generated information with theory and with empirical and clinical knowledge, (c) reviewing the initial culturally adapted clinical intervention with stakeholders and revising it, (d) testing the culturally adapted intervention, and (e) finalizing the culturally adapted intervention. Application of the FMAP is illustrated using examples from a study adapting psychotherapy for Chinese Americans, but it can also be readily applied to modify therapy for other ethnic groups. PMID:20625458

  7. A probabilistic spatial dengue fever risk assessment by a threshold-based quantile regression method.

    PubMed

    Chiu, Chuan-Hung; Wen, Tzai-Hung; Chien, Lung-Chang; Yu, Hwa-Lung

    2014-01-01

    Understanding the spatial characteristics of dengue fever (DF) incidence is crucial for governmental agencies to implement effective disease control strategies. We investigated the associations between environmental and socioeconomic factors and the geographic distribution of DF, and we propose a probabilistic risk assessment approach that uses threshold-based quantile regression to identify the significant risk factors for DF transmission and to estimate the spatial distribution of DF risk in terms of full probability distributions. To interpret risk, the return period was also included to characterize the frequency pattern of DF geographic occurrences. The study area included old Kaohsiung City and Fongshan District, two areas in Taiwan that have been affected by severe DF infections in recent decades. Results indicated that water-related facilities, including canals and ditches, and various types of residential area, as well as the interactions between them, were significant factors that elevated DF risk. By contrast, the increase of per capita income and its associated interactions with residential areas mitigated the DF risk in the study area. Nonlinear associations between these factors and DF risk were present in various quantiles, implying that water-related factors characterized the underlying spatial patterns of DF, and high-density residential areas indicated the potential for high DF incidence (e.g., clustered infections). The spatial distributions of DF risks were assessed in terms of three distinct map presentations: expected incidence rates, incidence rates in various return periods, and return periods at distinct incidence rates. These probability-based spatial risk maps exhibited distinct DF risks associated with environmental factors, expressed as various DF magnitudes and occurrence probabilities across Kaohsiung, and can serve as a reference for local governmental agencies.
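
    Quantile regression of the kind described above can be run directly in Python with statsmodels; the sketch below fits an incidence model with an interaction term at several quantiles to show how covariate effects can differ across the risk distribution. The data frame, variable names, and formula are hypothetical, not the study's covariates.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical district-level data: DF incidence vs. candidate risk factors.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "incidence": rng.gamma(2.0, 5.0, 200),
        "ditch_density": rng.uniform(0.0, 1.0, 200),
        "income": rng.uniform(0.0, 1.0, 200),
    })

    # Fit several quantiles; coefficients may vary across the distribution.
    for q in (0.5, 0.75, 0.9):
        fit = smf.quantreg("incidence ~ ditch_density * income", df).fit(q=q)
        print(q, fit.params.round(2).to_dict())
    ```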

  8. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high-performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse data representation of the high-resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to the highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics and improved resolution of steep thermodynamic gradients and of the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  9. Adaptive Ripple Down Rules Method based on Description Length

    NASA Astrophysics Data System (ADS)

    Yoshida, Tetsuya; Wada, Takuya; Motoda, Hiroshi; Washio, Takashi

    A knowledge acquisition method called Ripple Down Rules (RDR) can directly acquire and encode knowledge from human experts. It is an incremental acquisition method, and each new piece of knowledge is added as an exception to the existing knowledge base. Past research on the RDR method assumes that the problem domain is stable. In reality this is not the case, especially when the environment changes over time. This paper proposes an adaptive Ripple Down Rules method based on the Minimum Description Length (MDL) principle, aiming at knowledge acquisition in a dynamically changing environment. We consider the change in the correspondence between attribute-values and class labels as a typical change in the environment. When such a change occurs, some pieces of previously acquired knowledge become worthless, and the existence of such knowledge may hinder the acquisition of new knowledge. In our approach, knowledge deletion is carried out as well as knowledge acquisition, so that useless knowledge is properly discarded to ensure efficient knowledge acquisition while maintaining the prediction accuracy for future data. Furthermore, pruning is incorporated into the incremental knowledge acquisition in RDR to improve the prediction accuracy of the constructed knowledge base. Experiments were conducted by simulating the change in the correspondence between attribute-values and class labels using datasets from the UCI repository. The results are encouraging.
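
    A toy sketch of the MDL trade-off that motivates knowledge deletion may help; the encoding costs below are schematic choices, not the paper's coding scheme. A rule that no longer reduces errors only adds model bits, so deleting it shortens the total description.

```python
# Two-part MDL code: bits for the rule set plus bits to index the exceptions.
import math

def description_length(n_rules, n_errors, n_cases, bits_per_rule=8.0):
    model_bits = n_rules * bits_per_rule
    # crude data code: identify which of the n_cases are misclassified
    data_bits = n_errors * math.log2(max(n_cases, 2))
    return model_bits + data_bits

# Keeping an obsolete rule: 12 rules, 40 errors on 1000 recent cases.
keep = description_length(12, 40, 1000)
# Deleting it: 11 rules, still 40 errors (the rule no longer helps).
delete = description_length(11, 40, 1000)
print(f"keep={keep:.1f} bits, delete={delete:.1f} bits -> "
      f"{'delete' if delete < keep else 'keep'}")
```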

  10. Adaptive Mesh Refinement in Computational Astrophysics -- Methods and Applications

    NASA Astrophysics Data System (ADS)

    Balsara, D.

    2001-12-01

    The advent of robust, reliable and accurate higher-order Godunov schemes for many of the systems of equations of interest in computational astrophysics has made it important to understand how to solve them in a multi-scale fashion. This is so because the physics associated with astrophysical phenomena evolves in a multi-scale fashion, and we wish to arrive at a multi-scale simulation capability to represent the physics. Because astrophysical systems have magnetic fields, multi-scale magnetohydrodynamics (MHD) is of special interest. In this paper we first discuss general issues in adaptive mesh refinement (AMR). We then focus on the important issues in carrying out divergence-free AMR-MHD and catalogue the progress we have made in that area. We show that AMR methods lend themselves to easy parallelization. We then discuss applications of the RIEMANN framework for AMR-MHD to problems in computational astrophysics.

  11. Nonlinear threshold effect in the Z-scan method of characterizing limiters for high-intensity laser light

    NASA Astrophysics Data System (ADS)

    Tereshchenko, S. A.; Savelyev, M. S.; Podgaetsky, V. M.; Gerasimenko, A. Yu.; Selishchev, S. V.

    2016-09-01

    A threshold model is described which permits one to determine the properties of limiters for high-powered laser light. It takes into account the threshold characteristics of the nonlinear optical interaction between the laser beam and the limiter working material. The traditional non-threshold model is a particular case of the threshold model when the limiting threshold is zero. The nonlinear characteristics of carbon nanotubes in liquid and solid media are obtained from experimental Z-scan data. Specifically, the nonlinear threshold effect was observed for aqueous dispersions of nanotubes, but not for nanotubes in solid polymethylmethacrylate. The threshold model fits the experimental Z-scan data better than the non-threshold model. Output characteristics were obtained that integrally describe the nonlinear properties of the optical limiters.

  12. The dynamic time-over-threshold method for multi-channel APD based gamma-ray detectors

    NASA Astrophysics Data System (ADS)

    Orita, T.; Shimazoe, K.; Takahashi, H.

    2015-03-01

    Recent advances in manufacturing technology have enabled the use of multi-channel pixelated detectors in gamma-ray imaging applications. When obtaining gamma-ray measurements, it is important to obtain pulse-height information in order to reject unwanted events such as scattering. However, as the number of channels increases, more electronics are needed to process each channel's signal, and the corresponding increases in circuit size and power consumption can cause practical problems. The time-over-threshold (ToT) method, which has recently become popular in the medical field, is a signal processing technique that can effectively avoid such problems. However, ToT suffers from poor linearity, and its dynamic range is limited. We therefore propose a new ToT technique called the dynamic time-over-threshold (dToT) method [4]. A new signal processing system using dToT and CR-RC shaping demonstrated much better linearity than that of a conventional ToT. Using a test circuit with a new Gd3Al2Ga3O12 (GAGG) scintillator and an avalanche photodiode, the pulse-height spectra of 137Cs and 22Na sources were measured with high linearity. Based on these results, we designed a new application-specific integrated circuit (ASIC) for this multi-channel dToT system, measured the spectra of a 22Na source, and investigated the linearity of the system.
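
    The linearity problem and the dToT remedy can be illustrated numerically. The sketch below (a toy, not the authors' circuit or ASIC) computes the time over threshold of a CR-RC shaped pulse for a fixed threshold and for a threshold that ramps up after triggering; the shaping time, threshold, and ramp rate are arbitrary choices.

```python
# Toy ToT comparison: fixed threshold vs. a threshold that ramps after the trigger.
import numpy as np

tau, th0, ramp = 2.2, 0.05, 0.25          # shaping time (us), threshold (V), ramp (V/us)
t = np.linspace(0, 30, 30001)             # time axis (us)

def tot(amplitude, dynamic):
    v = amplitude * (t / tau) * np.exp(1 - t / tau)   # CR-RC pulse, peak = amplitude
    above = v > th0
    if not above.any():
        return 0.0
    t0 = t[above][0]                                   # trigger time
    thr = th0 + (dynamic * ramp) * np.clip(t - t0, 0, None)
    over = (v > thr) & (t >= t0)
    return t[over][-1] - t0 if over.any() else 0.0

for a in (0.2, 0.5, 1.0, 2.0):
    print(f"peak {a:4.1f} V: static ToT = {tot(a, False):5.2f} us, "
          f"dynamic ToT = {tot(a, True):5.2f} us")
```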

  13. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can increase storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space needed to fully parametrize the physical properties of the simulated object (a.k.a. Earth). Systems that exhibit a multiscale structure in space are candidates for adaptive mesh refinement, which varies the resolution locally. An example we found well suited is the mantle, where plate boundaries and fault zones require resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy for making finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times

  14. Question on the measurement of the metal work function in an electron spectrometer by the secondary-electron emission threshold method

    NASA Technical Reports Server (NTRS)

    Alov, N. V.; Dadayan, K. A.

    1988-01-01

    The feasibility of measuring metal work functions using the secondary emission threshold method and an electron spectrometer is demonstrated. Measurements are reported for Nb, Mo, Ta, and W bombarded by Ar(+) ions.

  15. Adaptive mesh generation for edge-element finite element method

    NASA Astrophysics Data System (ADS)

    Tsuboi, Hajime; Gyimothy, Szabolcs

    2001-06-01

    An adaptive mesh generation method for two- and three-dimensional finite element methods using edge elements is proposed. Since edge elements preserve the continuity of the tangential component, the strategy for creating new nodes is based on evaluating the normal component of the magnetic vector potential across element interfaces. The evaluation is performed at the midpoint of an edge of a triangular element for two-dimensional problems, or at the centroid of a triangular face of a tetrahedral element for three-dimensional problems. At the boundary between two elements, the error estimator is the ratio of the normal-component discontinuity to the maximum value of the potential in the same material. One or more nodes are placed at the midpoints of the edges according to the value of the estimator, and the elements where new nodes have been created are subdivided. A final mesh is obtained after several iterations. Some computational results for two- and three-dimensional problems using the proposed method are shown.

  16. Evaluation of Adaptive Subdivision Method on Mobile Device

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila

    2013-06-01

    Recently, there have been significant improvements in the capabilities of mobile devices, but rendering large 3D objects is still tedious because of the resource constraints of mobile devices. To reduce storage requirements, a 3D object is simplified, but certain areas of curvature are compromised and the surface will not be smooth. Therefore, a method to smooth selected areas of curvature is implemented. One popular choice is the adaptive subdivision method. Experiments are performed on two datasets, with results based on processing time, rendering speed, and the appearance of the object on the devices. The results show a drop in frame-rate performance due to the increase in the number of triangles with each level of iteration, while the processing time for generating the new mesh also increases significantly. Since the two devices differ in screen size, the surface on the iPhone appears to have more triangles and to be more compact than the surface displayed on the iPad.

  17. Adaptive Elastic Net for Generalized Methods of Moments.

    PubMed

    Caner, Mehmet; Zhang, Hao Helen

    2014-01-30

    Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity, as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. The method has the oracle property, meaning that the nonzero parameters are estimated with their standard limiting distribution while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
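
    For orientation, the sketch below implements the least-squares adaptive elastic net that this paper generalizes, not the GMM estimator itself. It uses scikit-learn, applies the adaptive weights through a column-rescaling trick, and, as a simplification, lets the weights act on both penalty terms rather than only the l1 term as in the original scheme.

```python
# Two-step adaptive elastic net (least-squares version) via column rescaling.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, -2.0, 1.5] + [0.0] * (p - 3))
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Step 1: pilot elastic-net estimate.
pilot = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y).coef_

# Step 2: adaptive weights penalize small pilot coefficients more heavily.
gamma = 1.0
w = (np.abs(pilot) + 1.0 / n) ** (-gamma)

# Step 3: absorb the weights into the design, fit, then undo the scaling.
Xw = X / w
fit = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(Xw, y)
beta_adaptive = fit.coef_ / w
print(np.round(beta_adaptive, 2))   # redundant coefficients shrink to ~0
```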

  18. Adaptive enhancement method of infrared image based on scene feature

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Bai, Tingzhu; Shang, Fei

    2008-12-01

    All objects emit radiation in amounts related to their temperature and their ability to emit radiation. An infrared image directly shows this otherwise invisible infrared radiation. Because of these advantages, infrared imaging technology is applied in many fields. Compared with visible images, however, the disadvantages of infrared images are obvious: low luminance, low contrast, and an inconspicuous difference between target and background. The aim of infrared image enhancement is to improve the interpretability or perception of information in the image for human viewers, or to provide 'better' input for other automated image processing techniques. Most adaptive algorithms for image enhancement are based mainly on the gray-scale distribution of the infrared image and are not associated with the actual features of the image scene. The enhancement is therefore poorly targeted, and the resulting images are not well suited to infrared surveillance applications. In this paper we have developed a scene-feature-based algorithm to enhance the contrast of infrared images adaptively. First, after analyzing the scene features of different infrared images, we chose feasible parameters to describe the infrared image. Second, we constructed a new histogram distribution based on the chosen parameters using a Gaussian function. Finally, the infrared image is enhanced by constructing a new form of histogram. Experimental results show that the algorithm performs better than the other methods mentioned in this paper for infrared scene images.
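
    A simplified reading of the construction, as a hedged sketch: build a Gaussian-shaped target histogram from scene statistics and remap the image by histogram specification. The function below is illustrative; the spread parameter and the choice of scene statistics are assumptions, not the authors' exact parameters.

```python
# Histogram specification toward a Gaussian target histogram (8-bit grayscale).
import numpy as np

def enhance_ir(img, spread=3.0):
    # Empirical CDF of the input image.
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    cdf_src = np.cumsum(hist) / hist.sum()

    # Scene-derived Gaussian target histogram over the full dynamic range.
    levels = np.arange(256)
    mu = img.mean()
    sigma = spread * img.std() + 1e-6
    target = np.exp(-0.5 * ((levels - mu) / sigma) ** 2)
    cdf_tgt = np.cumsum(target) / target.sum()

    # Map each source level to the target level with the closest cumulative probability.
    mapping = np.searchsorted(cdf_tgt, cdf_src).clip(0, 255).astype(np.uint8)
    return mapping[img]

rng = np.random.default_rng(2)
ir = rng.normal(90, 6, (128, 128)).clip(0, 255).astype(np.uint8)  # low-contrast scene
print(enhance_ir(ir).std() / ir.std())  # contrast stretch factor (> 1)
```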

  19. Field-based aeolian sediment transport threshold measurement: Sensors, calculation methods, and standards as a strategy for improving inter-study comparison

    NASA Astrophysics Data System (ADS)

    Barchyn, Thomas Edward

    Aeolian sediment transport threshold is commonly defined as the minimum wind speed (or shear stress) necessary for wind-driven sediment transport. Threshold is a core parameter in most models of aeolian transport. Recent advances in methodology for field-based measurement of threshold show promise for improving parameterizations; however, investigators have varied in choice of method and sensor. The impacts of modifying measurement system configuration are unknown. To address this, two field tests were performed: (i) comparison of four piezoelectric sediment transport sensors, and (ii) comparison of four calculation methods. Data from both comparisons suggest that threshold measurements are non-negligibly modified by measurement system configuration and are incomparable. A poor understanding of natural sediment transport dynamics suggests that development of calibration methods could be difficult. Development of technical standards was explored to improve commensurability of measurements. Standards could assist future researchers with data syntheses and integration.

  20. Method for removing tilt control in adaptive optics systems

    DOEpatents

    Salmon, Joseph Thaddeus

    1998-01-01

    A new adaptive optics system and method of operation, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^-1 X^T) G (I - A)
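
    The quoted gain-matrix equation transcribes directly into code. The sketch below is a numerical check, not the patent's implementation: the sizes of G, X, and A are arbitrary, and treating A as a small coupling term is an assumption made for illustration.

```python
# Numerical check of G' = (I - X (X^T X)^-1 X^T) G (I - A): the left factor
# projects out the tilt modes spanned by the columns of X.
import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 2
G = rng.normal(size=(n, n))       # nominal reconstructor gain
X = rng.normal(size=(n, k))       # columns span the tilt modes to remove
A = 0.1 * np.eye(n)               # assumed small coupling term

I = np.eye(n)
P = I - X @ np.linalg.solve(X.T @ X, X.T)   # projector onto the complement of span(X)
G_prime = P @ G @ (I - A)

# Tilt components no longer appear in the corrected gain's output:
print(np.allclose(X.T @ G_prime, 0.0))      # True, since X^T P = 0
```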

  1. Method for removing tilt control in adaptive optics systems

    DOEpatents

    Salmon, J.T.

    1998-04-28

    A new adaptive optics system and method of operation are disclosed, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^-1 X^T) G (I - A). 3 figs.

  2. Adaptive two-regime method: Application to front propagation

    SciTech Connect

    Robinson, Martin; Erban, Radek; Flegg, Mark

    2014-03-28

    The Adaptive Two-Regime Method (ATRM) is developed for hybrid (multiscale) stochastic simulation of reaction-diffusion problems. It efficiently couples detailed Brownian dynamics simulations with coarser lattice-based models. The ATRM is a generalization of the previously developed Two-Regime Method [Flegg et al., J. R. Soc., Interface 9, 859 (2012)] to multiscale problems which require a dynamic selection of regions where detailed Brownian dynamics simulation is used. Typical applications include front propagation or spatio-temporal oscillations. In this paper, the ATRM is used for an in-depth study of front propagation in a stochastic reaction-diffusion system which has its mean-field model given in terms of the Fisher equation [R. Fisher, Ann. Eugen. 7, 355 (1937)]. It exhibits a travelling reaction front which is sensitive to stochastic fluctuations at the leading edge of the wavefront. Previous studies into stochastic effects on the Fisher wave propagation speed have focused on lattice-based models, but there has been limited progress using off-lattice (Brownian dynamics) models, which suffer due to their high computational cost, particularly at the high molecular numbers that are necessary to approach the Fisher mean-field model. By modelling only the wavefront itself with the off-lattice model, it is shown that the ATRM leads to the same Fisher wave results as purely off-lattice models, but at a fraction of the computational cost. The error analysis of the ATRM is also presented for a morphogen gradient model.

  3. An adaptive training method for optimal interpolative neural nets.

    PubMed

    Liu, T Z; Yen, C W

    1997-04-01

    In contrast to conventional multilayered feedforward networks which are typically trained by iterative gradient search methods, an optimal interpolative (OI) net can be trained by a noniterative least squares algorithm called RLS-OI. The basic idea of RLS-OI is to use a subset of the training set, whose inputs are called subprototypes, to constrain the OI net solution. A subset of these subprototypes, called prototypes, is then chosen as the parameter vectors of the activation functions of the OI net to satisfy the subprototype constraints in the least squares (LS) sense. By dynamically increasing the numbers of subprototypes and prototypes, RLS-OI evolves the OI net from scratch to the extent sufficient to solve a given classification problem. To improve the performance of RLS-OI, this paper addresses two important problems in OI net training: the selection of the subprototypes and the selection of the prototypes. By choosing subprototypes from poorly classified regions, this paper proposes a new subprototype selection method which is adaptive to the changing classification performance of the growing OI net. This paper also proposes a new prototype selection criterion to reduce the complexity of the OI net. For the same training accuracy, simulation results demonstrate that the proposed approach produces a smaller OI net than the RLS-OI algorithm. Experimental results also show that the proposed approach is less sensitive to the variation of the training set than RLS-OI.

  4. Adaptive two-regime method: application to front propagation.

    PubMed

    Robinson, Martin; Flegg, Mark; Erban, Radek

    2014-03-28

    The Adaptive Two-Regime Method (ATRM) is developed for hybrid (multiscale) stochastic simulation of reaction-diffusion problems. It efficiently couples detailed Brownian dynamics simulations with coarser lattice-based models. The ATRM is a generalization of the previously developed Two-Regime Method [Flegg et al., J. R. Soc., Interface 9, 859 (2012)] to multiscale problems which require a dynamic selection of regions where detailed Brownian dynamics simulation is used. Typical applications include front propagation or spatio-temporal oscillations. In this paper, the ATRM is used for an in-depth study of front propagation in a stochastic reaction-diffusion system which has its mean-field model given in terms of the Fisher equation [R. Fisher, Ann. Eugen. 7, 355 (1937)]. It exhibits a travelling reaction front which is sensitive to stochastic fluctuations at the leading edge of the wavefront. Previous studies into stochastic effects on the Fisher wave propagation speed have focused on lattice-based models, but there has been limited progress using off-lattice (Brownian dynamics) models, which suffer due to their high computational cost, particularly at the high molecular numbers that are necessary to approach the Fisher mean-field model. By modelling only the wavefront itself with the off-lattice model, it is shown that the ATRM leads to the same Fisher wave results as purely off-lattice models, but at a fraction of the computational cost. The error analysis of the ATRM is also presented for a morphogen gradient model.

  5. An adaptive high and low impedance fault detection method

    SciTech Connect

    Yu, D.C. ); Khan, S.H. )

    1994-10-01

    An integrated high impedance fault (HIF) and low impedance fault (LIF) detection method is proposed in this paper. For HIF detection, the proposed technique is based on a number of characteristics of the HIF current: the fault current magnitude, the magnitudes of the 3rd and 5th harmonic currents, the angle of the third harmonic current, the angle difference between the third harmonic current and the fundamental voltage, and the negative sequence current of the HIF. These characteristics are identified by modeling the distribution feeders in EMTP. In addition to these characteristics, the above-ambient (average) negative sequence current is also considered. An adjustable block-out region around the average load current is provided. The average load current is recalculated every 18,000 cycles (5 minutes). This adaptive feature not only makes the proposed scheme more sensitive to low fault currents, but also prevents the relay from tripping under normal load current. The logic circuit required to implement the proposed HIF detection method is also included. With minimal modifications, the logic developed for HIF detection can be applied to low impedance fault (LIF) detection. A complete logic circuit which detects both HIF and LIF is proposed. Using this combined logic, the need to install separate devices for HIF and LIF detection can be eliminated.
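
    The harmonic signatures listed above are straightforward to extract with an FFT. The sketch below is illustrative only: the sampling rate and the synthetic fault current are made up, and a real relay would also track the negative-sequence component and the fundamental voltage phase.

```python
# Extract fundamental, 3rd, and 5th harmonic magnitudes/angles from a current record.
import numpy as np

fs, f0 = 3840, 60                      # 64 samples per 60 Hz cycle
t = np.arange(0, 10 / f0, 1 / fs)      # ten cycles
# Synthetic HIF-like current: distorted fundamental plus odd harmonics.
i = (10 * np.sin(2 * np.pi * f0 * t)
     + 0.8 * np.sin(2 * np.pi * 3 * f0 * t + 0.7)
     + 0.4 * np.sin(2 * np.pi * 5 * f0 * t + 1.1))

spec = np.fft.rfft(i) / len(i) * 2     # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(i), 1 / fs)

for h in (1, 3, 5):
    k = np.argmin(np.abs(freqs - h * f0))
    print(f"harmonic {h}: |I| = {np.abs(spec[k]):.2f} A, "
          f"angle = {np.degrees(np.angle(spec[k])):.1f} deg")
```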

  6. Adaptable Metadata Rich IO Methods for Portable High Performance IO

    SciTech Connect

    Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten

    2009-01-01

    Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small

  7. Principles and Methods of Adapted Physical Education and Recreation.

    ERIC Educational Resources Information Center

    Arnheim, Daniel D.; And Others

    This text is designed for the elementary and secondary school physical educator and the recreation specialist in adapted physical education and, more specifically, as a text for college courses in adapted and corrective physical education and therapeutic recreation. The text is divided into four major divisions: scope, key teaching and therapy…

  8. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.

  9. A hybrid method for optimization of the adaptive Goldstein filter

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

    2014-12-01

    The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. Its main parameter, alpha, is the exponent applied to the filtering function; depending on its value, areas are filtered more or less strongly. Several variants have been developed to determine alpha adaptively using indicators such as coherence and phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in practice, and the optimal model for accurately determining the functional relationship between the indicators and alpha is also not clear. As a result, the filter tends to under- or over-filter. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is merged into the filtering procedure to suppress the high noise over incoherent areas. Experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance than existing approaches.
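
    For context, the classic (non-adaptive) Goldstein filter core is compact enough to sketch: weight the patch spectrum by a smoothed version of its own magnitude raised to alpha. Here alpha is fixed; the paper's contribution lies in how alpha is estimated, which this toy does not reproduce.

```python
# Goldstein-style patch filter: spectrum weighted by its smoothed magnitude ** alpha.
import numpy as np
from scipy.ndimage import uniform_filter

def goldstein_patch(phase_patch, alpha=0.5, smooth=3):
    z = np.exp(1j * phase_patch)            # complex interferogram patch
    Z = np.fft.fft2(z)
    H = uniform_filter(np.abs(Z), size=smooth)
    H /= H.max() + 1e-12
    z_f = np.fft.ifft2(Z * H ** alpha)
    return np.angle(z_f)                     # filtered phase

rng = np.random.default_rng(4)
ramp = np.outer(np.linspace(0, 6 * np.pi, 64), np.ones(64))
noisy = np.angle(np.exp(1j * (ramp + rng.normal(0, 0.8, (64, 64)))))
resid = np.angle(np.exp(1j * (goldstein_patch(noisy) - ramp)))
print(f"residual phase std: {resid.std():.2f} rad (input noise: 0.8 rad)")
```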

  10. LDRD Final Report: Adaptive Methods for Laser Plasma Simulation

    SciTech Connect

    Dorr, M R; Garaizar, F X; Hittinger, J A

    2003-01-29

    The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are ''hydrodynamically large'', i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an

  11. Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method

    NASA Astrophysics Data System (ADS)

    Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph

    2008-11-01

    This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.

  12. Silicon Photomultiplier-Based Multi-Channel Gamma Ray Detector Using the Dynamic Time-Over-Threshold Method

    NASA Astrophysics Data System (ADS)

    Nakamura, Y.; Shimazoe, K.; Takahashi, H.

    2016-02-01

    Silicon photomultipliers (SiPMs), a relatively new type of photon detector, have received increasing attention in the fields of nuclear medicine and high-energy physics because of their compactness and high gain of up to 10^6. In this work, a SiPM-based multi-channel gamma ray detector with individual readout based on the dynamic time-over-threshold (dToT) method is implemented and demonstrated as a building block for large-area gamma ray imager applications. The detector consists of 64 channels of KETEK SiPM PM6660 (6 × 6 mm^2, containing 10,000 micro-cells of 60 × 60 μm^2) coupled to an 8 × 8 array of high-energy-resolution Gd3(Al,Ga)5O12(Ce) (HR-GAGG) crystals (10 × 10 × 10 mm^3) segmented by a 1 mm thick BaSO4 reflector. To produce a digital pulse containing linear energy information, the dToT-based readout circuit consists of a CR-RC shaping amplifier (2.2 μs) and a comparator with a feedback component. By modelling the SiPM pulse, the light output, and the CR-RC shaping amplifier, the integral non-linearity (INL) was numerically calculated in terms of the delay time and the time constant of the dynamic threshold movement. Experimentally, the averaged INL was 5.8 ± 1.6% and the energy resolution was 7.4 ± 0.9% full-width-at-half-maximum (FWHM) at 662 keV. The 64-channel single-mode detector module was successfully implemented, demonstrating its potential as a building block for large-area gamma ray imaging applications.

  13. High-resolution threshold photoelectron study of the propargyl radical by the vacuum ultraviolet laser velocity-map imaging method

    NASA Astrophysics Data System (ADS)

    Gao, Hong; Xu, Yuntao; Yang, Lei; Lam, Chow-Shing; Wang, Hailing; Zhou, Jingang; Ng, C. Y.

    2011-12-01

    By employing the vacuum ultraviolet (VUV) laser velocity-map imaging (VMI) photoelectron scheme to discriminate against energetic photoelectrons, we have measured the VUV-VMI threshold-photoelectron (VUV-VMI-TPE) spectra of the propargyl radical [C3H3(X̃ 2B1)] near its ionization threshold at photoelectron energy bandwidths of 3 and 7 cm^-1 (full-width at half-maximum, FWHM). The simulation of the VUV-VMI-TPE spectra thus obtained, along with the Stark shift correction, has allowed the determination of a precise value of 70,156 ± 4 cm^-1 (8.6982 ± 0.0005 eV) for the ionization energy (IE) of C3H3. In the present VMI-TPE experiment, the Stark shift correction is determined by comparing the VUV-VMI-TPE and VUV laser pulsed field ionization-photoelectron (VUV-PFI-PE) spectra for the origin band of the photoelectron spectrum of the X̃+-X̃ transition of chlorobenzene. The fact that the FWHMs for this origin band observed using the VUV-VMI-TPE and VUV-PFI-PE methods are nearly the same indicates that the energy resolutions achieved in the VUV-VMI-TPE and VUV-PFI-PE measurements are comparable. The IE(C3H3) value obtained from the VUV-VMI-TPE measurement is consistent with the value determined from the VUV laser PIE spectrum of supersonically cooled C3H3(X̃ 2B1) radicals, which is also reported in this article.

  14. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  15. [Comparison of the threshold interpolation and whole-line method on logMAR chart and Snellen chart for visual acuity testing].

    PubMed

    Veselý, P; Ventruba, J

    2009-10-01

    The main goal of our study was to test for a statistically significant difference between the threshold interpolation logMAR method on the ETDRS chart and the whole-line method on a Snellen chart with Sloan letters. We made 108 measurements with the threshold interpolation method and the whole-line method on the ETDRS chart and with the whole-line method on the Snellen chart. The average value measured with the threshold method on the ETDRS chart was 1.132 (min 0.660, max 1.580); with the whole-line method on the ETDRS chart it was 1.134 (min 0.630, max 1.580); and with the whole-line method on the Snellen chart it was 1.183 (min 0.630, max 1.600). We found a statistically significant difference between the threshold interpolation method on the ETDRS chart and the whole-line method on the Snellen chart (p < 0.001); the values measured with the whole-line method on the Snellen chart were overestimated. Exact and reliable measurement of visual acuity is an important component of further examinations (e.g., contrast sensitivity, perimetry, tonometry), which enable us to correctly diagnose pathological changes in human eye structures.

  16. Method and system for spatial data input, manipulation and distribution via an adaptive wireless transceiver

    NASA Technical Reports Server (NTRS)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for spatial data input, manipulation, and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver that automatically and adaptively controls wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both short and long distances. The wireless transceiver is automatically adaptive, and wireless devices can send and receive wireless digital and analog data from various sources rapidly, in real time, via available networks and network services.

  17. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored in the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  18. Systems and Methods for Derivative-Free Adaptive Control

    NASA Technical Reports Server (NTRS)

    Yucelen, Tansel (Inventor); Kim, Kilsoo (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    An adaptive control system is disclosed. The control system can control uncertain dynamic systems. The control system can employ one or more derivative-free adaptive control architectures. The control system can further employ one or more derivative-free weight update laws. The derivative-free weight update laws can comprise a time-varying estimate of an ideal vector of weights. The control system of the present invention can therefore quickly stabilize systems that undergo sudden changes in dynamics, caused by, for example, sudden changes in weight. Embodiments of the present invention can also provide a less complex control system than existing adaptive control systems. The control system can control aircraft and other dynamic systems, such as, for example, those with non-minimum phase dynamics.

  19. Study of adaptive methods for data compression of scanner data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.

  20. A New Method to Cancel RFI---The Adaptive Filter

    NASA Astrophysics Data System (ADS)

    Bradley, R.; Barnbaum, C.

    1996-12-01

    An increasing amount of precious radio frequency spectrum in the VHF, UHF, and microwave bands is being utilized each year to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Some radio spectral lines of astronomical interest occur outside the protected radio astronomy bands and are unobservable due to heavy interference. Conventional approaches to deal with RFI include legislation, notch filters, RF shielding, and post-processing techniques. Although these techniques are somewhat successful, each suffers from insufficient interference cancellation. One concept of interference excision that has not been used before in radio astronomy is adaptive interference cancellation. The concept of adaptive interference canceling was first introduced in the mid-1970s as a way to reduce unwanted noise in low frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartment of automobiles. Only recently have high-speed digital filter chips made adaptive filtering possible in a bandwidth as large a few megahertz, finally opening the door to astronomical uses. The system consists of two receivers: the main beam of the radio telescope receives the desired signal corrupted by RFI coming in the sidelobes, and the reference antenna receives only the RFI. The reference antenna is processed using a digital adaptive filter and then subtracted from the signal in the main beam, thus producing the system output. The weights of the digital filter are adjusted by way of an algorithm that minimizes, in a least-squares sense, the power output of the system. Through an adaptive-iterative process, the interference canceler will lock onto the RFI and the filter will adjust itself to minimize the effect of the RFI at the system output. We are building a prototype 100 MHz receiver and will measure the cancellation

  1. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve.

    PubMed

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments.

  2. Novel ‘hunting’ method using transcranial magnetic stimulation over parietal cortex disrupts visuospatial sensitivity in relation to motor thresholds

    PubMed Central

    Oliver, R; Bjoertomt, O; Driver, J; Greenwood, R; Rothwell, J

    2010-01-01

    There is considerable inter-study and inter-individual variation in the scalp location of parietal sites where transcranial magnetic stimulation (TMS) may modulate visuospatial behaviours (see Ryan, Bonilha, & Jackson 2006), and no clear consensus on methods for identifying such sites. Here we introduce a novel TMS "hunting paradigm" that allows rapid, reliable identification of a site over right anterior intraparietal sulcus (IPS), where short trains (at 10 Hz for 0.5 s) of TMS disrupt performance of a task in which subjects judge the presence or absence of a small peripheral gap (at 14 degrees eccentricity), on one or other (known) side of an extended (29 degrees) horizontal line centred on fixation. Signal detection analysis confirmed that TMS at this site reduced sensitivity (d') for gap targets in the left visual hemifield. A further experiment showed that the same right-parietal TMS increased sensitivity instead for gaps in the right hemifield. Comparing TMS across a grid of scalp locations around the identified 'hotspot' confirmed the spatial specificity. Assessment of the TMS intensity required to produce the phenomena found that it was linearly related to individuals' resting motor threshold for TMS over hand M1. Our approach provides a systematic new way to identify an effective site and intensity in individuals, at which TMS over right parietal cortex reliably changes visuospatial sensitivity. PMID:19651149

  3. The use of the spectral method within the fast adaptive composite grid method

    SciTech Connect

    McKay, S.M.

    1994-12-31

    The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on different grids with varying discretizations and using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods are used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the accuracy of this hybrid method outside the subdomain will be investigated.
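
    To make the accuracy claim concrete, here is a minimal Fourier spectral solve of a 1D periodic Poisson problem of the sort that could serve as a subdomain solver; the problem and discretization are illustrative, not taken from the paper.

```python
# Solve u'' = f with periodic boundaries by dividing Fourier coefficients by -k^2.
import numpy as np

n = 64
x = 2 * np.pi * np.arange(n) / n
f = np.sin(3 * x)                        # right-hand side
k = np.fft.fftfreq(n, d=1.0 / n)         # integer wavenumbers
fhat = np.fft.fft(f)
uhat = np.zeros_like(fhat)
nz = k != 0
uhat[nz] = fhat[nz] / -(k[nz] ** 2)      # invert d^2/dx^2 in Fourier space
u = np.fft.ifft(uhat).real

print(np.max(np.abs(u - (-np.sin(3 * x) / 9))))   # ~1e-16 for this smooth f
```

    For smooth data the error is at machine precision, which is the accuracy advantage over finite differences that the abstract appeals to.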

  4. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  5. Method and apparatus for adaptive force and position control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1989-01-01

    The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time adaptive force/position control is achieved by use of feedforward and feedback controllers: the feedforward controller is the inverse of the linearized model of the robot dynamics and contains only proportional-double-derivative terms, while the feedback controller, of the proportional-integral-derivative type, ensures that the manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture. For force control, the adaptive controller achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.
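
    Reduced to a single joint, the structure is easy to sketch. The code below simulates a toy second-order plant with an inverse-model feedforward term plus PID feedback tracking a step-plus-exponential reference; the gains, plant parameters, and the absence of the adaptation laws are all simplifications.

```python
# One-joint feedforward + PID tracking of a step-plus-exponential trajectory.
import numpy as np

dt, m, b = 0.001, 2.0, 0.5                 # time step, inertia, damping
kp, ki, kd = 400.0, 50.0, 40.0             # PID gains (illustrative)
t = np.arange(0, 2, dt)
q_ref = 1 - np.exp(-3 * t)                 # step-plus-exponential reference
qd_ref = 3 * np.exp(-3 * t)
qdd_ref = -9 * np.exp(-3 * t)

q = qd = integ = 0.0
for i in range(len(t)):
    e = q_ref[i] - q
    integ += e * dt
    u_ff = m * qdd_ref[i] + b * qd_ref[i]                # inverse-model feedforward
    u_fb = kp * e + ki * integ + kd * (qd_ref[i] - qd)   # PID feedback
    u = u_ff + u_fb
    qdd = (u - b * qd) / m                               # plant: m*qdd + b*qd = u
    qd += qdd * dt
    q += qd * dt

print(f"final tracking error: {abs(q_ref[-1] - q):.2e}")
```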

  6. Adaptive aggregation method for the Chemical Master Equation.

    PubMed

    Zhang, Jingwei; Watson, Layne T; Cao, Yang

    2009-01-01

    One important aspect of biological systems such as gene regulatory networks and protein-protein interaction networks is the stochastic nature of interactions between chemical species. Such stochastic behaviour can be accurately modelled by the Chemical Master Equation (CME). However, the CME usually imposes intensive computational requirements when used to characterise molecular biological systems. The major challenge comes from the curse of dimensionality, which has been tackled by a few research papers. The essential goal is to aggregate the system efficiently with limited approximation errors. This paper presents an adaptive way to implement the aggregation process using information collected from Monte Carlo simulations. Numerical results show the effectiveness of the proposed algorithm.
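
    As a hedged sketch of the idea, the code below runs Gillespie simulations of a toy birth-death process and lumps the states whose sampled occupancy falls below a cutoff; the cutoff and model are arbitrary, and the paper's algorithm adapts the grouping with error control that this toy omits.

```python
# Monte Carlo-informed aggregation for a birth-death chemical master equation.
import numpy as np

rng = np.random.default_rng(6)
k_prod, k_deg, t_end = 10.0, 0.5, 200.0

def gillespie():
    t, n, visits = 0.0, 0, np.zeros(100)
    while t < t_end:
        rates = np.array([k_prod, k_deg * n])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        n += 1 if rng.random() < rates[0] / total else -1
        visits[min(n, 99)] += 1
    return visits

occupancy = sum(gillespie() for _ in range(20))
occupancy /= occupancy.sum()
keep = occupancy > 1e-3                   # states resolved individually
print(f"resolved states: {keep.sum()}, aggregated: {(~keep).sum()}")
```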

  7. An adaptive response surface method for crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Yang, Ren-Jye; Zhu, Ping

    2013-11-01

    Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
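
    For intuition, the sketch below scores a few candidate polynomial response surfaces with the Bayesian information criterion; the BIC here is a simple stand-in for the paper's Bayesian metric, which likewise penalizes complexity under data uncertainty, and the data are synthetic.

```python
# Select among candidate response surfaces by BIC on noisy simulation outputs.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-1, 1, 30)                     # design points (e.g., crash runs)
y = 1.0 + 0.5 * x - 2.0 * x ** 2 + rng.normal(0, 0.2, x.size)  # noisy response

def bic(degree):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = degree + 1                             # number of fitted parameters
    n = x.size
    return n * np.log(rss / n) + k * np.log(n)

scores = {d: bic(d) for d in (1, 2, 3, 5, 8)}
best = min(scores, key=scores.get)
print(scores, "-> selected polynomial degree:", best)
```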

  8. Improvement in adaptive nonuniformity correction method with nonlinear model for infrared focal plane arrays

    NASA Astrophysics Data System (ADS)

    Rui, Lai; Yin-Tang, Yang; Qing, Li; Hui-Xin, Zhou

    2009-09-01

    The scene-adaptive nonuniformity correction (NUC) technique is commonly used to decrease the fixed pattern noise (FPN) in infrared focal plane arrays (IRFPA). However, the correction precision of existing scene-adaptive NUC methods is seriously degraded by the nonlinear response of IRFPA detectors. In this paper, an improved scene-adaptive NUC method that employs an "S"-curve model to approximate the detector response is presented. The performance of the proposed method is tested on real infrared video sequences, and the experimental results confirm that the method improves the correction precision considerably.

  9. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.

    PubMed

    Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  10. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  12. Investigating Item Exposure Control Methods in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Ozturk, Nagihan Boztunc; Dogan, Nuri

    2015-01-01

    This study aims to investigate the effects of item exposure control methods on measurement precision and on test security under various item selection methods and item pool characteristics. In this study, the Randomesque (with item group sizes of 5 and 10), Sympson-Hetter, and Fade-Away methods were used as item exposure control methods. Moreover,…

  13. Radiation Treatment Planning Using Positron Emission and Computed Tomography for Lung and Pharyngeal Cancers: A Multiple-Threshold Method for [{sup 18}F]Fluoro-2-Deoxyglucose Activity

    SciTech Connect

    Okubo, Mitsuru; Nishimura, Yasumasa; Nakamatsu, Kiyoshi; Okumura, Masahiko R.T.; Shibata, Toru; Kanamori, Shuichi; Hanaoka, Kouhei R.T.; Hosono, Makoto

    2010-06-01

    Purpose: Clinical applicability of a multiple-threshold method for [{sup 18}F]fluoro-2-deoxyglucose (FDG) activity in radiation treatment planning was evaluated. Methods and Materials: A total of 32 patients who underwent positron emission and computed tomography (PET/CT) simulation were included; 18 patients had lung cancer, and 14 patients had pharyngeal cancer. For tumors of <=2 cm, 2 to 5 cm, and >5 cm, thresholds were defined as 2.5 standardized uptake value (SUV), 35%, and 20% of the maximum FDG activity, respectively. The cervical and mediastinal lymph nodes with the shortest axial diameter of >=10 mm were considered to be metastatic on CT (LNCT). The retropharyngeal lymph nodes with the shortest axial diameter of >=5 mm on CT and MRI were also defined as metastatic. Lymph nodes showing maximum FDG activity greater than the adopted thresholds for radiation therapy planning were designated LNPET-RTP, and lymph nodes with a maximum FDG activity of >=2.5 SUV were regarded as malignant and were designated LNPET-2.5 SUV. Results: The sizes of gross tumor volumes on PET (GTVPET) with the adopted thresholds in the axial plane were visually well fitted to those of GTV on CT (GTVCT). However, the volumes of GTVPET were larger than those of GTVCT, with significant differences (p < 0.0001) for lung cancer, due to respiratory motion. For lung cancer, the numbers of LNCT, LNPET-RTP, and LNPET-2.5 SUV were 29, 28, and 34, respectively. For pharyngeal cancer, the numbers of LNCT, LNPET-RTP, and LNPET-2.5 SUV were 14, 9, and 15, respectively. Conclusions: Our multiple thresholds were applicable for delineating the primary target on PET/CT simulation. However, these thresholds were inaccurate for depicting malignant lymph nodes.
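
    The size-dependent rule reported above translates directly into a small helper for picking the contouring threshold; the function below merely restates the abstract's thresholds (names and units are ours).

    ```python
    def fdg_threshold(diameter_cm, suv_max):
        """Contouring threshold: 2.5 SUV for tumors <= 2 cm, 35% of the
        maximum FDG activity for 2-5 cm, and 20% for tumors > 5 cm."""
        if diameter_cm <= 2.0:
            return 2.5                  # absolute SUV cutoff, small tumors
        if diameter_cm <= 5.0:
            return 0.35 * suv_max       # relative cutoff, mid-size tumors
        return 0.20 * suv_max           # relative cutoff, large tumors

    # A voxel joins GTV-PET when its SUV exceeds the threshold.
    print(fdg_threshold(3.4, suv_max=12.0))   # -> 4.2
    ```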

  14. General adaptive guidance using nonlinear programming constraint solving methods (FAST)

    NASA Astrophysics Data System (ADS)

    Skalecki, Lisa; Martin, Marc

    An adaptive, general purpose, constraint solving guidance algorithm called FAST (Flight Algorithm to Solve Trajectories) has been developed by the authors in response to the requirements for the Advanced Launch System (ALS). The FAST algorithm can be used for all mission phases for a wide range of Space Transportation Vehicles without code modification because of the general formulation of the nonlinear programming (NLP) problem, and the general trajectory simulation used to predict constraint values. The approach allows on-board re-targeting for severe weather and changes in payload or mission parameters, increasing flight reliability and dependability while reducing the amount of pre-flight analysis that must be performed. The algorithm is described in general in this paper. Three-degree-of-freedom simulation results are presented for application of the algorithm to ascent and reentry phases of an ALS mission, and to Mars aerobraking. Flight processor CPU requirement data are also shown.
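
    At its core, such a guidance scheme poses trajectory targeting as an NLP solve: choose control parameters that satisfy terminal constraints while optimizing a cost. The toy below is not the FAST formulation; the point-mass dynamics, numbers, and use of SciPy's SLSQP solver are all our own assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    g, a = 9.81, 30.0                       # gravity and thrust acceleration, m/s^2

    def final_position(u):
        """Point mass from rest under constant thrust at angle theta for time T."""
        theta, T = u
        ax, ay = a * np.cos(theta), a * np.sin(theta) - g
        return np.array([0.5 * ax * T**2, 0.5 * ay * T**2])

    target = np.array([8000.0, 2000.0])     # desired downrange and altitude, m
    res = minimize(lambda u: u[1],          # cost: minimize burn time
                   x0=np.array([0.5, 40.0]),
                   constraints=[{"type": "eq",
                                 "fun": lambda u: final_position(u) - target}],
                   method="SLSQP")
    print(res.x, final_position(res.x))
    ```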

  15. Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The application was investigated of control theoretic ideas to the design of flight control systems for the F-8 aircraft. The design of an adaptive control system based upon the so-called multiple model adaptive control (MMAC) method is considered. Progress is reported.

  16. The older person has a stroke: Learning to adapt using the Feldenkrais® Method.

    PubMed

    Jackson-Wyatt, O

    1995-01-01

    The older person with a stroke requires adapted therapeutic interventions to take into account normal age-related changes. The Feldenkrais® Method presents a model for learning to promote adaptability that addresses key functional changes seen with normal aging. Clinical examples related to specific functional tasks are discussed to highlight major treatment modifications and neuromuscular, psychological, emotional, and sensory considerations. PMID:27619899

  17. A Comparative Study of Item Exposure Control Methods in Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Chang, Shun-Wen; Twu, Bor-Yaun

    This study investigated and compared the properties of five methods of item exposure control within the purview of estimating examinees' abilities in a computerized adaptive testing (CAT) context. Each of the exposure control algorithms was incorporated into the item selection procedure and the adaptive testing progressed based on the CAT design…

  18. Simple method for adaptive filtering of motion artifacts in E-textile wearable ECG sensors.

    PubMed

    Alkhidir, Tamador; Sluzek, Andrzej; Yapici, Murat Kaya

    2015-08-01

    In this paper, we have developed a simple method for adaptively filtering out the motion artifact from the electrocardiogram (ECG) obtained using conductive textile electrodes. The textile electrodes were placed on the left and the right wrist to measure ECG in the lead-1 configuration. The motion artifact was induced by simple hand movements. The reference signal for adaptive filtering was obtained by placing additional electrodes on one hand to capture its motion. The adaptive filtering was compared to the independent component analysis (ICA) algorithm. In most cases, the signal-to-noise ratio (SNR) of the adaptive filtering approach was higher than that of independent component analysis.
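
    The abstract does not name the adaptive algorithm; a common choice for this kind of reference-based artifact cancellation is the LMS filter, sketched here under that assumption (tap count and step size are illustrative).

    ```python
    import numpy as np

    def lms_cancel(primary, reference, n_taps=16, mu=0.01):
        """Predict the artifact in `primary` from `reference` (the motion
        signal) with an adaptive FIR filter and subtract it."""
        w = np.zeros(n_taps)
        cleaned = np.zeros(len(primary))
        for n in range(n_taps, len(primary)):
            x = reference[n - n_taps:n][::-1]   # most recent samples first
            e = primary[n] - w @ x              # error = cleaned ECG sample
            w += 2.0 * mu * e * x               # LMS weight update
            cleaned[n] = e
        return cleaned
    ```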

  19. Thresholds of cutaneous afferents related to perceptual threshold across the human foot sole.

    PubMed

    Strzalkowski, Nicholas D J; Mildren, Robyn L; Bent, Leah R

    2015-10-01

    Perceptual thresholds are known to vary across the foot sole, despite a reported even distribution in cutaneous afferents. Skin mechanical properties have been proposed to account for these differences; however, a direct relationship between foot sole afferent firing, perceptual threshold, and skin mechanical properties has not been previously investigated. Using the technique of microneurography, we recorded the monofilament firing thresholds of cutaneous afferents and associated perceptual thresholds across the foot sole. In addition, receptive field hardness measurements were taken to investigate the influence of skin hardness on these threshold measures. Afferents were identified as fast adapting [FAI (n = 48) or FAII (n = 13)] or slowly adapting [SAI (n = 21) or SAII (n = 20)], and were grouped based on receptive field location (heel, arch, metatarsals, toes). Overall, perceptual thresholds were found to most closely align with firing thresholds of FA afferents. In contrast, SAI and SAII afferent firing thresholds were found to be significantly higher than perceptual thresholds and are not thought to mediate monofilament perceptual threshold across the foot sole. Perceptual thresholds and FAI afferent firing thresholds were significantly lower in the arch compared with other regions, and skin hardness was found to positively correlate with both FAI and FAII afferent firing and perceptual thresholds. These data support a perceptual influence of skin hardness, which is likely the result of elevated FA afferent firing threshold at harder foot sole sites. The close coupling between FA afferent firing and perceptual threshold across foot sole indicates that small changes in FA afferent firing can influence perceptual thresholds.

  20. An adaptive mesh refinement algorithm for the discrete ordinates method

    SciTech Connect

    Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.

    1996-03-01

    The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits local grid refinement to minimize the spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme, and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing the convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.

  1. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  2. Analysis of modified SMI method for adaptive array weight control

    NASA Technical Reports Server (NTRS)

    Dilsavor, R. L.; Moses, R. L.

    1989-01-01

    An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
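
    In code, the modification amounts to one extra term on the diagonal of the sample covariance before inversion. The sketch below assumes an MVDR-style weight normalization and estimates the noise power from the smallest covariance eigenvalue; both are our assumptions rather than details given in the abstract.

    ```python
    import numpy as np

    def modified_smi_weights(snapshots, steer, F):
        """Weights from the sample covariance with a fraction F of the noise
        power subtracted from the diagonal, per the modified SMI algorithm.

        snapshots: (n_sensors, n_snapshots) complex array; steer: (n_sensors,).
        """
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
        sigma2 = np.min(np.linalg.eigvalsh(R))                   # noise-power estimate (assumption)
        Rm = R - F * sigma2 * np.eye(R.shape[0])
        w = np.linalg.solve(Rm, steer)
        return w / (steer.conj() @ w)                            # unity gain toward the signal
    ```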

  3. Comparison of the diagnostic accuracy, sensitivity and specificity of four odontological methods for age evaluation in Italian children at the age threshold of 14 years using ROC curves.

    PubMed

    Pinchi, Vilma; Pradella, Francesco; Vitale, Giulia; Rugo, Dario; Nieri, Michele; Norelli, Gian-Aristide

    2016-01-01

    The age threshold of 14 years is relevant in Italy as the minimum age for criminal responsibility. It is of utmost importance to evaluate the diagnostic accuracy of every odontological method for age evaluation considering the sensitivity, or the ability to estimate the true positive cases, and the specificity, or the ability to estimate the true negative cases. The research aims to compare the specificity and sensitivity of four commonly adopted methods of dental age estimation - Demirjian, Haavikko, Willems and Cameriere - in a sample of Italian children aged between 11 and 16 years, with an age threshold of 14 years, using receiver operating characteristic curves and the area under the curve (AUC). In addition, new decision criteria are developed to increase the accuracy of the methods. Among the four odontological methods for age estimation adopted in the research, the Cameriere method showed the highest AUC in both female and male cohorts. The Cameriere method shows a high degree of accuracy at the age threshold of 14 years. To adopt the Cameriere method to estimate the 14-year age threshold more accurately, however, it is suggested - according to the Youden index - that the decision criterion be set at the lower value of 12.928 for females and 13.258 years for males, obtaining a sensitivity of 85% and specificity of 88% in females, and a sensitivity of 77% and specificity of 92% in males. If a specificity level >90% is needed, the cut-off point should be set at 12.959 years (82% sensitivity) for females.
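
    The Youden-index selection used above is easy to reproduce: sweep candidate cutoffs, compute sensitivity and specificity against the known chronological ages, and keep the cutoff maximizing their sum minus one. A generic numpy sketch (not the authors' code) follows.

    ```python
    import numpy as np

    def youden_cutoff(estimated_ages, is_over_14):
        """Return the dental-age cutoff maximizing sensitivity + specificity - 1."""
        best_cut, best_j = None, -np.inf
        for cut in np.unique(estimated_ages):
            pred = estimated_ages >= cut              # classified as over 14
            sens = np.mean(pred[is_over_14])          # true-positive rate
            spec = np.mean(~pred[~is_over_14])        # true-negative rate
            if sens + spec - 1.0 > best_j:            # Youden index J
                best_cut, best_j = cut, sens + spec - 1.0
        return best_cut, best_j
    ```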

  4. An adaptation of Krylov subspace methods to path following

    SciTech Connect

    Walker, H.F.

    1996-12-31

    Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
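
    A minimal dense stand-in for the predictor-corrector iteration is sketched below on the unit circle. The paper's contribution is to replace the direct solve of the augmented system with a Krylov method (e.g., GMRES) and to impose the tangent-orthogonality condition differently, so treat this only as the baseline scheme being discussed.

    ```python
    import numpy as np

    def F(z):                      # track F(u, lam) = u^2 + lam^2 - 1 = 0
        u, lam = z
        return u**2 + lam**2 - 1.0

    def dF(z):
        return np.array([2.0 * z[0], 2.0 * z[1]])

    z, h = np.array([1.0, 0.0]), 0.1
    for _ in range(10):
        J = dF(z)
        t = np.array([-J[1], J[0]])
        t /= np.linalg.norm(t)                     # approximate tangent
        z = z + h * t                              # predictor step
        for _ in range(5):                         # corrector: dF*d = -F, t.d = 0
            A = np.vstack([dF(z), t])              # augmented 2x2 system
            z = z + np.linalg.solve(A, np.array([-F(z), 0.0]))
    print(z, "residual:", abs(F(z)))
    ```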

  5. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is a kind of high-spatial-resolution (2 m GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistic of each image using the Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistic is subsequently recorded as an important metadata item for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are implemented in sequence to determine the cloud statistic. For post-processing analysis, a box-counting fractal method is implemented: the cloud statistic determined in pre-processing is cross-examined qualitatively and quantitatively across spectral bands. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conducted a series of experiments on clustering-based and spatial thresholding methods, including the Otsu, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that the Otsu and GE methods both perform better than the others for Formosat-2 imagery. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracted the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
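
    Of the thresholding candidates compared, Otsu's method is the easiest to reproduce: it picks the gray level maximizing the between-class variance of the histogram. A self-contained numpy version, for illustration only (not the authors' implementation):

    ```python
    import numpy as np

    def otsu_threshold(gray):
        """Otsu's method: choose the level maximizing between-class variance."""
        hist, edges = np.histogram(gray, bins=256)
        p = hist.astype(float) / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                       # class-0 probability
        m = np.cumsum(p * centers)              # cumulative mean
        mG = m[-1]                              # global mean
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mG * w0 - m) ** 2 / (w0 * (1.0 - w0))
        return centers[np.nanargmax(sigma_b)]   # NaNs occur only at the ends
    ```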

  6. Speckle reduction in optical coherence tomography by adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun

    2015-12-01

    An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
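
    A loose sketch of the pipeline follows: estimate the noise scale from the image itself, then let it set the total-variation regularization strength. The sigma-to-weight mapping and the use of scikit-image's Chambolle solver are our assumptions, not the paper's algorithm.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle, estimate_sigma

    def adaptive_tv(oct_image):
        # Measured noise statistics drive the restoration strength; the
        # 2*sigma mapping below is an illustrative assumption.
        sigma = estimate_sigma(oct_image)
        return denoise_tv_chambolle(oct_image, weight=2.0 * sigma)

    img = np.clip(np.random.default_rng(0).normal(0.5, 0.1, (128, 128)), 0.0, 1.0)
    print(adaptive_tv(img).shape)   # (128, 128)
    ```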

  7. Adapting Western research methods to indigenous ways of knowing.

    PubMed

    Simonds, Vanessa W; Christopher, Suzanne

    2013-12-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.

  8. Adapting Western Research Methods to Indigenous Ways of Knowing

    PubMed Central

    Christopher, Suzanne

    2013-01-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid. PMID:23678897

  9. Automatic multirate methods for ordinary differential equations. [Adaptive time steps

    SciTech Connect

    Gear, C.W.

    1980-01-01

    A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.
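
    The idea is easy to state in code: advance the slow components with a large step while the fast components take several substeps. A forward-Euler toy follows; the partitioning, coupling, and step sizes are illustrative assumptions, not Gear's schemes.

    ```python
    import numpy as np

    def multirate_euler(f_slow, f_fast, y0, h, m, n_steps):
        """Forward Euler with step h for the slow component and m substeps
        of size h/m for the fast one (slow value frozen during substeps)."""
        ys, yf = y0
        out = [(ys, yf)]
        for _ in range(n_steps):
            ys_new = ys + h * f_slow(ys, yf)      # one cheap slow update
            for _ in range(m):                    # m fast updates
                yf += (h / m) * f_fast(ys, yf)
            ys = ys_new
            out.append((ys, yf))
        return out

    # Toy: slow decay weakly coupled to a fast relaxing mode.
    traj = multirate_euler(lambda s, f: -0.1 * s + 0.01 * f,
                           lambda s, f: -50.0 * (f - s),
                           (1.0, 0.0), h=0.1, m=20, n_steps=50)
    print(traj[-1])
    ```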

  10. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids an expensive computational cost in inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to that of a recently proposed method of Berry and Sauer. However, our method is more flexible, since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry-Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates than the Berry-Sauer method on the L-96 example.

  11. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

  12. Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.
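
    While the patent's details are not reproduced here, the central computation it names, a parameter dependent Riccati equation, can be sketched as re-solving the continuous-time algebraic Riccati equation as the parameter varies. The plant family and weights below are invented for illustration; only the SciPy call is standard.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def riccati_gain(rho):
        """Gain from a Riccati equation whose plant depends on parameter rho."""
        A = np.array([[0.0, 1.0], [-1.0 - rho, -0.5]])   # illustrative plant family
        B = np.array([[0.0], [1.0]])
        Q, R = np.eye(2), np.array([[1.0]])              # illustrative weights
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)               # K = R^{-1} B^T P

    for rho in (0.0, 0.5, 1.0):
        print(rho, riccati_gain(rho))
    ```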

  13. Ecological Scarcity Method: Adaptation and Implementation for Different Countries

    NASA Astrophysics Data System (ADS)

    Grinberg, Marina; Ackermann, Robert; Finkbeiner, Matthias

    2012-12-01

    The Ecological Scarcity Method is one of the methods for impact assessment in LCA. It makes it possible to express different environmental impacts in single-score units, eco-points. Such results are handy for decision-makers in policy or enterprises seeking to improve environmental management. So far, the method has mostly been used in its country of origin, Switzerland, where the eco-factors derive from national conditions. For other countries it is sometimes impossible to calculate all the eco-factors. The solution to this problem is to create a set of transformation rules. The rules should take into account regional differences, the level of societal development, the degree of scarcity, and other factors. The research focuses on the creation of transformation rules between Switzerland, Germany and the Russian Federation for the case of GHG emissions.

  14. A high-throughput multiplex method adapted for GMO detection.

    PubMed

    Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique

    2008-12-24

    A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences"): taxa endogenous reference genes; GMO construct targets, including screening, construct-specific, and event-specific targets; and donor organism sequences. This assay avoids certain shortcomings of the multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.

  15. An Adaptive Kalman Filter Using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.

  16. An Adaptive Kalman Filter using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
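
    A heavily simplified cousin of this residual tuning idea is covariance matching: the innovation variance should equal P + R, so a running average of the residuals can re-estimate R online, in parallel with the filter. The scalar sketch below is illustrative only and is not Jazwinski's equations; the smoothing factor is an assumption.

    ```python
    import numpy as np

    def adaptive_kf(zs, q=1e-4, r0=1.0, alpha=0.05):
        """Scalar random-walk Kalman filter that tunes R from its residuals."""
        x, p, r = 0.0, 1.0, r0
        for z in zs:
            p += q                              # predict: state is a random walk
            e = z - x                           # measurement residual (innovation)
            # Covariance matching: E[e^2] = p + r, so e^2 - p re-estimates r.
            r = max((1.0 - alpha) * r + alpha * (e * e - p), 1e-8)
            k = p / (p + r)                     # Kalman gain
            x, p = x + k * e, (1.0 - k) * p     # update
            yield x, r

    zs = 1.0 + 0.3 * np.random.default_rng(0).standard_normal(2000)
    print(list(adaptive_kf(zs))[-1])            # r should approach roughly 0.09
    ```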

  17. The Pilates method and cardiorespiratory adaptation to training.

    PubMed

    Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen

    2016-01-01

    Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis has been performed of its cardiorespiratory effects. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (135.4-124.2 beats/min), respiratory exchange ratio (1.1-0.9) and oxygen equivalent (30.7-27.6) values, among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities. PMID:27357919

  18. Comparison of the applicability of four odontological methods for age estimation of the 14 years legal threshold in a sample of Italian adolescents.

    PubMed

    Pinchi, Vilma; Norelli, Gian-Aristide; Pradella, Francesco; Vitale, Giulia; Rugo, Dario; Nieri, Michele

    2012-12-01

    The 14-year age threshold is especially important in Italy for criminal, civil and administrative law. Several methods relying on dental calcification of the teeth, up to the second molar, are used for the evaluation of age in childhood. The objective of the research was to compare the inter-rater agreement and accuracy of four common methods for dental age estimation - Demirjian (D), Willems (W), Cameriere (C) and Haavikko (H) - in a sample of Italian adolescents between 11 and 16 years. The sensitivity and specificity, and the different levels of probability, according to the peculiarities of Italian criminal and civil law, were compared for the methods examined, considering the threshold of 14 years. The sample was composed of 501 digital OPGs of Italian children (257 females and 244 males), aged from 11 years and 0 days to 15 years and 364 days. The maturation stage of the teeth was evaluated according to the D, W, H and C methods by three independent examiners. Mixed statistical models were applied to compare the accuracy and the errors of each method. The inter-rater agreement was high for the four methods and the intraclass correlation coefficients were all ≥ 0.81. The H and C methods showed a general tendency to underestimate the age in the considered sample, while the D and W methods tended to overestimate the child's age. In females, D and W were more accurate than C, which in turn was more accurate than H. In males, W was the most accurate method even though it overestimated age. Considering the 14-year threshold, the sensitivity of the D and W methods is quite high (range 0.80-0.95) and their specificity is low (range 0.61-0.86). The principal findings of the research are that the W and D methods are much more accurate than C and H, but tend to overestimate the age; the C method largely underestimates the age (by ~1 year) for both genders and for all operators; H is unsuitable for dental age estimation in the Italian population; while W and D yielded high sensitivity.

  19. The alerting system for hydrogeological hazard in Lombardy Region, northern Italy: rainfall thresholds triggering debris-flows and "equivalent rainfall" method

    NASA Astrophysics Data System (ADS)

    Cucchi, A.; Valsecchi, I. Q.; Alberti, M.; Fassi, P.; Molari, M.; Mannucci, G.

    2015-01-01

    The Functional Centre (CFMR) of the Civil Protection of the Lombardy Region, northern Italy, has the main task of monitoring and alerting, particularly with respect to natural hazards. The early-warning procedure for hydrogeological hazard is based on a comparison of two quantities, thresholds and rainfall, both referred to a defined area and an exact time interval. The CFMR studied 52 landslide events (1987-2003) in Medium-Low Valtellina and derived a model of the critical detachment rainfall as a function of the local slope and the Curve Number CN (an empirical parameter related to the land cover and the hydrological conditions of the soil): it is physically consistent and allows geographically targeted alerting. Moreover, rainfall thresholds were associated with a typical probability of exceedance. The processing of rainfall data is carried out through the "equivalent rainfall" method, which takes into account the antecedent moisture condition of the soil; the hazard is substantially greater when the soil is near saturation. The method was developed from the CN method and considers the local CN and the observed rainfall of the previous 5 days. The obtained value of the local equivalent rainfall, which combines rainfall (observed and forecast) and local soil characteristics, is a better parameter for the evaluation of the hydrogeological hazard. The comparison between equivalent rainfall and thresholds makes it possible to estimate the local hydrogeological hazard, displayed through hazard maps, and consequently to provide reliable alerting, even localized to limited portions of the region.
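
    The abstract does not give the exact "equivalent rainfall" formula, but its ingredients, the SCS curve-number relation plus a 5-day antecedent-rainfall adjustment, can be sketched as follows. The AMC class boundaries and CN conversion formulas below are standard textbook values used as assumptions, not the CFMR's operational definitions.

    ```python
    def scs_runoff_mm(p_mm, cn):
        """SCS curve-number runoff (mm) from event rainfall p_mm."""
        s = 25400.0 / cn - 254.0                 # potential retention (mm)
        ia = 0.2 * s                             # initial abstraction
        return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

    def effective_cn(cn2, antecedent_5day_mm):
        """Shift CN with 5-day antecedent rainfall (dormant-season AMC
        boundaries; treat constants as assumptions)."""
        if antecedent_5day_mm < 13.0:            # dry conditions -> CN(I)
            return cn2 / (2.281 - 0.01281 * cn2)
        if antecedent_5day_mm > 28.0:            # wet conditions -> CN(III)
            return cn2 / (0.427 + 0.00573 * cn2)
        return cn2                               # average conditions

    # Runoff-producing rain given the soil moisture state of the past 5 days.
    print(scs_runoff_mm(60.0, effective_cn(75.0, antecedent_5day_mm=35.0)))
    ```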

  20. A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures

    SciTech Connect

    Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George

    2012-01-01

    We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.

  1. The Limits to Adaptation; A Systems Approach

    EPA Science Inventory

    The Limits to Adaptation: A Systems Approach. The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering...

  2. Error estimation and adaptive order nodal method for solving multidimensional transport problems

    SciTech Connect

    Zamonsky, O.M.; Gho, C.J.; Azmy, Y.Y.

    1998-01-01

    The authors propose a modification of the Arbitrarily High Order Transport Nodal method whereby they solve each node and each direction using a different expansion order. With this feature and a previously proposed a posteriori error estimator, they develop an adaptive order scheme to automatically improve the accuracy of the solution of the transport equation. They implemented the modified nodal method, the error estimator and the adaptive order scheme in a discrete-ordinates code for solving monoenergetic, fixed-source, isotropic-scattering problems in two-dimensional Cartesian geometry. They solved two test problems with large homogeneous regions to exercise the adaptive order scheme. The results show that with the adaptive process the storage requirements are reduced while the accuracy of the results is preserved.

  3. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  4. Impedance adaptation methods of the piezoelectric energy harvesting

    NASA Astrophysics Data System (ADS)

    Kim, Hyeoungwoo

    In this study, the important issues of energy recovery were addressed and a comprehensive investigation was performed on harvesting electrical power from an ambient mechanical vibration source. Also discussed are the impedance matching methods used to increase the efficiency of energy transfer from the environment to the application. Initially, the mechanical impedance matching method was investigated as a way to increase the mechanical energy transferred to the transducer from the environment. This was done by reducing mechanical impedance factors such as the damping factor and the energy reflection ratio. The vibration source and the transducer were modeled by a two-degree-of-freedom dynamic system with mass, spring constant, and damper. The transmissibility, employed to show how much mechanical energy was transferred in this system, was affected by the damping ratio and the stiffness of the elastic materials. The mechanical impedance of the system was described by an electrical analogue in order to simplify the total mechanical impedance. Secondly, the transduction rate from mechanical energy to electrical energy was improved by using a PZT material which has a high figure of merit and a high electromechanical coupling factor for electrical power generation, and a piezoelectric transducer with a high transduction rate was designed and fabricated. A high-g material (g{sub 33} = 40 x 10{sup -3} Vm/N) was developed to improve the figure of merit of the PZT ceramics. The cymbal composite transducer has been found to be a promising structure for piezoelectric energy harvesting under high force at cyclic conditions (10-200 Hz), because it has an almost 40 times higher effective strain coefficient than PZT ceramics. The endcap of the cymbal also enhances the endurance of the ceramic to sustain ac load, along with stress amplification. In addition, a macro fiber composite (MFC) was employed as a strain component because of its flexibility and the high electromechanical coupling

  5. An adaptive multiscale finite element method for unsaturated flow problems in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    He, Xinguang; Ren, Li

    2009-07-01

    In this paper we present an adaptive multiscale finite element method for solving unsaturated water flow problems in heterogeneous porous media spanning many scales. The main purpose is to design a numerical method which is capable of adaptively capturing the large-scale behavior of the solution on a coarse-scale mesh without resolving all the small-scale details at each time step. This is accomplished by constructing multiscale base functions that are adapted to the time change of the unsaturated hydraulic conductivity field. The key idea of our method is to use a criterion based on the temporal variation of the hydraulic conductivity field to determine when and where to update the multiscale base functions. As a consequence, these base functions are able to dynamically account for the spatio-temporal variability in the equation coefficients. We describe the principle for constructing such a method in detail and give an algorithm for implementing it. Numerical experiments were carried out for the unsaturated water flow equation with randomly generated lognormal hydraulic parameters to demonstrate the efficiency and accuracy of the proposed method. The results show that throughout the adaptive simulation, only a very small fraction of the multiscale base functions needs to be recomputed, and the level of accuracy of the adaptive method is higher than that of the multiscale finite element technique in which the base functions are not updated with the time change of the hydraulic conductivity.

  6. On the use of adaptive moving grid methods in combustion problems

    SciTech Connect

    Hyman, J.M.; Larrouturou, B.

    1986-01-01

    The investigators have presented the reasons and advantages of adaptively moving the mesh points for the solution of time-dependent PDEs (partial differential equations) systems developing sharp gradients, and more specifically for combustion problems. Several available adaptive dynamic rezone methods have been briefly reviewed, and the effectiveness of these algorithms for combustion problems has been illustrated by the numerical solution of a simple flame propagation problem. 29 refs., 7 figs.

  7. A robust adaptive sampling method for faster acquisition of MR images.

    PubMed

    Vellagoundar, Jaganathan; Machireddy, Ramasubba Reddy

    2015-06-01

    A robust adaptive k-space sampling method is proposed for faster acquisition and reconstruction of MR images. In this method, undersampling patterns are generated based on the magnitude profile of fully acquired 2-D k-space data. Images are reconstructed using a compressive sampling reconstruction algorithm. Simulation experiments were done to assess the performance of the proposed method under various signal-to-noise ratio (SNR) levels. The performance of the method is better than that of non-adaptive variable-density sampling when the k-space SNR is greater than 10 dB. The method was implemented on fully acquired multi-slice raw k-space data and on quality-assurance phantom data. Data reduction of up to 60% was achieved on the multi-slice imaging data and 75% on the phantom imaging data. The results show that reconstruction accuracy is improved over non-adaptive or conventional variable-density sampling. The proposed sampling method is signal dependent, and the estimation of sampling locations is robust to noise. As a result, it eliminates the need for a mathematical model and parameter tuning to compute k-space sampling patterns, as required in non-adaptive sampling methods.
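
    The pattern-generation step described above can be sketched directly: sample k-space locations with probability proportional to the magnitude profile, always keeping a fully sampled center. The normalization and the kept-center size below are our assumptions, not the paper's parameters.

    ```python
    import numpy as np

    def adaptive_mask(kspace, keep_fraction=0.4, center=16, seed=0):
        """Boolean undersampling mask driven by the k-space magnitude profile."""
        rng = np.random.default_rng(seed)
        mag = np.abs(kspace)
        idx = rng.choice(kspace.size, size=int(keep_fraction * kspace.size),
                         replace=False, p=(mag / mag.sum()).ravel())
        mask = np.zeros(kspace.shape, dtype=bool)
        mask.ravel()[idx] = True
        cy, cx = np.array(kspace.shape) // 2       # always keep the center block
        mask[cy - center // 2:cy + center // 2, cx - center // 2:cx + center // 2] = True
        return mask

    img = np.random.default_rng(1).random((64, 64))
    k = np.fft.fftshift(np.fft.fft2(img))
    print(adaptive_mask(k).mean())    # ~0.4 plus the fully kept center
    ```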

  8. A self-organizing Lagrangian particle method for adaptive-resolution advection-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Reboux, Sylvain; Schrader, Birte; Sbalzarini, Ivo F.

    2012-05-01

    We present a novel adaptive-resolution particle method for continuous parabolic problems. In this method, particles self-organize in order to adapt to local resolution requirements. This is achieved by pseudo forces that are designed so as to guarantee that the solution is always well sampled and that no holes or clusters develop in the particle distribution. The particle sizes are locally adapted to the length scale of the solution. Differential operators are consistently evaluated on the evolving set of irregularly distributed particles of varying sizes using discretization-corrected operators. The method does not rely on any global transforms or mapping functions. After presenting the method and its error analysis, we demonstrate its capabilities and limitations on a set of two- and three-dimensional benchmark problems. These include advection-diffusion, the Burgers equation, the Buckley-Leverett five-spot problem, and curvature-driven level-set surface refinement.

  9. A self-adaptive-grid method with application to airfoil flow

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
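
    A one-dimensional caricature of the spring analogy is shown below: interval stiffnesses grow with the local solution gradient, and a Jacobi-style relaxation pulls points toward steep regions. The clipping bounds stand in for the paper's user-specified maximum and minimum spacing controls and are otherwise arbitrary.

    ```python
    import numpy as np

    def redistribute(x0, u0, n_iter=200, w_min=0.2, w_max=5.0):
        """Spring-analogy point redistribution on [x0[0], x0[-1]]."""
        x, u = x0.copy(), u0.copy()
        for _ in range(n_iter):
            # Per-interval stiffness from the local gradient, clipped to
            # enforce maximum and minimum effective spacings.
            k = np.clip(np.abs(np.diff(u)) / np.diff(x) + 1.0, w_min, w_max)
            x[1:-1] = (k[:-1] * x[:-2] + k[1:] * x[2:]) / (k[:-1] + k[1:])
            u = np.interp(x, x0, u0)              # resample solution at new points
        return x

    x0 = np.linspace(0.0, 1.0, 41)
    u0 = np.tanh(20.0 * (x0 - 0.5))               # steep internal layer
    print(redistribute(x0, u0))                   # points cluster near x = 0.5
    ```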

  10. Binarization and multi-thresholding of document images using connectivity

    SciTech Connect

O'Gorman, L.

    1994-12-31

    Thresholding is a common image processing operation applied to gray-scale images to obtain binary or multi-level images. A thresholding method is described here that is global in approach, but uses a measure of local information, namely connectivity. Thresholds are found at the intensity levels that best preserve the connectivity of regions within the image. Thus, this method has advantages of both global and locally adaptive approaches. Experimental comparisons for document images show that the connectivity-preserving method improves subsequent OCR recognition rates from about 95% to 97.5% and reduces the number of binarization failures (where text is so poorly binarized as to be totally unrecognizable by a commercial OCR system) from 33% to 6% on difficult images.
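
    One way to sketch the connectivity criterion: sweep candidate thresholds, count the connected foreground components at each, and keep a threshold inside the longest plateau where that count is stable. This simplification (and treating dark pixels as foreground) is our reading of the idea, not O'Gorman's exact procedure.

    ```python
    import numpy as np
    from scipy import ndimage

    def connectivity_threshold(gray, n_levels=64):
        """Threshold from the longest nontrivial plateau in component count."""
        levels = np.linspace(gray.min(), gray.max(), n_levels)[1:-1]
        counts = np.array([ndimage.label(gray < t)[1] for t in levels])
        best_len, best, start = 0, levels[len(levels) // 2], 0
        for i in range(1, len(counts) + 1):        # scan constant runs
            if i == len(counts) or counts[i] != counts[start]:
                if counts[start] > 1 and i - start > best_len:
                    best_len, best = i - start, levels[(start + i - 1) // 2]
                start = i
        return best

    rng = np.random.default_rng(0)
    page = np.full((80, 80), 200.0) + 5.0 * rng.standard_normal((80, 80))
    page[10:30, 10:30] = 60.0
    page[45:70, 40:70] = 60.0                      # two dark "glyphs"
    print(connectivity_threshold(page))            # lands between 60 and 200
    ```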

  11. Research on adaptive segmentation and activity classification method of filamentous fungi image in microbe fermentation

    NASA Astrophysics Data System (ADS)

    Cai, Xiaochun; Hu, Yihua; Wang, Peng; Sun, Dujuan; Hu, Guilan

    2009-10-01

    The paper presents an adaptive segmentation and activity classification method for filamentous fungi images. First, an adaptive structuring element (SE) construction algorithm is proposed for image background suppression. Based on the watershed transform, a color-labeled segmentation of the fungi image is obtained. Second, the feature space of the fungal elements is described and the feature set for hyphal activity classification is extracted. The growth rate of the fungal hyphae is evaluated using an SVM classifier. Experimental results demonstrate that the proposed method is effective for filamentous fungi image processing.

  12. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    SciTech Connect

    Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D

    2012-10-01

    Accurate predictive simulations of complex real-world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting, and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.

  13. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered-grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. By focusing computational resources where they are required through dynamic adaptation, this method facilitates the solution of problems currently at and beyond the limits of what traditional ALE methods can solve. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered-grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  14. Five Methods to Score the Teacher Observation of Classroom Adaptation Checklist and to Examine Group Differences

    ERIC Educational Resources Information Center

    Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy

    2015-01-01

    This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…

  15. Adaptation of the TCLP and SW-846 methods to radioactive mixed waste

    SciTech Connect

    Griest, W.H.; Schenley, R.L.; Caton, J.E.; Wolfe, P.F.

    1994-07-01

    Modifications of conventional sample preparation and analytical methods are necessary to provide radiation protection and to meet sensitivity requirements for regulated constituents when working with radioactive samples. Adaptations of regulatory methods for determining "total" Toxicity Characteristic Leaching Procedure (TCLP) volatile and semivolatile organics and pesticides, and for conducting aqueous leaching, are presented.

  16. Definition of temperature thresholds: the example of the French heat wave warning system.

    PubMed

    Pascal, Mathilde; Wagner, Vérène; Le Tertre, Alain; Laaidi, Karine; Honoré, Cyrille; Bénichou, Françoise; Beaudeau, Pascal

    2013-01-01

    Heat-related deaths should be somewhat preventable. In France, some prevention measures are activated when minimum and maximum temperatures averaged over three days reach city-specific thresholds. The current thresholds were computed from a descriptive analysis of past heat waves and from local expert judgement. We tested whether a different method would confirm these thresholds. The study was set in the six cities of Paris, Lyon, Marseille, Nantes, Strasbourg and Limoges between 1973 and 2003. For each city, we estimated the excess mortality associated with different temperature thresholds, using a generalised additive model controlling for long-term trends, seasons and days of the week. These models were used to compute the mortality predicted by different percentiles of temperature. The thresholds were chosen as the percentiles associated with a significant excess mortality. In all cities, there was a good correlation between the current thresholds and the thresholds derived from the models, with 0°C to 3°C differences for averaged maximum temperatures. Both sets of thresholds were able to anticipate the main periods of excess mortality during the summers of 1973 to 2003. A simple method relying on descriptive analysis and expert judgement is therefore sufficient to define protective temperature thresholds and to prevent heat wave mortality. As temperatures increase with climate change and adaptation proceeds, more research is required to understand if and when the thresholds should be modified.
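
    Stripped of the generalised additive modelling, the percentile screening can be caricatured as follows: scan temperature percentiles and return the first whose days show a clear excess of observed over expected mortality. The 20% excess criterion below is an arbitrary placeholder for the paper's significance test, and the data layout is our assumption.

    ```python
    import numpy as np

    def mortality_threshold(temp3d, deaths, expected, excess=1.20):
        """First high percentile of the 3-day mean temperature whose days
        carry >= 20% observed-over-expected mortality (placeholder rule)."""
        for q in range(80, 100):
            cut = np.percentile(temp3d, q)
            hot = temp3d >= cut
            if deaths[hot].mean() >= excess * expected[hot].mean():
                return q, cut
        return None
    ```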

  17. An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations

    NASA Astrophysics Data System (ADS)

    Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.

    2016-08-01

    In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.
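
    The flagging step of the criterion is simple to sketch on its own: mark the cells with the locally largest density gradients for refinement and the flattest ones for coarsening. The quantile fractions below are illustrative assumptions; none of the LDG machinery is reproduced.

    ```python
    import numpy as np

    def flag_cells(rho, x, refine_frac=0.1, coarsen_frac=0.4):
        """Return (refine, coarsen) masks from the local density gradient."""
        g = np.abs(np.gradient(rho, x))            # |d rho / dx| per cell
        hi = np.quantile(g, 1.0 - refine_frac)     # steepest cells -> refine
        lo = np.quantile(g, coarsen_frac)          # flattest cells -> coarsen
        return g >= hi, g <= lo

    x = np.linspace(0.0, 1.0, 200)
    rho = 0.5 + 0.45 * np.tanh(60.0 * (x - 0.5))   # thin liquid-vapor interface
    refine, coarsen = flag_cells(rho, x)
    print(refine.sum(), coarsen.sum())
    ```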

  18. Estimating the Importance of Private Adaptation to Climate Change in Agriculture: A Review of Empirical Methods

    NASA Astrophysics Data System (ADS)

    Moore, F.; Burke, M.

    2015-12-01

    A wide range of studies using a variety of methods strongly suggest that climate change will have a negative impact on agricultural production in many areas. Farmers, though, should be able to learn about a changing climate and to adjust what they grow and how they grow it in order to reduce these negative impacts. However, it remains unclear how effective these private (autonomous) adaptations will be, or how quickly they will be adopted. Constraining the uncertainty on this adaptation is important for understanding the impacts of climate change on agriculture. Here we review a number of empirical methods that have been proposed for understanding the rate and effectiveness of private adaptation to climate change. We compare these methods using data on agricultural yields in the United States and western Europe.

  19. Sparse regularization-based reconstruction for bioluminescence tomography using a multilevel adaptive finite element method.

    PubMed

    He, Xiaowei; Hou, Yanbin; Chen, Duofang; Jiang, Yuchuan; Shen, Man; Liu, Junting; Zhang, Qitan; Tian, Jie

    2011-01-01

    Bioluminescence tomography (BLT) is a promising tool for studying physiological and pathological processes at the cellular and molecular levels. In most clinical or preclinical practice, fine discretization is needed to recover sources with acceptable resolution when solving BLT with the finite element method (FEM). Nevertheless, uniformly fine meshes produce large datasets, and overly fine meshes may aggravate the ill-posedness of BLT. Additionally, accurate quantitative information on density and power has so far not been obtained simultaneously. In this paper, we present a novel multilevel sparse reconstruction method based on an adaptive FEM framework. In this method, the permissible source region gradually shrinks with adaptive local mesh refinement. By using sparse reconstruction with l1 regularization on multilevel adaptive meshes, simultaneous recovery of density and power as well as accurate source location can be achieved. Experimental results for a heterogeneous phantom and a mouse atlas model demonstrate the method's effectiveness and potential for quantitative BLT.
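
    The core computational ingredient, l1-regularized least squares on each mesh level, can be sketched with a generic solver such as ISTA (iterative shrinkage-thresholding); this is a stand-in, not the authors' algorithm. Here A is a hypothetical system matrix mapping the source distribution to boundary measurements b.

      # Python sketch: ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1
      import numpy as np

      def ista(A, b, lam, n_iter=500):
          x = np.zeros(A.shape[1])
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          for _ in range(n_iter):
              g = A.T @ (A @ x - b)              # gradient of the data-fit term
              z = x - g / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
          return x

    In a multilevel scheme, the support of x recovered on a coarse mesh would shrink the permissible source region before the next local refinement.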

  20. The adaptive problems of female teenage refugees and their behavioral adjustment methods for coping

    PubMed Central

    Mhaidat, Fatin

    2016-01-01

    This study aimed to identify the levels of adaptive problems among teenage female refugees in government schools and explored the behavioral methods used to cope with those problems. The sample was composed of 220 Syrian female students (seventh to first secondary grades) enrolled at government schools within the Zarqa Directorate who came to Jordan due to the war in their home country. The study used a scale of adaptive problems consisting of four dimensions (depression, anger and hostility, low self-esteem, and feeling insecure) and a questionnaire on behavioral adjustment methods for dealing with the problems of asylum. The results indicated that the Syrian teenage female refugees suffer a moderate degree of adaptation problems and that they used positive adjustment methods more than negative ones. PMID:27175098

  1. Asynchronous multilevel adaptive methods for solving partial differential equations on multiprocessors - Performance results

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.

  2. An adaptive altitude information fusion method for autonomous landing processes of small unmanned aerial rotorcraft.

    PubMed

    Lei, Xusheng; Li, Jingjing

    2012-01-01

    This paper presents an adaptive information fusion method to improve the accuracy and reliability of altitude measurement information for small unmanned aerial rotorcraft during the landing process. To address the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate high-frequency noise in the sensor output. Furthermore, to improve the altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate the measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is demonstrated through static tests, hovering flights and autonomous landing flight tests. PMID:23201993
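
    In the same spirit (a generic sketch, not the authors' filter), a scalar Kalman filter can re-estimate its measurement-noise variance online from a window of innovations; the MAP estimator in the paper plays this role for the full covariance matrix. All parameter values below are illustrative assumptions.

      # Python sketch: scalar random-walk KF with innovation-based adaptation of R.
      import numpy as np
      from collections import deque

      def adaptive_kf(zs, q=1e-4, r0=1.0, window=30):
          x, p, r = zs[0], 1.0, r0
          innov = deque(maxlen=window)
          estimates = []
          for z in zs:
              p += q                            # predict (random-walk altitude model)
              nu = z - x                        # innovation
              innov.append(nu)
              if len(innov) == window:          # innovation-based estimate of R
                  r = max(np.mean(np.square(innov)) - p, 1e-6)
              k = p / (p + r)                   # Kalman gain
              x += k * nu
              p *= 1 - k
              estimates.append(x)
          return np.array(estimates)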

  3. The block adaptive multigrid method applied to the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Pantelelis, Nikos

    1993-01-01

    In the present study, a very fast and robust scheme for solving complex nonlinear systems of equations is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on the prediction of the solution error. The proposed solution method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (an 18-fold acceleration of the solution) using one fourth of the volumes of a global grid, with the same solution accuracy, for two test cases.

  4. A comparison of locally adaptive multigrid methods: LDC, FAC and FIC

    NASA Technical Reports Server (NTRS)

    Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul

    1993-01-01

    This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.

  5. An Adaptive Altitude Information Fusion Method for Autonomous Landing Processes of Small Unmanned Aerial Rotorcraft

    PubMed Central

    Lei, Xusheng; Li, Jingjing

    2012-01-01

    This paper presents an adaptive information fusion method to improve the accuracy and reliability of altitude measurement information for small unmanned aerial rotorcraft during the landing process. To address the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate high-frequency noise in the sensor output. Furthermore, to improve the altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate the measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is demonstrated through static tests, hovering flights and autonomous landing flight tests. PMID:23201993

  6. Effects of light curing method and resin composite composition on composite adaptation to the cavity wall.

    PubMed

    Yoshikawa, Takako; Morigami, Makoto; Sadr, Alireza; Tagami, Junji

    2014-01-01

    This study aimed to evaluate the effects of the light curing method and resin composite composition on marginal sealing and resin composite adaptation to the cavity wall. Cylindrical cavities were prepared on the buccal or lingual cervical regions. The teeth were restored using the Clearfil Liner Bond 2V adhesive system and filled with Clearfil Photo Bright or Palfique Estelite resin composite. The resins were cured using the conventional or slow-start light curing method. After thermal cycling, the specimens were subjected to a dye penetration test. The slow-start curing method showed better resin composite adaptation to the cavity wall for both composites. Furthermore, the slow-start curing method resulted in significantly improved dentin marginal sealing compared with the conventional method for Clearfil Photo Bright. The light-cured resin composite that exhibited increased contrast ratios during polymerization appears to compensate well for polymerization contraction stress when the slow-start curing method is used.

  7. A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES

    SciTech Connect

    Druckmueller, M.

    2013-08-15

    A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.

  8. A GPU-accelerated adaptive discontinuous Galerkin method for level set equation

    NASA Astrophysics Data System (ADS)

    Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.

    2016-01-01

    This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. Small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass conservative numerical scheme that preserves the simplicity of LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.

  9. Automatic off-body overset adaptive Cartesian mesh method based on an octree approach

    SciTech Connect

    Peron, Stephanie; Benoit, Christophe

    2013-01-01

    This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming each octree leaf node defines a structured Cartesian block. This makes it possible to take into account the large discrepancies in resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first generates Adaptive Mesh Refinement (AMR) type grid systems, and the second generates abutting or minimally overlapping Cartesian grid sets. We also introduce an algorithm to control the number of points at each adaptation, which automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing tip vortex computation assesses the capability of the method to capture the flow features accurately.
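
    The octree-to-block idea reduces to a small recursive structure (a 2D quadtree sketch under stated assumptions, not the paper's 3D implementation): each leaf of the tree stands for one structured Cartesian block, and leaves split wherever a user-supplied indicator requests more resolution.

      # Python sketch: quadtree whose leaves define structured Cartesian blocks.
      from dataclasses import dataclass, field

      @dataclass
      class Node:
          x: float; y: float; size: float; depth: int
          children: list = field(default_factory=list)

          def refine(self, needs_refinement, max_depth=6):
              if self.depth < max_depth and needs_refinement(self):
                  h = self.size / 2
                  self.children = [Node(self.x + i*h, self.y + j*h, h, self.depth + 1)
                                   for i in (0, 1) for j in (0, 1)]
                  for c in self.children:
                      c.refine(needs_refinement, max_depth)

          def leaves(self):
              if not self.children:
                  yield self                    # one Cartesian block per leaf
              for c in self.children:
                  yield from c.leaves()

    For example, root = Node(0.0, 0.0, 1.0, 0) followed by root.refine(lambda n: n.x < 0.25) refines only near the left boundary; the leaves would then be converted to an AMR grid system or to an abutting grid set.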

  10. A method for image quality evaluation considering adaptation to luminance of surround and noise in stimuli

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin

    2010-09-01

    This study intends to quantify the effects of the surround luminance and of the noise in a given stimulus on the shape of the spatial luminance contrast sensitivity function (CSF), and to propose an adaptive image quality evaluation method. The proposed evaluation method extends a model called the square-root integral (SQRI). The non-linear behaviour of the human visual system is taken into account through the CSF. The model can be defined as the square-root integral of the product of the display modulation transfer function and the CSF. The CSF term in the original SQRI is replaced by the surround-adaptive CSF quantified in this study, divided by the Fourier transform of the given stimulus to compensate for noise adaptation.
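
    For orientation (a hedged aside; the paper's surround- and noise-adaptive variant modifies the CSF term), one common statement of the square-root integral is

      J = \frac{1}{\ln 2} \int_{0}^{u_{\max}} \sqrt{M(u)\,\mathrm{CSF}(u)}\,\frac{\mathrm{d}u}{u}

    where M(u) is the display modulation transfer function and u is spatial frequency. The adaptive model described above replaces CSF(u) with the surround-adaptive CSF divided by the stimulus spectrum.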

  11. A density-based adaptive quantum mechanical/molecular mechanical method.

    PubMed

    Waller, Mark P; Kumbhar, Sadhana; Yang, Jack

    2014-10-20

    We present a density-based adaptive quantum mechanical/molecular mechanical (DBA-QM/MM) method, whereby molecules can switch layers from the QM to the MM region and vice versa. The adaptive partitioning of the molecular system ensures that the layer assignment can change during the optimization procedure, that is, on the fly. A molecule switches from QM to MM if it has no noncovalent interactions with any atom of the QM core region. The presence or absence of noncovalent interactions is determined by analysis of the reduced density gradient. The location of the QM/MM boundary is therefore based on physical arguments, which neatly removes some of the empiricism inherent in previous adaptive QM/MM partitioning schemes. The DBA-QM/MM method is validated using a water-in-water setup and an explicitly solvated L-alanyl-L-alanine dipeptide. PMID:24954803
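
    For reference (this is the standard reduced density gradient from noncovalent-interaction analysis; the paper's exact switching criterion may differ in detail):

      s(\mathbf{r}) = \frac{|\nabla \rho(\mathbf{r})|}{2\,(3\pi^{2})^{1/3}\,\rho(\mathbf{r})^{4/3}}

    Regions of low s at low electron density signal noncovalent interactions; a molecule sharing no such region with the QM core would be reassigned to the MM layer.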

  12. A density-based adaptive quantum mechanical/molecular mechanical method.

    PubMed

    Waller, Mark P; Kumbhar, Sadhana; Yang, Jack

    2014-10-20

    We present a density-based adaptive quantum mechanical/molecular mechanical (DBA-QM/MM) method, whereby molecules can switch layers from the QM to the MM region and vice versa. The adaptive partitioning of the molecular system ensures that the layer assignment can change during the optimization procedure, that is, on the fly. A molecule switches from QM to MM if it has no noncovalent interactions with any atom of the QM core region. The presence or absence of noncovalent interactions is determined by analysis of the reduced density gradient. The location of the QM/MM boundary is therefore based on physical arguments, which neatly removes some of the empiricism inherent in previous adaptive QM/MM partitioning schemes. The DBA-QM/MM method is validated using a water-in-water setup and an explicitly solvated L-alanyl-L-alanine dipeptide.

  13. An adaptive mesh finite volume method for the Euler equations of gas dynamics

    NASA Astrophysics Data System (ADS)

    Mungkasi, Sudi

    2016-06-01

    The Euler equations have been used to model gas dynamics for decades. They consist of mathematical equations for the conservation of mass, momentum, and energy of the gas. At large times, the solution may contain discontinuities even when the initial condition is smooth. A standard finite volume numerical method is not able to give accurate solutions to the Euler equations around discontinuities. Therefore we solve the Euler equations using an adaptive mesh finite volume method. In this paper, we present a new construction of the adaptive mesh finite volume method with an efficient computation of the refinement indicator. The adaptive method acts automatically in regions where the solution is inaccurate: there the error is reduced by refining the mesh locally up to a certain level. On the other hand, if the solution is already accurate, the mesh is coarsened up to another certain level to minimize computational effort. We implement the numerical entropy production as the mesh refinement indicator. As a test problem, we take the Sod shock tube problem. Numerical results show that the adaptive method is more promising than the standard one in solving the Euler equations of gas dynamics.
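
    The indicator itself is easy to illustrate on a scalar law (a runnable 1D Burgers sketch under stated assumptions, not the paper's Euler code): advance one first-order finite volume step and measure the residual of a discrete entropy inequality, which spikes at shocks.

      # Python sketch: numerical entropy production for 1D Burgers, Rusanov fluxes.
      import numpy as np

      def step_and_indicator(u, dx, dt):
          f = 0.5 * u**2                                    # Burgers flux
          a = np.maximum(np.abs(u[:-1]), np.abs(u[1:]))     # interface wave speed
          fh = 0.5*(f[:-1] + f[1:]) - 0.5*a*(u[1:] - u[:-1])
          un = u.copy()
          un[1:-1] -= dt/dx * (fh[1:] - fh[:-1])            # finite volume update
          eta, q = 0.5*u**2, u**3/3.0                       # entropy pair
          qh = 0.5*(q[:-1] + q[1:]) - 0.5*a*(eta[1:] - eta[:-1])
          prod = np.zeros_like(u)
          prod[1:-1] = (0.5*un[1:-1]**2 - eta[1:-1])/dt + (qh[1:] - qh[:-1])/dx
          return un, prod                                   # refine where |prod| is large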

  14. Performance of the Adaptive Collision Source (ACS) Method for Discrete Ordinates in Parallel Environments

    NASA Astrophysics Data System (ADS)

    Walters, William J.; Haghighat, Alireza

    2014-06-01

    A new collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained separately, with potentially a different quadrature order. This allows for an optimal use of processing power, by using a high order quadrature for the first few iterations that need it, before shifting to lower order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and we call it the adaptive collision source (ACS) method. The ACS methodology has been implemented in the TITAN discrete ordinates code and has shown a speedup of 2-3 on a test problem, with very little loss of accuracy (within a provided adaptive tolerance). Furthermore, the code has been extended to work in parallel environments by angular decomposition. Although the method requires increased parallel communication, tests have shown excellent scalability, with parallel fractions of up to 99%.

  15. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
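
    The Gauss-Newton flavor of the update can be sketched generically (a textbook damped Gauss-Newton loop, not the patented algorithm): the step direction comes from the linearized normal equations, and a simple adaptive learning rate shrinks when the cost fails to decrease and grows again after an accepted step.

      # Python sketch: damped Gauss-Newton with an adaptive step size.
      import numpy as np

      def gauss_newton(residual, jacobian, theta, n_iter=50, lr=1.0):
          cost = 0.5 * np.sum(residual(theta) ** 2)
          for _ in range(n_iter):
              r, J = residual(theta), jacobian(theta)
              step = np.linalg.solve(J.T @ J + 1e-9*np.eye(len(theta)), J.T @ r)
              while lr > 1e-8:                       # backtracking: adapt the rate
                  trial = theta - lr * step
                  c = 0.5 * np.sum(residual(trial) ** 2)
                  if c < cost:
                      theta, cost, lr = trial, c, min(lr * 2.0, 1.0)
                      break
                  lr *= 0.5
          return theta

    Here residual and jacobian are caller-supplied callables for the filter's error vector and its Jacobian with respect to the weights.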

  16. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  17. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations

    SciTech Connect

    Anderson, R W; Elliott, N S; Pember, R B

    2003-02-14

    A new method that combines staggered grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the methods are driven by the need to reconcile traditional AMR techniques with the staggered variables and moving, deforming meshes associated with Lagrange based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions first in the case of purely Lagrangian hydrodynamics, and then extend these ideas into an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test problem calculations are presented which demonstrate the utility and efficiency of the method.

  18. Applications of automatic mesh generation and adaptive methods in computational medicine

    SciTech Connect

    Schmidt, J.A.; Macleod, R.S.; Johnson, C.R.; Eason, J.C.

    1995-12-31

    Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state of the art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications we present a general purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.

  19. Development and evaluation of a method of calibrating medical displays based on fixed adaptation

    SciTech Connect

    Sund, Patrik; Månsson, Lars Gunnar; Båth, Magnus

    2015-04-15

    Purpose: The purpose of this work was to develop and evaluate a new method for calibration of medical displays that includes the effect of fixed adaptation, using equipment and luminance levels typical of a modern radiology department. Methods: Low-contrast sinusoidal test patterns were derived at nine luminance levels from 2 to 600 cd/m² and used in a two-alternative forced-choice observer study, where the adaptation level was fixed at the logarithmic average of 35 cd/m². The contrast sensitivity at each luminance level was derived by establishing a linear relationship between the ten pattern contrast levels used at every luminance level and a detectability index (d′) calculated from the fraction of correct responses. A Gaussian function was fitted to the data and normalized to the adaptation level. The corresponding equation was used in a display calibration method that included the grayscale standard display function (GSDF) but compensated for fixed adaptation. In the evaluation study, the contrast of circular objects with a fixed pixel contrast was displayed using both calibration methods and was rated on a five-grade scale. Results were calculated using a visual grading characteristics method. Error estimations in both observer studies were derived using a bootstrap method. Results: The contrast sensitivities for the darkest and brightest patterns compared to the contrast sensitivity at the adaptation luminance were 37% and 56%, respectively. The obtained Gaussian fit corresponded well with similar studies. The evaluation study showed a higher degree of equally distributed contrast throughout the luminance range with the calibration method compensated for fixed adaptation than for the GSDF. The two lowest scores for the GSDF were obtained for the darkest and brightest patterns. These scores were significantly lower than the lowest score obtained for the compensated GSDF. For the GSDF, the scores for all luminance levels were statistically

  20. Adaptive non-local means method for speckle reduction in ultrasound images

    NASA Astrophysics Data System (ADS)

    Ai, Ling; Ding, Mingyue; Zhang, Xuming

    2016-03-01

    Noise removal is a crucial step in enhancing the quality of ultrasound images. However, some existing despeckling methods cannot ensure satisfactory restoration performance. In this paper, an adaptive non-local means (ANLM) filter is proposed for speckle noise reduction in ultrasound images. The distinctive property of the proposed method is that the decay parameter does not take a fixed value for the whole image but adapts to the variation of local features in the ultrasound image. In the proposed method, a pre-filtered image is first obtained using the traditional NLM method. Based on the pre-filtered result, the local gradient is computed and used to determine the decay parameter adaptively for each image pixel. The final restored image is produced by the ANLM method using the obtained decay parameters. Simulations on a synthetic image show that the proposed method delivers sufficient speckle reduction while preserving image details very well, and that it outperforms state-of-the-art despeckling filters in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Experiments on a clinical ultrasound image further demonstrate the practicality and advantage of the proposed method over the compared filtering methods.
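
    The per-pixel decay parameter can be sketched directly (an assumption-laden sketch, not the authors' code; a Gaussian filter stands in for the NLM pre-filter): the decay h shrinks where the local gradient of the pre-filtered image is large, so edges are smoothed less.

      # Python sketch: gradient-adaptive decay parameter for NLM weights
      # w_ij = exp(-||P_i - P_j||^2 / h(i)^2), with P_k denoting image patches.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def adaptive_decay(img, h0=10.0, alpha=0.5):
          pre = gaussian_filter(img, sigma=1.0)   # stand-in for the NLM pre-filter
          gy, gx = np.gradient(pre)
          gnorm = np.hypot(gx, gy)
          gnorm /= gnorm.max() + 1e-12
          return h0 / (1.0 + alpha * gnorm)       # smaller h at strong edges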

  1. Key techniques and applications of adaptive growth method for stiffener layout design of plates and shells

    NASA Astrophysics Data System (ADS)

    Ding, Xiaohong; Ji, Xuerong; Ma, Man; Hou, Jianyun

    2013-11-01

    The application of the adaptive growth method is limited because several key techniques during the design process need manual intervention by designers. Key techniques of the method, including ground structure construction and seed selection, are studied here, so as to improve the effectiveness and applicability of the adaptive growth method in stiffener layout design optimization of plates and shells. Three schemes of ground structures, composed of different shell elements and beam elements, are proposed. It is found that the main stiffener layouts resulting from the different ground structures are almost the same, but the ground structure composed of 8-node shell elements together with 3-node and 2-node beam elements results in the clearest stiffener layout, with good adaptability and low computational cost. An automatic seed selection approach is proposed, based on the selection rules that seeds should be positioned where the structural strain energy is large for the minimum compliance problem and should satisfy a dispersion requirement. The adaptive growth method with the suggested key techniques is integrated into an ANSYS-based program, which provides a design tool for the stiffener layout design optimization of plates and shells. Typical design examples, including plate and shell structures designed for minimum compliance and maximum buckling stability, are illustrated. In addition, as a practical mechanical structural design example, the stiffener layout of an inlet structure for a large-scale electrostatic precipitator is also demonstrated. The design results show that the adaptive growth method integrated with the suggested key techniques can effectively and flexibly deal with stiffener layout design problems for plates and shells with complex geometrical shapes and loading conditions to achieve various design objectives; it thus provides a new solution method for engineering structural topology design optimization.

  2. Method for reducing the drag of blunt-based vehicles by adaptively increasing forebody roughness

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A. (Inventor); Saltzman, Edwin J. (Inventor); Moes, Timothy R. (Inventor); Iliff, Kenneth W. (Inventor)

    2005-01-01

    A method for reducing drag on a blunt-based vehicle by adaptively increasing forebody roughness to increase drag at the roughened area of the forebody, which results in a decrease in drag at the base of the vehicle and in total vehicle drag.

  3. [Correction of autonomic reaction parameters in the cosmonaut's organism with the adaptive biocontrol method]

    NASA Technical Reports Server (NTRS)

    Kornilova, L. N.; Cowings, P. S.; Toscano, W. B.; Arlashchenko, N. I.; Korneev, D. Iu; Ponomarenko, A. V.; Salagovich, S. V.; Sarantseva, A. V.; Kozlovskaia, I. B.

    2000-01-01

    Presented are results of testing the method of adaptive biocontrol during preflight training of cosmonauts. A high level of controllability of autonomic reactions was characteristic of the MIR-23 and MIR-25 Flight Commanders and the MIR-23 Flight Engineer, while the MIR-25 Flight Engineer displayed a weak, intricate dependence of these reactions on the depth of relaxation or strain.

  4. Item Pocket Method to Allow Response Review and Change in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2013-01-01

    Most computerized adaptive testing (CAT) programs do not allow test takers to review and change their responses because it could seriously deteriorate the efficiency of measurement and make tests vulnerable to manipulative test-taking strategies. Several modified testing methods have been developed that provide restricted review options while…

  5. Expert Concept Mapping Method for Defining the Characteristics of Adaptive ELearning: ALFANET Project Case

    ERIC Educational Resources Information Center

    Stoyanov, Slavi; Kirschner, Paul

    2004-01-01

    The article presents empirical evidence for the effectiveness and efficiency of a modified version of Trochim's (1989a, b) concept mapping approach to define the characteristics of an adaptive learning environment. The effectiveness and the efficiency of the method are attributed to the support that it provides in terms of elicitation, sharing,…

  6. Methods of Adapting Digital Content for the Learning Process via Mobile Devices

    ERIC Educational Resources Information Center

    Lopez, J. L. Gimenez; Royo, T. Magal; Laborda, Jesus Garcia; Calvo, F. Garde

    2009-01-01

    This article analyses different methods of adapting digital content for delivery via mobile devices, taking into account two aspects that are a fundamental part of the learning process: on the one hand, the functionality of the contents, and on the other, the actual controlled navigation requirements that the learner needs in order to acquire high…

  7. Adaptive projection method applied to three-dimensional ultrasonic focusing and steering through the ribs.

    PubMed

    Cochard, E; Aubry, J F; Tanter, M; Prada, C

    2011-08-01

    An adaptive projection method for ultrasonic focusing through the rib cage, with minimal energy deposition on the ribs, was evaluated experimentally in 3D geometry. Adaptive projection is based on decomposition of the time-reversal operator (the DORT method) and projection on the "noise" subspace. It is shown that 3D implementation of this method is straightforward and no more time-consuming than 2D. Comparisons are made between adaptive projection, spherical focusing, and a previously proposed time-reversal focusing method by measuring pressure fields in the focal plane and the rib region for all three methods. The ratio of the specific absorption rate at the focus to that at the ribs was found to be increased by a factor of up to eight versus spherical emission. Beam steering out of the geometric focus was also investigated. For all configurations, projecting steered emissions was found to deposit less energy on the ribs than steering time-reversed emissions; thus the non-invasive method presented here is more efficient than state-of-the-art invasive techniques. In fact, this method could be used for real-time treatment, because a single acquisition of back-scattered echoes from the ribs is enough to treat a large volume around the focus, thanks to real-time projection of the steered beams.
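
    The projection step has a compact linear-algebra core (a hedged sketch with an assumed transfer matrix, not the authors' implementation): the leading singular vectors of the array-to-ribs transfer operator span the "signal" subspace associated with the strong scatterers, and any emission is projected onto its orthogonal complement before transmission.

      # Python sketch: project an emission vector onto the DORT noise subspace.
      import numpy as np

      def noise_subspace_projection(K, e, n_sig):
          # K: (receivers x transmitters) transfer matrix; e: desired emission.
          _, _, vh = np.linalg.svd(K)
          v_sig = vh[:n_sig].conj().T              # signal subspace (the ribs)
          return e - v_sig @ (v_sig.conj().T @ e)  # drop components hitting the ribs

    Steered beams are projected the same way, which is why one acquisition of rib echoes suffices for a whole treatment volume.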

  8. Statistical mechanics analysis of thresholding 1-bit compressed sensing

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Kabashima, Yoshiyuki

    2016-08-01

    The one-bit compressed sensing framework aims to reconstruct a sparse signal by using only the sign information of its linear measurements. To compensate for the loss of scale information, past studies in the area have proposed recovering the signal by imposing an additional constraint on the l2-norm of the signal. Recently, an alternative strategy that captures scale information by introducing a threshold parameter to the quantization process was advanced. In this paper, we analyze the typical behavior of thresholding 1-bit compressed sensing utilizing the replica method of statistical mechanics, so as to gain insight into how to set the threshold value properly. Our result shows that fixing the threshold at a constant value yields better performance than varying it randomly when the constant is optimally tuned, statistically. Unfortunately, the optimal threshold value depends on the statistical properties of the target signal, which may not be known in advance. In order to handle this inconvenience, we develop a heuristic that adaptively tunes the threshold parameter based on the frequency of positive (or negative) values in the binary outputs. Numerical experiments show that the heuristic exhibits satisfactory performance while incurring low computational cost.
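
    The heuristic can be sketched in a few lines (the target fraction, step size, and the measure callable are assumptions introduced for illustration): the threshold is nudged until roughly the desired fraction of the one-bit outputs is positive.

      # Python sketch: tune the quantizer threshold from the sign statistics.
      import numpy as np

      def tune_threshold(measure, n_rounds=50, target=0.5, step=0.1, theta0=0.0):
          theta = theta0
          for _ in range(n_rounds):
              y = measure(theta)                   # +/-1 bits: sign(<a_i, x> - theta)
              frac_pos = np.mean(y > 0)
              theta += step * (frac_pos - target)  # raise theta if too many positives
          return theta

    Here measure(theta) is a caller-supplied function returning the binary outputs of a fresh batch of measurements taken with threshold theta.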

  9. Non-orthogonal spin-adaptation of coupled cluster methods: A new implementation of methods including quadruple excitations

    SciTech Connect

    Matthews, Devin A.; Stanton, John F.

    2015-02-14

    The theory of non-orthogonal spin-adaptation for closed-shell molecular systems is applied to coupled cluster methods with quadruple excitations (CCSDTQ). Calculations at this level of detail are of critical importance in describing the properties of molecular systems to an accuracy which can meet or exceed modern experimental techniques. Such calculations are of significant (and growing) importance in such fields as thermodynamics, kinetics, and atomic and molecular spectroscopies. With respect to the implementation of CCSDTQ and related methods, we show that there are significant advantages to non-orthogonal spin-adaptation with respect to simplification and factorization of the working equations and to creating an efficient implementation. The resulting algorithm is implemented in the CFOUR program suite for CCSDT, CCSDTQ, and various approximate methods (CCSD(T), CC3, CCSDT-n, and CCSDT(Q)).

  10. Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics

    NASA Technical Reports Server (NTRS)

    Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy

    2006-01-01

    This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the crustal dynamics of Earth's surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating/running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's surface is spurred by proposed future NASA missions, such as InSAR, for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.

  11. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  12. Fuzzy physical programming for Space Manoeuvre Vehicles trajectory optimization based on hp-adaptive pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chai, Runqi; Savvaris, Al; Tsourdos, Antonios

    2016-06-01

    In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicle (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions were calculated for each single-objective scenario. To obtain a compromise solution for each target, the fuzzy physical programming (FPP) model is proposed. The preference function is established considering the fuzzy factors of the system, such that a proper compromise trajectory can be acquired. In addition, NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible for dealing with multi-objective skip trajectory optimization for the SMV.

  13. An Adaptive Instability Suppression Controls Method for Aircraft Gas Turbine Engine Combustors

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; DeLaat, John C.; Chang, Clarence T.

    2008-01-01

    An adaptive controls method for instability suppression in gas turbine engine combustors has been developed and successfully tested with a realistic aircraft engine combustor rig. This testing was part of a program that demonstrated, for the first time, successful active combustor instability control in an aircraft gas turbine engine-like environment. The controls method is called Adaptive Sliding Phasor Averaged Control. Testing of the control method has been conducted in an experimental rig with different configurations designed to simulate combustors with instabilities of about 530 and 315 Hz. Results demonstrate the effectiveness of this method in suppressing combustor instabilities. In addition, a dramatic improvement in suppression of the instability was achieved by focusing control on the second harmonic of the instability. This is believed to be due to a phenomenon discovered and reported earlier, so-called intra-harmonic coupling. These results may have implications for future research in combustor instability control.

  14. Adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients

    PubMed Central

    Xia, Kelin; Zhan, Meng; Wan, Decheng; Wei, Guo-Wei

    2011-01-01

    Mesh deformation methods are a versatile strategy for solving partial differential equations (PDEs) with a vast variety of practical applications. However, these methods break down for elliptic PDEs with discontinuous coefficients, namely, elliptic interface problems. For this class of problems, additional interface jump conditions are required to maintain the well-posedness of the governing equation. Consequently, in order to achieve high accuracy and high order convergence, additional numerical algorithms are required to enforce the interface jump conditions in solving elliptic interface problems. The present work introduces an interface technique based adaptively deformed mesh strategy for resolving elliptic interface problems. We take advantage of the high accuracy, flexibility and robustness of the matched interface and boundary (MIB) method to construct an adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients. The proposed method generates deformed meshes in the physical domain and solves the transformed governing equations in the computational domain, which maintains regular Cartesian meshes. The mesh deformation is realized by a mesh transformation PDE, which controls the mesh redistribution by a source term. The source term consists of a monitor function, which builds in mesh contraction rules. Both interface geometry based deformed meshes and solution gradient based deformed meshes are constructed to reduce the L∞ and L2 errors in solving elliptic interface problems. The proposed adaptively deformed mesh based interface method is extensively validated by many numerical experiments. Numerical results indicate that the adaptively deformed mesh based interface method outperforms the original MIB method for dealing with elliptic interface problems. PMID:22586356

  15. Refinement trajectory and determination of eigenstates by a wavelet based adaptive method

    SciTech Connect

    Pipek, Janos; Nagy, Szilvia

    2006-11-07

    The detail structure of the wave function is analyzed at various refinement levels using the methods of wavelet analysis. The eigenvalue problem of a model system is solved in granular Hilbert spaces, and the trajectory of the eigenstates is traced in terms of the resolution. An adaptive method is developed for identifying the fine structure localization regions, where further refinement of the wave function is necessary.

  16. A two-dimensional adaptive spectral element method for the direct simulation of incompressible flow

    NASA Astrophysics Data System (ADS)

    Hsu, Li-Chieh

    The spectral element method is a high order discretization scheme for the solution of nonlinear partial differential equations. The method draws its strengths from the finite element method for geometrical flexibility and spectral methods for high accuracy. Although the method is, in theory, very powerful for complex phenomena such as transitional flows, its practical implementation is limited by the arbitrary choice of domain discretization. For instance, it is hard to estimate the appropriate number of elements for a specific case. Selection of regions to be refined or coarsened is difficult especially as the flow becomes more complex and memory limits of the computer are stressed. We present an adaptive spectral element method in which the grid is automatically refined or coarsened in order to capture underresolved regions of the domain and to follow regions requiring high resolution as they develop in time. The objective is to provide the best and most efficient solution to a time-dependent nonlinear problem by continually optimizing resource allocation. The adaptivity is based on an error estimator which determines which regions need more resolution. The solution strategy is as follows: compute an initial solution with a suitable initial mesh, estimate errors in the solution locally in each element, modify the mesh according to the error estimators, interpolate old mesh solutions onto the new elements, and resume the numerical solution process. A two-dimensional adaptive spectral element method for the direct simulation of incompressible flows has been developed. The adaptive algorithm effectively diagnoses and refines regions of the flow where complexity of the solution requires increased resolution. The method has been demonstrated on two-dimensional examples in heat conduction, Stokes and Navier-Stokes flows.
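
    The solve/estimate/adapt cycle can be illustrated in miniature (a runnable 1D reduction under stated assumptions, not the spectral element solver itself): elements whose local polynomial fit misses a target function by more than a tolerance are split, and the cycle repeats.

      # Python sketch: error-driven h-refinement of a 1D mesh.
      import numpy as np

      def adapt_mesh(f, a=0.0, b=1.0, tol=1e-3, max_cycles=10):
          edges = np.linspace(a, b, 5)
          for _ in range(max_cycles):
              errs = []
              for xl, xr in zip(edges[:-1], edges[1:]):
                  xs = np.linspace(xl, xr, 9)
                  c = np.polyfit(xs, f(xs), 3)     # local low-order "solution"
                  errs.append(np.max(np.abs(f(xs) - np.polyval(c, xs))))
              errs = np.array(errs)
              if errs.max() < tol:
                  break
              mids = 0.5 * (edges[:-1] + edges[1:])[errs > tol]
              edges = np.sort(np.concatenate([edges, mids]))  # split flagged elements
          return edges

    For instance, adapt_mesh(lambda x: np.tanh(50*(x - 0.5))) concentrates elements near the steep layer at x = 0.5; in the spectral element setting, a proper error estimator and interpolation of the old solution onto the new mesh replace the polyfit step.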

  17. Adaptation strategies for high order discontinuous Galerkin methods based on Tau-estimation

    NASA Astrophysics Data System (ADS)

    Kompenhans, Moritz; Rubio, Gonzalo; Ferrer, Esteban; Valero, Eusebio

    2016-02-01

    In this paper three p-adaptation strategies based on the minimization of the truncation error are presented for high order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed and termed a posteriori, quasi-a priori and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order, but while the former needs time converged solutions, the last two rely on non-converged solutions, which lead to faster computations. In addition, the high order method permits the spatial decoupling for the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.

  18. A wavelet-optimized, very high order adaptive grid and order numerical method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, differentiating this polynomial, and finally evaluating the result at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts both the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid, and this grid is refined locally based on wavelet analysis.

  19. The Effect of Bilateral Superior Laryngeal Nerve Lesion on Swallowing – A Novel Method to Quantitate Aspirated Volume and Pharyngeal Threshold in Videofluoroscopy

    PubMed Central

    DING, Peng; FUNG, George Shiu-Kai; LIN, Ming De; HOLMAN, Shaina D.; GERMAN, Rebecca Z.

    2015-01-01

    Purpose: To determine the effect of bilateral superior laryngeal nerve (SLN) lesion on swallowing threshold volume and the occurrence of aspiration, using a novel measurement technique for videofluoroscopic swallowing studies (VFSS). Methods and Materials: We used a novel radiographic phantom to assess the volume of barium-containing milk from fluoroscopy. The custom-made phantom was first calibrated by comparing its image intensity with known cylinder depths. Second, pouches of milk of known volume in a pig cadaver were compared to volumes calculated with the phantom. Using these standards, we calculated the volume of milk in the valleculae, esophagus and larynx for 205 feeding sequences from four infant pigs feeding before and after bilateral SLN lesion. Swallow safety was assessed using the IMPAS scale. Results: The log-linear correlation between image intensity values from the phantom filled with barium milk and the known phantom cylinder depths was strong (R² > 0.95), as were the calculated volumes of the barium milk pouches. The threshold volume of bolus in the valleculae during feeding was significantly larger after bilateral SLN lesion than in control swallows (p < 0.001). The IMPAS score increased in the lesioned swallows relative to the controls (p < 0.001). Conclusion: Bilateral SLN lesion dramatically increased the incidence of aspiration and the threshold volume of bolus in the valleculae. The use of this phantom permits quantification of the aspirated volume of fluid. The custom-made phantom and calibration allow for more accurate 3D volume estimation from 2D x-ray in VFSS. PMID:25270532

  20. An h-adaptive finite element method for turbulent heat transfer

    SciTech Connect

    Carrington, David B

    2009-01-01

    A two-equation turbulence closure model (k-ω) using an h-adaptive grid technique and the finite element method (FEM) has been developed to simulate low Mach number flow and heat transfer. These flows are applicable to many flows in engineering and environmental sciences. Of particular interest in the engineering modeling areas are combustion, solidification, and heat exchanger design. Flows for indoor air quality modeling and atmospheric pollution transport are typical types of environmental flows modeled with this method. The numerical method is based on a hybrid finite element model using an equal-order projection process. The model includes thermal and species transport, localized mesh refinement (h-adaptive) and Petrov-Galerkin weighting for stabilizing the advection. This work develops the continuum model of a two-equation turbulence closure method. The fractional step solution method is stated along with the h-adaptive grid method (Carrington and Pepper, 2002). Solutions are presented for 2D flow over a backward-facing step.

  1. A Digitalized Gyroscope System Based on a Modified Adaptive Control Method.

    PubMed

    Xia, Dunzhu; Hu, Yiwei; Ni, Peizhen

    2016-03-04

    In this work we investigate the possibility of applying an adaptive control algorithm to Micro-Electro-Mechanical System (MEMS) gyroscopes. By comparing the gyroscope working conditions with a reference model, the adaptive control method can provide online estimation of the key parameters and a proper control strategy for the system. The digital second-order oscillators in the reference model are replaced by two phase-locked loops (PLLs) to achieve steadier amplitude and frequency control. The adaptive law is modified to accommodate unequal coupling stiffness and coupling damping coefficients. The rotation mode of the gyroscope system is considered in our work, and a rotation elimination section is added to the digitalized system. Before implementing the algorithm on the hardware platform, different simulations are conducted to ensure that the algorithm can meet the requirements of the angular rate sensor, and some of the key adaptive law coefficients are optimized. The coupling components are detected and suppressed, respectively, and the Lyapunov criterion is applied to prove the stability of the system. The modified adaptive control algorithm is verified in a digitalized gyroscope system; the control system is realized in the digital domain with a Field Programmable Gate Array (FPGA). Key structural parameters are measured and compared with the estimation results, validating that the algorithm is feasible in this setup. Extra gyroscopes are used in repeated experiments to prove the generality of the algorithm.
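
    As a generic illustration of reference-model adaptation (a textbook MIT-rule example, not the paper's modified law), the loop below adapts a feedforward gain so a first-order plant tracks a reference model; all gains and time constants are arbitrary assumptions.

      # Python sketch: MIT-rule model-reference adaptation of a feedforward gain.
      def mras_demo(k_plant=2.0, k_model=1.0, gamma=0.5, dt=1e-3, t_end=50.0):
          y = ym = theta = 0.0
          for n in range(int(t_end / dt)):
              uc = 1.0 if (n * dt) % 20 < 10 else -1.0  # square-wave command
              y += dt * (-y + k_plant * theta * uc)     # plant: dy/dt = -y + k*u
              ym += dt * (-ym + k_model * uc)           # model: dym/dt = -ym + k0*uc
              theta += dt * (-gamma * (y - ym) * ym)    # MIT rule on the gain
          return theta                                  # tends toward k_model/k_plant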

  2. Scale-adaptive tensor algebra for local many-body methods of electronic structure theory

    SciTech Connect

    Liakh, Dmitry I

    2014-01-01

    While the formalism of multiresolution analysis (MRA), based on wavelets and adaptive integral representations of operators, is actively progressing in electronic structure theory (mostly on the independent-particle level and, recently, second-order perturbation theory), the concepts of multiresolution and adaptivity can also be utilized within the traditional formulation of correlated (many-particle) theory which is based on second quantization and the corresponding (generally nonorthogonal) tensor algebra. In this paper, we present a formalism called scale-adaptive tensor algebra (SATA) which exploits an adaptive representation of tensors of many-body operators via the local adjustment of the basis set quality. Given a series of locally supported fragment bases of a progressively lower quality, we formulate the explicit rules for tensor algebra operations dealing with adaptively resolved tensor operands. The formalism suggested is expected to enhance the applicability and reliability of local correlated many-body methods of electronic structure theory, especially those directly based on atomic orbitals (or any other localized basis functions).

  3. A Digitalized Gyroscope System Based on a Modified Adaptive Control Method.

    PubMed

    Xia, Dunzhu; Hu, Yiwei; Ni, Peizhen

    2016-01-01

    In this work we investigate the possibility of applying an adaptive control algorithm to Micro-Electro-Mechanical System (MEMS) gyroscopes. By comparing the gyroscope working conditions with a reference model, the adaptive control method can provide online estimation of the key parameters and a proper control strategy for the system. The digital second-order oscillators in the reference model are replaced by two phase-locked loops (PLLs) to achieve steadier amplitude and frequency control. The adaptive law is modified to accommodate unequal coupling stiffness and coupling damping coefficients. The rotation mode of the gyroscope system is considered in our work, and a rotation elimination section is added to the digitalized system. Before implementing the algorithm on the hardware platform, different simulations are conducted to ensure that the algorithm can meet the requirements of the angular rate sensor, and some of the key adaptive law coefficients are optimized. The coupling components are detected and suppressed, respectively, and the Lyapunov criterion is applied to prove the stability of the system. The modified adaptive control algorithm is verified in a digitalized gyroscope system; the control system is realized in the digital domain with a Field Programmable Gate Array (FPGA). Key structural parameters are measured and compared with the estimation results, validating that the algorithm is feasible in this setup. Extra gyroscopes are used in repeated experiments to prove the generality of the algorithm. PMID:26959019

  4. Investigation of the effects of color on judgments of sweetness using a taste adaptation method.

    PubMed

    Hidaka, Souta; Shimoda, Kazumasa

    2014-01-01

    It has been reported that color can affect the judgment of taste. For example, a dark red color enhances the subjective intensity of sweetness. However, the underlying mechanisms of the effect of color on taste have not been fully investigated; in particular, it remains unclear whether the effect is based on cognitive/decisional or perceptual processes. Here, we investigated the effect of color on sweetness judgments using a taste adaptation method. A sweet solution whose color was subjectively congruent with sweetness was judged as sweeter than an uncolored sweet solution both before and after adaptation to an uncolored sweet solution. In contrast, subjective judgment of sweetness for uncolored sweet solutions did not differ between the conditions following adaptation to a colored sweet solution and following adaptation to an uncolored one. Color affected sweetness judgment when the target solution was colored, but the colored sweet solution did not modulate the magnitude of taste adaptation. Therefore, it is concluded that the effect of color on the judgment of taste would occur mainly in cognitive/decisional domains.

  5. Phylogeny-based comparative methods question the adaptive nature of sporophytic specializations in mosses.

    PubMed

    Huttunen, Sanna; Olsson, Sanna; Buchbender, Volker; Enroth, Johannes; Hedenäs, Lars; Quandt, Dietmar

    2012-01-01

    Adaptive evolution has often been proposed to explain correlations between habitats and certain phenotypes. In mosses, a high frequency of species with specialized sporophytic traits in exposed or epiphytic habitats was, already 100 years ago, suggested as due to adaptation. We tested this hypothesis by contrasting phylogenetic and morphological data from two moss families, Neckeraceae and Lembophyllaceae, both of which show parallel shifts to a specialized morphology and to exposed epiphytic or epilithic habitats. Phylogeny-based tests for correlated evolution revealed that evolution of four sporophytic traits is correlated with a habitat shift. For three of them, evolutionary rates of dual character-state changes suggest that habitat shifts appear prior to changes in morphology. This suggests that they could have evolved as adaptations to new habitats. Regarding the fourth correlated trait the specialized morphology had already evolved before the habitat shift. In addition, several other specialized "epiphytic" traits show no correlation with a habitat shift. Besides adaptive diversification, other processes thus also affect the match between phenotype and environment. Several potential factors such as complex genetic and developmental pathways yielding the same phenotypes, differences in strength of selection, or constraints in phenotypic evolution may lead to an inability of phylogeny-based comparative methods to detect potential adaptations.

  6. A Digitalized Gyroscope System Based on a Modified Adaptive Control Method

    PubMed Central

    Xia, Dunzhu; Hu, Yiwei; Ni, Peizhen

    2016-01-01

    In this work we investigate the possibility of applying the adaptive control algorithm to Micro-Electro-Mechanical System (MEMS) gyroscopes. By comparing the gyroscope working conditions with the reference model, the adaptive control method can provide online estimation of the key parameters and a proper control strategy for the system. The digital second-order oscillators in the reference model are replaced by two phase-locked loops (PLLs) to achieve steadier amplitude and frequency control. The adaptive law is modified to satisfy the condition of unequal coupling stiffness and coupling damping coefficients. The rotation mode of the gyroscope system is considered in our work, and a rotation elimination section is added to the digitalized system. Before implementing the algorithm on the hardware platform, different simulations are conducted to ensure that the algorithm can meet the requirements of the angular rate sensor, and some of the key adaptive law coefficients are optimized. The coupling components are detected and suppressed separately, and the Lyapunov criterion is applied to prove the stability of the system. The modified adaptive control algorithm is verified on a digitalized gyroscope system whose control loop is realized in the digital domain on a Field Programmable Gate Array (FPGA). Key structure parameters are measured and compared with the estimation results, validating that the algorithm is feasible in this setup. Extra gyroscopes are used in repeated experiments to demonstrate the generality of the algorithm. PMID:26959019
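
    The record above describes comparing the plant against a reference model and adapting parameters online. A minimal Python sketch of that general idea, using the classic MIT-rule model-reference adaptive scheme on a first-order toy plant rather than the paper's modified law for MEMS gyroscopes (the plant, the gains a, k_true, k_m, and the adaptation rate gamma are all illustrative assumptions):

      import numpy as np

      # Model-reference adaptation via the MIT rule on a toy first-order
      # plant y' = -a*y + k*theta*u_c, tracking y_m' = -a*y_m + k_m*u_c.
      # All constants are illustrative assumptions.
      a, k_true, k_m = 2.0, 3.0, 2.0     # plant pole, unknown gain, model gain
      gamma, dt, T = 5.0, 1e-3, 20.0     # adaptation rate, time step, horizon

      y = y_m = 0.0
      theta = 0.0                        # adjustable feedforward gain
      for step in range(int(T / dt)):
          u_c = np.sin(2 * np.pi * 0.5 * step * dt)   # command signal
          y += dt * (-a * y + k_true * theta * u_c)   # plant
          y_m += dt * (-a * y_m + k_m * u_c)          # reference model
          e = y - y_m                                 # model-following error
          theta -= dt * gamma * e * y_m               # MIT-rule gradient step

      print(f"adapted gain {theta:.3f}, ideal k_m/k = {k_m / k_true:.3f}")

    As the error decays, theta approaches k_m/k and the controlled plant reproduces the reference model; the paper's law additionally estimates coupling stiffness and damping terms, which this toy omits.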

  7. A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov Maxwell system

    NASA Astrophysics Data System (ADS)

    Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain; Sonnendrücker, Eric; Bertrand, Pierre

    2008-08-01

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. The multiscale expansion of the distribution function therefore makes it possible to obtain a sparse representation of the data and thus to save memory and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase-space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase in the total number of points of the phase-space grid as they get finer as time goes on. The adaptive method could be more useful in cases where these thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to

  8. An adaptive Newton-method based on a dynamical systems approach

    NASA Astrophysics Data System (ADS)

    Amrein, Mario; Wihler, Thomas P.

    2014-09-01

    The traditional Newton method for solving nonlinear operator equations in Banach spaces is discussed within the context of the continuous Newton method. This setting makes it possible to interpret the Newton method as a discrete dynamical system and thereby to cast it in the framework of an adaptive step size control procedure. In so doing, our goal is to reduce the chaotic behavior of the original method without losing its quadratic convergence property close to the roots. The performance of the modified scheme is illustrated with various examples from algebraic and differential equations.
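
    A minimal sketch of the damped-Newton viewpoint described above, treating Newton's method as forward-Euler steps along the flow x' = -J(x)^{-1} F(x) with a simple accept/shrink step-size rule; the specific control heuristic here is an assumption, not the authors' adaptivity criterion:

      import numpy as np

      # Damped Newton as an adaptively stepped discretization of the
      # continuous Newton flow x' = -J(x)^{-1} F(x). Shrink the step on
      # residual growth, enlarge it on success (heuristic control).
      def newton_adaptive(F, J, x0, tol=1e-10, h=1.0, max_iter=200):
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              r = F(x)
              if np.linalg.norm(r) < tol:
                  break
              dx = np.linalg.solve(J(x), -r)      # full Newton direction
              while h > 1e-8:
                  x_trial = x + h * dx            # forward-Euler step of the flow
                  if np.linalg.norm(F(x_trial)) < np.linalg.norm(r):
                      x, h = x_trial, min(1.0, 2.0 * h)
                      break
                  h *= 0.5                        # reject and damp
          return x

      # arctan(x) = 0: full Newton steps diverge for |x0| > ~1.39; the
      # damped flow recovers convergence and turns quadratic near the root.
      F = lambda x: np.array([np.arctan(x[0])])
      J = lambda x: np.array([[1.0 / (1.0 + x[0] ** 2)]])
      print(newton_adaptive(F, J, [5.0]))         # close to [0.]

    Because the step size is allowed to grow back to one near the root, the quadratic convergence of the undamped method is retained there.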

  9. Adaptive mesh refinement techniques for the immersed interface method applied to flow problems.

    PubMed

    Li, Zhilin; Song, Peng

    2013-06-01

    In this paper, we develop an adaptive mesh refinement strategy of the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515-527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method.

  10. Adaptive mesh refinement techniques for the immersed interface method applied to flow problems

    PubMed Central

    Li, Zhilin; Song, Peng

    2013-01-01

    In this paper, we develop an adaptive mesh refinement strategy of the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515–527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method. PMID:23794763
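
    The refinement criterion shared by the two records above reduces, in sketch form, to flagging Cartesian cells inside the narrow band |φ| ≤ δ around the zero level set; a minimal illustration (the circle test case and band width are assumptions, and the IIM discretization, multigrid solve, and time stepping are omitted):

      import numpy as np

      # Flag cells within |phi| <= delta of the interface for refinement.
      def refinement_flags(phi, delta):
          return np.abs(phi) <= delta

      # Example: circular interface phi = sqrt(x^2 + y^2) - 0.5 on [-1, 1]^2.
      n = 64
      x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
      phi = np.hypot(x, y) - 0.5
      flags = refinement_flags(phi, delta=2.0 * (2.0 / n))   # ~2-cell band
      print(f"{flags.sum()} of {n * n} cells flagged for refinement")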

  11. Theory of Adaptive Acquisition Method for Image Reconstruction from Projections and Application to EPR Imaging

    NASA Astrophysics Data System (ADS)

    Placidi, G.; Alecci, M.; Sotgiu, A.

    1995-07-01

    An adaptive method for selecting the projections to be used for image reconstruction is presented. The method starts with the acquisition of four projections at angles of 0°, 45°, 90°, 135° and selects the new angles by computing a function of the previous projections. This makes it possible to adapt the selection of projections to the arbitrary shape of the sample, thus measuring a more informative set of projections. When the sample is smooth or has internal symmetries, this technique allows a reduction in the number of projections required to reconstruct the image without loss of information. The method has been tested on simulated data at different values of signal-to-noise ratio (S/N) and on experimental data recorded by an EPR imaging apparatus.
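
    The record above outlines the acquisition loop: acquire four seed projections, then repeatedly pick the next angle from a function of those already measured. The sketch below is one plausible reading of that loop, bisecting the angular gap whose neighboring projections differ most; this criterion and the acquire stand-in for the spectrometer are assumptions, not the authors' actual function:

      import numpy as np

      def next_angle(angles, projections):
          """Bisect the gap between the most dissimilar adjacent projections."""
          order = np.argsort(angles)
          best_gap, best_mid = -1.0, None
          for i, j in zip(order[:-1], order[1:]):
              diff = np.linalg.norm(projections[i] - projections[j])
              if diff > best_gap:
                  best_gap, best_mid = diff, 0.5 * (angles[i] + angles[j])
          return best_mid

      def acquire(angle, n_bins=64):
          # Hypothetical stand-in for the imaging apparatus: a noisy profile.
          rng = np.random.default_rng(int(angle * 100))
          x = np.linspace(-2, 2, n_bins)
          return np.exp(-x ** 2) + 0.01 * rng.standard_normal(n_bins)

      angles = [0.0, 45.0, 90.0, 135.0]           # the four seed projections
      projections = [acquire(a) for a in angles]
      for _ in range(4):                          # adaptively add four more
          a = next_angle(angles, projections)
          angles.append(a)
          projections.append(acquire(a))
      print(sorted(round(a, 1) for a in angles))

    Smooth or symmetric samples produce small inter-projection differences, so fewer added angles are needed, which is the source of the reported reduction in projections.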

  12. CARA Risk Assessment Thresholds

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.

    2016-01-01

    Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed.
    Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted.
    Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers.
    Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).
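
    A minimal sketch of how such a Pc threshold ladder might be applied in code; the numeric levels below are placeholders only, not the operational CARA values, which the abstract does not give:

      # Placeholder Pc levels: NOT the operational CARA values.
      RED_PC = 1e-4          # hypothetical warning/remediation level
      YELLOW_PC = 1e-7       # hypothetical analysis level

      def assess(pc):
          if pc >= RED_PC:
              return "red: issue warning; consider and usually execute remediation"
          if pc >= YELLOW_PC:
              return "yellow: analyze event; seek additional information"
          return "green: no action indicated"

      print(assess(3e-5))    # -> red
      print(assess(2e-10))   # -> green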

  13. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    NASA Astrophysics Data System (ADS)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, particularly considering the development of patient-personalized radiotherapy treatments for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were typically validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted in the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
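
    As a rough illustration of the two ingredients named above, k-means background estimation followed by an automatic threshold, the following sketch segments a synthetic volume. The threshold rule, background plus a fixed fraction frac of the lesion-to-background excess, is a common adaptive form assumed here for illustration; the paper calibrates its own rule on the NEMA IQ phantom:

      import numpy as np

      def kmeans_1d(values, k=2, iters=50):
          # Lloyd's algorithm on intensities; assumes both clusters stay populated.
          centers = np.quantile(values, np.linspace(0.1, 0.9, k))
          for _ in range(iters):
              labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
              centers = np.array([values[labels == j].mean() for j in range(k)])
          return centers

      def adaptive_mtv(volume, frac=0.4):
          background = kmeans_1d(volume.ravel()).min()   # low-uptake cluster mean
          thr = background + frac * (volume.max() - background)
          return volume >= thr, thr

      # Synthetic "lesion": a bright blob over a noisy background.
      rng = np.random.default_rng(0)
      vol = rng.normal(1.0, 0.1, (32, 32, 32))
      zz, yy, xx = np.mgrid[:32, :32, :32]
      vol[(xx - 16) ** 2 + (yy - 16) ** 2 + (zz - 16) ** 2 < 25] += 4.0
      mask, thr = adaptive_mtv(vol)
      print(f"threshold {thr:.2f}, MTV = {mask.sum()} voxels")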

  14. The direct simulation Monte Carlo method using unstructured adaptive mesh and its application

    NASA Astrophysics Data System (ADS)

    Wu, J.-S.; Tseng, K.-C.; Kuo, C.-H.

    2002-02-01

    The implementation of an adaptive mesh-embedding (h-refinement) scheme using unstructured grids in the two-dimensional direct simulation Monte Carlo (DSMC) method is reported. In this technique, local isotropic refinement is used to introduce new mesh where the local cell Knudsen number is less than some preset value. This simple scheme, however, has several severe consequences affecting the performance of the DSMC method. Thus, we have applied a technique to remove the hanging nodes by introducing anisotropic refinement in the interfacial cells between refined and non-refined cells. Not only does this remedy add a negligible amount of work, but it also removes all the difficulties present in the original scheme. We have tested the proposed scheme for argon gas in a high-speed driven cavity flow. The results show an improved flow resolution as compared with that of the unadapted mesh. Finally, we have used triangular adaptive mesh to compute a near-continuum gas flow, a hypersonic flow over a cylinder. The results show fairly good agreement with previous studies. In summary, the proposed simple mesh adaptation is very useful in computing rarefied gas flows, which involve both complicated geometry and highly non-uniform density variations throughout the flow field.

  15. Adaptive Tracker Design with Identifier for Pendulum System by Conditional LMI Method and IROA

    NASA Astrophysics Data System (ADS)

    Hwang, Jiing-Dong; Tsai, Zhi-Ren

    This paper proposes a robust adaptive fuzzy PID control scheme augmented with a supervisory controller for unknown systems. In this scheme, a generalized fuzzy model is used to describe a class of unknown systems. The control strategy allows each part of the control law, i.e., a supervisory controller, a compensator, and an adaptive fuzzy PID controller, to be designed incrementally according to different guidelines. The supervisory controller in the outer loop aims at enhancing system robustness in the face of extra disturbances, variation in system parameters, and parameter drift in the adaptation law. Furthermore, an H∞ control design method using the fuzzy Lyapunov function is presented for the design of the initial control gains that guarantees transient performance at the start of closed-loop control, which is generally overlooked in many adaptive control systems. The initial control gains are designed by a compound search strategy, the conditional linear matrix inequality (CLMI) approach with an improved random optimal algorithm (IROA), which leads to less complex designs than a standard LMI method based on the fuzzy Lyapunov function. Numerical studies of the tracking control of an uncertain inverted pendulum system demonstrate the effectiveness of the control strategy. The simulation results show that the generalized fuzzy model indeed reduces the number of rules of the T-S fuzzy model.

  16. A simple and inexpensive method for determining cold sensitivity and adaptation in mice.

    PubMed

    Brenner, Daniel S; Golden, Judith P; Vogt, Sherri K; Gereau, Robert W

    2015-01-01

    Cold hypersensitivity is a serious clinical problem, affecting a broad subset of patients and causing significant decreases in quality of life. The cold plantar assay allows the objective and inexpensive assessment of cold sensitivity in mice, and can quantify both analgesia and hypersensitivity. Mice are acclimated on a glass plate, and a compressed dry ice pellet is held against the glass surface underneath the hindpaw. The latency to withdrawal from the cooling glass is used as a measure of cold sensitivity. Cold sensation is also important for survival in regions with seasonal temperature shifts, and in order to maintain sensitivity animals must be able to adjust their thermal response thresholds to match the ambient temperature. The Cold Plantar Assay (CPA) also allows the study of adaptation to changes in ambient temperature by testing the cold sensitivity of mice at temperatures ranging from 30 °C to 5 °C. Mice are acclimated as described above, but the glass plate is cooled to the desired starting temperature using aluminum boxes (or aluminum foil packets) filled with hot water, wet ice, or dry ice. The temperature of the plate is measured at the center using a filament T-type thermocouple probe. Once the plate has reached the desired starting temperature, the animals are tested as described above. This assay allows testing of mice at temperatures ranging from innocuous to noxious. The CPA yields unambiguous and consistent behavioral responses in uninjured mice and can be used to quantify both hypersensitivity and analgesia. This protocol describes how to use the CPA to measure cold hypersensitivity, analgesia, and adaptation in mice. PMID:25867969

  17. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  18. Adaptive scene-based nonuniformity correction method for infrared-focal plane arrays

    NASA Astrophysics Data System (ADS)

    Torres, Sergio N.; Vera, Esteban M.; Reeves, Rodrigo A.; Sobarzo, Sergio K.

    2003-08-01

    The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise. In this paper we present an enhanced adaptive scene-based non-uniformity correction (NUC) technique. The method simultaneously estimates the detectors' parameters and performs the non-uniformity compensation using a neural network approach. In addition, the proposed method does not make any assumptions about the kind or amount of non-uniformity present in the raw data. The strength and robustness of the proposed method lie in avoiding ghosting artifacts through the use of optimization techniques in the parameter estimation learning process, such as momentum, regularization, and an adaptive learning rate. The proposed method has been tested with video sequences of simulated and real infrared data taken with an InSb IRFPA, reaching high correction levels, reducing the fixed pattern noise, decreasing the ghosting, and obtaining an effective frame-by-frame adaptive estimation of each detector's gain and offset.
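
    A minimal sketch of the scene-based idea, assuming the classic LMS/neural update in which each pixel's gain and offset are nudged so the corrected value tracks a local spatial average; the momentum, regularization, and adaptive learning-rate refinements that the paper uses against ghosting are omitted, and the learning rate and synthetic scene are assumptions:

      import numpy as np

      def nuc_lms(frames, lr=0.02):
          gain = np.ones_like(frames[0])
          offset = np.zeros_like(frames[0])
          for raw in frames:
              corrected = gain * raw + offset
              pad = np.pad(corrected, 1, mode="edge")   # 3x3 local-mean target
              target = sum(pad[i:i + raw.shape[0], j:j + raw.shape[1]]
                           for i in range(3) for j in range(3)) / 9.0
              err = corrected - target
              gain -= lr * err * raw                    # LMS gradient steps
              offset -= lr * err
          return gain, offset

      # Synthetic drifting scene observed through gain/offset fixed-pattern noise.
      rng = np.random.default_rng(1)
      g_true = 1 + 0.1 * rng.standard_normal((64, 64))
      o_true = 0.2 * rng.standard_normal((64, 64))
      xx, yy = np.meshgrid(np.arange(64), np.arange(64))
      frames = [g_true * (0.5 + 0.4 * np.sin(2 * np.pi * (xx + t) / 32)
                          * np.cos(2 * np.pi * yy / 32)) + o_true
                for t in range(500)]
      gain, offset = nuc_lms(frames)
      flat = 0.5                                        # flat test scene
      before = np.std(g_true * flat + o_true)
      after = np.std(gain * (g_true * flat + o_true) + offset)
      print(f"residual fixed-pattern noise: {before:.3f} -> {after:.3f}")

    Scene motion is what lets the update separate gain from offset; with a static scene the two are not identifiable, which is one reason ghosting control matters in practice.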

  19. Vortical Flow Prediction using an Adaptive Unstructured Grid Method. Chapter 11

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2009-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  20. Development of the Adaptive Collision Source (ACS) method for discrete ordinates

    SciTech Connect

    Walters, W.; Haghighat, A.

    2013-07-01

    We have developed a new collision source method to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained, with potentially a different quadrature order. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This allows for an optimal use of processing power, by using a high order quadrature for the first few iterations that need it, before shifting to lower order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and we call it the adaptive collision source method (ACS). The ACS methodology has been implemented in the TITAN discrete ordinates code, and has shown a relative speedup of 1.5-2.5 on a test problem, for the same desired level of accuracy. (authors)

  1. Adjoint-based error estimation and mesh adaptation for the correction procedure via reconstruction method

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Wang, Z. J.

    2015-08-01

    Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction formulation (CPR) to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimates for the output with the CPR method are obtained. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.

  2. A method for online verification of adapted fields using an independent dose monitor

    SciTech Connect

    Chang Jina; Norrlinger, Bernhard D.; Heaton, Robert K.; Jaffray, David A.; Cho, Young-Bin; Islam, Mohammad K.; Mahon, Robert

    2013-07-15

    Purpose: Clinical implementation of online adaptive radiotherapy requires generation of modified fields and a method of dosimetric verification in a short time. We present a method of treatment field modification to account for patient setup error, and an online method of verification using an independent monitoring system. Methods: The fields are modified by translating each multileaf collimator (MLC) defined aperture in the direction of the patient setup error, and magnifying to account for distance variation to the marked isocentre. A modified version of a previously reported online beam monitoring system, the integral quality monitoring (IQM) system, was investigated for validation of adapted fields. The system consists of a large area ion-chamber with a spatial gradient in electrode separation to provide a spatially sensitive signal for each beam segment, mounted below the MLC, and a calculation algorithm to predict the signal. IMRT plans of ten prostate patients have been modified in response to six randomly chosen setup errors in three orthogonal directions. Results: A total of approximately 49 beams for the modified fields were verified by the IQM system, of which 97% of measured IQM signals agree with the predicted values to within 2%. Conclusions: The modified IQM system was found to be suitable for online verification of adapted treatment fields.

  3. An adaptive grid method for computing the high speed 3D viscous flow about a re-entry vehicle

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Smith, Robert E.

    1992-01-01

    An algebraic solution adaptive grid generation method that allows adapting the grid in all three coordinate directions is presented. Techniques are described that maintain the integrity of the original vehicle definition for grid point movement on the vehicle surface and that avoid grid cross over in the boundary layer portion of the grid lying next to the vehicle surface. The adaptive method is tested by computing the Mach 6 hypersonic three dimensional viscous flow about a proposed Martian entry vehicle.

  4. Vivid Motor Imagery as an Adaptation Method for Head Turns on a Short-Arm Centrifuge

    NASA Technical Reports Server (NTRS)

    Newby, N. J.; Mast, F. W.; Natapoff, A.; Paloski, W. H.

    2006-01-01

    from one another. For the perceived duration of sensations, the CG group again exhibited the least amount of adaptation. However, the rates of adaptation of the PA and the MA groups were indistinguishable, suggesting that the imagined pseudostimulus appeared to be just as effective a means of adaptation as the actual stimulus. The MA group's rate of adaptation to motion sickness symptoms was also comparable to the PA group. The use of vivid motor imagery may be an effective method for adapting to the illusory sensations and motion sickness symptoms produced by cross-coupled stimuli. For space-based AG applications, this technique may prove quite useful in retaining astronauts considered highly susceptible to motion sickness as it reduces the number of actual CCS required to attain adaptation.

  5. The Adaptively Biased Molecular Dynamics method revisited: New capabilities and an application

    NASA Astrophysics Data System (ADS)

    Moradi, Mahmoud; Babin, Volodymyr; Roland, Christopher; Sagui, Celeste

    2015-09-01

    The free energy is perhaps one of the most important quantities required for describing biomolecular systems at equilibrium. Unfortunately, accurate and reliable free energies are notoriously difficult to calculate. To address this issue, we previously developed the Adaptively Biased Molecular Dynamics (ABMD) method for accurate calculation of rugged free energy surfaces (FES). Here, we briefly review the workings of the ABMD method with an emphasis on recent software additions, along with a short summary of a selected ABMD application based on the B-to-Z DNA transition. The ABMD method, along with current extensions, is implemented in the AMBER (ver. 10-14) software package.

  6. An adaptive grid method for computing time accurate solutions on structured grids

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Smith, Robert E.; Eiseman, Peter R.

    1991-01-01

    The solution method consists of three parts: a grid movement scheme; an unsteady Euler equation solver; and a temporal coupling routine that links the dynamic grid to the Euler solver. The grid movement scheme is an algebraic method containing grid controls that generate a smooth grid that resolves the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling is performed with a grid prediction correction procedure that is simple to implement and provides a grid that does not lag the solution in time. The adaptive solution method is tested by computing the unsteady inviscid solutions for a one-dimensional shock tube and a two-dimensional shock-vortex interaction.

  7. Adaptation of LASCA method for diagnostics of malignant tumours in laboratory animals

    SciTech Connect

    Ul'yanov, S S; Laskavyi, V N; Glova, Alina B; Polyanina, T I; Ul'yanova, O V; Fedorova, V A; Ul'yanov, A S

    2012-05-31

    The LASCA method is adapted for diagnostics of malignant neoplasms in laboratory animals. Tumours are studied in mice of Balb/c inbred line after inoculation of cells of syngeneic myeloma cell line Sp.2/0 Ag.8. The appropriateness of using the tLASCA method in tumour investigations is substantiated; its advantages in comparison with the sLASCA method are demonstrated. It is found that the most informative characteristic, indicating the presence of a tumour, is the fractal dimension of LASCA images.

  8. Adaptation of LASCA method for diagnostics of malignant tumours in laboratory animals

    NASA Astrophysics Data System (ADS)

    Ul'yanov, S. S.; Laskavyi, V. N.; Glova, Alina B.; Polyanina, T. I.; Ul'yanova, O. V.; Fedorova, V. A.; Ul'yanov, A. S.

    2012-05-01

    The LASCA method is adapted for diagnostics of malignant neoplasms in laboratory animals. Tumours are studied in mice of Balb/c inbred line after inoculation of cells of syngeneic myeloma cell line Sp.2/0 — Ag.8. The appropriateness of using the tLASCA method in tumour investigations is substantiated; its advantages in comparison with the sLASCA method are demonstrated. It is found that the most informative characteristic, indicating the presence of a tumour, is the fractal dimension of LASCA images.

  9. Fast and robust reconstruction for fluorescence molecular tomography via a sparsity adaptive subspace pursuit method.

    PubMed

    Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie

    2014-02-01

    Fluorescence molecular tomography (FMT), as a promising imaging modality, can three-dimensionally locate the specific tumor position in small animals. However, effective and robust reconstruction of the fluorescent probe distribution in animals remains challenging. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Some innovative strategies, including subspace projection, a bottom-up sparsity adaptive approach, and a backtracking technique, are associated with the SASP method, which guarantees the accuracy, efficiency, and robustness of FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom have been performed to validate the feasibility of the SASP method. The results show that the proposed SASP method can achieve satisfactory source localization with a bias less than 1 mm; the method is much faster than mainstream reconstruction methods; and the approach is robust even under quite ill-posed conditions. Furthermore, we have applied this method to an in vivo mouse model, and the results demonstrate the feasibility of the practical FMT application with the SASP method.

  10. A novel timestamp based adaptive clock method for circuit emulation service over packet network

    NASA Astrophysics Data System (ADS)

    Dai, Jin-you; Yu, Shao-hua

    2007-11-01

    It is necessary to transport TDM (time division multiplexing) services over packet networks such as IP and Ethernet, and synchronization is a problem when carrying TDM over a packet network. Clock methods for TDM over packet networks are introduced. A new adaptive clock method is presented. The method is timestamp based, but no timestamps need to be transported over the packet network. By using the local oscillator and a counter, timestamp information (local timestamps) related to the service clocks of the remote PE (provider edge) and the near PE can be obtained. By using a D-EWMA filter algorithm, the noise caused by the packet network can be filtered out and the useful timestamp information extracted. With the timestamps and a voltage-controlled oscillator, the clock frequency of the near PE can be adjusted to match that of the remote PE. A simulation device is designed and a test network topology is set up to verify the method. The experimental results show that the overall performance of the new method is better than that of the ordinary buffer-based method and the ordinary timestamp-based method.
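
    A toy sketch of the timestamp filtering step, assuming a plain EWMA low-pass in place of the paper's D-EWMA variant: the filtered per-packet drift estimates the remote/local frequency offset, which in the real system would steer the voltage-controlled oscillator. The period, jitter model, and filter constant are assumptions:

      import numpy as np

      def drift_ppm(timestamps, nominal_period, alpha=0.001):
          """EWMA-filtered frequency drift of the far-end clock, in ppm."""
          ewma = 0.0
          for delta in np.diff(timestamps):
              sample = delta / nominal_period - 1.0   # per-packet drift sample
              ewma += alpha * (sample - ewma)         # EWMA low-pass filter
          return ewma * 1e6

      # Remote clock 50 ppm fast; interval jitter of 0.01% of the period.
      rng = np.random.default_rng(2)
      period, n = 125e-6, 20000
      intervals = period * (1 + 50e-6) + 1e-4 * period * rng.standard_normal(n)
      print(f"filtered drift: {drift_ppm(np.cumsum(intervals), period):.1f} ppm (true 50.0)")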

  11. Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests

    NASA Technical Reports Server (NTRS)

    Rathsam, Jonathan; Christian, Andrew

    2016-01-01

    Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method of frequentist statistics, the Generalized Linear Model (GLM), the Nonparametric Bootstrap (a frequentist method), and Markov Chain Monte Carlo Posterior Estimation (a Bayesian approach). Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
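
    Of the techniques listed, the nonparametric bootstrap is easy to show in a few lines. The sketch below resamples subjects with replacement and reads the group threshold off linearly interpolated response rates; the labs fit proper psychometric functions, so the interpolation, the logistic ground truth, and all sample sizes here are illustrative assumptions:

      import numpy as np

      def group_threshold(levels, rates, criterion=0.5):
          # Level where the group-average response rate crosses the criterion;
          # assumes the rates increase with level.
          return np.interp(criterion, rates, levels)

      rng = np.random.default_rng(3)
      levels = np.linspace(-10, 10, 9)                  # signal levels (dB)
      p_true = 1 / (1 + np.exp(-(levels - 1.0) / 2.0))  # hidden psychometric fn
      n_sub, n_trials = 12, 20
      hits = rng.binomial(n_trials, p_true, size=(n_sub, levels.size))

      point = group_threshold(levels, hits.mean(axis=0) / n_trials)
      boot = [group_threshold(levels,
                              hits[rng.integers(0, n_sub, n_sub)].mean(axis=0) / n_trials)
              for _ in range(2000)]
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"group threshold {point:.2f} dB, 95% CI [{lo:.2f}, {hi:.2f}]")

    Resampling whole subjects, rather than individual trials, is what captures between-subject variability in the group-average threshold.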

  12. Improved methods in neural network-based adaptive output feedback control, with applications to flight control

    NASA Astrophysics Data System (ADS)

    Kim, Nakwan

    Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability for a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced. It is especially useful when there is limited information concerning the plant model, and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. Design of a neural network adaptive control that ensures asymptotically stable tracking performance is also addressed.

  13. An Adaptive Mesh Refinement Strategy for Immersed Boundary/Interface Methods.

    PubMed

    Li, Zhilin; Song, Peng

    2012-01-01

    An adaptive mesh refinement strategy is proposed in this paper for the Immersed Boundary and Immersed Interface methods for two-dimensional elliptic interface problems involving singular sources. The interface is represented by the zero level set of a Lipschitz function φ(x,y). Our adaptive mesh refinement is done within a small tube of |φ(x,y)|≤ δ with finer Cartesian meshes. The discrete linear system of equations is solved by a multigrid solver. The AMR methods can obtain solutions with accuracy similar to that on a uniform fine grid while distributing the mesh more economically, thereby reducing the size of the linear system of equations. Numerical examples presented show the efficiency of the grid refinement strategy.

  14. An Adaptive Mesh Refinement Strategy for Immersed Boundary/Interface Methods

    PubMed Central

    Li, Zhilin; Song, Peng

    2012-01-01

    An adaptive mesh refinement strategy is proposed in this paper for the Immersed Boundary and Immersed Interface methods for two-dimensional elliptic interface problems involving singular sources. The interface is represented by the zero level set of a Lipschitz function φ(x,y). Our adaptive mesh refinement is done within a small tube of |φ(x,y)|≤ δ with finer Cartesian meshes. The discrete linear system of equations is solved by a multigrid solver. The AMR methods can obtain solutions with accuracy similar to that on a uniform fine grid while distributing the mesh more economically, thereby reducing the size of the linear system of equations. Numerical examples presented show the efficiency of the grid refinement strategy. PMID:22670155

  15. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows with an ultimate goal of numerically modeling complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies are classified as adaptive methods: error estimation techniques approximate the local numerical error, and the mesh is automatically refined or unrefined so as to deliver a given level of accuracy. The result is a scheme which attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.

  16. Stability of a modified Peaceman-Rachford method for the paraxial Helmholtz equation on adaptive grids

    NASA Astrophysics Data System (ADS)

    Sheng, Qin; Sun, Hai-wei

    2016-11-01

    This study concerns the asymptotic stability of an eikonal, or ray, transformation-based Peaceman-Rachford splitting method for solving the paraxial Helmholtz equation with high wave numbers. Arbitrary nonuniform grids are considered in transverse and beam propagation directions. The differential equation targeted has been used for modeling propagation of high-intensity laser pulses over long distances without diffraction. Self-focusing of high-intensity beams may be balanced by the de-focusing effect of the created ionized plasma channel in this situation, and applications of grid adaptations are frequently essential. It is shown rigorously that the fully discretized oscillation-free decomposition method on arbitrary adaptive grids is asymptotically stable with a stability index of one. Simulation experiments are carried out to illustrate our concern and conclusions.

  17. Research on a pulmonary nodule segmentation method combining fast self-adaptive FCM and classification.

    PubMed

    Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai

    2015-01-01

    The key problem in computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to the fuzzy membership from both the grayscale similarity between the central pixel and its individual neighbors and the spatial similarity between the central pixel and its neighborhood, effectively improving the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms.
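
    A generic sketch of FCM segmentation with a spatial membership term, in the spirit of (but simpler than) the enhanced spatial function described above: after the standard membership update, each class's membership map is re-weighted by its 3x3 neighborhood average, so isolated noisy pixels join their surrounding region. The fuzzifier m, iteration count, and toy image are assumptions:

      import numpy as np

      def spatial_fcm(img, c=2, m=2.0, n_iter=30):
          v = np.quantile(img, np.linspace(0.2, 0.8, c))      # initial centers
          for _ in range(n_iter):
              d = np.stack([(img - vk) ** 2 + 1e-12 for vk in v])
              u = (1.0 / d) ** (1.0 / (m - 1))
              u /= u.sum(axis=0)                              # standard FCM update
              h = np.stack([sum(np.roll(np.roll(uk, i, 0), j, 1)
                                for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
                            for uk in u])                     # 3x3 spatial function
              u *= h
              u /= u.sum(axis=0)                              # re-normalize
              um = u ** m
              v = (um * img).sum(axis=(1, 2)) / um.sum(axis=(1, 2))
          return u.argmax(axis=0), v

      rng = np.random.default_rng(7)
      img = rng.normal(0.2, 0.05, (64, 64))
      img[20:44, 20:44] += 0.5                                # bright "nodule"
      labels, centers = spatial_fcm(img)
      print("centers:", np.round(centers, 3),
            "| object pixels:", int((labels == centers.argmax()).sum()))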

  18. An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Padgett, Jill M. A.; Ilie, Silvana

    2016-03-01

    Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive for realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
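
    A toy adaptive tau-leaping sketch for a well-mixed reversible isomerization (no diffusion, so far simpler than the Reaction-Diffusion Master Equation setting above). The step-size rule, capping each species' expected relative change at eps and halving tau whenever a leap would drive a population negative, conveys the flavor of adaptive time stepping with path preservation, but it is not the paper's scheme:

      import numpy as np

      rng = np.random.default_rng(4)
      nu = np.array([[-1, 1],        # reaction 1: A -> B
                     [1, -1]])       # reaction 2: B -> A
      k = np.array([1.0, 0.5])       # rate constants
      x = np.array([1000, 0])        # populations [A, B]
      t, t_end, eps = 0.0, 5.0, 0.03

      while t < t_end:
          a = k * x                  # propensities
          if a.sum() == 0:
              break
          drift = nu.T @ a           # expected net change rate per species
          with np.errstate(divide="ignore"):
              tau = eps * np.min(np.where(drift != 0,
                                          np.maximum(x, 1) / np.abs(drift), np.inf))
          tau = min(tau, t_end - t)
          while True:                # path preservation: reject negative leaps
              fires = rng.poisson(a * tau)
              x_new = x + nu.T @ fires
              if (x_new >= 0).all():
                  break
              tau *= 0.5
          x, t = x_new, t + tau

      print(f"A = {x[0]}, B = {x[1]} (equilibrium near 333/667)")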

  19. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    SciTech Connect

    Paganelli, Chiara; Peroni, Marta

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak to peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT

  20. System and method for adaptively deskewing parallel data signals relative to a clock

    DOEpatents

    Jenkins, Philip Nord; Cornett, Frank N.

    2008-10-07

    A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.

  1. System and method for adaptively deskewing parallel data signals relative to a clock

    DOEpatents

    Jenkins, Philip Nord; Cornett, Frank N.

    2011-10-04

    A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.

  2. Logically rectangular finite volume methods with adaptive refinement on the sphere.

    PubMed

    Berger, Marsha J; Calhoun, Donna A; Helzel, Christiane; LeVeque, Randall J

    2009-11-28

    The logically rectangular finite volume grids for two-dimensional partial differential equations on a sphere and for three-dimensional problems in a spherical shell introduced recently have nearly uniform cell size, avoiding severe Courant number restrictions. We present recent results with adaptive mesh refinement using the GeoClaw software and demonstrate well-balanced methods that exactly maintain equilibrium solutions, such as shallow water equations for an ocean at rest over arbitrary bathymetry.

  3. First-principles calculation of spectral features, chemical shift and absolute threshold of ELNES and XANES using a plane wave pseudopotential method

    NASA Astrophysics Data System (ADS)

    Mizoguchi, Teruyasu; Tanaka, Isao; Gao, Shang-Peng; Pickard, Chris J.

    2009-03-01

    Spectral features, chemical shifts, and absolute thresholds of electron energy loss near-edge structure (ELNES) and x-ray absorption near-edge structure (XANES) for selected compounds, i.e. TiO2 (rutile), TiO2 (anatase), SrTiO3, Ti2O3, Al2O3, AlN and β-Ga2O3, were calculated by a plane wave pseudopotential method. Experimental ELNES/XANES of those compounds were well reproduced when an excited pseudopotential, which includes a core hole, was used. In addition to the spectral features, it was found that chemical shifts among different compounds were also reproduced by correcting the contribution of the excited pseudopotentials to the energy of the core orbital.

  4. Adaptability and stability of genotypes of sweet sorghum by GGEBiplot and Toler methods.

    PubMed

    de Figueiredo, U J; Nunes, J A R; da C Parrella, R A; Souza, E D; da Silva, A R; Emygdio, B M; Machado, J R A; Tardin, F D

    2015-01-01

    Sweet sorghum has considerable potential for ethanol and energy production. The crop is adaptable and can be grown under a wide range of cultivation conditions in marginal areas; however, studies of phenotypic stability are lacking under tropical conditions. Various methods can be used to assess the stability of the crop. Some of these methods generate the same basic information, whereas others provide additional information on genotype x environment (G x E) interactions and/or a description of the genotypes and environments. In this study, we evaluated the complementarity of two methods, GGEBiplot and Toler, with the aim of achieving more detailed information on G x E interactions and their implications for selection of sweet sorghum genotypes. We used data from 25 sorghum genotypes grown in different environments and evaluated the following traits: flowering (FLOW), green mass yield (GMY), total soluble solids (TSS), and tons of Brix per hectare (TBH). Significant G x E interactions were found for all traits. The most stable genotypes identified with the GGEBiplot method were CMSXS643 for FLOW, CMSXS644 and CMSXS647 for GMY, CMSXS646 and CMSXS637 for TSS, and BRS511 and CMSXS647 for TBH. Especially for TBH, the genotype BRS511 was classified as doubly desirable by the Toler method; however, unlike the result of the GGEBiplot method, the genotype CMSXS647 was found to be doubly undesirable. The two analytical methods were complementary and enabled a more reliable identification of adapted and stable genotypes.

  5. Adaptive non-uniformity correction method based on temperature for infrared detector array

    NASA Astrophysics Data System (ADS)

    Zhang, Zhijie; Yue, Song; Hong, Pu; Jia, Guowei; Lei, Bo

    2013-09-01

    The existence of non-uniformities in the responsivity of the element array is a severe problem typical of common infrared detectors. These non-uniformities result in a "curtain"-like fixed pattern noise (FPN) that appears in the image. Some random noise can be restrained by equalization methods, but the fixed pattern noise can only be removed by a non-uniformity correction method. The non-uniformities of a detector array arise from the combined action of the infrared detector array, the readout circuit, semiconductor device performance, the amplifier circuit, and the optical system. Conventional linear correction techniques require costly recalibration due to drift of the detector or changes in temperature. Therefore, an adaptive non-uniformity correction method is needed to solve this problem. Many factors, including detector characteristics and varying environmental conditions, are considered to analyze the causes of detector drift, and several experiments are designed to verify this analysis. Based on these experiments, an adaptive non-uniformity correction method is put forward in this paper. The strength of this method lies in its simplicity and low computational complexity. Extensive experimental results demonstrate that the proposed scheme overcomes the disadvantages of traditional non-uniformity correction methods.

  6. Shack-Hartmann wavefront sensor with large dynamic range by adaptive spot search method.

    PubMed

    Shinto, Hironobu; Saita, Yusuke; Nomura, Takanori

    2016-07-10

    A Shack-Hartmann wavefront sensor (SHWFS), which consists of a microlens array and an image sensor, has been used to measure the wavefront aberrations of human eyes. However, a conventional SHWFS has a finite dynamic range depending on the diameter of each microlens, and the dynamic range cannot be easily expanded without a decrease of the spatial resolution. In this study, an adaptive spot search method to expand the dynamic range of an SHWFS is proposed. In the proposed method, spots are searched for with the help of their approximate displacements, measured with low spatial resolution and large dynamic range. With the proposed method, a wavefront can be correctly measured even if a spot moves beyond its detection area. The adaptive spot search is realized by using a special microlens array that generates both spots and discriminable patterns. The proposed method enables expanding the dynamic range of an SHWFS with a single shot and short processing time. The performance of the proposed method is compared with that of a conventional SHWFS by optical experiments. Furthermore, the dynamic range of the proposed method is quantitatively evaluated by numerical simulations.

  7. An automatic locally-adaptive method to estimate heavily-tailed breakthrough curves from particle distributions

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Fernàndez-Garcia, Daniel

    2013-09-01

    Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each method is likely to estimate a specific portion of the BTCs. Although global methods offer a valid approach to estimate early-time behavior and peak of BTCs, they exhibit important fluctuations at the tails where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore, a new method is proposed that combines the strengths of both KDE approaches. The proposed approach is universal and only needs one parameter (α), which slightly depends on the shape of the BTCs. Results show that, for the tested cases, heavily-tailed BTCs are properly reconstructed with α ≈ 0.5.
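
    A sketch of a locally adaptive KDE reconstruction of a breakthrough curve from particle arrival times. It uses the classic Abramson-style construction, where a global-bandwidth pilot estimate sets local bandwidths proportional to pilot(t_i)^(-α), so sparse late-time particles get wider kernels; this stands in for, and is not identical to, the authors' combined estimator:

      import numpy as np

      def gaussian_kde(x_eval, samples, bandwidths):
          u = (x_eval[:, None] - samples[None, :]) / bandwidths[None, :]
          return (np.exp(-0.5 * u ** 2) / (np.sqrt(2 * np.pi) * bandwidths)).mean(axis=1)

      def adaptive_btc(arrivals, t_eval, alpha=0.5):
          n = arrivals.size
          h0 = 1.06 * arrivals.std() * n ** -0.2     # Silverman pilot bandwidth
          pilot = gaussian_kde(arrivals, arrivals, np.full(n, h0))
          g = np.exp(np.mean(np.log(pilot)))         # geometric mean of pilot
          h_local = h0 * (pilot / g) ** -alpha       # wide kernels where sparse
          return gaussian_kde(t_eval, arrivals, h_local)

      # Heavy-tailed arrival times: few late particles, noisy raw tail.
      rng = np.random.default_rng(5)
      times = rng.lognormal(mean=1.0, sigma=0.9, size=500)
      t = np.linspace(0.1, 40.0, 400)
      btc = adaptive_btc(times, t)
      print(f"recovered mass on grid: {btc.sum() * (t[1] - t[0]):.3f}")

    With alpha = 0 this reduces to the global estimator; larger alpha trades peak sharpness for smoother tails, which is the balance the abstract describes.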

  8. Staggered grid lagrangian method with local structured adaptive mesh refinement for modeling shock hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliot, N S

    2000-09-26

    A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the limits of traditional ALE methods by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.

  9. The Limits to Adaptation: A Systems Approach

    EPA Science Inventory

    The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering parameters), resource constraints (expressed th...

  10. Effects of climate change on water requirements and phenological period of major crops in Heihe River basin, China - Based on the accumulated temperature threshold method

    NASA Astrophysics Data System (ADS)

    Han, Dongmei; Xu, Xinyi; Yan, Denghua

    2016-04-01

    In recent years, global climate change has caused a serious water resources crisis throughout the world. Climate change affects crop water requirements mainly through variations in temperature: rising temperatures directly affect the growing and phenological periods of crops and thus change their water demand quotas. Methods including the accumulated temperature threshold and the climatic tendency rate, which make up for the weakness of phenological observations, were adopted to reveal the response of crop phenology during the growing period. Then, using the Penman-Monteith model and crop coefficients from the United Nations Food and Agriculture Organization (FAO), the paper first explored crop water requirements in different growth periods, and further forecasted quantitatively crop water requirements in the Heihe River Basin, China under different climate change scenarios. Results indicate that: (i) the crop phenological changes established with the accumulated temperature threshold method were in agreement with measured results; (ii) the impacts of climate warming on water requirements differed among crops, with the growth periods of wheat and corn tending to start earlier and to shorten; and (iii) under the climate change scenarios, when temperature increased by 1°C, the wheat growth period started 2 days earlier and its total length shortened by 2 days, while wheat water requirements increased by 1.4 mm; corn water requirements decreased by almost 0.9 mm for the same 1°C increase, with the corn growth period starting 3 days earlier and its total length shortening by 4 days. Therefore, the contradiction between water supply and water demand will become more pronounced under future climate warming in the Heihe River Basin, China.
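
    A minimal sketch of the accumulated temperature threshold idea: a phenological stage is dated to the first day on which growing degree days (daily mean temperature above a base, summed) cross a crop-specific threshold, so a uniformly warmer series reaches the stage earlier. The base temperature, threshold, and synthetic series are illustrative, not the paper's calibrated values:

      import numpy as np

      def stage_day(daily_mean_temp, base=10.0, threshold=170.0):
          # Growing degree days accumulated over the series; first crossing
          # of the threshold marks the phenological stage.
          gdd = np.cumsum(np.maximum(daily_mean_temp - base, 0.0))
          reached = np.nonzero(gdd >= threshold)[0]
          return int(reached[0]) if reached.size else None

      # Synthetic spring warming series, and the same series 1 degC warmer.
      days = np.arange(120)
      temps = 8.0 + 12.0 * np.sin(np.pi * days / 240.0)
      print("baseline day:", stage_day(temps),
            "| +1 degC day:", stage_day(temps + 1.0))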

  11. Threshold quantum cryptography

    SciTech Connect

    Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki

    2005-01-01

    We present the concept of threshold collaborative unitary transformation, or threshold quantum cryptography, which is a quantum version of threshold cryptography. In threshold quantum cryptography, classical shared secrets are distributed to several parties, and a subset of them whose number exceeds a threshold collaborates to compute a quantum cryptographic function, while keeping each share secret inside each party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed protocol (with threshold) of conjugate coding.
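
    The classical ingredient of such schemes, distributing shared secrets so that only a sufficiently large subset of parties can use them, is commonly realized with Shamir secret sharing; the quantum collaboration step is beyond a short sketch. A minimal (t, n) illustration over a prime field, with illustrative parameters:

      # Sketch of the classical ingredient of threshold cryptography: Shamir
      # (t, n) secret sharing over a prime field. Any t shares reconstruct the
      # secret; fewer reveal nothing. The quantum collaboration step of the
      # paper is not modeled here. Field size and secret are illustrative.
      import random

      P = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

      def make_shares(secret, t, n):
          coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
          return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                  for x in range(1, n + 1)]

      def reconstruct(shares):
          # Lagrange interpolation of the sharing polynomial at x = 0.
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num, den = 1, 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, -1, P)) % P
          return secret

      shares = make_shares(123456789, t=3, n=5)
      print(reconstruct(shares[:3]) == 123456789)  # any 3 of 5 shares suffice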

  12. Adaptive f-k deghosting method based on non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Lu, Wenkai

    2016-04-01

    For conventional horizontal towed streamer data, the f-k deghosting method is widely used to remove receiver ghosts. In the traditional f-k deghosting method, the streamer depth and the sea-surface reflection coefficient are the two key ghost parameters. In general, these two parameters are fixed for all shot gathers of a seismic line and supplied by the user. In practice, they often vary during acquisition because of rough sea conditions. This paper proposes an automatic method to adaptively obtain these two ghost parameters for every shot gather. Since the proposed method is based on the non-Gaussianity of the deghosting result, it is important to choose a proper non-Gaussian criterion to ensure high accuracy of the parameter estimation. We evaluate six non-Gaussian criteria in a synthetic experiment; the conclusions of this experiment are expected to provide a reference for choosing the most appropriate criterion. We apply the proposed method to a 2D real field example. Experimental results show that the optimal parameters vary among shot gathers and validate the effectiveness of the parameter estimation process. Moreover, even though the method ignores parameter variation within a single shot, the adaptive deghosting results show improvements over those obtained with constant parameters for the whole line.
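
    A toy per-trace version of the parameter search, assuming the simple ghost model G(f) = 1 + r·exp(-2πifτ): scan candidate delays and reflection coefficients, apply the stabilized inverse ghost operator, and keep the pair that maximizes a non-Gaussianity score (kurtosis here). The actual method works in the f-k domain on whole shot gathers; scan ranges and the stabilization constant are illustrative.

      # Toy illustration of non-Gaussianity-driven deghosting on one trace.
      import numpy as np

      def deghost(trace, dt, tau, r, eps=0.1):
          f = np.fft.rfftfreq(len(trace), dt)
          ghost = 1.0 + r * np.exp(-2j * np.pi * f * tau)
          spec = np.fft.rfft(trace) * np.conj(ghost) / (np.abs(ghost)**2 + eps)
          return np.fft.irfft(spec, len(trace))

      def kurtosis(x):
          x = x - x.mean()
          return np.mean(x**4) / (np.mean(x**2)**2 + 1e-12)

      def estimate_ghost_params(trace, dt, taus, rs):
          best = max(((kurtosis(deghost(trace, dt, tau, r)), tau, r)
                      for tau in taus for r in rs), key=lambda s: s[0])
          return best[1], best[2]  # (tau, r) giving the most non-Gaussian output

      # Synthetic test: sparse reflectivity ghosted with tau = 8 ms, r = -0.9.
      rng = np.random.default_rng(0)
      dt, n = 0.002, 1000
      refl = np.zeros(n); refl[rng.choice(n, 15)] = rng.standard_normal(15)
      ghosted = refl + (-0.9) * np.roll(refl, int(0.008 / dt))
      taus = np.arange(0.004, 0.013, 0.002)
      print(estimate_ghost_params(ghosted, dt, taus, rs=[-1.0, -0.9, -0.8]))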

  13. Effect of the curing method and composite volume on marginal and internal adaptation of composite restoratives.

    PubMed

    Souza-Junior, Eduardo José; de Souza-Régis, Marcos Ribeiro; Alonso, Roberta Caroline Bruschi; de Freitas, Anderson Pinheiro; Sinhoreti, Mario Alexandre Coelho; Cunha, Leonardo Gonçalves

    2011-01-01

    The aim of the present study was to evaluate the influence of curing methods and composite volumes on the marginal and internal adaptation of composite restoratives. Two cavities with different volumes (Lower volume: 12.6 mm(3); Higher volume: 24.5 mm(3)) were prepared on the buccal surface of 60 bovine teeth and restored using Filtek Z250 in bulk filling. For each cavity, specimens were randomly assigned into three groups according to the curing method (n=10): 1) continuous light (CL: 27 seconds at 600 mW/cm(2)); 2) soft-start (SS: 10 seconds at 150 mW/cm(2)+24 seconds at 600 mW/cm(2)); and 3) pulse delay (PD: five seconds at 150 mW/cm(2)+three minutes with no light+25 seconds at 600 mW/cm(2)). The radiant exposure for all groups was 16 J/cm(2). Marginal adaptation was measured with the dye staining gap procedure, using Caries Detector. Outer margins were stained for five seconds and the gap percentage was determined using digital images on a computer measurement program (Image Tool). Then, specimens were sectioned in slices and stained for five seconds, and the internal gaps were measured using the same method. Data were submitted to two-way analysis of variance and Tukey test (p<0.05). Composite volume had a significant influence on superficial and internal gap formation, depending on the curing method. For CL groups, restorations with higher volume showed higher marginal gap incidence than did the lower volume restorations. Additionally, the effect of the curing method depended on the volume. Regarding marginal adaptation, SS resulted in a significant reduction of gap formation, when compared to CL, for higher volume restorations. For lower volume restorations, there was no difference among the curing methods. For internal adaptation, the modulated curing methods SS and PD promoted a significant reduction of gap formation, when compared to CL, only for the lower volume restoration. Therefore, in similar conditions of the cavity configuration, the higher the

  14. Adaptive integral method with fast Gaussian gridding for solving combined field integral equations

    NASA Astrophysics Data System (ADS)

    Bakır, O.; Bağcı, H.; Michielssen, E.

    Fast Gaussian gridding (FGG), a recently proposed nonuniform fast Fourier transform algorithm, is used to reduce the memory requirements of the adaptive integral method (AIM) for accelerating the method of moments-based solution of combined field integral equations pertinent to the analysis of scattering from three-dimensional perfect electrically conducting surfaces. Numerical results that demonstrate the efficiency and accuracy of the AIM-FGG hybrid in comparison to an AIM-accelerated solver, which uses moment matching to project surface sources onto an auxiliary grid, are presented.
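
    A sketch of the fast Gaussian gridding spreading step that underlies the memory saving, following Greengard and Lee's factorization of the Gaussian kernel so that each source costs only two exponential evaluations. Grid size, kernel width, and truncation support are illustrative, and the FFT and deconvolution stages of the NUFFT are omitted.

      # Spread point sources onto a uniform auxiliary grid with a truncated
      # Gaussian, factored per Greengard & Lee so each source needs O(1) exp()
      # calls plus cheap multiplications.
      import numpy as np

      def fgg_spread(x, q, M, tau, P=12):
          """Spread sources at x in [0, 2*pi) with strengths q onto an M-point
          periodic grid using a truncated Gaussian of width tau."""
          h = 2 * np.pi / M
          grid = np.zeros(M, dtype=complex)
          E3 = np.exp(-(np.arange(-P, P + 1) * h) ** 2 / (4 * tau))  # precomputed
          for xj, qj in zip(x, q):
              m = int(round(xj / h))
              E1 = np.exp(-(xj - m * h) ** 2 / (4 * tau))   # one exp per source
              E2 = np.exp((xj - m * h) * h / (2 * tau))     # one exp per source
              for k, l in enumerate(range(-P, P + 1)):
                  grid[(m + l) % M] += qj * E1 * (E2 ** l) * E3[k]
          return grid

      rng = np.random.default_rng(1)
      x = rng.uniform(0, 2 * np.pi, 50)
      q = rng.standard_normal(50).astype(complex)
      print(np.round(np.abs(fgg_spread(x, q, M=128, tau=1e-2)).max(), 3))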

  15. Hybrid numerical method with adaptive overlapping meshes for solving nonstationary problems in continuum mechanics

    NASA Astrophysics Data System (ADS)

    Burago, N. G.; Nikitin, I. S.; Yakushev, V. L.

    2016-06-01

    Techniques that improve the accuracy of numerical solutions and reduce their computational costs are discussed as applied to continuum mechanics problems with complex time-varying geometry. The approach combines shock-capturing computations with the following methods: (1) overlapping meshes for specifying complex geometry; (2) elastic arbitrarily moving adaptive meshes for minimizing the approximation errors near shock waves, boundary layers, contact discontinuities, and moving boundaries; (3) matrix-free implementation of efficient iterative and explicit-implicit finite element schemes; (4) balancing viscosity (version of the stabilized Petrov-Galerkin method); (5) exponential adjustment of physical viscosity coefficients; and (6) stepwise correction of solutions for providing their monotonicity and conservativeness.

  16. Adaptively biased molecular dynamics: An umbrella sampling method with a time-dependent potential

    NASA Astrophysics Data System (ADS)

    Babin, Volodymyr; Karpusenka, Vadzim; Moradi, Mahmoud; Roland, Christopher; Sagui, Celeste

    We discuss an adaptively biased molecular dynamics (ABMD) method for the computation of a free energy surface for a set of reaction coordinates. The ABMD method belongs to the general category of umbrella sampling methods with an evolving biasing potential. It is characterized by a small number of control parameters and an O(t) numerical cost with simulation time t. The method naturally allows for extensions based on multiple walkers and a replica exchange mechanism. The workings of the method are illustrated with a number of examples, including sugar puckering and free energy landscapes for polymethionine and polyproline peptides and for a short β-turn peptide. ABMD has been implemented in the latest version (Case et al., AMBER 10; University of California: San Francisco, 2008) of the AMBER software package and is freely available to the simulation community.
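
    The essence of an evolving biasing potential can be sketched in a few lines: deposit small repulsive kernels along the visited reaction coordinate so that, at long times, the accumulated bias approximately compensates the free energy. The flooding-style update below is a simplification in the spirit of ABMD (closer to metadynamics than to ABMD's actual spline-based estimator); the kernel shape, deposition rate, and overdamped Langevin integrator are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      xs = np.linspace(-2.0, 2.0, 201)           # reaction-coordinate grid
      bias = np.zeros_like(xs)                   # evolving biasing potential
      dU = lambda x: 4 * x * (x**2 - 1)          # gradient of double well (x^2-1)^2

      def bias_force(x):
          # -dB/dx by centered finite differences on the grid
          i = np.clip(np.searchsorted(xs, x), 1, len(xs) - 2)
          return -(bias[i + 1] - bias[i - 1]) / (xs[i + 1] - xs[i - 1])

      x, dt, kT, gamma = -1.0, 1e-3, 0.2, 1.0
      for step in range(200_000):
          noise = np.sqrt(2 * kT * dt / gamma) * rng.standard_normal()
          x += dt / gamma * (-dU(x) + bias_force(x)) + noise
          x = np.clip(x, xs[0], xs[-1])
          if step % 100 == 0:                    # deposit a small Gaussian kernel
              bias += 0.01 * np.exp(-(xs - x) ** 2 / (2 * 0.1 ** 2))

      # Well-to-barrier bias difference should approach the ~1.0 barrier height:
      print(np.round(bias.max() - bias[np.argmin(np.abs(xs))], 2))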

  17. Parallel level-set methods on adaptive tree-based grids

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Mohammad; Guittet, Arthur; Burstedde, Carsten; Gibou, Frederic

    2016-10-01

    We present scalable algorithms for the level-set method on dynamic, adaptive Quadtree and Octree Cartesian grids. The algorithms are fully parallelized and implemented using the MPI standard and the open-source p4est library. We solve the level set equation with a semi-Lagrangian method which, similar to its serial implementation, is free of any time-step restrictions. This is achieved by introducing a scalable global interpolation scheme on adaptive tree-based grids. Moreover, we present a simple parallel reinitialization scheme using the pseudo-time transient formulation. Both parallel algorithms scale on the Stampede supercomputer, where we are currently using up to 4096 CPU cores, the limit of our current account. Finally, a relevant application of the algorithms is presented in modeling a crystallization phenomenon by solving a Stefan problem, illustrating a level of detail that would be impossible to achieve without a parallel adaptive strategy. We believe that the algorithms presented in this article will be of interest and useful to researchers working with the level-set framework and modeling multi-scale physics in general.
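
    A minimal single-resolution sketch of the semi-Lagrangian level-set update, assuming a uniform grid in place of the paper's adaptive trees and MPI layers: each grid point is traced back along the velocity field and the old level-set values are interpolated at the departure point, which is why the step carries no CFL restriction.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def semi_lagrangian_step(phi, u, v, dt, h):
          ny, nx = phi.shape
          jj, ii = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
          # departure points in index coordinates (first-order backward trace)
          dep_j = jj - dt * v / h
          dep_i = ii - dt * u / h
          return map_coordinates(phi, [dep_j, dep_i], order=1, mode="nearest")

      # Rigid-body rotation of a circular interface:
      n, h = 128, 1.0 / 128
      y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
      phi = np.sqrt((x - 0.5) ** 2 + (y - 0.75) ** 2) - 0.15   # signed distance
      u, v = -(y - 0.5), (x - 0.5)
      for _ in range(100):
          phi = semi_lagrangian_step(phi, u, v, dt=0.01, h=h)
      print(np.round(phi.min(), 3))   # negative: the interface survives advection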

  18. An Adaptive INS-Aided PLL Tracking Method for GNSS Receivers in Harsh Environments.

    PubMed

    Cong, Li; Li, Xin; Jin, Tian; Yue, Song; Xue, Rui

    2016-01-01

    As the weak link in global navigation satellite system (GNSS) signal processing, the phase-locked loop (PLL) is prone to frequent cycle slips and loss of lock under high vehicle dynamics and low signal-to-noise ratios. With inertial navigation system (INS) aiding, PLL tracking performance can be improved. However, in harsh environments with high dynamics and signal attenuation, the traditional INS-aided PLL with fixed loop parameters has limited ability to improve tracking adaptability. In this paper, an adaptive INS-aided PLL capable of adjusting its noise bandwidth and coherent integration time is proposed. Through theoretical analysis, the relation between the INS-aided PLL phase tracking error and the carrier to noise density ratio (C/N₀), vehicle dynamics, aiding information update time, noise bandwidth, and coherent integration time is established. The relation formulae are used to choose the optimal integration time and bandwidth for a given application under a minimum tracking error criterion. Software and hardware simulation results verify the correctness of the theoretical analysis and demonstrate that the adaptive tracking method can effectively improve PLL tracking ability and integrated GNSS/INS navigation performance. In harsh environments, the tracking sensitivity is increased by 3 to 5 dB, velocity errors are decreased by 36% to 50%, and position errors are decreased by 6% to 24% when compared with other INS-aided PLL methods.

  19. An Adaptive INS-Aided PLL Tracking Method for GNSS Receivers in Harsh Environments

    PubMed Central

    Cong, Li; Li, Xin; Jin, Tian; Yue, Song; Xue, Rui

    2016-01-01

    As the weak link in global navigation satellite system (GNSS) signal processing, the phase-locked loop (PLL) is prone to frequent cycle slips and loss of lock under high vehicle dynamics and low signal-to-noise ratios. With inertial navigation system (INS) aiding, PLL tracking performance can be improved. However, in harsh environments with high dynamics and signal attenuation, the traditional INS-aided PLL with fixed loop parameters has limited ability to improve tracking adaptability. In this paper, an adaptive INS-aided PLL capable of adjusting its noise bandwidth and coherent integration time is proposed. Through theoretical analysis, the relation between the INS-aided PLL phase tracking error and the carrier to noise density ratio (C/N0), vehicle dynamics, aiding information update time, noise bandwidth, and coherent integration time is established. The relation formulae are used to choose the optimal integration time and bandwidth for a given application under a minimum tracking error criterion. Software and hardware simulation results verify the correctness of the theoretical analysis and demonstrate that the adaptive tracking method can effectively improve PLL tracking ability and integrated GNSS/INS navigation performance. In harsh environments, the tracking sensitivity is increased by 3 to 5 dB, velocity errors are decreased by 36% to 50%, and position errors are decreased by 6% to 24% when compared with other INS-aided PLL methods. PMID:26805853
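
    The selection rule can be sketched with a textbook-style PLL error budget (e.g., Kaplan and Hegarty): thermal jitter shrinks with narrow bandwidth and long coherent integration, while dynamic stress from the residual (post-aiding) dynamics grows, so the optimum balances the two. The constants, scan ranges, and third-order-loop rule of thumb below are illustrative, not the paper's formulae.

      import numpy as np

      def pll_error_deg(bn, T, cn0_dbhz, jerk_dps3):
          cn0 = 10 ** (cn0_dbhz / 10)                       # dB-Hz -> ratio-Hz
          sigma_t = (360 / (2 * np.pi)) * np.sqrt(bn / cn0 * (1 + 1 / (2 * T * cn0)))
          w0 = bn / 0.7845                                   # third-order loop
          theta_e = jerk_dps3 / w0 ** 3                      # dynamic stress (deg)
          return 3 * sigma_t + theta_e                       # 3-sigma budget

      def best_params(cn0_dbhz, jerk_dps3):
          grid = [(bn, T) for bn in np.arange(2.0, 30.0, 0.5)
                          for T in (0.001, 0.002, 0.005, 0.01, 0.02)]
          return min(grid, key=lambda p: pll_error_deg(*p, cn0_dbhz, jerk_dps3))

      # Weak signal and well-aided (low residual jerk): narrow Bn, long T win.
      print(best_params(cn0_dbhz=25, jerk_dps3=5))
      # Strong signal but poorly aided dynamics: a wider bandwidth is preferred.
      print(best_params(cn0_dbhz=45, jerk_dps3=5000))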

  1. Methods and evaluations of MRI content-adaptive finite element mesh generation for bioelectromagnetic problems.

    PubMed

    Lee, W H; Kim, T-S; Cho, M H; Ahn, Y B; Lee, S Y

    2006-12-01

    In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems.

  2. Encoding and simulation of daily rainfall records via adaptations of the fractal multifractal method

    NASA Astrophysics Data System (ADS)

    Maskey, M.; Puente, C. E.; Sivakumar, B.; Cortis, A.

    2015-12-01

    A deterministic geometric approach, the fractal-multifractal (FM) method, is adapted to encode and simulate daily rainfall records exhibiting noticeable intermittency. Using data sets gathered at Laikakota in Bolivia and Tinkham in Washington State, USA, it is demonstrated that the adapted FM approach can, within the limits of accuracy of the measured sets and using only a few geometric parameters, encode and simulate the erratic rainfall records reasonably well. The FM procedure not only preserves the statistical attributes of the records, such as the histogram, entropy function and distribution of zeroes, but also captures the overall texture inherent in the rather complex intermittent sets. As such, the FM deterministic representations may be used to supplement stochastic frameworks for data coding and simulation.

  3. Adaptive particle refinement and derefinement applied to the smoothed particle hydrodynamics method

    NASA Astrophysics Data System (ADS)

    Barcarolo, D. A.; Le Touzé, D.; Oger, G.; de Vuyst, F.

    2014-09-01

    SPH simulations are usually performed with a uniform particle distribution. New techniques have recently been proposed to enable the use of spatially varying particle distributions, which encouraged the development of automatic adaptivity and particle refinement/derefinement algorithms. All these efforts resulted in very interesting and promising procedures leading to more efficient and faster SPH simulations. In this article, a family of particle refinement techniques is reviewed and a new derefinement technique is proposed and validated through several test cases involving both free-surface and viscous flows. This new procedure allows higher resolutions in the regions requiring increased accuracy, and several levels of refinement can be used, as often encountered in adaptive mesh refinement techniques in mesh-based methods.

  4. A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method

    NASA Astrophysics Data System (ADS)

    Bush, I. J.; Todorov, I. T.; Smith, W.

    2006-09-01

    The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowitz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe the software adaptations undertaken to import this functionality and provide a review of its performance.

  5. Parallel simulation of multiphase flows using octree adaptivity and the volume-of-fluid method

    NASA Astrophysics Data System (ADS)

    Agbaglah, Gilou; Delaux, Sébastien; Fuster, Daniel; Hoepffner, Jérôme; Josserand, Christophe; Popinet, Stéphane; Ray, Pascal; Scardovelli, Ruben; Zaleski, Stéphane

    2011-02-01

    We describe computations performed using the Gerris code, an open-source software implementing finite volume solvers on an octree adaptive grid together with a piecewise linear volume of fluid interface tracking method. The parallelisation of Gerris is achieved by domain decomposition. We show examples of the capabilities of Gerris on several types of problems. The impact of a droplet on a layer of the same liquid results in the formation of a thin air layer trapped between the droplet and the liquid layer, which the adaptive refinement is able to capture. It is followed by the jetting of a thin corolla emerging from below the impacting droplet. The jet atomisation problem is another extremely challenging computational problem, in which a large number of small scales are generated. Finally we show an example of a turbulent jet computation at an equivalent resolution of 6×1024 cells. The jet simulation is based on the configuration of the Deepwater Horizon oil leak.

  6. Threshold magnitudes for a multichannel correlation detector in background seismicity

    DOE PAGES

    Carmichael, Joshua D.; Hartse, Hans

    2016-04-01

    Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes mb = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb. 12, 2013 announced nuclear test.
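
    A minimal sketch of the detector core, assuming a known multichannel template: slide the template over the data, average the per-channel normalized correlations into a single statistic, and declare a detection where it exceeds a threshold. The paper's magnitude calibration, noise-adaptive thresholds, and reciprocity arguments are omitted; window sizes and amplitudes are illustrative.

      import numpy as np

      def corr_stat(data, template):
          """Averaged normalized cross-correlation; data/template: (nch, nsamp)."""
          nch, nt = template.shape
          nwin = data.shape[1] - nt + 1
          stat = np.zeros(nwin)
          for ch in range(nch):
              t = template[ch] - template[ch].mean()
              t /= (np.linalg.norm(t) + 1e-12)
              for k in range(nwin):
                  w = data[ch, k:k + nt] - data[ch, k:k + nt].mean()
                  stat[k] += t @ w / (np.linalg.norm(w) + 1e-12)
          return stat / nch

      rng = np.random.default_rng(3)
      template = rng.standard_normal((3, 200))
      data = 0.5 * rng.standard_normal((3, 5000))
      data[:, 3000:3200] += template                     # buried target waveform
      stat = corr_stat(data, template)
      print(stat.argmax(), np.round(stat.max(), 2))      # peak near sample 3000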

  7. An adaptive segment method for smoothing lidar signal based on noise estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in the paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead derives changing end points for different signals, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and an average smoothing method is then applied within the segments. An iterative process is required to reduce the end-point aberration effect of the average smoothing, and two or three iterations are enough. In the ASSM, signals are smoothed in the spatial domain rather than the frequency domain, which means frequency-domain disturbance is avoided. In the experimental work, a lidar echo was simulated as if created by a space-borne lidar (e.g., CALIOP), and white Gaussian noise was added to the echo to represent the random noise resulting from the environment and the detector. The ASSM was applied to the noisy echo to filter the noise. In the test, N was set to 3 and two iterations were used. The results show that the signal can be smoothed adaptively by the ASSM, but N and the number of iterations might need to be optimized when the ASSM is applied to a different lidar.
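
    Since the abstract spells out the recipe, it can be transcribed almost directly; the edge-padding choice inside the moving average and the synthetic demo are illustrative details not fixed by the paper.

      import numpy as np

      def smooth_segment(seg, win, iterations):
          pad = max(win // 2, 1)
          kernel = np.ones(win) / win
          for _ in range(iterations):                  # repeat to settle end points
              padded = np.pad(seg, pad, mode="edge")   # avoid zero-padding bias
              seg = np.convolve(padded, kernel, mode="same")[pad:pad + len(seg)]
          return seg

      def assm(signal, background, N=3, iterations=2):
          sigma = np.std(background)                   # noise level from background
          jumps = np.where(np.abs(np.diff(signal)) > 3 * N * sigma)[0] + 1
          edges = np.concatenate(([0], jumps, [len(signal)]))
          out = np.asarray(signal, float).copy()
          for a, b in zip(edges[:-1], edges[1:]):      # smooth each segment alone
              win = max((b - a) // 2, 1)               # window = half the segment
              out[a:b] = smooth_segment(out[a:b], win, iterations)
          return out

      # Demo on a synthetic two-level "lidar-like" profile with Gaussian noise:
      rng = np.random.default_rng(4)
      clean = np.concatenate([np.full(300, 10.0), np.full(300, 2.0)])
      noisy = clean + 0.3 * rng.standard_normal(clean.size)
      smoothed = assm(noisy, background=noisy[-100:], N=3)
      print(np.round(np.abs(smoothed - clean).mean(), 3))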

  8. A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D; Burkardt, John V

    2014-03-01

    This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  9. An adaptive kernel smoothing method for classifying Austrosimulium tillyardianum (Diptera: Simuliidae) larval instars.

    PubMed

    Cen, Guanjun; Yu, Yonghao; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks' rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby's growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods.
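
    A compact sketch of the idea, using Abramson-style adaptive bandwidths (a pilot fixed-bandwidth KDE sets local bandwidths, narrow where data are dense and wide where sparse) and reading instar divisions off as the local minima between modes; the bandwidth constants are illustrative rather than the selector derived in the paper.

      import numpy as np

      def adaptive_kde(x, grid, h0):
          pilot = np.mean(np.exp(-0.5 * ((x[:, None] - x[None, :]) / h0) ** 2), 0)
          pilot /= h0 * np.sqrt(2 * np.pi)
          lam = np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)   # local factors
          h = h0 * lam                                            # per-point widths
          d = (grid[None, :] - x[:, None]) / h[:, None]
          return np.mean(np.exp(-0.5 * d**2) / (h[:, None] * np.sqrt(2 * np.pi)), 0)

      def instar_cuts(x, h0=0.04):
          grid = np.linspace(x.min(), x.max(), 512)
          f = adaptive_kde(np.asarray(x, float), grid, h0)
          interior = (f[1:-1] < f[:-2]) & (f[1:-1] < f[2:])       # local minima
          return grid[1:-1][interior]

      # Synthetic "head capsule width" data from three overlapping instars:
      rng = np.random.default_rng(5)
      widths = np.concatenate([rng.normal(0.30, 0.02, 80),
                               rng.normal(0.45, 0.03, 80),
                               rng.normal(0.65, 0.04, 80)])
      print(np.round(instar_cuts(widths), 3))   # divisions between the modes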

  10. An Adaptive Kernel Smoothing Method for Classifying Austrosimulium tillyardianum (Diptera: Simuliidae) Larval Instars

    PubMed Central

    Cen, Guanjun; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks’ rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby’s growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods. PMID:26546689

  11. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    DOE PAGES

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  12. Adaptive circle-ellipse fitting method for estimating tree diameter based on single terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Bu, Guochao; Wang, Pei

    2016-04-01

    Terrestrial laser scanning (TLS) has been used to extract accurate forest biophysical parameters for inventory purposes. The diameter at breast height (DBH) is a key parameter for individual trees because it has the potential for modeling the height, volume, biomass, and carbon sequestration potential of the tree based on empirical allometric scaling equations. In order to extract the DBH from single-scan TLS data automatically and accurately within a certain range, we propose an adaptive circle-ellipse fitting method based on a point cloud transect. The proposed method can correct the error caused by simple circle fitting when a tree is slanted: a slanted tree is detected by the circle-ellipse fitting analysis, and the corresponding slant angle is then found from the ellipse fitting result. With this information, the DBH of the tree can be recalculated by reslicing the point cloud data at breast height. Artificial stem data simulated with a cylindrical model of leaning trees and scanning data acquired with the RIEGL VZ-400 were used to test the proposed adaptive fitting method. The results show that the proposed method can detect leaning trees and accurately estimate their DBH.

  13. An adaptive filter-based method for robust, automatic detection and frequency estimation of whistles.

    PubMed

    Johansson, A Torbjorn; White, Paul R

    2011-08-01

    This paper proposes an adaptive filter-based method for detection and frequency estimation of whistle calls, such as the calls of birds and marine mammals, which are typically analyzed in the time-frequency domain using a spectrogram. The approach taken here is based on adaptive notch filtering, which is an established technique for frequency tracking. For application to automatic whistle processing, methods for detection and improved frequency tracking through frequency crossings as well as interfering transients are developed and coupled to the frequency tracker. Background noise estimation and compensation is accomplished using order statistics and pre-whitening. Using simulated signals as well as recorded calls of marine mammals and a human whistled speech utterance, it is shown that the proposed method can detect more simultaneous whistles than two competing spectrogram-based methods while not reporting any false alarms on the example datasets. In one example, it extracts complete 1.4 and 1.8 s bottlenose dolphin whistles successfully through frequency crossings. The method performs detection and estimates frequency tracks even at high sweep rates. The algorithm is also shown to be effective on human whistled utterances.

  14. Adaptive Projection Subspace Dimension for the Thick-Restart Lanczos Method

    SciTech Connect

    Yamazaki, Ichitaro; Bai, Zhaojun; Simon, Horst; Wang, Lin-Wang; Wu, K.

    2008-10-01

    The Thick-Restart Lanczos (TRLan) method is an effective method for solving large-scale Hermitian eigenvalue problems. However, its performance strongly depends on the dimension of the projection subspace. In this paper, we propose an objective function to quantify the effectiveness of a chosen subspace dimension, and then introduce an adaptive scheme to dynamically adjust the dimension at each restart. An open-source software package, nu-TRLan, which implements the TRLan method with this adaptive projection subspace dimension is available in the public domain. The numerical results of synthetic eigenvalue problems are presented to demonstrate that nu-TRLan achieves speedups of between 0.9 and 5.1 over the static method using a default subspace dimension. To demonstrate the effectiveness of nu-TRLan in a real application, we apply it to the electronic structure calculations of quantum dots. We show that nu-TRLan can achieve speedups of greater than 1.69 over the state-of-the-art eigensolver for this application, which is based on the Conjugate Gradient method with a powerful preconditioner.

  15. Feasibility of an online adaptive replanning method for cranial frameless intensity-modulated radiosurgery

    SciTech Connect

    Calvo, Juan Francisco; San José, Sol; Garrido, Lluís; Puertas, Enrique; Moragues, Sandra; Pozo, Miquel; Casals, Joan

    2013-10-01

    To introduce an approach for online adaptive replanning (i.e., dose-guided radiosurgery) in frameless stereotactic radiosurgery, when a 6-dimensional (6D) robotic couch is not available in the linear accelerator (linac). Cranial radiosurgical treatments are planned in our department using an intensity-modulated technique. Patients are immobilized using a thermoplastic mask. A cone-beam computed tomography (CBCT) scan is acquired after the initial laser-based patient setup (CBCT_setup). The online adaptive replanning procedure we propose consists of a 6D registration-based mapping of the reference plan onto the actual CBCT_setup, followed by a reoptimization of the beam fluences (“6D plan”) to achieve a dosage similar to what was originally intended, while the patient is lying on the linac couch and the original beam arrangement is kept. The proposed online adaptive method was retrospectively analyzed for 16 patients with 35 targets treated with the CBCT-based frameless intensity-modulated technique. A simulation of the reference plan onto the actual CBCT_setup, using the 4 degrees of freedom supported by the linac couch, was also generated for each case (4D plan). Target coverage (D99%) and conformity index values of the 6D and 4D plans were compared with the corresponding values of the reference plans. Although the 4D-based approach does not always assure target coverage (D99% between 72% and 103%), the proposed online adaptive method gave complete coverage in all cases analyzed, as well as a conformity index value similar to that planned. The dose-guided radiosurgery approach is effective in assuring the dose coverage and conformity of an intracranial target volume, avoiding having to reset the patient inside the mask in a “trial and error” way to remove pitch and roll errors when a robotic table is not available.

  16. The stochastic control of the F-8C aircraft using the Multiple Model Adaptive Control (MMAC) method

    NASA Technical Reports Server (NTRS)

    Athans, M.; Dunn, K. P.; Greene, E. S.; Lee, W. H.; Sandel, N. R., Jr.

    1975-01-01

    The purpose of this paper is to summarize results obtained for the adaptive control of the F-8C aircraft using the so-called Multiple Model Adaptive Control method. The discussion includes the selection of the performance criteria for both the lateral and the longitudinal dynamics, the design of the Kalman filters for different flight conditions, the 'identification' aspects of the design using hypothesis testing ideas, and the performance of the closed loop adaptive system.

  17. Self-adaptive method for high frequency multi-channel analysis of surface wave method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    When the high frequency multi-channel analysis of surface waves (MASW) method is conducted to explore soil properties in the vadose zone, existing rules for selecting the near offset and spread lengths cannot satisfy the requirements of planar dominant Rayleigh waves for all frequencies of interest ...

  18. EPA Water Resources Adaptation Program (WRAP) Research and Development Activities Methods and Techniques

    EPA Science Inventory

    Adaptation to environmental change is not a new concept. Humans have shown throughout history a capacity for adapting to different climates and environmental changes. Farmers, foresters, civil engineers, have all been forced to adapt to numerous challenges to overcome adversity...

  19. Threshold Concepts in Biochemistry

    ERIC Educational Resources Information Center

    Loertscher, Jennifer

    2011-01-01

    Threshold concepts can be identified for any discipline and provide a framework for linking student learning to curricular design. Threshold concepts represent a transformed understanding of a discipline, without which the learner cannot progress and are therefore pivotal in learning in a discipline. Although threshold concepts have been…

  20. An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1994-01-01

    This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. The purpose of this work is first and primarily to focus on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and then to briefly explore some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.

  1. Investigation of self-adaptive LED surgical lighting based on entropy contrast enhancing method

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Huihui; Zhang, Yaqin; Shen, Junfei; Wu, Rengmao; Zheng, Zhenrong; Li, Haifeng; Liu, Xu

    2014-05-01

    An investigation was performed to explore the possibility of enhancing contrast by varying the spectral power distribution (SPD) of the surgical lighting. Illumination scenes with different SPDs were generated by combining a self-adaptive white light optimization method with the LED ceiling system; images of a biological sample were taken by a CCD camera and then processed by an 'entropy'-based contrast evaluation model proposed specifically for the surgical setting. Compared with neutral-white-LED-based and traditional algorithm-based image enhancing methods, the illumination-based enhancing method delivers better contrast enhancement, improving the average contrast value by about 9% and 6%, respectively. This low-cost method is simple and practicable, and thus may provide an alternative to expensive vision-enhancement medical instruments.
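
    A sketch of an entropy-based contrast score of the kind described, assuming the criterion is the Shannon entropy of the grey-level histogram, which grows as image detail spreads over more intensity levels; the 8-bit binning is illustrative.

      import numpy as np

      def entropy_contrast(img_u8):
          hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
          p = hist / hist.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p))        # bits; higher = richer contrast

      rng = np.random.default_rng(6)
      flat = np.full((64, 64), 120, dtype=np.uint8)            # low-contrast scene
      detailed = rng.integers(0, 256, (64, 64)).astype(np.uint8)
      print(round(entropy_contrast(flat), 2), round(entropy_contrast(detailed), 2))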

  2. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    NASA Technical Reports Server (NTRS)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases including the 3D drop breakup in an impulsively accelerated free stream, and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  3. Validation of an Adaptive Combustion Instability Control Method for Gas-Turbine Engines

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; DeLaat, John C.; Chang, Clarence T.

    2004-01-01

    This paper describes ongoing testing of an adaptive control method to suppress high frequency thermo-acoustic instabilities like those found in lean-burning, low emission combustors that are being developed for future aircraft gas turbine engines. The method, called Adaptive Sliding Phasor Averaged Control, was previously tested in an experimental rig designed to simulate a combustor with an instability of about 530 Hz. Results published earlier, and briefly presented here, demonstrated that this method was effective in suppressing the instability. Because this test rig did not exhibit a well-pronounced instability, a question remained regarding the effectiveness of the control methodology when applied to a more coherent instability. To answer this question, a modified combustor rig was assembled at the NASA Glenn Research Center in Cleveland, Ohio. The modified rig exhibited a more coherent, higher amplitude instability, but at a lower frequency of about 315 Hz. Test results show that this control method successfully reduced the instability pressure of the lower frequency test rig. In addition, due to a phenomenon discovered and reported earlier, so-called intra-harmonic coupling, a dramatic suppression of the instability was achieved by focusing control on the second harmonic of the instability. These results and their implications are discussed, as well as a hypothesis describing the mechanism of intra-harmonic coupling.

  4. Self-adaptive projection methods for the multiple-sets split feasibility problem

    NASA Astrophysics Data System (ADS)

    Zhao, Jinling; Yang, Qingzhi

    2011-03-01

    The multiple-sets split feasibility problem (MSFP) is to find a point closest to the intersection of a family of closed convex sets in one space, such that its image under a linear transformation will be closest to the intersection of another family of closed convex sets in the image space. This problem arises in many practical fields, and it can be a model for many inverse problems. Noting that some existing algorithms require estimating the Lipschitz constant or calculating the largest eigenvalue of the matrix, in this paper, we first introduce a self-adaptive projection method by adopting Armijo-like searches to solve the MSFP, then we focus on a special case of the MSFP and propose a relaxed self-adaptive method by using projections onto half-spaces instead of those onto the original convex sets, which is much more practical. Convergence results for both methods are analyzed. Preliminary numerical results show that our methods are practical and promising for solving larger scale MSFPs.
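
    The backbone of such schemes is the gradient-projection iteration on the MSFP proximity function; with closed convex sets C_i, Q_j, weights α_i, β_j and linear map A, one common form (a sketch in the style of Censor et al., not necessarily the exact iteration of this paper) is

      \[
      p(x) = \tfrac{1}{2}\sum_{i} \alpha_i \,\| x - P_{C_i}(x) \|^2
           + \tfrac{1}{2}\sum_{j} \beta_j \,\| Ax - P_{Q_j}(Ax) \|^2,
      \]
      \[
      x^{k+1} = x^k - \gamma_k \nabla p(x^k), \qquad
      \nabla p(x) = \sum_{i} \alpha_i \big(x - P_{C_i}(x)\big)
                  + \sum_{j} \beta_j A^{T}\big(Ax - P_{Q_j}(Ax)\big),
      \]

    with the self-adaptive step γ_k chosen by an Armijo-type backtracking search, which is what removes the need to estimate the Lipschitz constant (essentially the largest eigenvalue of AᵀA) in advance.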

  5. Patched based methods for adaptive mesh refinement solutions of partial differential equations

    SciTech Connect

    Saltzman, J.

    1997-09-02

    This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs, with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes, which are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundary and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods and elliptically constrained flow solution methods with diffusion.

  6. Adaptive optics in spinning disk microscopy: improved contrast and brightness by a simple and fast method.

    PubMed

    Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J

    2015-09-01

    Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples.

  7. Adaptive correction method for an OCXO and investigation of analytical cumulative time error upper bound.

    PubMed

    Zhou, Hui; Kunz, Thomas; Schwartz, Howard

    2011-01-01

    Traditional oscillators used in timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit more inaccurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance the oscillators to meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop which creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm improves the oscillator performance significantly, compared with uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically and comparison results between the analytical and simulated upper bound are provided. The results show that the analytical upper bound can serve as a practical guide for system designers.
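
    A minimal sketch of the identification step in the spirit of the recursive prediction error method: recursive least squares fitting of an autoregressive model of the oscillator's frequency deviation. The model order, forgetting factor, and synthetic AR(2) demo are illustrative, not the paper's oscillator model.

      import numpy as np

      def rls_identify(y, order=2, lam=0.999):
          theta = np.zeros(order)
          P = 1e4 * np.eye(order)
          for n in range(order, len(y)):
              phi = y[n - order:n][::-1]            # regressor of past outputs
              err = y[n] - phi @ theta              # one-step prediction error
              K = P @ phi / (lam + phi @ P @ phi)   # gain
              theta += K * err
              P = (P - np.outer(K, phi @ P)) / lam
          return theta

      rng = np.random.default_rng(8)
      y = np.zeros(2000)
      for n in range(2, 2000):                      # true AR(2): a1=1.5, a2=-0.7
          y[n] = 1.5 * y[n-1] - 0.7 * y[n-2] + 0.01 * rng.standard_normal()
      print(np.round(rls_identify(y), 3))           # should approach [1.5, -0.7]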

  8. Comparative adaptation accuracy of acrylic denture bases evaluated by two different methods.

    PubMed

    Lee, Chung-Jae; Bok, Sung-Bem; Bae, Ji-Young; Lee, Hae-Hyoung

    2010-08-01

    This study examined the adaptation accuracy of acrylic denture bases processed using fluid-resin (PERform), injection-molding (SR-Ivocap, Success, Mak Press), and two compression-molding techniques. The adaptation accuracy was measured primarily from the posterior border gaps at the mid-palatal area using a microscope, and subsequently by weighing the impression material between the denture base and the master cast using hand-mixed and automixed silicone. The correlation between the data measured using these two test methods was examined. PERform and Mak Press produced significantly smaller maximum palatal gap dimensions than the other groups (p<0.05). Mak Press also showed a significantly smaller weight of automixed silicone material than the other groups (p<0.05), while SR-Ivocap and Success showed adaptation accuracy similar to the compression-molded dentures. The correlation between the magnitude of the posterior border gap and the weight of the silicone impression materials was affected by either the material or mixing variables.

  9. Adaptive control system having hedge unit and related apparatus and methods

    NASA Technical Reports Server (NTRS)

    Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)

    2007-01-01

    The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.

  10. Adaptive control system having hedge unit and related apparatus and methods

    NASA Technical Reports Server (NTRS)

    Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)

    2003-01-01

    The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.

  11. Prism adaptation and neck muscle vibration in healthy individuals: are two methods better than one?

    PubMed

    Guinet, M; Michel, C

    2013-12-19

    Studies involving therapeutic combinations reveal an important benefit in the rehabilitation of neglect patients when compared to single therapies. In light of these observations, the present work examines, in healthy individuals, the sensorimotor and cognitive after-effects of prism adaptation and neck muscle vibration applied individually or simultaneously. We explored sensorimotor after-effects on visuo-manual open-loop pointing and on visual and proprioceptive straight-ahead estimations, and assessed cognitive after-effects on the line bisection task. Fifty-four healthy participants were divided into six groups designated according to the exposure procedure used with each: 'Prism' (P) group; 'Vibration with a sensation of body rotation' (Vb) group; 'Vibration with a movement illusion of the LED' (Vl) group; 'Association with a sensation of body rotation' (Ab) group; 'Association with a movement illusion of the LED' (Al) group; and 'Control' (C) group. The main findings were that prism adaptation, applied alone or combined with vibration, produced significant adaptation in visuo-manual open-loop pointing, visual straight-ahead and proprioceptive straight-ahead. Vibration alone produced significant after-effects on proprioceptive straight-ahead estimation in the Vl group. Furthermore, all groups (except the C group) showed a rightward neglect-like bias in line bisection following the training procedure. This is the first demonstration of cognitive after-effects following neck muscle vibration in healthy individuals. The simultaneous application of both methods did not produce significantly greater after-effects than prism adaptation alone in either the sensorimotor or the cognitive tasks. These results are discussed in terms of transfer of sensorimotor plasticity to spatial cognition in healthy individuals.

  13. Adaptive methods of two-scale edge detection in post-enhancement visual pattern processing

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2008-04-01

    Adaptive methods are defined and experimentally studied for a two-scale edge detection process that mimics human visual perception of edges and is inspired by the parvo-cellular (P) and magno-cellular (M) physiological subsystems of natural vision. This two-channel processing consists of a high spatial acuity/coarse contrast channel (P) and a coarse acuity/fine contrast (M) channel. We perform edge detection after a very strong non-linear image enhancement that uses smart Retinex image processing. Two conditions that arise from this enhancement demand adaptiveness in edge detection. These conditions are the presence of random noise further exacerbated by the enhancement process, and the equally random occurrence of dense textural visual information. We examine how best to deal with both phenomena with an automatic adaptive computation that treats both high noise and dense textures as too much information, and gracefully shifts from small-scale to medium-scale edge pattern priorities. This shift is accomplished by using different edge-enhancement schemes that correspond with the (P) and (M) channels of the human visual system. We also examine the case of adapting to a third image condition, namely too little visual information, and automatically adjust edge detection sensitivities when sparse feature information is encountered. When this methodology is applied to a sequence of images of the same scene but with varying exposures and lighting conditions, this edge-detection process produces pattern constancy that is very useful for several imaging applications that rely on image classification in variable imaging conditions.

  14. Data-adapted moving least squares method for 3-D image interpolation

    NASA Astrophysics Data System (ADS)

    Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho

    2013-12-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that improve on the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown, then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.
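
    The core moving least squares step that this scheme builds on can be sketched as follows. This is a minimal 1-D illustration with a plain Gaussian weight; the paper's locally data-adapted weighting and the 3-D extension are not reproduced, and the function and parameter names are illustrative.

```python
import numpy as np

def mls_interpolate(x_eval, x_data, f_data, degree=2, h=1.0):
    """At each evaluation point, fit a local polynomial by weighted least
    squares (weights decay with distance) and evaluate it at that point."""
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    x_data = np.asarray(x_data, dtype=float)
    f_data = np.asarray(f_data, dtype=float)
    out = np.empty_like(x_eval)
    for k, x0 in enumerate(x_eval):
        w = np.exp(-((x_data - x0) / h) ** 2)   # Gaussian weights (assumed)
        sw = np.sqrt(w)
        # Basis centered at x0, so the fitted constant term equals p(x0).
        V = np.vander(x_data - x0, degree + 1, increasing=True)
        coef, *_ = np.linalg.lstsq(V * sw[:, None], sw * f_data, rcond=None)
        out[k] = coef[0]
    return out
```

    A data-adapted variant in the spirit of the paper would shape the weight (for example, its width h) from the local image structure instead of keeping it fixed.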

  15. System and method for adaptively deskewing parallel data signals relative to a clock

    DOEpatents

    Jenkins, Philip Nord; Cornett, Frank N.

    2006-04-18

    A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of the received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. Each of the plurality of delayed signals is compared to a reference signal to detect changes in the skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in the detected skew.
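
    A behavioral sketch of the adaptation loop the abstract describes, written in Python only for illustration: real implementations adjust hardware delay lines, and the step size, deadband, and delay range below are assumptions.

```python
def update_lane_delays(delays, skews, step=1, deadband=0, max_delay=63):
    """One adaptation cycle. skews[i] is lane i's arrival time relative to
    the reference clock (positive = late, negative = early). Early lanes
    get more delay, late lanes get less, within the hardware delay range."""
    updated = []
    for d, s in zip(delays, skews):
        if s < -deadband:
            d = min(d + step, max_delay)   # early: add delay to align
        elif s > deadband:
            d = max(d - step, 0)           # late: remove delay
        updated.append(d)                  # within deadband: hold steady
    return updated
```

    Repeating this cycle against the reference signal lets the per-lane delays track slow changes in skew, as the abstract describes.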

  16. FALCON: A method for flexible adaptation of local coordinates of nuclei.

    PubMed

    König, Carolin; Hansen, Mads Bøttger; Godtliebsen, Ian H; Christiansen, Ove

    2016-02-21

    We present a flexible scheme for calculating vibrational rectilinear coordinates with well-defined strict locality on a certain set of atoms. Introducing a method for Flexible Adaption of Local COordinates of Nuclei (FALCON) we show how vibrational subspaces can be "grown" in an adaptive manner. Subspace Hessian matrices are set up and used to calculate and analyze vibrational modes and frequencies. FALCON coordinates can more generally be used to construct vibrational coordinates for describing local and (semi-local) interacting modes with desired features. For instance, spatially local vibrations can be approximately described as internal motion within only a group of atoms and delocalized modes can be approximately expressed as relative motions of rigid groups of atoms. The FALCON method can support efficiency in the calculation and analysis of vibrational coordinates and energies in the context of harmonic and anharmonic calculations. The features of this method are demonstrated on a few small molecules, i.e., formylglycine, coumarin, and dimethylether as well as for the amide-I band and low-frequency modes of alanine oligomers and alpha conotoxin.
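
    The generic building block mentioned above, diagonalizing a mass-weighted subspace Hessian to obtain modes that are strictly local to a chosen set of atoms, can be sketched as follows. This shows only that single step, not the full FALCON growing scheme, and the names are illustrative.

```python
import numpy as np

def subspace_modes(hessian, masses, atoms):
    """Extract the Cartesian Hessian block for the chosen atom subset,
    mass-weight it, and diagonalize to obtain strictly local modes."""
    idx = np.ravel([[3 * a, 3 * a + 1, 3 * a + 2] for a in atoms])
    H = hessian[np.ix_(idx, idx)]              # subspace Hessian block
    m = np.repeat(np.asarray(masses)[list(atoms)], 3)
    Hmw = H / np.sqrt(np.outer(m, m))          # mass weighting
    omega_sq, q = np.linalg.eigh(Hmw)          # squared frequencies, modes
    return omega_sq, q
```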

  17. FALCON: A method for flexible adaptation of local coordinates of nuclei

    NASA Astrophysics Data System (ADS)

    König, Carolin; Hansen, Mads Bøttger; Godtliebsen, Ian H.; Christiansen, Ove

    2016-02-01

    We present a flexible scheme for calculating vibrational rectilinear coordinates with well-defined strict locality on a certain set of atoms. Introducing a method for Flexible Adaption of Local COordinates of Nuclei (FALCON) we show how vibrational subspaces can be "grown" in an adaptive manner. Subspace Hessian matrices are set up and used to calculate and analyze vibrational modes and frequencies. FALCON coordinates can more generally be used to construct vibrational coordinates for describing local and (semi-local) interacting modes with desired features. For instance, spatially local vibrations can be approximately described as internal motion within only a group of atoms and delocalized modes can be approximately expressed as relative motions of rigid groups of atoms. The FALCON method can support efficiency in the calculation and analysis of vibrational coordinates and energies in the context of harmonic and anharmonic calculations. The features of this method are demonstrated on a few small molecules, i.e., formylglycine, coumarin, and dimethylether as well as for the amide-I band and low-frequency modes of alanine oligomers and alpha conotoxin.

  18. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results than those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional re-initialization method but utilizes the sign of curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  19. Adaptive explicit and implicit finite element methods for transient thermal analysis

    NASA Technical Reports Server (NTRS)

    Probert, E. J.; Hassan, O.; Morgan, K.; Peraire, J.

    1992-01-01

    The application of adaptive finite element methods to the solution of transient heat conduction problems in two dimensions is investigated. The computational domain is represented by an unstructured assembly of linear triangular elements and the mesh adaptation is achieved by local regeneration of the grid, using an error estimation procedure coupled to an automatic triangular mesh generator. Two alternative solution procedures are considered. In the first procedure, the solution is advanced by explicit timestepping, with domain decomposition being used to improve the computational efficiency of the method. In the second procedure, an algorithm for constructing continuous lines which pass only once through each node of the mesh is employed. The lines are used as the basis of a fully implicit method, in which the equation system is solved by line relaxation using a block tridiagonal equation solver. The numerical performance of the two procedures is compared for the analysis of a problem involving a moving heat source applied to a convectively cooled cylindrical leading edge.

  20. A three-dimensional adaptive grid method [for computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A three-dimensional solution-adaptive-grid scheme is described which is suitable for complex fluid flows. This method, using tension and torsion spring analogies, was previously developed and successfully applied for two-dimensional flows. In the present work, a collection of three-dimensional flow fields are used to demonstrate the feasibility and versatility of this concept to include an added dimension. Flow fields considered include: (1) supersonic flow past an aerodynamic afterbody with a propulsive jet at incidence to the free stream, (2) supersonic flow past a blunt fin mounted on a solid wall, and (3) supersonic flow over a bump. In addition to generating three-dimensional solution-adapted grids, the method can also be used effectively as an initial grid generator. The utility of the method lies in: (1) optimum distribution of discrete grid points, (2) improvement of accuracy, (3) improved computational efficiency, (4) minimization of data base sizes, and (5) simplified three-dimensional grid generation.

  1. FALCON: A method for flexible adaptation of local coordinates of nuclei.

    PubMed

    König, Carolin; Hansen, Mads Bøttger; Godtliebsen, Ian H; Christiansen, Ove

    2016-02-21

    We present a flexible scheme for calculating vibrational rectilinear coordinates with well-defined strict locality on a certain set of atoms. Introducing a method for Flexible Adaption of Local COordinates of Nuclei (FALCON) we show how vibrational subspaces can be "grown" in an adaptive manner. Subspace Hessian matrices are set up and used to calculate and analyze vibrational modes and frequencies. FALCON coordinates can more generally be used to construct vibrational coordinates for describing local and (semi-local) interacting modes with desired features. For instance, spatially local vibrations can be approximately described as internal motion within only a group of atoms and delocalized modes can be approximately expressed as relative motions of rigid groups of atoms. The FALCON method can support efficiency in the calculation and analysis of vibrational coordinates and energies in the context of harmonic and anharmonic calculations. The features of this method are demonstrated on a few small molecules, i.e., formylglycine, coumarin, and dimethylether as well as for the amide-I band and low-frequency modes of alanine oligomers and alpha conotoxin. PMID:26896977

  2. Efficient reconstruction method for ground layer adaptive optics with mixed natural and laser guide stars.

    PubMed

    Wagner, Roland; Helin, Tapio; Obereder, Andreas; Ramlau, Ronny

    2016-02-20

    The imaging quality of modern ground-based telescopes such as the planned European Extremely Large Telescope is affected by atmospheric turbulence. In consequence, they heavily depend on stable and high-performance adaptive optics (AO) systems. Using measurements of incoming light from guide stars, an AO system compensates for the effects of turbulence by adjusting so-called deformable mirror(s) (DMs) in real time. In this paper, we introduce a novel reconstruction method for ground layer adaptive optics. In the literature, a common approach to this problem is to use Bayesian inference in order to model the specific noise structure appearing due to spot elongation. This approach leads to large coupled systems with high computational effort. Recently, fast solvers of linear order, i.e., with computational complexity O(n), where n is the number of DM actuators, have emerged. However, the quality of such methods typically degrades in low flux conditions. Our key contribution is to achieve the high quality of the standard Bayesian approach while at the same time maintaining the linear order speed of the recent solvers. Our method is based on performing a separate preprocessing step before applying the cumulative reconstructor (CuReD). The efficiency and performance of the new reconstructor are demonstrated using the OCTOPUS, the official end-to-end simulation environment of the ESO for extremely large telescopes. For more specific simulations we also use the MOST toolbox. PMID:26906596

  3. Wavefront detection method of a single-sensor based adaptive optics system.

    PubMed

    Wang, Chongchong; Hu, Lifa; Xu, Huanyu; Wang, Yukun; Li, Dayu; Wang, Shaoxin; Mu, Quanquan; Yang, Chengliang; Cao, Zhaoliang; Lu, Xinghai; Xuan, Li

    2015-08-10

    In adaptive optics systems (AOSs) for optical telescopes, the wavefront sensing strategy reported to date consists of two parts: a dedicated sensor for tip-tilt (TT) detection and another wavefront sensor for detecting the other distortions. A part of the incident light therefore has to be used for TT detection, which decreases the light energy available to the wavefront sensor and eventually reduces the precision of wavefront correction. In this paper, a wavefront measurement method based on a single Shack-Hartmann wavefront sensor is presented for measuring both large-amplitude TT and the other distortions. Experiments were performed to test the presented method and to validate the wavefront detection and correction ability of the single-sensor based AOS. With adaptive correction, the root-mean-square of the residual TT was less than 0.2 λ, and a clear image was obtained in the lab. Equipped on a 1.23-meter optical telescope, the AOS clearly resolved binary stars with an angular separation of 0.6″. This wavefront measurement method removes the separate TT sensor, which not only simplifies the AOS but also saves light energy for subsequent wavefront sensing and imaging, and eventually improves the detection and imaging capability of the AOS. PMID:26367988

  4. A Newton method with adaptive finite elements for solving phase-change problems with natural convection

    NASA Astrophysics Data System (ADS)

    Danaila, Ionut; Moglan, Raluca; Hecht, Frédéric; Le Masson, Stéphane

    2014-10-01

    We present a new numerical system using finite elements with mesh adaptivity for the simulation of solid-liquid phase change systems. In the liquid phase, the natural convection flow is simulated by solving the incompressible Navier-Stokes equations with the Boussinesq approximation. A variable viscosity model allows the velocity to progressively vanish in the solid phase, through an intermediate mushy region. The phase change is modeled by introducing an implicit enthalpy source term in the heat equation. The final system of equations describing the liquid-solid system by a single-domain approach is solved using a Newton iterative algorithm. The space discretization is based on P2-P1 Taylor-Hood finite elements, and mesh adaptivity by metric control is used to accurately track the solid-liquid interface or the density inversion interface for water flows. The numerical method is validated against classical benchmarks that progressively add strong non-linearities in the system of equations: natural convection of air, natural convection of water, melting of a phase-change material, and water freezing. Very good agreement with experimental data is obtained for each test case, proving the capability of the method to deal with both melting and solidification problems with convection. The presented numerical method is easy to implement in the FreeFem++ software, using a syntax close to the mathematical formulation.
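
    Schematically, the single-domain model described above couples the incompressible Navier-Stokes equations with Boussinesq buoyancy to a heat equation carrying an enthalpy source. The exact closures for the variable viscosity, the buoyancy force, and the latent-heat enthalpy follow the paper and are only indicated symbolically here:

```latex
\begin{aligned}
\nabla\cdot\boldsymbol{u} &= 0,\\
\partial_t \boldsymbol{u} + (\boldsymbol{u}\cdot\nabla)\boldsymbol{u}
  &= -\nabla p + \nabla\cdot\big(\nu(T)\,\nabla\boldsymbol{u}\big)
     + f_B(T)\,\boldsymbol{e}_y,\\
\partial_t T + \boldsymbol{u}\cdot\nabla T
  &= \nabla\cdot(k\,\nabla T) - \partial_t S(T),
\end{aligned}
```

    where $\nu(T)$ becomes large in the solid and mushy regions so the velocity vanishes there, $f_B(T)$ is the Boussinesq buoyancy term, and $S(T)$ releases or absorbs latent heat across the phase-change interval.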

  5. Directionally adaptive finite element method for multidimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Tan, Zhiqiang; Varghese, Philip L.

    1993-01-01

    A directionally adaptive finite element method for multidimensional compressible flows is presented. Quadrilateral and hexahedral elements are used because they have several advantages over triangular and tetrahedral elements. Unlike traditional methods that use quadrilateral/hexahedral elements, our method allows an element to be divided in each of the three directions in 3D and two directions in 2D. Some restrictions on mesh structure are found to be necessary, especially in 3D. The refining and coarsening procedures, and the treatment of constraints are given. A new implementation of upwind schemes in the constrained finite element system is presented. Some example problems, including a Mach 10 shock interaction with the walls of a 2D channel, a 2D viscous compression corner flow, and inviscid and viscous 3D flows in square channels, are also shown.

  6. Adaptive homochromous disturbance elimination and feature selection based mean-shift vehicle tracking method

    NASA Astrophysics Data System (ADS)

    Ding, Jie; Lei, Bo; Hong, Pu; Wang, Chensheng

    2011-11-01

    This paper introduces a novel method to adaptively diminish the effects of disturbance in traffic video shot by an airborne camera. Based on the motion vector of the tracked vehicle, a search area in the next frame is predicted; this is the area of interest (AOI) for the mean-shift method. Background color estimation is performed based on the previous tracking and is used to judge whether there is possible disturbance in the predicted search area of the next frame. Without disturbance, the difference image of vehicle and background can be used as the input feature for the mean-shift algorithm; with disturbance, the histogram of colors in the predicted area is calculated to find the most and second-most disturbing colors. Experiments showed that this method can diminish or eliminate the effects of homochromous disturbance and leads to more precise and more robust tracking.
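
    The two preparatory steps described above, predicting the AOI from the motion vector and ranking disturbing colors by histogram, might look as follows in outline. The box conventions, bin counts, and margin factor are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def predict_aoi(center, motion, size, margin=1.5):
    """Shift the current target center along its motion vector and pad the
    box: this is the search area (AOI) handed to mean-shift next frame."""
    cx, cy = center[0] + motion[0], center[1] + motion[1]
    w, h = margin * size[0], margin * size[1]
    return (cx - w / 2, cy - h / 2, w, h)      # (x, y, width, height)

def disturbing_colors(aoi_pixels, n_bins=16, top=2):
    """Histogram the AOI's quantized RGB values and return the codes of
    the most and second-most frequent colors (candidate disturbances)."""
    q = aoi_pixels.astype(int).reshape(-1, 3) // (256 // n_bins)
    codes = (q[:, 0] * n_bins + q[:, 1]) * n_bins + q[:, 2]
    counts = np.bincount(codes, minlength=n_bins ** 3)
    return np.argsort(counts)[::-1][:top]
```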

  7. Multigrid iterative method with adaptive spatial support for computed tomography reconstruction from few-view data

    NASA Astrophysics Data System (ADS)

    Lee, Ping-Chang

    2014-03-01

    Computed tomography (CT) plays a key role in modern medicine, whether for diagnosis or therapy. Because exposure to radiation is associated with an increased risk of cancer development, reducing radiation exposure in CT has become an essential issue. Based on compressive sensing (CS) theory, iterative methods with total variation (TV) minimization have proven to be a powerful framework for few-view tomographic image reconstruction. The multigrid method is an iterative method for solving both linear and nonlinear systems, especially when the system contains a huge number of components. In medical imaging, the image background is often defined by zero intensity, which yields a spatial support for the image that is helpful for iterative reconstruction. In the proposed method, the image support is not treated as a priori knowledge; rather, it evolves during the reconstruction process. Based on the CS framework, we propose a multigrid method with an adaptive spatial support constraint. The simultaneous algebraic reconstruction technique (SART) with TV minimization is implemented for comparison. The numerical results show that: (1) the multigrid method performs better when fewer than 60 projection views are used; (2) the spatial support constraint greatly improves the CS reconstruction; and (3) when few views of projection data are measured, our method performs better than the SART+TV method with the spatial support constraint.

  8. An adaptive distance-based group contribution method for thermodynamic property prediction.

    PubMed

    He, Tanjin; Li, Shuang; Chi, Yawei; Zhang, Hong-Bo; Wang, Zhi; Yang, Bin; He, Xin; You, Xiaoqing

    2016-09-14

    In the search for an accurate yet inexpensive method to predict thermodynamic properties of large hydrocarbon molecules, we have developed an automatic and adaptive distance-based group contribution (DBGC) method. The method characterizes the group interaction within a molecule with an exponential decay function of the group-to-group distance, defined as the number of bonds between the groups. A database containing the molecular bonding information and the standard enthalpy of formation (Hf,298K) for alkanes, alkenes, and their radicals at the M06-2X/def2-TZVP//B3LYP/6-31G(d) level of theory was constructed. Multiple linear regression (MLR) and artificial neural network (ANN) fitting were used to obtain the contributions from individual groups and group interactions for further predictions. Compared with the conventional group additivity (GA) method, the DBGC method predicts Hf,298K for alkanes more accurately using the same training sets. Particularly for some highly branched large hydrocarbons, the discrepancy with the literature data is smaller for the DBGC method than the conventional GA method. When extended to other molecular classes, including alkenes and radicals, the overall accuracy level of this new method is still satisfactory. PMID:27522953
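
    The distance-damped interaction at the heart of the DBGC idea can be sketched as below. The per-group constants and pairwise coefficients stand in for values that the paper obtains by MLR or ANN fitting; the names and the single decay constant alpha are illustrative assumptions.

```python
import numpy as np

def dbgc_enthalpy(group_ids, bond_distance, base, pair_coeff, alpha=1.0):
    """Distance-based group contribution sketch: sum per-group terms plus
    pairwise interaction terms damped by exp(-alpha * d), where d is the
    number of bonds separating the two groups."""
    H = sum(base[g] for g in group_ids)            # plain group additivity
    n = len(group_ids)
    for i in range(n):
        for j in range(i + 1, n):
            key = tuple(sorted((group_ids[i], group_ids[j])))
            # Interaction decays exponentially with bond-count distance.
            H += pair_coeff.get(key, 0.0) * np.exp(-alpha * bond_distance[i][j])
    return H
```

    With alpha set to zero the pair terms no longer decay, and the expression reduces to a conventional additivity-style scheme, which is why the distance damping is the distinguishing ingredient here.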

  9. A new method for the analysis of the dynamics of the molecular genetic control systems. II. Application of the method of generalized threshold models in the investigation of concrete genetic systems.

    PubMed

    Prokudina, E I; Valeev, R Yu; Tchuraev, R N

    1991-07-01

    Mathematical models of the prokaryotic control systems of tryptophan biosynthesis (both normal and with cloned blocks) and arabinose catabolism have been built using the method of generalized threshold models. Kinetic curves for molecular components (mRNAs, proteins, metabolites) of the systems considered are obtained. It has been shown that the method of generalized threshold models gives a more detailed qualitative picture of the dynamics of the molecular genetic control systems in comparison with the heuristic method of threshold models. Qualitative analysis, based on the mathematical model of the control system of tryptophan biosynthesis, of three control mechanisms, namely (1) inhibition of the activity of anthranilate synthetase by tryptophan, (2) repression, and (3) attenuation of transcription of the tryptophan operon, demonstrates that feedback inhibition is the most operative of the considered mechanisms, while repression allows the bacterium to economize intracellular resources. As regards the control system of arabinose catabolism, the results of modelling enable us to state the following. Within a wide range of parameter values, induction by arabinose causes two subsystems (the araBAD and transport operons) of the arabinose regulon with a low rate of arabinose utilization to pass into a stationary regime and one subsystem (the araC operon) to pass into a stable periodic regime. A study of the system characterized by effective utilization of arabinose has shown that, under induction by arabinose, stable oscillations with small amplitudes of the concentration of the regulatory protein and oscillations with large amplitudes of the concentrations of arabinose-isomerase and the transport protein may occur. The period of the oscillation depends on the mean lifetime of the "activator-DNA" complex and on the rate constant of arabinose-isomerase degradation.

  10. Building Adaptive Capacity with the Delphi Method and Mediated Modeling for Water Quality and Climate Change Adaptation in Lake Champlain Basin

    NASA Astrophysics Data System (ADS)

    Coleman, S.; Hurley, S.; Koliba, C.; Zia, A.; Exler, S.

    2014-12-01

    Eutrophication and nutrient pollution of surface waters occur within complex governance, social, hydrologic and biophysical basin contexts. The pervasive and perennial nutrient pollution in Lake Champlain Basin, despite decades of efforts, exemplifies problems found across the world's surface waters. Stakeholders with diverse values, interests, and forms of explicit and tacit knowledge determine water quality impacts through land use, agricultural and water resource decisions. Uncertainty, ambiguity and dynamic feedback further complicate the ability to promote the continual provision of water quality and ecosystem services. Adaptive management of water resources and land use requires mechanisms to allow for learning and integration of new information over time. The transdisciplinary Research on Adaptation to Climate Change (RACC) team is working to build regional adaptive capacity in Lake Champlain Basin while studying and integrating governance, land use, hydrological, and biophysical systems to evaluate implications for adaptive management. The RACC team has engaged stakeholders through mediated modeling workshops, online forums, surveys, focus groups and interviews. In March 2014, CSS2CC.org, an interactive online forum to source and identify adaptive interventions from a group of stakeholders across sectors, was launched. The forum, based on the Delphi Method, brings forward the collective wisdom of stakeholders and experts to identify potential interventions and governance designs in response to scientific uncertainty and ambiguity surrounding the effectiveness of any strategy, climate change impacts, and the social and natural systems governing water quality and eutrophication. A Mediated Modeling Workshop followed the forum in May 2014, where participants refined and identified plausible interventions under different governance, policy and resource scenarios. Results from the online forum and workshop can identify emerging consensus across scales and sectors.

  11. Practical Method of Adaptive Radiotherapy for Prostate Cancer Using Real-Time Electromagnetic Tracking

    SciTech Connect

    Olsen, Jeffrey R.; Noel, Camille E.; Baker, Kenneth; Santanam, Lakshmi; Michalski, Jeff M.; Parikh, Parag J.

    2012-04-01

    Purpose: We have created an automated process using real-time tracking data to evaluate the adequacy of planning target volume (PTV) margins in prostate cancer, allowing a process of adaptive radiotherapy with minimal physician workload. We present an analysis of PTV adequacy and a proposed adaptive process. Methods and Materials: Tracking data were analyzed for 15 patients who underwent step-and-shoot multi-leaf collimation (SMLC) intensity-modulated radiation therapy (IMRT) with uniform 5-mm PTV margins for prostate cancer using the Calypso® Localization System. Additional plans were generated with 0- and 3-mm margins. A custom software application using the planned dose distribution and structure location from computed tomography (CT) simulation was developed to evaluate the dosimetric impact to the target due to motion. The dose delivered to the prostate was calculated for the initial three, five, and 10 fractions, and for the entire treatment. Treatment was accepted as adequate if the minimum delivered prostate dose (D_min) was at least 98% of the planned D_min. Results: For 0-, 3-, and 5-mm PTV margins, adequate treatment was obtained in 3 of 15, 12 of 15, and 15 of 15 patients, and the delivered D_min ranged from 78% to 99%, 96% to 100%, and 99% to 100% of the planned D_min. Changes in D_min did not correlate with magnitude of prostate motion. Treatment adequacy during the first 10 fractions predicted sufficient dose delivery for the entire treatment for all patients and margins. Conclusions: Our adaptive process successfully used real-time tracking data to predict the need for PTV modifications, without the added burden of physician contouring and image analysis. Our methods are applicable to other uses of real-time tracking, including hypofractionated treatment.

  12. The Adaptive Biasing Force Method: Everything You Always Wanted To Know but Were Afraid To Ask

    PubMed Central

    2014-01-01

    In the host of numerical schemes devised to calculate free energy differences by way of geometric transformations, the adaptive biasing force algorithm has emerged as a promising route to map complex free-energy landscapes. It relies upon the simple concept that as a simulation progresses, a continuously updated biasing force is added to the equations of motion, such that in the long-time limit it yields a Hamiltonian devoid of an average force acting along the transition coordinate of interest. This means that sampling proceeds uniformly on a flat free-energy surface, thus providing reliable free-energy estimates. Much of the appeal of the algorithm to the practitioner is in its physically intuitive underlying ideas and the absence of any requirements for prior knowledge about free-energy landscapes. Since its inception in 2001, the adaptive biasing force scheme has been the subject of considerable attention, from in-depth mathematical analysis of convergence properties to novel developments and extensions. The method has also been successfully applied to many challenging problems in chemistry and biology. In this contribution, the method is presented in a comprehensive, self-contained fashion, discussing with a critical eye its properties, applicability, and inherent limitations, as well as introducing novel extensions. Through free-energy calculations of prototypical molecular systems, many methodological aspects are examined, from stratification strategies to overcoming the so-called hidden barriers in orthogonal space, relevant not only to the adaptive biasing force algorithm but also to other importance-sampling schemes. On the basis of the discussions in this paper, a number of good practices for improving the efficiency and reliability of the computed free-energy differences are proposed. PMID:25247823
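
    The bookkeeping behind the adaptive biasing force idea, binning the transition coordinate, accumulating a running average of the instantaneous force, and applying its negative as the bias, can be sketched as follows. This is a bare-bones one-dimensional illustration with hypothetical names; production ABF codes ramp the bias in gradually and handle multiple coordinates.

```python
import numpy as np

class ABFBias:
    """Adaptive biasing force bookkeeping along one collective variable
    xi in [xi_min, xi_max], split into n_bins bins."""
    def __init__(self, xi_min, xi_max, n_bins):
        self.edges = np.linspace(xi_min, xi_max, n_bins + 1)
        self.force_sum = np.zeros(n_bins)
        self.counts = np.zeros(n_bins, dtype=int)

    def update_and_bias(self, xi, instantaneous_force):
        """Record the force sample in xi's bin and return the biasing force
        (minus the running mean), which flattens the free-energy surface."""
        b = np.clip(np.searchsorted(self.edges, xi) - 1, 0, len(self.counts) - 1)
        self.force_sum[b] += instantaneous_force
        self.counts[b] += 1
        return -self.force_sum[b] / self.counts[b]

    def free_energy(self):
        """Integrate the mean force over the bins: dA/dxi = -<F_xi>."""
        mean_f = np.where(self.counts > 0,
                          self.force_sum / np.maximum(self.counts, 1), 0.0)
        return -np.cumsum(mean_f * np.diff(self.edges))
```

    In the long-time limit the applied bias cancels the average force along xi, so sampling proceeds on an effectively flat surface, and free_energy() recovers the potential of mean force from the stored averages.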

  13. Compact integration factor methods for complex domains and adaptive mesh refinement.

    PubMed

    Liu, Xinfeng; Nie, Qing

    2010-08-10

    The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through the examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has computational efficiency and stability properties similar to those of cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed.
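
    For orientation, the integration factor construction behind IIF and cIIF follows, as we recall it, from multiplying the semi-discrete system $u_t = Au + f(u)$ by $e^{-At}$ and integrating over one step:

```latex
u_{n+1} = e^{A\Delta t}\,u_n
        + \int_0^{\Delta t} e^{A(\Delta t-\tau)}\, f\big(u(t_n+\tau)\big)\,\mathrm{d}\tau,
```

    and approximating the integral by the trapezoidal rule gives a second-order scheme that is implicit only in the local reaction term:

```latex
u_{n+1} = e^{A\Delta t}\Big(u_n + \tfrac{\Delta t}{2}\, f(u_n)\Big)
        + \tfrac{\Delta t}{2}\, f(u_{n+1}).
```

    The compact variant stores and applies the exponentials dimension by dimension rather than as one large matrix; that refinement is not shown here.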

  14. An adaptive multifluid interface-capturing method for compressible flow in complex geometries

    SciTech Connect

    Greenough, J.A.; Beckner, V.; Pember, R.B.; Crutchfield, W.Y.; Bell, J.B.; Colella, P.

    1995-04-01

    We present a numerical method for solving the multifluid equations of gas dynamics using an operator-split second-order Godunov method for flow in complex geometries in two and three dimensions. The multifluid system treats the fluid components as thermodynamically distinct entities and correctly models fluids with different compressibilities. This treatment allows a general equation-of-state (EOS) specification and the method is implemented so that the EOS references are minimized. The current method is complementary to volume-of-fluid (VOF) methods in the sense that a VOF representation is used, but no interface reconstruction is performed. The Godunov integrator captures the interface during the solution process. The basic multifluid integrator is coupled to a Cartesian grid algorithm that also uses a VOF representation of the fluid-body interface. This representation of the fluid-body interface allows the algorithm to easily accommodate arbitrarily complex geometries. The resulting single grid multifluid-Cartesian grid integration scheme is coupled to a local adaptive mesh refinement algorithm that dynamically refines selected regions of the computational grid to achieve a desired level of accuracy. The overall method is fully conservative with respect to the total mixture. The method will be used for a simple nozzle problem in two-dimensional axisymmetric coordinates.

  15. Compact integration factor methods for complex domains and adaptive mesh refinement

    PubMed Central

    Liu, Xinfeng; Nie, Qing

    2010-01-01

    The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through the examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has computational efficiency and stability properties similar to those of cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed. PMID:20543883

  16. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method, which smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
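
    A minimal stand-in for the robust leveling step described above: fit a trend by iteratively reweighted least squares with Tukey's biweight, so that features and outliers are downweighted, then subtract the trend. The paper estimates a smooth local-regression trend; the global plane used here is a simplifying assumption, as are the names and constants.

```python
import numpy as np

def robust_level(image, n_iter=5, c=4.685):
    """Remove a robustly fitted plane from a 2-d image. Tukey's biweight
    downweights features and outliers so they barely bend the trend."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([np.ones(image.size), xx.ravel(), yy.ravel()])
    z = image.ravel().astype(float)
    w = np.ones_like(z)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], sw * z, rcond=None)
        r = z - A @ coef
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        u = np.minimum(np.abs(r) / (c * scale), 1.0)
        w = (1.0 - u ** 2) ** 2            # Tukey biweight weights
    return (z - A @ coef).reshape(ny, nx)
```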

  17. A Wavelet-Based ECG Delineation Method: Adaptation to an Experimental Electrograms with Manifested Global Ischemia.

    PubMed

    Hejč, Jakub; Vítek, Martin; Ronzhina, Marina; Nováková, Marie; Kolářová, Jana

    2015-09-01

    We present a novel wavelet-based ECG delineation method with robust classification of the P wave and T wave. The work is aimed at adapting the method to long-term experimental electrograms (EGs) measured on isolated rabbit hearts and at evaluating the effect of global ischemia in experimental EGs on delineation performance. The algorithm was tested on a set of 263 rabbit EGs with established reference points and on human signals from the standard Common Standards for Quantitative Electrocardiography database (CSEDB). On CSEDB, the standard deviation (SD) of the measured errors satisfies the given criteria at each point, and the results are comparable to other published works. In rabbit signals, our QRS detector reached a sensitivity of 99.87% and a positive predictivity of 99.89%, despite the overlap of spectral components of the QRS complex, P wave, and power line noise. The algorithm shows great performance in suppressing J-point elevation and reached low overall error in both QRS onset (SD = 2.8 ms) and QRS offset (SD = 4.3 ms) delineation. The T wave offset is detected with acceptable error (SD = 12.9 ms) and a sensitivity of nearly 99%. The variance of the errors during global ischemia remains relatively stable; however, more failures in detecting the T wave and P wave occur. Due to differences in spectral and timing characteristics, the parameters of the rabbit-based algorithm have to be highly adaptable and set more precisely than for human ECG signals to reach acceptable performance. PMID:26577367

  18. An Efficient Adaptive Window Size Selection Method for Improving Spectrogram Visualization

    PubMed Central

    Khan, Omar Usman

    2016-01-01

    Short Time Fourier Transform (STFT) is an important technique for the time-frequency analysis of a time-varying signal. The basic approach behind it involves the application of a Fast Fourier Transform (FFT) to a signal multiplied with an appropriate window function of fixed resolution. The selection of an appropriate window size is difficult when no background information about the input signal is known. In this paper, a novel empirical model is proposed that adaptively adjusts the window size for a narrow-band signal using a spectrum sensing technique. For wide-band signals, where a fixed time-frequency resolution is undesirable, the approach adopts the constant Q transform (CQT). Unlike the STFT, the CQT provides a varying time-frequency resolution, resulting in high spectral resolution at low frequencies and high temporal resolution at high frequencies. In this paper, a simple but effective switching framework is provided between the STFT and the CQT. The proposed method also allows for the dynamic construction of a filter bank according to user-defined parameters, which helps in reducing redundant entries in the filter bank. Results obtained from the proposed method not only improve the spectrogram visualization but also reduce the computation cost; the method selects the appropriate window length in 87.71% of cases.
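
    The switching idea can be outlined as follows: estimate the occupied bandwidth from the global spectrum, derive the STFT window length from it for narrow-band signals, and defer to a constant-Q analysis otherwise. The bandwidth threshold and window-sizing rule below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from scipy.signal import stft

def occupied_bandwidth(x, fs, frac=0.99):
    """Width of the band holding `frac` of the signal's spectral energy."""
    p = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    c = np.cumsum(p) / np.sum(p)
    lo = f[np.searchsorted(c, (1 - frac) / 2)]
    hi = f[min(np.searchsorted(c, 1 - (1 - frac) / 2), len(f) - 1)]
    return hi - lo

def adaptive_spectrogram(x, fs, narrowband_frac=0.05):
    """Narrow-band: size the STFT window from the measured bandwidth
    (longer window, finer frequency resolution). Wide-band: signal that
    a constant-Q transform should be used instead of a fixed STFT."""
    bw = occupied_bandwidth(x, fs)
    if bw < narrowband_frac * fs:
        nperseg = int(min(len(x), max(64, 4 * fs / max(bw, 1e-9))))
        return stft(x, fs=fs, nperseg=nperseg)   # (f, t, Zxx)
    return None   # caller should switch to a CQT here
```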

  19. An Adaptive Fast Multipole Boundary Element Method for Poisson-Boltzmann Electrostatics

    SciTech Connect

    Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, Jonathan

    2009-01-01

    The numerical solution of the Poisson-Boltzmann (PB) equation is a useful but computationally demanding tool for studying electrostatic solvation effects in chemical and biomolecular systems. Recently, we described a boundary integral equation-based PB solver accelerated by a new version of the fast multipole method (FMM). The overall algorithm shows order-N complexity in both computational cost and memory usage. Here, we present an updated version of the solver that uses an adaptive FMM for accelerating the convolution-type matrix-vector multiplications. The adaptive algorithm, compared to our previous non-adaptive one, not only significantly improves the overall memory usage but also markedly speeds up the calculation because of improved load balancing between the local- and far-field calculations. We have also implemented a node-patch discretization scheme that reduces the number of unknowns by a factor of 2 relative to the constant element method without sacrificing accuracy. As a result of these improvements, the new solver makes PB calculations truly feasible for large-scale biomolecular systems such as a 30S ribosome molecule, even on a typical 2008 desktop computer.

  20. An Efficient Adaptive Window Size Selection Method for Improving Spectrogram Visualization.

    PubMed

    Nisar, Shibli; Khan, Omar Usman; Tariq, Muhammad

    2016-01-01

    Short Time Fourier Transform (STFT) is an important technique for the time-frequency analysis of a time-varying signal. The basic approach behind it involves the application of a Fast Fourier Transform (FFT) to a signal multiplied with an appropriate window function of fixed resolution. The selection of an appropriate window size is difficult when no background information about the input signal is known. In this paper, a novel empirical model is proposed that adaptively adjusts the window size for a narrow-band signal using a spectrum sensing technique. For wide-band signals, where a fixed time-frequency resolution is undesirable, the approach adopts the constant Q transform (CQT). Unlike the STFT, the CQT provides a varying time-frequency resolution, resulting in high spectral resolution at low frequencies and high temporal resolution at high frequencies. In this paper, a simple but effective switching framework is provided between the STFT and the CQT. The proposed method also allows for the dynamic construction of a filter bank according to user-defined parameters, which helps in reducing redundant entries in the filter bank. Results obtained from the proposed method not only improve the spectrogram visualization but also reduce the computation cost; the method selects the appropriate window length in 87.71% of cases. PMID:27642291