Bremer, P. -T.
2014-08-26
ADAPT is a topological analysis code that computes local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software applies more generally to all threshold-based feature definitions.
Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.
2015-01-01
Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focusing stimulus sampling on estimating a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes fewer) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that were previously exclusive to FC methods. PMID:26300798
An adaptive threshold method for improving astrometry of space debris CCD images
NASA Astrophysics Data System (ADS)
Sun, Rong-yu; Zhao, Chang-yin
2014-06-01
Optical surveys are a primary technique for observing space debris, and precisely measuring the positions of space debris is of great importance. Due to several factors, e.g. the angle of the object relative to the observer and the shape and attitude of the object, the observed characteristics of low-Earth-orbit space debris vary distinctly. In optical CCD images of observed objects, the size and brightness vary, so it is difficult to choose the threshold for centroid measurement and precise astrometry. Traditionally the threshold is set empirically and held constant during data reduction, which is clearly unsuitable for space debris. Here we offer a method to determine the threshold. Our method assumes that the PSF (point spread function) is Gaussian and estimates the signal flux by a direct two-dimensional Gaussian fit; a cubic spline interpolation is then performed to divide each initial pixel into several sub-pixels; finally, the threshold is determined from the estimated signal flux, and the sub-pixels above the threshold are used to estimate the centroid. A trial observation of the fast-spinning satellite Ajisai was made, and CCD frames were obtained to test our algorithm. The calibration precision for various thresholds was obtained by comparing the observed equatorial positions with reference positions derived from the satellite's precise ephemeris. The results indicate that our method reduces the total measurement error and works effectively in improving the centering precision of space debris images.
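The flux-based threshold and centroiding step can be sketched as follows. This is a simplified numpy illustration only: it omits the spline sub-pixel interpolation, and the Gaussian "fit" is approximated by the background-subtracted peak, so it is not the authors' pipeline; all parameter values are illustrative.

```python
import numpy as np

def gaussian2d(shape, x0, y0, amp, sigma):
    """Circular 2-D Gaussian PSF evaluated on a pixel grid."""
    y, x = np.indices(shape)
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))

def threshold_centroid(image, frac=0.2):
    """Intensity-weighted centroid over pixels above a flux-based threshold.
    The threshold is a fraction of the background-subtracted peak, standing
    in for the flux estimate a full 2-D Gaussian fit would provide."""
    signal = image - np.median(image)          # median as background estimate
    mask = signal > frac * signal.max()
    y, x = np.indices(image.shape)
    w = signal[mask]
    return (x[mask] * w).sum() / w.sum(), (y[mask] * w).sum() / w.sum()

# Synthetic debris image: a point source at (12.3, 8.7) on a noisy background
rng = np.random.default_rng(0)
img = gaussian2d((24, 24), 12.3, 8.7, amp=100.0, sigma=1.5)
img += rng.normal(5.0, 0.5, img.shape)
cx, cy = threshold_centroid(img)
```

Restricting the weighted sum to above-threshold pixels keeps faint background pixels from dragging the centroid, which is the motivation for choosing the threshold carefully.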
Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding
NASA Astrophysics Data System (ADS)
Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry
2014-07-01
The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This results in spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions are considered for thresholding the forecast covariance and gain matrices: the hard, soft, lasso, and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding (petroleum reservoir) cases with different levels of heterogeneity/nonlinearity. Besides adaptive thresholding, standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and that it should be applied judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization on the underlying benchmarks.
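The thresholding functions named above have standard closed forms. A minimal numpy sketch, applied elementwise to a sample covariance from a deliberately small ensemble (in this elementwise setting the lasso rule coincides with soft thresholding, so only hard, soft, and SCAD are shown; the threshold value 0.1 is illustrative):

```python
import numpy as np

def hard(x, t):
    """Hard thresholding: zero entries with magnitude below t."""
    return np.where(np.abs(x) > t, x, 0.0)

def soft(x, t):
    """Soft thresholding (the elementwise lasso rule)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def scad(x, t, a=3.7):
    """Smoothly Clipped Absolute Deviation: soft near t, identity past a*t."""
    ax = np.abs(x)
    return np.where(ax <= 2 * t, soft(x, t),
           np.where(ax <= a * t,
                    ((a - 1) * x - np.sign(x) * a * t) / (a - 2),
                    x))

# Threshold a noisy sample covariance from a small ensemble, keeping
# the diagonal (variances) untouched.
rng = np.random.default_rng(1)
ensemble = rng.normal(size=(10, 5))      # 10 members, 5 state variables
cov = np.cov(ensemble, rowvar=False)
cov_thr = scad(cov, 0.1)
np.fill_diagonal(cov_thr, np.diag(cov))
```

SCAD's appeal, reflected in the paper's findings, is that it suppresses small (likely spurious) entries like soft thresholding while leaving large entries unbiased.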
2013-01-01
A comparative study of various segmentation methods applied to digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity, and other parameters are calculated for the following adaptive threshold segmentation methods: the Niblack, Sauvola, White, Bernsen, Yasuda, and Palumbo methods. The methods are applied to three types of images constructed by extracting the brown colour information from artificial images synthesized from counterpart experimentally captured images. This paper demonstrates the usefulness of the microscopic image synthesis method for evaluating and comparing image processing results. A thorough analysis of a broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution, and (3) the 'brown component' extracted from RGB allows the selection of method-image pairs for which a method is most efficient under various criteria, e.g. accuracy and precision in area detection or accuracy in the number of objects detected. The comparison shows that the White, Bernsen, and Sauvola methods give better results than the remaining methods for all types of monochromatic images. Taken overall, the three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942, and 0.9944, respectively. However, the best results are achieved for the monochromatic image whose intensity represents the brown colour map constructed by the colour deconvolution algorithm. The specificity for the Bernsen and White methods is 1, and the sensitivities are 0.74 for White and 0.91 for Bernsen, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, the Sauvola method selected objects are segmented without
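Of the methods compared, Sauvola's local threshold has a simple closed form, T = m·(1 + k·(s/R − 1)). A numpy/scipy sketch on a toy image of dark nuclei under uneven illumination (the window size, k, and R here are common defaults, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_threshold(image, window=15, k=0.5, R=128.0):
    """Per-pixel Sauvola threshold T = m * (1 + k * (s / R - 1)),
    with m, s the local mean and standard deviation in a square window."""
    img = image.astype(float)
    mean = uniform_filter(img, window)
    sq_mean = uniform_filter(img ** 2, window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return mean * (1.0 + k * (std / R - 1.0))

# Dark "nuclei" on a brighter, unevenly illuminated background:
y, x = np.indices((64, 64))
img = 180.0 + 40.0 * x / 63.0               # illumination gradient
img[20:30, 20:30] = 60.0                    # one immunopositive nucleus
objects = img < sauvola_threshold(img)      # pixels darker than local threshold
```

Because the threshold follows the local mean, the illumination gradient does not produce false detections the way a single global threshold would.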
Improved visual background extractor using an adaptive distance threshold
NASA Astrophysics Data System (ADS)
Han, Guang; Wang, Jinkuan; Cai, Xi
2014-11-01
Camouflage is a challenging issue in moving object detection. Even the recent and advanced background subtraction technique, visual background extractor (ViBe), cannot effectively deal with it. To better handle camouflage according to the perception characteristics of the human visual system (HVS) in terms of minimum change of intensity under a certain background illumination, we propose an improved ViBe method using an adaptive distance threshold, named IViBe for short. Different from the original ViBe using a fixed distance threshold for background matching, our approach adaptively sets a distance threshold for each background sample based on its intensity. Through analyzing the performance of the HVS in discriminating intensity changes, we determine a reasonable ratio between the intensity of a background sample and its corresponding distance threshold. We also analyze the impacts of our adaptive threshold together with an update mechanism on detection results. Experimental results demonstrate that our method outperforms ViBe even when the foreground and background share similar intensities. Furthermore, in a scenario where foreground objects are motionless for several frames, our IViBe not only reduces the initial false negatives, but also suppresses the diffusion of misclassification caused by those false negatives serving as erroneous background seeds, and hence shows an improved performance compared to ViBe.
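The core change relative to ViBe, a per-sample distance threshold tied to sample intensity, can be sketched as follows. The ratio and floor values here are purely illustrative; the paper derives its ratio from human-visual-system contrast discrimination.

```python
import numpy as np

# Ratio between a background sample's intensity and its distance threshold,
# plus a floor so dark samples keep a usable threshold. Illustrative values.
RATIO, MIN_T, MIN_MATCHES = 0.1, 8.0, 2

def match_count(pixel, samples):
    """IViBe-style matching: each background sample gets its own distance
    threshold proportional to its intensity, instead of ViBe's fixed radius."""
    thresholds = np.maximum(RATIO * samples, MIN_T)
    return int(np.count_nonzero(np.abs(samples - pixel) < thresholds))

def is_background(pixel, samples):
    return match_count(pixel, samples) >= MIN_MATCHES

rng = np.random.default_rng(2)
bg_model = rng.normal(200.0, 3.0, 20)          # sample set of a bright pixel
camouflaged = is_background(205.0, bg_model)   # small change: still background
moving = is_background(150.0, bg_model)        # large change: foreground
```

For bright backgrounds the adaptive radius widens, absorbing illumination-scaled noise, while a genuinely different foreground intensity still falls outside every sample's radius.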
An Adaptive Threshold in Mammalian Neocortical Evolution
Kalinka, Alex T.; Tomancak, Pavel; Huttner, Wieland B.
2014-01-01
Expansion of the neocortex is a hallmark of human evolution. However, determining which adaptive mechanisms facilitated its expansion remains an open question. Here we show, using the gyrencephaly index (GI) and other physiological and life-history data for 102 mammalian species, that gyrencephaly is an ancestral mammalian trait. We find that variation in GI does not evolve linearly across species, but that mammals constitute two principal groups above and below a GI threshold value of 1.5, approximately equal to 10^9 neurons, which may be characterized by distinct constellations of physiological and life-history traits. By integrating data on neurogenic period, neuroepithelial founder pool size, cell-cycle length, progenitor-type abundances, and cortical neuron number into discrete mathematical models, we identify symmetric proliferative divisions of basal progenitors in the subventricular zone of the developing neocortex as evolutionarily necessary for generating a 14-fold increase in daily prenatal neuron production, traversal of the GI threshold, and thus establishment of two principal groups. We conclude that, despite considerable neuroanatomical differences, changes in the length of the neurogenic period alone, rather than any novel neurogenic progenitor lineage, are sufficient to explain differences in neuron number and neocortical size between species within the same principal group. PMID:25405475
Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding
Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard
2016-01-01
Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent from whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states i.e. decoding from different states is less state dependent in the adaptive threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information. PMID:27304526
Fault-tolerant adaptive FIR filters using variable detection threshold
NASA Astrophysics Data System (ADS)
Lin, L. K.; Redinbo, G. R.
1994-10-01
Adaptive filters are widely used in many digital signal processing applications, where the tap weights of the filters are adjusted by stochastic gradient search methods. Block adaptive filtering techniques, such as the block least mean square and block conjugate gradient algorithms, were developed to speed up convergence and improve tracking capability, two important factors in designing real-time adaptive filter systems. Even though algorithm-based fault tolerance can be used as a low-cost, high-level fault-tolerant technique to protect such systems from hardware failures with minimal hardware overhead, choosing a good detection threshold remains a challenging problem. First, these systems usually have only limited computational resources, i.e., concurrent error detection and correction is not feasible. Second, prior knowledge of the input data is very difficult to obtain in practical settings. We propose a checksum-based fault detection scheme using two-level variable detection thresholds that depend dynamically on past syndromes. Simulations show that the proposed scheme reduces the possibility of false alarms and has a high degree of fault coverage in adaptive filter systems.
An adaptive threshold detector and channel parameter estimator for deep space optical communications
NASA Technical Reports Server (NTRS)
Arabshahi, P.; Mukai, R.; Yan, T. -Y.
2001-01-01
This paper presents a method for optimal adaptive setting of pulse-position-modulation pulse detection thresholds, which minimizes the total probability of error for the dynamically fading optical free-space channel.
Adaptive thresholding for reliable topological inference in single subject fMRI analysis.
Gorgolewski, Krzysztof J; Storkey, Amos J; Bastin, Mark E; Pernet, Cyril R
2012-01-01
Single subject fMRI has proved to be a useful tool for mapping functional areas in clinical procedures such as tumor resection. Using fMRI data, clinicians assess the risk, then plan and execute such procedures based on thresholded statistical maps. However, because current thresholding methods were developed mainly in the context of cognitive neuroscience group studies, most single subject fMRI maps are thresholded manually to satisfy criteria specific to single subject analyses. Here, we propose a new adaptive thresholding method which combines Gamma-Gaussian mixture modeling with topological thresholding to improve cluster delineation. In a series of simulations we show that, by adapting to the signal and noise properties, the new method performs well both in terms of the total number of errors and in terms of the trade-off between false negative and false positive cluster error rates. Similarly, simulations show that adaptive thresholding performs better than fixed thresholding in terms of over- and underestimation of the true activation border (i.e., higher spatial accuracy). Finally, through simulations and a motor test-retest study on 10 volunteer subjects, we show that adaptive thresholding improves reliability, mainly by accounting for the global signal variance. This in turn increases the likelihood that the true activation pattern can be determined, offering an automatic yet flexible way to threshold single subject fMRI maps. PMID:22936908
Adaptive Thresholding Technique for Retinal Vessel Segmentation Based on GLCM-Energy Information
Mapayi, Temitope; Viriri, Serestina; Tapamo, Jules-Raymond
2015-01-01
Although retinal vessel segmentation has been extensively researched, a robust and time-efficient segmentation method is highly needed. This paper presents a local adaptive thresholding technique based on gray-level co-occurrence matrix (GLCM) energy information for retinal vessel segmentation. Different thresholds were computed using GLCM-energy information. An experimental evaluation on the DRIVE database using the grayscale intensity and the green channel of the retinal image demonstrates the high performance of the proposed local adaptive thresholding technique. Maximum average accuracy rates of 0.9511 and 0.9510, with maximum average sensitivity rates of 0.7650 and 0.7641, were achieved on the DRIVE and STARE databases, respectively. Compared with widely used previous techniques on these databases, the proposed adaptive thresholding technique is time efficient, with higher average sensitivity and accuracy rates and specificity in the same very good range. PMID:25802550
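The GLCM energy (angular second moment) that drives the threshold choice can be computed directly in numpy. This sketch uses only horizontally adjacent pairs and an illustrative quantization to 8 gray levels; it is a generic GLCM-energy computation, not the paper's exact feature pipeline.

```python
import numpy as np

def glcm_energy(patch, levels=8):
    """Energy (angular second moment) of a gray-level co-occurrence matrix
    built from horizontally adjacent pixel pairs."""
    q = (patch.astype(float) * levels / (patch.max() + 1e-9)).astype(int)
    q = np.clip(q, 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1.0
    p = glcm / glcm.sum()
    return float(np.sum(p ** 2))

# Homogeneous regions concentrate co-occurrence mass -> energy near 1;
# textured (vessel-like) regions spread it -> low energy, so a local
# energy map can inform the choice of threshold per region.
flat = np.full((16, 16), 100.0)
rng = np.random.default_rng(5)
textured = rng.integers(0, 256, (16, 16)).astype(float)
```

A production implementation would typically use a library routine for the co-occurrence matrix and accumulate energies over sliding windows.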
Methods for automatic trigger threshold adjustment
Welch, Benjamin J; Partridge, Michael E
2014-03-18
Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level, and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time based or counter based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
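The drift-compensation and qualification-width ideas can be sketched as a small state machine. The class name, median-based quiescent estimate, and parameter values are assumptions for illustration, not the patent's specification.

```python
import numpy as np

class DriftCompensatedTrigger:
    """Sketch: periodically re-center the trigger threshold on the measured
    quiescent level, and require the threshold to be exceeded on
    `qual_width` consecutive samples before starting a recording."""

    def __init__(self, offset, qual_width=3, recompute_every=1000):
        self.offset = offset              # trigger margin above quiescent level
        self.qual_width = qual_width      # qualification width counter limit
        self.recompute_every = recompute_every
        self.quiescent = 0.0
        self.threshold = offset
        self._count = 0                   # consecutive samples above threshold
        self._seen = 0

    def update_quiescent(self, recent_samples):
        """Re-measure the quiescent level (median resists outliers)."""
        self.quiescent = float(np.median(recent_samples))
        self.threshold = self.quiescent + self.offset

    def step(self, sample, recent_samples):
        """Process one sample; return True when a recording should start."""
        self._seen += 1
        if self._seen % self.recompute_every == 0:
            self.update_quiescent(recent_samples)
        self._count = self._count + 1 if sample > self.threshold else 0
        return self._count >= self.qual_width

trig = DriftCompensatedTrigger(offset=5.0, qual_width=2)
quiet = [0.1, -0.2, 0.0, 0.1]
fired_first = trig.step(6.0, quiet)    # one sample above threshold: no trigger
fired_second = trig.step(6.0, quiet)   # second consecutive sample: trigger
trig.update_quiescent([10.0] * 8)      # signal drifted up; threshold follows
```

The qualification counter is what suppresses single-sample noise spikes, while the periodic re-measurement keeps the threshold a fixed offset above the drifting baseline.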
Approach to nonparametric cooperative multiband segmentation with adaptive threshold.
Sebari, Imane; He, Dong-Chen
2009-07-10
We present a new nonparametric cooperative approach to multiband image segmentation. It is based on cooperation between region-growing segmentation and edge segmentation. This approach requires no input data other than the images to be processed. It uses a spectral homogeneity criterion whose threshold is determined automatically. The threshold is adaptive and varies depending on the objects to be segmented. Applying this new approach to very high resolution satellite imagery has yielded satisfactory results. The approach demonstrated its performance on images of varied complexity and was able to detect objects of great spatial and spectral heterogeneity. PMID:19593349
Methods for threshold determination in multiplexed assays
Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J
2014-06-24
Methods for determination of threshold values of signatures comprised in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established and a threshold for that signature is determined as a point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results together with a method for determination of a desired limit of detection of a signature in an assay are also described.
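The threshold-selection step described above can be sketched with an empirical survival function in place of a fitted probability density; the Gaussian negatives and 1% criterion are illustrative assumptions.

```python
import numpy as np

def threshold_for_fpr(negative_samples, criterion=0.01):
    """Pick the signature threshold where the empirical false-positive-rate
    curve of the negative (target-absent) samples crosses the criterion."""
    neg = np.sort(np.asarray(negative_samples, dtype=float))
    n = neg.size
    fpr = 1.0 - np.arange(1, n + 1) / n     # fraction of negatives above each value
    idx = np.searchsorted(-fpr, -criterion) # first index with fpr <= criterion
    return neg[min(idx, n - 1)]

rng = np.random.default_rng(3)
negatives = rng.normal(100.0, 10.0, 10_000)   # one signature's no-target signals
t = threshold_for_fpr(negatives, criterion=0.01)
# For Gaussian negatives this lands near mu + 2.33 * sigma
```

In a multiplexed assay this procedure would be repeated per signature, each with its own negative-sample distribution and false-positive criterion.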
Adaptive threshold harvesting and the suppression of transients.
Segura, Juan; Hilker, Frank M; Franco, Daniel
2016-04-21
Fluctuations in population size are in many cases undesirable, as they can induce outbreaks and extinctions or impede the optimal management of populations. We propose the strategy of adaptive threshold harvesting (ATH) to control fluctuations in population size. In this strategy, the population is harvested whenever population size has grown beyond a certain proportion in comparison to the previous generation. Taking such population increases into account, ATH intervenes also at smaller population sizes than the strategy of threshold harvesting. Moreover, ATH is the harvesting version of adaptive limiter control (ALC) that has recently been shown to stabilize population oscillations in both experiments and theoretical studies. We find that ATH has similar stabilization properties as ALC and thus offers itself as a harvesting alternative for the control of pests, exploitation of biological resources, or when restocking interventions required from ALC are unfeasible. We present numerical simulations of ATH to illustrate its performance in the presence of noise, lattice effect, and Allee effect. In addition, we propose an adjustment to both ATH and ALC that restricts interventions when control seems unnecessary, i.e. when population size is too small or too large, respectively. This adjustment cancels prolonged transients. PMID:26854876
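The ATH rule is simple enough to simulate directly. The sketch below uses a chaotic Ricker map and an illustrative growth proportion g; it reproduces the qualitative stabilization claim, not any specific result from the paper, and omits noise, lattice, and Allee effects.

```python
import numpy as np

def ricker(x, r=3.0):
    """Overcompensatory Ricker map; r = 3 gives chaotic fluctuations."""
    return x * np.exp(r * (1.0 - x))

def simulate(T=200, g=0.5, x0=0.5, control=True):
    """ATH sketch: whenever the population grows by more than a fraction g
    of the previous generation, harvest it back down to (1 + g) * previous."""
    x = np.empty(T)
    x[0] = x0
    for t in range(1, T):
        grown = ricker(x[t - 1])
        cap = (1.0 + g) * x[t - 1]
        x[t] = min(grown, cap) if control else grown
    return x

free = simulate(control=False)
ath = simulate(control=True)

def fluctuation_index(x):
    """Mean absolute per-step change relative to the mean population size."""
    return np.mean(np.abs(np.diff(x))) / np.mean(x)

fi_free = fluctuation_index(free)
fi_ath = fluctuation_index(ath)
```

Capping growth removes the large overshoots that the overcompensatory map would otherwise turn into crashes, which is why the controlled series fluctuates less.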
Research of adaptive threshold model and its application in iris tracking
NASA Astrophysics Data System (ADS)
Zhao, Qijie; Tu, Dawei; Wang, Rensan; Gao, Daming
2005-02-01
The relationship between the gray values of pixels and macro-information in the image is analyzed with methods from statistical mechanics. After simulation and curve fitting of the experimental data by statistical and regression methods, an adaptive threshold model relating average gray value to image threshold is proposed in terms of Boltzmann statistics. In addition, the image characteristics around the eye region and the states of the eyeball are analyzed; an algorithm is proposed to extract the eye feature and locate its position in the image, and a further algorithm finds the iris characteristic line and then locates the iris center. Finally, considering different head gestures, head positions, and eye-opening states, experiments were performed with the adaptive threshold model and the designed algorithms in an eye-gaze input human-computer interaction (HCI) system. The results show that the algorithms apply widely across these cases and that real-time iris tracking can be performed with the adaptive threshold model and algorithms.
Adaptive Algebraic Multigrid Methods
Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J
2004-04-09
Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.
Optimal thresholds for the estimation of area rain-rate moments by the threshold method
NASA Technical Reports Server (NTRS)
Short, David A.; Shimizu, Kunio; Kedem, Benjamin
1993-01-01
Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
Removal of ocular artifacts from EEG using adaptive thresholding of wavelet coefficients
NASA Astrophysics Data System (ADS)
Krishnaveni, V.; Jayaraman, S.; Anitha, L.; Ramadoss, K.
2006-12-01
Electroencephalogram (EEG) gives researchers a non-invasive way to record cerebral activity. It is a valuable tool that helps clinicians diagnose various neurological disorders and brain diseases. Blinking or moving the eyes produces large electrical potentials around the eyes, known as the electrooculogram. This non-cortical activity spreads across the scalp and contaminates EEG recordings; the contaminating potentials are called ocular artifacts (OAs). Rejecting contaminated trials causes substantial data loss, and restricting eye movements/blinks limits the possible experimental designs and may affect the cognitive processes under investigation. In this paper, a nonlinear time-scale adaptive denoising system based on a wavelet shrinkage scheme is used to remove OAs from EEG. The time-scale adaptive algorithm is based on Stein's unbiased risk estimate (SURE) and a soft-like thresholding function, and it searches for optimal thresholds using a gradient-based adaptive algorithm. Denoising EEG with the proposed algorithm yields better results in terms of ocular artifact reduction and retention of background EEG activity compared to non-adaptive thresholding methods and the JADE algorithm.
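For context, plain wavelet-shrinkage denoising looks like the sketch below: one Haar level with soft thresholding at the universal threshold. The paper's system instead adapts a soft-like threshold per time scale via SURE and gradient descent; this minimal numpy version only illustrates the shrinkage idea on a synthetic signal.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail band
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(d, t):
    """Soft thresholding of wavelet coefficients."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

rng = np.random.default_rng(4)
n = 1024
clean = np.sin(np.linspace(0.0, 4.0 * np.pi, n))   # stand-in for background EEG
noisy = clean + rng.normal(0.0, 0.3, n)
a, d = haar_dwt(noisy)
sigma = np.median(np.abs(d)) / 0.6745              # robust noise-level estimate
den = haar_idwt(a, soft(d, sigma * np.sqrt(2.0 * np.log(n))))
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_den = float(np.mean((den - clean) ** 2))
```

Shrinking the detail band suppresses broadband noise while the smooth signal, concentrated in the approximation band, survives largely intact.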
Adaptations to training at the individual anaerobic threshold.
Keith, S P; Jacobs, I; McLellan, T M
1992-01-01
The individual anaerobic threshold (Th(an)) is the highest metabolic rate at which blood lactate concentrations can be maintained at a steady-state during prolonged exercise. The purpose of this study was to test the hypothesis that training at the Th(an) would cause a greater change in indicators of training adaptation than would training "around" the Th(an). Three groups of subjects were evaluated before, and again after 4 and 8 weeks of training: a control group, a group which trained continuously for 30 min at the Th(an) intensity (SS), and a group (NSS) which divided the 30 min of training into 7.5-min blocks at intensities which alternated between being below the Th(an) [Th(an) -30% of the difference between Th(an) and maximal oxygen consumption (VO2max)] and above the Th(an) (Th(an) +30% of the difference between Th(an) and VO2max). The VO2max increased significantly from 4.06 to 4.27 l.min-1 in SS and from 3.89 to 4.06 l.min-1 in NSS. The power output (W) at Th(an) increased from 70.5 to 79.8% VO2max in SS and from 71.1 to 80.7% VO2max in NSS. The magnitude of change in VO2max, W at Th(an), % VO2max at Th(an) and in exercise time to exhaustion at the pretraining Th(an) was similar in both trained groups. Vastus lateralis citrate synthase and 3-hydroxyacyl-CoA-dehydrogenase activities increased to the same extent in both trained groups. While all of these training-induced adaptations were statistically significant (P < 0.05), there were no significant changes in any of these variables for the control subjects.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:1425631
Accelerated adaptive integration method.
Kaus, Joseph W; Arrar, Mehrnoosh; McCammon, J Andrew
2014-05-15
Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083
NASA Astrophysics Data System (ADS)
Nejadmalayeri, Alireza
The current work develops a wavelet-based adaptive variable-fidelity approach that integrates Wavelet-based Direct Numerical Simulation (WDNS), Coherent Vortex Simulations (CVS), and Stochastic Coherent Adaptive Large Eddy Simulations (SCALES). The proposed methodology employs spatially and temporally varying wavelet thresholding combined with hierarchical wavelet-based turbulence modeling. The transition between the WDNS, CVS, and SCALES regimes is achieved through two-way physics-based feedback between the modeled SGS dissipation (or another dynamically important physical quantity) and the spatial resolution. The feedback is based on spatio-temporal variation of the wavelet threshold, where the thresholding level is adjusted on the fly depending on the deviation of the local significant SGS dissipation from the user-prescribed level. This strategy overcomes a major limitation of all previously existing wavelet-based multi-resolution schemes: the global thresholding criterion, which does not fully utilize the spatial/temporal intermittency of the turbulent flow. Hence, the concept of physics-based spatially variable thresholding in the context of wavelet-based numerical techniques for solving PDEs is established. The procedure tracks the wavelet thresholding factor within a Lagrangian frame by exploiting a Lagrangian path-line diffusive averaging approach based on either linear averaging along characteristics or direct solution of the evolution equation. This innovative technique constitutes a framework for continuously variable-fidelity, wavelet-based, space/time/model-form adaptive multiscale methodology. The methodology has been tested and has provided very promising results on a benchmark with a time-varying user-prescribed level of SGS dissipation. In addition, a long-term effort to develop a novel parallel adaptive wavelet collocation method for the numerical solution of PDEs has been completed during the course of the current work.
Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo
Fontaine, Bertrand; Peña, José Luis; Brette, Romain
2014-01-01
Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397
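The core mechanism described above, a threshold that relaxes toward the membrane potential on a short timescale so that slow fluctuations are filtered out, can be sketched numerically. The functional form and all parameter values below are illustrative assumptions, not the model fitted in the study.

```python
import numpy as np

def adaptive_threshold(v, dt=0.1, tau=5.0, theta0=-50.0, alpha=0.5):
    """Euler integration of d(theta)/dt = (theta0 + alpha*v - theta)/tau.

    The threshold theta tracks the membrane potential v on the short
    timescale tau (ms): slow depolarizations are followed (and thus
    filtered out), while fast ones outrun the threshold and can evoke
    spikes. All parameter values are illustrative, not fitted.
    """
    theta = np.empty_like(v, dtype=float)
    theta[0] = theta0 + alpha * v[0]
    for i in range(1, len(v)):
        target = theta0 + alpha * v[i]
        theta[i] = theta[i - 1] + dt * (target - theta[i - 1]) / tau
    return theta
```

Running this on a slow voltage ramp versus a sudden step shows the effect: the threshold tracks the ramp closely, whereas the step leaves a much larger gap between voltage and threshold, which is why only coincident, millisecond-scale inputs are effective.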
Research of adaptive threshold edge detection algorithm based on statistics canny operator
NASA Astrophysics Data System (ADS)
Xu, Jian; Wang, Huaisuo; Huang, Hua
2015-12-01
The traditional Canny operator cannot obtain an optimal threshold across different scenes. On this basis, an improved Canny edge detection algorithm with an adaptive threshold is proposed. Experiments on test images indicate that the improved algorithm selects a reasonable threshold and achieves better accuracy and precision in edge detection.
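The abstract does not give the statistical rule used, so as a purely illustrative stand-in, a common scene-adaptive heuristic for choosing Canny's hysteresis thresholds brackets the image's median intensity:

```python
import numpy as np

def auto_canny_thresholds(gray, sigma=0.33):
    """Scene-adaptive low/high thresholds for the Canny operator.

    The widely used median heuristic: place the two hysteresis
    thresholds around the image's median intensity, so each scene
    gets its own operating point instead of one fixed pair.
    """
    m = float(np.median(gray))
    low = int(round(max(0.0, (1.0 - sigma) * m)))
    high = int(round(min(255.0, (1.0 + sigma) * m)))
    return low, high

# The thresholds move with the scene statistics:
dark = np.full((8, 8), 40, dtype=np.uint8)
bright = np.full((8, 8), 200, dtype=np.uint8)
```

The returned pair would then be passed to an edge detector such as OpenCV's Canny; the point of the sketch is only that the operating point adapts per scene.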
A New Adaptive Image Denoising Method Based on Neighboring Coefficients
NASA Astrophysics Data System (ADS)
Biswas, Mantosh; Om, Hari
2016-03-01
Many good techniques have been discussed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, they do not yield good image quality, since their thresholds cannot modify and remove many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
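The NeighShrink baseline referenced above has a well-documented shrinkage rule, sketched below (this is the baseline, not the paper's proposed method): each coefficient is scaled by beta = max(0, 1 - lambda^2/S^2), where S^2 is the energy of its neighborhood window and lambda is the universal threshold sigma*sqrt(2 ln n).

```python
import numpy as np

def neighshrink(coeffs, sigma, win=3):
    """NeighShrink shrinkage on a 2-D array of subband coefficients.

    Each coefficient is scaled by beta = max(0, 1 - lambda^2 / S^2),
    where S^2 is the sum of squared coefficients in a win x win
    neighborhood and lambda^2 = 2 * sigma^2 * ln(n) is the squared
    universal threshold. Isolated small coefficients are zeroed,
    while coefficients in energetic neighborhoods survive.
    """
    n = coeffs.size
    lam2 = 2.0 * sigma**2 * np.log(n)
    pad = win // 2
    padded = np.pad(coeffs, pad, mode="reflect")
    out = np.empty_like(coeffs, dtype=float)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            s2 = float(np.sum(padded[i:i + win, j:j + win] ** 2))
            beta = max(0.0, 1.0 - lam2 / s2) if s2 > 0 else 0.0
            out[i, j] = beta * coeffs[i, j]
    return out
```

A large coefficient surrounded by silence keeps most of its value, while coefficients in all-noise regions are suppressed entirely, which is exactly the neighborhood effect the paper's critique targets.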
A New Adaptive Image Denoising Method
NASA Astrophysics Data System (ADS)
Biswas, Mantosh; Om, Hari
2016-03-01
In this paper, a new adaptive image denoising method is proposed that follows the soft-thresholding technique. A new threshold function is also proposed, determined from various combinations of the noise level, noise-free signal variance, subband size, and decomposition level. It is simple and adaptive, as it depends on data-driven parameter estimation in each subband. The state-of-the-art denoising methods, viz. VisuShrink, SureShrink, BayesShrink, WIDNTF, and IDTVWT, are not able to modify the coefficients efficiently enough to provide good image quality. Our method removes noise from the noisy image significantly and provides better visual quality.
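As context for data-driven subband thresholds of this kind, the standard soft-thresholding operator and the BayesShrink rule T = sigma^2 / sigma_x can be sketched as follows (the paper's own threshold function combines more factors and is not reproduced here):

```python
import numpy as np

def soft_threshold(w, t):
    """Soft thresholding: shrink each coefficient toward zero by t,
    zeroing out everything with |w| <= t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def bayes_threshold(subband, sigma):
    """Data-driven subband threshold T = sigma^2 / sigma_x (BayesShrink).

    sigma_x estimates the noise-free signal deviation from the observed
    subband variance; if the subband is judged to be all noise, the
    threshold is effectively infinite and everything is removed.
    """
    var_y = float(np.mean(subband ** 2))
    sigma_x = np.sqrt(max(var_y - sigma**2, 0.0))
    return sigma**2 / sigma_x if sigma_x > 0 else np.inf
```

The threshold is small in signal-dominated subbands (preserving detail) and unbounded in noise-only subbands, which is what makes per-subband data-driven thresholds adaptive.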
Spatially adaptive Bayesian wavelet thresholding for speckle removal in medical ultrasound images
NASA Astrophysics Data System (ADS)
Hou, Jianhua; Xiong, Chengyi; Chen, Shaoping; He, Xiang
2007-12-01
In this paper, a novel spatially adaptive wavelet thresholding method based on the Bayesian maximum a posteriori (MAP) criterion is proposed for speckle removal in medical ultrasound (US) images. The method first applies a logarithmic transform to the original speckled ultrasound image, followed by a redundant wavelet transform. It models the wavelet coefficients of speckle with a Rayleigh distribution and the coefficients due to signal with a Laplacian distribution. A Bayesian estimator with an analytical formula is derived from MAP estimation, and the resulting formula is shown to be equivalent to soft thresholding, which makes the algorithm very simple. In order to exploit the correlation among wavelet coefficients, the parameters of the Laplacian model are assumed to be spatially correlated and can be computed from the coefficients in a neighboring window, making the method spatially adaptive in the wavelet domain. Theoretical analysis and simulation results show that the proposed method can effectively suppress speckle noise in medical US images while preserving important signal features and details as much as possible.
Application of new advanced CNN structure with adaptive thresholds to color edge detection
NASA Astrophysics Data System (ADS)
Deng, Shaojiang; Tian, Yuan; Hu, Xipeng; Wei, Pengcheng; Qin, Mingfu
2012-04-01
Color edge detection is much more efficient than grayscale detection when edges exist at the boundary between regions of different colors with no change in intensity. This paper presents adaptive templates capable of detecting various color and intensity changes in a color image. To avoid the multilayer structures proposed in the literature, the basic CNN structure is modified: a matrix C, which carries the change information of pixels, replaces the control parts in the basic CNN equation. This modification is necessary because a multilayer structure faces the challenge of representing the intrinsic relationships among the primary layers. Additionally, in order to enhance the accuracy of edge detection, an adaptive detection threshold is employed; the adaptive thresholds serve as alterable criteria in designing the matrix C. The proposed system not only avoids the problems engendered by multiple layers but also exploits the full information of the pixels themselves. Experimental results show that the proposed method is efficient.
Issac, Ashish; Partha Sarathi, M; Dutta, Malay Kishore
2015-11-01
Glaucoma is an optic neuropathy and one of the main causes of permanent blindness worldwide. This paper presents an automatic image-processing-based method for detecting glaucoma from digital fundus images. In the proposed work, discriminatory parameters of glaucoma, such as the cup-to-disc ratio (CDR), neuro-retinal rim (NRR) area, and blood vessels in different regions of the optic disc, are used as features and fed as inputs to learning algorithms for glaucoma diagnosis. These features, which change discriminatively with the occurrence of glaucoma, are strategically used for training the classifiers to improve the accuracy of identification. The segmentation of the optic disc and cup is based on an adaptive threshold of the pixel intensities lying in the optic nerve head region. Unlike existing methods, the proposed algorithm uses an adaptive threshold derived from local features of the fundus image for segmentation of the optic cup and disc, making it invariant to image quality and noise content, which may lead to wider acceptability. The experimental results indicate that such features are more significant than the statistical or textural features considered in existing works. The proposed work achieves an accuracy of 94.11% with a sensitivity of 100%. A comparison with existing methods indicates that the proposed approach improves the accuracy of classifying glaucoma from a digital fundus image, which may be considered clinically significant. PMID:26321351
Graded-threshold parametric response maps: towards a strategy for adaptive dose painting
NASA Astrophysics Data System (ADS)
Lausch, A.; Jensen, N.; Chen, J.; Lee, T. Y.; Lock, M.; Wong, E.
2014-03-01
Purpose: To modify the single-threshold parametric response map (ST-PRM) method for predicting treatment outcomes in order to facilitate its use for guidance of adaptive dose painting in intensity-modulated radiotherapy. Methods: Multiple graded thresholds were used to extend the ST-PRM method (Nat. Med. 2009;15(5):572-576) such that the full functional-change distribution within tumours could be represented with respect to multiple confidence-interval estimates for functional changes in similar healthy tissue. The ST-PRM and graded-threshold PRM (GT-PRM) methods were applied to functional imaging scans of 5 patients treated for hepatocellular carcinoma. Pre- and post-radiotherapy arterial blood flow (ABF) maps were generated from CT perfusion scans of each patient. ABF maps were rigidly registered based on aligning tumour centres of mass. ST-PRM and GT-PRM analyses were then performed on overlapping tumour regions within the registered ABF maps. Main findings: The ST-PRMs contained many disconnected clusters of voxels classified as having a significant change in function. While this may be useful to predict treatment response, it may pose challenges for identifying boost volumes or for informing dose-painting-by-numbers strategies. The GT-PRMs included all of the same information as ST-PRMs but also visualized the full tumour functional-change distribution. Heterogeneous clusters in the ST-PRMs often became more connected in the GT-PRMs by voxels with similar functional changes. Conclusions: GT-PRMs provided additional information which helped to visualize relationships between significant functional changes identified by ST-PRMs. This may enhance ST-PRM utility for guiding adaptive dose painting.
Bauer, Robert; Gharabaghi, Alireza
2015-01-01
Restorative brain-computer interfaces (BCI) are increasingly used to provide feedback of neuronal states in a bid to normalize pathological brain activity and achieve behavioral gains. However, patients and healthy subjects alike often show a large variability, or even inability, of brain self-regulation for BCI control, known as BCI illiteracy. Although current co-adaptive algorithms are powerful for assistive BCIs, their inherent class switching clashes with the operant conditioning goal of restorative BCIs. Moreover, due to the treatment rationale, the classifier of restorative BCIs usually has a constrained feature space, thus limiting the possibility of classifier adaptation. In this context, we applied a Bayesian model of neurofeedback and reinforcement learning for different threshold selection strategies to study the impact of threshold adaptation of a linear classifier on optimizing restorative BCIs. For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. We could thus show that threshold adaptation can improve reinforcement learning, particularly in cases of BCI illiteracy. Finally, on the basis of information-theory, we provided an explanation for the achieved benefits of adaptive threshold setting. PMID:25729347
Wavelet based ECG compression with adaptive thresholding and efficient coding.
Alshamali, A
2010-01-01
This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and an efficient coding of their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested using several records taken from the MIT-BIH arrhythmia database. Simulation results show that the proposed technique outperforms previously published schemes. PMID:20608811
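A minimal sketch of the threshold-and-position-coding idea behind such compressors, using a plain significance bitmap as a stand-in for the paper's Huffman stage (the threshold optimization itself is not reproduced, and the 32-bit source assumption is illustrative):

```python
import numpy as np

def compress(coeffs, threshold, qbits=8):
    """Keep only significant wavelet coefficients and code their positions.

    Positions are stored as a 1-bit-per-coefficient significance map
    (a simple stand-in for Huffman-coded positions); the surviving
    values are assumed to be quantized to qbits bits each. Returns the
    map, the kept values, and the resulting compression ratio against
    an assumed 32-bit-per-sample source.
    """
    mask = np.abs(coeffs) >= threshold
    values = coeffs[mask]
    bits = coeffs.size + values.size * qbits   # bitmap + quantized values
    original_bits = coeffs.size * 32
    return mask, values, original_bits / bits
```

Because most wavelet coefficients of an ECG record fall below the threshold, the bitmap-plus-values cost is a small fraction of the original, which is where the compression ratio comes from.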
A threshold selection method based on edge preserving
NASA Astrophysics Data System (ADS)
Lou, Liantang; Dan, Wei; Chen, Jiaqi
2015-12-01
A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so as to preserve image edges in the segmentation. The shortcoming of Otsu's method based on gray-level histograms is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, while the edge energy function of an image is computed by discretizing that integral. An optimal threshold method that maximizes the edge energy function is given. Several experimental results are presented for comparison with Otsu's method.
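For reference, the Otsu baseline analyzed above chooses the threshold that maximizes the between-class variance of the gray-level histogram:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class
    variance of the gray-level histogram (the baseline whose
    edge-preservation shortcomings the paper analyzes)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```

On a cleanly bimodal image the threshold lands between the two modes; the paper's point is that this histogram-only criterion ignores where the edges actually lie.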
Motion Estimation Based on Mutual Information and Adaptive Multi-Scale Thresholding.
Xu, Rui; Taubman, David; Naman, Aous Thabit
2016-03-01
This paper proposes a new method of calculating a matching metric for motion estimation. The proposed method splits the information in the source images into multiple scale and orientation subbands, reduces the subband values to a binary representation via an adaptive thresholding algorithm, and uses mutual information to model the similarity of corresponding square windows in each image. A moving window strategy is applied to recover a dense estimated motion field whose properties are explored. The proposed matching metric is a sum of mutual information scores across space, scale, and orientation. This facilitates the exploitation of information diversity in the source images. Experimental comparisons are performed amongst several related approaches, revealing that the proposed matching metric is better able to exploit information diversity, generating more accurate motion fields. PMID:26742132
NASA Astrophysics Data System (ADS)
Bagwari, A.; Tomar, G. S.
2014-04-01
In cognitive radio networks, spectrum sensing is used to sense unused spectrum in an opportunistic manner. In this paper, a multiple-antenna-based energy detector utilizing an adaptive double threshold for spectrum sensing is proposed, which enhances detection performance and overcomes the sensing failure problem. The detection threshold is made adaptive to the fluctuation of the received signal power in each local detector of a cognitive radio (CR) user. Numerical results show that by using multiple antennas at the CRs, it is possible to significantly improve detection performance at very low signal-to-noise ratio (SNR). Further, the scheme is analyzed in conjunction with cooperative spectrum sensing (CSS), where CRs utilize selection combining of the decision statistics obtained by an adaptive double-threshold energy detector to make a binary decision on the presence or absence of a primary user. The decision of each CR is forwarded over error-free orthogonal channels to the fusion centre, which makes the final decision about a spectrum hole. It is further found that CSS with a multiple-antenna-based energy detector with adaptive double threshold improves detection performance by around 26.8% compared to the hierarchical-with-quantization method at -12 dB SNR, under the condition that a small number of sensing nodes are used in spectrum sensing.
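A simplified sketch of double-threshold energy detection (the nominal level and its scaling here are illustrative assumptions, not the paper's expressions): the test statistic is the average sample energy, two thresholds bracket a nominal level tied to the estimated noise power, and statistics falling between them yield no local decision, which is what a fusion centre then resolves.

```python
import numpy as np

def energy_detect(samples, noise_power, delta=0.2):
    """Double-threshold energy detector (simplified sketch).

    The test statistic is the average sample energy. Two thresholds
    bracket an illustrative nominal level proportional to the noise
    power; a statistic between them returns None ('no decision'),
    to be deferred to the fusion centre. Scaling the thresholds with
    the noise estimate makes the detector adaptive to received-power
    fluctuations.
    """
    stat = float(np.mean(np.abs(samples) ** 2))
    nominal = 2.0 * noise_power            # illustrative nominal level
    low, high = (1 - delta) * nominal, (1 + delta) * nominal
    if stat >= high:
        return 1        # primary user present
    if stat <= low:
        return 0        # spectrum hole
    return None         # ambiguous: defer decision
```

The None branch is the key design choice: instead of forcing an unreliable local decision (the sensing failure problem), the ambiguous region is handed to cooperative combining.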
Synergy of adaptive thresholds and multiple transmitters in free-space optical communication.
Louthain, James A; Schmidt, Jason D
2010-04-26
Laser propagation through extended turbulence causes severe beam spread and scintillation. Airborne laser communication systems require special considerations in size, complexity, power, and weight. Rather than using bulky, costly adaptive optics systems, we reduce the variability of the received signal by integrating a two-transmitter system with an adaptive-threshold receiver to average out the deleterious effects of turbulence. In contrast to adaptive optics approaches, systems employing multiple transmitters and adaptive thresholds exhibit performance improvements that are unaffected by turbulence strength. Simulations of this system with on-off keying (OOK) showed that reducing the scintillation variations with multiple transmitters improves the performance of low-frequency adaptive threshold estimators by 1-3 dB. The combination of multiple transmitters and adaptive thresholding provided at least a 10 dB gain over implementing only transmitter pointing and receiver tilt correction for all three high-Rytov-number scenarios. The scenario with a spherical-wave Rytov number R=0.20 enjoyed a 13 dB reduction in the required SNR for BERs between 10^-5 and 10^-3, consistent with the code gain metric. All five scenarios between 0.06 and 0.20 Rytov number improved to within 3 dB of the SNR of the lowest-Rytov-number scenario. PMID:20588740
Unipolar Terminal-Attractor Based Neural Associative Memory with Adaptive Threshold
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)
1996-01-01
A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner-product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
Unipolar terminal-attractor based neural associative memory with adaptive threshold
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)
1993-01-01
A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
Adapting to a changing environment: non-obvious thresholds in multi-scale systems.
Perryman, Clare; Wieczorek, Sebastian
2014-10-01
Many natural and technological systems fail to adapt to changing external conditions and move to a different state if the conditions vary too fast. Such 'non-adiabatic' processes are ubiquitous, but little understood. We identify these processes with a new nonlinear phenomenon-an intricate threshold where a forced system fails to adiabatically follow a changing stable state. In systems with multiple time scales, we derive existence conditions that show such thresholds to be generic, but non-obvious, meaning they cannot be captured by traditional stability theory. Rather, the phenomenon can be analysed using concepts from modern singular perturbation theory: folded singularities and canard trajectories, including composite canards. Thus, non-obvious thresholds should explain the failure to adapt to a changing environment in a wide range of multi-scale systems including: tipping points in the climate system, regime shifts in ecosystems, excitability in nerve cells, adaptation failure in regulatory genes and adiabatic switching in technology. PMID:25294963
Adaptive Threshold Neural Spike Detector Using Stationary Wavelet Transform in CMOS.
Yang, Yuning; Boling, C Sam; Kamboh, Awais M; Mason, Andrew J
2015-11-01
Spike detection is an essential first step in the analysis of neural recordings. Detection at the frontend eases the bandwidth requirement for wireless data transfer of multichannel recordings to extra-cranial processing units. In this work, a low power digital integrated spike detector based on the lifting stationary wavelet transform is presented and developed. By monitoring the standard deviation of wavelet coefficients, the proposed detector can adaptively set a threshold value online for each channel independently without requiring user intervention. A prototype 16-channel spike detector was designed and tested in an FPGA. The method enables spike detection with nearly 90% accuracy even when the signal-to-noise ratio is as low as 2. The design was mapped to 130 nm CMOS technology and shown to occupy 0.014 mm(2) of area and dissipate 1.7 μW of power per channel, making it suitable for implantable multichannel neural recording systems. PMID:25955990
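A software analogue of the per-channel adaptive threshold: estimate the spread of the wavelet coefficients robustly and scale it. The median-based estimator and the factor k = 4 below are common choices in the spike-detection literature, assumed here rather than taken from the chip's implementation (which monitors a running standard deviation in hardware).

```python
import numpy as np

def channel_threshold(coeffs, k=4.0):
    """Per-channel adaptive spike threshold from wavelet coefficients.

    The noise level is estimated robustly as median(|w|)/0.6745, so
    that the occasional large spike barely biases the estimate, and
    the threshold is k times that estimate. Each channel computes its
    own threshold, so no user intervention is required.
    """
    sigma = np.median(np.abs(coeffs)) / 0.6745
    return k * sigma
```

Because each channel derives its threshold from its own coefficient statistics, channels with different noise floors are handled independently, which is the property the on-chip detector provides online.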
Optimum threshold selection method of centroid computation for Gaussian spot
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2015-10-01
Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding before centroid computation is unavoidable, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Two optimum threshold selection methods are introduced: TmCoG (using m% of the maximum intensity of the spot as the threshold) and TkCoG (using μn + κσn as the threshold, where μn and σn are the mean and standard deviation of the background noise). First, their impact on the detection error under various SNR conditions is simulated to determine how to choose the value of κ or m; then the two methods are compared. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
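The two selection rules and the thresholded centre of gravity they feed can be sketched directly from the definitions above (the default m and k values are illustrative, not the paper's recommendations):

```python
import numpy as np

def thresholded_cog(img, threshold):
    """Centre of gravity of a spot after subtracting a threshold.

    Pixels at or below the threshold are discarded; the remainder is
    weighted by its intensity above the threshold, as in TmCoG/TkCoG.
    Assumes at least one pixel exceeds the threshold.
    """
    w = np.clip(img.astype(float) - threshold, 0.0, None)
    ys, xs = np.indices(img.shape)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

def tm_threshold(img, m=10.0):
    """TmCoG rule: threshold at m% of the spot's peak intensity."""
    return (m / 100.0) * float(img.max())

def tk_threshold(mu_n, sigma_n, k=3.0):
    """TkCoG rule: threshold at mu_n + k*sigma_n of the background noise."""
    return mu_n + k * sigma_n
```

On a noiseless symmetric Gaussian spot either rule recovers the true centre; the paper's simulations concern how the two rules degrade differently as SNR drops.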
Method For Model-Reference Adaptive Control
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1990-01-01
Relatively simple method of model-reference adaptive control (MRAC) developed from two prior classes of MRAC techniques: the signal-synthesis method and the parameter-adaptation method. Incorporated into a unified theory, which yields a more general adaptation scheme.
Olfactory Detection Thresholds and Adaptation in Adults with Autism Spectrum Condition
ERIC Educational Resources Information Center
Tavassoli, T.; Baron-Cohen, S.
2012-01-01
Sensory issues have been widely reported in Autism Spectrum Conditions (ASC). Since olfaction is one of the least investigated senses in ASC, the current studies explore olfactory detection thresholds and adaptation to olfactory stimuli in adults with ASC. 80 participants took part, 38 (18 females, 20 males) with ASC and 42 control participants…
Rosa, Thiago S.; Simões, Herbert G.; Rogero, Marcelo M.; Moraes, Milton R.; Denadai, Benedito S.; Arida, Ricardo M.; Andrade, Marília S.; Silva, Bruno M.
2016-01-01
Severe obesity affects metabolism with potential to influence the lactate and glycemic response to different exercise intensities in untrained and trained rats. Here we evaluated metabolic thresholds and maximal aerobic capacity in rats with severe obesity and lean counterparts at pre- and post-training. Zucker rats (obese: n = 10, lean: n = 10) were submitted to constant treadmill bouts, to determine the maximal lactate steady state, and an incremental treadmill test, to determine the lactate threshold, glycemic threshold and maximal velocity at pre and post 8 weeks of treadmill training. Velocities of the lactate threshold and glycemic threshold agreed with the maximal lactate steady state velocity on most comparisons. The maximal lactate steady state velocity occurred at higher percentage of the maximal velocity in Zucker rats at pre-training than the percentage commonly reported and used for training prescription for other rat strains (i.e., 60%) (obese = 78 ± 9% and lean = 68 ± 5%, P < 0.05 vs. 60%). The maximal lactate steady state velocity and maximal velocity were lower in the obese group at pre-training (P < 0.05 vs. lean), increased in both groups at post-training (P < 0.05 vs. pre), but were still lower in the obese group at post-training (P < 0.05 vs. lean). Training-induced increase in maximal lactate steady state, lactate threshold and glycemic threshold velocities was similar between groups (P > 0.05), whereas increase in maximal velocity was greater in the obese group (P < 0.05 vs. lean). In conclusion, lactate threshold, glycemic threshold and maximal lactate steady state occurred at similar exercise intensity in Zucker rats at pre- and post-training. Severe obesity shifted metabolic thresholds to higher exercise intensity at pre-training, but did not attenuate submaximal and maximal aerobic training adaptations. PMID:27148063
A Threshold-Adaptive Reputation System on Mobile Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Tsai, Hsiao-Chien; Lo, Nai-Wei; Wu, Tzong-Chen
In recent years, the huge potential benefits of novel applications in mobile ad hoc networks (MANET) have been discussed extensively. However, without robust security mechanisms and systems to provide a safety shell over the MANET infrastructure, MANET applications are vulnerable and easily compromised by malicious attackers. In order to detect misbehaved message routing and identify malicious attackers in a MANET, schemes based on the reputation concept have shown their advantages in this area in terms of good scalability and a simple threshold-based detection strategy. We observed that previous reputation schemes generally use predefined thresholds which do not take into account the effect of behavior dynamics between nodes over a period of time. In this paper, we propose a Threshold-Adaptive Reputation System (TARS) to overcome the shortcomings of the static threshold strategy and improve overall MANET performance under misbehaved routing attack. A fuzzy-based inference engine is introduced to evaluate the trustworthiness of a node's one-hop neighbors. Malicious nodes whose trust values are lower than the adaptive threshold are detected and filtered out by their honest neighbors during the trustworthiness evaluation process. Network simulation results show that TARS outperforms the other compared schemes under security attacks in most cases and at the same time reduces the decrease in total packet delivery ratio by 67% in comparison with a MANET without a reputation system.
Milne, R.B.
1995-12-01
This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.
Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.
Vikhe, P S; Thool, V R
2016-04-01
Detection of masses in mammograms for early diagnosis of breast cancer is a significant task in the reduction of the mortality rate. However, in some cases, screening for masses is difficult for radiologists due to variation in contrast, fuzzy edges, and noisy mammograms. Masses and micro-calcifications are the distinctive signs for diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images, achieving 90.9 and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives per Image (FP/I) on two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique improves diagnosis in early breast cancer detection. PMID:26811073
Low-Threshold Active Teaching Methods for Mathematic Instruction
ERIC Educational Resources Information Center
Marotta, Sebastian M.; Hargis, Jace
2011-01-01
In this article, we present a large list of low-threshold active teaching methods categorized so the instructor can efficiently access and target the deployment of conceptually based lessons. The categories include teaching strategies for lecture on large and small class sizes; student action individually, in pairs, and groups; games; interaction…
Pattern Recognition With Adaptive-Thresholds For Sleep Spindle In High Density EEG Signals
Gemignani, Jessica; Agrimi, Jacopo; Cheli, Enrico; Gemignani, Angelo; Laurino, Marco; Allegrini, Paolo; Landi, Alberto; Menicucci, Danilo
2016-01-01
Sleep spindles are electroencephalographic oscillations peculiar to non-REM sleep, related to the neuronal mechanisms underlying sleep restoration and learning consolidation. Owing to their distinctive morphology, sleep spindles can be visually recognized and detected, even though this approach can lead to significant mis-detections. For this reason, much effort has been put into developing a reliable algorithm for automatic spindle detection, and a number of methods, based on different techniques, have been tested via visual validation. This work aims at improving current pattern recognition procedures for sleep spindle detection by taking into account their physiological sources of variability. We provide a method, as a synthesis of the current state of the art, that by improving dynamic threshold adaptation is able to follow modifications of spindle characteristics as a function of sleep depth and inter-subject variability. The algorithm has been applied to physiological data recorded by high-density EEG in order to perform a validation based on visual inspection and on evaluation of expected results from normal night sleep in healthy subjects. PMID:26736332
Future temperature in southwest Asia projected to exceed a threshold for human adaptability
NASA Astrophysics Data System (ADS)
Pal, Jeremy S.; Eltahir, Elfatih A. B.
2016-02-01
A human body may be able to adapt to extremes of dry-bulb temperature (commonly referred to simply as temperature) through perspiration and associated evaporative cooling, provided that the wet-bulb temperature (a combined measure of temperature and humidity, or degree of 'mugginess') remains below a threshold of 35 °C. This threshold defines a limit of survivability for a fit human under well-ventilated outdoor conditions and is lower for most people. Using an ensemble of high-resolution regional climate model simulations, we project that extremes of wet-bulb temperature in the region around the Arabian Gulf are likely to approach and exceed this critical threshold under the business-as-usual scenario of future greenhouse gas concentrations. Our results expose a specific regional hotspot where climate change, in the absence of significant mitigation, is likely to severely impact human habitability in the future.
NASA Astrophysics Data System (ADS)
Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.
2015-10-01
We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
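The cosine-similarity test with a noise-adaptive threshold described above can be sketched in a few lines (a minimal illustration; the quadratic threshold rule and the constant `k` are placeholders for the paper's analytical noise-dependent distribution):

```python
import numpy as np

def cosine_similarity(reference, observed):
    """Cosine similarity between a reference shape and an observed shape."""
    r = np.asarray(reference, dtype=float)
    o = np.asarray(observed, dtype=float)
    return float(np.dot(r, o) / (np.linalg.norm(r) * np.linalg.norm(o)))

def is_anomalous(reference, observed, noise_level, k=3.0):
    """Flag a shape whose similarity falls below an adaptive threshold.

    The threshold is lowered as the noise level grows; this linear form
    is a placeholder, not the paper's analytical expression.
    """
    threshold = 1.0 - k * noise_level ** 2
    return cosine_similarity(reference, observed) < threshold
```

With a low noise level, an identical shape passes while an orthogonal (maximally dissimilar) shape is flagged.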
The effects of adaptation and masking on incremental thresholds for contrast.
Ross, J; Speed, H D; Morgan, M J
1993-10-01
Using a temporal two-alternative forced-choice procedure, we measured thresholds for detecting increments in contrast of a 2 c/deg vertical grating at a wide range of pedestal contrasts, (1) before and after adapting to a grating of the same orientation and spatial frequency, and (2) in the presence of superimposed masks that varied in either orientation or spatial frequency. The adapting grating and all masks were of fixed 40% contrast. The results show that prior adaptation and concurrent masking have qualitatively similar effects on incremental thresholds; both raise threshold at low pedestal contrasts and leave them unaltered at higher contrasts. But masks have greater effects than adaptors, the effect of an orthogonal mask, or one two octaves higher in spatial frequency, being about the same as a parallel adaptor of the same spatial frequency as the pedestal grating. The results are explained by a model of Ross and Speed [(1991) Proceedings of the Royal Society of London B, 246, 61-69] that assumes that masks and adaptors both reposition the transducer function of contrast sensitive mechanisms and that masks, but not adaptors, also stimulate the detecting mechanism. PMID:8266646
A Wavelet Thresholding Method to Reduce Ultrasound Artifacts
Tay, Peter C.; Acton, Scott T.; Hossack, John A.
2010-01-01
Artifacts due to enhancement, reverberation, and multi-path reflection are commonly encountered in medical ultrasound imaging. These artifacts can adversely affect an automated image quantification algorithm or interfere with a physician’s assessment of a radiological image. This paper proposes a soft wavelet thresholding method to replace regions adversely affected by these artifacts with the texture due to the underlying tissue(s), which were originally obscured. Our proposed method soft thresholds the wavelet coefficients of affected regions to estimate the reflectivity values caused by these artifacts. By subtracting the estimated reflectivity values of the artifacts from the original reflectivity values, estimates of artifact reduced reflectivity values are attained. The improvements of our proposed method are substantiated by an evaluation of Field II simulated, in vivo mouse and human heart B mode images. PMID:20934848
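The soft-thresholding rule at the core of the method above can be illustrated as follows (a generic NumPy sketch of the standard shrinkage operator; in the actual pipeline it is applied to wavelet coefficients of the affected regions, e.g., obtained with a wavelet library such as PyWavelets):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold coefficients: shrink magnitudes toward zero by t,
    zeroing anything whose magnitude is below t."""
    c = np.asarray(coeffs, dtype=float)
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```

Small coefficients (mostly noise or artifact energy) vanish, while large ones are uniformly shrunk.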
Manakov, N. L. Marmo, S. I.; Sviridov, S. A.
2009-04-15
The two-photon above-threshold ionization of atoms is calculated using numerical algorithms of the Padé approximation in the model-potential method with Coulomb asymptotics. The total and differential cross sections of the above-threshold ionization of helium and alkali metal atoms by elliptically polarized radiation are presented. The dependence of the angular distribution of photoelectrons on the sign of the ellipticity of the radiation (the elliptic dichroism phenomenon) is analyzed in the above-threshold frequency range.
Impact of sub and supra-threshold adaptation currents in networks of spiking neurons.
Colliaux, David; Yger, Pierre; Kaneko, Kunihiko
2015-12-01
Neuronal adaptation is the intrinsic capacity of the brain to change, by various mechanisms, its dynamical responses as a function of the context. Such a phenomenon, widely observed in vivo and in vitro, is known to be crucial in homeostatic regulation of activity and in gain control. The effects of adaptation have already been studied at the single-cell level, resulting from either voltage- or calcium-gated channels, both activated by spiking activity and modulating the dynamical responses of neurons. In this study, by disentangling those effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional role of those two distinct components of adaptation on neuronal activity at various scales, starting from single-cell responses up to recurrent network dynamics, under stationary or non-stationary stimulation. The effects of slow currents on collective dynamics, such as modulation of population oscillations and reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks. PMID:26400658
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain so as to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
NASA Astrophysics Data System (ADS)
Elmi, Omid; Javad Tourian, Mohammad; Sneeuw, Nico
2015-04-01
River discharge monitoring is critical for, e.g., water resource planning, climate change studies, and hazard monitoring. River discharge has been measured at in situ gauges for more than a century. Despite various attempts, some basins are still ungauged. Moreover, a reduction in the number of worldwide gauging stations increases the interest in employing remote sensing data for river discharge monitoring. Finding an empirical relationship between simultaneous in situ measurements of discharge and river widths derived from satellite imagery has been introduced as a straightforward remote sensing alternative. Classifying water and land in an image is the primary task in defining the river width. Water appears dark in the near-infrared and infrared bands of satellite images; as a result, low values in the histogram usually represent the water content. Accordingly, applying a threshold to the image histogram and separating it into two classes is one of the most efficient techniques for building a water mask. Despite its simple definition, finding the appropriate threshold value in each image is the most critical issue. The threshold varies due to changes in the water level, river extent, atmosphere, sunlight radiation, and onboard calibration of the satellite over time. These complexities in water body classification are the main source of error in river width estimation. In this study, we look for the most efficient adaptive threshold algorithm to estimate the river discharge. To do this, all cloud-free MODIS images coincident with the in situ measurements are collected. Next, a number of automatic threshold selection techniques are employed to generate different dynamic water masks. Then, for each of them, a separate empirical relationship between river widths and discharge measurements is determined. Through these empirical relationships, we estimate river discharge at the gauge and then validate our results against in situ measurements and also
Variable threshold method for ECG R-peak detection.
Kew, Hsein-Ping; Jeong, Do-Un
2011-10-01
In this paper, a wearable belt-type ECG electrode, worn around the chest to measure the ECG in real time, is produced in order to minimize the inconvenience of wearing it. The ECG signal is detected using a potential-measurement instrumentation system. The measured ECG signal is transmitted via an ultra-low-power wireless data communication unit to a personal computer using a Zigbee-compatible wireless sensor node. ECG signals carry a great deal of clinical information for a cardiologist, especially through R-peak detection. R-peak detection generally uses a fixed threshold value; errors arise in peak detection when the baseline changes due to motion artifacts or when the signal amplitude changes. A preprocessing stage comprising differentiation and a Hilbert transform is used as the signal preprocessing algorithm. Thereafter, a variable threshold method is used to detect the R-peak, which is more accurate and efficient than the fixed-threshold method. R-peak detection using the MIT-BIH databases and long-term real-time ECG is performed in this research in order to evaluate the performance. PMID:21695499
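A variable-threshold peak detector of the kind described above can be sketched as follows (illustrative only; the fraction `frac`, the smoothing weights, and the refractory period are assumed parameters, not those of the paper):

```python
import numpy as np

def detect_r_peaks(signal, fs, frac=0.6, refractory=0.2):
    """Detect R peaks with a threshold that tracks a running peak estimate.

    fs is the sampling rate in Hz; refractory is the minimum peak
    spacing in seconds.
    """
    x = np.asarray(signal, dtype=float)
    peaks = []
    running_peak = np.max(x[: int(fs)])   # initialise from the first second
    threshold = frac * running_peak
    last = -np.inf
    for i in range(1, len(x) - 1):
        # local maximum above the current threshold
        if x[i] > threshold and x[i] >= x[i - 1] and x[i] > x[i + 1]:
            if (i - last) / fs > refractory:  # skip peaks in refractory period
                peaks.append(i)
                last = i
                # adapt the threshold to the evolving peak amplitude
                running_peak = 0.8 * running_peak + 0.2 * x[i]
                threshold = frac * running_peak
    return peaks
```

Because the threshold follows the running peak estimate, it tracks baseline and amplitude drift instead of staying fixed.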
QUEST - A Bayesian adaptive psychometric method
NASA Technical Reports Server (NTRS)
Watson, A. B.; Pelli, D. G.
1983-01-01
An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
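The core of such a Bayesian adaptive procedure (maintain a posterior over candidate log thresholds, test at its mode, update after each response) can be sketched as follows (a simplified illustration; the Weibull parameters are arbitrary choices, not those of Watson and Pelli):

```python
import numpy as np

def p_correct(log_intensity, log_threshold, beta=3.5, gamma=0.5, delta=0.01):
    """Weibull psychometric function on a log-intensity axis
    (illustrative slope beta, guess rate gamma, lapse rate delta)."""
    p = 1.0 - (1.0 - gamma) * np.exp(-(10.0 ** (beta * (log_intensity - log_threshold))))
    return delta * gamma + (1.0 - delta) * p

def quest_update(posterior, grid, tested_log_intensity, correct):
    """Multiply the threshold posterior by the likelihood of one response."""
    like = p_correct(tested_log_intensity, grid)
    if not correct:
        like = 1.0 - like
    posterior = posterior * like
    return posterior / posterior.sum()

# Flat prior over candidate log thresholds; each trial is placed at the mode.
grid = np.linspace(-2.0, 1.0, 301)
posterior = np.ones_like(grid) / grid.size
next_level = grid[int(np.argmax(posterior))]
```

A correct response shifts posterior mass toward lower thresholds, an incorrect one toward higher thresholds, so the tested level homes in on the observer's threshold.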
NASA Astrophysics Data System (ADS)
Amanda, A. R.; Widita, R.
2016-03-01
The aim of this research is to compare several lung image segmentation methods based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR)). The methods compared were connected threshold, neighborhood connected, and threshold level set segmentation, applied to images of the lungs. These three methods require one important parameter, i.e., the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed, and the results were compared using the performance evaluation parameters computed in MATLAB. A segmentation method is considered to be of good quality if it has the smallest MSE value and the highest PSNR. The results show that four sample images match the criteria under connected threshold, while one sample favors threshold level set segmentation. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
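The two evaluation parameters used above are straightforward to compute (a generic sketch; `peak` defaults to 255 for 8-bit images):

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images of equal shape."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer agreement."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

A lower MSE always corresponds to a higher PSNR for a fixed peak value, which is why the two criteria agree on the best method.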
Methods of scaling threshold color difference using printed samples
NASA Astrophysics Data System (ADS)
Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier
2012-01-01
A series of printed samples on a semi-gloss paper substrate, with magnitudes around the threshold color difference, was prepared for scaling the visual color difference and evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the different color differences were scaled against these Z-scores. The resulting visual color differences were obtained and checked with the STRESS factor. The results indicated that only the scales changed, while the relative scales between pairs in the data were preserved.
Impact of slow K(+) currents on spike generation can be described by an adaptive threshold model.
Kobayashi, Ryota; Kitano, Katsunori
2016-06-01
A neuron that is stimulated by rectangular current injections initially responds with a high firing rate, followed by a decrease in the firing rate. This phenomenon is called spike-frequency adaptation and is usually mediated by slow K(+) currents, such as the M-type K(+) current (I_M) or the Ca(2+)-activated K(+) current (I_AHP). It is not clear how these detailed biophysical mechanisms regulate spike generation in a cortical neuron. In this study, we investigated the impact of slow K(+) currents on the spike generation mechanism by reducing a detailed conductance-based neuron model. We showed that the detailed model can be reduced to a multi-timescale adaptive threshold model, and derived formulae that describe the relationship between the slow K(+) current parameters and the reduced model parameters. Our analysis of the reduced model suggests that slow K(+) currents have a differential effect on the noise tolerance in neural coding. PMID:27085337
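A multi-timescale adaptive threshold of the kind the model reduces to can be sketched as follows (illustrative parameter values, not those derived in the paper; after each spike the threshold jumps by components that decay with their own time constants):

```python
import numpy as np

def mat_spike_times(v, dt, omega=1.0, alphas=(0.5, 0.2), taus=(0.01, 0.2)):
    """Spike times of a multi-timescale adaptive threshold (MAT) detector.

    The threshold rests at omega; every past spike at time ts adds
    alpha_j * exp(-(t - ts) / tau_j) for each timescale j.
    """
    spikes = []
    for i, vt in enumerate(v):
        t = i * dt
        theta = omega + sum(
            a * np.exp(-(t - ts) / tau)
            for ts in spikes
            for a, tau in zip(alphas, taus)
        )
        if vt >= theta:
            spikes.append(t)
    return spikes
```

Driven by a constant supra-threshold input, the detector fires immediately and then at increasing intervals, reproducing spike-frequency adaptation without any explicit K(+) conductance.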
Fine tuning of the threshold of T cell selection by the Nck adapters.
Roy, Edwige; Togbe, Dieudonnée; Holdorf, Amy; Trubetskoy, Dmitry; Nabti, Sabrina; Küblbeck, Günter; Schmitt, Sabine; Kopp-Schneider, Annette; Leithäuser, Frank; Möller, Peter; Bladt, Friedhelm; Hämmerling, Günter J; Arnold, Bernd; Pawson, Tony; Tafuri, Anna
2010-12-15
Thymic selection shapes the T cell repertoire to ensure maximal antigenic coverage against pathogens while preventing autoimmunity. Recognition of self-peptides in the context of peptide-MHC complexes by the TCR is central to this process, which remains partially understood at the molecular level. In this study we provide genetic evidence that the Nck adapter proteins are essential for thymic selection. In vivo Nck deletion resulted in a reduction of the thymic cellularity, defective positive selection of low-avidity T cells, and impaired deletion of thymocytes engaged by low-potency stimuli. Nck-deficient thymocytes were characterized by reduced ERK activation, particularly pronounced in mature single positive thymocytes. Taken together, our findings identify a crucial role for the Nck adapters in enhancing TCR signal strength, thereby fine-tuning the threshold of thymocyte selection and shaping the preimmune T cell repertoire. PMID:21078909
Simple method for model reference adaptive control
NASA Technical Reports Server (NTRS)
Seraji, H.
1989-01-01
A simple method is presented for combined signal synthesis and parameter adaptation within the framework of model reference adaptive control theory. The results are obtained using a simple derivation based on an improved Liapunov function.
Detection of fiducial points in ECG waves using iteration based adaptive thresholds.
Wonjune Kang; Kyunguen Byun; Hong-Goo Kang
2015-08-01
This paper presents an algorithm for the detection of fiducial points in electrocardiogram (ECG) waves using iteration based adaptive thresholds. By setting the search range of the processing frame to the interval between two consecutive R peaks, the peaks of T and P waves are used as reference salient points (RSPs) to detect the fiducial points. The RSPs are selected from candidates whose slope variation factors are larger than iteratively defined adaptive thresholds. Considering the fact that the number of RSPs varies depending on whether the ECG wave is normal or not, the proposed algorithm proceeds with a different methodology for determining fiducial points based on the number of detected RSPs. Testing was performed using twelve records from the MIT-BIH Arrhythmia Database that were manually marked for comparison with the estimated locations of the fiducial points. The means of absolute distances between the true locations and the points estimated by the algorithm are 12.2 ms and 7.9 ms for the starting points of P and Q waves, and 9.3 ms and 13.9 ms for the ending points of S and T waves. Since the computational complexity of the proposed algorithm is very low, it is feasible for use in mobile devices. PMID:26736854
Perthame, Benoît; Gauduchon, Mathias
2010-09-01
Deterministic population models for adaptive dynamics are derived mathematically from individual-centred stochastic models in the limit of large populations. However, it is common that numerical simulations of the two models fit poorly and give rather different behaviours in terms of evolution speeds and branching patterns. Stochastic simulations involve an extinction phenomenon operating through demographic stochasticity when the number of individual 'units' is small. Focusing on the class of integro-differential adaptive models, we include a similar notion in the deterministic formulations, a survival threshold, which allows phenotypical traits in the population to vanish when represented by few 'individuals'. Based on numerical simulations, we show that the survival threshold changes the solution drastically: (i) the evolution speed is much slower, (ii) the branching patterns are reduced continuously and (iii) these patterns are comparable to those obtained with stochastic simulations. The rescaled models can also be analysed theoretically. One can recover the concentration phenomena on well-separated Dirac masses through the constrained Hamilton-Jacobi equation in the limit of small mutations and large observation times. PMID:19734200
A novel method for determining target detection thresholds
NASA Astrophysics Data System (ADS)
Grossman, S.
2015-05-01
Target detection is the act of isolating objects of interest from the surrounding clutter, generally using some form of test to assign objects to the found class. However, the method of determining the detection threshold is often overlooked, relying on manual determination through empirical observation or guesswork. The question remains: how does an analyst identify the detection threshold that will produce optimum results? This work proposes the concept of a target detection sweet spot, where the missed-detection probability curve crosses the false-detection curve; this represents the point at which missed detections are traded for false detections in order to effect positive or negative changes in the detection probability. ROC curves are used to characterize detection probabilities and false alarm rates based on empirically derived data. The work identifies the relationship between the empirically derived results and the first-moment statistic of the histogram of the pixel target value data, and then proposes a new method of applying the histogram results in an automated fashion to predict the target detection sweet spot at which to begin automated target detection.
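On a sampled threshold grid, the proposed sweet spot can be located as the point of minimum gap between the two error curves (a toy sketch with synthetic monotone curves, not the paper's empirical data):

```python
import numpy as np

def detection_sweet_spot(thresholds, p_miss, p_false):
    """Return the threshold where the missed-detection and false-detection
    curves cross (minimum absolute gap on the sampled grid)."""
    gap = np.abs(np.asarray(p_miss) - np.asarray(p_false))
    return float(thresholds[int(np.argmin(gap))])

# Toy example: misses rise and false detections fall as the threshold grows.
t = np.linspace(0.0, 1.0, 101)
p_miss = t ** 2
p_false = (1.0 - t) ** 2
sweet = detection_sweet_spot(t, p_miss, p_false)
```

For these symmetric toy curves the crossing, and hence the sweet spot, lies at 0.5.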
Jahangiri, Anila F.; Gerling, Gregory J.
2011-01-01
The Leaky Integrate and Fire (LIF) model of a neuron is one of the best-known models of a spiking neuron. A current limitation of the LIF model is that it may not accurately reproduce the dynamics of an action potential. Recent studies suggest that an LIF model coupled with a multi-timescale adaptive threshold (MAT) may increase its accuracy in predicting spikes in cortical neurons. We propose a mechanotransduction process coupled with an LIF model with a multi-timescale adaptive threshold to model slowly adapting type I (SAI) mechanoreceptors in the monkey's glabrous skin. In order to test the performance of the model, the spike timings predicted by this MAT model are compared with neural data. We also test a fixed-threshold variant of the model by comparing its outcome with the neural data. Initial results indicate that the MAT model predicts spike timings better than a fixed-threshold LIF model alone. PMID:21814636
Karmali, Faisal; Chaudhuri, Shomesh E; Yi, Yongwoo; Merfeld, Daniel M
2016-03-01
When measuring thresholds, careful selection of stimulus amplitude can increase efficiency by increasing the precision of psychometric fit parameters (e.g., decreasing the fit parameter error bars). To find efficient adaptive algorithms for psychometric threshold ("sigma") estimation, we combined analytic approaches, Monte Carlo simulations, and human experiments for a one-interval, binary forced-choice, direction-recognition task. To our knowledge, this is the first time analytic results have been combined and compared with either simulation or human results. Human performance was consistent with theory and not significantly different from simulation predictions. Our analytic approach provides a bound on efficiency, which we compared against the efficiency of standard staircase algorithms, a modified staircase algorithm with asymmetric step sizes, and a maximum likelihood estimation (MLE) procedure. Simulation results suggest that optimal efficiency at determining threshold is provided by the MLE procedure targeting a fraction correct level of 0.92, an asymmetric 4-down, 1-up staircase targeting between 0.86 and 0.92 or a standard 6-down, 1-up staircase. Psychometric test efficiency, computed by comparing simulation and analytic results, was between 41 and 58% for 50 trials for these three algorithms, reaching up to 84% for 200 trials. These approaches were 13-21% more efficient than the commonly used 3-down, 1-up symmetric staircase. We also applied recent advances to reduce accuracy errors using a bias-reduced fitting approach. Taken together, the results lend confidence that the assumptions underlying each approach are reasonable and that human threshold forced-choice decision making is modeled well by detection theory models and mimics simulations based on detection theory models. PMID:26645306
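A standard n-down/1-up staircase of the kind compared above can be simulated in a few lines (a sketch with an assumed logistic observer; a 3-down/1-up variant converges near the 79.4%-correct level):

```python
import numpy as np

def staircase(p_correct_fn, n_down=3, start=1.0, step=0.1, n_trials=400, seed=0):
    """Simulate an n-down/1-up staircase: lower the stimulus after n_down
    consecutive correct responses, raise it after any error."""
    rng = np.random.default_rng(seed)
    level, streak, levels = start, 0, []
    for _ in range(n_trials):
        levels.append(level)
        if rng.random() < p_correct_fn(level):
            streak += 1
            if streak == n_down:   # n_down correct in a row: harder
                level -= step
                streak = 0
        else:                      # any error: easier
            level += step
            streak = 0
    return levels
```

The stimulus level performs a random walk around the intensity where the probability of n_down consecutive correct responses equals one half.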
Adaptive windowed range-constrained Otsu method using local information
NASA Astrophysics Data System (ADS)
Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie
2016-01-01
An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reason why traditional thresholding methods do not perform well in the segmentation of complicated images is analyzed, and the influences of global and local thresholding on image segmentation are compared. Second, two methods that adaptively change the size of the local window according to local information are proposed, and their characteristics are analyzed: the information on the number of edge pixels in the local window of the binarized variance image is employed to adaptively change the local window size. Finally, the superiority of the proposed method over other methods, such as the range-constrained Otsu, the active contour model, the double Otsu, Bradley's method, and distance-regularized level set evolution, is demonstrated. The experiments validate that the proposed method keeps more details and achieves a much more satisfying area overlap measure compared with the other conventional methods.
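The global Otsu criterion that the windowed variant applies locally can be sketched as follows (a plain NumPy implementation of the classical between-class-variance maximization, not the authors' adaptive-window code):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Global Otsu threshold: pick the gray level that maximises the
    between-class variance of the histogram."""
    hist, edges = np.histogram(np.asarray(image).ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)              # probability of class 0 up to bin k
    mu = np.cumsum(p * centers)    # cumulative mean up to bin k
    mu_t = mu[-1]                  # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # empty/full classes contribute nothing
    return float(centers[int(np.argmax(sigma_b))])
```

On a clearly bimodal image the selected threshold falls between the two modes; the adaptive windowed variant repeats this computation inside local windows whose size is driven by local information.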
An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures
Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Duan, Fuqing; Pan, Yutong
2016-01-01
Cerebral vessel segmentation is essential and helpful for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of the variable vessel shape and the high complexity of vessel geometry. This study proposes a new active contour model (ACM), implemented by the level-set method, for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. A global threshold, representing the lower gray boundary of the target object obtained by maximum intensity projection (MIP), is defined in the first region term and is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards boundaries with high gradients. The penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method not only achieves a better Dice Similarity Coefficient than the global-threshold-based method and the localized hybrid level-set method, but is also able to extract whole cerebral vessel trees, including the thin vessels. PMID:27597878
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A
2016-01-01
With attacks prevalent in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. Specifically, we employ entangled states prepared from m-bonacci sequences to detect eavesdropping, and we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
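The classical threshold scheme that Lagrange interpolation underlies, Shamir's (t, n) secret sharing over a prime field, can be sketched as follows (the quantum layer, OAM pump, and m-bonacci coding are omitted entirely; the modulus is an illustrative choice):

```python
import random

P = 2 ** 61 - 1  # a Mersenne prime used as the field modulus (illustrative)

def make_shares(secret, t, n, rng=random.Random(0)):
    """Split `secret` into n shares; any t of them reconstruct it.

    The secret is the constant term of a random degree-(t-1) polynomial,
    and each share is a point (x, f(x)) on that polynomial.
    """
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret as the Lagrange interpolation of f at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Any t of the n shares determine the polynomial, and hence the secret; fewer than t shares reveal nothing about it.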
Wavelet-based acoustic emission detection method with adaptive thresholding
NASA Astrophysics Data System (ADS)
Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl
2000-06-01
Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.
NASA Astrophysics Data System (ADS)
Fan, C.; Zheng, B.; Myint, S. W.; Aggarwal, R.
2014-12-01
Cropping intensity is the number of crops grown per year per unit area of cropland. Since the 1970s, the Phoenix Active Management Area (AMA) has undergone rapid urbanization, mostly via land conversions from prime agricultural lands to urban land use. Agricultural intensification, or multiple cropping, has been observed globally as a positive response to growing land pressure resulting from urbanization and an exploding population. Nevertheless, increased cropping intensity has associated local, regional, and global environmental outcomes such as degradation of water quality and soil fertility. Quantifying spatio-temporal patterns of cropping intensity can serve as a first step towards understanding these environmental problems and developing effective and sustainable cropping strategies. In this study, an adaptive threshold method was developed to measure the cropping intensity in the Phoenix AMA from 1995 to 2010 at five-year intervals. The method has several advantages in terms of (1) minimization of errors arising from missing data and noise; (2) ability to distinguish growing cycles from multiple small false peaks in a vegetation index time series; and (3) flexibility when dealing with temporal profiles with differing numbers of observations. The adaptive threshold approach measures the cropping intensity effectively, with overall accuracies higher than 97%. Results indicate a dramatic decline in the area of total croplands, single crops, and double crops. A small land conversion from single crops into double crops was observed from 1995 to 2000, whereas a reverse trend was observed from 2005 to 2010. Changes in cropping intensity can affect local water consumption. Therefore, joint investigation of cropping patterns and agricultural water use can provide implications for future water demand, which is an increasingly critical issue in this rapidly expanding desert city.
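The growing-cycle counting idea in this abstract can be sketched with a toy adaptive threshold: one threshold per time series, set from that series' own amplitude. The fraction 0.5 and the counting rule are illustrative assumptions, not the authors' exact criteria:

```python
def cropping_intensity(vi, frac=0.5):
    """Count growing cycles: contiguous runs where the vegetation-index
    series exceeds an adaptive threshold set at `frac` of its amplitude.
    Small noise peaks below the threshold are ignored."""
    lo, hi = min(vi), max(vi)
    thresh = lo + frac * (hi - lo)   # adaptive: derived per time series
    cycles, above = 0, False
    for v in vi:
        if v > thresh and not above:
            cycles += 1              # rising edge -> new growing cycle
            above = True
        elif v <= thresh:
            above = False
    return cycles
```

A double-crop year shows two excursions above the threshold, a single-crop year one.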
An adaptive unsupervised hyperspectral classification method based on Gaussian distribution
NASA Astrophysics Data System (ADS)
Yue, Jiang; Wu, Jing-wei; Zhang, Yi; Bai, Lian-fa
2014-11-01
In order to achieve adaptive unsupervised clustering with high precision, a method that uses Gaussian distributions to fit the inter-class similarity and the noise distribution is proposed in this paper; the automatic segmentation threshold is then determined from the fitting result. First, based on the similarity measure of the spectral curve, the method assumes that the target and the background both follow Gaussian distributions; the distribution characteristics are obtained by fitting the similarity measure of minimum related windows and center pixels with a Gaussian function, and the adaptive threshold is thereby obtained. Second, the pixel minimum related windows are used to merge adjacent similar pixels into picture-blocks, completing the dimensionality reduction and realizing the unsupervised classification. AVIRIS data and a set of hyperspectral data we acquired are used to evaluate the performance of the proposed method. Experimental results show that the proposed algorithm is not only adaptive but also outperforms K-MEANS and ISODATA in classification accuracy, edge recognition and robustness.
A new orientation-adaptive interpolation method.
Wang, Qing; Ward, Rabab Kreidieh
2007-04-01
We propose an isophote-oriented, orientation-adaptive interpolation method. The proposed method employs an interpolation kernel that adapts to the local orientation of isophotes, and the pixel values are obtained through an oriented, bilinear interpolation. We show that, by doing so, the curvature of the interpolated isophotes is reduced, and, thus, zigzagging artifacts are largely suppressed. Analysis and experiments show that images interpolated using the proposed method are visually pleasing and almost artifact free. PMID:17405424
The Method of Adaptive Comparative Judgement
ERIC Educational Resources Information Center
Pollitt, Alastair
2012-01-01
Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better.…
NASA Astrophysics Data System (ADS)
Solari, S.; Losada, M. A.
2012-10-01
This paper explores the use of a mixture model for determining the marginal distribution of hydrological variables, consisting of a truncated central distribution that is representative of the central or main-mass regime, which for the cases studied is a lognormal distribution, and of two generalized Pareto distributions for the maximum and minimum regimes, representing the upper and lower tails, respectively. The thresholds defining the limits between these regimes and the central regime are parameters of the model and are calculated together with the remaining parameters by maximum likelihood. After testing the model with a simulation study we concluded that the upper threshold of the model can be used when applying the peak over threshold method. This will yield an automatic and objective identification of the threshold presenting an alternative to existing methods. The model was also applied to four hydrological data series: two mean daily flow series, the Thames at Kingston (United Kingdom), and the Guadalfeo River at Orgiva (Spain); and two daily precipitation series, Fort Collins (CO, USA), and Orgiva (Spain). It was observed that the model improved the fit of the data series with respect to the fit obtained with the lognormal (LN) and, in particular, provided a good fit for the upper tail. Moreover, we concluded that the proposed model is able to accommodate the entire range of values of some significant hydrological variables.
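The peak-over-threshold extraction that the model's upper threshold feeds into can be sketched as follows; the runs-declustering rule and the `min_gap` parameter are illustrative assumptions, not from the paper:

```python
def pot_exceedances(series, threshold, min_gap=1):
    """Peaks-over-threshold: return the largest value of each cluster of
    consecutive exceedances. Clusters separated by at least `min_gap`
    sub-threshold observations are treated as independent events."""
    peaks, cluster, gap = [], [], 0
    for x in series:
        if x > threshold:
            cluster.append(x)
            gap = 0
        else:
            gap += 1
            if cluster and gap >= min_gap:
                peaks.append(max(cluster))   # one peak per cluster
                cluster = []
    if cluster:
        peaks.append(max(cluster))
    return peaks
```

The resulting peaks would then be fitted with a generalized Pareto distribution.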
Multichannel spike detector with an adaptive threshold based on a Sigma-delta control loop.
Gagnon-Turcotte, G; Gosselin, B
2015-08-01
In this paper, we present a digital spike detector using an adaptive threshold, suitable for real-time processing of 32 electrophysiological channels in parallel. The new scheme is based on a Sigma-delta control loop that precisely estimates the standard deviation of the amplitude of the noise in the input signal to optimize the detection rate. Additionally, thanks to a robust algorithm, it does not depend on the amplitude of the input signal. The spike detector is implemented inside a Spartan-6 FPGA using few resources, only basic FPGA logic blocks, and uses a clock frequency under 6 MHz for minimal power consumption. We present a comparison showing that the proposed system can compete with dedicated off-line spike detection software. The whole system achieves up to 100% true-positive detection rate for SNRs down to 5 dB, and a 62.3% true-positive detection rate for an SNR as low as -2 dB at a 150 AP/s firing rate. PMID:26737934
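In software, an adaptive spike threshold tied to the noise standard deviation is often approximated with the median-absolute-deviation estimator, which is robust to the spikes themselves. This sketch is that common approximation, not the paper's Sigma-delta hardware loop, and the multiplier k = 4 is an assumption:

```python
def detect_spikes(signal, k=4.0):
    """Adaptive-threshold spike detection: estimate the noise standard
    deviation from the median absolute amplitude and flag samples
    exceeding k * sigma. Returns indices of rising threshold crossings."""
    s = sorted(abs(x) for x in signal)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    sigma = median / 0.6745          # Gaussian MAD -> std conversion
    thresh = k * sigma
    crossings, prev_above = [], False
    for i, x in enumerate(signal):
        above = abs(x) > thresh
        if above and not prev_above:
            crossings.append(i)      # one event per crossing
        prev_above = above
    return crossings
```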
Variational method for adaptive grid generation
Brackbill, J.U.
1983-01-01
A variational method for generating adaptive meshes is described. Functionals measuring smoothness, skewness, orientation, and the Jacobian are minimized to generate a mapping from a rectilinear domain in natural coordinates to an arbitrary domain in physical coordinates. From the mapping, a mesh is easily constructed. In using the method to adaptively zone computational problems, as few as one-third the number of mesh points are required in each coordinate direction compared with a uniformly zoned mesh.
Anaerobic threshold: the concept and methods of measurement.
Svedahl, Krista; MacIntosh, Brian R
2003-04-01
The anaerobic threshold (AnT) is defined as the highest sustained intensity of exercise for which measurement of oxygen uptake can account for the entire energy requirement. At the AnT, the rate at which lactate appears in the blood will be equal to the rate of its disappearance. Although inadequate oxygen delivery may facilitate lactic acid production, there is no evidence that lactic acid production above the AnT results from inadequate oxygen delivery. There are many reasons for trying to quantify this intensity of exercise, including assessment of cardiovascular or pulmonary health, evaluation of training programs, and categorization of the intensity of exercise as mild, moderate, or intense. Several tests have been developed to determine the intensity of exercise associated with AnT: maximal lactate steady state, lactate minimum test, lactate threshold, OBLA, individual anaerobic threshold, and ventilatory threshold. Each approach permits an estimate of the intensity of exercise associated with AnT, but also has consistent and predictable error depending on protocol and the criteria used to identify the appropriate intensity of exercise. These tests are valuable, but when used to predict AnT, the term that describes the approach taken should be used to refer to the intensity that has been identified, rather than to refer to this intensity as the AnT. PMID:12825337
Restrictive Stochastic Item Selection Methods in Cognitive Diagnostic Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wang, Chun; Chang, Hua-Hua; Huebner, Alan
2011-01-01
This paper proposes two new item selection methods for cognitive diagnostic computerized adaptive testing: the restrictive progressive method and the restrictive threshold method. They are built upon the posterior weighted Kullback-Leibler (KL) information index but include additional stochastic components either in the item selection index or in…
A comparison of two methods for measuring thermal thresholds in diabetic neuropathy.
Levy, D; Abraham, R; Reid, G
1989-01-01
Thermal thresholds can be measured psychophysically using either the method of limits or a forced-choice method. We have compared the two methods in 367 diabetic patients, 128 with symptomatic neuropathy. The Sensortek method was chosen for the forced-choice device, the Somedic modification of the Marstock method for a method of limits. Cooling and heat pain thresholds were also measured using the Marstock method. Somedic thermal thresholds increase with age in normal subjects, but not to a clinically significant degree. In diabetics Marstock warm threshold increased by 0.8 degrees C/decade, Sensortek by 0.1 degrees C/decade. Both methods had a high coefficient of variation in normal subjects (Sensortek 29%, Marstock warm 14%, cool 42%). The prevalence of abnormal thresholds was similar for both methods (28-32%), though Marstock heat pain thresholds were less frequently abnormal (18%). Only 15-18% of patients had abnormal results in both tests. Sensortek thresholds were significantly lower on repeat testing, and all thresholds were higher in symptomatic patients. Both methods are suitable for clinical thermal testing, though the method of limits is quicker. In screening studies the choice of a suitable apparatus need not be determined by the psychophysical basis of the test. PMID:2795077
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien
2013-01-01
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Longmire, M S; Milton, A F; Takken, E H
1982-11-01
Several 1-D signal processing techniques have been evaluated by simulation with a digital computer using high-spatial-resolution (0.15 mrad) noise data gathered from back-lit clouds and uniform sky with a scanning data collection system operating in the 4.0-4.8-microm spectral band. Two ordinary bandpass filters and a least-mean-square (LMS) spatial filter were evaluated in combination with a fixed or adaptive threshold algorithm. The combination of a 1-D LMS filter and a 1-D adaptive threshold sensor was shown to reject extreme cloud clutter effectively and to provide nearly equal signal detection in a clear and cluttered sky, at least in systems whose NEI (noise equivalent irradiance) exceeds 1.5 x 10(-13) W/cm(2) and whose spatial resolution is better than 0.15 x 0.36 mrad. A summary gives highlights of the work, key numerical results, and conclusions. PMID:20396326
Adaptive Finite Element Methods in Geodynamics
NASA Astrophysics Data System (ADS)
Davies, R.; Davies, H.; Hassan, O.; Morgan, K.; Nithiarasu, P.
2006-12-01
Adaptive finite element methods are presented for improving the quality of solutions to two-dimensional (2D) and three-dimensional (3D) convection dominated problems in geodynamics. The methods demonstrate the application of existing technology in the engineering community to problems within the `solid' Earth sciences. Two-Dimensional `Adaptive Remeshing': The `remeshing' strategy introduced in 2D adapts the mesh automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. The approach requires the coupling of an automatic mesh generator, a finite element flow solver and an error estimator. In this study, the procedure is implemented in conjunction with the well-known geodynamical finite element code `ConMan'. An unstructured quadrilateral mesh generator is utilised, with mesh adaptation accomplished through regeneration. This regeneration employs information provided by an interpolation based local error estimator, obtained from the computed solution on an existing mesh. The technique is validated by solving thermal and thermo-chemical problems with known benchmark solutions. In a purely thermal context, results illustrate that the method is highly successful, improving solution accuracy whilst increasing computational efficiency. For thermo-chemical simulations the same conclusions can be drawn. However, results also demonstrate that the grid based methods employed for simulating the compositional field are not competitive with the other methods (tracer particle and marker chain) currently employed in this field, even at the higher spatial resolutions allowed by the adaptive grid strategies. Three-Dimensional Adaptive Multigrid: We extend the ideas from our 2D work into the 3D realm in the context of a pre-existing 3D-spherical mantle dynamics code, `TERRA'. In its original format, `TERRA' is computationally highly efficient since it employs a multigrid solver that depends upon a grid utilizing a clever
NASA Astrophysics Data System (ADS)
Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun
2016-05-01
The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technique for rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of the GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window-width fitting. First, the white noise is filtered from the measured data using the wavelet threshold method. Then, the data are segmented using windows whose step lengths are evenly spaced on a logarithmic scale. Within each window, the data polluted by electromagnetic noise are identified based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitted values; the non-stationary electromagnetic noise can thus be effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in GREATEM signals can be effectively filtered using the wavelet-threshold/exponential adaptive window-width-fitting algorithm, which enhances the imaging quality.
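The wavelet threshold step can be illustrated with a one-level Haar transform and soft thresholding. This is a generic sketch; the paper's wavelet family, decomposition level, and threshold rule are not specified here:

```python
import math

def haar_soft_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoising (len(x) must be
    even): transform, shrink the detail coefficients toward zero by
    `thresh`, then invert the transform."""
    n = len(x) // 2
    approx = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(n)]
    detail = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(n)]
    soft = lambda c: math.copysign(max(abs(c) - thresh, 0.0), c)
    detail = [soft(d) for d in detail]        # noise lives in the details
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out
```

With `thresh = 0` the round trip is the identity; a positive threshold suppresses small high-frequency fluctuations while preserving the local mean.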
Evaluation of Maryland abutment scour equation through selected threshold velocity methods
Benedict, S.T.
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared to the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.
Adaptive sequential methods for detecting network intrusions
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Walker, Ernest
2013-06-01
In this paper, we propose new sequential methods for detecting port-scan attackers, which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. Moreover, our method guarantees that the maximum amount of observational time is bounded. In contrast to the previously most effective method, the Threshold Random Walk algorithm, which is explicit and analytical in nature, our proposed algorithm involves parameters to be determined by numerical methods. We have introduced computational techniques such as iterative minimax optimization for quick determination of the parameters of the new detection algorithm. A framework of multi-valued decisions for detecting port scanners and DoS attacks is also proposed.
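The Threshold Random Walk mentioned above is essentially Wald's sequential probability ratio test over connection outcomes. A sketch under assumed success probabilities for benign hosts (0.8) and scanners (0.2) and assumed error rates, not the paper's new algorithm:

```python
import math

def sprt_portscan(outcomes, p_benign=0.8, p_scan=0.2, alpha=0.01, beta=0.01):
    """Sequential probability ratio test over connection outcomes
    (True = successful connection). The log-likelihood ratio walks up on
    failures and down on successes; crossing the upper bound declares a
    scanner, the lower bound a benign host. Returns (verdict, n_observed)
    or ('undecided', n) if the data run out first."""
    upper = math.log((1 - beta) / alpha)      # accept "scanner"
    lower = math.log(beta / (1 - alpha))      # accept "benign"
    llr = 0.0
    for n, ok in enumerate(outcomes, 1):
        if ok:
            llr += math.log(p_scan / p_benign)            # evidence of benign
        else:
            llr += math.log((1 - p_scan) / (1 - p_benign))  # evidence of scanning
        if llr >= upper:
            return 'scanner', n
        if llr <= lower:
            return 'benign', n
    return 'undecided', len(outcomes)
```

With these parameters a run of four straight failures (or successes) is enough to decide.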
Noll, Douglas C.; Fessler, Jeffrey A.
2014-01-01
Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484
Li, Yan; Zhu, Rui; Mi, Lei; Cao, Yihui; Yao, Di
2016-01-01
We propose a dual-threshold method based on a strategic combination of the RGB and HSV color spaces for white blood cell (WBC) segmentation. The proposed method consists of three main parts: preprocessing, threshold segmentation, and postprocessing. In the preprocessing part, we obtain two images for further processing: a contrast-stretched gray image and an H-component image from the transformed HSV color space. In the threshold segmentation part, a dual-threshold method is proposed to improve on conventional single-threshold approaches, and a golden section search method is used to determine the optimal thresholds. In the postprocessing part, mathematical morphology and median filtering are utilized to denoise and remove incomplete WBCs. The proposed method was tested on segmenting the lymphoblasts in a public Acute Lymphoblastic Leukemia (ALL) image dataset. The results show that the performance of the proposed method is better than single-threshold approaches performed independently in RGB and HSV color space, and the overall single-WBC segmentation accuracy reaches 97.85%, showing good prospects for subsequent lymphoblast classification and ALL diagnosis. PMID:27313659
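Golden section search, used here to pick the optimal thresholds, shrinks a bracketing interval by the inverse golden ratio at each step. A generic sketch; the segmentation criterion being minimized is application-specific and not reproduced:

```python
def golden_section_min(f, lo, hi, tol=1e-5):
    """Golden-section search for the minimizer of a unimodal criterion f
    on [lo, hi] (here, e.g., a segmentation cost as a function of the
    threshold value)."""
    invphi = (5 ** 0.5 - 1) / 2          # 1/phi, about 0.618
    a, b = lo, hi
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                  # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                            # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2
```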
Domain adaptive boosting method and its applications
NASA Astrophysics Data System (ADS)
Geng, Jie; Miao, Zhenjiang
2015-03-01
Differences in data distribution widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, the decrease in performance caused by domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach, with extensions to cover the domain differences between the source and target domains. The approach comprises two main stages: source-domain clustering and source-domain sample selection. By iteratively adding the selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend for multisource adaptation. We implement this method in three computer vision systems: a skin detection model for single images, a video concept detection model, and an object classification model. In the experiments, we compare the performance of several commonly used methods and the proposed DAB. Under most situations, the DAB is superior.
Twelve automated thresholding methods for segmentation of PET images: a phantom study
NASA Astrophysics Data System (ADS)
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.
2012-06-01
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator-dependent and time-consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering and non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to the usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on a clinical PET/CT and on a small-animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
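Of the twelve algorithms, Ridler's method (iterative selection, also known as ISODATA thresholding) is easy to sketch: alternate between splitting the data at the current threshold and resetting the threshold to the midpoint of the two class means. The convergence tolerance below is an assumption:

```python
def ridler_threshold(values, tol=0.5):
    """Ridler-Calvard iterative threshold selection: repeatedly set the
    threshold to the midpoint of the means of the two classes it
    induces, until the threshold stabilizes."""
    t = sum(values) / len(values)        # initialize at the global mean
    while True:
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        if not lo or not hi:             # degenerate split: give up
            return t
        t_new = 0.5 * (sum(lo) / len(lo) + sum(hi) / len(hi))
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```

On a clean bimodal distribution the threshold lands between the two modes.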
A method for detection of foreign body in cotton based on threshold segment
NASA Astrophysics Data System (ADS)
Sha, Tao; Xie, Tingting; Wang, Mengxue; Yang, Chaoyu
2013-10-01
In order to extract foreign bodies from the complex channel background and cotton layers, a detection method combining an improved Otsu threshold with a background-estimation threshold is presented. First, the original image, which contains multiple foreign fibers, is divided by Otsu thresholding into two new images, each containing only two substances. Then a background-estimation threshold is determined from the estimated means and standard deviations of the two new images, and the foreign fibers are extracted with this threshold. Simulation results show that this method can overcome the effects of channel-background interference and the diversity of foreign fibers in the actual working environment, and can extract foreign bodies quickly and effectively.
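The Otsu step above chooses the grey level that maximizes the between-class variance of the histogram. A standard sketch of the classical (unimproved) algorithm, operating on a histogram rather than the image itself:

```python
def otsu_threshold(hist):
    """Otsu's method on a grey-level histogram: return the bin index
    maximizing the between-class variance of the induced two-class
    split."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w0 += h                          # background pixel count
        if w0 == 0:
            continue
        w1 = total - w0                  # foreground pixel count
        if w1 == 0:
            break
        sum0 += t * h
        mu0 = sum0 / w0                  # class means
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels at or below the returned index form one class; the rest form the other.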
Structured adaptive grid generation using algebraic methods
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.
1993-01-01
The accuracy of a numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively fine, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach where a function which contains a measure of grid smoothness, orthogonality and volume variation is minimized by using a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm where the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring points in large-error regions to attract other points and points in low-error regions to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three-step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial, step is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
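The equidistribution law in the first step, in its simplest 1-D form, places nodes so that equal amounts of a weight function fall between consecutive nodes. A sketch with piecewise-constant weights; in practice the weight would come from a flow-solution error estimate:

```python
def equidistribute(x, w, n):
    """Redistribute n grid nodes on [x[0], x[-1]] so that the cumulative
    weight (piecewise-constant w on the old cells) is equal between
    consecutive nodes -- the 1-D equidistribution law."""
    # cumulative weight at the old nodes
    cum = [0.0]
    for i in range(len(w)):
        cum.append(cum[-1] + w[i] * (x[i + 1] - x[i]))
    total = cum[-1]
    new_x = [x[0]]
    for k in range(1, n - 1):
        target = total * k / (n - 1)
        # locate the old cell containing this cumulative level
        i = next(j for j in range(len(w)) if cum[j + 1] >= target)
        frac = (target - cum[i]) / (cum[i + 1] - cum[i])
        new_x.append(x[i] + frac * (x[i + 1] - x[i]))
    new_x.append(x[-1])
    return new_x
```

With uniform weights the grid is unchanged; heavier weights pull nodes into their cells.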
Threshold selection for classification of MR brain images by clustering method
Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita
2015-12-07
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, aids classification applications. This paper gives a detailed investigation of threshold selection. Rather than using a standard binarization method, we perform a simple threshold optimization that, in turn, allows the best classification of the analyzed images into healthy and multiple-sclerosis classes. The dissimilarity (the distance between classes) is established using a clustering method based on dendrograms. We tested our method on two classes of images: 20 T2-weighted (T2w) and 20 proton-density-weighted (PD) scans, acquired from two healthy subjects and two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (i.e., the area of white objects in the binary image) was determined; these pixel counts represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
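The per-threshold white-pixel count that feeds the clustering step can be sketched as follows (a minimal illustration with an invented function name, not the authors' code):

```python
def white_pixel_counts(image, thresholds):
    """For each threshold T, binarize (pixel >= T -> white) and count white pixels."""
    return [sum(1 for row in image for p in row if p >= t) for t in thresholds]
```

The resulting count vectors would then be fed to a dendrogram-based hierarchical clustering to measure the distance between the healthy and multiple-sclerosis groups.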
Threshold selection for classification of MR brain images by clustering method
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita
2015-12-01
Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars
NASA Astrophysics Data System (ADS)
Ruml, Mirjana; Vuković, Ana; Milatović, Dragan
2010-07-01
The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a number of apricot (Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods to determine the threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) the regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD and two methods for calculating daily mean air temperatures were tested to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) gave very good results. The limitations of the widely used method (1) and of methods (5) and (6), which generally performed worst, are discussed in the paper.
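Method (7), which performed best, can be sketched as follows. This is a hypothetical illustration assuming the simplest GDD variant, max(Tmean - base, 0), and a mean GDD requirement across years; it is not the authors' code:

```python
def daily_gdd(tmean, base):
    """Basic GDD: positive part of (mean temperature - base)."""
    return max(tmean - base, 0.0)

def best_base_temperature(years, bases):
    """Pick the base temperature minimizing RMSE between observed and
    predicted days to the stage. years = list of (daily_tmeans, observed_day)."""
    best, best_rmse = None, float("inf")
    for base in bases:
        # mean cumulative-GDD requirement across years for this base
        reqs = [sum(daily_gdd(t, base) for t in temps[:day]) for temps, day in years]
        target = sum(reqs) / len(reqs)
        errs = []
        for temps, day in years:
            # predicted day: first day the cumulative GDD reaches the target
            cum, pred = 0.0, len(temps)
            for i, t in enumerate(temps, start=1):
                cum += daily_gdd(t, base)
                if cum >= target:
                    pred = i
                    break
            errs.append((pred - day) ** 2)
        rmse = (sum(errs) / len(errs)) ** 0.5
        if rmse < best_rmse:
            best, best_rmse = base, rmse
    return best, best_rmse
```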
Methods for increasing the threshold sensitivity of onboard photometers
NASA Astrophysics Data System (ADS)
Angarov, V. N.; Efremkina, L. M.; Gladyshev, V. A.; Kuzmin, A. K.
The performance of the FEU-119 multialkaline photomultiplier in the quantum counting mode is analyzed, and methods for increasing its signal-to-noise ratio are described. In one method, dark current is reduced by mounting a ring magnet around the photocathode, thereby preventing photoelectrons emitted from the cathode periphery from reaching the dynode system.
Flaw sizing method based on ultrasonic dynamic thresholds and neural network
NASA Astrophysics Data System (ADS)
Song, Yongfeng; Wang, Yiling; Ni, Peijun; Qiao, Ridong; Li, Xiongbing
2016-02-01
A dynamic threshold method for ultrasonic C-scan imaging is developed to improve the performance of flaw sizing. Reference test blocks with flat-bottom-hole flaws of different depths and sizes are used for ultrasonic C-scan imaging. After preprocessing, flaw regions are separated from the C-scan image, and the flaws are sized roughly by the 6-dB-drop method. Based on the real sizes of the flat-bottom holes, an enumeration (exhaustive search) method is used to obtain the optimal threshold for each flaw. A radial basis function neural network (RBF NN) is trained using the combination of the amplitude and depth of the flaw echo, the rough flaw size, and the optimal threshold. Finally, the C-scan image can be reconstructed according to the dynamic thresholds generated by the trained RBF NN. The experimental results show that the presented method has better performance, and it is ideally suited for automatic analysis of ultrasonic C-scan images.
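The rough 6-dB-drop sizing step can be sketched on a single scan line. This is an illustrative toy, with assumed inputs `profile` (amplitudes along the line) and `dx` (scan step), not the authors' implementation:

```python
def six_db_drop_size(profile, dx=1.0):
    """Rough flaw size: width of the region where the amplitude stays above
    half (-6 dB) of the peak amplitude along a scan line."""
    peak = max(profile)
    half = peak / 2.0
    idx = [i for i, a in enumerate(profile) if a >= half]
    return (idx[-1] - idx[0]) * dx if idx else 0.0
```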
Parallel adaptive wavelet collocation method for PDEs
Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of partial differential equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
An adaptive selective frequency damping method
NASA Astrophysics Data System (ADS)
Jordi, Bastien; Cotter, Colin; Sherwin, Spencer
2015-03-01
The selective frequency damping (SFD) method is used to obtain unstable steady-state solutions of dynamical systems. The stability of this method is governed by two parameters: the control coefficient and the filter width. Convergence is not guaranteed for an arbitrary choice of these parameters, and even when the method does converge, the time necessary to reach a steady-state solution may be very long. We present an adaptive SFD method and show that, by modifying the control coefficient and the filter width throughout the solver execution, we can reach an optimum convergence rate. The method is based on successive approximations of the dominant eigenvalue of the flow studied. We design a one-dimensional model to select SFD parameters that enable us to control the evolution of the least stable eigenvalue of the system; these parameters are then used for the application of the SFD method to the multi-dimensional flow problem. We apply this adaptive method to a set of classical test cases of computational fluid dynamics and show that the steady-state solutions obtained are similar to what can be found in the literature. We then apply it to a specific vortex-dominated flow (of interest to the automotive industry) whose stability had never been studied before. Seventh Framework Programme of the European Commission - ANADE project under Grant Contract PITN-GA-289428.
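The basic (non-adaptive) SFD iteration couples the state to a low-pass filtered copy of itself, damping the oscillatory instability while leaving the steady state unchanged. The minimal explicit-Euler sketch below applies it to a toy unstable oscillator; all parameter values and names are invented for illustration and are not taken from the paper:

```python
def sfd_steady_state(chi, delta, dt=0.01, steps=5000):
    """Drive a toy unstable oscillatory system (growth rate 0.1, frequency 1)
    to its unstable steady state (the origin) via selective frequency damping."""
    sigma, omega = 0.1, 1.0
    q = [1.0, 0.0]   # state
    qb = [1.0, 0.0]  # low-pass filtered state
    for _ in range(steps):
        # unstable spiral dynamics: dq/dt = A q with eigenvalues sigma +/- i*omega
        f = [sigma * q[0] - omega * q[1], omega * q[0] + sigma * q[1]]
        # SFD: damp the difference between the state and its filtered copy
        qn = [q[i] + dt * (f[i] - chi * (q[i] - qb[i])) for i in range(2)]
        qb = [qb[i] + dt * (q[i] - qb[i]) / delta for i in range(2)]
        q = qn
    return q
```

With the control coefficient chi = 0 the system spirals outward; with chi = 0.5 and filter width delta = 2 the iteration converges to the origin, the unstable steady state.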
NASA Astrophysics Data System (ADS)
Chakraborty, D.; Kato, K.; Lei, R.
The adaptive threshold detection with estimated sequence (ATDES) processor is a practical version of maximum-likelihood sequence detection (MLSD). A 60-Mbit/s continuous-mode ATDES processor has been developed and tested via an Intelsat IV F-4 and Paumalu earth station link. Experimental data obtained to date from Intelsat V 120-Mbit/s QPSK channel transmission tests and 60-Mbit/s ATDES testing indicate that an improvement of about 2 dB in Eb/No at a BER of 10⁻⁶ could be achieved by a 120-Mbit/s burst-mode ATDES processor.
Control methods and thresholds of acceptability for antibrucella vaccines.
Bosseray, N
1992-01-01
Protection against brucellosis involves both cellular and humoral effectors not yet fully appreciated. Living or killed vaccines can protect against the infection itself or only against abortion. For official controls, vaccines (or new procedures of vaccination) must first be characterized pharmacologically and tested for innocuity. Protection must be tested on natural hosts with a reference vaccine (S19 or Rev. 1) by the agreed method, which reproduces the natural infection and measures immunity in toto. Control and vaccinated females are challenged by the conjunctival route at mid-pregnancy under standard conditions (strain, dose) to measure the resulting infection by bacteriological analysis of excretion at parturition and of infection in target organs at slaughter. Results are principally expressed by the infection rate, which should be approximately 95% in the control group; in the new-vaccine group the rate should be equivalent to, or lower than, that of the reference-vaccine group. To be statistically valid, at least 30 animals per group are required. For routine controls, laboratory models using guinea pigs, not well standardized, inaccurate and expensive, have long been proposed. The mouse model, extensively studied and standardized, should now be preferred to the guinea pig model. In the mouse model, residual virulence of a living vaccine is estimated by the time required by 50% of the mice to eradicate the strain from their spleen (Recovery Time 50%). Immunogenicity is measured by the ability of mice to restrict their splenic infection after a virulent i.p. challenge at a dose (B. abortus 544; 2 × 10⁵ CFU) chosen so that all mice are still infected 15 days post-challenge. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:1286747
Reliability of a Simple Method for Determining Salt Taste Detection and Recognition Thresholds.
Giguère, Jean-François; de Moura Piovesana, Paula; Proulx-Belhumeur, Alexandra; Doré, Michel; de Lemos Sampaio, Karina; Gallani, Maria-Cecilia
2016-03-01
The aim of this study was to assess the reliability of a rapid analytical method to determine salt taste detection and recognition thresholds based on the ASTM E679 method. Reliability was evaluated according to the criterion of temporal stability with a 1-week-interval test-retest, with 29 participants. Thresholds were assessed by using the 3-AFC technique with 15 ascending concentrations of salt solution (1–292 mM, 1.5-fold steps) and estimated by 2 approaches: individual (geometric means) and group (graphical) thresholds. The proportion of agreement between the test and retest results was estimated using intraclass correlation coefficients. The detection and recognition thresholds calculated by the geometric mean were 2.8 and 18.6 mM at session 1 and 2.3 and 14.5 mM at session 2 and, according to the graphical approach, 2.7 and 18.6 mM at session 1 and 1.7 and 16.3 mM at session 2. The proportion of agreement between test and retest for the detection and recognition thresholds was 0.430 (95% CI: 0.080-0.680) and 0.660 (95% CI: 0.400-0.830). This fast and simple method to assess salt taste detection and recognition thresholds demonstrated satisfactory evidence of reliability, and it could be useful for large population studies. PMID:26733539
An Adaptive VOF Method on Unstructured Grid
NASA Astrophysics Data System (ADS)
Wu, L. L.; Huang, M.; Chen, B.
2011-09-01
In order to improve the accuracy of interface capturing while maintaining computational efficiency, an adaptive VOF method on unstructured grids is proposed in this paper. The volume fraction in each cell is regarded as the criterion to locally refine the interface cells. With the movement of the interface, new interface cells (0 < f < 1) are subdivided into child cells, while child cells that no longer contain the interface are merged back into the original parent cell. In order to avoid the complicated redistribution of volume fraction during the subdivision and amalgamation procedures, a predictor-corrector algorithm is proposed to implement subdivision and amalgamation only in empty or full cells (f = 0 or f = 1). Thus the volume fraction in a new cell can take its value from the original cell directly, and interpolation of the interface is avoided. The advantage of this method is that re-generation of the whole grid system is not necessary, so its implementation is very efficient. Moreover, an advection flow test of a hollow square was performed, and the relative shape error of the result obtained on the adaptive mesh is smaller than that on the non-refined grid, which verifies the validity of our method.
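The volume-fraction refinement criterion can be sketched as follows (a trivial illustration with an assumed tolerance, not the paper's predictor-corrector machinery):

```python
def classify_cells(fractions, tol=1e-9):
    """Mark cells by volume fraction f: interface cells (0 < f < 1) are
    candidates for refinement; empty/full cells may be merged back."""
    return ["refine" if tol < f < 1.0 - tol else "merge" for f in fractions]
```

In the full method, subdivision and amalgamation are deliberately deferred until a cell is empty or full, so the child cells inherit f = 0 or f = 1 directly and no interface interpolation is needed.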
Ensemble transform sensitivity method for adaptive observations
NASA Astrophysics Data System (ADS)
Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan
2016-01-01
The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.
Adaptive characterization method for desktop color printers
NASA Astrophysics Data System (ADS)
Shen, Hui-Liang; Zheng, Zhi-Huan; Jin, Chong-Chao; Du, Xin; Shao, Si-Jie; Xin, John H.
2013-04-01
With the rapid development of multispectral imaging techniques, it is desirable that spectral color be accurately reproduced using desktop color printers. However, due to the specific spectral gamuts determined by printer inks, it is almost impossible to exactly replicate reflectance spectra in other media. In addition, as ink densities cannot be individually controlled, desktop printers can only be regarded as red-green-blue devices, making physical models infeasible. We propose a locally adaptive method, which consists of both forward and inverse models, for desktop printer characterization. In the forward model, we establish the adaptive transform between control values and reflectance spectra on individual cellular subsets by using weighted polynomial regression. In the inverse model, we first determine the candidate space of the control values based on global inverse regression and then compute the optimal control values by minimizing the color difference between the actual spectrum and the spectrum predicted via the forward transform. Experimental results show that the proposed method can reproduce colors accurately on different media under multiple illuminants.
Adaptive method with intercessory feedback control for an intelligent agent
Goldsmith, Steven Y.
2004-06-22
An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.
Adaptive Accommodation Control Method for Complex Assembly
NASA Astrophysics Data System (ADS)
Kang, Sungchul; Kim, Munsang; Park, Shinsuk
Robotic systems have been used to automate assembly tasks in manufacturing and in teleoperation. Conventional robotic systems, however, have been ineffective in controlling contact force in the multiple contact states of complex assembly that involves interactions between complex-shaped parts. Unlike robots, humans excel at complex assembly tasks by utilizing their intrinsic impedance, force and torque sensation, and tactile contact cues. By examining human behavior in assembling complex parts, this study proposes a novel geometry-independent control method for robotic assembly using an adaptive accommodation (or damping) algorithm. Two important conditions for complex assembly, target approachability and bounded contact force, can be met by the proposed control scheme. It generates target-approachable motion that leads the object to move closer to a desired target position, while contact force is kept under a predetermined value. Experimental results from complex assembly tests have confirmed the feasibility and applicability of the proposed method.
Adapting implicit methods to parallel processors
Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.
1994-12-31
When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g., larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed-memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems, where it is common to distribute the grid points of the computational domain over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor: in order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this results in idle processors during part of the computation, and as the number of idle processors increases, the effective speedup from using a parallel processor decreases.
A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal
NASA Astrophysics Data System (ADS)
Li, Meng; Jiang, Li-hui; Xiong, Xing-long
2015-06-01
Empirical mode decomposition (EMD) has been believed to be potentially useful for processing nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we propose an EMD selective thresholding method based on multiple iterations, which essentially develops the EMD interval thresholding (EMD-IT) approach: it randomly alters the samples in the noisy parts of all corrupted intrinsic mode functions so that iterating and averaging yields a better denoising effect. Simulations on both synthetic signals and real-world LIDAR signals support this method.
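The underlying EMD interval thresholding (EMD-IT) idea — zeroing whole zero-crossing intervals of an intrinsic mode function whose extremum falls below a threshold — can be sketched as follows. This is an illustrative toy; the paper's iterative sample-altering refinement is omitted:

```python
def interval_threshold(imf, thr):
    """Zero each zero-crossing interval of an IMF whose peak magnitude is below thr."""
    out = list(imf)
    start = 0
    for i in range(1, len(imf) + 1):
        # close the current interval at a sign change or at the end of the signal
        if i == len(imf) or (imf[i] >= 0) != (imf[start] >= 0):
            if max(abs(v) for v in imf[start:i]) < thr:
                for j in range(start, i):
                    out[j] = 0.0  # interval is noise-dominated: discard it whole
            start = i
    return out
```

Thresholding whole intervals, rather than individual samples, avoids the discontinuities that sample-wise hard thresholding introduces at interval boundaries.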
Linearly-Constrained Adaptive Signal Processing Methods
NASA Astrophysics Data System (ADS)
Griffiths, Lloyd J.
1988-01-01
In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), ..., xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term 'mean-square difference' is a quadratic measure such as a statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal; the latter is often referred to as the P-vector. For those cases in which time samples of both the desired signal and the data vector are available, a variety of adaptive methods have been proposed which guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest but time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single-channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result from this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
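For reference, the standard LMS recursion mentioned above updates W along an instantaneous gradient estimate; a minimal sketch (function and variable names are invented):

```python
def lms(x, d, L, mu):
    """Standard LMS: adapt an L-tap weight vector W so that W . X(n) tracks d(n)."""
    w = [0.0] * L
    for n in range(L - 1, len(x)):
        xn = x[n - L + 1 : n + 1][::-1]  # data vector X(n), most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, xn))
        e = d[n] - y  # instantaneous estimation error
        w = [wi + mu * e * xi for wi, xi in zip(w, xn)]  # gradient step
    return w
```

The P-vector algorithm discussed in the paper replaces the error-driven term with the a priori cross-correlation P, so it needs no time samples of d(n); the sketch above shows only the conventional LMS baseline.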
An adaptive SPH method for strong shocks
NASA Astrophysics Data System (ADS)
Sigalotti, Leonardo Di G.; López, Hender; Trujillo, Leonardo
2009-09-01
We propose an alternative to the usual SPH Godunov-type methods for simulating supersonic compressible flows with sharp discontinuities. The method relies on an adaptive density kernel estimation (ADKE) algorithm, which allows the width of the kernel interpolant to vary locally in space and time so that the minimum necessary smoothing is applied in regions of low density. We have performed a von Neumann stability analysis of the SPH equations for an ideal gas and derived the corresponding dispersion relation in terms of the local width of the kernel. Solution of the dispersion relation in the short-wavelength limit shows that stability is achieved for a wide range of the ADKE parameters. Application of the method to high-Mach-number shocks confirms the predictions of the linear analysis. Examples of the resolving power of the method are given for a set of difficult problems involving the collision of two strong shocks, the strong shock-tube test, and the interaction of two blast waves.
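The core ADKE idea — widening the kernel where the density is low — is often written as h_i = h0 (rho_i / g)^(-eps), with g the geometric-mean density. The sketch below assumes that common form; the exponent, parameter names, and function are illustrative, not taken from the paper:

```python
import math

def adke_widths(rho, h0, eps=0.5):
    """ADKE-style kernel widths: wider kernels (more smoothing) where density is low."""
    g = math.exp(sum(math.log(r) for r in rho) / len(rho))  # geometric mean density
    return [h0 * (r / g) ** (-eps) for r in rho]
```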
Adaptive wavelet methods - Matrix-vector multiplication
NASA Astrophysics Data System (ADS)
Černá, Dana; Finěk, Václav
2012-12-01
The design of most adaptive wavelet methods for elliptic partial differential equations follows a general concept proposed by A. Cohen, W. Dahmen and R. DeVore in [3, 4]. The essential steps are: transformation of the variational formulation into a well-conditioned infinite-dimensional l2 problem, finding a convergent iteration process for the l2 problem, and finally derivation of its finite-dimensional version, which works with an inexact right-hand side and approximate matrix-vector multiplications. In our contribution, we shortly review all these parts and mainly pay attention to approximate matrix-vector multiplications. Effective approximation of matrix-vector multiplications is enabled by the off-diagonal decay of the entries of the wavelet stiffness matrix. We propose here a new approach that better utilizes the actual decay of the matrix entries.
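A bandwidth-truncated matrix-vector product is the simplest way to exploit such off-diagonal decay: entries far from the diagonal are small and can be dropped. This is a generic sketch, not the authors' compression scheme:

```python
def banded_matvec(A, x, bandwidth):
    """Approximate y = A x using only entries within `bandwidth` of the diagonal,
    exploiting the off-diagonal decay of the (stiffness) matrix entries."""
    n = len(x)
    y = [0.0] * n
    for i in range(n):
        for j in range(max(0, i - bandwidth), min(n, i + bandwidth + 1)):
            y[i] += A[i][j] * x[j]
    return y
```

The cost drops from O(n^2) to O(n * bandwidth), at the price of an approximation error controlled by the decay rate of the discarded entries.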
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo
2014-04-15
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-11-18
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Mejias, Jorge F.; Torres, Joaquin J.
2011-01-01
In this work we study the detection of weak stimuli by spiking (integrate-and-fire) neurons in the presence of a certain level of noisy background neural activity. Our study focuses on the realistic assumption that the synapses in the network present activity-dependent processes, such as short-term synaptic depression and facilitation. Employing mean-field techniques as well as numerical simulations, we found that there are two possible noise levels which optimize signal transmission. This new finding contrasts with the classical theory of stochastic resonance, which predicts only one optimal level of noise. We found that the complex interplay between the adaptive neuron threshold and activity-dependent synaptic mechanisms is responsible for this new phenomenology. Our main results are confirmed by employing a more realistic FitzHugh-Nagumo neuron model, which displays threshold variability, as well as by considering more realistic stochastic synaptic models and realistic signals such as Poissonian spike trains. PMID:21408148
Scene sketch generation using mixture of gradient kernels and adaptive thresholding
NASA Astrophysics Data System (ADS)
Paheding, Sidike; Essa, Almabrok; Asari, Vijayan
2016-04-01
This paper presents a simple but effective algorithm for scene sketch generation from input images. The proposed algorithm combines the edge magnitudes of directional Prewitt differential gradient kernels with Kirsch kernels at each pixel position, and then encodes them into an eight-bit binary code which encompasses local edge and texture information. In this binary encoding step, relative variance is employed to determine the object shape in each local region; using relative variance makes object sketch extraction fully adaptive to any shape structure. Furthermore, the proposed technique does not require any parameter tuning to adjust the output, and it is robust to edge density and noise. Two standard databases are used to show the effectiveness of the proposed framework.
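An eight-bit compass-kernel encoding of a 3x3 neighborhood can be sketched as follows. This toy uses only Kirsch-style ring weights and a simple mean test in place of the paper's Prewitt combination and relative-variance rule; the function name is invented:

```python
def kirsch_code(patch):
    """Encode a 3x3 patch as 8 bits: bit k is set when the k-th Kirsch
    compass response exceeds the mean of all eight responses."""
    # Kirsch mask: weights 5 on three consecutive ring positions, -3 elsewhere
    # (the center pixel has weight 0); rotating the ring gives the 8 directions.
    base = [5, 5, 5, -3, -3, -3, -3, -3]
    ring = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    resp = [sum(base[(i - k) % 8] * ring[i] for i in range(8)) for k in range(8)]
    mean = sum(resp) / 8.0
    return sum((1 << k) for k in range(8) if resp[k] > mean)
```

A flat patch yields code 0 (no direction dominates), while an edge patch sets the bits of the directions aligned with it, giving a texture-sensitive local descriptor.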
Online Adaptive Replanning Method for Prostate Radiotherapy
Ahunbay, Ergun E.; Peng Cheng; Holmes, Shannon; Godley, Andrew; Lawton, Colleen; Li, X. Allen
2010-08-01
Purpose: To report the application of an adaptive replanning technique for prostate cancer radiotherapy (RT), consisting of two steps: (1) segment aperture morphing (SAM), and (2) segment weight optimization (SWO), to account for interfraction variations. Methods and Materials: The new 'SAM+SWO' scheme was retrospectively applied to the daily CT images acquired for 10 prostate cancer patients on a linear accelerator and CT-on-Rails combination during the course of RT. Doses generated by the SAM+SWO scheme based on the daily CT images were compared with doses generated after patient repositioning using the current planning target volume (PTV) margin (5 mm, 3 mm toward rectum) and a reduced margin (2 mm), along with full reoptimization based on the daily CT images, to evaluate dosimetric benefits. Results: For all cases studied, the online replanning method provided significantly better target coverage when compared with repositioning with the reduced PTV margin (13% increase in minimum prostate dose) and improved organ sparing when compared with repositioning with the regular PTV margin (13% decrease in the generalized equivalent uniform dose of the rectum). The time required to complete the online replanning process was 6 ± 2 minutes. Conclusion: The proposed online replanning method can be used to account for interfraction variations for prostate RT within a practically acceptable time frame (5-10 min) and with significant dosimetric benefits. On the basis of this study, the developed online replanning scheme is being implemented in the clinic for prostate RT.
A Multi-Threshold Sampling Method for TOF PET Signal Processing.
Kim, H; Kao, C M; Xie, Q; Chen, C T; Zhou, L; Tang, F; Frisch, H; Moses, W W; Choong, W S
2009-04-21
As an approach to realizing all-digital data acquisition for positron emission tomography (PET), we have previously proposed and studied a multi-threshold sampling method to generate samples of a PET event waveform with respect to a few user-defined amplitudes. In this sampling scheme, one can extract both the energy and timing information for an event. In this paper, we report our prototype implementation of this sampling method and the performance results obtained with this prototype. The prototype consists of two multi-threshold discriminator boards and a time-to-digital converter (TDC) board. Each of the multi-threshold discriminator boards takes one input and provides up to 8 threshold levels, which can be defined by users, for sampling the input signal. The TDC board employs the CERN HPTDC chip that determines the digitized times of the leading and falling edges of the discriminator output pulses. We connect our prototype electronics to the outputs of two Hamamatsu R9800 photomultiplier tubes (PMTs) that are individually coupled to a 6.25 × 6.25 × 25 mm³ LSO crystal. By analyzing waveform samples generated by using four thresholds, we obtain a coincidence timing resolution of about 340 ps and an ∼18% energy resolution at 511 keV. We are also able to estimate the decay-time constant from the resulting samples and obtain a mean value of 44 ns with an ∼9 ns FWHM. In comparison, using digitized waveforms obtained at a 20-GSps sampling rate for the same LSO/PMT modules, we obtain ∼300 ps coincidence timing resolution, ∼14% energy resolution at 511 keV, and ∼5 ns FWHM for the estimated decay-time constant. Details of the results on the timing and energy resolutions by using the multi-threshold method indicate that it is a promising approach for implementing digital PET data acquisition. PMID:19690623
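The samples produced by such a scheme are the times at which the waveform crosses each user-defined amplitude; a minimal software sketch (illustrative only — in the prototype this is done by discriminators and the TDC, not in software):

```python
def threshold_crossings(t, v, threshold):
    """Linearly interpolated times where the waveform v(t) crosses the
    threshold, on both the rising (leading) and falling edges."""
    out = []
    for i in range(1, len(v)):
        lo, hi = v[i - 1], v[i]
        if (lo < threshold <= hi) or (hi < threshold <= lo):
            frac = (threshold - lo) / (hi - lo)
            out.append(t[i - 1] + frac * (t[i] - t[i - 1]))
    return out
```

Running this for a few thresholds yields the sparse (time, amplitude) samples from which timing (leading-edge times) and energy (pulse shape/area) can be reconstructed.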
A simple method to estimate threshold friction velocity of wind erosion in the field
Technology Transfer Automated Retrieval System (TEKTRAN)
Nearly all wind erosion models require the specification of threshold friction velocity (TFV). Yet determining TFV of wind erosion in field conditions is difficult as it depends on both soil characteristics and distribution of vegetation or other roughness elements. While several reliable methods ha...
NASA Astrophysics Data System (ADS)
Han, Jong Goo; Park, Tae Hee; Moon, Yong Ho; Eom, Il Kyu
2016-03-01
We propose an efficient Markov feature extraction method for color image splicing detection. The maximum value among the various directional difference values in the discrete cosine transform domain of three color channels is used to choose the Markov features. We show that the discriminability for splicing detection is increased through the maximization process from the point of view of the Kullback-Leibler divergence. In addition, we present a threshold expansion and Markov state decomposition algorithm. Threshold expansion reduces the information loss caused by the coefficient thresholding that is used to restrict the number of Markov features. To compensate for the increased number of features due to the threshold expansion, we propose an even-odd Markov state decomposition algorithm. A fixed number of features, regardless of the difference directions, color channels and test datasets, are used in the proposed algorithm. We introduce three kinds of Markov feature vectors. The number of Markov features for splicing detection used in this paper is relatively small compared to the conventional methods, and our method does not require additional feature reduction algorithms. Through experimental simulations, we demonstrate that the proposed method achieves high performance in splicing detection.
Automation of a center pivot using the temperature-time-threshold method of irrigation scheduling
Technology Transfer Automated Retrieval System (TEKTRAN)
A center pivot was completely automated using the temperature-time-threshold (TTT) method of irrigation scheduling. An array of infrared thermometers was mounted on the center pivot and these were used to remotely determine the crop leaf temperature as an indicator of crop water stress. We describ...
An improved vein image segmentation algorithm based on SLIC and Niblack threshold method
NASA Astrophysics Data System (ADS)
Zhou, Muqing; Wu, Zhaoguo; Chen, Difan; Zhou, Ya
2013-12-01
Subcutaneous vein images are often obtained by using the absorbency difference of near-infrared (NIR) light between a vein and its surrounding tissue under NIR illumination. Vein images with high quality are critical to biometric identification, which requires segmenting the vein skeleton from the original images accurately. To address this issue, we proposed a vein image segmentation method based on the simple linear iterative clustering (SLIC) method and the Niblack threshold method. The SLIC method was used to pre-segment the original images into superpixels, and all the information in the superpixels was transferred into a matrix (Block Matrix). Subsequently, the Niblack thresholding method was adopted to binarize the Block Matrix. Finally, we obtained segmented vein images from the binarized Block Matrix. According to several experiments, most of the vein skeleton is revealed compared to the traditional Niblack segmentation algorithm.
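A minimal sketch of the Niblack rule used in the binarization step, T(x, y) = m(x, y) + k·s(x, y) over a local window, implemented directly in NumPy. The window size, the choice k = -0.2, and treating dark pixels as vein foreground are assumptions here; the paper's SLIC pre-segmentation stage is omitted.

```python
import numpy as np

def niblack_threshold(img, window=15, k=-0.2):
    """Local Niblack binarization: threshold T = m + k*s computed over a
    square window around each pixel; pixels darker than T are returned
    as foreground (veins absorb NIR, so they appear dark)."""
    pad = window // 2
    padded = np.pad(np.asarray(img, float), pad, mode='reflect')
    # all window x window neighborhoods, one per original pixel
    win = np.lib.stride_tricks.sliding_window_view(padded, (window, window))
    m = win.mean(axis=(-2, -1))   # local mean
    s = win.std(axis=(-2, -1))    # local standard deviation
    return img < (m + k * s)
```

On a synthetic image with a dark stripe on a bright background, the stripe is classified as foreground while the flat background (where s = 0, so T = m) is not.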
Direct comparison of two statistical methods for determination of evoked-potential thresholds
NASA Astrophysics Data System (ADS)
Langford, Ted L.; Patterson, James H., Jr.
1994-07-01
Several statistical procedures have been proposed as objective methods for determining evoked-potential thresholds. Data have been presented to support each of the methods, but there have not been direct comparisons using the same data. The goal of the present study was to evaluate correlation and variance ratio statistics using common data. A secondary goal was to evaluate the utility of a derived potential for determining thresholds. Chronic, bipolar electrodes were stereotaxically implanted in the inferior colliculi of six chinchillas. Evoked potentials were obtained at 0.25, 0.5, 1.0, 2.0, 4.0 and 8.0 kHz using 12-ms tone bursts and 12-ms tone bursts superimposed on 120-ms pedestal tones which were of the same frequency as the bursts, but lower in amplitude by 15 dB. Alternate responses were averaged in blocks of 200 to 4000 depending on the size of the response. Correlations were calculated for the pairs of averages. A response was deemed present if the correlation coefficient reached the 0.05 level of significance in 4000 or fewer averages. Threshold was defined as the mean of the level at which the correlation was significant and a level 5 dB below that at which it was not. Variance ratios were calculated as described by Elberling and Don (1984) using the same data. Averaged tone burst and tone burst-plus pedestal data were differenced and the resulting waveforms subjected to the same statistical analyses described above. All analyses yielded thresholds which were essentially the same as those obtained using behavioral methods. When the difference between stimulus durations is taken into account, however, evoked-potential methods produced lower thresholds than behavioral methods.
NASA Astrophysics Data System (ADS)
Sung, J. H.; Chung, E.-S.; Lee, K. S.
2013-12-01
This study developed a comprehensive method to quantify streamflow drought severity and magnitude based on a traditional frequency analysis. Two types of curve were developed: the streamflow drought severity-duration-frequency (SDF) curve and the streamflow drought magnitude-duration-frequency (MDF) curve (analogous to a rainfall intensity-duration-frequency curve). Severity was represented as the total water deficit volume for the specific drought duration, and magnitude was defined as the daily average water deficit. The variable threshold level method was introduced to set the target instream flow requirement, which can significantly affect the streamflow drought severity and magnitude. The four threshold levels utilized were fixed, monthly, daily, and desired yield for water use. The threshold levels for the desired yield differed considerably from the other levels and represented more realistic conditions because real water demands were considered. The streamflow drought severities and magnitudes from the four threshold methods could be derived at any frequency and duration from the generated SDF and MDF curves. These SDF and MDF curves are useful in designing water resources systems for streamflow drought and water supply management.
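The severity and magnitude definitions above reduce to a short computation: severity is the summed deficit below the threshold over the drought duration, and magnitude is that deficit per day. A sketch under those definitions; the function name and the scalar-or-daily threshold handling are assumptions.

```python
import numpy as np

def drought_severity_magnitude(flow, threshold):
    """Severity = total water deficit below the threshold; magnitude =
    daily average deficit (severity / duration). `threshold` may be a
    scalar (fixed level) or a series of the same length as `flow`
    (variable threshold level method, e.g. daily or monthly levels)."""
    flow = np.asarray(flow, float)
    thr = np.broadcast_to(np.asarray(threshold, float), flow.shape)
    deficit = np.clip(thr - flow, 0.0, None)  # deficit only when flow < threshold
    duration = int(np.count_nonzero(deficit))
    severity = float(deficit.sum())
    magnitude = severity / duration if duration else 0.0
    return severity, magnitude, duration
```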
An NMR log echo data de-noising method based on the wavelet packet threshold algorithm
NASA Astrophysics Data System (ADS)
Meng, Xiangning; Xie, Ranhong; Li, Changxi; Hu, Falong; Li, Chaoliu; Zhou, Cancan
2015-12-01
To improve the de-noising effects of low signal-to-noise ratio (SNR) nuclear magnetic resonance (NMR) log echo data, this paper applies the wavelet packet threshold algorithm to the data. The principle of the algorithm is elaborated in detail. By comparing the properties of a series of wavelet packet bases and their relevance to the NMR log echo train signal, ‘sym7’ is found to be the optimal wavelet packet basis of the wavelet packet threshold algorithm for de-noising the NMR log echo train signal. A new method is presented to determine the optimal wavelet packet decomposition scale: within the range up to the maximum scale, the modulus maxima and Shannon entropy minimum criteria are used to determine the global and local optimal wavelet packet decomposition scales, respectively. The results of applying the method to the simulated and actual NMR log echo data indicate that, compared with the wavelet threshold algorithm, the wavelet packet threshold algorithm shows higher decomposition accuracy and better de-noising effect, and is much more suitable for de-noising low-SNR NMR log echo data.
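The thresholding step at the heart of such wavelet(-packet) de-noising is usually the soft-threshold rule applied to the decomposition coefficients. A generic sketch of that rule follows; it is not the paper's full 'sym7' wavelet packet pipeline, and the universal threshold mentioned in the comment is one common choice, not necessarily the authors'.

```python
import numpy as np

def soft_threshold(coeffs, lam):
    """Soft-thresholding rule for (wavelet-packet) coefficients: shrink
    each coefficient toward zero by lam, zeroing anything smaller than
    lam in magnitude. A common choice of lam is the universal threshold
    sigma * sqrt(2 * ln N) for noise level sigma and N coefficients."""
    c = np.asarray(coeffs, float)
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
```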
A multi-threshold sampling method for TOF PET signal processing
Kim, Heejong; Kao, Chien-Min; Xie, Q.; Chen, Chin-Tu; Zhou, L.; Tang, F.; Frisch, Henry; Moses, William W.; Choong, Woon-Seng
2009-02-02
As an approach to realizing all-digital data acquisition for positron emission tomography (PET), we have previously proposed and studied a multithreshold sampling method to generate samples of a PET event waveform with respect to a few user-defined amplitudes. In this sampling scheme, one can extract both the energy and timing information for an event. In this paper, we report our prototype implementation of this sampling method and the performance results obtained with this prototype. The prototype consists of two multi-threshold discriminator boards and a time-to-digital converter (TDC) board. Each of the multi-threshold discriminator boards takes one input and provides up to 8 threshold levels, which can be defined by users, for sampling the input signal. The TDC board employs the CERN HPTDC chip that determines the digitized times of the leading and falling edges of the discriminator output pulses. We connect our prototype electronics to the outputs of two Hamamatsu R9800 photomultiplier tubes (PMTs) that are individually coupled to a 6.25 × 6.25 × 25 mm³ LSO crystal. By analyzing waveform samples generated by using four thresholds, we obtain a coincidence timing resolution of about 340 ps and an ∼18% energy resolution at 511 keV. We are also able to estimate the decay-time constant from the resulting samples and obtain a mean value of 44 ns with an ∼9 ns FWHM. In comparison, using digitized waveforms obtained at a 20 GSps sampling rate for the same LSO/PMT modules we obtain ∼300 ps coincidence timing resolution, ∼14% energy resolution at 511 keV, and ∼5 ns FWHM for the estimated decay-time constant. Details of the results on the timing and energy resolutions by using the multi-threshold method indicate that it is a promising approach for implementing digital PET data acquisition.
NASA Astrophysics Data System (ADS)
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto
2016-04-01
Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low; i.e. on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2-12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the
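One of the graphical methods referred to above is the mean residual life plot: for a GP tail, the mean excess E[X - u | X > u] is linear in u, so one looks for the lowest u beyond which the plot is roughly linear. A minimal pure-NumPy sketch (function name is an assumption):

```python
import numpy as np

def mean_excess(data, thresholds):
    """Mean-excess (mean residual life) values for candidate thresholds.
    For data with a GP upper tail, the plot of these values versus u is
    approximately linear above a suitable threshold, which is how the
    graphical method suggests where to place u."""
    x = np.asarray(data, float)
    out = []
    for u in thresholds:
        exc = x[x > u] - u
        out.append(exc.mean() if exc.size else np.nan)
    return np.array(out)
```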
Johnson, P H; Cowley, A J; Kinnear, W J
1996-12-01
Inspiratory muscle training (IMT) has been shown to enhance exercise performance. The weighted plunger (WP) system of inspiratory threshold loading is the most commonly used method of IMT, but is expensive and cumbersome. We have evaluated a commercially available portable spring-loaded IMT device, the THRESHOLD trainer. The WP and THRESHOLD trainer devices were evaluated with their opening pressures set, in random order, at 10, 20, 30 and 40 cmH2O. Using an air pump, pressure at the valve inlet was recorded at the point at which the valve opened, and at airflow rates of 20, 40, 60, 80 and 100 L·min⁻¹. Ten THRESHOLD trainers were then compared using the same opening pressures and airflow rates. Finally, 10 patients with stable chronic heart failure (CHF) inspired, in random order, through the WP and THRESHOLD trainer for 4 min each. The pressure-time product (PTP) was calculated for each 4 min period, to compare the work performed on inspiring through each device. The mean measured opening pressures for the WP set at 10, 20, 30 and 40 cmH2O were 9.0, 19.3, 27.9 and 39.2 cmH2O, respectively, and there was little change over the range of flow tested. Corresponding values for the THRESHOLD trainer were 7.5, 16.9, 26.2 and 39.1 cmH2O, with the pressure being closer to the set pressure as flow increased to that seen in clinical practice. The 10 different trainers tested performed very similarly to one another. Work performed (as measured by PTP) on inspiring through the WP and THRESHOLD trainer was not significantly different. Although less accurate than the weighted plunger, the THRESHOLD trainer is an inexpensive device of consistent quality. In a clinical setting it would be a satisfactory option for inspiratory muscle training in most patients, but less so in patients with very low inspiratory flow rates. PMID:8980985
Adaptive numerical methods for partial differential equations
Colella, P.
1995-07-01
This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo
2013-06-01
In femtosecond laser ophthalmic surgery, tissue dissection is achieved by photodisruption based on laser-induced optical breakdown. In order to minimize collateral damage to the eye, laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile, which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as an eye model and determined breakdown threshold in single-pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wavefront error of only one third of the wavelength, the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kHz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery. PMID:23761849
Jiang, Wen Jun; Wittek, Peter; Zhao, Li; Gao, Shi Chao
2014-01-01
Photoplethysmogram (PPG) signals acquired by smartphone cameras are weaker than those acquired by dedicated pulse oximeters. Furthermore, the signals have lower sampling rates, have notches in the waveform and are more severely affected by baseline drift, leading to specific morphological characteristics. This paper introduces a new feature, the inverted triangular area, to address these specific characteristics. The new feature enables real-time adaptive waveform detection using an algorithm of linear time complexity. It can also recognize notches in the waveform and it is inherently robust to baseline drift. An implementation of the algorithm on Android is available for free download. We collected data from 24 volunteers and compared our algorithm in peak detection with two competing algorithms designed for PPG signals, Incremental-Merge Segmentation (IMS) and Adaptive Thresholding (ADT). A sensitivity of 98.0% and a positive predictive value of 98.8% were obtained, which were 7.7% higher than the IMS algorithm in sensitivity, and 8.3% higher than the ADT algorithm in positive predictive value. The experimental results confirmed the applicability of the proposed method. PMID:25570674
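The exact definition of the inverted triangular area feature is given in the paper; as a hedged sketch of the general idea only, the code below scores each sample by the signed (shoelace) area of the triangle formed with its neighbors at a fixed lag. The score is large in magnitude at peaks and notches and, because it depends only on local differences, is insensitive to slow baseline drift. The lag value, sign convention, and function name are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def triangular_area(signal, lag=5):
    """Signed area of the triangle through (i-lag, x[i-lag]), (i, x[i]),
    (i+lag, x[i+lag]), via the shoelace formula. A sharp peak yields a
    large negative value, a notch a large positive one; a linear trend
    (baseline drift) yields zero."""
    x = np.asarray(signal, float)
    n = len(x)
    area = np.zeros(n)
    for i in range(lag, n - lag):
        x1, y1 = i - lag, x[i - lag]
        x2, y2 = i, x[i]
        x3, y3 = i + lag, x[i + lag]
        area[i] = 0.5 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    return area
```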
A High-Throughput Method to Measure NaCl and Acid Taste Thresholds in Mice
Bachmanov, Alexander A.
2009-01-01
To develop a technique suitable for measuring NaCl taste thresholds in genetic studies, we conducted a series of experiments with outbred CD-1 mice using conditioned taste aversion (CTA) and two-bottle preference tests. In Experiment 1, we compared conditioning procedures involving either oral self-administration of LiCl or pairing NaCl intake with LiCl injections and found that thresholds were the lowest after LiCl self-administration. In Experiment 2, we compared different procedures (30-min and 48-h tests) for testing conditioned mice and found that the 48-h test is more sensitive. In Experiment 3, we examined the effects of varying strength of conditioned (NaCl or LiCl taste intensity) and unconditioned (LiCl toxicity) stimuli and concluded that 75–150 mM LiCl or its mixtures with NaCl are the optimal stimuli for conditioning by oral self-administration. In Experiment 4, we examined whether this technique is applicable for measuring taste thresholds for other taste stimuli. Results of these experiments show that conditioning by oral self-administration of LiCl solutions or its mixtures with other taste stimuli followed by 48-h two-bottle tests of concentration series of a conditioned stimulus is an efficient and sensitive method to measure taste thresholds. Thresholds measured with this technique were 2 mM for NaCl and 1 mM for citric acid. This approach is suitable for simultaneous testing of large numbers of animals, which is required for genetic studies. These data demonstrate that mice, like several other species, generalize CTA from LiCl to NaCl, suggesting that they perceive taste of NaCl and LiCl as qualitatively similar, and they also can generalize CTA of a binary mixture of taste stimuli to mixture components. PMID:19188279
Principles and Methods of Adapted Physical Education.
ERIC Educational Resources Information Center
Arnheim, Daniel D.; And Others
Programs in adapted physical education are presented, preceded by a background of services for the handicapped, by the psychosocial implications of disability, and by the growth and development of the handicapped. Elements of conducting programs discussed are organization and administration, class organization, facilities, exercise programs…
Adaptive method of realizing natural gradient learning for multilayer perceptrons.
Amari, S; Park, H; Fukumizu, K
2000-06-01
The natural gradient learning method is known to have ideal performance for on-line training of multilayer perceptrons. It avoids plateaus, which give rise to slow convergence of the backpropagation method. It is Fisher efficient, whereas the conventional method is not. However, for implementing the method, it is necessary to calculate the Fisher information matrix and its inverse, which is practically very difficult. This article proposes an adaptive method of directly obtaining the inverse of the Fisher information matrix. It generalizes the adaptive Gauss-Newton algorithms and provides a solid theoretical justification of them. Simulations show that the proposed adaptive method works very well for realizing natural gradient learning. PMID:10935719
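A sketch of the kind of rank-one recursion described above for tracking the inverse Fisher matrix directly, paired with a natural gradient parameter update. The fixed step sizes and the simple scheduling are assumptions for illustration, not the paper's prescription.

```python
import numpy as np

def adaptive_natural_gradient_step(theta, G_inv, grad, eps=0.01, lr=0.1):
    """One step of natural gradient learning with an adaptively estimated
    inverse Fisher matrix. The recursion
        G_inv <- (1 + eps) * G_inv - eps * (G_inv g)(G_inv g)^T
    updates the inverse directly from the per-example gradient g, so the
    Fisher matrix is never formed or inverted explicitly."""
    v = G_inv @ grad
    G_inv = (1 + eps) * G_inv - eps * np.outer(v, v)
    theta = theta - lr * (G_inv @ grad)   # natural gradient descent step
    return theta, G_inv
```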
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B
2015-10-01
Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.
2016-01-01
Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
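A much-simplified sketch in the spirit of SFT: split the image into segments, use segment statistics to identify likely background, and derive a threshold from the background statistics. The real method fits trends between the segment statistics rather than simply ranking segments by variability; the segment size, background fraction, and the factor k below are illustrative assumptions.

```python
import numpy as np

def segment_threshold(img, seg=8, bg_fraction=0.5, k=3.0):
    """Segment-based thresholding sketch: tile the image into seg x seg
    blocks, take the lowest-variability blocks as background, and call
    signal anything above background mean + k * background std."""
    h, w = (img.shape[0] // seg) * seg, (img.shape[1] // seg) * seg
    blocks = img[:h, :w].reshape(h // seg, seg, w // seg, seg).swapaxes(1, 2)
    means = blocks.mean(axis=(-2, -1)).ravel()
    stds = blocks.std(axis=(-2, -1)).ravel()
    order = np.argsort(stds)                       # quietest blocks first
    bg = order[: max(1, int(bg_fraction * len(order)))]
    thresh = means[bg].mean() + k * stds[bg].mean()
    return img > thresh, thresh
```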
New method to evaluate the 7Li(p, n)7Be reaction near threshold
NASA Astrophysics Data System (ADS)
Herrera, María S.; Moreno, Gustavo A.; Kreiner, Andrés J.
2015-04-01
In this work a complete description of the 7Li(p, n)7Be reaction near threshold is given using center-of-mass and relative coordinates. It is shown that this standard approach, not used before in this context, leads to a simple mathematical representation which gives easy access to all relevant quantities in the reaction and allows a precise numerical implementation. It also allows, in a simple way, the inclusion of proton beam-energy spread effects. The method, implemented as a C++ code, was validated with both numerical and experimental data, finding good agreement. This tool is also used here to analyze scattered published measurements such as (p, n) cross sections and differential and total neutron yields for thick targets. Using these data we derive a consistent set of parameters to evaluate neutron production near threshold. Sensitivity of the results to data uncertainty and the possibility of incorporating new measurements are also discussed.
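Near-threshold work of this kind starts from the reaction's threshold energy. As a quick check, the relativistic two-body threshold formula E_th = -Q·(Σm)/(2·m_target) applied to 7Li(p, n)7Be reproduces the familiar ≈1.88 MeV value; the rounded atomic masses below are assumptions taken from standard mass tables.

```python
# Threshold lab kinetic energy for a two-body reaction a + A -> b + B
# with negative Q-value: E_th = -Q * (m_a + m_A + m_b + m_B) / (2 * m_A).
U_TO_MEV = 931.494  # atomic mass unit in MeV

def threshold_energy(m_proj, m_targ, m_out1, m_out2):
    """Projectile lab kinetic energy (MeV) at reaction threshold;
    all masses in MeV."""
    q = m_proj + m_targ - m_out1 - m_out2   # Q-value (negative here)
    return -q * (m_proj + m_targ + m_out1 + m_out2) / (2.0 * m_targ)

# Atomic masses in u (electron masses cancel for a (p, n) reaction)
m_h1 = 1.007825 * U_TO_MEV
m_li7 = 7.016003 * U_TO_MEV
m_n = 1.008665 * U_TO_MEV
m_be7 = 7.016929 * U_TO_MEV

e_th = threshold_energy(m_h1, m_li7, m_n, m_be7)  # about 1.88 MeV
```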
Threshold-free method for three-dimensional segmentation of organelles
NASA Astrophysics Data System (ADS)
Chan, Yee-Hung M.; Marshall, Wallace F.
2012-03-01
An ongoing challenge in the field of cell biology is how to quantify the size and shape of organelles within cells. Automated image analysis methods often utilize thresholding for segmentation, but the calculated surface of objects depends sensitively on the exact threshold value chosen, and this problem is generally worse at the upper and lower z boundaries because of the anisotropy of the point spread function. We present here a threshold-independent method for extracting the three-dimensional surface of vacuoles in budding yeast whose limiting membranes are labeled with a fluorescent fusion protein. These organelles typically exist as a clustered set of 1-10 sphere-like compartments. Vacuole compartments and center points are identified manually within z-stacks taken using a spinning disk confocal microscope. A set of rays is defined originating from each center point and radiating outwards in random directions. Intensity profiles are calculated at coordinates along these rays, and intensity maxima are taken as the points where the rays cross the limiting membrane of the vacuole. These points are then fit with a weighted sum of basis functions to define the surface of the vacuole, from which parameters such as volume and surface area are calculated. This method is able to determine the volume and surface area of spherical beads (0.96 to 2 micron diameter) with less than 10% error, and validation using model convolution methods produces similar results. Thus, this method provides an accurate, automated method for measuring the size and morphology of organelles and can be generalized to measure cells and other objects on biologically relevant length-scales.
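The ray-sampling step described above can be sketched in a few lines: sample the volume along each ray from the center and take the distance of the intensity maximum as the membrane crossing. Nearest-voxel sampling, the step size, and the function name are assumptions; the paper's subsequent basis-function surface fit is omitted.

```python
import numpy as np

def radius_along_rays(volume, center, directions, n_steps=40, step=0.5):
    """For each unit direction, sample the volume intensity along a ray
    from `center` (nearest-voxel lookup) and return the distance of the
    intensity maximum, taken as the membrane crossing point."""
    center = np.asarray(center, float)
    radii = []
    for d in directions:
        ts = np.arange(1, n_steps + 1) * step
        pts = np.round(center + ts[:, None] * np.asarray(d, float)).astype(int)
        # keep only sample points inside the volume
        ok = np.all((pts >= 0) & (pts < np.array(volume.shape)), axis=1)
        vals = volume[pts[ok, 0], pts[ok, 1], pts[ok, 2]]
        radii.append(ts[ok][np.argmax(vals)])
    return np.array(radii)
```

On a synthetic bright spherical shell of radius 6 voxels, axis-aligned rays recover radii close to 6.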
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.
Adaptive method for electron bunch profile prediction
NASA Astrophysics Data System (ADS)
Scheinker, Alexander; Gessner, Spencer
2015-10-01
We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which are important for the plasma wakefield acceleration experiments being explored at FACET.
NASA Astrophysics Data System (ADS)
Gao, Lei; Chen, Wenchao; Wang, Baoli; Gao, Jinghuai
2014-05-01
In this paper, we present a new high-fidelity method for wave field separation of vertical seismic profiling (VSP) data. The method preserves the waveform characteristics and amplitude variation along the wave propagation. As a basic assumption, we take the wave field data of each flattened regular-wave event to be a low-rank matrix. We then formulate the VSP wave field separation problem as an optimization equation. To solve it, we combine block relaxation (BR) with singular value thresholding (SVT) to construct a new algorithm. We apply the proposed method to both synthetic and real data, and compare the results with those of the median-filter-based method, which is widely used in engineering practice. We conclude that the proposed method offers wave field separation with higher fidelity and a higher signal-to-noise ratio (SNR).
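The singular value thresholding operator at the core of such low-rank separation schemes is compact enough to sketch. This is a generic SVT illustration, not the authors' BR+SVT algorithm; the rank-1 "flattened event" and noise level are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm.

    Shrinks each singular value by tau, suppressing low-energy components
    while retaining the dominant low-rank structure of M.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

# Toy usage: recover a rank-1 flattened event buried in noise.
rng = np.random.default_rng(0)
event = np.outer(np.ones(30), np.sin(np.linspace(0, 3, 40)))  # rank-1 wavefield
noisy = event + 0.1 * rng.standard_normal(event.shape)
clean = svt(noisy, tau=1.5)
```

With tau above the largest noise singular value, the thresholded matrix is closer to the underlying event than the noisy input.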
Assessing Adaptive Instructional Design Tools and Methods in ADAPT[IT].
ERIC Educational Resources Information Center
Eseryel, Deniz; Spector, J. Michael
ADAPT[IT] (Advanced Design Approach for Personalized Training - Interactive Tools) is a European project within the Information Society Technologies program that is providing design methods and tools to guide a training designer according to the latest cognitive science and standardization principles. ADAPT[IT] addresses users in two significantly…
A simple method to estimate threshold friction velocity of wind erosion in the field
NASA Astrophysics Data System (ADS)
Li, Junran; Okin, Gregory S.; Herrick, Jeffrey E.; Belnap, Jayne; Munson, Seth M.; Miller, Mark E.
2010-05-01
This study provides a fast and easy-to-apply method to estimate the threshold friction velocity (TFV) of wind erosion in the field. Wind tunnel experiments and a variety of ground measurements, including air gun, pocket penetrometer, torvane, and roughness chain, were conducted in Moab, Utah, and cross-validated in the Mojave Desert, California. Patterns between TFV and ground measurements were examined to identify the optimum method for estimating TFV. The results show that TFVs were best predicted using the air gun and penetrometer measurements at the Moab sites. This empirical method, however, systematically underestimated TFVs at the Mojave Desert sites. Further analysis showed that TFVs at the Mojave sites can be satisfactorily estimated with a correction for rock cover, which is presumably the main cause of the underestimation. The proposed method may also be applied to estimate TFVs in environments where other non-erodible elements, such as post-harvest residues, are found.
Moving and adaptive grid methods for compressible flows
NASA Technical Reports Server (NTRS)
Trepanier, Jean-Yves; Camarero, Ricardo
1995-01-01
This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux-difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.
An adaptive pseudospectral method for discontinuous problems
NASA Technical Reports Server (NTRS)
Augenbaum, Jeffrey M.
1988-01-01
The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic pde's by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.
Adaptable radiation monitoring system and method
Archer, Daniel E.; Beauchamp, Brock R.; Mauger, G. Joseph; Nelson, Karl E.; Mercer, Michael B.; Pletcher, David C.; Riot, Vincent J.; Schek, James L.; Knapp, David A.
2006-06-20
A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.
Adaptive computational methods for aerothermal heating analysis
NASA Technical Reports Server (NTRS)
Price, John M.; Oden, J. Tinsley
1988-01-01
The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burger equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
Gülay, Arda; Smets, Barth F
2015-09-01
Exploring the variation in microbial community diversity between locations (β diversity) is a central topic in microbial ecology. Currently, there is no consensus on how to set the significance threshold for β diversity. Here, we describe and quantify the technical components of β diversity, including those associated with the process of subsampling. These components exist for any proposed β diversity measurement procedure. Further, we introduce a strategy to set significance thresholds for β diversity of any group of microbial samples using rarefaction, invoking the notion of a meta-community. The proposed technique was applied to several in silico generated operational taxonomic unit (OTU) libraries and experimental 16S rRNA pyrosequencing libraries. The latter represented microbial communities from different biological rapid sand filters at a full-scale waterworks. We observe that β diversity, after subsampling, is inflated by intra-sample differences; this inflation is avoided in the proposed method. In addition, microbial community evenness (Gini > 0.08) strongly affects all β diversity estimations due to bias associated with rarefaction. Where published methods to test β significance often fail, the proposed meta-community-based estimator is more successful at rejecting insignificant β diversity values. Applying our approach, we reveal the heterogeneous microbial structure of biological rapid sand filters both within and across filters. PMID:25534614
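A single rarefaction draw, the subsampling step whose bias the abstract discusses, can be sketched as follows. This is an illustrative helper, not the authors' meta-community estimator; the OTU counts and depth are toy values.

```python
import numpy as np

def rarefy(counts, depth, rng):
    """Subsample an OTU count vector to a fixed sequencing depth,
    drawing reads without replacement (one rarefaction draw)."""
    counts = np.asarray(counts)
    pool = np.repeat(np.arange(len(counts)), counts)  # one entry per read
    pick = rng.choice(pool, size=depth, replace=False)
    return np.bincount(pick, minlength=len(counts))

# usage: one rarefied draw from a 3-OTU sample of 10 reads
rng = np.random.default_rng(0)
sample = np.array([5, 3, 2])
rarefied = rarefy(sample, depth=4, rng=rng)
```

Repeating such draws and recomputing a diversity measure each time is what makes rarefaction-based significance thresholds stochastic, which is the source of the bias the abstract quantifies.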
Noninvasive method to estimate anaerobic threshold in individuals with type 2 diabetes
2011-01-01
Background: While several studies have identified the anaerobic threshold (AT) through the responses of blood lactate, ventilation and blood glucose, others have suggested the response of heart rate variability (HRV) as a method to identify the AT in young healthy individuals. However, the validity of HRV in estimating the lactate threshold (LT) and ventilatory threshold (VT) for individuals with type 2 diabetes (T2D) has not yet been investigated. Aim: To analyze the possibility of identifying the heart rate variability threshold (HRVT) by considering the responses of parasympathetic indicators during an incremental exercise test in subjects with type 2 diabetes (T2D) and non-diabetic individuals (ND). Methods: Nine T2D (55.6 ± 5.7 years, 83.4 ± 26.6 kg, 30.9 ± 5.2 kg.m2(-1)) and ten ND (50.8 ± 5.1 years, 76.2 ± 14.3 kg, 26.5 ± 3.8 kg.m2(-1)) underwent an incremental exercise test (IT) on a cycle ergometer. Heart rate (HR), rating of perceived exertion (RPE), blood lactate and expired gas concentrations were measured at the end of each stage. The HRVT was identified through the responses of the root mean square of successive differences between adjacent R-R intervals (RMSSD) and the standard deviation of instantaneous beat-to-beat R-R interval variability (SD1), considering the last 60 s of each incremental stage; these are referred to as HRVT-RMSSD and HRVT-SD1, respectively. Results: No differences were observed within groups for the exercise intensities corresponding to LT, VT, HRVT-RMSSD and HRVT-SD1. Furthermore, strong relationships were verified among the studied parameters both for T2D (r = 0.68 to 0.87) and ND (r = 0.91 to 0.98), and the Bland & Altman technique confirmed the agreement among them. Conclusion: HRVT identification by the proposed autonomic indicators (SD1 and RMSSD) was shown to be valid for estimating the LT and VT in both T2D and ND. PMID:21226946
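The two autonomic indicators used to identify the HRVT are standard HRV statistics and can be sketched directly from their definitions; the R-R series below is illustrative, not study data.

```python
import numpy as np

def rmssd(rr):
    """Root mean square of successive differences of R-R intervals."""
    d = np.diff(np.asarray(rr, dtype=float))
    return float(np.sqrt(np.mean(d ** 2)))

def sd1(rr):
    """Poincare-plot short-axis dispersion SD1: the spread of
    (RR_n - RR_n+1)/sqrt(2). Approximately equals RMSSD/sqrt(2)."""
    x = np.asarray(rr, dtype=float)
    return float(np.std((x[:-1] - x[1:]) / np.sqrt(2)))

# usage: a short series of R-R intervals in milliseconds
rr = [800, 810, 790, 805, 795]
```

In the study's scheme, these statistics are computed over the last 60 s of each incremental stage, and the HRVT is the stage at which the parasympathetic indicator stops decreasing.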
NASA Astrophysics Data System (ADS)
Tsuda, Yuki; Akiyoshi, Masanori; Samejima, Masaki; Oka, Hironori
In this paper, the authors propose a method for classifying inquiry e-mails to support the writing of FAQ (Frequently Asked Questions) entries, together with a mechanism for automatically setting judgment thresholds. In this method, a dictionary used for classifying inquiries is generated and updated automatically from statistical information about characteristic words in clusters, and each inquiry is classified to its proper cluster by using the dictionary. Threshold values are set automatically by using statistical information.
Adaptive finite-element method for diffraction gratings
NASA Astrophysics Data System (ADS)
Bao, Gang; Chen, Zhiming; Wu, Haijun
2005-06-01
A second-order finite-element adaptive strategy with error control for one-dimensional grating problems is developed. The unbounded computational domain is truncated to a bounded one by a perfectly-matched-layer (PML) technique. The PML parameters, such as the thickness of the layer and the medium properties, are determined through sharp a posteriori error estimates. The adaptive finite-element method is expected to increase significantly the accuracy and efficiency of the discretization as well as reduce the computation cost. Numerical experiments are included to illustrate the competitiveness of the proposed adaptive method.
Adaptive multiscale method for two-dimensional nanoscale adhesive contacts
NASA Astrophysics Data System (ADS)
Tong, Ruiting; Liu, Geng; Liu, Lan; Wu, Liyan
2013-05-01
There are two separate traditional approaches to modeling contact problems: continuum theory and atomistic theory. Continuum theory is used successfully in many domains, but when the scale of the model reaches the nanometer level, the continuum approximation meets challenges. Atomistic theory can capture the detailed behavior of individual atoms by using molecular dynamics (MD) or quantum mechanics; although accurate, it is usually time-consuming. A multiscale method coupling MD and finite elements (FE) is presented. To mesh the FE region automatically, an adaptive method based on the strain energy gradient is introduced into the multiscale method to constitute an adaptive multiscale method. Using the proposed method, adhesive contacts between a rigid cylinder and an elastic substrate are studied, and the results are compared with full MD simulations. The process of FE mesh refinement shows that the adaptive multiscale method makes FE mesh generation more flexible. Comparison of the displacements of boundary atoms in the overlap region with the results of full MD simulations indicates that the adaptive multiscale method transfers displacements effectively. Displacements of atoms and FE nodes on the center line of the multiscale model agree well with those of atoms in full MD simulations, which shows continuity in the overlap region. Furthermore, the von Mises stress contours and contact force distributions in the contact region are almost the same as in full MD simulations. The presented method combines a multiscale method with an adaptive technique, providing a more effective route for multiscale simulation and for the investigation of nanoscale contact problems.
Fast adaptive composite grid methods on distributed parallel architectures
NASA Technical Reports Server (NTRS)
Lemke, Max; Quinlan, Daniel
1992-01-01
The fast adaptive composite (FAC) grid method is compared with the asynchronous fast adaptive composite method (AFAC) under a variety of conditions, including vectorization and parallelization. Results are given for distributed-memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC and its superiority over FAC in a parallel environment are properties of the algorithm and not dependent on peculiarities of any machine.
NASA Technical Reports Server (NTRS)
Smith, Paul L.; VonderHaar, Thomas H.
1996-01-01
The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) one attempt involved choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) another involved a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.
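Under the fixed-threshold approach, the satellite ATI reduces to counting pixels colder than the chosen IR threshold in each image and accumulating area times time; rainfall volume is then estimated from a fitted rainfall-versus-ATI relationship. A minimal sketch, with illustrative threshold, pixel area, and time step:

```python
import numpy as np

def area_time_integral(frames, threshold, pixel_area_km2, dt_hours):
    """Area-time integral of a cloud cluster: the summed (area x time)
    that pixels spend at or below an IR brightness-temperature threshold."""
    frames = np.asarray(frames, dtype=float)
    # cold-pixel area in each frame, then accumulate over time
    areas = (frames <= threshold).sum(axis=(1, 2)) * pixel_area_km2
    return float(areas.sum() * dt_hours)

# usage: two 4x4 IR frames (brightness temperature, K), half-hourly sampling
f1 = np.full((4, 4), 300.0)
f1[0, :3] = 200.0          # 3 cold pixels
f2 = np.full((4, 4), 300.0)
f2[0, :] = 210.0           # 4 cold pixels
f2[1, 0] = 205.0           # +1 cold pixel
ati = area_time_integral([f1, f2], threshold=220.0,
                         pixel_area_km2=16.0, dt_hours=0.5)
```

The adaptive-threshold variants the abstract describes would replace the fixed `threshold` with a per-case or offset-adjusted value before the same accumulation.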
Adaptive upscaling with the dual mesh method
Guerillot, D.; Verdiere, S.
1997-08-01
The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity of considering different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer, even in homogeneous cases, because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way, by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinement in accurate prediction of damage levels and failure time.
An auto-adaptive background subtraction method for Raman spectra.
Xie, Yi; Yang, Lidong; Sun, Xilong; Wu, Dewen; Chen, Qizhen; Zeng, Yongming; Liu, Guokun
2016-05-15
Background subtraction is a crucial step in the preprocessing of a Raman spectrum. Usually, manual parameter tuning of the background subtraction method is necessary for efficient removal of the background, which makes the quality of the spectrum empirically dependent. In order to avoid artificial bias, we propose an auto-adaptive background subtraction method without parameter adjustment. The main procedure is: (1) select the local minima of the spectrum while preserving major peaks, (2) apply an interpolation scheme to estimate the background, and (3) design an iteration scheme to improve the adaptability of the background subtraction. Both simulated data and Raman spectra have been used to evaluate the proposed method. By comparing the backgrounds obtained with three widely applied methods, the polynomial, Baek's, and airPLS, the auto-adaptive method meets the demand of practical applications in terms of efficiency and accuracy. PMID:26950502
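The three-step procedure (select local minima while preserving peaks, interpolate a background estimate, iterate) can be sketched as follows; the clipping-style iteration shown here is one plausible reading of the scheme, not the authors' exact implementation, and the toy spectrum is illustrative.

```python
import numpy as np

def estimate_background(y, n_iter=20):
    """Iterative local-minima background estimate for a 1-D spectrum.

    Each pass interpolates through the current local minima and clips the
    estimate down to that interpolation, so peaks are progressively
    excluded while baseline regions are preserved.
    """
    x = np.arange(len(y))
    bg = np.asarray(y, dtype=float).copy()
    for _ in range(n_iter):
        # indices of local minima, with both endpoints kept as anchors
        idx = np.where((bg[1:-1] <= bg[:-2]) & (bg[1:-1] <= bg[2:]))[0] + 1
        idx = np.concatenate(([0], idx, [len(bg) - 1]))
        interp = np.interp(x, idx, bg[idx])
        bg = np.minimum(bg, interp)  # clip: never rise above the data
    return bg

# usage: a Raman-like peak sitting on a sloping baseline
x = np.linspace(0, 10, 200)
baseline = 5.0 + 0.3 * x
y = baseline + 8.0 * np.exp(-((x - 5.0) ** 2) / 0.05)
corrected = y - estimate_background(y)
```

On this example the estimate converges to the sloping baseline, leaving the peak intact in the corrected spectrum.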
Li, Xue; Xu, Yuan; Zhao, Gang; Shi, Chunli; Wang, Zhong-Liang; Wang, Yuqiu
2015-04-01
The eutrophication problem of a drinking water source is directly related to the security of the urban water supply, and phosphorus has proved to be an important element in the water quality of most northern-hemisphere lakes and reservoirs. In this paper, 15 years of monitoring records (1990-2004) of Yuqiao Reservoir were used to model the changing trend of total phosphorus (TP), analyze the uncertainty of nutrient parameters, and estimate the threshold for eutrophication management at a specific water quality goal by applying a Bayesian method through a chemical material balance (CMB) model. The results revealed that Yuqiao Reservoir is a P-controlled water ecosystem, and that the TP concentration in the reservoir was significantly correlated with the TP loading concentration, the hydraulic retention coefficient, and the bottom-water dissolved oxygen concentration. In the case where the water quality goal for TP in the reservoir was set to 0.05 mg L(-1) (the third level of the national surface water standard for reservoirs according to GB3838-2002), management measures could be taken to improve water quality in the reservoir by controlling the highest inflow phosphorus concentration (0.15-0.21 mg L(-1)) and the lowest DO concentration (3.76-5.59 mg L(-1)) to the threshold. An inverse method was applied to evaluate the joint management measures, and the results revealed that controlling the lowest dissolved oxygen concentration and adjusting the inflow and outflow of the reservoir is a valuable measure to avoid eutrophication. PMID:25792022
Track and vertex reconstruction: From classical to adaptive methods
Strandlie, Are; Fruehwirth, Rudolf
2010-04-15
This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.
Introduction to Adaptive Methods for Differential Equations
NASA Astrophysics Data System (ADS)
Eriksson, Kenneth; Estep, Don; Hansbo, Peter; Johnson, Claes
Knowing thus the Algorithm of this calculus, which I call Differential Calculus, all differential equations can be solved by a common method (Gottfried Wilhelm von Leibniz, 1646-1719).When, several years ago, I saw for the first time an instrument which, when carried, automatically records the number of steps taken by a pedestrian, it occurred to me at once that the entire arithmetic could be subjected to a similar kind of machinery so that not only addition and subtraction, but also multiplication and division, could be accomplished by a suitably arranged machine easily, promptly and with sure results. For it is unworthy of excellent men to lose hours like slaves in the labour of calculations, which could safely be left to anyone else if the machine was used. And now that we may give final praise to the machine, we may say that it will be desirable to all who are engaged in computations which, as is well known, are the managers of financial affairs, the administrators of others' estates, merchants, surveyors, navigators, astronomers, and those connected with any of the crafts that use mathematics (Leibniz).
Stability and error estimation for Component Adaptive Grid methods
NASA Technical Reports Server (NTRS)
Oliger, Joseph; Zhu, Xiaolei
1994-01-01
Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDEs) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAGs using the stability results. Using these estimates, the error can be controlled on CAGs. Thus, the solution can be computed efficiently on CAGs within a given error tolerance. Computational results for time-dependent linear problems in one and two space dimensions are presented.
NASA Astrophysics Data System (ADS)
Schneider, Kai; Roussel, Olivier; Farge, Marie
2007-11-01
Coherent Vortex Simulation is based on the wavelet decomposition of the flow into coherent and incoherent components. An adaptive multiresolution method using second-order finite volumes with explicit time discretization, a 2-4 MacCormack scheme, allows an efficient computation of the coherent flow on a dynamically adapted grid. Neglecting the influence of the incoherent background models turbulent dissipation. We present a CVS computation of a three-dimensional compressible time-developing mixing layer. We show the speed-up in CPU time with respect to DNS and the memory reduction obtained thanks to dynamical octree data structures. The impact of different filtering strategies is discussed, and it is found that isotropic wavelet thresholding of the Favre-averaged gradient of the momentum yields the most effective results.
NASA Technical Reports Server (NTRS)
Hirsch, David
2009-01-01
Spacecraft fire safety emphasizes fire prevention, which is achieved primarily through the use of fire-resistant materials. Materials selection for spacecraft is based on conventional flammability acceptance tests, along with prescribed quantity limitations and configuration control for items that fail or are questionable. ISO 14624-1 and -2 are the major methods used to evaluate the flammability of polymeric materials intended for use in the habitable environments of spacecraft. The methods are upward flame-propagation tests initiated in static environments, using a well-defined igniter flame at the bottom of the sample. The tests are conducted in the most severe flaming combustion environment expected in the spacecraft. The pass/fail test logic of ISO 14624-1 and -2 does not allow a quantitative comparison with reduced-gravity or microgravity test results; their use is therefore limited, and possibilities for in-depth theoretical analyses and realistic estimates of spacecraft fire extinguishment requirements are practically eliminated. To better understand the applicability of laboratory test data to actual spacecraft environments, a modified ISO 14624 protocol has been proposed that, as an alternative to qualifying materials as pass/fail in the worst-expected environments, measures the actual upward flammability limit of the material. A working group established by NASA to provide recommendations for exploration spacecraft internal atmospheres realized the importance of correlating laboratory data with real-life environments and recommended that NASA develop a flammability threshold test method. The working group indicated that, for the Constellation Program, flammability threshold information will allow NASA to identify materials with increased flammability risk from oxygen concentration and total pressure changes, minimize potential impacts, and allow for the development of sound requirements for new spacecraft and extravehicular landers and habitats.
Adaptive multiscale model reduction with Generalized Multiscale Finite Element Methods
NASA Astrophysics Data System (ADS)
Chung, Eric; Efendiev, Yalchin; Hou, Thomas Y.
2016-09-01
In this paper, we discuss a general multiscale model reduction framework based on multiscale finite element methods. We give a brief overview of related multiscale methods. Due to page limitations, the overview focuses on a few related methods and is not intended to be comprehensive. We present a general adaptive multiscale model reduction framework, the Generalized Multiscale Finite Element Method. Besides the method's basic outline, we discuss some important ingredients needed for the method's success. We also discuss several applications. The proposed method allows performing local model reduction in the presence of high contrast and no scale separation.
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.
1998-12-10
Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
A multigrid method for steady Euler equations on unstructured adaptive grids
NASA Technical Reports Server (NTRS)
Riemslagh, Kris; Dick, Erik
1993-01-01
A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction, performed only on the finest grid, is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured, a Jacobi-type scheme is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removal. The adaptation cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar
2012-10-01
Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time. PMID:22874883
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics
Anderson, R W; Pember, R B; Elliott, N S
2004-01-28
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics
Anderson, R W; Pember, R B; Elliott, N S
2002-10-19
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.
Stutz, William E.; Bolnick, Daniel I.
2014-01-01
Genes of the vertebrate major histocompatibility complex (MHC) are of great interest to biologists because of their important role in immunity and disease, and their extremely high levels of genetic diversity. Next generation sequencing (NGS) technologies are quickly becoming the method of choice for high-throughput genotyping of multi-locus templates like MHC in non-model organisms. Previous approaches to genotyping MHC genes using NGS technologies suffer from two problems: 1) a “gray zone” where low frequency alleles and high frequency artifacts can be difficult to disentangle and 2) a similar sequence problem, where very similar alleles can be difficult to distinguish as two distinct alleles. Here we present a new method for genotyping MHC loci – Stepwise Threshold Clustering (STC) – that addresses these problems by taking full advantage of the increase in sequence data provided by NGS technologies. Unlike previous approaches for genotyping MHC with NGS data that attempt to classify individual sequences as alleles or artifacts, STC uses a quasi-Dirichlet clustering algorithm to cluster similar sequences at increasing levels of sequence similarity. By applying frequency and similarity based criteria to clusters rather than individual sequences, STC is able to successfully identify clusters of sequences that correspond to individual or similar alleles present in the genomes of individual samples. Furthermore, STC does not require duplicate runs of all samples, increasing the number of samples that can be genotyped in a given project. We show how the STC method works using a single sample library. We then apply STC to 295 threespine stickleback (Gasterosteus aculeatus) samples from four populations and show that neighboring populations differ significantly in MHC allele pools. We show that STC is a reliable, accurate, efficient, and flexible method for genotyping MHC that will be of use to biologists interested in a variety of downstream applications. PMID
Adaptive wavelet collocation method simulations of Rayleigh-Taylor instability
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Livescu, D.; Vasilyev, O. V.
2010-12-01
Numerical simulations of single-mode, compressible Rayleigh-Taylor instability are performed using the adaptive wavelet collocation method (AWCM), which utilizes wavelets for dynamic grid adaptation. Due to the physics-based adaptivity and direct error control of the method, AWCM is ideal for resolving the wide range of scales present in the development of the instability. The problem is initialized consistent with the solutions from linear stability theory. Non-reflecting boundary conditions are applied to prevent the contamination of the instability growth by pressure waves created at the interface. AWCM is used to perform direct numerical simulations that match the early-time linear growth, the terminal bubble velocity and a reacceleration region.
Adaptive computational methods for SSME internal flow analysis
NASA Technical Reports Server (NTRS)
Oden, J. T.
1986-01-01
Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods) in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given and some pertinent conclusions of the study are listed.
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Mamalakis, Antonios; Puliga, Michelangelo; Deidda, Roberto
2016-04-01
In extreme excess modeling, one fits a generalized Pareto (GP) distribution to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as nonparametric methods that are intended to locate the changing point between extreme and nonextreme regions of the data, graphical methods where one studies the dependence of GP-related metrics on the threshold level u, and Goodness-of-Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u for which a GP distribution model is applicable. Here we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 over-centennial daily rainfall records from the NOAA-NCDC database. We find that nonparametric methods are generally not reliable, while methods that are based on GP asymptotic properties lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low; i.e., on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on preasymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results.
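The graphical approach reviewed above, in which one studies how GP parameter estimates vary with the threshold level u, can be sketched in a few lines. The following Python sketch uses `scipy.stats.genpareto`; the function names and the synthetic data are our own illustration, not the paper's code or records:

```python
import numpy as np
from scipy.stats import genpareto

def gpd_fit_over_thresholds(data, thresholds, min_excesses=50):
    """Fit a GP distribution to the excesses above each candidate threshold u.
    Threshold-stability plots of the resulting shape/scale estimates underpin
    the graphical selection methods discussed in the abstract."""
    fits = []
    for u in thresholds:
        excesses = data[data > u] - u
        if excesses.size < min_excesses:   # too few excesses for a stable fit
            continue
        shape, _, scale = genpareto.fit(excesses, floc=0.0)
        fits.append((u, shape, scale))
    return fits

# synthetic wet-day amounts with a genuine GP tail (shape 0.15, roughly the
# range quoted for rainfall)
rng = np.random.default_rng(42)
rain = genpareto.rvs(0.15, loc=0.0, scale=8.0, size=5000, random_state=rng)
fits = gpd_fit_over_thresholds(rain, np.arange(2.0, 20.0, 2.0))
# above a valid threshold the fitted shapes should stabilize near 0.15
```

By the threshold-stability property of the GP family, excesses of a GP sample above any higher threshold are again GP with the same shape, which is exactly what the plot of `fits` is meant to reveal.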
NASA Astrophysics Data System (ADS)
Gong, He; Fan, Yubo; Zhang, Ming
2008-04-01
The objective of this paper is to identify the effects of mechanical disuse and basic multi-cellular unit (BMU) activation threshold on the form of trabecular bone during menopause. A bone adaptation model with mechanical-biological factors at the BMU level was integrated with finite element analysis to simulate the changes of trabecular bone structure during menopause. Mechanical disuse and changes in the BMU activation threshold were applied to the model for the period from 4 years before to 4 years after menopause. The changes in bone volume fraction, trabecular thickness and fractal dimension of the trabecular structures were used to quantify the changes of trabecular bone in three different cases associated with mechanical disuse and BMU activation threshold. It was found that the changes in the simulated bone volume fraction were highly correlated and consistent with clinical data, that the trabecular thickness reduced significantly during menopause and was highly linearly correlated with the bone volume fraction, and that the trend in the fractal dimension of the simulated trabecular structure corresponded with clinical observations. The numerical simulation in this paper may help to better understand the relationship between bone morphology and the mechanical and biological environment, and provides a quantitative computational model and methodology for simulating bone structural and morphological changes caused by the mechanical and/or biological environment.
Cox-Davenport, Rebecca A; Phelan, Julia C
2015-05-01
First-time NCLEX-RN pass rates are an important indicator of nursing school success and quality. Nursing schools use different methods to anticipate NCLEX outcomes and help prevent student failure and possible threat to accreditation. This study evaluated the impact of a shift in NCLEX preparation policy at a BSN program in the southeast United States. The policy shifted from the use of predictor score thresholds to determine graduation eligibility to a more proactive remediation strategy involving adaptive quizzing. A descriptive correlational design evaluated the impact of an adaptive quizzing system designed to give students ongoing active practice and feedback and explored the relationship between predictor examinations and NCLEX success. Data from student usage of the system as well as scores on predictor tests were collected for three student cohorts. Results revealed a positive correlation between adaptive quizzing system usage and content mastery. Two of the 69 students in the sample did not pass the NCLEX. With so few students failing the NCLEX, predictability of any course variables could not be determined. The power of predictor examinations to predict NCLEX failure could also not be supported. The most consistent factor among students, however, was their content mastery level within the adaptive quizzing system. Implications of these findings are discussed. PMID:25851560
Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes
2016-01-01
The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with other four TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which besides a prediction and an update stage (as in classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions, than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is
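As background to the record above: the classical Otsu method, which the improved TSM builds on, picks the gray level that maximizes the between-class variance of the histogram. A minimal NumPy sketch of that baseline (our own illustration, not the authors' improved variant):

```python
import numpy as np

def otsu_threshold(img):
    """Classical Otsu: choose the gray level that maximizes the
    between-class variance of the background/foreground split."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    w = np.cumsum(p)                        # class-0 probability at each level
    mu = np.cumsum(p * np.arange(256))      # cumulative first moment
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w - mu) ** 2 / (w * (1.0 - w))
    return int(np.nanargmax(sigma_b))

# bimodal test "image": dark background around 60, bright objects around 190
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(60, 10, 5000),
                              rng.normal(190, 10, 5000)]), 0, 255)
t = otsu_threshold(img)   # lands between the two modes
```

For a well-separated bimodal histogram the maximizer sits between the two modes, which is why Otsu-style thresholds separate foreground sonar features from background.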
NASA Astrophysics Data System (ADS)
Mazas, Franck; Hamm, Luc; Kergadallan, Xavier
2013-04-01
In France, the storm Xynthia of February 27-28th, 2010 reminded engineers and stakeholders of the necessity for an accurate estimation of extreme sea levels for the risk assessment in coastal areas. Traditionally, two main approaches exist for the statistical extrapolation of extreme sea levels: the direct approach performs a direct extrapolation on the sea level data, while the indirect approach carries out a separate analysis of the deterministic component (astronomical tide) and stochastic component (meteorological residual, or surge). When the tidal component is large compared with the surge one, the latter approach is known to perform better. In this approach, the statistical extrapolation is performed on the surge component, then the distribution of extreme sea levels is obtained by convolution of the tide and surge distributions. This model is often referred to as the Joint Probability Method. Different models from the univariate extreme theory have been applied in the past for extrapolating extreme surges, in particular the Annual Maxima Method (AMM) and the r-largest method. In this presentation, we apply the Peaks-Over-Threshold (POT) approach for declustering extreme surge events, coupled with the Poisson-GPD model for fitting extreme surge peaks. This methodology allows a sound estimation of both lower and upper tails of the stochastic distribution, including the estimation of the uncertainties associated with the fit by computing the confidence intervals. After convolution with the tide signal, the model yields the distribution for the whole range of possible sea level values. Particular attention is paid to the necessary distinction between sea level values observed at a regular time step, such as hourly, and sea level events, such as those occurring during a storm. Extremal indexes for both surges and levels are thus introduced. This methodology will be illustrated with a case study at Brest, France.
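The convolution step of the Joint Probability Method can be illustrated on a discretized grid. The densities below are stand-ins chosen for the sketch (the arcsine law is the exact level density of a sinusoidal tide; an exponential replaces the fitted Poisson-GPD surge model), not the Brest data of the study:

```python
import numpy as np

# grid of water levels (metres about mean sea level); dz is the step
z = np.linspace(-3.0, 3.0, 1201)
dz = z[1] - z[0]

# tide: a sinusoidal tide of amplitude A has an arcsine level density
A = 2.0
tide = np.where(np.abs(z) < A,
                1.0 / (np.pi * np.sqrt(np.clip(A**2 - z**2, 1e-12, None))), 0.0)
tide /= tide.sum() * dz                 # renormalize after discretization

# surge: illustrative exponential density on positive residuals
lam = 2.0
surge = np.where(z >= 0, lam * np.exp(-lam * z), 0.0)
surge /= surge.sum() * dz

# sea level = tide + surge, so its density is the convolution of the two
level = np.convolve(tide, surge) * dz             # supported on [-6, 6]
zc = np.linspace(2 * z[0], 2 * z[-1], 2 * z.size - 1)
p_exceed = level[zc > 3.0].sum() * dz             # e.g. P(level > 3 m)
```

Note the convolved density exceeds the tide's own support: even though the tide never passes A = 2 m, surge events push total levels beyond it, which is precisely what the extrapolated upper tail quantifies.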
NASA Astrophysics Data System (ADS)
Susrama, I. G.; Purnama, K. E.; Purnomo, M. H.
2016-01-01
Oligospermia is a male fertility issue defined as a low sperm concentration in the ejaculate: normal sperm concentration is 20-120 million/ml, while oligospermia patients have a concentration below 20 million/ml. Sperm tests are performed in the fertility laboratory to determine oligospermia by examining fresh sperm according to the 2010 WHO standards [9]. The sperm are viewed under a microscope using a Neubauer improved counting chamber and counted manually. To automate this count, this research developed a system to analyse and count sperm concentration, called Automated Analysis of Sperm Concentration Counters (A2SC2), using Otsu threshold segmentation and morphological processing. The sperm data used were fresh samples from 10 people, analysed directly in the laboratory. Tests using the A2SC2 method obtained an accuracy of 91%. Thus, in this study, A2SC2 can be used to calculate the number and concentration of sperm automatically.
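Once segmentation produces a binary image, counting cells reduces to labeling connected components. A hypothetical flood-fill sketch of that counting step (our own illustration, not the A2SC2 code):

```python
import numpy as np
from collections import deque

def count_objects(binary):
    """Count 4-connected foreground components in a boolean image
    by breadth-first flood fill."""
    seen = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                count += 1                      # new component found
                q = deque([(i, j)])
                seen[i, j] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

# tiny demo mask with two separate blobs
demo = np.zeros((20, 20), dtype=bool)
demo[2:5, 2:5] = True
demo[10:15, 8:12] = True
n_objects = count_objects(demo)   # 2
```

In a real pipeline one would run this (or a library equivalent such as `scipy.ndimage.label`) on the thresholded, morphologically cleaned image and convert the count to a concentration via the counting-chamber volume.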
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
A Conditional Exposure Control Method for Multidimensional Adaptive Testing
ERIC Educational Resources Information Center
Finkelman, Matthew; Nering, Michael L.; Roussos, Louis A.
2009-01-01
In computerized adaptive testing (CAT), ensuring the security of test items is a crucial practical consideration. A common approach to reducing item theft is to define maximum item exposure rates, i.e., to limit the proportion of examinees to whom a given item can be administered. Numerous methods for controlling exposure rates have been proposed…
Adaptive frequency estimation by MUSIC (Multiple Signal Classification) method
NASA Astrophysics Data System (ADS)
Karhunen, Juha; Nieminen, Esko; Joutsensalo, Jyrki
During the last years, the eigenvector-based method called MUSIC has become very popular for estimating the frequencies of sinusoids in additive white noise. Adaptive realizations of the MUSIC method are studied using simulated data. Several of the adaptive realizations seem to give results in practice that are as good as those of the nonadaptive standard realization. The only exceptions are instantaneous gradient-type algorithms, which need considerably more samples to achieve comparable performance. A new method is proposed for constructing initial estimates of the signal subspace. The method often dramatically improves the performance of instantaneous gradient-type algorithms. The new signal subspace estimate can also be used to define a frequency estimator directly or to simplify eigenvector computation.
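The nonadaptive standard realization that the adaptive variants are compared against amounts to an eigendecomposition of a sample covariance followed by a scan of the noise-subspace projection. A minimal sketch under our own parameter choices (snapshot length, grid size), not the paper's implementation:

```python
import numpy as np

def music_spectrum(x, n_exp, m=20, n_grid=2000):
    """MUSIC pseudospectrum for a signal containing n_exp complex
    exponentials (a real sinusoid contributes two)."""
    N = len(x)
    X = np.array([x[i:i + m] for i in range(N - m)])    # snapshot matrix
    R = X.T @ X / (N - m)                               # sample covariance
    _, V = np.linalg.eigh(R)                            # ascending eigenvalues
    En = V[:, :m - n_exp]                               # noise subspace
    f = np.linspace(0.0, 0.5, n_grid)
    a = np.exp(-2j * np.pi * np.outer(np.arange(m), f)) # steering vectors
    # peaks where the steering vector is orthogonal to the noise subspace
    return f, 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2

# one real sinusoid (two complex exponentials) in white noise
rng = np.random.default_rng(0)
t = np.arange(500)
x = np.cos(2 * np.pi * 0.1 * t) + 0.3 * rng.standard_normal(500)
f, P = music_spectrum(x, n_exp=2)
f_hat = f[np.argmax(P)]    # close to the true frequency 0.1
```

The adaptive realizations studied in the abstract replace the batch eigendecomposition with recursive subspace tracking; the pseudospectrum scan is the same.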
Adaptive reconnection-based arbitrary Lagrangian Eulerian method
Bo, Wurigen; Shashkov, Mikhail
2015-07-21
We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.
NASA Astrophysics Data System (ADS)
Butler, John S.; Molloy, Anna; Williams, Laura; Kimmich, Okka; Quinlivan, Brendan; O'Riordan, Sean; Hutchinson, Michael; Reilly, Richard B.
2015-08-01
Objective. Recent studies have proposed that the temporal discrimination threshold (TDT), the shortest detectable time period between two stimuli, is a possible endophenotype for adult onset idiopathic isolated focal dystonia (AOIFD). Patients with AOIFD, the third most common movement disorder, and their first-degree relatives have been shown to have abnormal visual and tactile TDTs. For this reason it is important to fully characterize each participant’s data. To date the TDT has only been reported as a single value. Approach. Here, we fit individual participant data with a cumulative Gaussian to extract the mean and standard deviation of the distribution. The mean represents the point of subjective equality (PSE), the inter-stimulus interval at which participants are equally likely to respond that two stimuli are one stimulus (synchronous) or two different stimuli (asynchronous). The standard deviation represents the just noticeable difference (JND) which is how sensitive participants are to changes in temporal asynchrony around the PSE. We extended this method by submitting the data to a non-parametric bootstrapped analysis to get 95% confidence intervals on individual participant data. Main results. Both the JND and PSE correlate with the TDT value but are independent of each other. Hence this suggests that they represent different facets of the TDT. Furthermore, we divided groups by age and compared the TDT, PSE, and JND values. The analysis revealed a statistical difference for the PSE which was only trending for the TDT. Significance. The analysis method will enable deeper analysis of the TDT to leverage subtle differences within and between control and patient groups, not apparent in the standard TDT measure.
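The fitting procedure described in the Approach section, a cumulative Gaussian whose mean is the PSE and whose standard deviation is the JND, plus a bootstrap for confidence intervals, can be sketched as follows. The synthetic per-participant data and parameter values are our own illustration:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(isi, pse, jnd):
    """P("two stimuli") as a cumulative Gaussian of inter-stimulus interval:
    loc is the PSE, scale is the JND."""
    return norm.cdf(isi, loc=pse, scale=jnd)

# hypothetical participant: proportion "asynchronous" at each ISI (ms)
rng = np.random.default_rng(3)
isi = np.array([10., 20., 30., 40., 50., 60., 70., 80.])
n_trials = 40
p_obs = rng.binomial(n_trials, cum_gauss(isi, 42.0, 9.0)) / n_trials

bounds = ([0.0, 0.1], [100.0, 50.0])   # keep PSE and JND in a sane range
(pse, jnd), _ = curve_fit(cum_gauss, isi, p_obs, p0=[40.0, 10.0], bounds=bounds)

# non-parametric bootstrap: resample the binomial counts, refit, take percentiles
boot_pse = [curve_fit(cum_gauss, isi,
                      rng.binomial(n_trials, p_obs) / n_trials,
                      p0=[40.0, 10.0], bounds=bounds)[0][0]
            for _ in range(200)]
ci_low, ci_high = np.percentile(boot_pse, [2.5, 97.5])
```

As the abstract notes, the two fitted quantities answer different questions: the PSE locates the synchrony/asynchrony boundary, while the JND measures how sharply the participant transitions across it.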
Method and system for environmentally adaptive fault tolerant computing
NASA Technical Reports Server (NTRS)
Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)
2010-01-01
A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
Workshop on adaptive grid methods for fusion plasmas
Wiley, J.C.
1995-07-01
The author describes a general `hp` finite element method with adaptive grids. The code was based on the work of Oden, et al. The term `hp` refers to the method of spatial refinement (h), in conjunction with the order of the polynomials used as part of the finite element discretization (p). This finite element code seems to handle well the different mesh sizes occurring between abutted grids with different resolutions.
Solving Chemical Master Equations by an Adaptive Wavelet Method
Jahnke, Tobias; Galan, Steffen
2008-09-01
Solving chemical master equations is notoriously difficult due to the tremendous number of degrees of freedom. We present a new numerical method which efficiently reduces the size of the problem in an adaptive way. The method is based on a sparse wavelet representation and an algorithm which, in each time step, detects the essential degrees of freedom required to approximate the solution up to the desired accuracy.
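The core idea, adaptively shrinking the solved state space to the currently essential degrees of freedom, can be illustrated with a much simpler device than the paper's sparse wavelet representation: an adaptively truncated birth-death master equation. This toy example is our own and stands in for the general principle only:

```python
import numpy as np

# toy birth-death process: constant birth rate kb, death rate kd*n;
# the stationary distribution is Poisson with mean kb/kd = 25
kb, kd = 5.0, 0.2

def rhs(p):
    """CME right-hand side on the truncated state space 0..len(p)-1."""
    n = np.arange(len(p))
    out = -(kb + kd * n) * p          # probability flux out of each state
    out[1:] += kb * p[:-1]            # birth transitions n-1 -> n
    out[:-1] += kd * n[1:] * p[1:]    # death transitions n+1 -> n
    return out

def step(p, dt, tol=1e-12):
    p = p + dt * rhs(p)               # explicit Euler step
    if p[-1] > tol:                   # adapt: grow support where mass arrives
        p = np.append(p, 0.0)
    while len(p) > 2 and p[-1] < tol and p[-2] < tol:
        p = p[:-1]                    # adapt: shrink where mass is negligible
    return p / p.sum()                # repair the small truncation leak

p = np.array([1.0])                   # start in state n = 0
for _ in range(6000):                 # integrate to t = 60
    p = step(p, 0.01)
mean = np.arange(len(p)) @ p          # approaches kb/kd = 25
```

Here only a few dozen states are ever stored out of an unbounded lattice; the wavelet method of the abstract achieves the analogous compression in each time step for much larger multi-species problems.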
ICASE/LaRC Workshop on Adaptive Grid Methods
NASA Technical Reports Server (NTRS)
South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)
1995-01-01
Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.
NASA Astrophysics Data System (ADS)
Liu, Lixin; Bian, Hongyu; Yagi, Shin-ichi; Yang, Xiaodong
2016-07-01
Raw sonar images may not be used for underwater detection or recognition directly, because disturbances such as grating-lobe and multi-path effects distort the gray-level distribution of sonar images and cause phantom echoes. To search for a more robust segmentation method with a reasonable computational cost, a prior-knowledge-based threshold segmentation method for underwater linear object detection is discussed. The possibility of guiding the segmentation threshold evolution of forward-looking sonar images using prior knowledge is verified by experiment. During the threshold evolution, the collinear relation of two lines that correspond to double peaks in the voting space of the edged image is used as the criterion of termination. The interaction is reflected in the sense that the Hough transform provides the basis of the collinear relation of lines, while the binary image generated from the current threshold provides the input of the Hough transform. The experimental results show that the proposed method maintains a good tradeoff between segmentation quality and computational time in comparison with conventional segmentation methods, and lends itself to further processing for unsupervised underwater visual understanding.
An Adaptive Cross-Architecture Combination Method for Graph Traversal
You, Yang; Song, Shuaiwen; Kerbyson, Darren J.
2014-06-18
Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
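The top-down/bottom-up combination being optimized can be sketched in a few lines. The fixed frontier-fraction switch below is a simple stand-in for the paper's regression-predicted switching point:

```python
def hybrid_bfs(adj, source, alpha=0.05):
    """BFS that switches from top-down to bottom-up expansion once the
    frontier exceeds a fraction alpha of the vertices. The paper instead
    predicts this switching point at runtime with a regression model."""
    n = len(adj)
    dist = {source: 0}
    frontier = {source}
    level = 0
    while frontier:
        if len(frontier) > alpha * n:     # bottom-up: scan unvisited vertices
            nxt = {v for v in adj if v not in dist
                   and any(u in frontier for u in adj[v])}
        else:                             # top-down: expand frontier edges
            nxt = {w for u in frontier for w in adj[u] if w not in dist}
        level += 1
        dist.update((v, level) for v in nxt)
        frontier = nxt
    return dist

# small undirected graph as an adjacency-list dict
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
dist = hybrid_bfs(adj, 0)
```

The bottom-up direction wins when the frontier is large (most unvisited vertices find a frontier parent quickly), while top-down wins on small frontiers; a poorly placed switch pays for the worse of the two, which is what motivates predicting it rather than searching for it.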
An adaptive over/under data combination method
NASA Astrophysics Data System (ADS)
He, Jian-Wei; Lu, Wen-Kai; Li, Zhong-Xiao
2013-12-01
The traditional "dephase and sum" algorithms for over/under data combination estimate the ghost operator by assuming a calm sea surface. However, the real sea surface is typically rough, which invalidates the calm sea surface assumption. Hence, the traditional "dephase and sum" algorithms might produce poor-quality results in rough sea conditions. We propose an adaptive over/under data combination method, which adaptively estimates the amplitude spectrum of the ghost operator from the over/under data, and then over/under data combinations are implemented using the estimated ghost operators. A synthetic single shot gather is used to verify the performance of the proposed method in rough sea surface conditions and a real triple over/under dataset demonstrates the method performance.
An Adaptive Derivative-based Method for Function Approximation
Tong, C
2008-10-22
To alleviate the high computational cost of large-scale multi-physics simulations to study the relationships between the model parameters and the outputs of interest, response surfaces are often used in place of the exact functional relationships. This report explores a method for response surface construction using adaptive sampling guided by derivative information at each selected sample point. This method is especially suitable for applications that can readily provide added information such as gradients and Hessian with respect to the input parameters under study. When higher order terms (third and above) in the Taylor series are negligible, the approximation error for this method can be controlled. We present details of the adaptive algorithm and numerical results on a few test problems.
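A minimal one-dimensional sketch of derivative-guided adaptive sampling, assuming exact gradient and Hessian evaluations are available. The report targets multi-dimensional response surfaces; this only illustrates the Taylor-model-plus-refinement idea, and the probe grid and tolerance are illustrative choices.

```python
import math

def taylor2(x, x0, f0, g0, h0):
    """Second-order Taylor model of f around sample point x0."""
    d = x - x0
    return f0 + g0 * d + 0.5 * h0 * d * d

def build_surrogate(f, g, h, a, b, tol=1e-3):
    """Adaptive sampling: evaluate f, f', f'' at sample points and refine
    wherever the nearest-sample Taylor model disagrees most with f on a
    probe grid, until the worst error drops below tol."""
    xs = [a, b]
    def model(x):
        x0 = min(xs, key=lambda s: abs(x - s))  # nearest sample point
        return taylor2(x, x0, f(x0), g(x0), h(x0))
    while True:
        probes = [a + (b - a) * i / 200 for i in range(201)]
        worst = max(probes, key=lambda x: abs(model(x) - f(x)))
        if abs(model(worst) - f(worst)) < tol:
            return xs, model
        xs.append(worst)  # new sample where the surrogate is worst

# exp is its own first and second derivative, so g = h = f here
xs, model = build_surrogate(math.exp, math.exp, math.exp, 0.0, 1.0)
err = max(abs(model(i / 200) - math.exp(i / 200)) for i in range(201))
print(len(xs), err)
```

When third-order Taylor terms are small, each refinement shrinks the local error cubically in the sample spacing, which is the error-control property the report describes.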
Development of a dynamically adaptive grid method for multidimensional problems
NASA Astrophysics Data System (ADS)
Holcomb, J. E.; Hindman, R. G.
1984-06-01
An approach to solution adaptive grid generation for use with finite difference techniques, previously demonstrated on model problems in one space dimension, has been extended to multidimensional problems. The method is based on the popular elliptic steady grid generators, but is 'dynamically' adaptive in the sense that a grid is maintained at all times satisfying the steady grid law driven by a solution-dependent source term. Testing has been carried out on Burgers' equation in one and two space dimensions. Results appear encouraging both for inviscid wave propagation cases and viscous boundary layer cases, suggesting that application to practical flow problems is now possible. In the course of the work, obstacles relating to grid correction, smoothing of the solution, and elliptic equation solvers have been largely overcome. Concern remains, however, about grid skewness, boundary layer resolution and the need for implicit integration methods. Also, the method in 3-D is expected to be very demanding of computer resources.
Technology Transfer Automated Retrieval System (TEKTRAN)
Studies using the Time Temperature Threshold (TTT) method for irrigation scheduling have been documented for cotton, corn, and soybean. However, there are limited studies of the irrigation management of grain sorghum (Sorghum bicolor, L.) with this plant-feedback system. In this two-year study, th...
Adaptive neural network nonlinear control for BTT missile based on the differential geometry method
NASA Astrophysics Data System (ADS)
Wu, Hao; Wang, Yongji; Xu, Jiangsheng
2007-11-01
A new nonlinear control strategy incorporating the differential geometry method with adaptive neural networks is presented for the nonlinear coupling system of a Bank-to-Turn missile in the reentry phase. The basic control law is designed using the differential geometry feedback linearization method, and online learning neural networks are used to compensate for system errors due to aerodynamic parameter errors and external disturbance, exploiting the arbitrary nonlinear mapping and rapid online learning abilities of multi-layer neural networks. The online weight and threshold tuning rules are deduced from the tracking error performance functions by the Levenberg-Marquardt algorithm, which makes the learning process faster and more stable. The six-degree-of-freedom simulation results show that the attitude angles track the desired trajectory precisely, indicating that the proposed strategy effectively enhances the stability, tracking performance and robustness of the control system.
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron
1998-12-08
Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
Marin, Diego; Gegundez-Arias, Manuel E; Suero, Angel; Bravo, Jose M
2015-02-01
Development of automatic retinal disease diagnosis systems based on retinal image computer analysis can provide remarkably quicker screening programs for early detection. Such systems are mainly focused on the detection of the earliest ophthalmic signs of illness and require previous identification of fundal landmark features such as optic disc (OD), fovea or blood vessels. A methodology for accurate center-position location and OD retinal region segmentation on digital fundus images is presented in this paper. The methodology performs a set of iterative opening-closing morphological operations on the original retinography intensity channel to produce a bright region-enhanced image. Taking blood vessel confluence at the OD into account, a 2-step automatic thresholding procedure is then applied to obtain a reduced region of interest, where the center and the OD pixel region are finally obtained by performing the circular Hough transform on a set of OD boundary candidates generated through the application of the Prewitt edge detector. The methodology was evaluated on 1200 and 1748 fundus images from the publicly available MESSIDOR and MESSIDOR-2 databases, acquired from diabetic patients and thus being clinical cases of interest within the framework of automated diagnosis of retinal diseases associated with diabetes mellitus. This methodology proved highly accurate in OD-center location: average Euclidean distance between the methodology-provided and actual OD-center position was 6.08, 9.22 and 9.72 pixels for retinas of 910, 1380 and 1455 pixels in size, respectively. On the other hand, OD segmentation evaluation was performed in terms of Jaccard and Dice coefficients, as well as the mean average distance between estimated and actual OD boundaries. Comparison with the results reported by other reviewed OD segmentation methodologies shows our proposal renders better overall performance. Its effectiveness and robustness make this proposed automated OD location and
Advanced numerical methods in mesh generation and mesh adaptation
Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A
2010-01-01
Numerical solution of partial differential equations requires appropriate meshes, efficient solvers and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to build simplicial meshes efficiently. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider the convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
NASA Technical Reports Server (NTRS)
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Jones, Jeffrey A.
1997-01-01
One of the TRMM radar products of interest is the monthly-averaged rain rates over 5 x 5 degree cells. Clearly, the most direct way of calculating these and similar statistics is to compute them from the individual estimates made over the instantaneous field of view of the instrument (4.3 km horizontal resolution). An alternative approach is the use of a threshold method. It has been established that over sufficiently large regions the fractional area above a rain rate threshold and the area-average rain rate are well correlated for particular choices of the threshold [e.g., Kedem et al., 1990]. A straightforward application of this method to the TRMM data would consist of the conversion of the individual reflectivity factors to rain rates followed by a calculation of the fraction of these that exceed a particular threshold. Previous results indicate that for thresholds near or at 5 mm/h, the correlation between this fractional area and the area-average rain rate is high. There are several drawbacks to this approach, however. At the TRMM radar frequency of 13.8 GHz the signal suffers attenuation so that the negative bias of the high resolution rain rate estimates will increase as the path attenuation increases. To establish a quantitative relationship between fractional area and area-average rain rate, an independent means of calculating the area-average rain rate is needed, such as an array of rain gauges. This type of calibration procedure, however, is difficult for a spaceborne radar such as TRMM. To estimate a statistic other than the mean of the distribution requires, in general, a different choice of threshold and a different set of tuning parameters.
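The fractional-area idea can be illustrated numerically: on synthetic rain fields, the fraction of a cell above a 5 mm/h threshold tracks the cell's area-average rain rate. The lognormal field model and every parameter below are invented for illustration, not taken from the TRMM processing.

```python
import math, random

random.seed(7)

def pearson(u, v):
    """Plain Pearson correlation coefficient."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# 200 synthetic "5 x 5 degree cells", each a lognormal rain-rate field whose
# overall intensity varies from cell to cell.
means, fractions = [], []
for _ in range(200):
    mu = random.uniform(-1.0, 1.5)
    cell = [random.lognormvariate(mu, 1.0) for _ in range(400)]
    means.append(sum(cell) / len(cell))                       # area-average rate
    fractions.append(sum(r > 5.0 for r in cell) / len(cell))  # area above 5 mm/h

print(round(pearson(fractions, means), 3))
```

The relation is monotone rather than exactly linear, so in practice the threshold and the regression linking fraction to mean rate must be tuned, which is the calibration difficulty the abstract raises.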
Methods for prismatic/tetrahedral grid generation and adaptation
NASA Astrophysics Data System (ADS)
Kallinderis, Y.
1995-10-01
The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.
Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.
2008-01-01
This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.
Vortical Flow Prediction Using an Adaptive Unstructured Grid Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2003-01-01
A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.
Vortical Flow Prediction Using an Adaptive Unstructured Grid Method
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2001-01-01
A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.
Adaptive θ-methods for pricing American options
NASA Astrophysics Data System (ADS)
Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran
2008-12-01
We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
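As context for the θ-method family, here is a linearly implicit θ-step on the constant-coefficient heat equation (θ = 1/2 gives Crank-Nicolson), verified against the exact decay of a sine mode. The paper's actual setting adds the Black-Scholes advection and reaction terms, the American-option positivity constraint, and adaptive time steps, none of which are shown in this sketch.

```python
import math

def solve_tridiag(a, b, c, d):
    """Thomas algorithm for a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def theta_step(u, lam, theta):
    """One theta-method step for u_t = u_xx with u = 0 at both ends;
    lam = dt/dx^2. theta = 0 explicit, 0.5 Crank-Nicolson, 1 fully implicit."""
    n = len(u)
    a = [-theta * lam] * n
    b = [1 + 2 * theta * lam] * n
    c = [-theta * lam] * n
    rhs = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        rhs.append(u[i] + (1 - theta) * lam * (left - 2 * u[i] + right))
    return solve_tridiag(a, b, c, rhs)

nx, dt, steps, T = 49, 1e-3, 100, 0.1
dx = 1.0 / (nx + 1)
u = [math.sin(math.pi * (i + 1) * dx) for i in range(nx)]
for _ in range(steps):
    u = theta_step(u, dt / dx / dx, 0.5)  # Crank-Nicolson
exact = [math.exp(-math.pi ** 2 * T) * math.sin(math.pi * (i + 1) * dx) for i in range(nx)]
err = max(abs(x - y) for x, y in zip(u, exact))
print(err)
```

Because each step is a single tridiagonal solve, the scheme stays linearly implicit, which is the complexity saving the abstract emphasizes over Newton-type iterations.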
Space-time adaptive numerical methods for geophysical applications.
Castro, C E; Käser, M; Toro, E F
2009-11-28
In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen in a locally adaptive manner such that the solution is evolved explicitly in time with an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves, comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost. PMID:19840984
Robust flicker evaluation method for low power adaptive dimming LCDs
NASA Astrophysics Data System (ADS)
Kim, Seul-Ki; Song, Seok-Jeong; Nam, Hyoungsik
2015-05-01
This paper describes a robust dimming flicker evaluation method for adaptive dimming algorithms for low power liquid crystal displays (LCDs). While the previous methods use sum of square difference (SSD) values without excluding the image sequence information, the proposed modified SSD (mSSD) values are obtained only from the dimming flicker effects by making use of differential images. The proposed scheme is verified for eight dimming configurations of two dimming level selection methods and four temporal filters over three test videos. Furthermore, a new figure of merit is introduced to cover the dimming flicker as well as image quality and power consumption.
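A hedged reconstruction of the mSSD idea: computing the SSD on differential images cancels scene motion, so only dimming-induced frame-to-frame fluctuation is scored. The exact weighting and normalization of the paper's metric are not given in the abstract; the formula below is an assumption for illustration.

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mssd(frames, gains):
    """Compare successive differential images of the dimmed sequence
    (gain[t] * frame[t]) against those of the original sequence, so that
    scene motion cancels and only gain fluctuation contributes."""
    total = 0.0
    for t in range(1, len(frames)):
        d_orig = [x - y for x, y in zip(frames[t], frames[t - 1])]
        d_dim = [gains[t] * x - gains[t - 1] * y
                 for x, y in zip(frames[t], frames[t - 1])]
        total += ssd(d_dim, d_orig)
    return total

static = [[100, 120, 80]] * 4                        # static scene
moving = [[100 + 5 * t, 120, 80] for t in range(4)]  # scene motion only
print(mssd(moving, [1.0] * 4))             # steady gain: no flicker flagged
print(mssd(static, [1.0, 0.8, 1.0, 0.8]))  # alternating gain: flicker flagged
```

A plain SSD between successive dimmed frames would penalize the moving scene too, which is the confound the differential formulation removes.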
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector is analyzed, which is based on classical or fast projection algorithms and estimates the background using median filtering or the method of bilateral spatial contrast.
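The median-filtering background estimate used by such an automatic detector can be sketched as follows; the window size, the detection factor, and the synthetic bearing spectrum are illustrative assumptions, not values from the review.

```python
def median(v):
    """Median of a list (no statistics-module dependency)."""
    s = sorted(v)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def detect(power, half_window=4, factor=3.0):
    """Automatic detector sketch: estimate the local background of a spatial
    power spectrum by median filtering (robust against strong sources) and
    flag bearings whose power exceeds factor * background."""
    hits = []
    n = len(power)
    for i, p in enumerate(power):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        bg = median(power[lo:hi])
        if p > factor * bg:
            hits.append(i)
    return hits

# flat noise floor ~1 with a strong source at bearing 10 and a weak one at 25
power = [1.0] * 40
power[10] = 50.0   # strong signal
power[25] = 4.0    # weak signal, still above the local median background
print(detect(power))  # [10, 25]
```

Because the median ignores isolated strong peaks, the weak source at bearing 25 is still detected against an uncorrupted background estimate, which is the point of normalizing strong signals before weak-signal detection.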
Adaptive domain decomposition methods for advection-diffusion problems
Carlenzoli, C.; Quarteroni, A.
1995-12-31
Domain decomposition methods can perform poorly on advection-diffusion equations if diffusion is dominated by advection. Indeed, the hyperbolic part of the equations could affect the behavior of iterative schemes among subdomains, dramatically slowing down their rate of convergence. Taking into account the direction of the characteristic lines, we introduce suitable adaptive algorithms which are stable with respect to the magnitude of the convective field in the equations and very effective on boundary value problems.
Wilczek, Rajmund; Swiątkowski, Maciej; Czepiel, Aleksandra; Sterliński, Maciej; Makowska, Ewa; Kułakowski, Piotr
2011-01-01
We report a case of successful implantation of an additional defibrillation lead into the coronary sinus due to high defibrillation threshold (DFT) in a seriously ill patient with a history of extensive myocardial infarction referred for implantable cardioverter-defibrillator implantation after an episode of unstable ventricular tachycardia. All previous attempts to reduce DFT, including subcutaneous electrode implantation, had been unsuccessful. PMID:22219117
Nastasi, Michael Anthony; Wang, Yongqiang; Fraboni, Beatrice; Cosseddu, Piero; Bonfiglio, Annalisa
2013-06-11
Organic thin film devices that included an organic thin film subjected to a selected dose of a selected energy of ions exhibited a stabilized mobility (μ) and threshold voltage (V_T), a decrease in contact resistance (R_C), and an extended operational lifetime that did not degrade after 2000 hours of operation in air.
NASA Astrophysics Data System (ADS)
Domingues, Margarete O.; Gomes, Anna Karina F.; Mendes, Odim; Schneider, Kai
2013-10-01
We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparing with the available exact solution. This work was supported by the contract SiCoMHD (ANR-Blanc 2011-045).
A New Online Calibration Method for Multidimensional Computerized Adaptive Testing.
Chen, Ping; Wang, Chun
2016-09-01
Multidimensional-Method A (M-Method A) has been proposed as an efficient and effective online calibration method for multidimensional computerized adaptive testing (MCAT) (Chen & Xin, Paper presented at the 78th Meeting of the Psychometric Society, Arnhem, The Netherlands, 2013). However, a key assumption of M-Method A is that it treats person parameter estimates as their true values, thus this method might yield erroneous item calibration when person parameter estimates contain non-ignorable measurement errors. To improve the performance of M-Method A, this paper proposes a new MCAT online calibration method, namely, the full functional MLE-M-Method A (FFMLE-M-Method A). This new method combines the full functional MLE (Jones & Jin in Psychometrika 59:59-75, 1994; Stefanski & Carroll in Annals of Statistics 13:1335-1351, 1985) with the original M-Method A in an effort to correct for the estimation error of ability vector that might otherwise adversely affect the precision of item calibration. Two correction schemes are also proposed when implementing the new method. A simulation study was conducted to show that the new method generated more accurate item parameter estimation than the original M-Method A in almost all conditions. PMID:26608960
Ye, Linlin; Yang, Dan; Wang, Xu
2014-06-01
A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. We decomposed noised ECG signals with the EEMD and calculated a series of intrinsic mode functions (IMFs), then selected IMFs and reconstructed them to realize the de-noising of the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method, comparing it against de-noising based on EEMD alone and on the wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that ECG waveforms de-noised with the proposed method were smooth and the amplitudes of the ECG features did not attenuate. In conclusion, the method discussed in this paper achieves ECG de-noising while preserving the characteristics of the original ECG signal. PMID:25219236
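A minimal stand-in for the wavelet-threshold stage, using a Haar transform and plain soft thresholding at the universal threshold. The paper's EEMD decomposition and its improved threshold function are replaced here by these textbook choices, so this only illustrates the shrink-and-reconstruct principle.

```python
import math, random

def haar_fwd(x):
    """One level of the orthonormal Haar transform: approximations, details."""
    s = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    d = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    return s, d

def haar_inv(s, d):
    x = []
    for a, b in zip(s, d):
        x.extend([(a + b) / math.sqrt(2), (a - b) / math.sqrt(2)])
    return x

def denoise(x, sigma, levels=3):
    """Multi-level Haar decomposition with soft thresholding of the detail
    coefficients at the universal threshold sigma * sqrt(2 ln N)."""
    lam = sigma * math.sqrt(2 * math.log(len(x)))
    soft = lambda w: math.copysign(max(abs(w) - lam, 0.0), w)
    details, s = [], x
    for _ in range(levels):
        s, d = haar_fwd(s)
        details.append([soft(w) for w in d])
    for d in reversed(details):
        s = haar_inv(s, d)
    return s

random.seed(1)
clean = [1.0] * 32 + [-1.0] * 32  # piecewise-constant stand-in signal
noisy = [c + random.gauss(0, 0.3) for c in clean]
den = denoise(noisy, 0.3)
mse = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
print(mse(noisy, clean), mse(den, clean))
```

Soft thresholding suppresses the small, noise-dominated detail coefficients while keeping the large structural ones, so the reconstruction error drops well below that of the noisy input.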
Ales, Justin M; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M
2012-01-01
We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying ("sweeping") the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas
2015-04-01
One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: a) non-parametric methods that locate the changing point between extreme and non-extreme regions of the data, b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General
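The graphical class of threshold diagnostics can be illustrated with a mean residual life computation: for data already in the GPD domain, the mean excess is linear in u, and constant for the exponential case (shape ξ = 0). Sample size and threshold grid below are arbitrary choices for the demonstration.

```python
import random

random.seed(3)
data = [random.expovariate(1.0) for _ in range(5000)]  # exponential, xi = 0

def mean_excess(sample, u):
    """Mean of the excesses over threshold u; for a GPD it is linear in u,
    and constant (= 1/lambda) for exponential data."""
    exc = [x - u for x in sample if x > u]
    return sum(exc) / len(exc)

# mean residual life plot values: roughly flat wherever a GPD already fits
mrl = [mean_excess(data, u) for u in (0.0, 0.5, 1.0, 1.5, 2.0)]
print([round(m, 2) for m in mrl])
```

In practice one picks the lowest u beyond which this plot is (approximately) linear; curvature at low u signals that the GPD asymptotics have not yet set in.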
A novel adaptive force control method for IPMC manipulation
NASA Astrophysics Data System (ADS)
Hao, Lina; Sun, Zhiyong; Li, Zhi; Su, Yunquan; Gao, Jianchao
2012-07-01
IPMC is a type of electro-active polymer material, also called artificial muscle, which can generate a relatively large deformation under a relatively low input voltage (generally speaking, less than 5 V), and can operate in a water environment. Due to these advantages, IPMC can be used in many fields such as biomimetics, service robots, bio-manipulation, etc. Until now, most existing methods for IPMC manipulation have been displacement control rather than direct force control; however, under most conditions the success rate of manipulating tiny fragile objects is limited by the contact force, for example when using an IPMC gripper to hold cells. Like most EAPs, a creep phenomenon exists in IPMC, in which the generated force changes with time and the creep model is influenced by changes in water content or other environmental factors, so a proper force control method is urgently needed. This paper presents a novel adaptive force control method (AIPOF control: adaptive integral periodic output feedback control), based on a creep model whose parameters are obtained using the FRLS on-line identification method. The AIPOF control method can achieve an arbitrary pole configuration as long as the plant is controllable and observable. This paper also designs POF and IPOF controllers to compare their test results. Simulations and experiments of micro-force-tracking tests are carried out, with results confirming that the proposed control method is viable.
NASA Astrophysics Data System (ADS)
Zhang, Yan; Lian, Jijian; Liu, Fang
2016-02-01
Modal parameter identification is a core issue in the health monitoring and damage detection of hydraulic structures. The parameters are mainly obtained from the measured vibrational response under ambient excitation. However, the response signal is mixed with noise and interference signals, which can mask the structural vibration information and prevent the parameters from being identified. This paper proposes an improved filtering method based on ensemble empirical mode decomposition (EEMD) and wavelet thresholding. A 'noise index' is presented to estimate the noise degree of the components decomposed by the EEMD, and this index enters the wavelet threshold calculation. In addition, the improved filtering method, combined with an eigensystem realization algorithm (ERA) and singular entropy (SE), is applied to the operational modal identification of a roof overflow powerhouse with a bulb tubular unit.
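The wavelet-threshold step of such filtering schemes can be sketched with the standard soft-thresholding rule and the Donoho-Johnstone universal threshold. The EEMD decomposition and the paper's 'noise index' are not reproduced here; the threshold formula below is a common textbook default, not the authors' index-driven one:

```python
import numpy as np

def soft_threshold(coeffs, thr):
    """Soft-thresholding: shrink coefficients toward zero by thr."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

def universal_threshold(coeffs):
    """Donoho-Johnstone universal threshold sigma * sqrt(2 ln n),
    with sigma estimated from the median absolute deviation."""
    sigma = np.median(np.abs(coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(coeffs.size))

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 0.1, 1024)
signal = np.zeros(1024)
signal[100], signal[500] = 2.0, -3.0      # sparse "detail" coefficients
noisy = signal + noise
thr = universal_threshold(noisy)
denoised = soft_threshold(noisy, thr)
print("nonzeros after thresholding:", np.count_nonzero(denoised))
```

The large coefficients survive (slightly shrunk) while nearly all pure-noise coefficients are set to zero.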
Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems
NASA Technical Reports Server (NTRS)
Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.
1979-01-01
The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.
Parallel, adaptive finite element methods for conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.
1994-01-01
We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
Variation of the critical percolation threshold with the method of preparation of the system
NASA Astrophysics Data System (ADS)
Giazitzidis, Paraskevas; Avramov, Isak; Argyrakis, Panos
2015-12-01
In the present work we propose a model in which one may vary at will the critical threshold p_c of the percolation transition, by probing one candidate site (or bond) at a time. This is realised by implementing an attractive (repulsive) rule when building up the lattice, so that newly added sites are either attracted or repelled by the already existing clusters. We use a tuning parameter k, which is the number of attempts for a site to be occupied, leading to a continuous change of the percolation threshold while the new percolation process still belongs to the same universality class as ordinary random percolation. We find that by increasing the value of the tuning parameter k, p_c decreases until it reaches a minimum value where nucleation effects become more pronounced than the percolation process. Such results are useful for the explanation of several new experimental systems that have recently appeared.
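The attractive rule can be read as giving a candidate site k occupation attempts, so its effective occupation probability rises to 1 - (1 - p)^k, which is why p_c falls as k grows. A small sketch verifying that identity by simulation (the lattice build-up and the spanning test are omitted):

```python
import random

def occupy(p, k, near_cluster, rng):
    """Attractive rule: a site adjacent to an existing cluster gets k
    occupation attempts instead of one."""
    attempts = k if near_cluster else 1
    return any(rng.random() < p for _ in range(attempts))

def effective_prob(p, k):
    """Closed form for the k-attempt rule: 1 - (1 - p)^k."""
    return 1.0 - (1.0 - p) ** k

rng = random.Random(42)
p, k, trials = 0.3, 4, 100000
hits = sum(occupy(p, k, True, rng) for _ in range(trials))
print(f"simulated: {hits / trials:.3f}  analytic: {effective_prob(p, k):.3f}")
```

Since sites near clusters are effectively occupied with this boosted probability, a spanning cluster appears at a lower nominal p, i.e. p_c decreases with k.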
NASA Technical Reports Server (NTRS)
Smith, Stephen W.; Seshadri, Banavara R.; Newman, John A.
2015-01-01
The experimental methods to determine near-threshold fatigue crack growth rate data are prescribed in ASTM standard E647. To produce near-threshold data at a constant stress ratio (R), the applied stress-intensity factor (K) is decreased as the crack grows based on a specified K-gradient. Consequently, as the fatigue crack growth rate threshold is approached and the crack tip opening displacement decreases, remote crack wake contact may occur due to the plastically deformed crack wake surfaces and shield the growing crack tip, resulting in a reduced crack tip driving force and non-representative crack growth rate data. If such data are used to life a component, the evaluation could yield highly non-conservative predictions. Although this anomalous behavior has been shown to be affected by K-gradient, starting K level, residual stresses, environmental assisted cracking, specimen geometry, and material type, the specifications within the standard to avoid this effect are limited to a maximum fatigue crack growth rate and a suggestion for the K-gradient value. This paper provides parallel experimental and computational simulations of the K-decreasing method for two materials (an aluminum alloy, AA 2024-T3, and a titanium alloy, Ti 6-2-2-2-2) to aid in establishing a clear understanding of appropriate testing requirements. These simulations investigate the effect of K-gradient, the maximum value of stress-intensity factor applied, and material type. A material-independent term is developed to guide the selection of appropriate test conditions for most engineering alloys. With the use of such a term, near-threshold fatigue crack growth rate tests can be performed at accelerated rates, near-threshold data can be acquired in days instead of weeks without having to establish testing criteria through trial and error, and these data can be acquired for most engineering materials, even those that are produced in relatively small product forms.
Adaptive methods for nonlinear structural dynamics and crashworthiness analysis
NASA Technical Reports Server (NTRS)
Belytschko, Ted
1993-01-01
The objective is to describe three research thrusts in crashworthiness analysis: adaptivity; mixed time integration, or subcycling, in which different timesteps are used for different parts of the mesh in explicit methods; and methods for contact-impact which are highly vectorizable. The techniques are being developed to improve the accuracy of calculations, ease-of-use of crashworthiness programs, and the speed of calculations. The latter is still of importance because crashworthiness calculations are often made with models of 20,000 to 50,000 elements using explicit time integration and require on the order of 20 to 100 hours on current supercomputers. The methodologies are briefly reviewed and then some example calculations employing these methods are described. The methods are also of value to other nonlinear transient computations.
Ultsch, Alfred; Thrun, Michael C; Hansen-Goos, Onno; Lötsch, Jörn
2015-01-01
Biomedical data obtained during cell experiments, laboratory animal research, or human studies often display a complex distribution. Statistical identification of subgroups in research data poses an analytical challenge. Here we introduce an interactive R-based bioinformatics tool, called "AdaptGauss". It enables a valid identification of a biologically-meaningful multimodal structure in the data by fitting a Gaussian mixture model (GMM) to the data. The interface allows a supervised selection of the number of subgroups. This enables the expectation maximization (EM) algorithm to adapt more complex GMMs than usually obtained with a noninteractive approach. Interactively fitting a GMM to heat pain threshold data acquired from human volunteers revealed a distribution pattern with four Gaussian modes located at temperatures of 32.3, 37.2, 41.4, and 45.4 °C. Noninteractive fitting was unable to identify a meaningful data structure. The results obtained are compatible with known activity temperatures of different TRP ion channels, suggesting the mechanistic contribution of different heat sensors to the perception of thermal pain. Thus, sophisticated analysis of the modal structure of biomedical data provides a basis for the mechanistic interpretation of the observations. As it may reflect the involvement of different TRP thermosensory ion channels, the analysis provides a starting point for hypothesis-driven laboratory experiments. PMID:26516852
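The core fitting step can be sketched as plain EM for a one-dimensional Gaussian mixture, with the interactive (supervised) choice of modes mimicked by hand-supplied starting means. The two modes below are illustrative synthetic values, not the paper's four-mode pain-threshold result:

```python
import numpy as np

def em_gmm_1d(x, means, n_iter=200):
    """Minimal EM for a 1-D Gaussian mixture; the supervised step of an
    interactive tool is mimicked by passing initial mode locations by hand."""
    k = len(means)
    mu = np.asarray(means, dtype=float)
    var = np.full(k, np.var(x) / k)
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        d = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        w = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(0)
# Synthetic bimodal "threshold" data (modes are illustrative assumptions)
x = np.concatenate([rng.normal(37.0, 1.0, 500), rng.normal(45.0, 1.5, 500)])
w, mu, var = em_gmm_1d(x, means=[35.0, 47.0])
print("weights:", np.round(w, 2), "means:", np.round(mu, 1))
```

With well-separated starting means the EM converges to the two underlying modes; a poor noninteractive initialization can merge or misplace them, which is the failure mode the interactive tool addresses.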
NASA Astrophysics Data System (ADS)
Tan, Kok Liang; Tanaka, Toshiyuki; Nakamura, Hidetoshi; Shirahata, Toru; Sugiura, Hiroaki
Chronic Obstructive Pulmonary Disease is a disease in which the airways and tiny air sacs (alveoli) inside the lung are partially obstructed or destroyed. Emphysema is what occurs as more and more of the walls between air sacs get destroyed. The goal of this paper is to produce a more practical emphysema-quantification algorithm that has higher correlation with the parameters of pulmonary function tests compared to classical methods. The use of threshold ranges from approximately -900 to -990 Hounsfield units (HU) for extracting emphysema from CT has been reported in many papers. From our experiments, we realize that a threshold which is optimal for a particular CT data set might not be optimal for other CT data sets due to the subtle radiographic variations in the CT images. Consequently, we propose a multi-threshold method that utilizes ten thresholds between and including -900 HU and -990 HU for identifying the different potential emphysematous regions in the lung. Subsequently, we divide the lung into eight sub-volumes. From each sub-volume, we calculate the ratio of the voxels with intensity below a certain threshold. The respective ratios of the voxels below the ten thresholds are employed as the features for classifying the sub-volumes into four emphysema severity classes. A neural network is used as the classifier. The neural network is trained using 80 training sub-volumes. The performance of the classifier is assessed by classifying 248 test sub-volumes of the lung obtained from 31 subjects. Actual diagnoses of the sub-volumes are hand-annotated and consensus-classified by radiologists. The four-class classification accuracy of the proposed method is 89.82%. The sub-volumetric classification results produced in this study encompass not only the information of emphysema severity but also the distribution of emphysema severity from the top to the bottom of the lung. We hypothesize that besides emphysema severity, the
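The multi-threshold feature extraction reduces each sub-volume to ten below-threshold voxel ratios. A sketch on synthetic HU data (the CT preprocessing, lung segmentation, and neural-network classifier are omitted):

```python
import numpy as np

def emphysema_features(subvolume_hu, thresholds=None):
    """Fraction of voxels below each of ten HU thresholds between
    -900 and -990, used as a 10-dimensional feature vector."""
    if thresholds is None:
        thresholds = np.linspace(-900.0, -990.0, 10)
    v = np.asarray(subvolume_hu).ravel()
    return np.array([(v < t).mean() for t in thresholds])

rng = np.random.default_rng(3)
# Synthetic lung sub-volume: mostly normal tissue plus a low-density pocket
normal = rng.normal(-850.0, 40.0, 9000)
emphysematous = rng.normal(-950.0, 20.0, 1000)
sub = np.concatenate([normal, emphysematous])
feats = emphysema_features(sub)
print(np.round(feats, 3))
```

Because the thresholds march downward from -900 to -990 HU, the ratios are non-increasing; how fast they decay encodes the severity information fed to the classifier.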
Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping
2016-01-01
Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from less waiting time. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) were proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
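The iterative shrinkage-thresholding component can be sketched as plain ISTA for the ℓ1-regularized least-squares problem. The exponential wavelet transform and random-shift steps of EWISTARS are omitted, and the operator below is a generic random matrix standing in for an MRI sampling operator:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=1000):
    """Iterative shrinkage-thresholding for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - A.T @ (A @ x - b) / L, lam / L)
    return x

rng = np.random.default_rng(7)
A = rng.normal(size=(60, 100)) / np.sqrt(60)   # undersampled measurement operator
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]         # sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.05)
print("recovery error:", np.round(np.linalg.norm(x_hat - x_true), 3))
```

Each iteration is a gradient step on the data-consistency term followed by shrinkage, which is exactly the structure that wavelet transforms and random shifts are wrapped around in methods of this family.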
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
Planetary gearbox fault diagnosis using an adaptive stochastic resonance method
NASA Astrophysics Data System (ADS)
Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia
2013-07-01
Planetary gearboxes are widely used in aerospace, automotive and heavy industry applications due to their large transmission ratio, strong load-bearing capacity and high transmission efficiency. The tough operation conditions of heavy duty and intensive impact load may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include selection of sensitive measurement locations, investigation of vibration transmission paths and weak feature extraction. Chief among them is how to effectively extract the weak characteristics of faulty components from the noisy signals of planetary gearboxes. To address this issue, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms and adaptively realizes the optimal stochastic resonance system matching the input signals. Using the ASR method, the noise may be weakened and weak characteristics highlighted, so that the faults can be diagnosed accurately. A planetary gearbox test rig is established, and experiments with sun gear faults, including a chipped tooth and a missing tooth, are conducted; the vibration signals are collected under the loaded condition at various motor speeds. The proposed method is used to process the collected signals, and the results of feature extraction and fault diagnosis demonstrate its effectiveness.
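The underlying stochastic resonance filter is typically the overdamped bistable system dx/dt = a*x - b*x^3 + s(t). A sketch with fixed parameters follows; the paper's contribution, the ant-colony tuning of a and b to the input signal, is not reproduced, and the input is a synthetic weak tone in noise, not gearbox vibration data:

```python
import numpy as np

def bistable_sr(forcing, dt, a=1.0, b=1.0):
    """Euler integration of the overdamped bistable system
    dx/dt = a*x - b*x**3 + forcing(t), starting inside a potential well."""
    x = np.zeros(forcing.size)
    x[0] = 1.0                         # start at the x = +1 well
    for i in range(1, forcing.size):
        xp = x[i - 1]
        x[i] = xp + dt * (a * xp - b * xp ** 3 + forcing[i - 1])
    return x

rng = np.random.default_rng(5)
fs, f0 = 1000.0, 5.0
t = np.arange(0.0, 4.0, 1.0 / fs)
weak = 0.3 * np.sin(2 * np.pi * f0 * t)       # weak periodic fault signature
noisy = weak + rng.normal(0.0, 1.0, t.size)   # buried in strong broadband noise
out = bistable_sr(noisy, dt=1.0 / fs)
print("output stays near a well: x(end) = %.2f" % out[-1])
```

Resonance occurs only when (a, b) are matched to the noise level and signal period, which is exactly the matching the ant-colony search automates.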
Spatially-Anisotropic Parallel Adaptive Wavelet Collocation Method
NASA Astrophysics Data System (ADS)
Vasilyev, Oleg V.; Brown-Dymkoski, Eric
2015-11-01
Despite latest advancements in development of robust wavelet-based adaptive numerical methodologies to solve partial differential equations, they all suffer from two major "curses": 1) the reliance on rectangular domain and 2) the "curse of anisotropy" (i.e. homogeneous wavelet refinement and inability to have spatially varying aspect ratio of the mesh elements). The new method addresses both of these challenges by utilizing an adaptive anisotropic wavelet transform on curvilinear meshes that can be either algebraically prescribed or calculated on the fly using PDE-based mesh generation. In order to ensure accurate representation of spatial operators in physical space, an additional adaptation on spatial physical coordinates is also performed. It is important to note that when new nodes are added in computational space, the physical coordinates can be approximated by interpolation of the existing solution and additional local iterations to ensure that the solution of coordinate mapping PDEs is converged on the new mesh. In contrast to traditional mesh generation approaches, the cost of adding additional nodes is minimal, mainly due to localized nature of iterative mesh generation PDE solver requiring local iterations in the vicinity of newly introduced points. This work was supported by ONR MURI under grant N00014-11-1-069.
The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering
NASA Astrophysics Data System (ADS)
Schaefer, Andreas; Daniell, James; Wenzel, Friedemann
2016-04-01
Earthquake declustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity, with usual applications comprising probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and subsequent declustering of earthquake catalogues plays a crucial role in determining the magnitude-dependent earthquake return period and its respective spatial variation. Various methods have been developed by other researchers to address this issue, ranging in complexity from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. Hereby, an adaptive search algorithm for data point clusters is adopted. It uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focussing on a strong correlation along the rupture plane, and the search space is adjusted with respect to directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure is applied to disassemble subsequent ruptures which may have been grouped into an individual cluster, using near-field searches, support vector machines and temporal splitting. The steering parameters of the search behaviour are linked to local earthquake properties like magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand. As a result of the cluster identification process, each event in
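For contrast with the adaptive SCM, the simple window-method end of the complexity range can be written in a few lines. The fixed time/distance windows below are arbitrary illustrative values, not the magnitude-dependent windows of established methods, and the 1-D "distance" is a simplification:

```python
def decluster(events, time_days=30.0, dist_km=50.0):
    """Naive fixed-window declustering: an event is flagged as an
    aftershock if it falls within a time/distance window of a larger,
    earlier event.  Adaptive methods replace these fixed windows with
    windows tuned to local seismicity."""
    events = sorted(events, key=lambda e: e["t"])
    mainshocks = []
    for e in events:
        is_aftershock = any(
            m["mag"] >= e["mag"]
            and 0.0 <= e["t"] - m["t"] <= time_days
            and abs(e["x"] - m["x"]) <= dist_km
            for m in mainshocks
        )
        if not is_aftershock:
            mainshocks.append(e)
    return mainshocks

# Toy catalog: time in days, position in km, magnitude
catalog = [
    {"t": 0.0,  "x": 0.0,   "mag": 6.5},   # mainshock
    {"t": 1.0,  "x": 5.0,   "mag": 4.0},   # aftershock (inside both windows)
    {"t": 2.5,  "x": 12.0,  "mag": 4.8},   # aftershock
    {"t": 40.0, "x": 300.0, "mag": 5.1},   # independent event
]
print([e["mag"] for e in decluster(catalog)])
```

The fixed windows are exactly what adaptive schemes like SCM replace with data-driven, direction-aware search regions.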
An adaptive pseudo-spectral method for reaction diffusion problems
NASA Technical Reports Server (NTRS)
Bayliss, A.; Gottlieb, D.; Matkowsky, B. J.; Minkoff, M.
1987-01-01
The spectral interpolation error was considered for both the Chebyshev pseudo-spectral and Galerkin approximations. A family of functionals I_r(u), with the property that the maximum norm of the error is bounded by I_r(u)/J^r, where r is an integer and J is the degree of the polynomial approximation, was developed. These functionals are used in the adaptive procedure whereby the problem is dynamically transformed to minimize I_r(u). The number of collocation points is then chosen to maintain a prescribed error bound. The method is illustrated by various examples from combustion problems in one and two dimensions.
A multilevel adaptive projection method for unsteady incompressible flow
NASA Technical Reports Server (NTRS)
Howell, Louis H.
1993-01-01
There are two main requirements for practical simulation of unsteady flow at high Reynolds number: the algorithm must accurately propagate discontinuous flow fields without excessive artificial viscosity, and it must have some adaptive capability to concentrate computational effort where it is most needed. We satisfy the first of these requirements with a second-order Godunov method similar to those used for high-speed flows with shocks, and the second with a grid-based refinement scheme which avoids some of the drawbacks associated with unstructured meshes. These two features of our algorithm place certain constraints on the projection method used to enforce incompressibility. Velocities are cell-based, leading to a Laplacian stencil for the projection which decouples adjacent grid points. We discuss features of the multigrid and multilevel iteration schemes required for solution of the resulting decoupled problem. Variable-density flows require use of a modified projection operator; we have found a multigrid method for this modified projection that successfully handles density jumps of thousands to one. Numerical results are shown for the 2D adaptive and 3D variable-density algorithms.
A parallel adaptive method for pseudo-arclength continuation
NASA Astrophysics Data System (ADS)
Aruliah, D. A.; van Veen, L.; Dubitski, A.
2012-10-01
Pseudo-arclength continuation is a well-established method for constructing a numerical curve comprising solutions of a system of nonlinear equations. In many complicated high-dimensional systems, the corrector steps within pseudo-arclength continuation are extremely costly to compute; as a result, the step-length of the preceding prediction step must be adapted carefully to avoid prohibitively many failed steps. We describe the essence of a parallel method for adapting the step-length of pseudo-arclength continuation. Our method employs several predictor-corrector sequences with differing step-lengths running concurrently on distinct processors. Our parallel framework permits intermediate results of correction sequences that have not yet converged to seed new predictor-corrector sequences with various step-lengths; the goal is to amortize the cost of corrector steps to make further progress along the underlying numerical curve. Results from numerical experiments suggest a three-fold speedup is attainable when the continuation curve sought has great topological complexity and the corrector steps require significant processor time.
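A single (serial) predictor-corrector sequence of pseudo-arclength continuation can be sketched on a curve with a fold, where natural continuation in the parameter fails. The parallel, multi-step-length adaptation described above is not reproduced; the test curve below is an illustrative choice:

```python
import numpy as np

def pseudo_arclength(f, grad, u0, ds, n_steps):
    """Trace f(u) = 0 for u = (x, lam) in R^2.
    Predictor: step ds along the unit tangent.
    Corrector: Newton on [f(v), tangent . (v - pred)] = 0."""
    u = np.array(u0, dtype=float)
    g = grad(u)
    tangent = np.array([g[1], -g[0]])
    tangent /= np.linalg.norm(tangent)
    points = [u.copy()]
    for _ in range(n_steps):
        pred = u + ds * tangent               # predictor
        v = pred.copy()
        for _ in range(20):                   # Newton corrector
            g = grad(v)
            J = np.array([g, tangent])
            rhs = np.array([f(v), tangent @ (v - pred)])
            v = v - np.linalg.solve(J, rhs)
            if abs(f(v)) < 1e-12:
                break
        g = grad(v)
        new_tangent = np.array([g[1], -g[0]])
        new_tangent /= np.linalg.norm(new_tangent)
        if new_tangent @ tangent < 0:         # keep marching the same way
            new_tangent = -new_tangent
        u, tangent = v, new_tangent
        points.append(u.copy())
    return np.array(points)

# Fold curve f(x, lam) = x**2 + lam - 1: natural continuation in lam
# breaks down at the fold (x = 0, lam = 1); arclength marches through it.
f = lambda u: u[0] ** 2 + u[1] - 1.0
grad = lambda u: np.array([2.0 * u[0], 1.0])
path = pseudo_arclength(f, grad, u0=(-1.2, 1.0 - 1.44), ds=0.1, n_steps=40)
print("points:", len(path), "max |f|:", max(abs(f(u)) for u in path))
```

The parallel method above runs several such sequences with different ds concurrently and reuses partially converged correctors, which this serial sketch does not attempt.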
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods
Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
Adaptive grid methods for RLV environment assessment and nozzle analysis
NASA Technical Reports Server (NTRS)
Thornburg, Hugh J.
1996-01-01
Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surface, temporally varying geometries, and fluid structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation
NASA Astrophysics Data System (ADS)
Wilson, Mark; Mitra, Sunanda; Roberson, Glenn H.; Shieh, Yao-Yang
1997-10-01
Currently, early detection of breast cancer is primarily accomplished by mammography, and suspicious findings may lead to a decision to perform a biopsy. Digital enhancement and pattern recognition techniques may aid in early detection of some patterns such as microcalcification clusters indicating onset of DCIS (ductal carcinoma in situ), which accounts for 20% of all mammographically detected breast cancers and could be treated when detected early. These individual calcifications are hard to detect due to size and shape variability and inhomogeneous background texture. Our study addresses only early detection of microcalcifications, which allows the radiologist to interpret the x-ray findings in computer-aided enhanced form more easily than by evaluating the x-ray film directly. We present an algorithm which locates microcalcifications based on local grayscale variability of tissue structures and image statistics. Threshold filters with lower and upper bounds computed from the image statistics of the entire image and selected subimages were designed to enhance the entire image. This enhanced image was used as the initial image for identifying the microcalcifications based on variable box threshold filters at different resolutions. The test images came from the Texas Tech University Health Sciences Center and the MIAS mammographic database, and are classified into various categories including microcalcifications. Classification of other types of abnormalities in mammograms based on their characteristic features is addressed in later studies.
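A threshold filter with lower and upper bounds computed from image statistics can be sketched as a band on mean + c·std. The constants and synthetic image below are illustrative assumptions, not the paper's fitted bounds, and the multi-resolution variable-box filtering is omitted:

```python
import numpy as np

def band_threshold(img, c_low=2.5, c_high=4.0):
    """Keep pixels whose intensity lies between mean + c_low*std and
    mean + c_high*std of the image: a crude bright-spot band filter."""
    mu, sd = img.mean(), img.std()
    lo, hi = mu + c_low * sd, mu + c_high * sd
    return (img > lo) & (img < hi)

rng = np.random.default_rng(9)
img = rng.normal(100.0, 10.0, (64, 64))    # background tissue texture
img[20, 20] = img[40, 45] = 135.0          # bright spots (~ microcalcifications)
mask = band_threshold(img)
print("candidate pixels:", int(mask.sum()))
```

In a full pipeline the surviving candidates would then be re-tested with locally computed bounds inside boxes at several resolutions to suppress the remaining texture false positives.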
König, Seth D.; Buffalo, Elizabeth A.
2014-01-01
Background: Eye tracking is an important component of many human and non-human primate behavioral experiments. As behavioral paradigms have become more complex, including unconstrained viewing of natural images, eye movements measured in these paradigms have become more variable and complex as well. Accordingly, the common practice of using acceleration, dispersion, or velocity thresholds to segment viewing behavior into periods of fixations and saccades may be insufficient. New Method: Here we propose a novel algorithm, called Cluster Fix, which uses k-means cluster analysis to take advantage of the qualitative differences between fixations and saccades. The algorithm finds natural divisions in 4 state space parameters—distance, velocity, acceleration, and angular velocity—to separate scan paths into periods of fixations and saccades. The number and size of clusters adjusts to the variability of individual scan paths. Results: Cluster Fix can detect small saccades that were often indistinguishable from noisy fixations. Local analysis of fixations helped determine the transition times between fixations and saccades. Comparison with Existing Methods: Because Cluster Fix detects natural divisions in the data, predefined thresholds are not needed. Conclusions: A major advantage of Cluster Fix is the ability to precisely identify the beginning and end of saccades, which is essential for studying neural activity that is modulated by or time-locked to saccades. Our data suggest that Cluster Fix is more sensitive than threshold-based algorithms but comes at the cost of an increase in computational time. PMID:24509130
Turbulence profiling methods applied to ESO's adaptive optics facility
NASA Astrophysics Data System (ADS)
Valenzuela, Javier; Béchet, Clémentine; Garcia-Rissmann, Aurea; Gonté, Frédéric; Kolb, Johann; Le Louarn, Miska; Neichel, Benoît; Madec, Pierre-Yves; Guesalaga, Andrés
2014-07-01
Two algorithms were recently studied for Cn2 profiling from wide-field Adaptive Optics (AO) measurements on GeMS (Gemini Multi-Conjugate AO system). Both rely on the Slope Detection and Ranging (SLODAR) approach, using spatial covariances of the measurements issued from various wavefront sensors. The first algorithm estimates the Cn2 profile by applying the truncated least-squares inverse of a matrix modeling the response of slope covariances to turbulent layers at various heights. In the second method, the profile is estimated by deconvolution of these spatial cross-covariances of slopes. We compare these methods in the new configuration of the ESO Adaptive Optics Facility (AOF), a high-order multiple-laser system under integration. For this, we use measurements simulated by the AO cluster of ESO. The impact of measurement noise and of the outer scale of atmospheric turbulence is analyzed. The strong influence of the outer scale on the results led to the development of a new outer-scale fitting step included in each algorithm. This increases the reliability and robustness of the turbulence strength and profile estimations.
NASA Astrophysics Data System (ADS)
Senthil, A.; Ramasamy, P.; Verma, Sunil
2011-03-01
Optically good-quality semi-organic single crystals of sodium acid phthalate (NaAP) were successfully grown by the Sankaranarayanan-Ramasamy (SR) method. Transparent, colourless <0 0 1>-oriented unidirectional bulk single crystals of 10 and 20 mm diameter and lengths of up to 75 mm were grown. The grown crystals were subjected to various characterization studies such as etching, birefringence, laser damage threshold, UV-vis spectroscopy and dielectric measurement. The birefringence value and crystal quality were ascertained by birefringence studies.
An adaptive PCA fusion method for remote sensing images
NASA Astrophysics Data System (ADS)
Guo, Qing; Li, An; Zhang, Hongqun; Feng, Zhongkui
2014-10-01
The principal component analysis (PCA) method is a popular fusion method owing to its efficiency and high spatial resolution improvement. However, spectral distortion is often found in PCA. In this paper, we propose an adaptive PCA method to enhance the spectral quality of the fused image. The amount of spatial detail of the panchromatic (PAN) image injected into each band of the multi-spectral (MS) image is appropriately determined by a weighting matrix, which is defined by the edges of the PAN image, the edges of the MS image and the proportions between MS bands. To demonstrate the effectiveness of the proposed method, qualitative visual and quantitative analyses are presented. The correlation coefficient (CC), the spectral discrepancy (SPD), and the spectral angle mapper (SAM) are used to measure the spectral quality of each fused band image. The Q index is calculated to evaluate the global spectral quality of all the fused bands as a whole. The spatial quality is evaluated by the average gradient (AG) and the standard deviation (STD). Experimental results show that the proposed method substantially improves the spectral quality compared to the original PCA method while maintaining the original method's high spatial quality.
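A sketch of the underlying PCA pan-sharpening scheme. The paper's per-pixel weighting matrix, derived from PAN edges, MS edges and band proportions, is simplified here to a single scalar `weight`; with weight=1 this reduces to classic PCA fusion, with weight=0 the MS image is returned unchanged:

```python
import numpy as np

def pca_fusion(ms, pan, weight=1.0):
    """PCA pan-sharpening with a scalar injection weight (illustrative
    simplification of the adaptive per-pixel weighting matrix).
    ms: (bands, H, W) multi-spectral cube, pan: (H, W) panchromatic image."""
    b, h, w = ms.shape
    X = ms.reshape(b, -1).astype(float)
    mean = X.mean(1, keepdims=True)
    Xc = X - mean
    # principal components of the MS bands (PC1 carries most spatial detail)
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, vals.argsort()[::-1]]
    pcs = vecs.T @ Xc
    # match PAN to PC1's mean/std, then inject the detail with the weight
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / p.std() * pcs[0].std() + pcs[0].mean()
    pcs[0] = pcs[0] + weight * (p - pcs[0])
    fused = vecs @ pcs + mean                       # inverse PCA transform
    return fused.reshape(b, h, w)
```

Replacing the scalar by a spatially varying matrix, as the abstract describes, lets the injection adapt to local edge content in each band.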
The method of subliminal psychodynamic activation: do individual thresholds make a difference?
Malik, R; Paraherakis, A; Joseph, S; Ladd, H
1996-12-01
The present experiment investigated the effects of subliminal psychodynamic stimuli on anxiety as measured by heart rate. Following an anxiety-inducing task, male and female subjects were tachistoscopically shown, at their subjective thresholds, one of five subliminal stimuli: MOMMY AND I ARE ONE or DADDY AND I ARE ONE (symbiotic messages), MOMMY HAS LEFT ME (abandonment message), I AM HAPPY AND CALM (positively toned but nonsymbiotic phrase), or MYMMO NAD I REA ENO (control stimulus). It was hypothesized that men would exhibit a greater decrease in heart rate after exposure to the MOMMY stimulus than after the control message. No definitive predictions were made for women. The abandonment phrase was expected to increase heart rate. A positively toned message was included to assess whether its effects would be comparable to those hypothesized for the MOMMY message. The results yielded no significant effects for stimulus or gender and so provided no support for the hypotheses. PMID:9017738
Cutler, Timothy D.; Wang, Chong; Hoff, Steven J.; Zimmerman, Jeffrey J.
2013-01-01
In aerobiology, dose-response studies are used to estimate the risk of infection to a susceptible host presented by exposure to a specific dose of an airborne pathogen. In the research setting, host- and pathogen-specific factors that affect the dose-response continuum can be accounted for by experimental design, but the requirement to precisely determine the dose of infectious pathogen to which the host was exposed is often challenging. By definition, quantification of viable airborne pathogens is based on the culture of micro-organisms, but some airborne pathogens are transmissible at concentrations below the threshold of quantification by culture. In this paper we present an approach to the calculation of exposure dose at microbiologically unquantifiable levels using an application of the “continuous-stirred tank reactor (CSTR) model” and the validation of this approach using rhodamine B dye as a surrogate for aerosolized microbial pathogens in a dynamic aerosol toroid (DAT). PMID:24082399
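The CSTR model referred to above has a closed-form solution for a well-mixed chamber, from which an inhaled dose can be integrated. A sketch with illustrative symbols (R emission rate, Q ventilation flow, V chamber volume; units must be consistent; this is not the paper's exact parameterization):

```python
import math

def cstr_concentration(t, R, Q, V, C0=0.0):
    """Continuous-stirred tank reactor model of airborne concentration:
    dC/dt = R/V - (Q/V)*C  ->  C(t) = R/Q + (C0 - R/Q)*exp(-Q*t/V).
    R: emission rate, Q: ventilation flow, V: chamber volume (illustrative)."""
    Css = R / Q                                   # steady-state concentration
    return Css + (C0 - Css) * math.exp(-Q * t / V)

def exposure_dose(T, R, Q, V, breathing_rate, n=1000):
    """Inhaled dose over [0, T]: trapezoidal integral of C(t) times the
    host's breathing rate."""
    dt = T / n
    cs = [cstr_concentration(i * dt, R, Q, V) for i in range(n + 1)]
    area = sum((cs[i] + cs[i + 1]) / 2.0 * dt for i in range(n))
    return breathing_rate * area
```

Because C(t) is known analytically, dose can be computed even when the airborne concentration is below the threshold of quantification by culture, which is the point of the approach.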
NASA Technical Reports Server (NTRS)
Kantor, A. V.; Timonin, V. G.; Azarova, Y. S.
1974-01-01
The method of adaptive discretization is the most promising for elimination of redundancy from telemetry messages characterized by signal shape. Adaptive discretization with associative sorting was considered as a way to avoid the shortcomings of adaptive discretization with buffer smoothing and adaptive discretization with logical switching in on-board information compression devices (OICD) in spacecraft. Mathematical investigations of OICD are presented.
Hwang, Wei-Chin
2010-01-01
How do we culturally adapt psychotherapy for ethnic minorities? Although there has been growing interest in doing so, few therapy adaptation frameworks have been developed, and the majority of these frameworks take a top-down theoretical approach to adapting psychotherapy. The purpose of this paper is to introduce a community-based developmental approach to modifying psychotherapy for ethnic minorities. The Formative Method for Adapting Psychotherapy (FMAP) is a bottom-up approach that involves collaborating with consumers to generate and support ideas for therapy adaptation. It involves five phases that target developing, testing, and reformulating therapy modifications. These phases include: (a) generating knowledge and collaborating with stakeholders, (b) integrating generated information with theory and empirical and clinical knowledge, (c) reviewing the initial culturally adapted clinical intervention with stakeholders and revising the culturally adapted intervention, (d) testing the culturally adapted intervention, and (e) finalizing the culturally adapted intervention. Application of the FMAP is illustrated using examples from a study adapting psychotherapy for Chinese Americans, but it can also be readily applied to modify therapy for other ethnic groups. PMID:20625458
A Spectral Adaptive Mesh Refinement Method for the Burgers equation
NASA Astrophysics Data System (ADS)
Nasr Azadani, Leila; Staples, Anne
2013-03-01
Adaptive mesh refinement (AMR) is a powerful technique in computational fluid dynamics (CFD). Many CFD problems have a wide range of scales which vary with time and space. In order to resolve all the scales numerically, high grid resolutions are required; the smaller the scales, the higher the resolution must be. However, small scales are usually formed in a small portion of the domain or during a limited period of time. AMR is an efficient method for solving these types of problems, allowing high grid resolutions where and when they are needed and minimizing memory and CPU time. Here we formulate a spectral version of AMR in order to accelerate simulations of a 1D model for isotropic homogeneous turbulence, the Burgers equation, as a first test of this method. Using pseudo-spectral methods, we applied AMR in Fourier space. The spectral AMR (SAMR) method we present here is applied to the Burgers equation and the results are compared with those obtained using standard solution methods on a fine mesh.
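For reference, a uniform-resolution pseudo-spectral solver for the viscous Burgers equation, i.e. the baseline that a spectral AMR method would accelerate by adapting the retained Fourier modes. Time step, viscosity and resolution are illustrative:

```python
import numpy as np

def burgers_spectral(u0, nu=0.05, dt=1e-3, steps=500, L=2 * np.pi):
    """Pseudo-spectral solve of u_t + u*u_x = nu*u_xx on a periodic domain.
    Uniform spectral resolution throughout; the SAMR method of the abstract
    would refine/coarsen modes locally, which this sketch does not attempt."""
    n = len(u0)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # angular wavenumbers
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        uh = np.fft.fft(u)
        ux = np.real(np.fft.ifft(1j * k * uh))    # spectral derivative
        # explicit Euler: nonlinear term in physical space, diffusion spectral
        uh = uh + dt * (-np.fft.fft(u * ux) - nu * k**2 * uh)
        u = np.real(np.fft.ifft(uh))
    return u
```

Viscous dissipation makes the kinetic energy decay monotonically, which is a convenient sanity check on the discretization.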
Robust image registration using adaptive coherent point drift method
NASA Astrophysics Data System (ADS)
Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong
2016-04-01
The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, only the global spatial structure of the point sets is considered, without other forms of additional attribute information. The equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed to automatically determine the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and automatically determined in the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. Experimental results on optical and remote sensing images show that the proposed method significantly improves matching performance.
Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method
NASA Astrophysics Data System (ADS)
Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony
Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high-performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.
NASA Astrophysics Data System (ADS)
Ogata, Kohichi; Niino, Shingo
2015-03-01
This study describes the improvement of an eye-gaze interface system with a visible light camera. The current system detects the center of the iris from a captured eye image using image processing. During the initial stages of system use, a display window is provided to set the threshold values of the image's saturation and intensity, which is used to manually adjust the appearance of the iris region. In this study, we propose an automatic threshold setting method. The optimum threshold value for the saturation is obtained by discriminant analysis and that for the intensity is determined by finding the value that yields the same number of accumulated pixels in the detected region as threshold processing of the saturation. In our experiments with subjects with brown eyes, the automatic method obtained good threshold values in most cases. Furthermore, an adjustment function to overcome under- or over-estimated saturation threshold values is also proposed. This function provides a more robust automatic threshold setting. In experiments, we compared our automatic setting method with conventional manual techniques, which showed that the automatic method is useful for reducing the time required for threshold setting and its pointing accuracy is comparable to that of the manual approach.
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology that uses kernel regression methods as an effective tool for facies delineation. The method uses both the spatial positions and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest-neighbor classification method in a number of synthetic aquifers whenever the available number of hard data points is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
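A fixed-bandwidth Gaussian kernel classifier is the simplest baseline that the locally adaptive steering-kernel method generalizes (by replacing the scalar bandwidth with a per-point anisotropic matrix aligned with the local correlation). A sketch with illustrative names and parameters:

```python
import numpy as np

def kernel_classify(x_obs, labels, x_query, bandwidth):
    """Isotropic Gaussian kernel-regression classifier for facies.
    The steering-kernel method in the paper would replace the scalar
    `bandwidth` with a locally adapted anisotropic kernel per data point."""
    x_obs = np.atleast_2d(np.asarray(x_obs, float))
    x_query = np.atleast_2d(np.asarray(x_query, float))
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # squared distances between every query point and every observation
    d2 = ((x_query[:, None, :] - x_obs[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / bandwidth**2)          # Gaussian kernel weights
    # weighted vote: kernel mass accumulated per facies at each query point
    votes = np.stack([(w * (labels == c)).sum(1) for c in classes], axis=1)
    return classes[votes.argmax(1)]
```

With exhaustive sampling the weights concentrate on the nearest observations, so the classifier, like the method in the paper, converges to the true facies map.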
A forward method for optimal stochastic nonlinear and adaptive control
NASA Technical Reports Server (NTRS)
Bayard, David S.
1988-01-01
A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
A threshold method for coastal line feature extraction from optical satellite imagery
NASA Astrophysics Data System (ADS)
Zoran, L. F. V.; Golovanov, C. Ionescu; Zoran, M. A.
2007-10-01
The world's coastal zones are under increasing stress due to the development of industry, trade and commerce, and tourism, the resultant human population growth and migration, and deteriorating water quality. Satellite imagery is used for mapping coastal zone ecosystems as well as for assessing the extent of, and alterations in, land cover/land use in coastal ecosystems. Besides anthropogenic activities, episodic events such as storms and floods induce certain changes or accelerate the process of change, so in order to conserve coastal ecosystems and habitats there is an urgent need to define the coastal line and its spatio-temporal changes. Coastlines have never been stable in terms of their long-term and short-term positions. The coastal line is a simple but important type of feature in remotely sensed images. Many valid approaches for automatically identifying this feature have been proposed in remote sensing, for which accuracy and speed are the most important criteria. The aim of the paper is to develop a threshold-based morphological approach for coastline feature extraction from optical remote sensing satellite images (Landsat 5 TM, Landsat 7 ETM+, and IKONOS) and to apply it to the Romanian Black Sea coastal zone over a period of 20 years (1985-2005).
Evaluation of Adaptive Subdivision Method on Mobile Device
NASA Astrophysics Data System (ADS)
Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila
2013-06-01
Recently, there have been significant improvements in the capabilities of mobile devices, but rendering large 3D objects is still tedious because of the resource constraints of mobile devices. To reduce storage requirements, a 3D object is simplified, but certain areas of curvature are compromised and the surface will not be smooth. Therefore, a method to smooth selected areas of curvature is implemented. One popular method is the adaptive subdivision method. Experiments are performed using two datasets, with results based on processing time, rendering speed and the appearance of the object on the devices. The results show a decline in frame-rate performance due to the increase in the number of triangles with each level of iteration, while the processing time for generating the new mesh also increases significantly. Since the devices differ in screen size, the surface on the iPhone appears to have more triangles and to be more compact than the surface displayed on the iPad.
Moreira, Sérgio R; Arsa, Gisela; Oliveira, Hildeamo B; Lima, Laila C J; Campbell, Carmen S G; Simões, Herbert G
2008-07-01
The purpose of this study was to compare different methods of identifying the lactate threshold (LT) and glucose threshold (GT) during resistance exercise in individuals with type 2 diabetes. Nine men with type 2 diabetes (47.2 +/- 12.4 years, 87.6 +/- 20.0 kg, 174.9 +/- 5.9 cm, and 22.4 +/- 7.2% body fat) performed incremental tests (ITs) on the leg press (LP) and bench press (BP) at relative intensities of 10, 20, 25, 30, 35, 40, 50, 60, 70, 80, and 90% of one-repetition maximum (1RM), with 1-minute stages. During the 2-minute interval between stages, 25 μl of capillary blood was collected from the earlobe for blood lactate [Lac] and blood glucose [Gluc] analysis (YSI 2700S). The LT in the LP and BP was identified in the IT by the inflection in the [Lac] response as well as by an equation derived from a polynomial adjustment (LTp) of the [Lac]/%1RM ratio responses. The lowest [Gluc] during the IT identified the GT. The analysis of variance showed no differences among the 1RM percentages at the thresholds identified by the different methods in the LP (LTLP = 31.0% +/- 5.3% 1RM; GTLP = 32.1% +/- 6.1% 1RM; LTpLP = 36.7% +/- 5.6% 1RM; p > 0.05) and BP (LTBP = 29.9% +/- 8.5% 1RM; GTBP = 32.1% +/- 8.5% 1RM; LTpBP = 31.8% +/- 6.7% 1RM; p > 0.05). It was concluded that the LT and GT can be identified in resistance exercise by different methods in individuals with type 2 diabetes, with no differences between them. The intensities (kg) corresponding to these thresholds were between 46% and 60% of body weight on the LP and between 18% and 26% of body weight on the BP, at which exercise would be prescribed in 3 sets of 20 to 30 repetitions each with 1 minute of rest, alternating the muscle groups, for blood glucose control in individuals with characteristics similar to the participants. PMID:18545200
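One common form of the polynomial-adjustment estimate is to fit the [Lac]/intensity ratio against intensity with a low-order polynomial and take the intensity at the fitted curve's minimum. A sketch under that assumption; the paper's exact fitting procedure may differ:

```python
import numpy as np

def lactate_threshold_poly(intensity, lactate, degree=3):
    """Polynomial-adjustment LT estimate (illustrative reconstruction):
    fit [Lac]/intensity vs intensity and return the intensity at the
    minimum of the fitted curve."""
    intensity = np.asarray(intensity, float)
    ratio = np.asarray(lactate, float) / intensity
    coeffs = np.polyfit(intensity, ratio, degree)
    grid = np.linspace(intensity.min(), intensity.max(), 1001)
    fitted = np.polyval(coeffs, grid)
    return grid[np.argmin(fitted)]                # intensity at minimum ratio
```

The ratio falls while lactate is flat, reaches a minimum near the intensity where lactate begins to rise disproportionately, and increases afterwards; the fitted minimum therefore tracks the threshold.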
Method for diagnosis and control of aerobic training in rats based on lactate threshold.
Carvalho, Joyce F; Masuda, Masako O; Pompeu, Fernando A M S
2005-04-01
We propose a protocol for determination of the lactate threshold (LT) and test the validity of aerobic training based on the LT in rats. In group I, V(LTi) (velocity at LT before training) was determined in all rats (n=10), each rat training at its own V(LTi); in group II, animals (n=7) ran at 15 m min(-1), the mean V(LTi) of group I. The training consisted of daily runs at V(LTi) for 50 min, 5 days/week, for 4 weeks. In group I, this program increased V(LT) (V(LTi) 14.90+/-1.49 m min(-1) and V(LTf), after training, 22.60+/-1.17 m min(-1)) and the velocity at exhaustion (19.50+/-1.63 m min(-1) and 27.60+/-1.17 m min(-1)). [Lactate] at LT (2.62+/-0.43 mmol L(-1) versus 2.11+/-0.15 mmol L(-1)) and relative values of LT (76+/-3% versus 82+/-2%) remained unaltered. In group II the V(LTf) was 20+/-1.8 m min(-1); the [lactate] at the LT, 2.02+/-0.17 mmol L(-1); the exhaustion speed, 23.57+/-2.11 m min(-1); and the relative value of LT, 82.71+/-2.29%. There were no significant differences in these parameters between groups I and II. Thus, this protocol based on the LT is effective, and the mean V(LT) determined in a small number of healthy untrained rats can be used for aerobic training in a larger group of healthy animals of the same gender and age. PMID:15936699
A Probabilistic Spatial Dengue Fever Risk Assessment by a Threshold-Based-Quantile Regression Method
Chiu, Chuan-Hung; Wen, Tzai-Hung; Chien, Lung-Chang; Yu, Hwa-Lung
2014-01-01
Understanding the spatial characteristics of dengue fever (DF) incidences is crucial for governmental agencies to implement effective disease control strategies. We investigated the associations between environmental and socioeconomic factors and DF geographic distribution, and propose a probabilistic risk assessment approach that uses threshold-based quantile regression to identify the significant risk factors for DF transmission and to estimate the spatial distribution of DF risk in terms of full probability distributions. To interpret risk, the return period was also included to characterize the frequency pattern of DF geographic occurrences. The study area included old Kaohsiung City and Fongshan District, two areas in Taiwan that have been affected by severe DF infections in recent decades. Results indicated that water-related facilities, including canals and ditches, and various types of residential area, as well as the interactions between them, were significant factors that elevated DF risk. By contrast, the increase of per capita income and its associated interactions with residential areas mitigated the DF risk in the study area. Nonlinear associations between these factors and DF risk were present in various quantiles, implying that water-related factors characterized the underlying spatial patterns of DF, and high-density residential areas indicated the potential for high DF incidence (e.g., clustered infections). The spatial distributions of DF risks were assessed in terms of three distinct map presentations: expected incidence rates, incidence rates in various return periods, and return periods at distinct incidence rates. These probability-based spatial risk maps exhibited distinct DF risks associated with environmental factors, expressed as various DF magnitudes and occurrence probabilities across Kaohsiung, and can serve as a reference for local governmental agencies. PMID:25302582
ERIC Educational Resources Information Center
Melaragno, Ralph J.
The two-phase study compared two methods of adapting self-instructional materials to individual differences among learners. The methods were compared with each other and with a control condition involving only minimal adaptation. The first adaptation procedure was based on subjects' performances on a learning task in Phase I of the study; the…
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
The dynamic time-over-threshold method for multi-channel APD based gamma-ray detectors
NASA Astrophysics Data System (ADS)
Orita, T.; Shimazoe, K.; Takahashi, H.
2015-03-01
Recent advances in manufacturing technology have enabled the use of multi-channel pixelated detectors in gamma-ray imaging applications. When obtaining gamma-ray measurements, it is important to obtain pulse height information in order to reject unwanted events such as scattering. However, as the number of channels increases, more electronics are needed to process each channel's signal, and the corresponding increases in circuit size and power consumption can cause practical problems. The time-over-threshold (ToT) method, which has recently become popular in the medical field, is a signal processing technique that can effectively avoid such problems. However, ToT suffers from poor linearity and its dynamic range is limited. We therefore propose a new ToT technique called the dynamic time-over-threshold (dToT) method [4]. A new signal processing system using dToT and CR-RC shaping demonstrated much better linearity than that of a conventional ToT. Using a test circuit with a new Gd3Al2Ga3O12 (GAGG) scintillator and an avalanche photodiode, the pulse height spectra of 137Cs and 22Na sources were measured with high linearity. Based on these results, we designed a new application-specific integrated circuit (ASIC) for this multi-channel dToT system, measured the spectra of a 22Na source, and investigated the linearity of the system.
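The poor linearity of conventional fixed-threshold ToT that motivates the dToT method can be illustrated numerically. The CR-RC pulse shape, shaping time, and threshold below are assumptions for illustration, not the authors' parameters:

```python
import numpy as np

def crrc_pulse(t, amplitude, tau=2.2e-6):
    """Unipolar CR-RC shaped pulse peaking at t = tau with height `amplitude`."""
    return amplitude * (t / tau) * np.exp(1.0 - t / tau)

def time_over_threshold(amplitude, threshold=0.1, tau=2.2e-6):
    """Width of the interval where the pulse exceeds a FIXED threshold."""
    t = np.linspace(0.0, 20 * tau, 20000)
    above = crrc_pulse(t, amplitude, tau) > threshold
    return above.sum() * (t[1] - t[0])

# Fixed-threshold ToT grows only roughly logarithmically with pulse height,
# which is the poor linearity that a dynamically moving threshold corrects.
for a in (0.5, 1.0, 2.0, 4.0):
    print(a, time_over_threshold(a))
```

Doubling the pulse height adds a nearly constant increment to the ToT rather than doubling it, so a fixed-threshold readout compresses the energy scale.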
An Adaptive De-Aliasing Strategy for Discontinuous Galerkin methods
NASA Astrophysics Data System (ADS)
Beck, Andrea; Flad, David; Frank, Hannes; Munz, Claus-Dieter
2015-11-01
Discontinuous Galerkin methods combine the accuracy of a local polynomial representation with the geometrical flexibility of an element-based discretization. In combination with their excellent parallel scalability, these methods are currently of great interest for DNS and LES. For high order schemes, the dissipation error approaches a cut-off behavior, which allows an efficient wave resolution per degree of freedom, but also reduces robustness against numerical errors. One important source of numerical error is the inconsistent discretization of the non-linear convective terms, which results in aliasing of kinetic energy and solver instability. Consistent evaluation of the inner products prevents this form of error, but is computationally very expensive. In this talk, we discuss the need for a consistent de-aliasing to achieve a neutrally stable scheme, and present a novel strategy for recovering a part of the incurred computational costs. By implementing the de-aliasing operation through a cell-local projection filter, we can perform adaptive de-aliasing in space and time, based on physically motivated indicators. We will present results for a homogeneous isotropic turbulence and the Taylor-Green vortex flow, and discuss implementation details, accuracy and efficiency.
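The cell-local projection filter for de-aliasing can be sketched for a single 1-D element: evaluate the nonlinear term on a quadrature rule that is exact for the full product, then L2-project back onto the polynomial space. The degree and modal coefficients below are illustrative, not taken from the talk:

```python
import numpy as np
from numpy.polynomial import legendre

N = 7  # polynomial degree inside one DG element (assumed)
u_hat = 1.0 / (1.0 + np.arange(N + 1))  # modal Legendre coefficients (assumed)

# Consistent ("de-aliased") evaluation of u^2: sample on a rule exact for
# the projection integrands, then project back onto the degree-N space.
xq, wq = legendre.leggauss(2 * (N + 1))  # exact up to degree 4N+3
uq = legendre.legval(xq, u_hat)
basis = np.array([legendre.legval(xq, np.eye(N + 1)[k]) for k in range(N + 1)])
proj = (2 * np.arange(N + 1) + 1) / 2.0 * (basis * wq * uq**2).sum(axis=1)

# Aliased version: same projection but with only N+1 quadrature points,
# which cannot integrate the higher-degree integrands exactly.
xa, wa = legendre.leggauss(N + 1)
ua = legendre.legval(xa, u_hat)
basis_a = np.array([legendre.legval(xa, np.eye(N + 1)[k]) for k in range(N + 1)])
aliased = (2 * np.arange(N + 1) + 1) / 2.0 * (basis_a * wa * ua**2).sum(axis=1)

print("aliasing error per mode:", np.abs(proj - aliased))
```

The lowest modes agree (the short rule is still exact for them) while the high modes pick up aliasing error, which is exactly the energy the projection filter removes.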
Method for removing tilt control in adaptive optics systems
Salmon, J.T.
1998-04-28
A new adaptive optics system and method of operation are disclosed, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^{-1} X^T) G (I - A). 3 figs.
Method for removing tilt control in adaptive optics systems
Salmon, Joseph Thaddeus
1998-01-01
A new adaptive optics system and method of operation are disclosed, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^{-1} X^T) G (I - A).
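The modified gain matrix G' = (I - X(X^T X)^{-1} X^T) G (I - A) left-multiplies the nominal gain by the projector onto the orthogonal complement of the tilt modes (the columns of X), so actuator commands produced by G' carry no tilt component. A minimal NumPy sketch, with toy dimensions and arbitrary contents for G, X, and A:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # number of actuators / sensor channels (toy size)

G = rng.standard_normal((n, n))   # nominal reconstructor gain matrix (assumed)
X = rng.standard_normal((n, 2))   # columns span the tip/tilt modes (assumed)
A = 0.1 * np.eye(n)               # coupling term from the patent equation (assumed)

# Modified gain matrix: G' = (I - X (X^T X)^{-1} X^T) G (I - A)
I = np.eye(n)
P = X @ np.linalg.inv(X.T @ X) @ X.T   # orthogonal projector onto span(X)
G_prime = (I - P) @ G @ (I - A)

# Any command produced by G' has no component along the tilt modes:
gradient = rng.standard_normal(n)
print(np.abs(X.T @ G_prime @ gradient).max())  # zero up to round-off
```

Because X^T (I - P) = 0 identically, the steering mirror alone handles tilt while the deformable mirror handles the remaining aberrations.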
Adapted G-mode Clustering Method applied to Asteroid Taxonomy
NASA Astrophysics Data System (ADS)
Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.
2013-11-01
The original G-mode was a clustering method developed by A. I. Gavrishin in the late 60's for geochemical classification of rocks, but it has also been applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify the asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, reimplemented in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator and numpy.histogramdd was applied to find the initial seeds from which clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
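The two SciPy/NumPy ingredients named in the abstract, seed finding with numpy.histogramdd and membership testing with scipy.spatial.distance.mahalanobis, can be sketched on synthetic two-group data. The data, bin count, and distance cut below are assumptions, not the authors' settings:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(1)
# Toy two-colour photometric sample: two Gaussian groups (synthetic data).
a = rng.normal([0.0, 0.0], 0.1, size=(200, 2))
b = rng.normal([1.0, 1.0], 0.1, size=(200, 2))
data = np.vstack([a, b])

# Seed search: the densest cell of a multidimensional histogram serves as
# the starting point from which a cluster evolves.
hist, edges = np.histogramdd(data, bins=10)
idx = np.unravel_index(np.argmax(hist), hist.shape)
seed = np.array([0.5 * (e[i] + e[i + 1]) for e, i in zip(edges, idx)])

# Membership test via Mahalanobis distance from the seed.
VI = np.linalg.inv(np.cov(data, rowvar=False))
d = np.array([mahalanobis(p, seed, VI) for p in data])
print("points within 1 sigma of seed:", int((d < 1.0).sum()))
```

The densest histogram cell lands near one of the two group centres, which is precisely the robust replacement for the smallest-mutual-distance search that fails on non-planar data.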
NASA Technical Reports Server (NTRS)
Alov, N. V.; Dadayan, K. A.
1988-01-01
The feasibility of measuring metal work functions using the secondary emission threshold method and an electron spectrometer is demonstrated. Measurements are reported for Nb, Mo, Ta, and W bombarded by Ar(+) ions.
Adaptable Metadata Rich IO Methods for Portable High Performance IO
Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten
2009-01-01
Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small
A hybrid method for optimization of the adaptive Goldstein filter
NASA Astrophysics Data System (ADS)
Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue
2014-12-01
The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is set as a power of the filtering function; depending on its value, areas are filtered more or less strongly. Several variants have been developed to adaptively determine alpha using different indicators such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in practice, and the optimal model for determining the functional relationship between the indicators and alpha is also unclear. As a result, the filter tends to under- or over-filter. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is also merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance when compared to existing approaches.
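The core Goldstein step, weighting the patch spectrum by its own magnitude raised to alpha, can be sketched as follows. The coherence-driven rule alpha = 1 - mean coherence follows one common adaptive variant and is not the optimal nonlinear model proposed in this paper; the fringe pattern and noise level are synthetic:

```python
import numpy as np

def goldstein_filter(patch, alpha):
    """Goldstein filter on one complex interferogram patch: weight the
    spectrum by its normalised magnitude raised to the power alpha."""
    spec = np.fft.fft2(patch)
    mag = np.abs(spec)
    weight = (mag / mag.max()) ** alpha
    return np.fft.ifft2(weight * spec)

def adaptive_alpha(coherence):
    """One common adaptive rule: alpha = 1 - mean coherence, so
    high-coherence (low-noise) areas are filtered weakly."""
    return 1.0 - np.clip(np.mean(coherence), 0.0, 1.0)

rng = np.random.default_rng(2)
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 64), np.linspace(0, 4 * np.pi, 64))
phase = x + 0.5 * y + 0.8 * rng.standard_normal((64, 64))  # noisy fringes
patch = np.exp(1j * phase)

filtered = goldstein_filter(patch, adaptive_alpha(np.full((64, 64), 0.4)))
```

The dominant fringe frequency keeps unit weight while the noise floor, which has much smaller spectral magnitude, is suppressed, reducing the residual phase noise.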
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
NASA Astrophysics Data System (ADS)
Kurihara, Yosuke; Watanabe, Kajiro; Kobayashi, Kazuyuki; Tanaka, Hiroshi
General anesthesia used for surgical operations may cause unstable conditions in patients after the operation, which could lead to respiratory arrest. Under such circumstances, nurses could fail to notice the change in condition, and other malpractices could also occur. Such malpractices are especially likely to occur while transferring a patient from the ICU to a room on a stretcher. Monitoring the change in blood oxygen saturation and other vital signs to detect a respiratory arrest is not easy when transferring a patient on a stretcher. Here we present noise reduction systems and an algorithm to detect respiratory arrest while transferring a patient, based on the unconstrained air pressure method that the authors presented previously. As a result, when the acceleration level of the stretcher noise was 0.5 G, the respiratory arrest detection ratio using this novel method was 65%, while that of the conventional method was 0%.
LDRD Final Report: Adaptive Methods for Laser Plasma Simulation
Dorr, M R; Garaizar, F X; Hittinger, J A
2003-01-29
The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are ''hydrodynamically large'', i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an
NASA Astrophysics Data System (ADS)
Chung-Wei, Li; Gwo-Hshiung, Tzeng
To deal with complex problems, structuring them through graphical representations and analyzing causal influences can aid in illuminating complex issues, systems, or concepts. The DEMATEL method is a methodology which can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation, the impact-relations map, by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is to construct interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to obtain adequate information for further analysis and decision-making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using real cases of finding the interrelationships between the criteria for evaluating effects in E-learning programs as an example, we compare the results obtained from the respondents and from our method, and discuss the differences between the impact-relations maps produced by the two approaches.
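The standard DEMATEL computation leading up to the threshold choice can be sketched as follows. The direct-influence matrix is invented for illustration, and the simple mean-based threshold at the end is a common baseline standing in for the paper's maximum mean de-entropy algorithm:

```python
import numpy as np

# Toy direct-influence matrix among 4 criteria (values are assumptions).
A = np.array([
    [0, 3, 2, 1],
    [1, 0, 3, 2],
    [2, 1, 0, 3],
    [1, 2, 1, 0],
], dtype=float)

# Standard DEMATEL steps: normalise, then form the total-relation matrix
# T = D (I - D)^{-1}, which accumulates direct and all indirect influences.
D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())
T = D @ np.linalg.inv(np.eye(4) - D)

# Only entries of T above the threshold become edges of the map; the MMDE
# algorithm in this paper refines exactly this threshold choice.
threshold = T.mean()
impact_map = T > threshold
print(impact_map.astype(int))
```

Raising the threshold prunes the impact-relations map toward only the strongest causal links, which is why an ill-chosen value either floods the map with edges or empties it.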
A varying threshold method for ChIP peak-calling using multiple sources of information
Chen, Kuan-Bei; Zhang, Yu
2010-01-01
Motivation: Gene regulation commonly involves interaction among DNA, proteins and biochemical conditions. Using chromatin immunoprecipitation (ChIP) technologies, protein–DNA interactions are routinely detected on the genome scale. Computational methods that detect weak protein-binding signals while maintaining high specificity remain challenging. An attractive approach is to incorporate biologically relevant data, such as protein co-occupancy, to improve the power of protein-binding detection. We refer to the additional data related to the target protein binding as supporting tracks. Results: We propose a novel but rigorous statistical method to identify protein occupancy in ChIP data using multiple supporting tracks (PASS2). We demonstrate that utilizing biologically related information can significantly increase the discovery of true protein-binding sites, while still maintaining a desired level of false positive calls. Applying the method to GATA1 restoration in a mouse erythroid cell line, we detected many new GATA1-binding sites using GATA1 co-occupancy data. Availability: http://stat.psu.edu/∼yuzhang/pass2.tar Contact: yuzhang@stat.psu.edu PMID:20823314
Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method
NASA Astrophysics Data System (ADS)
Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph
2008-11-01
This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
NASA Technical Reports Server (NTRS)
Wang, Ray (Inventor)
2009-01-01
A method and system for spatial data manipulation input and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both the short and long distances. The wireless transceiver is automatically adaptive and wireless devices can send and receive wireless digital and analog data from various sources rapidly in real-time via available networks and network services.
Bayesian approach to color-difference models based on threshold and constant-stimuli methods.
Brusola, Fernando; Tortajada, Ignacio; Lengua, Ismael; Jordá, Begoña; Peris, Guillermo
2015-06-15
An alternative approach based on statistical Bayesian inference is presented to deal with the development of color-difference models and the precision of parameter estimation. The approach was applied to simulated data and real data, the latter published by selected authors involved with the development of color-difference formulae using traditional methods. Our results show very good agreement between the Bayesian and classical approaches. Among other benefits, our proposed methodology allows one to determine the marginal posterior distribution of each random individual parameter of the color-difference model. In this manner, it is possible to analyze the effect of individual parameters on the statistical significance calculation of a color-difference equation. PMID:26193510
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored using the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were subsequently queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
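Generating such a Granule description programmatically might look like the following sketch. The element names loosely follow SPASE conventions as described in the abstract, but the specific identifiers, file name, and URL here are made up:

```python
import xml.etree.ElementTree as ET

# Illustrative only: a minimal SPASE-like "Granule" description associating
# one CDF file with a parent resource. All identifiers below are fictitious.
spase = ET.Element("Spase")
granule = ET.SubElement(spase, "Granule")
ET.SubElement(granule, "ResourceID").text = \
    "spase://NASA/Granule/Example/ac_h0_mfi_20150101"
ET.SubElement(granule, "ParentID").text = "spase://NASA/NumericalData/Example"
source = ET.SubElement(granule, "Source")
ET.SubElement(source, "SourceType").text = "Data"
ET.SubElement(source, "URL").text = \
    "https://cdaweb.gsfc.nasa.gov/pub/data/example/ac_h0_mfi_20150101.cdf"

print(ET.tostring(spase, encoding="unicode"))
```

Running such a generator over a nightly file listing is how one description per data file can be kept in sync with the archive.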
Adaptation of a-Stratified Method in Variable Length Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai
Test security has often been a problem in computerized adaptive testing (CAT) because the traditional wisdom of item selection overly exposes high discrimination items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses items of less discrimination in earlier stages of testing, has been shown to be very…
Study of adaptive methods for data compression of scanner data
NASA Technical Reports Server (NTRS)
1977-01-01
The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.
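The DPCM idea behind the adaptive 2D DPCM technique can be shown in a toy loop: predict each pixel from already-reconstructed neighbours and quantize only the prediction error. The predictor, quantizer step, and test image are assumptions; the study's adaptive variant is more elaborate:

```python
import numpy as np

def dpcm_2d(image, step=8):
    """Minimal 2-D DPCM sketch: predict each pixel as the mean of its left
    and upper RECONSTRUCTED neighbours, then quantize the prediction error."""
    h, w = image.shape
    recon = np.zeros((h, w))
    residuals = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            left = recon[i, j - 1] if j > 0 else 128.0
            up = recon[i - 1, j] if i > 0 else 128.0
            pred = 0.5 * (left + up)
            q = int(round((image[i, j] - pred) / step))
            residuals[i, j] = q          # entropy-coded in a real system
            recon[i, j] = pred + q * step
    return residuals, recon

rng = np.random.default_rng(5)
img = np.cumsum(rng.normal(0, 2, (32, 32)), axis=1) + 128  # smooth test image
res, rec = dpcm_2d(img)
print("max reconstruction error:", np.abs(rec - img).max())
```

Predicting from reconstructed (not original) neighbours keeps the decoder in lockstep with the encoder, so the error stays bounded by half a quantizer step instead of drifting.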
Systems and Methods for Derivative-Free Adaptive Control
NASA Technical Reports Server (NTRS)
Yucelen, Tansel (Inventor); Kim, Kilsoo (Inventor); Calise, Anthony J. (Inventor)
2015-01-01
An adaptive control system is disclosed. The control system can control uncertain dynamic systems. The control system can employ one or more derivative-free adaptive control architectures. The control system can further employ one or more derivative-free weight update laws. The derivative-free weight update laws can comprise a time-varying estimate of an ideal vector of weights. The control system of the present invention can therefore quickly stabilize systems that undergo sudden changes in dynamics, caused by, for example, sudden changes in weight. Embodiments of the present invention can also provide a less complex control system than existing adaptive control systems. The control system can control aircraft and other dynamic systems, such as, for example, those with non-minimum phase dynamics.
NASA Astrophysics Data System (ADS)
Nakamura, Y.; Shimazoe, K.; Takahashi, H.
2016-02-01
Silicon photomultipliers (SiPMs), which are a relatively new type of photon detector, have received more attention in the fields of nuclear medicine and high-energy physics because of their compactness and high gain up to 106. In this work, a SiPM-based multi-channel gamma ray detector with individual read out based on the dynamic time-over-threshold (dToT) method is implemented and demonstrated as an elemental material for large-area gamma ray imager applications. The detector consists of 64 channels of KETEK SiPM PM6660 (6 × 6 mm2 containing 10,000 micro-cells of 60 × 60 μm2) coupled to an 8 × 8 array of high-energy resolution Gd3(Al,Ga)5O12(Ce) (HR-GAGG) crystals (10 × 10 × 10 mm3) segmented by a 1 mm thick BaSO4 reflector. To produce a digital pulse containing linear energy information, the dToT-based read-out circuit consists of a CR-RC shaping amplifier (2.2 μs) and comparator with a feedback component. By modelling the pulse of the SiPM, the light output, and the CR-RC shaping amplifier, the integral-non-linearity (INL) was numerically calculated in terms of the delay time and the time constant of dynamic threshold movement. The experimental results of the averaged INL and energy resolution were 5.8±1.6% and the full-width-at-half-maximum (FWHM) of 7.4±0.9% at 662 keV, respectively. The 64-channel single-mode detector module was successfully implemented, demonstrating potential for its use as an elemental material for large-area gamma ray imaging applications.
A New Method to Cancel RFI---The Adaptive Filter
NASA Astrophysics Data System (ADS)
Bradley, R.; Barnbaum, C.
1996-12-01
An increasing amount of precious radio frequency spectrum in the VHF, UHF, and microwave bands is being utilized each year to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Some radio spectral lines of astronomical interest occur outside the protected radio astronomy bands and are unobservable due to heavy interference. Conventional approaches to deal with RFI include legislation, notch filters, RF shielding, and post-processing techniques. Although these techniques are somewhat successful, each suffers from insufficient interference cancellation. One concept of interference excision that has not been used before in radio astronomy is adaptive interference cancellation. The concept of adaptive interference canceling was first introduced in the mid-1970s as a way to reduce unwanted noise in low frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartment of automobiles. Only recently have high-speed digital filter chips made adaptive filtering possible in a bandwidth as large as a few megahertz, finally opening the door to astronomical uses. The system consists of two receivers: the main beam of the radio telescope receives the desired signal corrupted by RFI coming in the sidelobes, and the reference antenna receives only the RFI. The reference antenna signal is processed using a digital adaptive filter and then subtracted from the signal in the main beam, thus producing the system output. The weights of the digital filter are adjusted by way of an algorithm that minimizes, in a least-squares sense, the power output of the system. Through an adaptive-iterative process, the interference canceler will lock onto the RFI and the filter will adjust itself to minimize the effect of the RFI at the system output. We are building a prototype 100 MHz receiver and will measure the cancellation
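The two-receiver scheme described here is the classic adaptive noise canceller, commonly implemented with the LMS weight update. A minimal sketch, with illustrative tap count, step size, and synthetic signals (not the prototype's parameters):

```python
import numpy as np

def lms_canceller(primary, reference, n_taps=8, mu=0.01):
    """LMS adaptive noise canceller: filter the reference-antenna signal to
    match the RFI in the primary channel, then subtract the estimate."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        y = w @ x                           # RFI estimate in the main beam
        out[n] = primary[n] - y             # system output (error signal)
        w += 2 * mu * out[n] * x            # steepest-descent weight update
    return out

rng = np.random.default_rng(3)
n = 5000
rfi = np.sin(0.2 * np.pi * np.arange(n))        # narrowband interferer
signal = 0.1 * rng.standard_normal(n)           # weak astronomical signal
primary = signal + 2.0 * rfi                    # main beam: signal + RFI
reference = rfi + 0.01 * rng.standard_normal(n) # reference antenna: RFI only

cleaned = lms_canceller(primary, reference)
```

Minimising the output power removes only the component correlated with the reference, so the astronomical signal, which the reference antenna does not see, survives the subtraction.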
The use of the spectral method within the fast adaptive composite grid method
McKay, S.M.
1994-12-31
The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on different grids with varying discretizations and by using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy of this hybrid method outside of the subdomain will be investigated.
Adaptive finite element methods for two-dimensional problems in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1994-01-01
Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.
Evaluation of an adaptive beamforming method for hearing aids.
Greenberg, J E; Zurek, P M
1992-03-01
In this paper evaluations of a two-microphone adaptive beamforming system for hearing aids are presented. The system, based on the constrained adaptive beamformer described by Griffiths and Jim [IEEE Trans. Antennas Propag. AP-30, 27-34 (1982)], adapts to preserve target signals from straight ahead and to minimize jammer signals arriving from other directions. Modifications of the basic Griffiths-Jim algorithm are proposed to alleviate problems of target cancellation and misadjustment that arise in the presence of strong target signals. The evaluations employ both computer simulations and a real-time hardware implementation and are restricted to the case of a single jammer. Performance is measured by the spectrally weighted gain in the target-to-jammer ratio in the steady state. Results show that in environments with relatively little reverberation: (1) the modifications allow good performance even with misaligned arrays and high input target-to-jammer ratios; and (2) performance is better with a broadside array with 7-cm spacing between microphones than with a 26-cm broadside or a 7-cm endfire configuration. Performance degrades in reverberant environments; at the critical distance of a room, improvement with a practical system is limited to a few dB. PMID:1564202
Method and apparatus for adaptive force and position control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1989-01-01
The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time adaptive force/position control is achieved by use of feedforward and feedback controllers: the feedforward controller is the inverse of the linearized model of robot dynamics and contains only proportional-double-derivative terms. The feedback controller, of the proportional-integral-derivative type, ensures that manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture: the adaptive force controller achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.
NASA Astrophysics Data System (ADS)
Silva, N. P.; Camargo, R. D.
2014-12-01
Given the growing investment in coastal activities, such as industrial and residential settings, proper understanding of oceanographic and meteorological phenomena over such areas became very important. The winds play a major role in this context, being the main source of energy for gravity wave generation in the ocean, and determining the characterization of severe weather conditions. In this study, a statistical analysis of extreme values was applied to wind data from National Centers for Environmental Prediction and National Center for Atmospheric Research reanalysis (NCEP-I) grid points with 2.5° of spatial resolution and results from a simulation with the BRAMS model in the Southwest Atlantic Ocean region with 0.25° of resolution. The Peaks Over Threshold (POT) technique was applied and the analysis focused on the behavior of extreme values according to the wind direction and the resolution of the original data. The period of analysis goes from 1982 to 2011 and the domain goes from 40°S to 5°N latitude and 70°W to 10°W longitude. The POT method demanded that peaks chosen for analysis were independent and identically distributed, so a minimum interval of 48 hours was imposed to separate the subset sampled for analysis. The peak excesses above a determined threshold were adjusted to the Generalized Pareto Distribution and extrapolation to 50-year return periods was built at each grid point. General large-scale patterns of 50-yr return values were similar for both datasets used. However, more details were verified in the analysis of simulation results with BRAMS, given the dependence of the methodology on the resolution of the original set. Thus, the greater detailing suggests the inclusion of mesoscale features as the origin of these extreme values. In the northern part of the domain, extreme winds were weaker and prevailed from north, northeast and east, given the influence of the trade winds and the positioning of the South Atlantic Subtropical High. On the
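The POT workflow used in this study (declustered exceedances fitted to a Generalized Pareto Distribution and extrapolated to a return period) can be sketched as follows; function and variable names are illustrative, not from the study.

```python
import numpy as np
from scipy.stats import genpareto

def pot_return_value(series, threshold, decluster_steps, return_period_steps):
    """Peaks-over-threshold sketch: decluster exceedances, fit a GPD to
    the excesses (location fixed at zero), and invert the fitted
    distribution at the target return period."""
    idx = np.flatnonzero(series > threshold)
    # Keep only peaks separated by at least `decluster_steps` samples
    # (the study imposes a 48-hour minimum separation).
    peaks, last = [], -decluster_steps
    for i in idx:
        if i - last >= decluster_steps:
            peaks.append(series[i])
            last = i
    excesses = np.asarray(peaks) - threshold
    shape, _, scale = genpareto.fit(excesses, floc=0.0)
    # Exceedance rate per time step, then the quantile of the GPD
    # corresponding to one exceedance per return period.
    rate = len(excesses) / len(series)
    p_exceed = 1.0 / (return_period_steps * rate)
    return threshold + genpareto.ppf(1.0 - p_exceed, shape, loc=0.0, scale=scale)
```

For the 50-year return value one would set `return_period_steps` to the number of samples in 50 years at the data's time resolution.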
NASA Astrophysics Data System (ADS)
Wang, Chuanyun; Qin, Shiyin
2015-03-01
Motivated by robust principal component analysis, an infrared small-target image is regarded as a low-rank background matrix corrupted by sparse target and noise matrices; thus a new target-background separation model is designed, and an adaptive detection method for infrared small targets is presented. First, multi-scale transform and patch transform are used to generate an image patch set for infrared small target detection; second, target-background separation of each patch is achieved by recovering the low-rank and sparse matrices using an adaptive weighting parameter; third, image reconstruction and fusion are carried out to obtain the entire separated background and target images; finally, infrared small target detection is realized by threshold segmentation of a template-matching similarity measurement. To validate the performance of the proposed method, three experiments (target-background separation, background clutter suppression, and infrared small target detection) are performed over different clutter backgrounds with real infrared small targets in single-frame or sequence images. The experimental results demonstrate that the proposed method can not only suppress background clutter effectively, even under strong noise interference, but also detect targets accurately with a low false alarm rate.
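A minimal sketch of the low-rank plus sparse separation underlying this kind of model, via a fixed-penalty inexact augmented Lagrangian iteration (singular value thresholding for the background, soft thresholding for the target). Parameter choices follow common RPCA defaults, not the paper's adaptive weighting.

```python
import numpy as np

def soft(x, tau):
    """Element-wise soft thresholding (prox of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca_separate(D, lam=None, mu=None, iters=300):
    """Split D into low-rank background L and sparse target S by an
    inexact augmented Lagrangian iteration for
    min ||L||_* + lam*||S||_1  s.t.  L + S = D."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))       # common RPCA default
    if mu is None:
        mu = 0.25 * m * n / np.abs(D).sum()  # fixed penalty parameter
    Y = np.zeros_like(D)                     # Lagrange multiplier
    S = np.zeros_like(D)
    for _ in range(iters):
        # Low-rank update: singular value thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt
        # Sparse update: element-wise soft thresholding.
        S = soft(D - L + Y / mu, lam / mu)
        # Dual ascent on the constraint L + S = D.
        Y += mu * (D - L - S)
    return L, S
```

In the paper's pipeline this separation would be applied per image patch, with the recovered sparse component thresholded to declare detections.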
Multiscale Simulation of Microcrack Based on a New Adaptive Finite Element Method
NASA Astrophysics Data System (ADS)
Xu, Yun; Chen, Jun; Chen, Dong Quan; Sun, Jin Shan
In this paper, a new adaptive finite element (FE) framework based on the variational multiscale method is proposed and applied to simulate the dynamic behavior of metals under loading. First, the extended bridging scale method is used to couple molecular dynamics and FE. Then, the macroscopic damage evolution of micro defects is simulated by the adaptive FE method. Some auxiliary strategies, such as conservative mesh remapping, the failure mechanism, and the mesh splitting technique, are also included in the adaptive FE computation. The efficiency of our method is validated by numerical experiments.
An adaptive response surface method for crashworthiness optimization
NASA Astrophysics Data System (ADS)
Shi, Lei; Yang, Ren-Jye; Zhu, Ping
2013-11-01
Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
Robustness of an adaptive beamforming method for hearing aids.
Peterson, P M; Wei, S M; Rabinowitz, W M; Zurek, P M
1990-01-01
We describe the results of computer simulations of a multimicrophone adaptive-beamforming system as a noise reduction device for hearing aids. Of particular concern was the system's sensitivity to violations of the underlying assumption that the target signal is identical at the microphones. Two- and four-microphone versions of the system were tested in simulated anechoic and modestly-reverberant environments with one and two jammers, and with deviations from the assumed straight-ahead target direction. Also examined were the effects of input target-to-jammer ratio and adaptive-filter length. Generally, although the noise-reduction performance of the system is degraded by target misalignment and modest reverberation, the system still provides positive advantage at input target-to-jammer ratios up to about 0 dB. This is in contrast to the degrading target-cancellation effect that the system can have when the equal-target assumption is violated and the input target-to-jammer ratio is greater than zero. PMID:2356741
Nonlinear mode decomposition: A noise-robust, adaptive decomposition method
NASA Astrophysics Data System (ADS)
Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta
2015-09-01
The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.
Investigating Item Exposure Control Methods in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Ozturk, Nagihan Boztunc; Dogan, Nuri
2015-01-01
This study aims to investigate the effects of item exposure control methods on measurement precision and on test security under various item selection methods and item pool characteristics. In this study, the Randomesque (with item group sizes of 5 and 10), Sympson-Hetter, and Fade-Away methods were used as item exposure control methods. Moreover,…
A time-saving method to assess power output at lactate threshold in well-trained and elite cyclists.
Støren, Øyvind; Rønnestad, Bent R; Sunde, Arnstein; Hansen, Joar; Ellefsen, Stian; Helgerud, Jan
2014-03-01
The purpose of this study was to examine the relationship between lactate threshold (LT) as a percentage of maximal oxygen consumption (V̇O2max) and power output at LT (LTW), and also to investigate to what extent V̇O2max, oxygen cost of cycling (CC), and maximal aerobic power (MAP) determine LTW in cycling, in order to develop a new time-saving model for testing LTW. To do this, 108 male competitive cyclists with an average V̇O2max of 65.2 ± 7.4 ml·kg⁻¹·min⁻¹ and an average LTW of 274 ± 43 W were tested for V̇O2max, LT %V̇O2max, LTW, MAP, and CC on a test ergometer cycle. The product of MAP and individual LT in %V̇O2max was found to be a good determinant of LTW (R = 0.98, p < 0.0001). However, LT in %V̇O2max alone was found to be a poor determinant of LTW (R = 0.39, p < 0.0001). Based on these findings, we have suggested a new time-saving method for calculating LTW in well-trained cyclists. The benefits of this model come both from tracking LTW during training interventions and from regularly assessing training status in competitive cyclists. Briefly, this method is based on the present findings that LTW depends on LT in %V̇O2max, V̇O2max, and CC and may, after an initial test session, reduce the time for subsequent testing of LTW by as much as 50% without the need for blood samples. PMID:23942166
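The proposed shortcut reduces to a one-line computation: LTW is estimated as the product of MAP and the individual LT expressed as a fraction of V̇O2max. A trivial sketch (names are illustrative):

```python
def estimate_ltw(map_watts, lt_percent_vo2max):
    """Estimate lactate-threshold power output (W) as the product of
    maximal aerobic power (MAP, in watts) and the individual lactate
    threshold expressed as a percentage of VO2max; this product
    correlated with measured LTW at R = 0.98 in the abstract."""
    return map_watts * lt_percent_vo2max / 100.0
```

For example, a rider with MAP of 350 W and an LT of 80 %V̇O2max would be assigned an estimated LTW of 280 W, in line with the cohort average of 274 ± 43 W.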
A massively parallel adaptive finite element method with dynamic load balancing
Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.
1993-05-01
We construct massively parallel, adaptive finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. We also present results using adaptive p-refinement to reduce the computational cost of the method. We describe tiling, a dynamic, element-based data migration system. Tiling dynamically maintains global load balance in the adaptive method by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. We demonstrate the effectiveness of the dynamic load balancing with adaptive p-refinement examples.
An examination of an adapter method for measuring the vibration transmitted to the human arms
Xu, Xueyan S.; Dong, Ren G.; Welcome, Daniel E.; Warren, Christopher; McDowell, Thomas W.
2016-01-01
The objective of this study is to evaluate an adapter method for measuring the vibration on the human arms. Four instrumented adapters with different weights were used to measure the vibration transmitted to the wrist, forearm, and upper arm of each subject. Each adapter was attached at each location on the subjects using an elastic cloth wrap. Two laser vibrometers were also used to measure the transmitted vibration at each location to evaluate the validity of the adapter method. The apparent mass at the palm of the hand along the forearm direction was also measured to enhance the evaluation. This study found that the adapter and laser-measured transmissibility spectra were comparable with some systematic differences. While increasing the adapter mass reduced the resonant frequency at the measurement location, increasing the tightness of the adapter attachment increased the resonant frequency. However, the use of lightweight (≤15 g) adapters under medium attachment tightness did not change the basic trends of the transmissibility spectrum. The resonant features observed in the transmissibility spectra were also correlated with those observed in the apparent mass spectra. Because the local coordinate systems of the adapters may be significantly misaligned relative to the global coordinates of the vibration test systems, large errors were observed for the adapter-measured transmissibility in some individual orthogonal directions. This study, however, also demonstrated that the misalignment issue can be resolved by either using the total vibration transmissibility or by measuring the misalignment angles to correct the errors. Therefore, the adapter method is acceptable for understanding the basic characteristics of the vibration transmission in the human arms, and the adapter-measured data are acceptable for approximately modeling the system. PMID:26834309
Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems
NASA Technical Reports Server (NTRS)
1975-01-01
The application of control-theoretic ideas to the design of flight control systems for the F-8 aircraft was investigated. The design of an adaptive control system based upon the so-called multiple model adaptive control (MMAC) method is considered. Progress is reported.
The older person has a stroke: Learning to adapt using the Feldenkrais® Method.
Jackson-Wyatt, O
1995-01-01
The older person with a stroke requires adapted therapeutic interventions to take into account normal age-related changes. The Feldenkrais® Method presents a model for learning to promote adaptability that addresses key functional changes seen with normal aging. Clinical examples related to specific functional tasks are discussed to highlight major treatment modifications and neuromuscular, psychological, emotional, and sensory considerations. PMID:27619899
An adaptive filter method for spacecraft using gravity assist
NASA Astrophysics Data System (ADS)
Ning, Xiaolin; Huang, Panpan; Fang, Jiancheng; Liu, Gang; Ge, Shuzhi Sam
2015-04-01
Celestial navigation (CeleNav) has been successfully used during gravity assist (GA) flyby for orbit determination in many deep space missions. Due to spacecraft attitude errors, ephemeris errors, the camera center-finding bias, and the frequency of the images before and after the GA flyby, the statistics of the measurement noise cannot be accurately determined and have time-varying characteristics, which may introduce large estimation errors and even cause filter divergence. In this paper, an unscented Kalman filter (UKF) with adaptive measurement noise covariance, called ARUKF, is proposed to deal with this problem. ARUKF scales the measurement noise covariance according to the changes in the innovation and residual sequences. Simulations demonstrate that ARUKF is robust to an inaccurate initial measurement noise covariance matrix and to time-varying measurement noise. The impact factors in the ARUKF are also investigated.
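The innovation-based noise adaptation can be illustrated on a scalar Kalman filter, where the measurement-noise variance is re-estimated from a sliding window of innovations. This is a simplified sketch of the general idea, not the paper's ARUKF; all names and tuning values are illustrative.

```python
import numpy as np

def adaptive_kf(zs, q=1e-4, r0=1.0, window=50):
    """Scalar random-walk Kalman filter whose measurement-noise
    variance R is re-estimated on line as the windowed innovation
    covariance minus the predicted state variance."""
    x, p, r = 0.0, 1.0, r0
    innovations, estimates = [], []
    for z in zs:
        p = p + q                    # predict (random-walk model)
        nu = z - x                   # innovation
        innovations.append(nu)
        if len(innovations) >= window:
            c = np.mean(np.square(innovations[-window:]))
            r = max(c - p, 1e-8)     # adapt measurement-noise variance
        k = p / (p + r)              # Kalman gain
        x = x + k * nu               # update state estimate
        p = (1.0 - k) * p            # update state variance
        estimates.append(x)
    return np.array(estimates), r
```

Starting from a deliberately wrong `r0`, the filter recovers a variance close to the true measurement noise while still tracking the state.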
New methods and astrophysical applications of adaptive mesh fluid simulations
NASA Astrophysics Data System (ADS)
Wang, Peng
The formation of stars, galaxies and supermassive black holes is among the most interesting unsolved problems in astrophysics. Those problems are highly nonlinear and involve enormous dynamical ranges. Thus numerical simulations with spatial adaptivity are crucial in understanding those processes. In this thesis, we discuss the development and application of adaptive mesh refinement (AMR) multi-physics fluid codes to simulate those nonlinear structure formation problems. To simulate the formation of star clusters, we have developed an AMR magnetohydrodynamics (MHD) code, coupled with radiative cooling. We have also developed novel algorithms for sink particle creation, accretion, merging and outflows, all of which are coupled with the fluid algorithms using operator splitting. With this code, we have been able to perform the first AMR-MHD simulation of star cluster formation over several dynamical times, including sink particle and protostellar outflow feedbacks. The results demonstrated that protostellar outflows can drive supersonic turbulence in dense clumps and explain the observed slow and inefficient star formation. We also suggest that the global collapse rate is the most important factor in controlling the massive star accretion rate. On the topic of galaxy formation, we discuss the results of three projects. In the first project, using cosmological AMR hydrodynamics simulations, we found that isolated massive stars still form in cosmic string wakes even though the mega-parsec scale structure has been perturbed significantly by the cosmic strings. In the second project, we calculated the dynamical heating rate in galaxy formation. We found that balancing our heating rate against the atomic cooling rate gives a critical halo mass which agrees with the result of numerical simulations. This demonstrates that the effect of dynamical heating should be included in semi-analytical models in the future. In the third project, using our AMR-MHD code coupled with radiative
Okubo, Mitsuru; Nishimura, Yasumasa; Nakamatsu, Kiyoshi; Okumura, Masahiko R.T.; Shibata, Toru; Kanamori, Shuichi; Hanaoka, Kouhei R.T.; Hosono, Makoto
2010-06-01
Purpose: Clinical applicability of a multiple-threshold method for [18F]fluoro-2-deoxyglucose (FDG) activity in radiation treatment planning was evaluated. Methods and Materials: A total of 32 patients who underwent positron emission and computed tomography (PET/CT) simulation were included; 18 patients had lung cancer, and 14 patients had pharyngeal cancer. For tumors of ≤2 cm, 2 to 5 cm, and >5 cm, thresholds were defined as 2.5 standardized uptake value (SUV), 35%, and 20% of the maximum FDG activity, respectively. The cervical and mediastinal lymph nodes with the shortest axial diameter of ≥10 mm were considered to be metastatic on CT (LNCT). The retropharyngeal lymph nodes with the shortest axial diameter of ≥5 mm on CT and MRI were also defined as metastatic. Lymph nodes showing maximum FDG activity greater than the adopted thresholds for radiation therapy planning were designated LNPET-RTP, and lymph nodes with a maximum FDG activity of ≥2.5 SUV were regarded as malignant and were designated LNPET-2.5 SUV. Results: The sizes of gross tumor volumes on PET (GTVPET) with the adopted thresholds in the axial plane were visually well fitted to those of GTV on CT (GTVCT). However, the volumes of GTVPET were larger than those of GTVCT, with significant differences (p < 0.0001) for lung cancer, due to respiratory motion. For lung cancer, the numbers of LNCT, LNPET-RTP, and LNPET-2.5 SUV were 29, 28, and 34, respectively. For pharyngeal cancer, the numbers of LNCT, LNPET-RTP, and LNPET-2.5 SUV were 14, 9, and 15, respectively. Conclusions: Our multiple thresholds were applicable for delineating the primary target on PET/CT simulation. However, these thresholds were inaccurate for depicting malignant lymph nodes.
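The size-dependent threshold rule stated in this abstract is easy to encode; a sketch (the function name is illustrative):

```python
def fdg_threshold(diameter_cm, suv_max):
    """Size-dependent FDG threshold from the abstract: an absolute
    2.5 SUV for tumors <= 2 cm, 35% of the maximum FDG activity for
    2-5 cm tumors, and 20% of the maximum for tumors > 5 cm."""
    if diameter_cm <= 2.0:
        return 2.5
    if diameter_cm <= 5.0:
        return 0.35 * suv_max
    return 0.20 * suv_max
```

Voxels with activity above the returned threshold would be included in the PET-based gross tumor volume.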
Parallel architectures for iterative methods on adaptive, block structured grids
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1983-01-01
A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.
An adaptive mesh refinement algorithm for the discrete ordinates method
Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.
1996-03-01
The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits the local grid refinement to minimize spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme, and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing the convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.
Analysis of modified SMI method for adaptive array weight control
NASA Technical Reports Server (NTRS)
Dilsavor, R. L.; Moses, R. L.
1989-01-01
An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
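The modification described here, subtracting a fraction F of the noise power from the diagonal of the estimated covariance before inversion, can be sketched as follows. Names and the snapshot model are illustrative, not from the paper.

```python
import numpy as np

def modified_smi_weights(snapshots, steering, noise_power, frac):
    """Modified SMI beamformer weights: subtract a fraction `frac` of
    the noise power from the diagonal of the sample covariance before
    inverting, which deepens suppression of weak interferers."""
    X = np.asarray(snapshots)               # (num_snapshots, num_elements)
    R = X.conj().T @ X / X.shape[0]         # sample covariance estimate
    R_mod = R - frac * noise_power * np.eye(R.shape[0])
    w = np.linalg.solve(R_mod, steering)    # w proportional to R_mod^{-1} d
    return w / (steering.conj() @ w)        # unit response toward target
```

With `frac = 0` this reduces to the standard SMI weights; as `frac` grows toward 1, the noise-subspace eigenvalues shrink and interference nulls deepen, at the cost of greater sensitivity to estimation error (the trade-off the abstract quantifies via the snapshot count).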
Speckle reduction in optical coherence tomography by adaptive total variation method
NASA Astrophysics Data System (ADS)
Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun
2015-12-01
An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
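A minimal gradient-descent sketch of total variation restoration. In the adaptive scheme described above, the regularization weight would be chosen from the measured speckle statistics (mean and variance); here we take it as given. Names and parameter values are illustrative.

```python
import numpy as np

def tv_denoise(img, lam, iters=200, dt=0.1, eps=1e-6):
    """Total variation restoration by gradient descent on
    lam*TV(u) + 0.5*||u - img||^2, with periodic boundaries."""
    u = img.astype(float).copy()
    for _ in range(iters):
        # Forward differences for the image gradient.
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)   # eps avoids division by zero
        # Divergence of the normalized gradient (curvature term).
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Descend: smooth via curvature, stay close to the data.
        u += dt * (lam * div - (u - img))
    return u
```

TV restoration smooths speckle in homogeneous regions while preserving edges, which is why the abstract reports gains in SNR and contrast-to-noise ratio over median filtering.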
An adaptation of Krylov subspace methods to path following
Walker, H.F.
1996-12-31
Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control
NASA Technical Reports Server (NTRS)
Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)
2015-01-01
Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.
Adapting Western Research Methods to Indigenous Ways of Knowing
Christopher, Suzanne
2013-01-01
Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid. PMID:23678897
Solving delay differential equations in S-ADAPT by method of steps.
Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech
2013-09-01
S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large-dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps. The method of steps allows one to solve virtually any DDE system by transforming it to an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on MATLAB-simulated population pharmacodynamics data. The S-ADAPT-generated solutions for DDE problems agreed with both the explicit solutions and the MATLAB-produced solutions to at least 7 significant digits. The population parameter estimates obtained using importance sampling expectation-maximization in S-ADAPT agreed with the ones used to generate the data. PMID:23810514
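The method of steps can be sketched with an ordinary ODE solver: on each interval of length tau, the delayed term is a known function of time, so the DDE reduces to an ODE. This is an illustrative SciPy sketch, not the S-ADAPT implementation; names are made up.

```python
import numpy as np
from scipy.integrate import solve_ivp

def dde_method_of_steps(f, history, tau, t_end, n_pts=201):
    """Solve x'(t) = f(t, x(t), x(t - tau)) by the method of steps:
    integrate interval by interval, with the previous interval's
    solution serving as the delayed-term lookup for the next one."""
    ts = [np.array([0.0])]
    xs = [np.array([history(0.0)])]
    delayed = history            # x(t - tau) lookup, initially the history
    t0, x0 = 0.0, history(0.0)
    while t0 < t_end:
        t1 = min(t0 + tau, t_end)
        grid = np.linspace(t0, t1, n_pts)
        sol = solve_ivp(lambda t, x: f(t, x, delayed(t - tau)),
                        (t0, t1), [x0], t_eval=grid, rtol=1e-9, atol=1e-12)
        ts.append(sol.t[1:])
        xs.append(sol.y[0][1:])
        # The interpolant of this interval is the delayed term next time.
        tt, yy = sol.t.copy(), sol.y[0].copy()
        delayed = lambda t, tt=tt, yy=yy: np.interp(t, tt, yy)
        t0, x0 = t1, sol.y[0][-1]
    return np.concatenate(ts), np.concatenate(xs)
```

For the scalar test equation x'(t) = -x(t - 1) with history x = 1 for t <= 0, the exact solution is x(t) = 1 - t on [0, 1] and x(t) = 1 - t + (t - 1)^2/2 on [1, 2], which the sketch reproduces.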
Automatic multirate methods for ordinary differential equations. [Adaptive time steps
Gear, C.W.
1980-01-01
A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.
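The core idea (different step sizes for different members of a system) can be illustrated with a two-rate forward Euler scheme: the slow component takes one macro step while the fast component is refined with several micro steps. This toy is not one of the three approaches analyzed in the report; the coupled test system and step sizes are assumptions.

```python
def multirate_euler(f_slow, f_fast, y_slow, y_fast, t_end, H, m):
    """Two-rate forward Euler: the slow variable advances with macro step H
    while the fast variable takes m micro steps of size H/m, with the slow
    value frozen over the macro step (the simplest coupling strategy)."""
    t, h = 0.0, H / m
    while t < t_end - 1e-12:
        y_slow_new = y_slow + H * f_slow(t, y_slow, y_fast)
        for j in range(m):                     # micro steps for the fast part
            y_fast = y_fast + h * f_fast(t + j * h, y_slow, y_fast)
        y_slow = y_slow_new
        t += H
    return y_slow, y_fast

# Stiffly coupled pair: the fast variable relaxes quickly toward the slow one,
# so only the fast equation needs the small step size.
slow, fast = multirate_euler(
    lambda t, s, f: -s,                  # slow dynamics, time scale ~1
    lambda t, s, f: -50.0 * (f - s),     # fast dynamics, time scale ~1/50
    1.0, 0.0, 1.0, H=0.02, m=10)
```

The saving is exactly the one discussed above: the expensive or stiff right-hand side is evaluated at the fine step only for the components that need it.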
NASA Astrophysics Data System (ADS)
Aver'ianov, N. E.; Baloshin, Iu. A.; Martiukhina, L. I.; Pavlishin, I. V.; Sud'Enkov, Iu. V.
1987-09-01
The amplitudes of the acoustic signals excited in metal reflectors by laser pulses are analyzed as a function of the energy density of target irradiation. It is shown that the slope of the resulting plot is related to the threshold of plasma generation near the specimen surface. Results are presented for the emission wavelengths of Nd-glass and CO2 lasers.
Adaptive error covariances estimation methods for ensemble Kalman filters
Zhen, Yicun; Harlim, John
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, for use in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computation of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When only products of innovation processes up to one lag are used, the computational cost is comparable to that of the method recently proposed by Berry and Sauer. However, our method is more flexible since it allows information from products of innovation processes of more than one lag to be used. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry-Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry-Sauer method on the L-96 example.
A massively parallel adaptive finite element method with dynamic load balancing
Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.
1993-12-31
The authors construct massively parallel adaptive finite element methods for the solution of hyperbolic conservation laws. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. They demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. They present results using adaptive p-refinement to reduce the computational cost of the method, and tiling, a dynamic, element-based data migration system that maintains global load balance of the adaptive method by overlapping neighborhoods of processors that each perform local balancing.
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2012-11-01
Formosat-2 imagery is a kind of high-spatial-resolution (2-meter GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistic of an image using the Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistic is subsequently recorded as an important metadata item for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are applied in sequence to determine the cloud statistic. For post-processing analysis, the box-counting fractal method is applied. In other words, the cloud statistic is first determined via pre-processing analysis, and its correctness across the spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments on clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, has successfully extracted the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
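Of the thresholding methods compared above, Otsu's method is the simplest to sketch: it selects the histogram level that maximizes the between-class variance. The following minimal version, applied to a synthetic bimodal "background vs. cloud" sample, is an illustration only; the synthetic data stand in for real Formosat-2 pixels.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's clustering-based threshold: pick the gray level that
    maximizes the between-class variance of the histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # class-0 probability up to each bin
    mu = np.cumsum(p * centers)          # cumulative mean
    mu_t = mu[-1]
    # between-class variance; empty classes give NaN and are ignored below
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[int(np.nanargmax(sigma_b))]

# Bimodal sample: dark background around 0.2, bright "cloud" pixels around 0.8.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.05, 5000),
                      rng.normal(0.8, 0.05, 1000)])
t = otsu_threshold(img)                  # lands between the two modes
```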
Weighted Structural Regression: A Broad Class of Adaptive Methods for Improving Linear Prediction.
ERIC Educational Resources Information Center
Pruzek, Robert M.; Lepak, Greg M.
1992-01-01
Adaptive forms of weighted structural regression are developed and discussed. Bootstrapping studies indicate that the new methods have potential to recover known population regression weights and predict criterion score values routinely better than do ordinary least squares methods. The new methods are scale free and simple to compute. (SLD)
An Adaptive Kalman Filter using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
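The residual-tuning idea can be illustrated on a scalar system: when the assumed measurement noise R is wrong, the sample variance of the innovations disagrees with the filter's predicted innovation variance S = P_pred + R, and a damped fixed-point update R <- mean(nu^2) - P_pred drives R toward the value that makes the residuals consistent. This is a hedged sketch of the general innovation-based approach, not the specific algorithm used for the WIRE star tracker and gyro; the model, the damping factor, and all parameter values are assumptions.

```python
import numpy as np

def tune_r(ys, a, q, r0, iters=12):
    """Innovation-based tuning of the measurement noise R for the scalar
    model  x[k+1] = a*x[k] + w,  y[k] = x[k] + v.  A mismodeled R makes the
    innovations inconsistent with the filter's predicted variance, so we
    iterate R <- mean(nu^2) - P_pred, whose fixed point is the true R."""
    r = r0
    for _ in range(iters):
        x, p, innov = 0.0, 1.0, []
        for y in ys:
            p_pred = a * a * p + q            # predicted error variance
            nu = y - a * x                    # innovation (residual)
            innov.append(nu)
            k = p_pred / (p_pred + r)         # gain under the current R guess
            x = a * x + k * nu
            p = (1.0 - k) * p_pred
        p_pred_ss = a * a * p + q             # (near) steady-state P_pred
        r_new = np.mean(np.square(innov)) - p_pred_ss
        r = 0.5 * r + 0.5 * max(r_new, 1e-6)  # damped update
    return r

# Simulate data with known noise levels, then recover R from a bad guess.
rng = np.random.default_rng(1)
a_true, q_true, r_true = 0.9, 0.1, 0.5
x, ys = 0.0, []
for _ in range(20000):
    x = a_true * x + rng.normal(0.0, q_true ** 0.5)
    ys.append(x + rng.normal(0.0, r_true ** 0.5))
r_est = tune_r(np.array(ys), a_true, q_true, r0=2.0)
```

As in the abstract, these update equations run alongside the filter itself and need no offline processing.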
Adapting and using quality management methods to improve health promotion.
Becker, Craig M; Glascoff, Mary A; Felts, William Michael; Kent, Christopher
2015-01-01
Although the western world is the most technologically advanced civilization to date, it is also the most addicted, obese, medicated, and in-debt adult population in history. Experts had predicted that the 21st century would be a time of better health and prosperity. Although wealth has increased, our quest to quell health problems using a pathogenic approach without understanding the interconnectedness of everyone and everything has damaged personal and planetary health. While current efforts help identify and eliminate causes of problems, they do not facilitate the creation of health and well-being as would be done with a salutogenic approach. Sociologist Aaron Antonovsky coined the term salutogenesis in 1979. It is derived from salus, which is Latin for health, and genesis, meaning to give birth. Salutogenesis, the study of the origins and creation of health, provides a method to identify an interconnected way to enhance well-being. Salutogenesis provides a framework for a method of practice to improve health promotion efforts. This article illustrates how quality management methods can be used to guide health promotion efforts focused on improving health beyond the absence of disease. PMID:25777291
Adaptive Discrete Equation Method for injection of stochastic cavitating flows
NASA Astrophysics Data System (ADS)
Geraci, Gianluca; Rodio, Maria Giovanna; Iaccarino, Gianluca; Abgrall, Remi; Congedo, Pietro
2014-11-01
This work aims at improving the prediction and control of biofuel injection for combustion. In fact, common injectors should be optimized according to the specific physical/chemical properties of biofuels. To this end, an optimized model reproducing the injection of several biofuel blends will be considered. The originality of this approach is twofold: i) the use of cavitating two-phase compressible models, of the Baer & Nunziato type, to reproduce the injection, and ii) the design of a global scheme that directly takes experimental measurement uncertainties into account in the simulation. In particular, stochastic intrusive methods display high efficiency when dealing with discontinuities in unsteady compressible flows. We have recently formulated a new scheme for simulating stochastic multiphase flows relying on the Discrete Equation Method (DEM) for describing multiphase effects. The set-up of the intrusive stochastic method for multiphase unsteady compressible flows in a quasi-1D configuration will be presented. The target test case is a multiphase unsteady nozzle for injection of biofuels, described by complex thermodynamic models, for which experimental data and associated uncertainties are available.
The Pilates method and cardiorespiratory adaptation to training.
Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen
2016-01-01
Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis has been performed of its cardiorespiratory effects. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (135.4-124.2 beats/min), respiratory exchange ratio (1.1-0.9) and oxygen equivalent (30.7-27.6) values, among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities. PMID:27357919
NASA Astrophysics Data System (ADS)
Yenn Chong, See; Lee, Jung-Ryul; Yik Park, Chan
2013-03-01
The conventional threshold-crossing technique generally encounters difficulty in setting a common threshold level for extracting the respective time-of-flights (ToFs) and amplitudes from guided waves obtained at many different points by spatial scanning. Therefore, we propose a statistical threshold determination method based on noise map generation to automatically process numerous guided waves having different propagation distances. First, a two-dimensional (2-D) noise map is generated using one-dimensional (1-D) WT magnitudes at time zero of the acquired waves. Then, the probability density functions (PDFs) of the Gamma, Weibull, and exponential distributions are used to model the measured 2-D noise map. Graphical goodness-of-fit measurements are used to find the best fit among the three theoretical distributions. The threshold level is then automatically determined by selecting the desired confidence level of noise rejection in the cumulative distribution function of the best-fit PDF. Based on this threshold level, the amplitudes and ToFs are extracted and mapped into a 2-D matrix array form. The threshold level determined by the noise statistics may cross the noise signal after time zero. These crossings appear as salt-and-pepper noise in the ToF and amplitude maps but are finally removed by a 1-D median filter. The proposed method was verified on a thick stainless steel hollow cylinder, where guided waves were acquired over an area of 180 mm×126 mm of the cylinder using a laser ultrasonic scanning system and an ultrasonic sensor. The Gamma distribution was estimated as the best fit to the verification experimental data by the proposed algorithm. The statistical parameters of the Gamma distribution were used to determine a threshold level appropriate for most of the guided waves. The ToFs and amplitudes of the first arrival mode were mapped into a 2-D matrix array form. Each map included 447 noisy points out of 90
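The statistical threshold determination can be sketched end-to-end: fit a candidate PDF to the noise map, set the threshold at a chosen confidence level of noise rejection via the fitted CDF, and clean the resulting salt-and-pepper outliers with a 1-D median filter. The sketch below uses a synthetic Gamma-distributed noise map and assumed parameter values; it illustrates the pipeline, not the authors' code.

```python
import numpy as np
from scipy import stats
from scipy.ndimage import median_filter

# Synthetic 2-D noise map standing in for the WT magnitudes at time zero.
rng = np.random.default_rng(2)
noise_map = rng.gamma(shape=2.0, scale=0.3, size=(90, 140))

# Fit a Gamma PDF to the noise and take the threshold at the 99.9%
# confidence level of noise rejection from the fitted CDF.
shape, loc, scale = stats.gamma.fit(noise_map.ravel(), floc=0.0)
threshold = stats.gamma.ppf(0.999, shape, loc=loc, scale=scale)

# Points above the threshold are kept; isolated salt-and-pepper crossings
# are then suppressed with a 1-D median filter along each scan line.
detections = noise_map > threshold
cleaned = median_filter(detections.astype(float), size=(1, 3))
```

In practice the goodness-of-fit comparison described above (Gamma vs. Weibull vs. exponential) would precede the fit.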
The Limits to Adaptation; A Systems Approach
The Limits to Adaptation: A Systems Approach. The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering...
Error estimation and adaptive order nodal method for solving multidimensional transport problems
Zamonsky, O.M.; Gho, C.J.; Azmy, Y.Y.
1998-01-01
The authors propose a modification of the Arbitrarily High Order Transport Nodal method whereby each node and each direction is solved using a different expansion order. With this feature and a previously proposed a posteriori error estimator, they develop an adaptive order scheme to automatically improve the accuracy of the solution of the transport equation. They implemented the modified nodal method, the error estimator, and the adaptive order scheme in a discrete-ordinates code for solving monoenergetic, fixed-source, isotropic-scattering problems in two-dimensional Cartesian geometry. They solved two test problems with large homogeneous regions to test the adaptive order scheme. The results show that the adaptive process reduces storage requirements while preserving the accuracy of the results.
An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
1999-01-01
An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.
A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures
Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George
2012-01-01
We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Harrison, Edwin F.; Gibson, Gary G.
1987-01-01
A set of visible and IR data obtained with GOES from July 17-31, 1983 is analyzed using a modified version of the hybrid bispectral threshold method developed by Minnis and Harrison (1984). This methodology can be divided into a set of procedures or optional techniques to determine the proper cloud-contaminated clear-sky temperature or IR threshold. The optional techniques are: standard, low-temperature limit, high-reflectance limit, low-reflectance limit, coldest pixel and thermal adjustment limit, IR-only low-cloud temperature limit, IR clear-sky limit, and IR overcast limit. Variations in the cloud parameters and the characteristics and diurnal cycles of trade cumulus and stratocumulus clouds over the eastern equatorial Pacific are examined. It is noted that the new method produces substantial changes in about one third of the cloud amount retrievals, and low cloud retrievals are affected most by the new constraints.
Impedance adaptation methods of the piezoelectric energy harvesting
NASA Astrophysics Data System (ADS)
Kim, Hyeoungwoo
In this study, the important issues of energy recovery were addressed and a comprehensive investigation was performed on harvesting electrical power from an ambient mechanical vibration source. Also discussed are the impedance matching methods used to increase the efficiency of energy transfer from the environment to the application. Initially, the mechanical impedance matching method was investigated to increase the mechanical energy transferred to the transducer from the environment. This was done by reducing the mechanical impedance, such as the damping factor and energy reflection ratio. The vibration source and the transducer were modeled by a two-degree-of-freedom dynamic system with mass, spring constant, and damper. The transmissibility, employed to show how much mechanical energy was transferred in this system, was affected by the damping ratio and the stiffness of the elastic materials. The mechanical impedance of the system was described by an analogous electrical system in order to simplify the total mechanical impedance. Secondly, the transduction rate from mechanical to electrical energy was improved by using a PZT material which has a high figure of merit and a high electromechanical coupling factor for electrical power generation, and a piezoelectric transducer with a high transduction rate was designed and fabricated. A high-g material (g33 = 40 [10-3 Vm/N]) was developed to improve the figure of merit of the PZT ceramics. The cymbal composite transducer has been found to be a promising structure for piezoelectric energy harvesting under high force at cyclic conditions (10--200 Hz), because it has an effective strain coefficient almost 40 times higher than that of PZT ceramics. The endcap of the cymbal also enhances the endurance of the ceramic to sustain ac load, along with stress amplification. In addition, a macro fiber composite (MFC) was employed as a strain component because of its flexibility and high electromechanical coupling
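The role of damping and stiffness in energy transfer can be illustrated with the classical single-degree-of-freedom transmissibility formula (a simplification of the two-degree-of-freedom model described above; the formula is standard, while the parameter values used below are assumptions):

```python
import numpy as np

def transmissibility(r, zeta):
    """Classical 1-DOF transmissibility |X/Y| for frequency ratio
    r = omega/omega_n and damping ratio zeta: the fraction of base motion
    transmitted to the mass, governing how much vibration energy reaches
    the transducer."""
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)
```

At resonance (r = 1) transmission is amplified roughly as 1/(2*zeta), while for r > sqrt(2) the transmitted motion is attenuated regardless of damping; this trade-off is what mechanical impedance matching tunes.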
A self-adaptive-grid method with application to airfoil flow
NASA Technical Reports Server (NTRS)
Nakahashi, K.; Deiwert, G. S.
1985-01-01
A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
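The spring-analogy redistribution can be sketched in one dimension: each interior node relaxes to the equilibrium of springs whose stiffness follows a weight (error) function, so points cluster where the weight is large. This is a hedged toy version, not the paper's variational formulation; the weight function, relaxation factor, and iteration count are assumptions.

```python
import numpy as np

def redistribute(x, weight, iters=800, relax=0.5):
    """1-D spring-analogy grid redistribution.  Springs joining neighboring
    nodes get their stiffness from `weight` evaluated on each interval; at
    equilibrium a node sits at the stiffness-weighted mean of its neighbors,
    which shrinks the spacing where the weight (solution error) is large.
    Endpoints stay fixed."""
    x = x.copy()
    for _ in range(iters):
        w = 0.5 * (weight(x[:-1]) + weight(x[1:]))          # stiffness per interval
        target = (w[:-1] * x[:-2] + w[1:] * x[2:]) / (w[:-1] + w[1:])
        x[1:-1] = (1.0 - relax) * x[1:-1] + relax * target  # damped Jacobi sweep
    return x

# Cluster points around a steep feature at x = 0.5 (e.g. a shock location).
x0 = np.linspace(0.0, 1.0, 21)
xa = redistribute(x0, lambda x: 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2))
```

The user-specified maximum and minimum permissible spacings mentioned above would enter such a scheme as bounds on the resulting intervals.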
NASA Astrophysics Data System (ADS)
Cai, Xiaochun; Hu, Yihua; Wang, Peng; Sun, Dujuan; Hu, Guilan
2009-10-01
The paper presents an adaptive segmentation and activity classification method for filamentous fungi images. Firstly, an adaptive structuring element (SE) construction algorithm is proposed for image background suppression. Based on the watershed transform, color-labeled segmentation of the fungi image is performed. Secondly, the fungi element feature space is described and the feature set for fungal hyphae activity classification is extracted. The growth rate of the fungal hyphae is evaluated using an SVM classifier. Experimental results demonstrate that the proposed method is effective for filamentous fungi image processing.
Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D
2012-10-01
Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.
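The hierarchical-surplus adaptation at the heart of such methods can be illustrated in one dimension: a new point is added only where the surplus (the gap between the function and the current interpolant at a candidate midpoint) exceeds a tolerance, so points concentrate at steep gradients. This toy uses piecewise-linear hats rather than the Riesz wavelet bases of the MdMrA method, and all names and values are assumptions.

```python
import numpy as np

def adaptive_interp(f, a, b, tol, max_depth=20):
    """Surplus-driven adaptive interpolation: bisect an interval only when
    the hierarchical surplus at its midpoint exceeds tol."""
    pts = {a: f(a), b: f(b)}
    active = [(a, b, 0)]
    while active:
        lo, hi, depth = active.pop()
        mid = 0.5 * (lo + hi)
        surplus = f(mid) - 0.5 * (pts[lo] + pts[hi])  # gap to current interpolant
        if abs(surplus) > tol and depth < max_depth:
            pts[mid] = f(mid)
            active += [(lo, mid, depth + 1), (mid, hi, depth + 1)]
    xs = np.array(sorted(pts))
    return xs, np.array([pts[v] for v in xs])

# Steep tanh front at x = 0.1 (deliberately off-center: a perfectly symmetric
# function would give zero surplus at the first midpoint and stall refinement).
xs, ys = adaptive_interp(lambda x: np.tanh(50.0 * (x - 0.1)), -1.0, 1.0, tol=1e-3)
```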
Anderson, R W; Pember, R B; Elliott, N S
2001-10-22
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. By focusing computational resources where they are required through dynamic adaptation, this method facilitates the solution of problems currently at and beyond the limits of what traditional ALE methods can solve. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.
Adaptation of the TCLP and SW-846 methods to radioactive mixed waste
Griest, W.H.; Schenley, R.L.; Caton, J.E.; Wolfe, P.F.
1994-07-01
Modifications of conventional sample preparation and analytical methods are necessary to provide radiation protection and to meet sensitivity requirements for regulated constituents when working with radioactive samples. Adaptations of regulatory methods for determining "total" Toxicity Characteristic Leaching Procedure (TCLP) volatile and semivolatile organics and pesticides, and for conducting aqueous leaching, are presented.
ERIC Educational Resources Information Center
Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy
2015-01-01
This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…
NASA Astrophysics Data System (ADS)
Fubelli, Giandomenico
2014-05-01
The assessment of landslide-triggering rainfall thresholds is a useful technique to predict the occurrence of such phenomena and provide public authorities with the values of the critical rainfall over which it is appropriate to consider a state of alert. In this perspective, I investigated the urban area of San Vito Romano, a village of about 3500 inhabitants located in the Aniene River basin, about 50 km east of Rome, and heavily affected by landslides. This area extends over a calcarenitic-marly-arenaceous bedrock of Tortonian age, arranged in a monocline structure dipping 10-15 degrees eastward, in parallel with the slope angle. Part of the village overlies a 500-m-wide translational rock slide that has caused damage to many buildings during the last decades. Boreholes drilled in the landslide area, some of them equipped with piezometers and inclinometers, have provided detailed information on the underlying bedrock (silico-clastic deposits of the Frosinone Formation of upper Tortonian age) and the covering near-surface materials. In particular, borehole data showed the existence of three different sliding surfaces located at different depths (6, 12 and 24 meters). In order to establish a relationship between landslide events and the triggering rainfall amounts, I carried out an inventory of all the slope movements that affected the study area in the last few decades on the basis of field survey, stratigraphic analysis, archive research and piezometric/inclinometric data. Then, I calculated and mapped the cumulative rainfall amounts within 3 days, 10 days, 1 month and 3 months before each landslide occurrence. By comparing the landslide distribution with the rainfall maps, I calculated the rainfall thresholds for each event, also considering the depth of the related sliding surface. In this context, I observed that a 3-day pre-event precipitation of 100 mm mobilized the shallow material overlying the upper sliding surface only with at least 170 mm of rain in the
An adaptive, formally second order accurate version of the immersed boundary method
NASA Astrophysics Data System (ADS)
Griffith, Boyce E.; Hornung, Richard D.; McQueen, David M.; Peskin, Charles S.
2007-04-01
Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509-534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75-105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves
An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations
NASA Astrophysics Data System (ADS)
Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.
2016-08-01
In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.
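The refinement criterion described above (selecting candidate elements from the locally largest density gradient) can be sketched as follows; the refine/coarsen cut-offs as fractions of the maximum gradient and the tanh interface profile are assumptions, not the paper's settings.

```python
import numpy as np

def flag_elements(x_centers, rho, refine_frac=0.5, coarsen_frac=0.05):
    """Gradient-based mesh-adaptation flags: elements whose density gradient
    is a large fraction of the maximum are refined; elements where it is
    negligible become candidates for coarsening."""
    grad = np.abs(np.gradient(rho, x_centers))
    gmax = grad.max()
    return grad > refine_frac * gmax, grad < coarsen_frac * gmax

# Liquid-vapor style density profile with a thin interface at x = 0.5.
x = np.linspace(0.0, 1.0, 100)
rho = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.02))
refine, coarsen = flag_elements(x, rho)
```

A real implementation would typically also enforce grading, so that element sizes do not jump abruptly between neighbors.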
NASA Astrophysics Data System (ADS)
Moore, F.; Burke, M.
2015-12-01
A wide range of studies using a variety of methods strongly suggest that climate change will have a negative impact on agricultural production in many areas. Farmers, though, should be able to learn about a changing climate and to adjust what they grow and how they grow it in order to reduce these negative impacts. However, it remains unclear how effective these private (autonomous) adaptations will be, or how quickly they will be adopted. Constraining the uncertainty on this adaptation is important for understanding the impacts of climate change on agriculture. Here we review a number of empirical methods that have been proposed for understanding the rate and effectiveness of private adaptation to climate change. We compare these methods using data on agricultural yields in the United States and western Europe.
The adaptive problems of female teenage refugees and their behavioral adjustment methods for coping
Mhaidat, Fatin
2016-01-01
This study aimed at identifying the levels of adaptive problems among teenage female refugees in the government schools and explored the behavioral methods that were used to cope with the problems. The sample was composed of 220 Syrian female students (seventh to first secondary grades) enrolled at government schools within the Zarqa Directorate and who came to Jordan due to the war conditions in their home country. The study used the scale of adaptive problems that consists of four dimensions (depression, anger and hostility, low self-esteem, and feeling insecure) and a questionnaire of the behavioral adjustment methods for dealing with the problem of asylum. The results indicated that the Syrian teenage female refugees suffer a moderate degree of adaptation problems and that they used positive adjustment methods more often than negative ones. PMID:27175098
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.
A new adaptive exponential smoothing method for non-stationary time series with level shifts
NASA Astrophysics Data System (ADS)
Monfared, Mohammad Ali Saniee; Ghandali, Razieh; Esmaeili, Maryam
2014-07-01
Simple exponential smoothing (SES) methods are the most commonly used methods in forecasting and time series analysis. However, they are generally insensitive to non-stationary structural events such as level shifts, ramp shifts, and spikes or impulses. As with outliers in stationary time series, these non-stationary events will lead to an increased level of error in the forecasting process. This paper generalizes the SES method into a new adaptive method called revised simple exponential smoothing (RSES), as an alternative method to recognize non-stationary level shifts in the time series. We show that the new method improves the accuracy of the forecasting process. This is done by controlling the number of observations and the smoothing parameter in an adaptive approach, in accordance with the laws of statistical control limits and the Bayes rule of conditioning. We use a numerical example to show how the new RSES method outperforms its traditional counterpart, SES.
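The adaptive reset idea can be sketched in code. Below is a minimal Python sketch, assuming a simple control-band rule: the window size, the 3-sigma limit, and the restart-at-the-new-observation rule are illustrative assumptions, not the authors' exact RSES formulation.

```python
def ses_adaptive(series, alpha=0.3, k=3.0, window=10):
    """Simple exponential smoothing with a crude level-shift reset.

    When the one-step error leaves a +/- k*sigma control band (mean and
    sigma estimated from the last `window` errors), the level is
    re-initialized at the new observation, mimicking the idea of
    reacting to a non-stationary level shift instead of smoothing it.
    """
    level = series[0]
    errors = []
    forecasts = [level]
    for y in series[1:]:
        err = y - level
        if len(errors) >= window:
            recent = errors[-window:]
            mu = sum(recent) / window
            sigma = (sum((e - mu) ** 2 for e in recent) / window) ** 0.5
            if sigma > 0 and abs(err - mu) > k * sigma:
                level = y            # suspected level shift: restart the level
                errors.clear()       # and the control-band statistics
                forecasts.append(level)
                continue
        errors.append(err)
        level = level + alpha * err  # standard SES update
        forecasts.append(level)
    return forecasts
```

On a series with an abrupt shift, the reset lets the forecast jump to the new level instead of approaching it geometrically as plain SES does.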
Binarization and multi-thresholding of document images using connectivity
O'Gorman, L.
1994-12-31
Thresholding is a common image processing operation applied to gray-scale images to obtain binary or multi-level images. A thresholding method is described here that is global in approach, but uses a measure of local information, namely connectivity. Thresholds are found at the intensity levels that best preserve the connectivity of regions within the image. Thus, this method has advantages of both global and locally adaptive approaches. Experimental comparisons for document images show that the connectivity-preserving method improves subsequent OCR recognition rates from about 95% to 97.5% and reduces the number of binarization failures (where text is so poorly binarized as to be totally unrecognizable by a commercial OCR system) from 33% to 6% on difficult images.
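The connectivity-preserving idea can be illustrated with a small sketch. Here "best preserves connectivity" is operationalized as the longest run of intensity levels over which the foreground component count stays constant; this particular criterion is an assumption for illustration, not O'Gorman's exact measure.

```python
from collections import deque

def count_components(binary):
    """4-connected foreground component count for a 2-D list of 0/1 values."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                count += 1
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                         # breadth-first flood fill
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

def connectivity_threshold(image, levels=range(1, 255)):
    """Pick the threshold in the middle of the longest run of intensity
    levels over which the number of foreground components is constant,
    i.e. where region connectivity is most stable."""
    counts = []
    for t in levels:
        binary = [[1 if v >= t else 0 for v in row] for row in image]
        counts.append(count_components(binary))
    best_start, best_len, start = 0, 0, 0
    for i in range(1, len(counts) + 1):          # find longest constant run
        if i == len(counts) or counts[i] != counts[start]:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = i
    return list(levels)[best_start + best_len // 2]
```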
Software for the parallel adaptive solution of conservation laws by discontinuous Galerkin methods.
Flaherty, J. E.; Loy, R. M.; Shephard, M. S.; Teresco, J. D.
1999-08-17
The authors develop software tools for the solution of conservation laws using parallel adaptive discontinuous Galerkin methods. In particular, the Rensselaer Partition Model (RPM) provides parallel mesh structures within an adaptive framework to solve the Euler equations of compressible flow by a discontinuous Galerkin method (LOCO). Results are presented for a Rayleigh-Taylor flow instability for computations performed on 128 processors of an IBM SP computer. In addition to managing the distributed data and maintaining a load balance, RPM provides information about the parallel environment that can be used to tailor partitions to a specific computational environment.
A comparison of locally adaptive multigrid methods: LDC, FAC and FIC
NASA Technical Reports Server (NTRS)
Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul
1993-01-01
This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.
Lei, Xusheng; Li, Jingjing
2012-01-01
This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. Focusing on the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high frequency noises in the sensor output. Furthermore, to improve the altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate the measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is demonstrated by static tests, hovering flight tests and autonomous landing flight tests. PMID:23201993
Adaptive spatial carrier frequency method for fast monitoring optical properties of fibres
NASA Astrophysics Data System (ADS)
Sokkar, T. Z. N.; El-Farahaty, K. A.; El-Bakary, M. A.; Omar, E. Z.; Agour, M.; Hamza, A. A.
2016-05-01
We present an extension of the adaptive spatial carrier frequency method, proposed for fast measurement of the optical properties of fibrous materials. The method consists of two complementary steps. In the first step, the support of the adaptive filter is defined. In the second step, the angle between the sample under test and the interference fringe system generated by the utilized interferometer is determined, and the support of the optical filter associated with the implementation of the adaptive spatial carrier frequency method is rotated accordingly. The method is experimentally verified by measuring the optical properties of a polypropylene (PP) fibre with the help of a Mach-Zehnder interferometer. The results show that errors resulting from rotating the fibre with respect to the interference fringes of the interferometer are reduced compared with the traditional band-pass filter method. This conclusion was drawn by comparing results for the mean refractive index of drawn PP fibre in the parallel polarization direction obtained from the new adaptive spatial carrier frequency method.
Threshold selection for regional peaks-over-threshold data
NASA Astrophysics Data System (ADS)
Roth, Martin; Jongbloed, Geurt; Buishand, T. Adri
2016-04-01
A hurdle in the peaks-over-threshold approach for analyzing extreme values is the selection of the threshold. A method is developed to reduce this obstacle in the presence of multiple, similar data samples. This is for instance the case in many environmental applications. The idea is to combine threshold selection methods into a regional method. Regionalized versions of the threshold stability and the mean excess plot are presented as graphical tools for threshold selection. Moreover, quantitative approaches based on the bootstrap distribution of the spatially averaged Kolmogorov-Smirnov and Anderson-Darling test statistics are introduced. It is demonstrated that the proposed regional method leads to an increased sensitivity for too low thresholds, compared to methods that do not take into account the regional information. The approach can be used for a wide range of univariate threshold selection methods. We test the methods using simulated data and present an application to rainfall data from the Dutch water board Vallei en Veluwe.
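The regionalized mean excess plot, one of the graphical tools mentioned above, can be sketched by averaging the site-wise empirical mean excess functions; the function names and the simple handling of thresholds with no exceedances are illustrative assumptions.

```python
def mean_excess(sample, threshold):
    """Empirical mean excess: average exceedance above the threshold."""
    excesses = [x - threshold for x in sample if x > threshold]
    return sum(excesses) / len(excesses) if excesses else float("nan")

def regional_mean_excess(samples, thresholds):
    """Average the site-wise mean excess functions over all sites,
    giving the regionalized mean excess curve (a sketch of the idea,
    not the paper's implementation)."""
    curve = []
    for u in thresholds:
        vals = [mean_excess(s, u) for s in samples]
        vals = [v for v in vals if v == v]      # drop NaN (no exceedances)
        curve.append(sum(vals) / len(vals) if vals else float("nan"))
    return curve
```

For exponentially distributed data the mean excess function is flat, so a roughly constant regional curve above some threshold u supports choosing that u.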
McClarren, Ryan G.; Urbatsch, Todd J.
2009-09-01
In this paper we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as Δt → ∞ as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method, basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method can avoid the nonphysical overheating that occurs in standard IMC when the time step is large. The method also leads to decreased noise in the material temperature at the cost of a potential increase in the radiation temperature noise.
Yoshikawa, Takako; Morigami, Makoto; Sadr, Alireza; Tagami, Junji
2014-01-01
This study aimed to evaluate the effects of the light curing method and resin composite composition on marginal sealing and resin composite adaptation to the cavity wall. Cylindrical cavities were prepared on the buccal or lingual cervical regions. The teeth were restored using the Clearfil Liner Bond 2V adhesive system and filled with Clearfil Photo Bright or Palfique Estelite resin composite. The resins were cured using the conventional or slow-start light curing method. After thermal cycling, the specimens were subjected to a dye penetration test. The slow-start curing method showed better resin composite adaptation to the cavity wall for both composites. Furthermore, the slow-start curing method resulted in significantly improved dentin marginal sealing compared with the conventional method for Clearfil Photo Bright. The light-cured resin composite, which exhibited increased contrast ratios during polymerization, seems to offer high compensation for polymerization contraction stress when using the slow-start curing method. PMID:24988883
A noise adaptive fuzzy equalization method for processing solar extreme ultraviolet images
Druckmueller, M.
2013-08-15
A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.
A density-based adaptive quantum mechanical/molecular mechanical method.
Waller, Mark P; Kumbhar, Sadhana; Yang, Jack
2014-10-20
We present a density-based adaptive quantum mechanical/molecular mechanical (DBA-QM/MM) method, whereby molecules can switch layers from the QM to the MM region and vice versa. The adaptive partitioning of the molecular system ensures that the layer assignment can change during the optimization procedure, that is, on the fly. The switch from a QM molecule to a MM molecule is determined if there is an absence of noncovalent interactions to any atom of the QM core region. The presence/absence of noncovalent interactions is determined by analysis of the reduced density gradient. Therefore, the location of the QM/MM boundary is based on physical arguments, and this neatly removes some empiricism inherent in previous adaptive QM/MM partitioning schemes. The DBA-QM/MM method is validated by using a water-in-water setup and an explicitly solvated L-alanyl-L-alanine dipeptide. PMID:24954803
A GPU-accelerated adaptive discontinuous Galerkin method for level set equation
NASA Astrophysics Data System (ADS)
Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.
2016-01-01
This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. Small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass conservative numerical scheme that preserves the simplicity of LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.
Method and system for training dynamic nonlinear adaptive filters which have embedded memory
NASA Technical Reports Server (NTRS)
Rabinowitz, Matthew (Inventor)
2002-01-01
Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
An adaptive mesh finite volume method for the Euler equations of gas dynamics
NASA Astrophysics Data System (ADS)
Mungkasi, Sudi
2016-06-01
The Euler equations have been used to model gas dynamics for decades. They consist of mathematical equations for the conservation of mass, momentum, and energy of the gas. At large times, the solution may contain discontinuities, even when the initial condition is smooth. A standard finite volume numerical method is not able to give accurate solutions to the Euler equations around discontinuities. Therefore we solve the Euler equations using an adaptive mesh finite volume method. In this paper, we present a new construction of the adaptive mesh finite volume method with an efficient computation of the refinement indicator. The adaptive method takes action automatically at places where the solution is inaccurate. Inaccurate solutions are reconstructed to reduce the error by refining the mesh locally up to a certain level. On the other hand, if the solution is already accurate, then the mesh is coarsened up to another certain level to minimize computational effort. We implement the numerical entropy production as the mesh refinement indicator. As a test problem, we take the Sod shock tube problem. Numerical results show that the adaptive method is more promising than the standard one in solving the Euler equations of gas dynamics.
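The refine/coarsen cycle described above can be sketched for a 1-D mesh. This is a schematic with a generic error indicator and first-order prolongation, not the paper's numerical-entropy-production implementation.

```python
def adapt_mesh(cells, values, indicator, refine_tol, coarsen_tol,
               max_level=4):
    """One adaptation pass over a 1-D finite volume mesh.

    cells     : list of (left_edge, right_edge, level) tuples
    values    : cell-average solution values (same length as cells)
    indicator : function(values, i) -> non-negative error estimate
    A cell is split when its indicator exceeds refine_tol (up to
    max_level); two neighbouring cells at the same level are merged
    when both indicators fall below coarsen_tol.
    """
    new_cells, new_vals = [], []
    i = 0
    while i < len(cells):
        (xl, xr, lev), v = cells[i], values[i]
        est = indicator(values, i)
        if est > refine_tol and lev < max_level:
            xm = 0.5 * (xl + xr)                  # refine: split the cell
            new_cells += [(xl, xm, lev + 1), (xm, xr, lev + 1)]
            new_vals += [v, v]                    # first-order prolongation
        elif (est < coarsen_tol and lev > 0 and i + 1 < len(cells)
              and cells[i + 1][2] == lev
              and indicator(values, i + 1) < coarsen_tol):
            new_cells.append((xl, cells[i + 1][1], lev - 1))  # coarsen: merge
            new_vals.append(0.5 * (v + values[i + 1]))
            i += 2
            continue
        else:
            new_cells.append((xl, xr, lev))       # leave the cell alone
            new_vals.append(v)
        i += 1
    return new_cells, new_vals
```

With a jump-based indicator on a Sod-like step profile, only cells straddling the discontinuity get split, which is the intended behaviour of a local refinement indicator.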
A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.
2015-06-24
This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations
Anderson, R. W.; Elliott, N. S.; Pember, R. B.
2003-02-14
A new method that combines staggered grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the methods are driven by the need to reconcile traditional AMR techniques with the staggered variables and moving, deforming meshes associated with Lagrange based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions first in the case of purely Lagrangian hydrodynamics, and then extend these ideas into an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test problem calculations are presented which demonstrate the utility and efficiency of the method.
Applications of automatic mesh generation and adaptive methods in computational medicine
Schmidt, J.A.; Macleod, R.S.; Johnson, C.R.; Eason, J.C.
1995-12-31
Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state of the art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications we present a general purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.
Development and evaluation of a method of calibrating medical displays based on fixed adaptation
Sund, Patrik; Månsson, Lars Gunnar; Båth, Magnus
2015-04-15
Purpose: The purpose of this work was to develop and evaluate a new method for calibration of medical displays that includes the effect of fixed adaptation and by using equipment and luminance levels typical for a modern radiology department. Methods: Low contrast sinusoidal test patterns were derived at nine luminance levels from 2 to 600 cd/m² and used in a two alternative forced choice observer study, where the adaptation level was fixed at the logarithmic average of 35 cd/m². The contrast sensitivity at each luminance level was derived by establishing a linear relationship between the ten pattern contrast levels used at every luminance level and a detectability index (d′) calculated from the fraction of correct responses. A Gaussian function was fitted to the data and normalized to the adaptation level. The corresponding equation was used in a display calibration method that included the grayscale standard display function (GSDF) but compensated for fixed adaptation. In the evaluation study, the contrast of circular objects with a fixed pixel contrast was displayed using both calibration methods and was rated on a five-grade scale. Results were calculated using a visual grading characteristics method. Error estimations in both observer studies were derived using a bootstrap method. Results: The contrast sensitivities for the darkest and brightest patterns compared to the contrast sensitivity at the adaptation luminance were 37% and 56%, respectively. The obtained Gaussian fit corresponded well with similar studies. The evaluation study showed a higher degree of equally distributed contrast throughout the luminance range with the calibration method compensated for fixed adaptation than for the GSDF. The two lowest scores for the GSDF were obtained for the darkest and brightest patterns. These scores were significantly lower than the lowest score obtained for the compensated GSDF. For the GSDF, the scores for all luminance levels were statistically
Adaptive non-local means method for speckle reduction in ultrasound images
NASA Astrophysics Data System (ADS)
Ai, Ling; Ding, Mingyue; Zhang, Xuming
2016-03-01
Noise removal is a crucial step to enhance the quality of ultrasound images. However, some existing despeckling methods cannot ensure satisfactory restoration performance. In this paper, an adaptive non-local means (ANLM) filter is proposed for speckle noise reduction in ultrasound images. The distinctive property of the proposed method lies in that the decay parameter will not take the fixed value for the whole image but adapt itself to the variation of the local features in the ultrasound images. In the proposed method, the pre-filtered image will be obtained using the traditional NLM method. Based on the pre-filtered result, the local gradient will be computed and it will be utilized to determine the decay parameter adaptively for each image pixel. The final restored image will be produced by the ANLM method using the obtained decay parameters. Simulations on the synthetic image show that the proposed method can deliver sufficient speckle reduction while preserving image details very well and it outperforms the state-of-the-art despeckling filters in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Experiments on the clinical ultrasound image further demonstrate the practicality and advantage of the proposed method over the compared filtering methods.
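A sketch of the adaptive-decay idea follows, with an assumed linear mapping from local gradient magnitude to the decay parameter. The paper derives the parameter from an NLM pre-filtered image; here a simple central-difference gradient of the input stands in, and the patch/search sizes and h range are illustrative.

```python
import math

def gradient_h_map(img, h_min=5.0, h_max=20.0):
    """Assumed mapping: smaller decay parameter where the local gradient
    is large (to preserve edges), larger in flat regions."""
    rows, cols = len(img), len(img[0])
    gmax = 1e-9
    grads = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            gy = img[min(y + 1, rows - 1)][x] - img[max(y - 1, 0)][x]
            gx = img[y][min(x + 1, cols - 1)] - img[y][max(x - 1, 0)]
            grads[y][x] = (gx * gx + gy * gy) ** 0.5
            gmax = max(gmax, grads[y][x])
    return [[h_max - (h_max - h_min) * g / gmax for g in row] for row in grads]

def nlm_adaptive(img, h_map, patch=1, search=3):
    """Non-local means with a per-pixel decay parameter h_map[y][x]."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]

    def patch_dist(y0, x0, y1, x1):
        d = n = 0
        for dy in range(-patch, patch + 1):
            for dx in range(-patch, patch + 1):
                a, b, c, e = y0 + dy, x0 + dx, y1 + dy, x1 + dx
                if 0 <= a < rows and 0 <= b < cols and 0 <= c < rows and 0 <= e < cols:
                    d += (img[a][b] - img[c][e]) ** 2
                    n += 1
        return d / max(n, 1)

    for y in range(rows):
        for x in range(cols):
            h2 = h_map[y][x] ** 2              # adaptive decay parameter
            wsum = vsum = 0.0
            for ny in range(max(0, y - search), min(rows, y + search + 1)):
                for nx in range(max(0, x - search), min(cols, x + search + 1)):
                    w = math.exp(-patch_dist(y, x, ny, nx) / h2)
                    wsum += w
                    vsum += w * img[ny][nx]
            out[y][x] = vsum / wsum
    return out
```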
NASA Astrophysics Data System (ADS)
Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan
2015-10-01
The propagation simulation method and the choice of mesh grid are both very important for obtaining correct results in wave-optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method but is freely alterable. However, the choice of mesh grid on the target plane directly influences the validity of the simulation results. An adaptive mesh-choosing method based on wave characteristics is therefore proposed together with the introduced propagation method, so that appropriate mesh grids on the target plane can be calculated to obtain satisfactory results. For a complex initial wave field, or for propagation through inhomogeneous media, the mesh grid can likewise be calculated and set rationally according to this method. Comparison with theoretical results shows that simulations using the proposed method agree with theory. Moreover, comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel numbers; that is, it can simulate propagation correctly and efficiently for distances from almost zero to infinity. It can therefore provide better support for wave-propagation applications such as atmospheric optics and laser propagation.
NASA Astrophysics Data System (ADS)
Sometani, Mitsuru; Okamoto, Dai; Harada, Shinsuke; Ishimori, Hitoshi; Takasu, Shinji; Hatakeyama, Tetsuo; Takei, Manabu; Yonezawa, Yoshiyuki; Fukuda, Kenji; Okumura, Hajime
2016-04-01
The threshold-voltage (Vth) shift of 4H-SiC MOSFETs with Ar or N2O post-oxidation annealing (POA) was measured by conventional sweep and non-relaxation methods. Although the Vth shift values of both samples were almost identical when measured by the sweep method, those for the Ar POA samples were larger than those for the N2O POA samples when measured by the non-relaxation method. Thus, we can say that investigating the exact Vth shifts using only the conventional sweep method is difficult. The temperature-dependent analysis of the Vth shifts measured by both methods revealed that the N2O POA decreases charge trapping in the near-interface region of the SiO2.
Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods
NASA Astrophysics Data System (ADS)
Kozdon, J. E.; Wilcox, L.
2013-12-01
Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.
New cardiac MRI gating method using event-synchronous adaptive digital filter.
Park, Hodong; Park, Youngcheol; Cho, Sungpil; Jang, Bongryoel; Lee, Kyoungjoung
2009-11-01
When imaging the heart using MRI, an artefact-free electrocardiograph (ECG) signal is not only important for monitoring the patient's heart activity but also essential for cardiac gating to reduce noise in MR images induced by moving organs. The fundamental problem in conventional ECG is the distortion induced by electromagnetic interference. Here, we propose an adaptive algorithm for the suppression of MR gradient artefacts (MRGAs) in the ECG leads of a cardiac MRI gating system. We have modeled MRGAs by assuming a source of strong pulses used for dephasing the MR signal; the modeled MRGAs are rectangular pulse-like signals. We used an event-synchronous adaptive digital filter whose reference signal is synchronous with the gradient peaks of the MRI. The event detection processor for the event-synchronous adaptive digital filter was implemented using the phase space method, a kind of topology mapping method, and a least-squares acceleration filter. For evaluating the efficiency of the proposed method, the filter was tested using simulated and actual data. The proposed method requires only a simple experimental setup, with no extra hardware connections needed to obtain the reference signals for the adaptive digital filter. The proposed algorithm was more effective than the multichannel approach. PMID:19644754
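The event-synchronous cancellation idea can be sketched with an LMS filter whose reference is an impulse train placed at the detected gradient events; the filter length, step size, and impulse reference are illustrative assumptions, not the paper's exact design.

```python
def event_synchronous_lms(signal, event_times, taps=8, mu=0.01):
    """Adaptive noise canceller whose reference is an impulse train
    synchronous with detected gradient events.

    The LMS filter learns the artefact waveform that follows each event
    and subtracts its estimate from the corrupted ECG samples."""
    n = len(signal)
    ref = [0.0] * n
    for t in event_times:
        if 0 <= t < n:
            ref[t] = 1.0                           # impulse at each event
    w = [0.0] * taps
    cleaned = []
    for i in range(n):
        x = [ref[i - k] if i - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # artefact estimate
        e = signal[i] - y                          # cleaned sample
        for k in range(taps):
            w[k] += mu * e * x[k]                  # LMS weight update
        cleaned.append(e)
    return cleaned
```

Because the reference is nonzero only near events, the weights converge to the repeating artefact shape, and the residual around later events shrinks toward the underlying ECG.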
Item Pocket Method to Allow Response Review and Change in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Han, Kyung T.
2013-01-01
Most computerized adaptive testing (CAT) programs do not allow test takers to review and change their responses because it could seriously deteriorate the efficiency of measurement and make tests vulnerable to manipulative test-taking strategies. Several modified testing methods have been developed that provide restricted review options while…
Method for reducing the drag of blunt-based vehicles by adaptively increasing forebody roughness
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A. (Inventor); Saltzman, Edwin J. (Inventor); Moes, Timothy R. (Inventor); Iliff, Kenneth W. (Inventor)
2005-01-01
A method for reducing drag upon a blunt-based vehicle by adaptively increasing forebody roughness to increase drag at the roughened area of the forebody, which results in a decrease in drag at the base of this vehicle, and in total vehicle drag.
NASA Technical Reports Server (NTRS)
Kornilova, L. N.; Cowings, P. S.; Toscano, W. B.; Arlashchenko, N. I.; Korneev, D. Iu; Ponomarenko, A. V.; Salagovich, S. V.; Sarantseva, A. V.; Kozlovskaia, I. B.
2000-01-01
Presented are results of testing the method of adaptive biocontrol during preflight training of cosmonauts. Within the MIR-25 crew, a high level of controllability of autonomic reactions was characteristic of the Flight Commanders of MIR-23 and MIR-25 and the Flight Engineer of MIR-23, while the Flight Engineer of MIR-25 displayed a weak, intricate dependence of these reactions on the depth of relaxation or strain.
Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy
2006-01-01
This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the crustal dynamics of the Earth's surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating and running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's surface is spurred by proposed future NASA missions, such as InSAR, for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-01-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
Matthews, Devin A.; Stanton, John F.
2015-02-14
The theory of non-orthogonal spin-adaptation for closed-shell molecular systems is applied to coupled cluster methods with quadruple excitations (CCSDTQ). Calculations at this level of theory are of critical importance in describing the properties of molecular systems to an accuracy which can meet or exceed modern experimental techniques. Such calculations are of significant (and growing) importance in such fields as thermodynamics, kinetics, and atomic and molecular spectroscopies. With respect to the implementation of CCSDTQ and related methods, we show that there are significant advantages to non-orthogonal spin-adaptation with respect to simplification and factorization of the working equations and to creating an efficient implementation. The resulting algorithm is implemented in the CFOUR program suite for CCSDT, CCSDTQ, and various approximate methods (CCSD(T), CC3, CCSDT-n, and CCSDT(Q)).
NASA Astrophysics Data System (ADS)
Chai, Runqi; Savvaris, Al; Tsourdos, Antonios
2016-06-01
In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicle (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions were calculated for each single-objective scenario. To obtain a compromise solution for each target, the fuzzy physical programming model is proposed. The preference function is established considering the fuzzy factors of the system so that a proper compromise trajectory can be acquired. In addition, NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible in dealing with the multi-objective skip trajectory optimization for the SMV.
An Adaptive Instability Suppression Controls Method for Aircraft Gas Turbine Engine Combustors
NASA Technical Reports Server (NTRS)
Kopasakis, George; DeLaat, John C.; Chang, Clarence T.
2008-01-01
An adaptive controls method for instability suppression in gas turbine engine combustors has been developed and successfully tested with a realistic aircraft engine combustor rig. This testing was part of a program that demonstrated, for the first time, successful active combustor instability control in an aircraft gas turbine engine-like environment. The controls method is called Adaptive Sliding Phasor Averaged Control. Testing of the control method has been conducted in an experimental rig with different configurations designed to simulate combustors with instabilities of about 530 and 315 Hz. Results demonstrate the effectiveness of this method in suppressing combustor instabilities. In addition, a dramatic improvement in suppression of the instability was achieved by focusing control on the second harmonic of the instability. This is believed to be due to a phenomenon discovered and reported earlier, the so-called Intra-Harmonic Coupling. These results may have implications for future research in combustor instability control.
Xia, Kelin; Zhan, Meng; Wan, Decheng; Wei, Guo-Wei
2012-02-01
Mesh deformation methods are a versatile strategy for solving partial differential equations (PDEs) with a vast variety of practical applications. However, these methods break down for elliptic PDEs with discontinuous coefficients, namely, elliptic interface problems. For this class of problems, additional interface jump conditions are required to maintain the well-posedness of the governing equation. Consequently, in order to achieve high accuracy and high-order convergence, additional numerical algorithms are required to enforce the interface jump conditions in solving elliptic interface problems. The present work introduces an interface-technique-based adaptively deformed mesh strategy for resolving elliptic interface problems. We take advantage of the high accuracy, flexibility, and robustness of the matched interface and boundary (MIB) method to construct an adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients. The proposed method generates deformed meshes in the physical domain and solves the transformed governing equations in the computational domain, which maintains regular Cartesian meshes. The mesh deformation is realized by a mesh transformation PDE, which controls the mesh redistribution by a source term. The source term consists of a monitor function, which builds in mesh contraction rules. Both interface-geometry-based deformed meshes and solution-gradient-based deformed meshes are constructed to reduce the L∞ and L2 errors in solving elliptic interface problems. The proposed adaptively deformed mesh based interface method is extensively validated by many numerical experiments. Numerical results indicate that the adaptively deformed mesh based interface method outperforms the original MIB method for elliptic interface problems. PMID:22586356
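The monitor-function-driven mesh contraction described above can be illustrated in one dimension by equidistribution, a common realization of the same idea: nodes are placed so each cell carries an equal share of the monitor integral. The Gaussian monitor mimicking an interface at x = 0.5 is an assumption for illustration, not the paper's transformation PDE:

```python
import numpy as np

def equidistribute(x_uniform, monitor):
    """Redistribute a 1D mesh so each cell holds an equal share of the
    monitor-function integral (the source term driving mesh contraction)."""
    m = monitor(x_uniform)
    # cumulative (trapezoidal) integral of the monitor on the uniform mesh
    c = np.concatenate([[0.0],
                        np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x_uniform))])
    c /= c[-1]                                  # normalize to [0, 1]
    # invert: place nodes at equal increments of the cumulative monitor
    targets = np.linspace(0.0, 1.0, len(x_uniform))
    return np.interp(targets, c, x_uniform)

# monitor concentrating nodes near an assumed interface at x = 0.5
monitor = lambda x: 1.0 + 20.0 * np.exp(-200.0 * (x - 0.5) ** 2)
x0 = np.linspace(0.0, 1.0, 41)
x_adapted = equidistribute(x0, monitor)
```

The adapted mesh keeps the endpoints fixed and clusters points where the monitor is large, which is exactly the behavior a contraction-rule monitor is meant to produce.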
Adaptation strategies for high order discontinuous Galerkin methods based on Tau-estimation
NASA Astrophysics Data System (ADS)
Kompenhans, Moritz; Rubio, Gonzalo; Ferrer, Esteban; Valero, Eusebio
2016-02-01
In this paper three p-adaptation strategies based on the minimization of the truncation error are presented for high order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed and termed a posteriori, quasi-a priori, and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order, but while the former needs time-converged solutions, the latter two rely on non-converged solutions, which leads to faster computations. In addition, the high order method permits spatial decoupling of the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
A wavelet-optimized, very high order adaptive grid and order numerical method
NASA Technical Reports Server (NTRS)
Jameson, Leland
1996-01-01
Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, differentiating this polynomial, and finally evaluating the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid, and this grid is refined locally based on wavelet analysis.
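The construction described above (interpolate a polynomial through the stencil, differentiate, evaluate) can be sketched for the algebraic-polynomial case; on a uniform five-point stencil it should reproduce the classical fourth-order central first-derivative weights:

```python
import numpy as np

def poly_diff_weights(x_stencil, x_eval, order=1):
    """Derivative-approximation weights obtained by interpolating a
    polynomial through the stencil nodes, differentiating it, and
    evaluating the derivative at x_eval."""
    n = len(x_stencil)
    w = np.empty(n)
    for j in range(n):
        # Lagrange basis polynomial: 1 at node j, 0 at all other nodes
        e = np.zeros(n)
        e[j] = 1.0
        p = np.polyfit(x_stencil, e, n - 1)
        w[j] = np.polyval(np.polyder(p, order), x_eval)
    return w

# uniform 5-point centered stencil; expected weights are [1, -8, 0, 8, -1]/12
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
w = poly_diff_weights(x, 0.0)
```

Non-uniform stencils (for example Chebyshev nodes) work with the same routine; only `x_stencil` changes.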
An h-adaptive finite element method for turbulent heat transfer
Carrington, David B.
2009-01-01
A two-equation turbulence closure model (k-ω) using an h-adaptive grid technique and the finite element method (FEM) has been developed to simulate low-Mach-number flow and heat transfer. The approach is applicable to many flows in the engineering and environmental sciences. Of particular interest in engineering modeling are combustion, solidification, and heat exchanger design. Flows for indoor air quality modeling and atmospheric pollution transport are typical environmental flows modeled with this method. The numerical method is based on a hybrid finite element model using an equal-order projection process. The model includes thermal and species transport, localized mesh refinement (h-adaptive), and Petrov-Galerkin weighting for stabilizing advection. This work develops the continuum model of a two-equation turbulence closure method. The fractional step solution method is stated along with the h-adaptive grid method (Carrington and Pepper, 2002). Solutions are presented for 2D flow over a backward-facing step.
A Digitalized Gyroscope System Based on a Modified Adaptive Control Method
Xia, Dunzhu; Hu, Yiwei; Ni, Peizhen
2016-01-01
In this work we investigate the possibility of applying an adaptive control algorithm to Micro-Electro-Mechanical System (MEMS) gyroscopes. By comparing the gyroscope's working conditions with a reference model, the adaptive control method provides online estimation of the key parameters and a proper control strategy for the system. The digital second-order oscillators in the reference model are replaced by two phase-locked loops (PLLs) to achieve steadier amplitude and frequency control. The adaptive law is modified to satisfy the condition of unequal coupling stiffness and coupling damping coefficients. The rotation mode of the gyroscope system is considered in our work, and a rotation elimination section is added to the digitalized system. Before implementing the algorithm on the hardware platform, different simulations were conducted to ensure the algorithm can meet the requirements of the angular rate sensor, and some of the key adaptive law coefficients were optimized. The coupling components are detected and suppressed, and the Lyapunov criterion is applied to prove the stability of the system. The modified adaptive control algorithm is verified in a digitalized gyroscope system; the control system is realized in the digital domain with a Field Programmable Gate Array (FPGA). Key structure parameters are measured and compared with the estimation results, which validates that the algorithm is feasible in this setup. Extra gyroscopes are used in repeated experiments to demonstrate the generality of the algorithm. PMID:26959019
Scale-adaptive tensor algebra for local many-body methods of electronic structure theory
Liakh, Dmitry I
2014-01-01
While the formalism of multiresolution analysis (MRA), based on wavelets and adaptive integral representations of operators, is actively progressing in electronic structure theory (mostly on the independent-particle level and, recently, second-order perturbation theory), the concepts of multiresolution and adaptivity can also be utilized within the traditional formulation of correlated (many-particle) theory which is based on second quantization and the corresponding (generally nonorthogonal) tensor algebra. In this paper, we present a formalism called scale-adaptive tensor algebra (SATA) which exploits an adaptive representation of tensors of many-body operators via the local adjustment of the basis set quality. Given a series of locally supported fragment bases of a progressively lower quality, we formulate the explicit rules for tensor algebra operations dealing with adaptively resolved tensor operands. The formalism suggested is expected to enhance the applicability and reliability of local correlated many-body methods of electronic structure theory, especially those directly based on atomic orbitals (or any other localized basis functions).
Program For Thresholding In Digital Images
NASA Technical Reports Server (NTRS)
Nolf, Scott R.; Avis, Elizabeth L.; Matthews, Christine G.; Stacy, Kathryn
1994-01-01
The THRTOOL program applies thresholding techniques to Sun rasterfiles and provides a choice among four methods of thresholding. Written in the C language and implemented on Sun series and Silicon Graphics IRIS machines.
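The abstract does not specify THRTOOL's four methods; as a generic illustration of one widely used histogram-based technique, here is an Otsu-style global threshold (a Python sketch, whereas THRTOOL itself is written in C):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Histogram-based global threshold (Otsu's method): choose the level
    that maximizes the between-class variance of foreground vs background."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # background class weight
    w1 = 1.0 - w0                          # foreground class weight
    mu0 = np.cumsum(p * centers)           # unnormalized background mean
    mu_t = mu0[-1]                         # total mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu0) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

# bimodal toy image: dark background with a bright square
rng = np.random.default_rng(0)
img = rng.normal(50.0, 5.0, (64, 64))
img[16:48, 16:48] = rng.normal(200.0, 5.0, (32, 32))
t = otsu_threshold(img)
```

Thresholding the image at `t` then yields a binary mask separating the two intensity populations.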
Palacio-Mancheno, Paolo E.; Larriera, Adriana I.; Doty, Stephen B.; Cardoso, Luis; Fritton, Susannah P.
2013-01-01
Current micro-CT systems allow scanning bone at resolutions capable of three-dimensional characterization of intracortical vascular porosity and osteocyte lacunae. However, the scanning and reconstruction parameters along with the image segmentation method affect the accuracy of the measurements. In this study, the effects of scanning resolution and image threshold method in quantifying small features of cortical bone (vascular porosity, vascular canal diameter and separation, lacunar porosity and density, and tissue mineral density) were analyzed. Cortical bone from the tibia of Sprague-Dawley rats was scanned at 1-µm and 4-µm resolutions, reconstructions were density-calibrated, and volumes of interest were segmented using approaches based on edge-detection or histogram analysis. With 1-µm resolution scans, the osteocyte lacunar spaces could be visualized, and it was possible to separate the lacunar porosity from the vascular porosity. At 4-µm resolution, the vascular porosity and vascular canal diameter were underestimated, and osteocyte lacunae were not effectively detected, whereas the vascular canal separation and tissue mineral density were overestimated compared to 1-µm resolution. Resolution had a much greater effect on the measurements than did threshold method, with partial volume effects at resolutions coarser than 2 µm demonstrated in two separate analyses, one of which assessed the effect of resolution on an object of known size with similar architecture to a vascular pore. Although there was little difference when using the edge-detection versus histogram-based threshold approaches, edge-detection was somewhat more effective in delineating canal architecture at finer resolutions (1 – 2 µm). In addition, use of a high-resolution (1-µm) density-based threshold on lower resolution (4-µm) density-calibrated images was not effective in improving the lower-resolution measurements. In conclusion, if measuring cortical vascular microarchitecture
An adaptive subspace trust-region method for frequency-domain seismic full waveform inversion
NASA Astrophysics Data System (ADS)
Zhang, Huan; Li, Xiaofan; Song, Hanjie; Liu, Shaolin
2015-05-01
Full waveform inversion is currently considered a promising seismic imaging method to obtain high-resolution and quantitative images of the subsurface. It is a nonlinear, ill-posed inverse problem, and the main difficulty that prevents full waveform inversion from being widely applied to real data is its sensitivity to incorrect initial models and noisy data. Local optimization methods, including Newton's method and gradient methods, often converge to local minima, while global optimization algorithms such as simulated annealing are computationally costly. To confront this issue, in this paper we investigate the possibility of applying the trust-region method to the full waveform inversion problem. Different from line search methods, trust-region methods force the new trial step to lie within a certain neighborhood of the current iterate. Theoretically, trust-region methods are reliable and robust, and they have very strong convergence properties. The capability of this inversion technique is tested with the synthetic Marmousi velocity model and the SEG/EAGE Salt model. Numerical examples demonstrate that the adaptive subspace trust-region method can provide solutions closer to the global minimum, with a higher convergence rate, than the conventional approximate Hessian approach and the L-BFGS method. In addition, the match between the inverted model and the true model remains excellent even when the initial model deviates far from the true model. Inversion results with noisy data also exhibit the remarkable capability of the adaptive subspace trust-region method for low signal-to-noise data inversions. These promising numerical results suggest the adaptive subspace trust-region method is suitable for full waveform inversion, as it has stronger convergence and a higher convergence rate.
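The trust-region mechanics the abstract contrasts with line searches can be sketched with a textbook dogleg implementation on the Rosenbrock test function; this illustrates only the generic method, not the paper's adaptive subspace variant:

```python
import numpy as np

def dogleg_step(g, B, delta):
    """Approximately solve the trust-region subproblem
    min g.T p + 0.5 p.T B p  s.t. ||p|| <= delta  (dogleg path)."""
    gBg = g @ B @ g
    if gBg <= 0.0:                            # negative curvature: steepest descent to the boundary
        return -delta * g / np.linalg.norm(g)
    pB = -np.linalg.solve(B, g)               # full Newton step
    if np.linalg.norm(pB) <= delta:
        return pB
    pU = -(g @ g) / gBg * g                   # unconstrained Cauchy (steepest-descent) step
    if np.linalg.norm(pU) >= delta:
        return delta * pU / np.linalg.norm(pU)
    d = pB - pU                               # walk from pU toward pB until ||p|| = delta
    a, b, c = d @ d, 2.0 * (pU @ d), pU @ pU - delta ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return pU + tau * d

def trust_region(f, grad, hess, x, delta=1.0, tol=1e-8, max_iter=200):
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        p = dogleg_step(g, B, delta)
        pred = -(g @ p + 0.5 * p @ B @ p)     # reduction predicted by the quadratic model
        rho = (f(x) - f(x + p)) / pred        # model/function agreement ratio
        if rho < 0.25:
            delta *= 0.25                     # poor model: shrink the region
        elif rho > 0.75 and np.linalg.norm(p) >= 0.99 * delta:
            delta = min(2.0 * delta, 10.0)    # good model at the boundary: expand
        if rho > 0.1:                         # accept the step only if it actually helps
            x = x + p
    return x

# classic Rosenbrock test problem with analytic gradient and Hessian
f = lambda x: (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
                           200.0 * (x[1] - x[0] ** 2)])
hess = lambda x: np.array([[2.0 + 1200.0 * x[0] ** 2 - 400.0 * x[1], -400.0 * x[0]],
                           [-400.0 * x[0], 200.0]])
x_opt = trust_region(f, grad, hess, np.array([-1.2, 1.0]))
```

The key contrast with a line search is visible in `dogleg_step`: the step direction and length are chosen jointly, constrained to the region where the quadratic model is trusted.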
Adaptive mesh refinement techniques for the immersed interface method applied to flow problems.
Li, Zhilin; Song, Peng
2013-06-01
In this paper, we develop an adaptive mesh refinement strategy for the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in [12] (CiCP, 12 (2012), 515-527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for the Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by surface tension, and classical bubble deformation problems. A simple new area-preserving strategy for the level set method is also proposed in this paper. PMID:23794763
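The narrow-band refinement criterion |φ| ≤ δ can be sketched directly; the circular level set and band width below are assumptions for illustration:

```python
import numpy as np

def refine_flags(phi, delta):
    """Flag grid cells lying in the narrow band |phi| <= delta around the
    zero level set; only these cells receive finer Cartesian meshes."""
    return np.abs(phi) <= delta

def split_cell(cx, cy, h):
    """Centers of the four children produced by one 2x2 refinement of a cell."""
    q = h / 4.0
    return [(cx - q, cy - q), (cx - q, cy + q), (cx + q, cy - q), (cx + q, cy + q)]

# level-set function of a circular interface of radius 0.3 centered at (0.5, 0.5)
n = 64
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.3
flags = refine_flags(phi, delta=3.0 * h)   # band a few cells wide
```

As the interface moves, re-evaluating the flags on the new φ gives the updated band, so the fine mesh tracks the interface.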
A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Eiseman, Peter R.
1990-01-01
A time-accurate, general-purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction-correction method that is simple to implement and ensures the time accuracy of the grid. Time-accurate solutions of the 2-D Euler equations for an unsteady shock-vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
A simple and inexpensive method for determining cold sensitivity and adaptation in mice.
Brenner, Daniel S; Golden, Judith P; Vogt, Sherri K; Gereau, Robert W
2015-01-01
Cold hypersensitivity is a serious clinical problem, affecting a broad subset of patients and causing significant decreases in quality of life. The cold plantar assay allows the objective and inexpensive assessment of cold sensitivity in mice, and can quantify both analgesia and hypersensitivity. Mice are acclimated on a glass plate, and a compressed dry ice pellet is held against the glass surface underneath the hindpaw. The latency to withdrawal from the cooling glass is used as a measure of cold sensitivity. Cold sensation is also important for survival in regions with seasonal temperature shifts, and in order to maintain sensitivity animals must be able to adjust their thermal response thresholds to match the ambient temperature. The Cold Plantar Assay (CPA) also allows the study of adaptation to changes in ambient temperature by testing the cold sensitivity of mice at temperatures ranging from 30 °C to 5 °C. Mice are acclimated as described above, but the glass plate is cooled to the desired starting temperature using aluminum boxes (or aluminum foil packets) filled with hot water, wet ice, or dry ice. The temperature of the plate is measured at the center using a filament T-type thermocouple probe. Once the plate has reached the desired starting temperature, the animals are tested as described above. This assay allows testing of mice at temperatures ranging from innocuous to noxious. The CPA yields unambiguous and consistent behavioral responses in uninjured mice and can be used to quantify both hypersensitivity and analgesia. This protocol describes how to use the CPA to measure cold hypersensitivity, analgesia, and adaptation in mice. PMID:25867969
Development of the Adaptive Collision Source (ACS) method for discrete ordinates
Walters, W.; Haghighat, A.
2013-07-01
We have developed a new collision source method to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained, with potentially a different quadrature order. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This allows for an optimal use of processing power, by using a high order quadrature for the first few iterations that need it, before shifting to lower order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and we call it the adaptive collision source method (ACS). The ACS methodology has been implemented in the TITAN discrete ordinates code, and has shown a relative speedup of 1.5-2.5 on a test problem, for the same desired level of accuracy. (authors)
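The rationale above, that successive scattering iterations become more isotropic and therefore tolerate lower quadrature orders, can be illustrated with a toy angular integration; the sample angular fluxes and orders are invented for illustration and are unrelated to TITAN's implementation:

```python
import numpy as np

def integrate_gl(f, order):
    """Integrate f(mu) over [-1, 1] with an 'order'-point Gauss-Legendre rule."""
    mu, w = np.polynomial.legendre.leggauss(order)
    return w @ f(mu)

# toy angular fluxes: each successive scattering generation is smoother
# (closer to isotropic), so it tolerates a lower quadrature order
generations = [
    lambda mu: np.exp(10.0 * mu),          # uncollided: sharply forward-peaked
    lambda mu: 1.0 + mu + mu ** 2,         # once-scattered: mildly anisotropic
    lambda mu: np.full_like(mu, 0.5),      # twice-scattered: isotropic
]
orders = [16, 4, 1]                        # per-generation quadrature order

reference = [integrate_gl(f, 64) for f in generations]
adaptive = [integrate_gl(f, m) for f, m in zip(generations, orders)]
```

Matching the high-order reference with far fewer total quadrature points is the source of the speedup the method reports.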
Vortical Flow Prediction using an Adaptive Unstructured Grid Method. Chapter 11
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2009-01-01
A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.
NASA Astrophysics Data System (ADS)
Shi, Lei; Wang, Z. J.
2015-08-01
Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction (CPR) formulation to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimates for the output are obtained with the CPR method. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions, and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.
A method for online verification of adapted fields using an independent dose monitor
Chang, Jina; Norrlinger, Bernhard D.; Heaton, Robert K.; Jaffray, David A.; Cho, Young-Bin; Islam, Mohammad K.; Mahon, Robert
2013-07-15
Purpose: Clinical implementation of online adaptive radiotherapy requires generation of modified fields and a method of dosimetric verification in a short time. We present a method of treatment field modification to account for patient setup error, and an online method of verification using an independent monitoring system. Methods: The fields are modified by translating each multileaf collimator (MLC) defined aperture in the direction of the patient setup error, and magnifying it to account for the variation in distance to the marked isocentre. A modified version of a previously reported online beam monitoring system, the integral quality monitoring (IQM) system, was investigated for validation of adapted fields. The system consists of a large-area ion chamber with a spatial gradient in electrode separation, mounted below the MLC to provide a spatially sensitive signal for each beam segment, and a calculation algorithm to predict the signal. IMRT plans of ten prostate patients were modified in response to six randomly chosen setup errors in three orthogonal directions. Results: A total of approximately 49 beams for the modified fields were verified by the IQM system; 97% of the measured IQM signals agreed with the predicted values to within 2%. Conclusions: The modified IQM system was found to be suitable for online verification of adapted treatment fields.
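The translate-and-magnify field modification described above can be sketched as follows. This is an illustrative sketch, not the authors' clinical code; the function name, millimetre units, single-axis leaf model and magnification definition are all assumptions for the example.

```python
def adapt_aperture(leaf_pairs, shift, src_to_iso=1000.0, iso_offset=0.0):
    """Translate each MLC leaf pair by the lateral setup error `shift` (mm)
    and magnify the aperture for the change in source-to-isocentre distance.
    `leaf_pairs` is a list of (left, right) leaf positions in mm."""
    mag = (src_to_iso + iso_offset) / src_to_iso  # magnification factor
    adapted = []
    for left, right in leaf_pairs:
        adapted.append(((left + shift) * mag, (right + shift) * mag))
    return adapted
```

With no isocentre distance change the aperture is simply shifted rigidly, e.g. `adapt_aperture([(-10.0, 10.0)], 2.0)` moves both leaves 2 mm.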
Ding, Peng; Fung, George Shiu-Kai; Lin, Ming De; Holman, Shaina D.; German, Rebecca Z.
2015-01-01
Purpose: To determine the effect of bilateral superior laryngeal nerve (SLN) lesion on swallowing threshold volume and the occurrence of aspiration, using a novel measurement technique for videofluoroscopic swallowing studies (VFSS). Methods and Materials: We used a novel radiographic phantom to assess the volume of barium-containing milk from fluoroscopy. The custom-made phantom was first calibrated by comparing the image intensity of the phantom with known cylinder depths. Second, pouches of milk of known volume in a pig cadaver were compared to volumes calculated with the phantom. Using these standards, we calculated the volume of milk in the valleculae, esophagus and larynx for 205 feeding sequences from four infant pigs feeding before and after bilateral SLN lesions. Swallow safety was assessed using the IMPAS scale. Results: The log-linear correlation between image intensity values from the phantom filled with barium milk and the known phantom cylinder depths was strong (R² > 0.95), as was that for the calculated volumes of the barium milk pouches. The threshold volume of bolus in the valleculae during feeding was significantly larger after bilateral SLN lesion than in control swallows (p < 0.001). The IMPAS score increased in the lesioned swallows relative to the controls (p < 0.001). Conclusion: Bilateral SLN lesion dramatically increased the aspiration incidence and the threshold volume of bolus in the valleculae. The use of this phantom permits quantification of the aspirated volume of fluid. The custom-made phantom and calibration allow for more accurate 3D volume estimation from 2D x-ray in VFSS. PMID:25270532
Vivid Motor Imagery as an Adaptation Method for Head Turns on a Short-Arm Centrifuge
NASA Technical Reports Server (NTRS)
Newby, N. J.; Mast, F. W.; Natapoff, A.; Paloski, W. H.
2006-01-01
from one another. For the perceived duration of sensations, the CG group again exhibited the least amount of adaptation. However, the rates of adaptation of the PA and the MA groups were indistinguishable, suggesting that the imagined pseudostimulus appeared to be just as effective a means of adaptation as the actual stimulus. The MA group's rate of adaptation to motion sickness symptoms was also comparable to the PA group. The use of vivid motor imagery may be an effective method for adapting to the illusory sensations and motion sickness symptoms produced by cross-coupled stimuli. For space-based AG applications, this technique may prove quite useful in retaining astronauts considered highly susceptible to motion sickness as it reduces the number of actual CCS required to attain adaptation.
NASA Astrophysics Data System (ADS)
Gotovac, Hrvoje; Srzic, Veljko
2014-05-01
Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or more of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian-Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, with explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are compactly supported, exactly describe algebraic polynomials and enable a multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. According to our recent achievements there is no need for solving the large
Adaptation of LASCA method for diagnostics of malignant tumours in laboratory animals
Ul'yanov, S S; Laskavyi, V N; Glova, Alina B; Polyanina, T I; Ul'yanova, O V; Fedorova, V A; Ul'yanov, A S
2012-05-31
The LASCA method is adapted for diagnostics of malignant neoplasms in laboratory animals. Tumours are studied in mice of the Balb/c inbred line after inoculation of cells of the syngeneic myeloma cell line Sp.2/0 Ag.8. The appropriateness of using the tLASCA method in tumour investigations is substantiated, and its advantages in comparison with the sLASCA method are demonstrated. It is found that the most informative characteristic indicating the presence of a tumour is the fractal dimension of LASCA images.
Adaptation of LASCA method for diagnostics of malignant tumours in laboratory animals
NASA Astrophysics Data System (ADS)
Ul'yanov, S. S.; Laskavyi, V. N.; Glova, Alina B.; Polyanina, T. I.; Ul'yanova, O. V.; Fedorova, V. A.; Ul'yanov, A. S.
2012-05-01
The LASCA method is adapted for diagnostics of malignant neoplasms in laboratory animals. Tumours are studied in mice of the Balb/c inbred line after inoculation of cells of the syngeneic myeloma cell line Sp.2/0-Ag.8. The appropriateness of using the tLASCA method in tumour investigations is substantiated, and its advantages in comparison with the sLASCA method are demonstrated. It is found that the most informative characteristic indicating the presence of a tumour is the fractal dimension of LASCA images.
A novel timestamp based adaptive clock method for circuit emulation service over packet network
NASA Astrophysics Data System (ADS)
Dai, Jin-you; Yu, Shao-hua
2007-11-01
It is necessary to transport TDM (time division multiplexing) services over packet networks such as IP and Ethernet, and synchronization is a problem when carrying TDM over a packet network. Clock recovery methods for TDM over packet networks are reviewed, and a new adaptive clock method is presented. The method is timestamp based, but no timestamp needs to be transported over the packet network. By using the local oscillator and a counter, timestamp information (a local timestamp) relating the service clock of the remote PE (provider edge) to that of the near PE can be obtained. A D-EWMA filter algorithm removes the noise caused by the packet network so that the useful timestamp can be extracted. With this timestamp and a voltage-controlled oscillator, the clock frequency of the near PE can be adjusted to match the clock frequency of the remote PE. A simulation device was designed and a test network topology set up to verify the method. The experimental results show that the overall performance of the new method is better than that of ordinary buffer-based and ordinary timestamp-based methods.
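The timestamp-filtering step can be illustrated with a plain exponentially weighted moving average; the authors' D-EWMA variant is not detailed in the abstract, so a standard EWMA is used here as a stand-in, with all names assumed for the example.

```python
def ewma_filter(samples, alpha=0.1):
    """Exponentially weighted moving average: smooth packet-jitter noise out
    of a sequence of local timestamp differences before the filtered value is
    used to steer the voltage-controlled oscillator."""
    est = samples[0]          # initialize with the first observation
    out = []
    for s in samples:
        est = alpha * s + (1 - alpha) * est  # blend new sample into estimate
        out.append(est)
    return out
```

A small `alpha` averages over many packets and rejects jitter at the cost of slower tracking of genuine frequency drift; tuning that trade-off is the heart of any adaptive clock recovery scheme.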
NASA Astrophysics Data System (ADS)
Roussel, Olivier; Schneider, Kai
2010-03-01
An adaptive multiresolution method based on a second-order finite volume discretization is presented for solving the three-dimensional compressible Navier-Stokes equations in Cartesian geometry. The explicit time discretization is of second order, and a 2-4 MacCormack scheme is used for flux evaluation. Coherent Vortex Simulations (CVS) are performed by decomposing the flow variables into coherent and incoherent contributions. The coherent part is computed deterministically on a locally refined grid using the adaptive multiresolution method, while the influence of the incoherent part is neglected to model turbulent dissipation. The computational efficiency of this approach in terms of memory and CPU time compression is illustrated for turbulent mixing layers in the weakly compressible regime and for Reynolds numbers, based on the mixing layer thickness, between 50 and 200. Comparisons with direct numerical simulations allow the precision and efficiency of CVS to be assessed.
H∞ Adaptive tracking control for switched systems based on an average dwell-time method
NASA Astrophysics Data System (ADS)
Wu, Caiyun; Zhao, Jun
2015-10-01
This paper investigates the H∞ state tracking model reference adaptive control (MRAC) problem for a class of switched systems using an average dwell-time method. First, a stability criterion is established for a switched reference model. Then, an adaptive controller is designed and the state tracking control problem is converted into a stability analysis. The global practical stability of the error switched system can be guaranteed under a class of switching signals characterised by an average dwell time. Consequently, sufficient conditions for the solvability of the H∞ state tracking MRAC problem are derived. An example of a highly manoeuvrable aircraft technology vehicle is given to demonstrate the feasibility and effectiveness of the proposed design method.
An Adaptive Mesh Refinement Strategy for Immersed Boundary/Interface Methods.
Li, Zhilin; Song, Peng
2012-01-01
An adaptive mesh refinement strategy is proposed in this paper for the Immersed Boundary and Immersed Interface methods for two-dimensional elliptic interface problems involving singular sources. The interface is represented by the zero level set of a Lipschitz function φ(x,y). Our adaptive mesh refinement is done within a small tube |φ(x,y)| ≤ δ with finer Cartesian meshes. The discrete linear system of equations is solved by a multigrid solver. By distributing the mesh more economically, the AMR method obtains solutions with accuracy similar to those on a uniform fine grid while reducing the size of the linear system of equations. Numerical examples presented show the efficiency of the grid refinement strategy. PMID:22670155
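The tube-based refinement criterion |φ(x,y)| ≤ δ is simple to express in code. The sketch below marks cells of a discretely sampled level set function for refinement; it is illustrative only, not the paper's multigrid implementation.

```python
def cells_to_refine(phi, delta):
    """Mark Cartesian cells lying in the narrow tube |phi| <= delta around the
    zero level set of the interface function phi (given as a 2D list of cell
    values). Returns (i, j) index pairs of cells to refine."""
    return [(i, j)
            for i, row in enumerate(phi)
            for j, v in enumerate(row)
            if abs(v) <= delta]
```

Only cells near the interface are flagged, which is exactly why the resulting linear system stays much smaller than on a uniformly fine grid.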
Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai
2015-01-01
The key problem in computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and their neighborhood, and effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms. PMID:25945120
An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems
NASA Astrophysics Data System (ADS)
Padgett, Jill M. A.; Ilie, Silvana
2016-03-01
Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
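A minimal sketch of an adaptive leap-size rule follows, assuming a simplified criterion that bounds the expected relative change of each species per leap; the paper's actual strategy for the Reaction-Diffusion Master Equation (with path preservation) is considerably more elaborate, and all names here are illustrative.

```python
def select_tau(populations, rates, eps=0.03, tau_max=1.0):
    """Pick a leap time tau so that no species is expected to change by more
    than a fraction eps of its current population within the leap.
    `rates` holds the total rate of change (events/time) of each species."""
    tau = tau_max
    for x, r in zip(populations, rates):
        if r > 0 and x > 0:
            # expected change r*tau must stay below eps*x
            tau = min(tau, eps * x / r)
    return tau
```

Shrinking `eps` trades speed for accuracy, which is the same dial the adaptive scheme in the paper turns automatically as populations and diffusion rates evolve.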
Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics
NASA Technical Reports Server (NTRS)
Stowers, S. T.; Bass, J. M.; Oden, J. T.
1993-01-01
A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies are classified as adaptive methods: error estimation techniques that approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme that attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
Quantification of organ motion based on an adaptive image-based scale invariant feature method
Paganelli, Chiara; Peroni, Marta
2013-11-15
Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter- and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed by integrating a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained with adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs. exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method's invariance and robustness to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak-to-peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord; Cornett, Frank N.
2008-10-07
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.
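The adaptive per-lane delay update can be sketched as a simple bang-bang control loop: each lane's programmable delay is nudged one step toward cancelling its measured skew relative to the clock. This is a hypothetical illustration of the idea, not the patented circuit; names and the unit-step policy are assumptions.

```python
def update_delays(skews, delays, step=1):
    """One adaptation iteration: for each data lane, nudge its delay setting
    one `step` toward cancelling the measured skew (positive skew means the
    lane arrives late relative to the clock in this sketch)."""
    return [d + (step if s > 0 else -step if s < 0 else 0)
            for s, d in zip(skews, delays)]
```

Repeating this update as skew measurements change lets the delays track slow drift, which is the "adaptive" part of the deskew scheme.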
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord; Cornett, Frank N.
2011-10-04
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.
A Lagrangian-Eulerian finite element method with adaptive gridding for advection-dispersion problems
Ijiri, Y.; Karasaki, K.
1994-02-01
In the present paper, a Lagrangian-Eulerian finite element method with adaptive gridding for solving advection-dispersion equations is described. The code creates new grid points in the vicinity of sharp fronts at every time step in order to reduce numerical dispersion. The code yields quite accurate solutions for a wide range of mesh Peclet numbers and for mesh Courant numbers well in excess of 1.
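A one-dimensional sketch of the front-based point insertion follows: a grid point is added at the midpoint of any interval whose concentration jump exceeds a tolerance. The scalar jump criterion and all names are illustrative assumptions, not the code's actual refinement rule.

```python
def refine_near_fronts(x, c, grad_tol):
    """Insert a new grid point at the midpoint of every interval where the
    concentration jump |c[i] - c[i-1]| exceeds grad_tol, so sharp fronts get
    extra resolution and numerical dispersion is reduced."""
    new_x = [x[0]]
    for i in range(1, len(x)):
        if abs(c[i] - c[i - 1]) > grad_tol:
            new_x.append(0.5 * (x[i - 1] + x[i]))  # midpoint near the front
        new_x.append(x[i])
    return new_x
```

Applied at every time step, this keeps fine spacing only where the advancing front currently sits, which is what allows accurate solutions at mesh Courant numbers well above 1.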
NASA Astrophysics Data System (ADS)
Pedretti, Daniele; Fernàndez-Garcia, Daniel
2013-09-01
Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each method is likely to estimate well a specific portion of the BTCs. Although global methods offer a valid approach to estimating the early-time behavior and peak of BTCs, they exhibit important fluctuations at the tails, where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore a new method is proposed combining the strengths of both KDE approaches. The proposed approach is universal and needs only one parameter (α), which depends slightly on the shape of the BTCs. Results show that, for the tested cases, heavy-tailed BTCs are properly reconstructed with α ≈ 0.5.
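The locally adaptive bandwidth idea, with the global estimator recovered at α = 0, can be sketched with a standard Abramson-style rule; the paper's combined estimator differs in detail, so treat this purely as an illustration of how one parameter α interpolates between the two regimes.

```python
import math

def adaptive_bandwidths(pilot, h, alpha=0.5):
    """Per-particle kernel bandwidths h_i = h * (f_i / g)**(-alpha), where
    `pilot` holds a pilot density estimate f_i at each particle and g is the
    geometric mean of those densities. alpha = 0 gives the fixed (global)
    bandwidth; alpha ~ 0.5 widens kernels in sparse BTC tails and narrows
    them near the peak."""
    g = math.exp(sum(math.log(f) for f in pilot) / len(pilot))
    return [h * (f / g) ** (-alpha) for f in pilot]
```

Particles in low-density regions (small f_i) receive bandwidths larger than h, which is what smooths the noisy late-time tail of the reconstructed BTC.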
Adaptability and stability of genotypes of sweet sorghum by GGEBiplot and Toler methods.
de Figueiredo, U J; Nunes, J A R; da C Parrella, R A; Souza, E D; da Silva, A R; Emygdio, B M; Machado, J R A; Tardin, F D
2015-01-01
Sweet sorghum has considerable potential for ethanol and energy production. The crop is adaptable and can be grown under a wide range of cultivation conditions in marginal areas; however, studies of phenotypic stability are lacking under tropical conditions. Various methods can be used to assess the stability of the crop. Some of these methods generate the same basic information, whereas others provide additional information on genotype x environment (G x E) interactions and/or a description of the genotypes and environments. In this study, we evaluated the complementarity of two methods, GGEBiplot and Toler, with the aim of achieving more detailed information on G x E interactions and their implications for selection of sweet sorghum genotypes. We used data from 25 sorghum genotypes grown in different environments and evaluated the following traits: flowering (FLOW), green mass yield (GMY), total soluble solids (TSS), and tons of Brix per hectare (TBH). Significant G x E interactions were found for all traits. The most stable genotypes identified with the GGEBiplot method were CMSXS643 for FLOW, CMSXS644 and CMSXS647 for GMY, CMSXS646 and CMSXS637 for TSS, and BRS511 and CMSXS647 for TBH. Especially for TBH, the genotype BRS511 was classified as doubly desirable by the Toler method; however, in contrast to the GGEBiplot result, the genotype CMSXS647 was classified as doubly undesirable. The two analytical methods were complementary and enabled a more reliable identification of adapted and stable genotypes. PMID:26400352
Adaptive non-uniformity correction method based on temperature for infrared detector array
NASA Astrophysics Data System (ADS)
Zhang, Zhijie; Yue, Song; Hong, Pu; Jia, Guowei; Lei, Bo
2013-09-01
The existence of non-uniformities in the responsivity of the element array is a severe problem typical of common infrared detectors. These non-uniformities result in a "curtain"-like fixed pattern noise (FPN) that appears in the image. Some random noise can be suppressed by equalization methods, but fixed pattern noise can only be removed by non-uniformity correction. Non-uniformities in the detector array arise from the combined action of the infrared detector array, readout circuit, semiconductor device performance, amplifier circuit and optical system. Conventional linear correction techniques require costly recalibration due to detector drift or changes in temperature. Therefore, an adaptive non-uniformity correction method is needed to solve this problem. Many factors, including detector characteristics and varying environmental conditions, are considered in analyzing the cause of detector drift, and several experiments were designed to verify this analysis. Based on these experiments, an adaptive non-uniformity correction method is put forward in this paper. The strength of this method lies in its simplicity and low computational complexity. Extensive experimental results demonstrate that the proposed scheme overcomes the disadvantages of traditional non-uniformity correction methods.
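For contrast with the adaptive scheme, the conventional calibration-based baseline is the classic two-point correction, sketched below over flat per-pixel lists instead of images. This is the standard textbook technique, not the paper's method, and the names are illustrative.

```python
def two_point_nuc(raw, low_ref, high_ref, target_low, target_high):
    """Classic two-point non-uniformity correction: per-pixel gain and offset
    are derived from two flat-field reference frames (low and high flux or
    temperature) so every pixel maps the references to common target values."""
    corrected = []
    for r, lo, hi in zip(raw, low_ref, high_ref):
        gain = (target_high - target_low) / (hi - lo)
        offset = target_low - gain * lo
        corrected.append(gain * r + offset)
    return corrected
```

It is exactly the drift of these per-pixel gains and offsets with temperature that forces recalibration, and that an adaptive, scene-based method tries to avoid.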
Shack-Hartmann wavefront sensor with large dynamic range by adaptive spot search method.
Shinto, Hironobu; Saita, Yusuke; Nomura, Takanori
2016-07-10
A Shack-Hartmann wavefront sensor (SHWFS), which consists of a microlens array and an image sensor, has been used to measure the wavefront aberrations of human eyes. However, a conventional SHWFS has a finite dynamic range that depends on the diameter of each microlens, and the dynamic range cannot easily be expanded without a decrease in spatial resolution. In this study, an adaptive spot search method to expand the dynamic range of an SHWFS is proposed. In the proposed method, spots are searched for with the help of their approximate displacements, measured with low spatial resolution and large dynamic range. With the proposed method, a wavefront can be correctly measured even if a spot moves beyond its detection area. The adaptive spot search is realized by using a special microlens array that generates both spots and discriminable patterns. The proposed method enables expanding the dynamic range of an SHWFS with a single shot and short processing time. The performance of the proposed method is compared with that of a conventional SHWFS by optical experiments. Furthermore, the dynamic range of the proposed method is quantitatively evaluated by numerical simulations. PMID:27409319
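The underlying SHWFS measurement principle, on which any spot-search method builds, is that a spot's displacement from its reference position divided by the lenslet focal length gives the local wavefront slope. The sketch below states that textbook relation; it is not the paper's adaptive search algorithm, and units and names are assumed.

```python
def local_slopes(ref_spots, spots, focal_len):
    """Local wavefront slopes behind each microlens: (dx/f, dy/f), where
    (dx, dy) is the spot displacement from its reference position and f is
    the lenslet focal length (same length units throughout)."""
    return [((x - x0) / focal_len, (y - y0) / focal_len)
            for (x0, y0), (x, y) in zip(ref_spots, spots)]
```

The conventional dynamic-range limit appears here directly: once a displacement exceeds half the lenslet pitch, the spot-to-lenslet assignment in `zip` becomes ambiguous, which is precisely what the adaptive spot search resolves.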
NASA Astrophysics Data System (ADS)
Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.
2016-01-01
18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, particularly considering the development of patient-personalized radiotherapy treatment with non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies have typically been validated under ideal conditions (e.g., in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV that is feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed both under ideal (e.g., spherical objects with uniform radioactivity concentration) and non-ideal (e.g., non-spherical objects with non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g., with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
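A minimal sketch of a background-adaptive threshold rule of the kind described follows. The fraction and the background handling here are placeholders, not the values calibrated on the NEMA IQ phantom, and the flat voxel list stands in for a 3D image.

```python
def mtv_mask(voxels, bg, frac=0.42):
    """Adaptive-threshold sketch: voxels with uptake above
    bg + frac * (peak - bg) are assigned to the metabolic tumor volume.
    `bg` would come from a background estimator (e.g. k-means clustering);
    `frac` is a placeholder calibration constant."""
    peak = max(voxels)
    thr = bg + frac * (peak - bg)
    return [v >= thr for v in voxels]
```

Because the threshold is set relative to both the lesion peak and the estimated background, the same rule adapts across lesions with very different uptake levels, which is what makes it usable in routine clinical studies.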
The Limits to Adaptation: A Systems Approach
The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering parameters), resource constraints (expressed th...
CARA Risk Assessment Thresholds
NASA Technical Reports Server (NTRS)
Hejduk, M. D.
2016-01-01
Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).
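The tiered thresholds above can be expressed as a simple classifier over the collision probability Pc. The numeric values below are placeholders for illustration, not official CARA settings, and only the first two tiers are modeled.

```python
def classify_pc(pc, red=1e-4, yellow=1e-7):
    """Map a conjunction's collision probability Pc onto warning tiers.
    Threshold values are illustrative placeholders, not CARA's numbers."""
    if pc >= red:
        return "red: issue warning, consider/execute remediation"
    if pc >= yellow:
        return "yellow: analyze event, seek additional information"
    return "green: no action indicated"
```

Separate, more demanding thresholds (e.g. for maneuver screening, where maneuver uncertainty inflates Pc) would be additional parameters in the same pattern.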
Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Christian, Andrew
2016-01-01
Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in the resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method of frequentist statistics, the Generalized Linear Model (GLM), the Nonparametric Bootstrap (another frequentist method), and Markov Chain Monte Carlo Posterior Estimation (a Bayesian approach). Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
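The Nonparametric Bootstrap mentioned above can be sketched in a few lines: resample the observed responses with replacement and report the spread of the recomputed statistic as its uncertainty. This is the generic recipe, not NASA's analysis code; the statistic would be the group-average threshold estimator in practice.

```python
import random

def bootstrap_se(data, stat, n_boot=2000, seed=0):
    """Nonparametric bootstrap standard error: resample `data` with
    replacement n_boot times, recompute `stat` on each resample, and return
    the standard deviation of the resampled statistics."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        vals.append(stat(sample))
    m = sum(vals) / n_boot
    return (sum((v - m) ** 2 for v in vals) / (n_boot - 1)) ** 0.5
```

The many resamples are why the Bootstrap "takes the longest to calculate" relative to the closed-form Delta Method.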
Adaptive f-k deghosting method based on non-Gaussianity
NASA Astrophysics Data System (ADS)
Liu, Lei; Lu, Wenkai
2016-04-01
For conventional horizontally towed streamer data, the f-k deghosting method is widely used to remove receiver ghosts. In the traditional f-k deghosting method, the depth of the streamer and the sea surface reflection coefficient are two key ghost parameters. In general, for one seismic line these two parameters are fixed for all shot gathers and supplied by the user. In practice, the two parameters often vary during acquisition because of rough sea conditions. This paper proposes an automatic method to adaptively estimate these two ghost parameters for every shot gather. Since the proposed method is based on the non-Gaussianity of the deghosting result, it is important to choose a proper non-Gaussian criterion to ensure high accuracy of the parameter estimation. We evaluate six non-Gaussian criteria in a synthetic experiment; the conclusions are expected to provide a reference for choosing the most appropriate criterion. We apply the proposed method to a 2D real field example. Experimental results show that the optimal parameters vary among shot gathers and validate the effectiveness of the parameter estimation process. Moreover, although the method ignores parameter variation within one shot, the adaptive deghosting results show improvements over the deghosting results obtained with constant parameters for the whole line.
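Non-Gaussianity-driven parameter selection can be sketched with excess kurtosis, one common criterion of the kind the paper evaluates (the abstract does not list the six criteria, so kurtosis is an assumption here, and the deghosting operator is left as a user-supplied stub).

```python
def excess_kurtosis(x):
    """Excess kurtosis, a common non-Gaussianity measure: a deghosted trace
    with the ghost well removed tends to be spikier, hence higher kurtosis."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    return sum((v - m) ** 4 for v in x) / (n * var ** 2) - 3.0

def pick_ghost_params(candidates, deghost, trace):
    """Grid search: return the (streamer depth, reflection coefficient) pair
    whose deghosted trace is the most non-Gaussian. `deghost(trace, depth,
    refl)` is a user-supplied stand-in for the f-k deghosting operator."""
    return max(candidates, key=lambda p: excess_kurtosis(deghost(trace, *p)))
```

Running this search independently per shot gather is what makes the parameters adaptive along the line.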
NASA Astrophysics Data System (ADS)
Ghamisi, Pedram; Kumar, Lalit
2012-01-01
Hyperspectral sensors generate useful information about climate and the Earth's surface in numerous contiguous narrow spectral bands, and are widely used in resource management, agriculture, environmental monitoring, etc. Compression of hyperspectral data helps in long-term storage and transmission systems. Lossless compression is preferred for high-detail data, such as hyperspectral data. Due to the high redundancy in neighboring spectral bands and the goal of achieving a higher compression ratio, adaptive coding methods seem suitable for hyperspectral data. This paper introduces two new compression methods. One of these methods is adaptive and powerful for the compression of hyperspectral data; it is based on separating the bands with different specifications by means of the histogram and Binary Particle Swarm Optimization (BPSO) and compressing each group in a different manner. The new proposed methods improve the compression ratio of the JPEG standards and save storage space and transmission bandwidth. The proposed methods are applied to different test cases, and the results are evaluated and compared with some other compression methods, such as lossless JPEG and JPEG2000.
NASA Astrophysics Data System (ADS)
Burago, N. G.; Nikitin, I. S.; Yakushev, V. L.
2016-06-01
Techniques that improve the accuracy of numerical solutions and reduce their computational costs are discussed as applied to continuum mechanics problems with complex time-varying geometry. The approach combines shock-capturing computations with the following methods: (1) overlapping meshes for specifying complex geometry; (2) elastic arbitrarily moving adaptive meshes for minimizing the approximation errors near shock waves, boundary layers, contact discontinuities, and moving boundaries; (3) matrix-free implementation of efficient iterative and explicit-implicit finite element schemes; (4) balancing viscosity (version of the stabilized Petrov-Galerkin method); (5) exponential adjustment of physical viscosity coefficients; and (6) stepwise correction of solutions for providing their monotonicity and conservativeness.
An adaptive finite element method for convective heat transfer with variable fluid properties
NASA Astrophysics Data System (ADS)
Pelletier, Dominique; Ilinca, Florin; Hetu, Jean-Francois
1993-07-01
This paper presents an adaptive finite element method based on remeshing to solve incompressible viscous flow problems for which fluid properties present a strong temperature dependence. Solutions are obtained in primitive variables using a highly accurate finite element approximation on unstructured grids. Two general-purpose error estimators that take into account variations in fluid properties are presented. The methodology is applied to a problem of practical interest: the thermal convection of corn syrup in an enclosure with localized heating. Predictions are in good agreement with experimental measurements. The method leads to improved accuracy and reliability of finite element predictions.
An adaptive mesh method for phase-field simulation of alloy solidification in three dimensions
NASA Astrophysics Data System (ADS)
Bollada, P. C.; Jimack, P. K.; Mullis, A. M.
2015-06-01
We present our computational method for binary alloy solidification which takes advantage of high performance computing where up to 1024 cores are employed. Much of the simulation at a sufficiently fine resolution is possible on a modern 12-core PC; the 1024-core simulation is only necessary for very mature dendrites and for convergence testing, where high resolution puts extreme demands on memory. In outline, the method uses implicit time stepping in conjunction with an iterative solver, adaptive meshing and a scheme for dividing the work load across processors. We include three-dimensional results for a Lewis number of 100 and a snapshot of a mature dendrite for a Lewis number of 40.
Development of a Godunov method for Maxwell's equations with Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Barbas, Alfonso; Velarde, Pedro
2015-11-01
In this paper we present a second order 3D method for Maxwell's equations based on a Godunov scheme with Adaptive Mesh Refinement (AMR). In order to achieve it, we apply a limiter which better preserves extrema and boundary conditions, based on a characteristic fields decomposition. Although the method is more complex, simplifications in the boundary conditions make it competitive with FDTD in computer time consumption and accuracy. AMR allows us to simulate systems with a sharp step in material properties with negligible rebounds, and also large domains with accuracy in small wavelengths.
NASA Astrophysics Data System (ADS)
Abedini, Mohammad; Nojoumian, Mohammad Ali; Salarieh, Hassan; Meghdari, Ali
2015-08-01
In this paper, model reference control of a fractional order system is discussed. In order to control the fractional order plant, discrete-time approximation methods are applied. The plant and reference model are discretized by the Grünwald-Letnikov definition of the fractional order derivative using the "Short Memory Principle". Unknown parameters of the fractional order system appear in the discrete-time approximate model as combinations of the parameters of the original system. The discrete-time MRAC via RLS identification is modified to estimate the parameters and control the fractional order plant. Numerical results show the effectiveness of the proposed method of model reference adaptive control.
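The Grünwald-Letnikov discretization at the heart of this approach is easy to sketch: the fractional derivative is a weighted sum over past samples, with binomial weights generated by a simple recursion, and the Short Memory Principle simply truncates that sum to a finite window. A minimal numpy sketch (step size, test function, and window length are illustrative), checked against the known result D^0.5 t = 2*sqrt(t/pi):

```python
import math
import numpy as np

def gl_coeffs(alpha, m):
    """Weights (-1)^j * binom(alpha, j) for j = 0..m, via the standard recursion
    c_j = c_{j-1} * (1 - (alpha + 1)/j)."""
    c = np.empty(m + 1)
    c[0] = 1.0
    for j in range(1, m + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def gl_derivative(y, alpha, h, memory=None):
    """Grünwald-Letnikov fractional derivative of the sampled signal y (step h).
    `memory` truncates the history sum (Short Memory Principle); None keeps
    the full history."""
    n = len(y)
    L = n if memory is None else min(memory, n)
    c = gl_coeffs(alpha, L - 1)
    d = np.zeros(n)
    for k in range(n):
        m = min(k + 1, len(c))
        d[k] = np.dot(c[:m], y[k::-1][:m]) / h**alpha   # sum_j c_j * y[k-j]
    return d

# Check against a known closed form: the half-derivative of f(t) = t.
h = 1e-3
t = np.arange(0, 1, h)
approx = gl_derivative(t, 0.5, h)            # full memory for the check
exact = 2 * np.sqrt(t / math.pi)
err = np.abs(approx - exact)[len(t) // 2:].max()
print(f"max error on [0.5, 1): {err:.5f}")
```

In the adaptive-control setting, each such weighted sum is what makes the unknown plant parameters appear linearly in the discrete model, so standard RLS identification can be applied.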
An Adaptive INS-Aided PLL Tracking Method for GNSS Receivers in Harsh Environments
Cong, Li; Li, Xin; Jin, Tian; Yue, Song; Xue, Rui
2016-01-01
As the weak link in global navigation satellite system (GNSS) signal processing, the phase-locked loop (PLL) is easily affected by frequent cycle slips and loss of lock as a result of higher vehicle dynamics and lower signal-to-noise ratios. With inertial navigation system (INS) aid, PLL tracking performance can be improved. However, for harsh environments with high dynamics and signal attenuation, the traditional INS-aided PLL with fixed loop parameters has limited ability to adapt its tracking. In this paper, an adaptive INS-aided PLL capable of adjusting its noise bandwidth and coherent integration time is proposed. Through theoretical analysis, the relation between INS-aided PLL phase tracking error and carrier to noise density ratio (C/N0), vehicle dynamics, aiding information update time, noise bandwidth, and coherent integration time is established. The relation formulae are used to choose the optimal integration time and bandwidth for a given application under the minimum tracking error criterion. Software and hardware simulation results verify the correctness of the theoretical analysis, and demonstrate that the adaptive tracking method can effectively improve the PLL tracking ability and integrated GNSS/INS navigation performance. For harsh environments, the tracking sensitivity is increased by 3 to 5 dB, velocity errors are decreased by 36% to 50% and position errors are decreased by 6% to 24% when compared with other INS-aided PLL methods. PMID:26805853
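The "minimum tracking error criterion" selection can be sketched with the textbook third-order-loop error budget: thermal jitter grows with bandwidth, dynamic stress shrinks with it, and the optimum balances the two. This is not the paper's exact model; the residual-jerk value, grids, and C/N0 are illustrative, and the formulas are the standard ones from GNSS receiver texts.

```python
import numpy as np

def pll_error_deg(cn0_dbhz, bn, T, jerk=1.0):
    """1-sigma PLL error (deg) = thermal jitter + 1/3 of dynamic stress,
    using textbook third-order-loop formulas. `jerk` is the residual
    line-of-sight jerk (m/s^3) left after INS aiding (value illustrative)."""
    cn0 = 10 ** (cn0_dbhz / 10)                         # dB-Hz -> linear (Hz)
    thermal = (360 / (2 * np.pi)) * np.sqrt((bn / cn0) * (1 + 1 / (2 * T * cn0)))
    lam = 0.1903                                        # GPS L1 wavelength (m)
    dyn = 0.4828 * (jerk / lam) / bn**3 * 360           # steady-state stress (deg)
    return thermal + dyn / 3

# Scan candidate noise bandwidths and coherent integration times for a
# weak-signal case and keep the minimum-error pair.
grid_bn = np.arange(2.0, 30.1, 1.0)                     # Hz
grid_T = np.array([0.001, 0.002, 0.005, 0.01, 0.02])    # s (20 ms = bit limit)
cn0 = 25.0                                              # dB-Hz
best = min((pll_error_deg(cn0, bn, T), bn, T) for bn in grid_bn for T in grid_T)
print(f"C/N0 = {cn0} dB-Hz -> Bn = {best[1]:.0f} Hz, "
      f"T = {best[2]*1e3:.0f} ms, sigma = {best[0]:.1f} deg")
```

The scan picks the longest coherent integration allowed (thermal jitter always falls with T) and an interior bandwidth where the two error terms trade off, which is exactly the adaptation the abstract describes as C/N0 and dynamics change.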
NASA Astrophysics Data System (ADS)
Lee, W. H.; Kim, T.-S.; Cho, M. H.; Ahn, Y. B.; Lee, S. Y.
2006-12-01
In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems.
Encoding and simulation of daily rainfall records via adaptations of the fractal multifractal method
NASA Astrophysics Data System (ADS)
Maskey, M.; Puente, C. E.; Sivakumar, B.; Cortis, A.
2015-12-01
A deterministic geometric approach, the fractal-multifractal (FM) method, is adapted to encode and simulate daily rainfall records exhibiting noticeable intermittency. Using data sets gathered at Laikakota in Bolivia and Tinkham in Washington State, USA, it is demonstrated that the adapted FM approach can, within the limits of accuracy of the measured sets and using only a few geometric parameters, encode and simulate the erratic rainfall records reasonably well. The FM procedure not only preserves the statistical attributes of the records, such as the histogram, entropy function and distribution of zeroes, but also captures the overall texture inherent in the rather complex intermittent sets. As such, the FM deterministic representations may be used to supplement stochastic frameworks for data coding and simulation.
Pulse front adaptive optics: a new method for control of ultrashort laser pulses.
Sun, Bangshan; Salter, Patrick S; Booth, Martin J
2015-07-27
Ultrafast lasers enable a wide range of physics research, and the manipulation of short pulses is a critical part of the ultrafast tool kit. Current methods of laser pulse shaping are usually considered separately in either the spatial or the temporal domain, but laser pulses are complex entities existing in four dimensions, so full freedom of manipulation requires advanced forms of spatiotemporal control. We demonstrate that a combination of adaptable diffractive and reflective optical elements - a liquid crystal spatial light modulator (SLM) and a deformable mirror (DM) - enables decoupled spatial control over the pulse front (temporal group delay) and the phase front of an ultrashort pulse. Pulse front modulation was confirmed through autocorrelation measurements. This new adaptive optics technique, which for the first time enables in principle arbitrary shaping of the pulse front, promises a further level of control for ultrafast lasers. PMID:26367595
Adaptive particle refinement and derefinement applied to the smoothed particle hydrodynamics method
NASA Astrophysics Data System (ADS)
Barcarolo, D. A.; Le Touzé, D.; Oger, G.; de Vuyst, F.
2014-09-01
SPH simulations are usually performed with a uniform particle distribution. New techniques have recently been proposed to enable the use of spatially varying particle distributions, which encouraged the development of automatic adaptivity and particle refinement/derefinement algorithms. All these efforts resulted in very interesting and promising procedures leading to more efficient and faster SPH simulations. In this article, a family of particle refinement techniques is reviewed and a new derefinement technique is proposed and validated through several test cases involving both free-surface and viscous flows. In addition, this new procedure allows higher resolution in the regions requiring increased accuracy. Moreover, several levels of refinement can be used with this new technique, as is often done in adaptive mesh refinement for mesh-based methods.
New Adaptive Method for IQ Imbalance Compensation of Quadrature Modulators in Predistortion Systems
NASA Astrophysics Data System (ADS)
Zareian, Hassan; Vakili, Vahid Tabataba
2009-12-01
Imperfections in quadrature modulators (QMs), such as inphase and quadrature (IQ) imbalance, can severely impact the performance of power amplifier (PA) linearization systems, in particular adaptive digital predistorters (PDs). In this paper, we first analyze the effect of IQ imbalance on the performance of a memory orthogonal polynomials predistorter (MOP PD), and then we propose a new adaptive algorithm to estimate and compensate the unknown IQ imbalance in the QM. Unlike previous compensation techniques, the proposed method is capable of online IQ imbalance compensation with faster convergence, and no special calibration or training signals are needed. The effectiveness of the proposed IQ imbalance compensator was validated by simulations. The results clearly show that the performance of the MOP PD is enhanced significantly by adding the proposed IQ imbalance compensator.
The stochastic control of the F-8C aircraft using the Multiple Model Adaptive Control (MMAC) method
NASA Technical Reports Server (NTRS)
Athans, M.; Dunn, K. P.; Greene, E. S.; Lee, W. H.; Sandel, N. R., Jr.
1975-01-01
The purpose of this paper is to summarize results obtained for the adaptive control of the F-8C aircraft using the so-called Multiple Model Adaptive Control method. The discussion includes the selection of the performance criteria for both the lateral and the longitudinal dynamics, the design of the Kalman filters for different flight conditions, the 'identification' aspects of the design using hypothesis testing ideas, and the performance of the closed loop adaptive system.
Cen, Guanjun; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao
2015-01-01
In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks’ rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby’s growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods. PMID:26546689
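The variable-bandwidth smoothing described above can be illustrated with an Abramson-style adaptive kernel density estimate: a fixed-bandwidth pilot estimate sets per-point bandwidths, and instar divisions fall at the interior local minima of the smoothed density. This is a generic sketch, not the authors' bandwidth selector; the three simulated "instar" clusters and all measurement values are invented.

```python
import numpy as np

def adaptive_kde(x, grid, alpha=0.5):
    """Abramson-style adaptive KDE: a fixed-bandwidth pilot estimate sets
    per-point bandwidths h_i = h0 * (pilot(x_i)/g)^(-alpha)."""
    n = len(x)
    h0 = 1.06 * x.std() * n ** (-0.2)            # Silverman pilot bandwidth
    pilot = (np.exp(-0.5 * ((x[:, None] - x[None, :]) / h0) ** 2).mean(1)
             / (h0 * np.sqrt(2 * np.pi)))
    g = np.exp(np.log(pilot).mean())             # geometric mean normalizer
    h = h0 * (pilot / g) ** (-alpha)             # narrow where data are dense
    K = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h[None, :]) ** 2)
    return (K / (h * np.sqrt(2 * np.pi))).mean(1)

# Simulated head-capsule widths (mm) for three hypothetical instars.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.20, 0.015, 80),
                    rng.normal(0.32, 0.025, 80),
                    rng.normal(0.50, 0.040, 80)])
grid = np.linspace(0.1, 0.7, 600)
dens = adaptive_kde(x, grid)

# Instar boundaries = interior local minima of the smoothed density.
mins = grid[1:-1][(dens[1:-1] < dens[:-2]) & (dens[1:-1] < dens[2:])]
print("candidate instar divisions (mm):", np.round(mins, 3))
```

Because the bandwidth shrinks where measurements are dense, the valleys between overlapping instar modes stay visible instead of being smoothed away, which is the failure mode of fixed-bandwidth histograms the abstract points out.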
A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.
2015-06-24
This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
Locomotor adaptation to a powered ankle-foot orthosis depends on control method
Cain, Stephen M; Gordon, Keith E; Ferris, Daniel P
2007-01-01
Background We studied human locomotor adaptation to powered ankle-foot orthoses with the intent of identifying differences between two different orthosis control methods. The first orthosis control method used a footswitch to provide bang-bang control (a kinematic control) and the second orthosis control method used a proportional myoelectric signal from the soleus (a physiological control). Both controllers activated an artificial pneumatic muscle providing plantar flexion torque. Methods Subjects walked on a treadmill for two thirty-minute sessions spaced three days apart under either footswitch control (n = 6) or myoelectric control (n = 6). We recorded lower limb electromyography (EMG), joint kinematics, and orthosis kinetics. We compared stance phase EMG amplitudes, correlation of joint angle patterns, and mechanical work performed by the powered orthosis between the two controllers over time. Results During steady state at the end of the second session, subjects using proportional myoelectric control had much lower soleus and gastrocnemius activation than the subjects using footswitch control. The substantial decrease in triceps surae recruitment allowed the proportional myoelectric control subjects to walk with ankle kinematics close to normal and reduce negative work performed by the orthosis. The footswitch control subjects walked with substantially perturbed ankle kinematics and performed more negative work with the orthosis. Conclusion These results provide evidence that the choice of orthosis control method can greatly alter how humans adapt to powered orthosis assistance during walking. Specifically, proportional myoelectric control results in larger reductions in muscle activation and gait kinematics more similar to normal compared to footswitch control. PMID:18154649
NASA Astrophysics Data System (ADS)
Bu, Guochao; Wang, Pei
2016-04-01
Terrestrial laser scanning (TLS) has been used to extract accurate forest biophysical parameters for inventory purposes. The diameter at breast height (DBH) is a key parameter for individual trees because it has the potential for modeling the height, volume, biomass, and carbon sequestration potential of the tree based on empirical allometric scaling equations. In order to extract the DBH from single-scan TLS data automatically and accurately within a certain range, we propose an adaptive circle-ellipse fitting method based on the point cloud transect. This proposed method can correct the error caused by simple circle fitting when a tree is slanted. A slanted tree is detected by circle-ellipse fitting analysis, and the corresponding slant angle is found from the ellipse fitting result. With this information, the DBH of the tree can be recalculated by reslicing the point cloud data at breast height. Artificial stem data simulated by a cylindrical model of leaning trees and scanning data acquired with the RIEGL VZ-400 were used to test the proposed adaptive fitting method. The results show that the proposed method can detect leaning trees and accurately estimate their DBH.
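The circle-fitting half of the circle-ellipse analysis can be sketched with the algebraic (Kåsa) least-squares fit, applied to the partial arc a single scan actually sees. The stem position, radius, noise level, and half-visibility assumption below are all illustrative, and the slant-correction step of the paper is omitted.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kåsa) least-squares circle fit: writing the circle as
    x^2 + y^2 = 2ax + 2by + c gives a linear system in (a, b, c),
    with center (a, b) and radius R = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

# Synthetic breast-height slice of a stem from a single scan position:
# only roughly half the circumference is visible (all values illustrative).
rng = np.random.default_rng(3)
theta = rng.uniform(-np.pi / 2, np.pi / 2, 200)      # visible half only
R_true = 0.15                                        # true radius, m
x = 2.0 + R_true * np.cos(theta) + rng.normal(0, 0.003, 200)
y = 5.0 + R_true * np.sin(theta) + rng.normal(0, 0.003, 200)

a, b, R = fit_circle(x, y)
print(f"DBH estimate = {2 * R * 100:.1f} cm")        # ground truth is 30 cm
```

For a vertical stem this fit alone recovers the DBH; the paper's contribution is detecting, via a companion ellipse fit, when the slice is an oblique cut of a leaning stem so the slice can be re-taken perpendicular to the axis before this circle fit is trusted.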
Modeling flow through inline tube bundles using an adaptive immersed boundary method
NASA Astrophysics Data System (ADS)
Liang, Chunlei; Luo, Xiaoyu; Griffith, Boyce
2007-11-01
Fluid flow and the forces it exerts on tube bundle cylinders are important in designing mechanical/nuclear heat exchanger facilities. In this paper, we study the vortex structure of the flow around the tube bundle for different tube spacings. An adaptive, formally second-order accurate immersed boundary (IB) method is used to simulate the flow. One advantage of the IB method is its great flexibility and ease in positioning solid bodies in the fluid domain. Our IB approach uses a six-point regularized delta function and is a type of continuous forcing approach. Validation results obtained using the IB method for two in-tandem cylinders compare well with those obtained using finite volume or spectral element methods on unstructured grids. Subsequently, we simulated flow through six-row inline tube bundles with pitch-to-diameter ratios of 2.1, 3.2, and 4, respectively, on structured adaptively refined Cartesian grids. The IB method enables us to study the critical tube spacing at which the flow regime switches from the vortex reattachment pattern to alternating individual vortex shedding.
Adaptive Projection Subspace Dimension for the Thick-Restart Lanczos Method
Yamazaki, Ichitaro; Bai, Zhaojun; Simon, Horst; Wang, Lin-Wang; Wu, K.
2008-10-01
The Thick-Restart Lanczos (TRLan) method is an effective method for solving large-scale Hermitian eigenvalue problems. However, its performance strongly depends on the dimension of the projection subspace. In this paper, we propose an objective function to quantify the effectiveness of a chosen subspace dimension, and then introduce an adaptive scheme to dynamically adjust the dimension at each restart. An open-source software package, nu-TRLan, which implements the TRLan method with this adaptive projection subspace dimension is available in the public domain. The numerical results of synthetic eigenvalue problems are presented to demonstrate that nu-TRLan achieves speedups of between 0.9 and 5.1 over the static method using a default subspace dimension. To demonstrate the effectiveness of nu-TRLan in a real application, we apply it to the electronic structure calculations of quantum dots. We show that nu-TRLan can achieve speedups of greater than 1.69 over the state-of-the-art eigensolver for this application, which is based on the Conjugate Gradient method with a powerful preconditioner.
A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection
Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D; Burkardt, John V
2014-03-01
This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
Calvo, Juan Francisco; San José, Sol; Garrido, LLuís; Puertas, Enrique; Moragues, Sandra; Pozo, Miquel; Casals, Joan
2013-10-01
To introduce an approach for online adaptive replanning (i.e., dose-guided radiosurgery) in frameless stereotactic radiosurgery, when a 6-dimensional (6D) robotic couch is not available in the linear accelerator (linac). Cranial radiosurgical treatments are planned in our department using an intensity-modulated technique. Patients are immobilized using a thermoplastic mask. A cone-beam computed tomography (CBCT) scan is acquired after the initial laser-based patient setup (CBCT_setup). The online adaptive replanning procedure we propose consists of a 6D registration-based mapping of the reference plan onto the actual CBCT_setup, followed by a reoptimization of the beam fluences ("6D plan") to achieve a dosage similar to what was originally intended, while the patient is lying on the linac couch and the original beam arrangement is kept. The goodness of the proposed online adaptive method was retrospectively analyzed for 16 patients with 35 targets treated with the CBCT-based frameless intensity-modulated technique. A simulation of the reference plan onto the actual CBCT_setup, according to the 4 degrees of freedom supported by the linac couch, was also generated for each case (4D plan). Target coverage (D99%) and conformity index values of the 6D and 4D plans were compared with the corresponding values of the reference plans. Although the 4D-based approach does not always assure target coverage (D99% between 72% and 103%), the proposed online adaptive method gave perfect coverage in all cases analyzed as well as a conformity index value similar to what was planned. The dose-guided radiosurgery approach is effective in assuring the dose coverage and conformity of an intracranial target volume, avoiding resetting the patient inside the mask in a "trial and error" way to remove the pitch and roll errors when a robotic table is not available.
The method of constant stimuli is inefficient
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Fitzhugh, Andrew
1990-01-01
Simpson (1988) has argued that the method of constant stimuli is as efficient as adaptive methods of threshold estimation and has supported this claim with simulations. It is shown that Simpson's simulations are not a reasonable model of the experimental process and that more plausible simulations confirm that adaptive methods are much more efficient than the method of constant stimuli.
NASA Astrophysics Data System (ADS)
Han, Dongmei; Xu, Xinyi; Yan, Denghua
2016-04-01
In recent years, global climate change has caused a serious water resources crisis throughout the world. Mainly through variations in temperature, climate change will affect crop water requirements. Rising temperature directly affects the growing and phenological periods of crops, and thereby changes crop water demand quotas. Methods including the accumulated temperature threshold and the climatic tendency rate were adopted, which made up for the weakness of phenological observations, to reveal the response of crop phenology during the growing period. Then, using the Penman-Monteith model and crop coefficients from the United Nations Food and Agriculture Organization (FAO), the paper first explored crop water requirements in different growth periods, and further forecast crop water requirements quantitatively in the Heihe River Basin, China under different climate change scenarios. Results indicate that: (i) the results of crop phenological change established with the accumulated temperature threshold method were in agreement with measured results; (ii) the impacts of climate warming differed among crops, with the growth periods of wheat and corn tending to shorten; and (iii) under a temperature increase of 1°C, the start of the wheat growing period moved 2 days earlier and the total growing period shortened by 2 days, while wheat water requirements increased by 1.4 mm; corn water requirements decreased by almost 0.9 mm under the same 1°C increase, with the start of the corn growing period moving 3 days earlier and the total growing period shortening by 4 days. Therefore, the conflict between water supply and water demand will become more pronounced under future climate warming in the Heihe River Basin, China.
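The accumulated-temperature-threshold idea can be sketched as a growing-degree-day calculation: a phenological stage is reached on the first day the accumulated temperature above a base value crosses a threshold, and a warming scenario shifts that date earlier. The temperature curve, base temperature, and degree-day target below are idealized placeholders, not Heihe River Basin data.

```python
import numpy as np

def stage_day(tmean, base=10.0, gdd_target=800.0):
    """First day of year on which accumulated degree-days above `base`
    reach `gdd_target` (accumulated temperature threshold method)."""
    gdd = np.cumsum(np.maximum(tmean - base, 0.0))
    return int(np.searchsorted(gdd, gdd_target)) + 1   # 1-based day of year

# Idealized sinusoidal annual mean-temperature curve (degC), day 1..365.
doy = np.arange(1, 366)
tmean = 12.0 + 14.0 * np.sin(2 * np.pi * (doy - 105) / 365)

d0 = stage_day(tmean)
d1 = stage_day(tmean + 1.0)            # uniform +1 degC warming scenario
print(f"stage reached on day {d0}; with +1 degC warming: day {d1} "
      f"({d0 - d1} days earlier)")
```

Warming advances the stage date both by starting accumulation earlier and by accumulating faster each day, which is the mechanism behind the earlier onset and shortened growth periods reported in the abstract.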
Self-adaptive method for high frequency multi-channel analysis of surface wave method
Technology Transfer Automated Retrieval System (TEKTRAN)
When the high frequency multi-channel analysis of surface waves (MASW) method is conducted to explore soil properties in the vadose zone, existing rules for selecting the near offset and spread lengths cannot satisfy the requirements of planar dominant Rayleigh waves for all frequencies of interest ...
An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1994-01-01
This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. This work first and primarily focuses on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and then briefly explores some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.
Adaptive finite volume methods for time-dependent P.D.E.S.
Ware, J.; Berzins, M.
1995-12-31
The aim of adaptive methods for time-dependent p.d.e.s is to control the numerical error so that it is less than a user-specified tolerance. This error depends on the spatial discretization method, the spatial mesh, the method of time integration and the timestep. The spatial discretization method and positioning of the spatial mesh points should attempt to ensure that the spatial error is controlled to meet the user's requirements. It is then desirable to integrate the o.d.e. system in time with sufficient accuracy so that the temporal error does not corrupt the spatial accuracy or the reliability of the spatial error estimates. This paper is concerned with the development of a prototype algorithm of this type, based on a cell-centered triangular finite volume scheme, for two space dimensional convection-dominated problems.
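The tolerance-driven error control described above can be illustrated with a minimal step-doubling timestep controller for an o.d.e. This is a generic sketch of accepting/rejecting steps against a user tolerance, not the paper's finite volume algorithm; the safety factor 0.9 and the growth/shrink clamps are illustrative assumptions.

```python
import math

def adaptive_euler(f, y0, t0, t1, tol):
    """Integrate y' = f(t, y) with forward Euler and step-doubling error
    control: a step is accepted only when the estimated local error
    (full step vs. two half steps) is below the user tolerance."""
    t, y, h = t0, y0, (t1 - t0) / 100.0
    while t < t1:
        h = min(h, t1 - t)
        full = y + h * f(t, y)                       # one step of size h
        half = y + (h / 2) * f(t, y)                 # two steps of size h/2
        half = half + (h / 2) * f(t + h / 2, half)
        err = abs(full - half)                       # local error estimate
        if err <= tol:
            t, y = t + h, half                       # accept the finer result
        # steer h toward the target error; clamp growth and shrinkage
        h *= min(2.0, max(0.1, 0.9 * tol / max(err, 1e-14)))
    return y

# y' = -y, y(0) = 1, so y(1) should approach exp(-1)
approx = adaptive_euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1e-6)
```

The same accept/reject structure carries over to p.d.e. time integration once the spatial error estimate is folded into the tolerance budget.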
A Cartesian Adaptive Level Set Method for Two-Phase Flows
NASA Technical Reports Server (NTRS)
Ham, F.; Young, Y.-N.
2003-01-01
In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases including the 3D drop breakup in an impulsively accelerated free stream, and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.
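As a minimal illustration of the level set representation used above, the sketch below carries a circular interface implicitly as the zero contour of a signed distance function on a uniform Cartesian grid and recovers the enclosed area from the sign of phi. The grid size and geometry are invented for illustration; the anisotropic adaptation of Ham et al. is not reproduced.

```python
import math

def circle_level_set(n, cx, cy, r):
    """Signed-distance level set for a circle on an n-by-n unit-square grid:
    phi < 0 inside the interface, phi = 0 on it, phi > 0 outside."""
    h = 1.0 / n
    return [[math.hypot((i + 0.5) * h - cx, (j + 0.5) * h - cy) - r
             for j in range(n)] for i in range(n)]

def enclosed_area(phi, n):
    """Zeroth-order area estimate: total area of cells where phi < 0."""
    h2 = (1.0 / n) ** 2
    return sum(h2 for row in phi for v in row if v < 0)

phi = circle_level_set(200, 0.5, 0.5, 0.25)
area = enclosed_area(phi, 200)   # should be close to pi * 0.25**2
```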
Patched based methods for adaptive mesh refinement solutions of partial differential equations
Saltzman, J.
1997-09-02
This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs, with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundaries and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods and elliptically constrained flow solution methods with diffusion.
Validation of an Adaptive Combustion Instability Control Method for Gas-Turbine Engines
NASA Technical Reports Server (NTRS)
Kopasakis, George; DeLaat, John C.; Chang, Clarence T.
2004-01-01
This paper describes ongoing testing of an adaptive control method to suppress high frequency thermo-acoustic instabilities like those found in lean-burning, low emission combustors being developed for future aircraft gas turbine engines. The method, called Adaptive Sliding Phasor Averaged Control, was previously tested in an experimental rig designed to simulate a combustor with an instability of about 530 Hz. Results published earlier, and briefly presented here, demonstrated that this method was effective in suppressing the instability. Because this test rig did not exhibit a well-pronounced instability, a question remained regarding the effectiveness of the control methodology when applied to a more coherent instability. To answer this question, a modified combustor rig was assembled at the NASA Glenn Research Center in Cleveland, Ohio. The modified rig exhibited a more coherent, higher-amplitude instability, but at a lower frequency of about 315 Hz. Test results show that this control method successfully reduced the instability pressure of the lower frequency test rig. In addition, owing to a phenomenon discovered and reported earlier, so-called intra-harmonic coupling, a dramatic suppression of the instability was achieved by focusing control on the second harmonic of the instability. These results and their implications are discussed, as well as a hypothesis describing the mechanism of intra-harmonic coupling.
Weber, R; Bryan, R T; Bishop, H S; Wahlquist, S P; Sullivan, J J; Juranek, D D
1991-01-01
To determine the minimum number of Cryptosporidium oocysts that can be detected in stool specimens by diagnostic procedures, stool samples seeded with known numbers of Cryptosporidium parvum oocysts were processed by the modified Formalin-ethyl acetate (FEA) stool concentration method. FEA concentrates were subsequently examined by both the modified cold Kinyoun acid-fast (AF) staining and fluorescein-tagged monoclonal antibody (immunofluorescence [IF]) techniques. Oocysts were more easily detected in watery diarrheal stool specimens than they were in formed stool specimens. For watery stool specimens, a 100% detection rate was accomplished at a concentration of 10,000 oocysts per g of stool by both the AF staining and IF techniques. In formed stool specimens, 100% of specimens seeded with 50,000 oocysts per gram of stool were detected by the IF technique, whereas 500,000 oocysts per g of stool were needed for a 100% detection rate by AF staining. Counting of all oocysts on IF slides indicated a mean oocyst loss ranging from 51.2 to 99.6%, depending on the stool consistency as determined by the FEA concentration procedure. Our findings suggest that the most commonly used coprodiagnostic techniques may fail to detect cryptosporidiosis in many immunocompromised and immunocompetent individuals. PMID:1715881
Threshold quantum cryptography
Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki
2005-01-01
We present the concept of threshold collaborative unitary transformation, or threshold quantum cryptography, a quantum version of threshold cryptography. In threshold quantum cryptography, classical shared secrets are distributed to several parties, and a subset of them, whose number is greater than a threshold, collaborates to compute a quantum cryptographic function while keeping each share secret inside each party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed (threshold) protocol of conjugate coding.
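For readers unfamiliar with the classical primitive being "quantized" here, a minimal (t, n) Shamir secret-sharing sketch shows the threshold property: any t shares reconstruct the secret, fewer reveal nothing. This is standard classical threshold cryptography, not the quantum protocol of the paper; the prime modulus is an arbitrary choice.

```python
import random

P = 2**61 - 1  # a Mersenne prime; all share arithmetic is mod P

def make_shares(secret, t, n):
    """Split `secret` into n shares such that any t of them suffice:
    evaluate a random degree-(t-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
recovered = reconstruct(shares[:3])   # any 3 of the 5 shares suffice
```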
Adaptive method for quantifying uncertainty in discharge measurements using velocity-area method.
NASA Astrophysics Data System (ADS)
Despax, Aurélien; Favre, Anne-Catherine; Belleville, Arnaud
2015-04-01
Streamflow information provided by hydrometric services such as EDF-DTG allows real-time monitoring of rivers, streamflow forecasting, hydrological studies and engineering design. In open channels, the traditional approach to measuring flow uses a rating curve, an indirect method to estimate the discharge in rivers based on water level and punctual discharge measurements. A large proportion of these discharge measurements are performed using the velocity-area method; it consists in integrating flow velocities and depths through the cross-section [1]. The velocity field is estimated by choosing a number m of verticals, distributed across the river, where the vertical velocity profile is sampled by a current-meter at ni different depths. Uncertainties coming from several sources are related to the measurement process. To date, the framework for assessing uncertainty in velocity-area discharge measurements is the method presented in the ISO 748 standard [2], which follows the GUM [3] approach. The equation for the combined uncertainty in measured discharge u(Q), at the 68% level of confidence, proposed by the ISO 748 standard is expressed as: u²(Q) = u_m² + u_s² + [Σ_i q_i² (u²(B_i) + u²(D_i) + u_p²(V_i) + (1/n_i) × (u_c²(V_i) + u_exp²(V_i)))] / (Σ_i q_i)²
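The mid-section discharge computation and the ISO 748 combined-uncertainty form can be sketched as follows. The component values in the usage example are invented for illustration; they are not EDF-DTG figures.

```python
import math

def midsection_discharge(b, d, v):
    """Mid-section velocity-area discharge: Q = sum over verticals of
    b_i * d_i * v_i (panel width, depth, and mean velocity)."""
    return sum(bi * di * vi for bi, di, vi in zip(b, d, v))

def iso748_uncertainty(q, u_m, u_s, u_B, u_D, u_p, u_c, u_exp, n):
    """Relative combined uncertainty u(Q) in the ISO 748 form: global
    terms u_m (number of verticals) and u_s (systematic), plus
    per-vertical width/depth/velocity terms weighted by q_i^2."""
    Q = sum(q)
    per_vertical = sum(
        qi**2 * (u_B[i]**2 + u_D[i]**2 + u_p[i]**2
                 + (u_c[i]**2 + u_exp[i]**2) / n[i])
        for i, qi in enumerate(q))
    return math.sqrt(u_m**2 + u_s**2 + per_vertical / Q**2)

# three verticals with illustrative widths (m), depths (m), velocities (m/s)
Q = midsection_discharge([1.0, 2.0, 1.0], [1.0, 2.0, 1.0], [0.5, 1.0, 0.5])
u = iso748_uncertainty([1, 1, 1], 0.01, 0.005, [0.005] * 3, [0.005] * 3,
                       [0.01] * 3, [0.01] * 3, [0.01] * 3, [2] * 3)
```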
Radiation hydrodynamics including irradiation and adaptive mesh refinement with AZEuS. I. Methods
NASA Astrophysics Data System (ADS)
Ramsey, J. P.; Dullemond, C. P.
2015-02-01
Aims: The importance of radiation to the physical structure of protoplanetary disks cannot be overstated. However, protoplanetary disks evolve with time, and so to understand disk evolution and, by association, disk structure, one should solve the combined and time-dependent equations of radiation hydrodynamics. Methods: We implement a new implicit radiation solver in the AZEuS adaptive mesh refinement magnetohydrodynamics fluid code. Based on a hybrid approach that combines frequency-dependent ray-tracing for stellar irradiation with non-equilibrium flux limited diffusion, we solve the equations of radiation hydrodynamics while preserving the directionality of the stellar irradiation. The implementation permits simulations in Cartesian, cylindrical, and spherical coordinates, on both uniform and adaptive grids. Results: We present several hydrostatic and hydrodynamic radiation tests which validate our implementation on uniform and adaptive grids as appropriate, including benchmarks specifically designed for protoplanetary disks. Our results demonstrate that the combination of a hybrid radiation algorithm with AZEuS is an effective tool for radiation hydrodynamics studies, and produces results which are competitive with other astrophysical radiation hydrodynamics codes.
Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J
2015-09-01
Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples. PMID:25940062
Comparative adaptation accuracy of acrylic denture bases evaluated by two different methods.
Lee, Chung-Jae; Bok, Sung-Bem; Bae, Ji-Young; Lee, Hae-Hyoung
2010-08-01
This study examined the adaptation accuracy of acrylic denture bases processed using the fluid-resin (PERform), injection-molding (SR-Ivocap, Success, Mak Press), and two compression-molding techniques. The adaptation accuracy was measured primarily by the posterior border gaps at the mid-palatal area, using a microscope, and subsequently by weighing the impression material placed between the denture base and the master cast, using hand-mixed and automixed silicone. The correlation between the data measured by these two test methods was examined. The PERform and Mak Press produced significantly smaller maximum palatal gap dimensions than the other groups (p<0.05). Mak Press also showed a significantly smaller weight of automixed silicone material than the other groups (p<0.05), while SR-Ivocap and Success showed adaptation accuracy similar to the compression-molding dentures. The correlation between the magnitude of the posterior border gap and the weight of the silicone impression material was affected by either the material or the mixing variables. PMID:20675954
Zhou, Hui; Kunz, Thomas; Schwartz, Howard
2011-01-01
Traditional oscillators used in timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit less accurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance the oscillators so that they meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop that creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm improves the oscillator performance significantly compared with uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically, and comparisons between the analytical and simulated upper bounds are provided. The results show that the analytical upper bound can serve as a practical guide for system designers. PMID:21244973
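A recursive prediction error scheme of the simplest kind, recursive least squares with a forgetting factor, can be sketched as below. The two-parameter drift model in the example is an assumption for illustration, not the paper's oscillator stability model.

```python
def rls_identify(samples, dim, lam=0.99):
    """Recursive least squares: update the parameter estimate theta from
    (regressor, output) pairs; lam is the forgetting factor."""
    theta = [0.0] * dim
    # P starts as a large multiple of the identity (uncertain prior)
    P = [[1e6 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for phi, y in samples:
        Pphi = [sum(P[i][j] * phi[j] for j in range(dim)) for i in range(dim)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(dim))
        k = [v / denom for v in Pphi]                       # gain vector
        err = y - sum(theta[i] * phi[i] for i in range(dim))
        theta = [theta[i] + k[i] * err for i in range(dim)]
        P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(dim)]
             for i in range(dim)]
    return theta

# identify an illustrative drift model y = 2.0 + 0.5 * t from samples
data = [([1.0, t / 10.0], 2.0 + 0.5 * (t / 10.0)) for t in range(50)]
theta = rls_identify(data, dim=2)   # should approach [2.0, 0.5]
```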
Adaptive control system having hedge unit and related apparatus and methods
NASA Technical Reports Server (NTRS)
Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)
2003-01-01
The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.
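The hedge unit's role can be caricatured in a few lines: form the difference between the modeled effect of the commanded control and the modeled effect of the control actually achieved (here, after actuator saturation, one example of a characteristic the system should not adapt to), and use that difference to shield the adaptation law. The linear effectiveness model and the limits below are illustrative assumptions, not the patented design.

```python
def saturate(u, lo, hi):
    """Actuator position limit: clamp the commanded deflection."""
    return max(lo, min(hi, u))

def hedge_signal(delta_cmd, delta_act, control_effect):
    """Hedge = modeled effect of the commanded control minus modeled
    effect of the achieved control; subtracting it from the reference
    model hides the actuator characteristic from the adaptation law."""
    return control_effect(delta_cmd) - control_effect(delta_act)

# illustrative linear control-effectiveness model (an assumption)
effect = lambda d: 2.0 * d
cmd = 1.5
act = saturate(cmd, -1.0, 1.0)        # the characteristic not to adapt to
h = hedge_signal(cmd, act, effect)    # 2*1.5 - 2*1.0
```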
Adaptive control system having hedge unit and related apparatus and methods
NASA Technical Reports Server (NTRS)
Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)
2007-01-01
The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.
Threshold magnitudes for a multichannel correlation detector in background seismicity
Carmichael, Joshua D.; Hartse, Hans
2016-04-01
Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at the thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes mb = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the February 12, 2013 announced nuclear test.
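A single-channel caricature of the waveform correlation detector: slide a template over the target data, compute the normalized correlation coefficient at each lag, and trigger where it crosses a threshold. The template, data, and the 0.9 threshold below are toy values; the paper's multichannel stacking, noise adaptation, and magnitude estimation are not reproduced.

```python
import math

def normalized_xcorr(template, data):
    """Normalized correlation coefficient of the template against every
    window of the data (values in [-1, 1])."""
    m = len(template)
    tm = sum(template) / m
    t0 = [v - tm for v in template]
    tnorm = math.sqrt(sum(v * v for v in t0))
    out = []
    for lag in range(len(data) - m + 1):
        win = data[lag:lag + m]
        wm = sum(win) / m
        w0 = [v - wm for v in win]
        wnorm = math.sqrt(sum(v * v for v in w0))
        num = sum(a * b for a, b in zip(t0, w0))
        out.append(num / (tnorm * wnorm) if tnorm * wnorm > 0 else 0.0)
    return out

def detect(template, data, threshold=0.9):
    """Lags at which the correlation coefficient exceeds the threshold."""
    return [lag for lag, c in enumerate(normalized_xcorr(template, data))
            if c >= threshold]

# a scaled copy of the template is buried at lag 3 of the toy record
hits = detect([0.0, 1.0, 0.0, -1.0],
              [0.1, 0.2, -0.1, 0.0, 2.0, 0.0, -2.0, 0.1, 0.3, -0.2])
```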
Data-adapted moving least squares method for 3-D image interpolation
NASA Astrophysics Data System (ADS)
Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho
2013-12-01
In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.
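The core of a moving least squares step, a weighted local polynomial fit at each evaluation point whose value there is the interpolant, reduces in one dimension to a few lines. This sketch uses a degree-1 basis and fixed Gaussian weights; the paper's data-adapted weighting and 3-D setting are not reproduced.

```python
import math

def mls_eval(x, xs, ys, h=1.0):
    """Moving least squares at point x: weighted least-squares fit of a
    degree-1 polynomial in the shifted basis [1, xi - x]; the value of
    the local fit at x is the constant term of that fit."""
    s0 = s1 = s2 = b0 = b1 = 0.0
    for xi, yi in zip(xs, ys):
        d = xi - x
        w = math.exp(-(d / h) ** 2)   # Gaussian weight of scale h
        s0 += w; s1 += w * d; s2 += w * d * d
        b0 += w * yi; b1 += w * d * yi
    det = s0 * s2 - s1 * s1           # 2x2 normal-equations determinant
    return (s2 * b0 - s1 * b1) / det

# MLS with a degree-1 basis reproduces linear data exactly
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2 * x + 1 for x in xs]
value = mls_eval(0.75, xs, ys)        # should equal 2*0.75 + 1 = 2.5
```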
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord; Cornett, Frank N.
2006-04-18
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. Each of the plurality of delayed signals is compared to a reference signal to detect changes in the skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in the detected skew.
Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map
Frankie Li, Shiu Fai
2014-06-01
IceNine is an MPI-parallel orientation reconstruction and microstructure analysis code. Its primary purpose is to reconstruct a spatially resolved orientation map given a set of diffraction images from a high energy x-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library used for combined analysis of the microstructure with the experimentally measured diffraction signal. The library is also designed for rapid prototyping of new reconstruction and analysis algorithms. IceNine also includes a simulator of diffraction images given an input microstructure.
Wavefront detection method of a single-sensor based adaptive optics system.
Wang, Chongchong; Hu, Lifa; Xu, Huanyu; Wang, Yukun; Li, Dayu; Wang, Shaoxin; Mu, Quanquan; Yang, Chengliang; Cao, Zhaoliang; Lu, Xinghai; Xuan, Li
2015-08-10
In an adaptive optics system (AOS) for optical telescopes, the previously reported wavefront sensing strategy consists of two parts: a dedicated sensor for tip-tilt (TT) detection and another wavefront sensor for detecting the other distortions. Thus, part of the incident light has to be used for TT detection, which decreases the light energy available to the wavefront sensor and eventually reduces the precision of wavefront correction. In this paper, a wavefront measurement method based on a single Shack-Hartmann wavefront sensor is presented for measuring both large-amplitude TT and the other distortions. Experiments were performed to test the presented method and to validate the wavefront detection and correction ability of the single-sensor based AOS. With adaptive correction, the root-mean-square of the residual TT was less than 0.2 λ, and a clear image was obtained in the lab. Equipped on a 1.23-meter optical telescope, the AOS clearly resolved binary stars with an angular separation of 0.6″. This wavefront measurement method removes the separate TT sensor, which not only simplifies the AOS but also saves light energy for subsequent wavefront sensing and imaging, and eventually improves the detection and imaging capability of the AOS. PMID:26367988
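The measurement principle can be sketched as: centroid each Shack-Hartmann subaperture spot, then read large-amplitude tip/tilt off the mean spot displacement, leaving the residual per-subaperture displacements to carry the higher-order slopes. The arrays below are toy inputs, not the authors' sensor geometry.

```python
def spot_centroid(img):
    """Intensity-weighted centroid (cx, cy) of one subaperture image,
    given as a list of rows of pixel intensities."""
    total = sum(sum(row) for row in img)
    cy = sum(i * sum(row) for i, row in enumerate(img)) / total
    cx = sum(j * v for row in img for j, v in enumerate(row)) / total
    return cx, cy

def tip_tilt(centroids, refs):
    """Tip/tilt estimated as the mean spot displacement over all
    subapertures relative to their reference positions."""
    n = len(centroids)
    tx = sum(c[0] - r[0] for c, r in zip(centroids, refs)) / n
    ty = sum(c[1] - r[1] for c, r in zip(centroids, refs)) / n
    return tx, ty

# a single bright pixel at the centre of a 3x3 subaperture
c = spot_centroid([[0, 0, 0], [0, 1, 0], [0, 0, 0]])   # (1.0, 1.0)
# two subapertures, both spots shifted by (+1.0, +0.5): pure tip/tilt
tt = tip_tilt([(2.0, 1.5), (3.0, 2.5)], [(1.0, 1.0), (2.0, 2.0)])
```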
Gong, Yushun; Yu, Tao; Chen, Bihua; He, Mi; Li, Yongqin
2014-01-01
Current automated external defibrillators mandate interruptions of chest compression to avoid the effect of artifacts produced by CPR for reliable rhythm analyses. But even seconds of interruption of chest compression during CPR adversely affect the rate of restoration of spontaneous circulation and survival. Numerous digital signal processing techniques have been developed to remove the artifacts or interpret the corrupted ECG, with promising results, but the performance is still inadequate, especially for nonshockable rhythms. In the present study, we suppressed the CPR artifacts with an enhanced adaptive filtering method. The performance of the method was evaluated by comparing the sensitivity and specificity for shockable rhythm detection before and after filtering the CPR corrupted ECG signals. The dataset comprised 283 segments of shockable and 280 segments of nonshockable ECG signals during CPR recorded from 22 adult pigs that experienced prolonged cardiac arrest. For the unfiltered signals, the sensitivity and specificity were 99.3% and 46.8%, respectively. After filtering, a sensitivity of 93.3% and a specificity of 96.0% were achieved. This animal trial demonstrated that the enhanced adaptive filtering method could significantly improve the detection of nonshockable rhythms without compromising the ability to detect a shockable rhythm during uninterrupted CPR. PMID:24795878
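The adaptive-filtering idea can be illustrated with a plain LMS noise canceller: an FIR filter on an artifact-correlated reference channel is adapted so its output tracks the artifact, and the error signal is the cleaned estimate. The filter length, step size, and synthetic signals below are illustrative assumptions, not the paper's enhanced method or animal data.

```python
import math

def lms_cancel(corrupted, reference, taps=4, mu=0.05):
    """LMS adaptive noise canceller: adapt FIR weights on the reference
    channel so that the filter output matches the artifact component of
    `corrupted`; the per-sample error is the cleaned signal."""
    w = [0.0] * taps
    cleaned = []
    for i in range(len(corrupted)):
        x = [reference[i - k] if i - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # artifact estimate
        e = corrupted[i] - y                       # cleaned sample
        w = [wk + 2 * mu * e * xk for wk, xk in zip(w, x)]
        cleaned.append(e)
    return cleaned

# synthetic check: the "corrupted" channel is pure scaled artifact, so a
# converged canceller should drive the output toward zero
ref = [math.sin(0.3 * i) for i in range(500)]
corrupted = [0.8 * r for r in ref]
cleaned = lms_cancel(corrupted, ref)
```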
NASA Technical Reports Server (NTRS)
Kim, Hyoungin; Liou, Meng-Sing
2011-01-01
In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme, depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results than those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which is similar in form to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.
NASA Astrophysics Data System (ADS)
Danaila, Ionut; Moglan, Raluca; Hecht, Frédéric; Le Masson, Stéphane
2014-10-01
We present a new numerical system using finite elements with mesh adaptivity for the simulation of solid-liquid phase change systems. In the liquid phase, the natural convection flow is simulated by solving the incompressible Navier-Stokes equations with the Boussinesq approximation. A variable viscosity model allows the velocity to progressively vanish in the solid phase, through an intermediate mushy region. The phase change is modeled by introducing an implicit enthalpy source term in the heat equation. The final system of equations describing the liquid-solid system by a single domain approach is solved using a Newton iterative algorithm. The space discretization is based on P2-P1 Taylor-Hood finite elements, and mesh adaptivity by metric control is used to accurately track the solid-liquid interface or the density inversion interface for water flows. The numerical method is validated against classical benchmarks that progressively add strong non-linearities in the system of equations: natural convection of air, natural convection of water, melting of a phase-change material and water freezing. Very good agreement with experimental data is obtained for each test case, proving the capability of the method to deal with both melting and solidification problems with convection. The presented numerical method is easy to implement in the FreeFem++ software, using a syntax close to the mathematical formulation.
FALCON: A method for flexible adaptation of local coordinates of nuclei.
König, Carolin; Hansen, Mads Bøttger; Godtliebsen, Ian H; Christiansen, Ove
2016-02-21
We present a flexible scheme for calculating vibrational rectilinear coordinates with well-defined strict locality on a certain set of atoms. Introducing a method for Flexible Adaptation of Local COordinates of Nuclei (FALCON), we show how vibrational subspaces can be "grown" in an adaptive manner. Subspace Hessian matrices are set up and used to calculate and analyze vibrational modes and frequencies. FALCON coordinates can more generally be used to construct vibrational coordinates for describing local and semi-local interacting modes with desired features. For instance, spatially local vibrations can be approximately described as internal motion within only a group of atoms, and delocalized modes can be approximately expressed as relative motions of rigid groups of atoms. The FALCON method can support efficiency in the calculation and analysis of vibrational coordinates and energies in the context of harmonic and anharmonic calculations. The features of this method are demonstrated on a few small molecules, i.e., formylglycine, coumarin, and dimethyl ether, as well as on the amide-I band and low-frequency modes of alanine oligomers and alpha-conotoxin. PMID:26896977
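The subspace-Hessian idea, restricting the Hessian to the coordinates of a chosen atom group and diagonalizing the sub-block to obtain local modes, can be sketched for a two-coordinate group, where the 2x2 eigenproblem has a closed form and keeps the sketch dependency-free. The model Hessian below is invented; mass-weighting and FALCON's adaptive growing scheme are omitted.

```python
import math

def subspace_modes(hessian, atoms):
    """Project a Hessian onto a chosen pair of coordinates and return
    the two eigenvalues of the sub-block (sorted ascending); for a
    mass-weighted Hessian, frequencies follow as omega = sqrt(lambda)."""
    sub = [[hessian[i][j] for j in atoms] for i in atoms]
    a, b, c = sub[0][0], sub[0][1], sub[1][1]
    disc = math.sqrt(((a - c) / 2) ** 2 + b * b)
    return [(a + c) / 2 - disc, (a + c) / 2 + disc]

# block-diagonal model Hessian: coordinates 0 and 1 couple; 2 and 3 do not
H = [[2.0, 1.0, 0.0, 0.0],
     [1.0, 2.0, 0.0, 0.0],
     [0.0, 0.0, 3.0, 0.0],
     [0.0, 0.0, 0.0, 3.0]]
modes = subspace_modes(H, [0, 1])   # eigenvalues of the coupled block
```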
Adaptive explicit and implicit finite element methods for transient thermal analysis
NASA Technical Reports Server (NTRS)
Probert, E. J.; Hassan, O.; Morgan, K.; Peraire, J.
1992-01-01
The application of adaptive finite element methods to the solution of transient heat conduction problems in two dimensions is investigated. The computational domain is represented by an unstructured assembly of linear triangular elements and the mesh adaptation is achieved by local regeneration of the grid, using an error estimation procedure coupled to an automatic triangular mesh generator. Two alternative solution procedures are considered. In the first procedure, the solution is advanced by explicit timestepping, with domain decomposition being used to improve the computational efficiency of the method. In the second procedure, an algorithm for constructing continuous lines which pass only once through each node of the mesh is employed. The lines are used as the basis of a fully implicit method, in which the equation system is solved by line relaxation using a block tridiagonal equation solver. The numerical performance of the two procedures is compared for the analysis of a problem involving a moving heat source applied to a convectively cooled cylindrical leading edge.
Wagner, Roland; Helin, Tapio; Obereder, Andreas; Ramlau, Ronny
2016-02-20
The imaging quality of modern ground-based telescopes such as the planned European Extremely Large Telescope is affected by atmospheric turbulence. In consequence, they heavily depend on stable and high-performance adaptive optics (AO) systems. Using measurements of incoming light from guide stars, an AO system compensates for the effects of turbulence by adjusting so-called deformable mirror(s) (DMs) in real time. In this paper, we introduce a novel reconstruction method for ground layer adaptive optics. In the literature, a common approach to this problem is to use Bayesian inference in order to model the specific noise structure appearing due to spot elongation. This approach leads to large coupled systems with high computational effort. Recently, fast solvers of linear order, i.e., with computational complexity O(n), where n is the number of DM actuators, have emerged. However, the quality of such methods typically degrades in low flux conditions. Our key contribution is to achieve the high quality of the standard Bayesian approach while at the same time maintaining the linear order speed of the recent solvers. Our method is based on performing a separate preprocessing step before applying the cumulative reconstructor (CuReD). The efficiency and performance of the new reconstructor are demonstrated using the OCTOPUS, the official end-to-end simulation environment of the ESO for extremely large telescopes. For more specific simulations we also use the MOST toolbox. PMID:26906596
NASA Astrophysics Data System (ADS)
Lei, Yu; Lei, Jianming; Guo, Junhui; Zou, Xuecheng; Li, Bin; Lu, Li
2014-02-01
A new autocorrelation-matrix-eigenvalue based digital signal processing (DSP) method for adaptive chromatic dispersion (CD) monitoring and compensation is proposed. It employs the average of the autocorrelation matrix eigenvalues, instead of the eigenvalue spread, as the scanning metric. The averaging is effective in relieving the performance degradation caused by fluctuation of the autocorrelation matrix eigenvalues. Compared with the eigenvalue-spread scanning algorithm, this method reduces the monitoring errors from more than 200 ps/nm to below 10 ps/nm, without increasing computational complexity. Simulation results show that in a 100 Gbit/s polarization division multiplexing (PDM) quadrature phase shift keying (QPSK) coherent optical transmission system, this method improves the bit error rate (BER) performance and the system robustness against amplified-spontaneous-emission noise.
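The two candidate scan metrics can be illustrated on a 2x2 autocorrelation matrix estimated from lag-0 and lag-1 autocorrelations, where the eigenvalues have a closed form. This is only a toy illustration of "average vs. spread" as metrics; the paper's full coherent-receiver DSP chain and CD scanning loop are not reproduced.

```python
def autocorr_matrix2(x):
    """2x2 autocorrelation matrix built from lag-0 and lag-1 estimates."""
    n = len(x)
    r0 = sum(v * v for v in x) / n
    r1 = sum(x[i] * x[i + 1] for i in range(n - 1)) / (n - 1)
    return [[r0, r1], [r1, r0]]

def eig_metrics(R):
    """Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, a]] are
    a - |b| and a + |b|; return (average, spread) as scan metrics."""
    a, b = R[0][0], R[0][1]
    lo, hi = a - abs(b), a + abs(b)
    return (lo + hi) / 2, hi - lo

# a strongly anti-correlated toy sequence: average stays at the lag-0
# power while the spread reflects the (noisier) lag-1 estimate
R = autocorr_matrix2([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
avg, spread = eig_metrics(R)
```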
Dynamics of the adaptive natural gradient descent method for soft committee machines
NASA Astrophysics Data System (ADS)
Inoue, Masato; Park, Hyeyoung; Okada, Masato
2004-05-01
Adaptive natural gradient descent (ANGD) method realizes natural gradient descent (NGD) without needing to know the input distribution of learning data and reduces the calculation cost from a cubic order to a square order. However, no performance analysis of ANGD has been done. We have developed a statistical-mechanical theory of the simplified version of ANGD dynamics for soft committee machines in on-line learning; this method provides deterministic learning dynamics expressed through a few order parameters, even though ANGD intrinsically holds a large approximated Fisher information matrix. Numerical results obtained using this theory were consistent with those of a simulation, with respect not only to the learning curve but also to the learning failure. Utilizing this method, we numerically evaluated ANGD efficiency and found that ANGD generally performs as well as NGD. We also revealed the key condition affecting the learning plateau in ANGD.
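One natural gradient step is ordinary gradient descent preconditioned by the inverse Fisher information matrix. On a badly scaled quadratic whose Fisher matrix coincides with the Hessian, a unit step lands on the optimum in a single move, which is the intuition behind NGD's fast convergence. The quadratic and matrices below are illustrative, not the soft-committee-machine setting of the paper.

```python
def natural_gradient_step(theta, grad, fisher_inv, eta=0.1):
    """One natural gradient descent update:
    theta <- theta - eta * F^{-1} * grad."""
    return [t - eta * sum(fisher_inv[i][j] * grad[j]
                          for j in range(len(grad)))
            for i, t in enumerate(theta)]

# f(theta) = 0.5*(theta_0^2 + 100*theta_1^2); grad at (1, 1) is (1, 100).
# With F = diag(1, 100) (the Hessian here) and eta = 1, one step reaches
# the minimum at the origin despite the poor conditioning.
theta = natural_gradient_step([1.0, 1.0], [1.0, 100.0],
                              [[1.0, 0.0], [0.0, 0.01]], eta=1.0)
```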
Directionally adaptive finite element method for multidimensional Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Tan, Zhiqiang; Varghese, Philip L.
1993-01-01
A directionally adaptive finite element method for multidimensional compressible flows is presented. Quadrilateral and hexahedral elements are used because they have several advantages over triangular and tetrahedral elements. Unlike traditional methods that use quadrilateral/hexahedral elements, our method allows an element to be divided in each of the three directions in 3D and two directions in 2D. Some restrictions on mesh structure are found to be necessary, especially in 3D. The refining and coarsening procedures, and the treatment of constraints are given. A new implementation of upwind schemes in the constrained finite element system is presented. Some example problems, including a Mach 10 shock interaction with the walls of a 2D channel, a 2D viscous compression corner flow, and inviscid and viscous 3D flows in square channels, are also shown.
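The directional splitting rule is easy to state for the 2D quadrilateral case. This sketch (our illustration, not the paper's code; the error indicators and the constraint handling for hanging nodes are stand-ins) splits an element in x, in y, or in both, depending on which directional error indicators exceed a tolerance:

```python
from dataclasses import dataclass, field

@dataclass
class Elem:
    """Axis-aligned quadrilateral element [x0, x1] x [y0, y1]."""
    x0: float
    y0: float
    x1: float
    y1: float
    children: list = field(default_factory=list)

def refine(elem, indicator_x, indicator_y, tol):
    """Directionally refine: split only along directions whose error
    indicator exceeds tol, instead of always splitting into four."""
    xm = 0.5 * (elem.x0 + elem.x1)
    ym = 0.5 * (elem.y0 + elem.y1)
    split_x = indicator_x(elem) > tol
    split_y = indicator_y(elem) > tol
    if split_x and split_y:       # isotropic split into four
        elem.children = [Elem(elem.x0, elem.y0, xm, ym),
                         Elem(xm, elem.y0, elem.x1, ym),
                         Elem(elem.x0, ym, xm, elem.y1),
                         Elem(xm, ym, elem.x1, elem.y1)]
    elif split_x:                 # halve in x only
        elem.children = [Elem(elem.x0, elem.y0, xm, elem.y1),
                         Elem(xm, elem.y0, elem.x1, elem.y1)]
    elif split_y:                 # halve in y only
        elem.children = [Elem(elem.x0, elem.y0, elem.x1, ym),
                         Elem(elem.x0, ym, elem.x1, elem.y1)]
    return elem.children
```

This captures why anisotropic features such as shocks or boundary layers can be resolved with far fewer elements than isotropic quadtree refinement would need.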
NASA Astrophysics Data System (ADS)
Arivanandhan, M.; Huang, Xinming; Uda, Satoshi; Bhagavannarayana, G.; Vijayan, N.; Sankaranarayanan, K.; Ramasamy, P.
2008-10-01
Unidirectional benzophenone single crystals grown by the vertical Bridgman (VB), microtube-Czochralski (μT-CZ), and Sankaranarayanan-Ramasamy (SR) uniaxial solution crystallization methods were characterized by X-ray diffraction (XRD), high-resolution XRD (HRXRD), and laser damage threshold (LDT) studies, and the results were compared. The XRD study establishes the growth direction of the benzophenone crystal ingots. The HRXRD curves recorded with a multicrystal X-ray diffractometer (MCD) revealed that crystals grown by all three methods contain internal structural grain boundaries. The SR-grown sample shows relatively good crystalline quality, with a full-width at half-maximum (FWHM) of the main peak of 39 arcsec. The VB-grown crystal contains multiple low-angle ( α⩾1 arcmin) grain boundaries, probably due to thermal stress during post-growth annealing caused by the difference between the lattice expansion coefficients of the crystal and the ampoule; such thermal stress is absent in the μT-CZ-grown sample because the grown crystal is free-standing. Hence, the μT-CZ-grown crystal contains only one very-low-angle ( α<1 arcmin) grain boundary. The LDT study shows that the SR-grown benzophenone crystal has a higher LDT than samples grown by the other methods, probably owing to the relatively high crystalline perfection of the SR-grown crystals.
An adaptive distance-based group contribution method for thermodynamic property prediction.
He, Tanjin; Li, Shuang; Chi, Yawei; Zhang, Hong-Bo; Wang, Zhi; Yang, Bin; He, Xin; You, Xiaoqing
2016-09-14
In the search for an accurate yet inexpensive method to predict thermodynamic properties of large hydrocarbon molecules, we have developed an automatic and adaptive distance-based group contribution (DBGC) method. The method characterizes the group interaction within a molecule with an exponential decay function of the group-to-group distance, defined as the number of bonds between the groups. A database containing the molecular bonding information and the standard enthalpy of formation (Hf,298K) for alkanes, alkenes, and their radicals at the M06-2X/def2-TZVP//B3LYP/6-31G(d) level of theory was constructed. Multiple linear regression (MLR) and artificial neural network (ANN) fitting were used to obtain the contributions from individual groups and group interactions for further predictions. Compared with the conventional group additivity (GA) method, the DBGC method predicts Hf,298K for alkanes more accurately using the same training sets. Particularly for some highly branched large hydrocarbons, the discrepancy with the literature data is smaller for the DBGC method than the conventional GA method. When extended to other molecular classes, including alkenes and radicals, the overall accuracy level of this new method is still satisfactory. PMID:27522953
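The core idea, a group-additivity baseline plus pairwise corrections that decay exponentially with the bond-count distance, can be sketched as follows. All numerical values here are placeholders, not fitted parameters; the real group contributions and interaction coefficients come from the MLR/ANN fits described above.

```python
import math

# Hypothetical group contributions (illustrative values only, not the fitted ones)
GROUP_H = {"CH3": -42.9, "CH2": -20.6, "CH": -1.2, "C": 8.0}

def dbgc_enthalpy(groups, distances, group_h, pair_coeff, alpha=1.0):
    """Distance-based group contribution: base group additivity plus pairwise
    corrections weighted by exp(-alpha * d_ij), where d_ij is the number of
    bonds between groups i and j."""
    h = sum(group_h[g] for g in groups)                 # conventional GA part
    for (i, j), d in distances.items():                 # group-interaction part
        key = tuple(sorted((groups[i], groups[j])))
        h += pair_coeff.get(key, 0.0) * math.exp(-alpha * d)
    return h
```

The exponential weighting is what lets nearby branched groups interact strongly while distant ones contribute almost nothing, which is where the conventional GA method loses accuracy for highly branched hydrocarbons.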
Zeng, Songjun; Liu, Hongrong; Yang, Qibin
2010-01-01
A method for three-dimensional (3D) reconstruction of macromolecular assemblies, the octahedral symmetry-adapted function (OSAF) method, is introduced in this paper, and a series of formulations for reconstruction by the OSAF method is derived. To verify the feasibility and advantages of the method, two octahedrally symmetric macromolecules, the heat shock protein Degp24 and red-cell L ferritin, were used as examples for reconstruction by the OSAF method. The simulation was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and different levels of noise (signal-to-noise ratios of 0.1, 0.5, and 0.8) were added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and their relative errors with respect to the standard structures were very small even at high noise levels. These results indicate that the OSAF method is a feasible and efficient approach to reconstructing macromolecular structures and is able to suppress the influence of noise. PMID:20150955
Using Mixed-Methods Research to Adapt and Evaluate a Family Strengthening Intervention in Rwanda
Betancourt, Theresa S.; Meyers-Ohki, Sarah E.; Stevenson, Anne; Ingabire, Charles; Kanyanganzi, Fredrick; Munyana, Morris; Mushashi, Christina; Teta, Sharon; Fayida, Ildephonse; Cyamatare, Felix Rwabukwisi; Stulac, Sara; Beardslee, William R.
2013-01-01
Introduction Research in several international settings indicates that children and adolescents affected by HIV and other compounded adversities are at increased risk for a range of mental health problems including depression, anxiety, and social withdrawal. More intervention research is needed to develop valid measurement and intervention tools to address child mental health in such settings. Objective This article presents a collaborative mixed-methods approach to designing and evaluating a mental health intervention to assist families facing multiple adversities in Rwanda. Methods Qualitative methods were used to gain knowledge of culturally-relevant mental health problems in children and adolescents; individual, family, and community resources; and contextual dynamics among HIV-affected families. These data were used to guide the selection and adaptation of mental health measures to assess intervention outcomes. Measures were subjected to a quantitative validation exercise. Qualitative data and community advisory board input also informed the selection and adaptation of a family-based preventive intervention to reduce the risk for mental health problems among children in families affected by HIV. Community-based participatory methods were used to ensure that the intervention targeted relevant problems manifest in Rwandan children and families and built on local strengths. Results Qualitative data on culturally-appropriate practices for building resilience in vulnerable families have enriched the development of a Family-Strengthening Intervention (FSI). Input from community partners has also contributed to creating a feasible and culturally-relevant intervention. Mental health measures demonstrate strong performance in this population. Conclusion The mixed-methods model discussed represents a refined, multi-phase protocol for incorporating qualitative data and community input in the development and evaluation of feasible, culturally-sound quantitative assessments
A spatially adaptive total variation regularization method for electrical resistance tomography
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2015-12-01
The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. An effective spatial feature indicator called difference curvature is used to identify whether a region is flat or an edge region. According to the different spatial features, the SATV regularization method can automatically adjust both the regularization term and the regularization factor. At edge regions, the regularization term approximates the TV functional to preserve the edges; in flat regions, it approximates the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. In addition, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.
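The difference-curvature indicator and the resulting spatial weight can be sketched as below (our illustration using numpy finite differences; the paper's discretization and exact weighting function are not given in the abstract). The weight is near 1 at edges, selecting TV-like behavior, and near 0 in flat regions, selecting Tikhonov-like smoothing:

```python
import numpy as np

def difference_curvature(u, eps=1e-8):
    """Difference curvature D = | |u_nn| - |u_tt| |, the second derivatives
    along and across the gradient direction. Large at edges, small in both
    flat and ramp regions, which makes it a robust edge/flat discriminator."""
    ux, uy = np.gradient(u)
    uxx = np.gradient(ux, axis=0)
    uyy = np.gradient(uy, axis=1)
    uxy = np.gradient(ux, axis=1)
    g2 = ux**2 + uy**2 + eps
    u_nn = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g2
    u_tt = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / g2
    return np.abs(np.abs(u_nn) - np.abs(u_tt))

def satv_weight(u, k=1.0):
    """Spatial weight in [0, 1): ~1 at edges (TV-like term dominates),
    ~0 in flat regions (Tikhonov-like term dominates)."""
    d = difference_curvature(u)
    return d / (d + k)
```

A blended regularizer would then be, per pixel, w·|∇u| + (1 − w)·|∇u|², with w from `satv_weight`; the contrast parameter `k` here is an assumed form, not the paper's.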
NASA Astrophysics Data System (ADS)
Coleman, S.; Hurley, S.; Koliba, C.; Zia, A.; Exler, S.
2014-12-01
Eutrophication and nutrient pollution of surface waters occur within complex governance, social, hydrologic and biophysical basin contexts. The pervasive and perennial nutrient pollution in Lake Champlain Basin, despite decades of efforts, exemplifies problems found across the world's surface waters. Stakeholders with diverse values, interests, and forms of explicit and tacit knowledge determine water quality impacts through land use, agricultural and water resource decisions. Uncertainty, ambiguity and dynamic feedback further complicate the ability to promote the continual provision of water quality and ecosystem services. Adaptive management of water resources and land use requires mechanisms to allow for learning and integration of new information over time. The transdisciplinary Research on Adaptation to Climate Change (RACC) team is working to build regional adaptive capacity in Lake Champlain Basin while studying and integrating governance, land use, hydrological, and biophysical systems to evaluate implications for adaptive management. The RACC team has engaged stakeholders through mediated modeling workshops, online forums, surveys, focus groups and interviews. In March 2014, CSS2CC.org, an interactive online forum for sourcing and identifying adaptive interventions from stakeholders across sectors, was launched. The forum, based on the Delphi Method, brings forward the collective wisdom of stakeholders and experts to identify potential interventions and governance designs in response to scientific uncertainty and ambiguity surrounding the effectiveness of any strategy, climate change impacts, and the social and natural systems governing water quality and eutrophication. A Mediated Modeling Workshop followed the forum in May 2014, where participants refined and identified plausible interventions under different governance, policy and resource scenarios. Results from the online forum and workshop can identify emerging consensus across scales and sectors
Threshold Concepts in Biochemistry
ERIC Educational Resources Information Center
Loertscher, Jennifer
2011-01-01
Threshold concepts can be identified for any discipline and provide a framework for linking student learning to curricular design. Threshold concepts represent a transformed understanding of a discipline, without which the learner cannot progress and are therefore pivotal in learning in a discipline. Although threshold concepts have been…
The Adaptive Biasing Force Method: Everything You Always Wanted To Know but Were Afraid To Ask
2014-01-01
In the host of numerical schemes devised to calculate free energy differences by way of geometric transformations, the adaptive biasing force algorithm has emerged as a promising route to map complex free-energy landscapes. It relies upon the simple concept that as a simulation progresses, a continuously updated biasing force is added to the equations of motion, such that in the long-time limit it yields a Hamiltonian devoid of an average force acting along the transition coordinate of interest. This means that sampling proceeds uniformly on a flat free-energy surface, thus providing reliable free-energy estimates. Much of the appeal of the algorithm to the practitioner is in its physically intuitive underlying ideas and the absence of any requirements for prior knowledge about free-energy landscapes. Since its inception in 2001, the adaptive biasing force scheme has been the subject of considerable attention, from in-depth mathematical analysis of convergence properties to novel developments and extensions. The method has also been successfully applied to many challenging problems in chemistry and biology. In this contribution, the method is presented in a comprehensive, self-contained fashion, discussing with a critical eye its properties, applicability, and inherent limitations, as well as introducing novel extensions. Through free-energy calculations of prototypical molecular systems, many methodological aspects are examined, from stratification strategies to overcoming the so-called hidden barriers in orthogonal space, relevant not only to the adaptive biasing force algorithm but also to other importance-sampling schemes. On the basis of the discussions in this paper, a number of good practices for improving the efficiency and reliability of the computed free-energy differences are proposed. PMID:25247823
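The accumulation loop at the heart of ABF is compact. This minimal 1-D sketch (our illustration; the bin count, ramp schedule, and names are assumptions, and real implementations estimate the instantaneous force from the dynamics rather than receiving it directly) stores running averages of the force along the transition coordinate and returns the opposing bias:

```python
import numpy as np

class ABF1D:
    """Minimal 1-D adaptive biasing force accumulator: the applied bias is the
    negative running mean of the instantaneous force in each bin of the
    transition coordinate, so long-run sampling sees a flattened surface."""
    def __init__(self, lo, hi, n_bins, full_samples=200):
        self.edges = np.linspace(lo, hi, n_bins + 1)
        self.force_sum = np.zeros(n_bins)
        self.count = np.zeros(n_bins, dtype=int)
        self.full_samples = full_samples  # ramp the bias in gradually, as in practice

    def update(self, xi, instantaneous_force):
        """Record one sample of the force at coordinate value xi and return
        the biasing force to apply on the next step."""
        b = int(np.clip(np.searchsorted(self.edges, xi) - 1, 0, len(self.count) - 1))
        self.force_sum[b] += instantaneous_force
        self.count[b] += 1
        ramp = min(1.0, self.count[b] / self.full_samples)
        return -ramp * self.force_sum[b] / self.count[b]
```

Integrating the converged per-bin mean forces along the coordinate then yields the free-energy profile, which is how ABF delivers free-energy estimates as a by-product of flattening the landscape.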
Modeling, mesh generation, and adaptive numerical methods for partial differential equations
Babuska, I.; Henshaw, W.D.; Oliger, J.E.; Flaherty, J.E.; Hopcroft, J.E.; Tezduyar, T.
1995-12-31
Mesh generation is one of the most time consuming aspects of computational solutions of problems involving partial differential equations. It is, furthermore, no longer acceptable to compute solutions without proper verification that specified accuracy criteria are being satisfied. Mesh generation must be related to the solution through computable estimates of discretization errors. Thus, an iterative process of alternate mesh and solution generation evolves in an adaptive manner with the end result that the solution is computed to prescribed specifications in an optimal, or at least efficient, manner. While mesh generation and adaptive strategies are becoming available, major computational challenges remain. One, in particular, involves moving boundaries and interfaces, such as free-surface flows and fluid-structure interactions. A 3-week program was held from July 5 to July 23, 1993 with 173 participants and 66 keynote, invited, and contributed presentations. This volume represents written versions of 21 of these lectures. These proceedings are organized roughly in order of their presentation at the workshop. Thus, the initial papers are concerned with geometry and mesh generation and discuss the representation of physical objects and surfaces on a computer and techniques to use this data to generate, principally, unstructured meshes of tetrahedral or hexahedral elements. The remainder of the papers cover adaptive strategies, error estimation, and applications. Several submissions deal with high-order p- and hp-refinement methods where mesh refinement/coarsening (h-refinement) is combined with local variation of method order (p-refinement). Combinations of mathematically verified and physically motivated approaches to error estimation are represented. Applications center on fluid mechanics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
Olsen, Jeffrey R.; Noel, Camille E.; Baker, Kenneth; Santanam, Lakshmi; Michalski, Jeff M.; Parikh, Parag J.
2012-04-01
Purpose: We have created an automated process using real-time tracking data to evaluate the adequacy of planning target volume (PTV) margins in prostate cancer, allowing a process of adaptive radiotherapy with minimal physician workload. We present an analysis of PTV adequacy and a proposed adaptive process. Methods and Materials: Tracking data were analyzed for 15 patients who underwent step-and-shoot multi-leaf collimation (SMLC) intensity-modulated radiation therapy (IMRT) with uniform 5-mm PTV margins for prostate cancer using the Calypso® Localization System. Additional plans were generated with 0- and 3-mm margins. A custom software application using the planned dose distribution and structure location from computed tomography (CT) simulation was developed to evaluate the dosimetric impact to the target due to motion. The dose delivered to the prostate was calculated for the initial three, five, and 10 fractions, and for the entire treatment. Treatment was accepted as adequate if the minimum delivered prostate dose (Dmin) was at least 98% of the planned Dmin. Results: For 0-, 3-, and 5-mm PTV margins, adequate treatment was obtained in 3 of 15, 12 of 15, and 15 of 15 patients, and the delivered Dmin ranged from 78% to 99%, 96% to 100%, and 99% to 100% of the planned Dmin. Changes in Dmin did not correlate with magnitude of prostate motion. Treatment adequacy during the first 10 fractions predicted sufficient dose delivery for the entire treatment for all patients and margins. Conclusions: Our adaptive process successfully used real-time tracking data to predict the need for PTV modifications, without the added burden of physician contouring and image analysis. Our methods are applicable to other uses of real-time tracking, including hypofractionated treatment.
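A simplified version of such an adequacy check can be sketched as follows (our illustration, not the authors' software): sample the static planned dose at motion-shifted target points, accumulate over fractions, and compare the delivered Dmin against 98% of the planned value. The dose model, point set, and shift data here are toy assumptions.

```python
import numpy as np

def delivered_dmin(dose_at, target_pts, shifts_per_fraction):
    """Mean per-fraction dose to each target point, sampling the static planned
    dose at motion-shifted positions, then the minimum over points (Dmin)."""
    total = np.zeros(len(target_pts))
    for shift in shifts_per_fraction:           # one tracked offset per fraction
        total += np.array([dose_at(p + shift) for p in target_pts])
    return total.min() / len(shifts_per_fraction)

def margins_adequate(dose_at, target_pts, shifts, planned_dmin, threshold=0.98):
    """Adequate if the delivered Dmin is at least 98% of the planned Dmin."""
    return delivered_dmin(dose_at, target_pts, shifts) >= threshold * planned_dmin
```

Running this check after the first few fractions mirrors the paper's finding that early-fraction adequacy predicts whole-course adequacy, and so can trigger replanning without manual contouring.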
Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P
2015-03-01
Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, and by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. PMID:25463325
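The leveling step can be illustrated with an iteratively reweighted least-squares plane fit using Tukey's biweight, which automatically downweights features and outliers so they do not skew the estimated tilt. This is a simplification of the paper's approach (which fits a smooth 2-d trend by robust local regression, not a single plane); the iteration count and tuning constant are conventional defaults, not values from the paper.

```python
import numpy as np

def robust_level(img, n_iter=5, c=4.685):
    """Subtract a planar trend fitted by iteratively reweighted least squares
    with Tukey's biweight: points with large residuals (features, outliers)
    get weight zero and stop influencing the trend estimate."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([np.ones(img.size), xx.ravel(), yy.ravel()])
    z = img.ravel().astype(float)
    w = np.ones_like(z)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
        r = z - A @ coef
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
        u = r / (c * s)
        w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)    # Tukey biweight
    return (z - A @ coef).reshape(img.shape)
```

After leveling, a flat background comes out near zero while genuine features survive at full height, which is the behavior that plain (non-robust) plane subtraction fails to deliver.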