Bremer, P. -T.
2014-08-26
ADAPT is a topological analysis code that computes local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software applies more generally to all threshold-based feature definitions.
A method of camera calibration with adaptive thresholding
NASA Astrophysics Data System (ADS)
Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei
2009-07-01
In order to calculate the parameters of the camera correctly, we must determine the accurate coordinates of certain points in the image plane. Corners are important features in 2D images. Generally speaking, they are points that have high curvature and lie at the junction of image regions of different brightness, so corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN algorithm, we propose an approach to select the gray-difference threshold adaptively, which makes it possible to pick up the correct chessboard inner corners under all kinds of gray contrast. Experimental results show the method to be feasible.
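The SUSAN detector labels a pixel a corner when its USAN (the set of neighbours with brightness similar to the centre pixel) is small. A minimal sketch of the idea follows; the adaptive rule used here (scaling the gray-difference threshold with the local contrast) is an illustrative assumption, not necessarily the authors' exact scheme, and the 5x5 window stands in for SUSAN's circular mask.

```python
import numpy as np

def susan_corner_response(img, t_scale=0.25, g_frac=0.5):
    """Toy SUSAN-style corner response with an adaptive gray-difference
    threshold t (assumed proportional to the local intensity spread)."""
    img = img.astype(float)
    h, w = img.shape
    resp = np.zeros_like(img)
    r = 2  # 5x5 neighborhood stands in for SUSAN's circular mask
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            # adaptive threshold: scale with local contrast, with a
            # small floor so flat regions do not produce t ~ 0
            t = max(t_scale * win.std(), 1.0)
            c = np.exp(-((win - img[y, x]) / t) ** 6)  # similarity to nucleus
            n = c.sum()                                # USAN area
            g = g_frac * win.size                      # geometric threshold
            resp[y, x] = max(g - n, 0.0)               # large where USAN is small
    return resp
```

On a synthetic bright square, the response is positive at the square's corner and zero in flat regions, which is the behaviour the chessboard inner-corner search relies on.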
NASA Astrophysics Data System (ADS)
Susanti, D.; Hartini, E.; Permana, A.
2017-01-01
Growing sales competition between companies in Indonesia means that every company needs proper planning in order to win the competition with other companies. One way to design such a plan is to forecast car sales for the next few periods, so that the inventory of cars to be sold is proportional to the number of cars needed. To obtain a correct forecast, one of the methods that can be used is Adaptive Spline Threshold Autoregression (ASTAR). This discussion therefore focuses on the use of the ASTAR method in forecasting the volume of car sales at PT. Srikandi Diamond Motors using time series data. In this research, forecasts produced with the ASTAR method prove approximately correct.
NASA Astrophysics Data System (ADS)
Ran, Qiwen; Yang, Zhonghua; Ma, Jing; Tan, Liying; Liao, Huixi; Liu, Qingfeng
2013-02-01
In this paper, a weighted adaptive threshold estimating method is proposed to deal with long and deep channel fades in satellite-to-ground optical communications. During the channel correlation interval, where there are sufficient correlations in adjacent signal samples, the correlations in their change rates are described by weighted equations in the form of a Toeplitz matrix. As vital inputs to the proposed adaptive threshold estimator, the optimal values of the change rates can be obtained by solving the weighted equation systems. The effect of channel fades and aberrant samples can be mitigated by the joint use of weighted equation systems and Kalman estimation. Based on channel information data from star observation trails, simulations show that the proposed method has better anti-fade performance than the D-value adaptive threshold estimating method in both weak and strong turbulence conditions.
Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding
NASA Astrophysics Data System (ADS)
Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz
1997-10-01
An efficient image compression technique, especially for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are based on a spatial variance estimate built on the lowest-frequency subband data set. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning, and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency: it matches SPIHT on MR image compression, is slightly better on CT images, and is significantly better on US images. The compression efficiency of the presented method is thus competitive with the best published algorithms across diverse classes of medical images.
Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.
2015-01-01
Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
Detection of neuronal spikes using an adaptive threshold based on the max-min spread sorting method.
Chan, Hsiao-Lung; Lin, Ming-An; Wu, Tony; Lee, Shih-Tseng; Tsai, Yu-Tai; Chao, Pei-Kuang
2008-07-15
Neuronal spike information can be used to correlate neuronal activity to various stimuli, to find target neural areas for deep brain stimulation, and to decode intended motor commands for brain-machine interfaces. Typically, spike detection is performed using adaptive thresholds determined by the running root-mean-square (RMS) value of the signal. Yet conventional detection methods are susceptible to threshold fluctuations caused by neuronal spike intensity. In the present study we propose a novel adaptive threshold based on the max-min spread sorting method. On microelectrode recording signals and on simulated signals with Gaussian and colored noise, the novel method had the smallest threshold variations, and similar or better spike detection performance than either the RMS-based method or other improved methods. Moreover, the detection method described in this paper uses reduced features of the raw signal to determine the threshold, giving a simple data manipulation that is beneficial for reducing the computational load when dealing with very large amounts of data (as in multi-electrode recordings).
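The max-min spread sorting method itself is not detailed in the abstract, but the conventional baseline it improves on, a running-RMS adaptive threshold, can be sketched directly; the window length and multiplier below are illustrative choices, not values from the paper.

```python
import numpy as np

def rms_adaptive_threshold(x, win=200, k=4.0):
    """Conventional running-RMS spike threshold (the baseline the paper
    improves on): thr[i] = k * RMS over a window centred on sample i."""
    x = np.asarray(x, dtype=float)
    thr = np.empty_like(x)
    for i in range(len(x)):
        seg = x[max(0, i - win):i + win + 1]
        thr[i] = k * np.sqrt(np.mean(seg ** 2))
    return thr

def detect_spikes(x, thr):
    """Indices where |signal| first crosses its adaptive threshold."""
    above = np.abs(np.asarray(x, dtype=float)) > thr
    return np.flatnonzero(above & ~np.roll(above, 1))
```

The threshold-fluctuation problem the paper addresses is visible here: a large spike inflates the local RMS and hence the threshold around it, which is what a spread-sorting scheme is designed to avoid.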
Microwave medical imaging based on sparsity and an iterative method with adaptive thresholding.
Azghani, Masoumeh; Kosmas, Panagiotis; Marvasti, Farokh
2015-02-01
We propose a new image recovery method to improve the resolution in microwave imaging applications. Scattered field data obtained from a simplified breast model with closely located targets is used to formulate an electromagnetic inverse scattering problem, which is then solved using the Distorted Born Iterative Method (DBIM). At each iteration of the DBIM method, an underdetermined set of linear equations is solved using our proposed sparse recovery algorithm, IMATCS. Our results demonstrate the ability of the proposed method to recover small targets in cases where traditional DBIM approaches fail. Furthermore, in order to regularize the sparse recovery algorithm, we propose a novel L2-based approach and prove its convergence. The simulation results indicate that the L2-regularized method improves the robustness of the algorithm against the ill-posed conditions of the EM inverse scattering problem. Finally, we demonstrate that the regularized IMATCS-DBIM approach leads to fast, accurate and stable reconstructions of highly dense breast compositions.
High-performance thresholding with adaptive equalization
NASA Astrophysics Data System (ADS)
Lam, Ka Po
1998-09-01
The ability to simplify an image whilst retaining such crucial information as shapes and geometric structures is of great importance for real-time image analysis applications. Here the technique of binary thresholding, which reduces image complexity, has generally been regarded as one of the most valuable methods, primarily owing to its ease of design and analysis. This paper studies the state of developments in the field and describes a radically different approach to adaptive thresholding. The latter employs the analytical technique of histogram normalization to facilitate an optimal 'contrast level' for the image under consideration. A suitable criterion is also developed to determine the applicability of the adaptive processing procedure. In terms of performance and computational complexity, the proposed algorithm compares favorably to five established image thresholding methods selected for this study. Experimental results have shown that the new algorithm outperforms these methods on a number of important error measures, including a consistently low visual classification error. The simplicity of the algorithm's design also lends itself to efficient parallel implementations.
Efficient adaptive thresholding with image masks
NASA Astrophysics Data System (ADS)
Oh, Young-Taek; Hwang, Youngkyoo; Kim, Jung-Bae; Bang, Won-Chul
2014-03-01
Adaptive thresholding is a useful technique for document analysis. In medical image processing, it is also helpful for segmenting structures, such as diaphragms or blood vessels. This technique sets a threshold using local information around a pixel, then binarizes the pixel according to the value. Although this technique is robust to changes in illumination, it takes a significant amount of time to compute thresholds because it requires adding all of the neighboring pixels. Integral images can alleviate this overhead; however, medical images, such as ultrasound, often come with image masks, and ordinary algorithms often cause artifacts. The main problem is that the shape of the summing area is not rectangular near the boundaries of the image mask. For example, the threshold at the boundary of the mask is incorrect because pixels on the mask image are also counted. Our key idea to cope with this problem is computing the integral image for the image mask to count the valid number of pixels. Our method is implemented on a GPU using CUDA, and experimental results show that our algorithm is 164 times faster than a naïve CPU algorithm for averaging.
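The paper's key idea can be sketched directly: build one summed-area table for the masked image and one for the mask itself, so each window's threshold is the mean over only the valid pixels. The window size and offset below are illustrative, and the per-pixel Python loop stands in for the CUDA kernel described in the paper.

```python
import numpy as np

def masked_adaptive_threshold(img, mask, win=7, offset=0.0):
    """Adaptive mean thresholding that honours an image mask: an integral
    image of the mask counts the *valid* pixels in each window, so
    thresholds near the mask boundary are not polluted by masked-out
    pixels."""
    img = img.astype(float)
    m = mask.astype(float)
    # summed-area tables, padded with a zero row/column for easy window sums
    sat = np.pad((img * m).cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    cnt = np.pad(m.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    r = win // 2
    out = np.zeros(img.shape, dtype=bool)
    for y in range(h):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        for x in range(w):
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            s = sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
            n = cnt[y1, x1] - cnt[y0, x1] - cnt[y1, x0] + cnt[y0, x0]
            if mask[y, x] and n > 0:
                out[y, x] = img[y, x] > s / n - offset
    return out
```

With a plain integral image, windows straddling the mask boundary would average in masked-out zeros and misclassify boundary pixels; dividing by the mask's own integral removes exactly that artifact.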
Adaptive threshold selection for background removal in fringe projection profilometry
NASA Astrophysics Data System (ADS)
Zhang, Wei; Li, Weishi; Yan, Jianwen; Yu, Liandong; Pan, Chengliang
2017-03-01
In fringe projection profilometry, background and shadow are inevitable in the image of an object and must be identified and removed. In existing methods, it is nontrivial to determine a proper threshold to segment the background and shadow regions, especially when the gray-level histogram of the image is close to unimodal, and an improper threshold generally results in misclassification of the object and the background/shadow. In this paper, an adaptive threshold method is proposed to tackle the problem. Different from existing automatic methods, the modulation-level histogram, instead of the gray-level histogram, of the image is employed to determine the threshold. Furthermore, a new weighting factor is proposed to improve Otsu's method for segmenting images whose histograms are close to unimodal; the modulation difference between object pixels and background/shadow pixels is intensified significantly by this factor, which is itself adaptive to the image. The proposed method outperforms existing methods in accuracy, efficiency, or automation. Experimental results are given to demonstrate the feasibility and effectiveness of the proposed method.
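The weighting factor itself is not given in the abstract, but the Otsu baseline the method modifies is standard. A plain Otsu implementation operating on an arbitrary sample histogram is sketched below; the paper applies this machinery to modulation levels rather than gray levels and inserts its weighting factor into the between-class variance.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Classical Otsu: pick the threshold maximising the between-class
    variance sigma_b^2 = (mu_T*w0 - mu)^2 / (w0*w1) of the histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)              # class-0 probability up to each bin
    mu = np.cumsum(p * centers)    # cumulative first moment
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]
```

On a clearly bimodal sample this lands near the valley between the modes; the paper's contribution is precisely that this criterion degrades when the histogram is nearly unimodal, motivating the weighting factor.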
An Adaptive Threshold in Mammalian Neocortical Evolution
Kalinka, Alex T.; Tomancak, Pavel; Huttner, Wieland B.
2014-01-01
Expansion of the neocortex is a hallmark of human evolution. However, determining which adaptive mechanisms facilitated its expansion remains an open question. Here we show, using the gyrencephaly index (GI) and other physiological and life-history data for 102 mammalian species, that gyrencephaly is an ancestral mammalian trait. We find that variation in GI does not evolve linearly across species, but that mammals constitute two principal groups above and below a GI threshold value of 1.5, approximately equal to 10^9 neurons, which may be characterized by distinct constellations of physiological and life-history traits. By integrating data on neurogenic period, neuroepithelial founder pool size, cell-cycle length, progenitor-type abundances, and cortical neuron number into discrete mathematical models, we identify symmetric proliferative divisions of basal progenitors in the subventricular zone of the developing neocortex as evolutionarily necessary for generating a 14-fold increase in daily prenatal neuron production, traversal of the GI threshold, and thus establishment of two principal groups. We conclude that, despite considerable neuroanatomical differences, changes in the length of the neurogenic period alone, rather than any novel neurogenic progenitor lineage, are sufficient to explain differences in neuron number and neocortical size between species within the same principal group. PMID:25405475
Real time electrocardiogram QRS detection using combined adaptive threshold
Christov, Ivaylo I
2004-01-01
Background: QRS and ventricular beat detection is a basic procedure for electrocardiogram (ECG) processing and analysis. A large variety of methods have been proposed and used, featuring high percentages of correct detection. Nevertheless, the problem remains open, especially with respect to higher detection accuracy in noisy ECGs. Methods: A real-time detection method is proposed, based on comparison between the absolute values of summed differentiated electrocardiograms of one or more ECG leads and an adaptive threshold. The threshold combines three parameters: an adaptive slew-rate value, a second value which rises when high-frequency noise occurs, and a third one intended to avoid missing low-amplitude beats. Two algorithms were developed: Algorithm 1 detects at the current beat, and Algorithm 2 adds an RR-interval analysis component. The algorithms self-adjust their thresholds and weighting constants, regardless of the resolution and sampling frequency used. They operate with any number L of ECG leads, self-synchronize to QRS or beat slopes, and adapt to beat-to-beat intervals. Results: The algorithms were tested by an independent expert, thus excluding possible author influence, using all 48 full-length ECG records of the MIT-BIH arrhythmia database. The results were: sensitivity Se = 99.69% and specificity Sp = 99.65% for Algorithm 1, and Se = 99.74% and Sp = 99.65% for Algorithm 2. Conclusion: The statistical indices are higher than, or comparable to, those cited in the scientific literature. PMID:15333132
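A heavily simplified sketch of the detection principle: the detection signal is the lead-summed absolute derivative, compared against a threshold that jumps to a fraction of each detected beat and decays between beats. The single decaying parameter and all constants below are illustrative stand-ins for the paper's three combined components and tuned weights.

```python
import numpy as np

def detect_qrs(ecg_leads, fs=250):
    """Toy combined-adaptive-threshold QRS detector on an array of shape
    (n_leads, n_samples).  Returns sample indices of detected beats."""
    y = np.abs(np.diff(ecg_leads, axis=1)).sum(axis=0)  # lead-summed |derivative|
    thr = 0.5 * y[:int(0.5 * fs)].max()                 # bootstrap threshold
    refractory = int(0.2 * fs)                          # 200 ms lockout
    beats, last = [], -refractory
    for i, v in enumerate(y):
        thr *= 0.999                        # slow decay so low beats are not missed
        if v > thr and i - last > refractory:
            beats.append(i)
            last = i
            thr = 0.5 * v                   # adapt (slew-rate style) to beat size
    return beats
```

Even this stripped-down version shows the self-adjusting behaviour the abstract describes: the threshold tracks beat amplitude and needs no knowledge of gain or sampling resolution beyond fs.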
Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding
Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard
2016-01-01
Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent from whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states i.e. decoding from different states is less state dependent in the adaptive threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information. PMID:27304526
Automatic Dark-Adaptation Threshold Detection Algorithm.
G de Azevedo, Dario; Helegda, Sergio; Glock, Flavio; Russomano, Thais
2005-01-01
This paper describes an algorithm used to automatically determine the threshold sensitivity in a new dark adaptometer. The new instrument is controlled by a personal computer and can be used in the investigation of several retinal diseases. The stimulus field is delivered to the eye through the modified optics of a fundus camera. An automated light stimulus source was developed to operate together with this fundus camera. New control parameters were developed in this instrument to improve the traditional Goldmann-Weekers dark adaptometer.
Adaptive thresholding for reliable topological inference in single subject fMRI analysis
Gorgolewski, Krzysztof J.; Storkey, Amos J.; Bastin, Mark E.; Pernet, Cyril R.
2012-01-01
Single subject fMRI has proved to be a useful tool for mapping functional areas in clinical procedures such as tumor resection. Using fMRI data, clinicians assess the risk, plan and execute such procedures based on thresholded statistical maps. However, because current thresholding methods were developed mainly in the context of cognitive neuroscience group studies, most single subject fMRI maps are thresholded manually to satisfy specific criteria related to single subject analyses. Here, we propose a new adaptive thresholding method which combines Gamma-Gaussian mixture modeling with topological thresholding to improve cluster delineation. In a series of simulations we show that by adapting to the signal and noise properties, the new method performs well in terms of total number of errors but also in terms of the trade-off between false negative and positive cluster error rates. Similarly, simulations show that adaptive thresholding performs better than fixed thresholding in terms of over- and underestimation of the true activation border (i.e., higher spatial accuracy). Finally, through simulations and a motor test–retest study on 10 volunteer subjects, we show that adaptive thresholding improves reliability, mainly by accounting for the global signal variance. This in turn increases the likelihood that the true activation pattern can be determined, offering an automatic yet flexible way to threshold single subject fMRI maps. PMID:22936908
Stopping rules in Bayesian adaptive threshold estimation.
Alcalá-Quintana, Rocío; García-Pérez, Miguel A
2005-01-01
Threshold estimation with sequential procedures is justifiable on the surmise that the index used in the so-called dynamic stopping rule has diagnostic value for identifying when an accurate estimate has been obtained. The performance of five types of Bayesian sequential procedure was compared here to that of an analogous fixed-length procedure. Indices for use in sequential procedures were: (1) the width of the Bayesian probability interval, (2) the posterior standard deviation, (3) the absolute change, (4) the average change, and (5) the number of sign fluctuations. A simulation study was carried out to evaluate which index renders estimates with less bias and smaller standard error at lower cost (i.e. lower average number of trials to completion), in both yes-no and two-alternative forced-choice (2AFC) tasks. We also considered the effect of the form and parameters of the psychometric function and its similarity with the model function assumed in the procedure. Our results show that sequential procedures do not outperform fixed-length procedures in yes-no tasks. However, in 2AFC tasks, sequential procedures not based on sign fluctuations all yield minimally better estimates than fixed-length procedures, although most of the improvement occurs with short runs that render undependable estimates and the differences vanish when the procedures run for a number of trials (around 70) that ensures dependability. Thus, none of the indices considered here (some of which are widespread) has the diagnostic value that would justify its use. In addition, difficulties of implementation make sequential procedures unfit as alternatives to fixed-length procedures.
Image Restoration on Copper Inscription Using Nonlinear Filtering and Adaptive Threshold
NASA Astrophysics Data System (ADS)
Chairy, A.; Suprapto, Y. K.; Yuniarno, E. M.
2017-01-01
Inscriptions are important documents inherited from the history of kingdoms, made on hard materials such as stone and copper. Digitizing these documents is therefore necessary to preserve their authenticity. However, such historical documents suffer from disturbances on the inscription plate, called noise, so the noise in the inscription image must be reduced to ease digital historical documentation. Then, the background and the written characters carved on the inscription are separated to make the text easy to read. This research uses nonlinear filtering to reduce the noise and adaptive thresholding to separate the background from the letters of the inscription. The nonlinear filters used are the median filter, the harmonic mean filter, and the contraharmonic mean filter, whereas the adaptive thresholds used are the adaptive mean and adaptive median thresholds. The results of this research are evaluated using MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio), and SNR (Signal to Noise Ratio).
Methods for automatic trigger threshold adjustment
Welch, Benjamin J; Partridge, Michael E
2014-03-18
Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level, and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time based or counter based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
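A sketch of the described mechanism, under assumed names and constants: periodic re-measurement of the quiescent level, thresholds offset by the measured drift, and a qualification-width counter that requires the criterion to be met several consecutive times before a recording event fires.

```python
import statistics

class DriftCompensatedTrigger:
    """Illustrative drift-compensated trigger.  All names, the median
    quiescent estimator, and the counter-based recalibration criterion
    are assumptions for the sketch, not the patent's exact design."""

    def __init__(self, high, low, qual_width=3, recal_period=1000):
        self.high, self.low = high, low
        self.qual_width = qual_width
        self.recal_period = recal_period
        self.baseline = 0.0
        self._quiet, self._count, self._seen = [], 0, 0

    def sample(self, v):
        """Feed one sample; return True when a recording event triggers."""
        self._seen += 1
        self._quiet.append(v)
        if self._seen % self.recal_period == 0:
            # re-measure the quiescent level and shift thresholds by the drift
            drift = statistics.median(self._quiet) - self.baseline
            self.high += drift
            self.low += drift
            self.baseline += drift
            self._quiet.clear()
        # qualification-width counter suppresses single-sample glitches
        if v > self.high or v < self.low:
            self._count += 1
        else:
            self._count = 0
        return self._count >= self.qual_width
```

A signal drifting slowly toward a fixed threshold would eventually false-trigger; here the recalibration moves the trigger band along with the drift, while the counter still lets a genuine sustained excursion through.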
Adaptive thresholding technique for retinal vessel segmentation based on GLCM-energy information.
Mapayi, Temitope; Viriri, Serestina; Tapamo, Jules-Raymond
2015-01-01
Although retinal vessel segmentation has been extensively researched, a robust and time-efficient segmentation method is still needed. This paper presents a local adaptive thresholding technique based on gray-level co-occurrence matrix (GLCM) energy information for retinal vessel segmentation. Different thresholds were computed using GLCM-energy information. An experimental evaluation on the DRIVE database using the grayscale intensity and the green channel of the retinal image demonstrates the high performance of the proposed local adaptive thresholding technique. Maximum average accuracy rates of 0.9511 and 0.9510, with maximum average sensitivity rates of 0.7650 and 0.7641, were achieved on the DRIVE and STARE databases, respectively. Compared with widely used previous techniques on these databases, the proposed adaptive thresholding technique is time efficient, with higher average sensitivity and average accuracy rates in the same range of very good specificity.
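The abstract does not give the exact mapping from energy to threshold, but the GLCM-energy ingredient itself is standard. A sketch of the energy (angular second moment) of one window's horizontal co-occurrence matrix; quantization level count and the (0, 1) offset are conventional illustrative choices.

```python
import numpy as np

def glcm_energy(win, levels=8):
    """Energy (angular second moment) of the horizontal co-occurrence
    matrix of a window: sum of squared normalised co-occurrence
    probabilities.  High energy indicates locally homogeneous texture."""
    q = np.clip((win.astype(float) * levels / 256).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):  # offset (0, 1)
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    return (p ** 2).sum()
```

A perfectly flat window has energy 1, while high-contrast texture (such as vessel crossings) spreads probability mass over more co-occurrence cells and drives the energy down, which is what makes it usable as local threshold information.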
Methods for threshold determination in multiplexed assays
Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J
2014-06-24
Methods for determination of threshold values of signatures comprised in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established and a threshold for that signature is determined as a point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results together with a method for determination of a desired limit of detection of a signature in an assay are also described.
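The described procedure can be sketched under a Gaussian assumption for the negative-sample density (the actual density model is not stated in the abstract): fit the density, derive the false-positive-rate curve, and place the signature's threshold where that curve crosses the chosen criterion.

```python
from statistics import NormalDist

def threshold_for_fpr(negatives, fpr=0.01):
    """Place a signature threshold at the point where the false-positive
    rate of the fitted negative-sample density equals the criterion.
    The Gaussian density is an illustrative assumption."""
    mu = sum(negatives) / len(negatives)
    var = sum((x - mu) ** 2 for x in negatives) / (len(negatives) - 1)
    dist = NormalDist(mu, var ** 0.5)
    # FPR(thr) = P(X > thr) is monotone decreasing, so the crossing with
    # the criterion is simply the inverse CDF at (1 - fpr)
    return dist.inv_cdf(1.0 - fpr)
```

For a multiplexed assay, this would be run once per signature, giving each target its own threshold at a common false-positive criterion.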
Normalized iterative denoising ghost imaging based on the adaptive threshold
NASA Astrophysics Data System (ADS)
Li, Gaoliang; Yang, Zhaohua; Zhao, Yan; Yan, Ruitao; Liu, Xia; Liu, Baolei
2017-02-01
An approach for improving ghost imaging (GI) quality is proposed. In this paper, an iteration model based on normalized GI is built through theoretical analysis. An adaptive threshold value is selected in the iteration model. The initial value of the iteration model is estimated as a step to remove the correlated noise. The simulation and experimental results reveal that the proposed strategy reconstructs a better image than traditional and normalized GI, without adding complexity. The NIDGI-AT scheme does not require prior information regarding the object, and can also choose the threshold adaptively. More importantly, the signal-to-noise ratio (SNR) of the reconstructed image is greatly improved. Therefore, this methodology represents another step towards practical real-world applications.
Gap Measurement of Point Machine Using Adaptive Wavelet Threshold and Mathematical Morphology
Xu, Tianhua; Wang, Guang; Wang, Haifeng; Yuan, Tangming; Zhong, Zhiwang
2016-01-01
A point machine's gap is an important indication of its health status. An edge detection algorithm is proposed to measure and calculate a point machine's gap from the gap image captured by CCD plane arrays. This algorithm integrates adaptive wavelet-based image denoising, locally adaptive image binarization, and mathematical morphology technologies. The adaptive wavelet-based image denoising obtains not only an optimal denoising threshold, but also unblurred edges. Locally adaptive image binarization has the advantage of overcoming local intensity variation in gap images. Mathematical morphology may suppress speckle spots caused by reflective metal surfaces in point machines. Subjective and objective evaluations of the proposed method are presented using point machine gap images from a railway corporation in China. The performance of the proposed method has also been compared with that of conventional edge detection methods, and the result shows that the former outperforms the latter. PMID:27898042
Wavelet detection of weak far-magnetic signal based on adaptive ARMA model threshold
NASA Astrophysics Data System (ADS)
Zhang, Ning; Lin, Chun-sheng; Fang, Shi
2009-10-01
Based on the Mallat algorithm, an adaptive wavelet-threshold denoising algorithm is applied to detect the weak magnetic signal of a distant moving target in a complex magnetic environment. The choice of threshold is the key problem. Based on spectrum analysis of the target's magnetic field, a threshold algorithm built on an adaptive ARMA-model filter is put forward to improve the wavelet filtering performance. Simulations of this algorithm on measured data were carried out. Compared with the Donoho threshold algorithm, the adaptive ARMA-model threshold algorithm significantly improves the capability of weak magnetic signal detection in a complex magnetic environment.
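The ARMA-derived threshold itself is not specified in the abstract, but a one-level Haar shrinkage sketch shows where such a threshold plugs in; here a conventional robust noise estimate (median absolute deviation, as in Donoho's rule) stands in for the ARMA-model value.

```python
import numpy as np

def haar_denoise(x, k=3.0):
    """One-level Haar wavelet shrinkage: decompose, soft-threshold the
    detail coefficients, reconstruct.  The threshold t = k * sigma uses a
    MAD noise estimate as a stand-in for the paper's ARMA-based choice."""
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % 2                      # even length for pairing
    a = (x[:n:2] + x[1:n:2]) / np.sqrt(2)        # approximation coefficients
    d = (x[:n:2] - x[1:n:2]) / np.sqrt(2)        # detail coefficients
    sigma = np.median(np.abs(d)) / 0.6745        # robust noise level
    t = k * sigma
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)  # soft threshold
    y = np.empty(n)
    y[0::2] = (a + d) / np.sqrt(2)               # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

For a slowly varying target signal buried in broadband noise, the detail band carries mostly noise, so shrinking it suppresses the noise floor while the approximation band retains the signal; the paper's contribution is a better-adapted choice of t in complex magnetic environments.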
Adaptive Algebraic Multigrid Methods
Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J
2004-04-09
Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.
Adaptations to training at the individual anaerobic threshold.
Keith, S P; Jacobs, I; McLellan, T M
1992-01-01
The individual anaerobic threshold (Th(an)) is the highest metabolic rate at which blood lactate concentrations can be maintained at a steady-state during prolonged exercise. The purpose of this study was to test the hypothesis that training at the Th(an) would cause a greater change in indicators of training adaptation than would training "around" the Th(an). Three groups of subjects were evaluated before, and again after 4 and 8 weeks of training: a control group, a group which trained continuously for 30 min at the Th(an) intensity (SS), and a group (NSS) which divided the 30 min of training into 7.5-min blocks at intensities which alternated between being below the Th(an) [Th(an) -30% of the difference between Th(an) and maximal oxygen consumption (VO2max)] and above the Th(an) (Th(an) +30% of the difference between Th(an) and VO2max). The VO2max increased significantly from 4.06 to 4.27 l.min-1 in SS and from 3.89 to 4.06 l.min-1 in NSS. The power output (W) at Th(an) increased from 70.5 to 79.8% VO2max in SS and from 71.1 to 80.7% VO2max in NSS. The magnitude of change in VO2max, W at Th(an), % VO2max at Th(an) and in exercise time to exhaustion at the pretraining Th(an) was similar in both trained groups. Vastus lateralis citrate synthase and 3-hydroxyacyl-CoA-dehydrogenase activities increased to the same extent in both trained groups. While all of these training-induced adaptations were statistically significant (P < 0.05), there were no significant changes in any of these variables for the control subjects.(ABSTRACT TRUNCATED AT 250 WORDS)
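The alternating NSS intensities described above are simple arithmetic on the Th(an) and VO2max values; a small sketch (using the NSS group's pre-training Th(an) of 71.1% VO2max from the abstract) makes the protocol concrete:

```python
def nss_intensities(than_pct_vo2max, fraction=0.30):
    """Alternating NSS training intensities: Th(an) minus/plus a fraction of
    the difference between Th(an) and VO2max, all expressed in %VO2max."""
    diff = 100.0 - than_pct_vo2max
    low = than_pct_vo2max - fraction * diff
    high = than_pct_vo2max + fraction * diff
    return low, high

# NSS group's pre-training threshold: 71.1% of VO2max
low, high = nss_intensities(71.1)
```

So the NSS group alternated between roughly 62% and 80% of VO2max, centered on the threshold.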
An adaptive design for updating the threshold value of a continuous biomarker
Spencer, Amy V.; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian
2017-01-01
Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker ‘positive’ and ‘negative’ is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that ‘no population subset exists in which the novel treatment has a desirable response rate’ to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. PMID:27417407
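The final analysis described above rests on a one-sided exact binomial test; a minimal sketch follows, with the response counts and null response rate invented for illustration (the trial's actual design parameters are not given in the abstract):

```python
from math import comb

def exact_binomial_pvalue(k, n, p0):
    """One-sided exact binomial test: P(X >= k) when the true
    response rate is p0 and n patients are observed."""
    return sum(comb(n, i) * p0**i * (1.0 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 14 responders out of 30 biomarker-positive patients,
# tested against an uninteresting response rate of 20%.
p_value = exact_binomial_pvalue(14, 30, 0.20)
```

A small p-value here would reject the hypothesis that no subset with a desirable response rate exists, in the spirit of the design above.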
Accelerated Adaptive Integration Method
2015-01-01
Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083
NASA Astrophysics Data System (ADS)
He, Xuefei; Nguyen, Chuong Vinh; Pratap, Mrinalini; Zheng, Yujie; Wang, Yi; Nisbet, David R.; Rug, Melanie; Maier, Alexander G.; Lee, Woei Ming
2016-12-01
Here we propose a region-recognition approach with iterative thresholding, adaptively tailored to extract the appropriate region or shape of spatial frequency. To validate the method, we tested it with different samples and imaging conditions (different objectives). We demonstrate that our approach provides a useful tool for rapid imaging of cellular dynamics in microfluidic devices and cell cultures.
A Fast Method for Measuring Psychophysical Thresholds Across the Cochlear Implant Array
Bierer, Steven M.; Kreft, Heather A.; Oxenham, Andrew J.
2015-01-01
A rapid threshold measurement procedure, based on Bekesy tracking, is proposed and evaluated for use with cochlear implants (CIs). Fifteen postlingually deafened adult CI users participated. Absolute thresholds for 200-ms trains of biphasic pulses were measured using the new tracking procedure and were compared with thresholds obtained with a traditional forced-choice adaptive procedure under both monopolar and quadrupolar stimulation. Virtual spectral sweeps across the electrode array were implemented in the tracking procedure via current steering, which divides the current between two adjacent electrodes and varies the proportion of current directed to each electrode. Overall, no systematic differences were found between threshold estimates with the new channel sweep procedure and estimates using the adaptive forced-choice procedure. Test–retest reliability for the thresholds from the sweep procedure was somewhat poorer than for thresholds from the forced-choice procedure. However, the new method was about 4 times faster for the same number of repetitions. Overall the reliability and speed of the new tracking procedure provides it with the potential to estimate thresholds in a clinical setting. Rapid methods for estimating thresholds could be of particular clinical importance in combination with focused stimulation techniques that result in larger threshold variations between electrodes. PMID:25656797
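A toy version of such a Bekesy-style tracker can be sketched as follows; the starting level, step size, simulated listener noise, and true threshold are all hypothetical, and the real procedure's current-steered spectral sweep is not modeled:

```python
import random

def bekesy_track(true_threshold, start=60.0, step=2.0, n_reversals=8, seed=1):
    """Toy Bekesy tracker: the stimulus level descends while the (simulated)
    listener detects it and ascends when they do not; the threshold is
    estimated as the mean of the reversal levels."""
    rng = random.Random(seed)
    level, direction = start, -1          # start descending
    reversals = []
    while len(reversals) < n_reversals:
        # Listener "hears" the stimulus if it exceeds a noisy internal threshold
        heard = level > true_threshold + rng.gauss(0.0, 0.5)
        new_direction = -1 if heard else +1
        if new_direction != direction:    # direction change = reversal
            reversals.append(level)
            direction = new_direction
        level += direction * step
    return sum(reversals) / len(reversals)

estimate = bekesy_track(true_threshold=40.0)
```

The tracker oscillates around the true threshold, so the mean of the reversal levels lands within a step or two of it, which is what makes the sweep procedure fast.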
Advances in Adaptive Control Methods
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2009-01-01
This poster presentation describes recent advances in adaptive control technology developed by NASA. Optimal Control Modification is a novel adaptive law that can improve performance and robustness of adaptive control systems. A new technique has been developed to provide an analytical method for computing time delay stability margin for adaptive control systems.
Tsui, Po-Hsiang; Wan, Yung-Liang; Huang, Chih-Chung; Wang, Ming-Chen
2010-10-01
The Nakagami parameter is associated with the Nakagami distribution estimated from ultrasonic backscattered signals and closely reflects the scatterer concentrations in tissues. There is an interest in exploring the possibility of enhancing the ability of the Nakagami parameter to characterize tissues. In this paper, we explore the effect of adaptive threshold filtering, based on the noise-assisted empirical mode decomposition of the ultrasonic backscattered signals, on the Nakagami parameter as a function of scatterer concentration, with the aim of improving the Nakagami parameter's performance. We carried out phantom experiments using 5 MHz focused and nonfocused transducers. Before filtering, the dynamic ranges of the Nakagami parameter, estimated using focused and nonfocused transducers between the scatterer concentrations of 2 and 32 scatterers/mm3, were 0.44 and 0.1, respectively. After filtering, the dynamic ranges of the Nakagami parameter, using the focused and nonfocused transducers, were 0.71 and 0.79, respectively. The experimental results showed that the adaptive threshold filter makes the Nakagami parameter measured by a focused transducer more sensitive to the variation in the scatterer concentration. The proposed method also endows the Nakagami parameter measured by a nonfocused transducer with the ability to differentiate various scatterer concentrations. However, the Nakagami parameters estimated by focused and nonfocused transducers after adaptive threshold filtering have different physical meanings: the former represents the statistics of signals backscattered from unresolvable scatterers while the latter is associated with stronger resolvable scatterers or local inhomogeneity due to scatterer aggregation.
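The Nakagami parameter itself is commonly estimated from the moments of the echo envelope; a minimal moment-based sketch on synthetic Nakagami-distributed samples (not ultrasound data) is:

```python
import numpy as np

def nakagami_m(envelope):
    """Moment-based (inverse normalized variance) Nakagami parameter:
    m = E[R^2]^2 / Var(R^2), where R is the backscattered envelope."""
    r2 = envelope ** 2
    return np.mean(r2) ** 2 / np.var(r2)

rng = np.random.default_rng(0)
m_true, omega = 1.5, 1.0
# Nakagami-m samples via R = sqrt(G), G ~ Gamma(shape=m, scale=omega/m)
r = np.sqrt(rng.gamma(m_true, omega / m_true, size=200_000))
m_est = nakagami_m(r)
```

The adaptive threshold filtering studied in the paper would be applied to the backscattered signals before this estimation step.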
Issac, Ashish; Partha Sarathi, M; Dutta, Malay Kishore
2015-11-01
Glaucoma is an optic neuropathy which is one of the main causes of permanent blindness worldwide. This paper presents an automatic image-processing-based method for detection of glaucoma from digital fundus images. In the proposed work, discriminatory parameters of glaucoma infection, such as the cup-to-disc ratio (CDR), neuro-retinal rim (NRR) area, and blood vessels in different regions of the optic disc, have been used as features and fed as inputs to learning algorithms for glaucoma diagnosis. These features, which change discriminatively with the occurrence of glaucoma, are strategically used for training the classifiers to improve the accuracy of identification. The segmentation of the optic disc and cup is based on an adaptive threshold of the pixel intensities lying in the optic nerve head region. Unlike existing methods, the proposed algorithm uses an adaptive threshold derived from local features of the fundus image for segmentation of the optic cup and optic disc, making it invariant to image quality and noise content, which may lead to wider acceptability. The experimental results indicate that such features are more significant than the statistical or textural features considered in existing works. The proposed work achieves an accuracy of 94.11% with a sensitivity of 100%. A comparison of the proposed work with existing methods indicates that the proposed approach has improved accuracy in classifying glaucoma from a digital fundus image, which may be considered clinically significant.
Graded-threshold parametric response maps: towards a strategy for adaptive dose painting
NASA Astrophysics Data System (ADS)
Lausch, A.; Jensen, N.; Chen, J.; Lee, T. Y.; Lock, M.; Wong, E.
2014-03-01
Purpose: To modify the single-threshold parametric response map (ST-PRM) method for predicting treatment outcomes in order to facilitate its use for guidance of adaptive dose painting in intensity-modulated radiotherapy. Methods: Multiple graded thresholds were used to extend the ST-PRM method (Nat. Med. 2009;15(5):572-576) such that the full functional change distribution within tumours could be represented with respect to multiple confidence interval estimates for functional changes in similar healthy tissue. The ST-PRM and graded-threshold PRM (GT-PRM) methods were applied to functional imaging scans of 5 patients treated for hepatocellular carcinoma. Pre and post-radiotherapy arterial blood flow maps (ABF) were generated from CT-perfusion scans of each patient. ABF maps were rigidly registered based on aligning tumour centres of mass. ST-PRM and GT-PRM analyses were then performed on overlapping tumour regions within the registered ABF maps. Main findings: The ST-PRMs contained many disconnected clusters of voxels classified as having a significant change in function. While this may be useful to predict treatment response, it may pose challenges for identifying boost volumes or for informing dose-painting by numbers strategies. The GT-PRMs included all of the same information as ST-PRMs but also visualized the full tumour functional change distribution. Heterogeneous clusters in the ST-PRMs often became more connected in the GT-PRMs by voxels with similar functional changes. Conclusions: GT-PRMs provided additional information which helped to visualize relationships between significant functional changes identified by ST-PRMs. This may enhance ST-PRM utility for guiding adaptive dose painting.
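A graded-threshold classification of voxel-wise functional changes can be sketched as a simple binning step; the ABF changes and confidence-interval thresholds below are invented placeholders, not values from the study:

```python
import numpy as np

def graded_prm(delta_abf, thresholds):
    """Classify each voxel's functional change against a graded set of
    confidence thresholds: grade 0 = no significant change, higher grades =
    larger absolute change (the GT-PRM idea, in miniature)."""
    return np.digitize(np.abs(delta_abf), sorted(thresholds))

# Hypothetical pre-to-post ABF changes for five voxels
delta = np.array([-40.0, -5.0, 3.0, 12.0, 55.0])
grades = graded_prm(delta, thresholds=[10.0, 25.0, 50.0])
```

A single-threshold PRM corresponds to `thresholds` with one entry; adding graded thresholds preserves that classification while exposing the full change distribution.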
Bauer, Robert; Gharabaghi, Alireza
2015-01-01
Restorative brain-computer interfaces (BCI) are increasingly used to provide feedback of neuronal states in a bid to normalize pathological brain activity and achieve behavioral gains. However, patients and healthy subjects alike often show a large variability, or even inability, of brain self-regulation for BCI control, known as BCI illiteracy. Although current co-adaptive algorithms are powerful for assistive BCIs, their inherent class switching clashes with the operant conditioning goal of restorative BCIs. Moreover, due to the treatment rationale, the classifier of restorative BCIs usually has a constrained feature space, thus limiting the possibility of classifier adaptation. In this context, we applied a Bayesian model of neurofeedback and reinforcement learning for different threshold selection strategies to study the impact of threshold adaptation of a linear classifier on optimizing restorative BCIs. For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. We could thus show that threshold adaptation can improve reinforcement learning, particularly in cases of BCI illiteracy. Finally, on the basis of information-theory, we provided an explanation for the achieved benefits of adaptive threshold setting. PMID:25729347
Threshold Region Performance Prediction for Adaptive Matched Field Processing Localization
2007-11-02
significant non-local estimation errors at low signal-to-noise ratios (SNRs), errors not modeled by traditional localization measures such as the Cramer... as a function of SNR, for apertures and environments of interest. Particular attention will be given to the "threshold SNR" (below which localization performance degrades rapidly due to global estimation errors) and to the minimum SNR required to achieve acceptable range/depth localization. Initial...
Positive-negative corresponding normalized ghost imaging based on an adaptive threshold
NASA Astrophysics Data System (ADS)
Li, G. L.; Zhao, Y.; Yang, Z. H.; Liu, X.
2016-11-01
Ghost imaging (GI) technology has attracted increasing attention as a new imaging technique in recent years. However, the signal-to-noise ratio (SNR) of GI with pseudo-thermal light needs to be improved before it meets engineering application demands. We therefore propose a new scheme, called positive-negative correspondence normalized GI based on an adaptive threshold (PCNGI-AT), to achieve good performance with a smaller amount of data. This work exploits the advantages of both normalized GI (NGI) and positive-negative correspondence GI (P-NCGI). The correctness and feasibility of the scheme were proved in theory before we designed an adaptive threshold selection method, in which the parameter of the object-signal selection condition is replaced by the normalized value. The simulation and experimental results reveal that the SNR of the proposed scheme is better than that of time-correspondence differential GI (TCDGI), while avoiding calculation of the correlation matrix and reducing the amount of data used. The proposed method will make GI far more practical in engineering applications.
A study of the threshold method utilizing raingage data
NASA Technical Reports Server (NTRS)
Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David
1993-01-01
The threshold method for estimation of area-average rain rate relies on determination of the fractional area where rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rain-rate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of the optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by assuming lognormal distributions with different scale parameters and the same shape parameter.
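The core of the threshold method (estimate area-average rain rate as a fixed coefficient times the fractional area above the threshold) can be sketched on synthetic lognormal rain rates; every distribution parameter below is invented for illustration, not taken from the Darwin data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
# Synthetic gauge network: 30% of gauges raining, lognormal conditional rates
conditional = np.exp(rng.normal(1.0, 1.2, size=n))
rates = np.where(rng.random(n) < 0.3, conditional, 0.0)   # mm/h

tau = 10.0                                # the 10 mm/h threshold from the study
train, test = rates[:n // 2], rates[n // 2:]

# Calibrate the threshold coefficient S on one sample ...
s_coeff = train.mean() / np.mean(train > tau)
# ... then estimate area-average rain from fractional coverage alone
estimate = s_coeff * np.mean(test > tau)
truth = test.mean()
```

The method works to the extent that `s_coeff` is stable across regimes, which is exactly the sensitivity the study probes.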
Parallel multilevel adaptive methods
NASA Technical Reports Server (NTRS)
Dowell, B.; Govett, M.; Mccormick, S.; Quinlan, D.
1989-01-01
The progress of a project for the design and analysis of a multilevel adaptive algorithm (AFAC) targeted for the Navier-Stokes Computer is discussed. Results of initial timing tests of AFAC, coupled with multigrid and an efficient load balancer, on a 16-node Intel iPSC/2 hypercube are presented.
Automated object extraction from remote sensor image based on adaptive thresholding technique
NASA Astrophysics Data System (ADS)
Zhao, Tongzhou; Ma, Shuaijun; Li, Jin; Ming, Hui; Luo, Xiaobo
2009-10-01
Detection and extraction of dim, small moving objects in infrared image sequences is an interesting research area. A system for detecting dim, small moving targets in IR image sequences is presented, and a new high-performance algorithm for extracting small moving targets from infrared image sequences containing cloud clutter is proposed in this paper. The method achieves better detection precision than several other methods, and the computation can be carried out by two independent units. The novelty of the algorithm is that it applies adaptive thresholding to the small moving targets in both the spatial and temporal domains. Experimental results show that the presented algorithm achieves high detection precision.
Matrix Recipes for Hard Thresholding Methods
2012-11-07
present below some characteristic examples for the linear operator A. Matrix Completion (MC): as a motivating example, consider the famous Netflix ... basis independent models from point queries via low-rank methods. Technical report, EPFL, 2012. [8] J. Bennett and S. Lanning. The Netflix prize. In ...
Pavement crack identification based on automatic threshold iterative method
NASA Astrophysics Data System (ADS)
Lu, Guofeng; Zhao, Qiancheng; Liao, Jianguo; He, Yongbiao
2017-01-01
Crack detection is an important issue in concrete infrastructure. First, the accuracy of crack geometry parameter measurement, and hence of the detection system as a whole, is directly affected by the extraction accuracy. Because cracks are unpredictable, random, and irregular, it is difficult to establish a recognition model for them. Second, various kinds of image noise, caused by irregular lighting conditions, dark spots, freckles, and bumps, degrade crack detection accuracy. In this paper the peak threshold selection method is improved: enhancement, smoothing, and denoising are performed before iterative threshold selection, which allows the threshold value to be selected automatically, in real time, and stably.
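An iterative threshold selection of the kind referred to here is often implemented as the classic isodata iteration; a minimal sketch on a synthetic bimodal intensity sample (the crack and pavement intensities are invented) is:

```python
import numpy as np

def iterative_threshold(pixels, tol=0.5):
    """Classic iterative (isodata) threshold selection: repeatedly set T to
    the midpoint of the mean intensities below and above the current T."""
    t = pixels.mean()
    while True:
        lo, hi = pixels[pixels <= t], pixels[pixels > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

rng = np.random.default_rng(0)
# Synthetic "image": dark cracks (~40) on a brighter pavement surface (~180)
pixels = np.concatenate([rng.normal(40, 10, 2_000), rng.normal(180, 20, 8_000)])
t = iterative_threshold(pixels)
```

In the paper's pipeline, the enhancement, smoothing, and denoising steps would precede this iteration so that the two intensity modes are well separated.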
Matheoud, Roberta; Della Monica, Patrizia; Loi, Gianfranco; Vigna, Luca; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco
2011-01-30
The purpose of this study was to analyze the behavior of a contouring algorithm for PET images based on adaptive thresholding depending on lesions size and target-to-background (TB) ratio under different conditions of image reconstruction parameters. Based on this analysis, the image reconstruction scheme able to maximize the goodness of fit of the thresholding algorithm has been selected. A phantom study employing spherical targets was designed to determine slice-specific threshold (TS) levels which produce accurate cross-sectional areas. A wide range of TB ratio was investigated. Multiple regression methods were used to fit the data and to construct algorithms depending both on target cross-sectional area and TB ratio, using various reconstruction schemes employing a wide range of iteration number and amount of postfiltering Gaussian smoothing. Analysis of covariance was used to test the influence of iteration number and smoothing on threshold determination. The degree of convergence of ordered-subset expectation maximization (OSEM) algorithms does not influence TS determination. Among these approaches, the OSEM at two iterations and eight subsets with a 6-8 mm post-reconstruction Gaussian three-dimensional filter provided the best fit with a coefficient of determination R² = 0.90 for cross-sectional areas ≤ 133 mm² and R² = 0.95 for cross-sectional areas > 133 mm². The amount of post-reconstruction smoothing has been directly incorporated in the adaptive thresholding algorithms. The feasibility of the method was tested in two patients with lymph node FDG accumulation and in five patients using the bladder to mimic an anatomical structure of large size and uniform uptake, with satisfactory results. Slice-specific adaptive thresholding algorithms look promising as a reproducible method for delineating PET target volumes with good accuracy.
Unipolar Terminal-Attractor Based Neural Associative Memory with Adaptive Threshold
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)
1996-01-01
A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration of the unipolar binary neuron states with terminal attractors, in order to reduce spurious states in a Hopfield associative-memory neural network, and by using the inner-product approach, perfect convergence and correct retrieval are achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
Unipolar terminal-attractor based neural associative memory with adaptive threshold
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)
1993-01-01
A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration of the unipolar binary neuron states with terminal attractors, in order to reduce spurious states in a Hopfield associative-memory neural network, and by using the inner-product approach, perfect convergence and correct retrieval are achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
Adapting to a changing environment: non-obvious thresholds in multi-scale systems.
Perryman, Clare; Wieczorek, Sebastian
2014-10-08
Many natural and technological systems fail to adapt to changing external conditions and move to a different state if the conditions vary too fast. Such 'non-adiabatic' processes are ubiquitous, but little understood. We identify these processes with a new nonlinear phenomenon: an intricate threshold where a forced system fails to adiabatically follow a changing stable state. In systems with multiple time scales, we derive existence conditions that show such thresholds to be generic, but non-obvious, meaning they cannot be captured by traditional stability theory. Rather, the phenomenon can be analysed using concepts from modern singular perturbation theory: folded singularities and canard trajectories, including composite canards. Thus, non-obvious thresholds should explain the failure to adapt to a changing environment in a wide range of multi-scale systems including: tipping points in the climate system, regime shifts in ecosystems, excitability in nerve cells, adaptation failure in regulatory genes and adiabatic switching in technology.
NASA Astrophysics Data System (ADS)
Morshed, M. N.; Khatun, S.; Kamarudin, L. M.; Aljunid, S. A.; Ahmad, R. B.; Zakaria, A.; Fakir, M. M.
2017-03-01
Spectrum saturation is a major issue in wireless communication systems all over the world. A huge number of users joins the existing fixed frequency bands each day, but the bandwidth is not increasing. These demands call for efficient and intelligent use of the spectrum, and Cognitive Radio (CR) is a leading candidate to provide it. Spectrum sensing in a wireless heterogeneous network is a fundamental problem: the presence of primary users' signals must be detected in CR networks. In order to protect primary users (PUs) from harmful interference, the spectrum sensing scheme is required to perform well even in low signal-to-noise ratio (SNR) environments. Meanwhile, the sensing period is usually required to be short enough that secondary (unlicensed) users (SUs) can fully utilize the available spectrum. CR networks can be designed to manage the radio spectrum more efficiently by utilizing the spectrum holes in primary users' licensed frequency bands. In this paper, we propose an adaptive threshold detection method to detect the presence of the PU signal using a free space path loss (FSPL) model in a 2.4 GHz WLAN network. The model is designed for mobile sensors embedded in smartphones; the mobile sensors act as SUs while the existing WLAN network (channels) acts as the PU. The theoretical results show that the desired threshold detection range of the mobile sensors mainly depends on the noise floor level of the location in consideration.
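A minimal sketch of an FSPL-based threshold rule is given below; the transmit power, distance, and detection margin are hypothetical, and the abstract's noise-floor dependence is reduced to a fixed margin for illustration:

```python
from math import log10

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c),
    with d in metres and f in Hz (the constant works out to -147.55 dB)."""
    return 20.0 * log10(distance_m) + 20.0 * log10(freq_hz) - 147.55

def detection_threshold_dbm(tx_power_dbm, distance_m, freq_hz, margin_db=3.0):
    """Adaptive detection threshold: expected received power at this range,
    backed off by a safety margin."""
    return tx_power_dbm - fspl_db(distance_m, freq_hz) - margin_db

# Hypothetical 2.4 GHz WLAN access point at 20 dBm, sensed from 10 m away
thr = detection_threshold_dbm(20.0, 10.0, 2.4e9)
```

In a fuller model the margin would itself be set from the measured noise floor at the sensing location, which is the quantity the paper identifies as decisive.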
Fast method for dynamic thresholding in volume holographic memories
NASA Astrophysics Data System (ADS)
Porter, Michael S.; Mitkas, Pericles A.
1998-11-01
It is essential for parallel optical memory interfaces to incorporate processing that dynamically differentiates between data-bit values. These thresholding points will vary as a result of system noise caused by contrast fluctuations, variations in data page composition, reference beam misalignment, etc. To maintain reasonable data integrity it is necessary to select the threshold close to its optimal level. In this paper, a neural network (NN) approach is proposed as a fast method of determining the threshold to meet the required transfer rate. The multi-layered perceptron network can be incorporated as part of a smart photodetector array (SPA). Other methods have suggested performing the operation by means of a histogram or by use of statistical information. Those approaches fail in that they unnecessarily switch to a 1-D paradigm; in that serial domain, global thresholding is pointless since sequence detection could be applied. The discussed approach is a parallel solution with less overhead than multi-rail encoding. As part of this method, a small set of values is designated as threshold-determination data bits; these are interleaved with the information data bits and are used as inputs to the NN. The approach has been tested using both simulated data and data obtained from a volume holographic memory system. Results show convergence of the training and an ability to generalize to untrained data for binary and multi-level gray-scale data-page images. Methodologies are discussed for improving performance through proper training-set selection.
Colour detection thresholds as a function of chromatic adaptation and light level.
Jennings, B J; Barbur, J L
2010-09-01
Colour threshold discrimination ellipses were measured for a number of states of chromatic adaptation and a range of luminance levels using the Colour Assessment and Diagnosis (CAD) test. An analysis of these results was carried out by examining the cone excitation signals along the cardinal axes that correspond to detection thresholds in the +L-M (reddish), -L+M (greenish), +S (bluish) and -S (yellowish) colour directions. The results reveal a strong linear relationship between the excitations induced by the adapting background field in each cone class and the corresponding changes needed for threshold detection. These findings suggest that the cone excitation change for threshold detection of colour signals is always the same for a given background excitation level (in any cone class), independent of the excitations generated in the other cone classes. These observations have been used to develop a model to predict colour detection thresholds for any specified background luminance and chromaticity within the range of values investigated in this study (e.g., luminances in the range 0.3 to 31 cd·m⁻² and chromaticities within the gamut of typical CRT displays). Predicted colour thresholds were found to be in close agreement with measured values with errors that do not, in general, exceed the measured within-subject variability.
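The reported linear relation can be sketched as a simple regression; the intercept and slope below are hypothetical, chosen only to show the form ΔE = a + b·E_bg and how such a law would be recovered from noisy threshold measurements:

```python
import numpy as np

# Illustrative (not fitted) linear law: the cone-excitation change needed
# for threshold detection grows linearly with the background excitation
# in that same cone class:  delta_E = a + b * E_background.
a, b = 0.002, 0.015                      # hypothetical intercept and slope
rng = np.random.default_rng(0)

E_bg = np.linspace(0.1, 10.0, 50)        # background excitations
thresholds = a + b * E_bg + rng.normal(0, 1e-4, E_bg.size)  # noisy "data"

slope, intercept = np.polyfit(E_bg, thresholds, 1)
print(round(slope, 3), round(intercept, 3))   # recovers ~0.015, ~0.002
```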
A method for fast feature extraction in threshold scans
NASA Astrophysics Data System (ADS)
Mertens, Marius C.; Ritman, James
2014-01-01
We present a fast, analytical method to calculate the threshold and noise parameters from a threshold scan. This is usually done by fitting a response function to the data, which is computationally very intensive. The runtime can be minimized by a hardware implementation, e.g. using an FPGA, which in turn requires minimizing the mathematical complexity of the algorithm so that it fits into the available resources on the FPGA. The systematic errors of the method are analyzed and reasonable parameter choices for use in practice are given.
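A minimal sketch of a moment-based variant of this idea, assuming (as is typical for threshold scans) that the hit fraction follows an error-function S-curve whose derivative is a Gaussian centred on the threshold; this is an illustration, not the authors' FPGA algorithm:

```python
import numpy as np
from scipy.special import erfc

# Synthetic threshold scan: hit fraction versus threshold setting.
mu, sigma = 30.0, 2.5                    # true threshold and noise
thr = np.linspace(20, 40, 401)
scan = 0.5 * erfc((thr - mu) / (sigma * np.sqrt(2)))

# Analytical moment method: no iterative fit, just weighted moments of
# the (negated) numerical derivative, which is approximately Gaussian.
w = -np.diff(scan)
x = 0.5 * (thr[:-1] + thr[1:])
mu_hat = np.sum(w * x) / np.sum(w)
sigma_hat = np.sqrt(np.sum(w * (x - mu_hat) ** 2) / np.sum(w))
print(round(mu_hat, 2), round(sigma_hat, 2))   # ~30.0, ~2.5
```

Sums and sums of squares like these map naturally onto accumulators in FPGA fabric, which is the kind of simplification the abstract refers to.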
Video object segmentation via adaptive threshold based on background model diversity
NASA Astrophysics Data System (ADS)
Boubekeur, Mohamed Bachir; Luo, SenLin; Labidi, Hocine; Benlefki, Tarek
2015-03-01
Background subtraction can be framed as a classification process over the incoming frames of a video stream, taking into account temporal information in some cases, spatial consistency in others, and, in recent years, both. This classification has mostly relied on a fixed threshold value. In this paper, a framework for background subtraction and moving object detection based on an adaptive threshold measure and a short/long frame differencing procedure is proposed. The framework explores adaptive thresholding using mean squared differences over a sampled background model. In addition, an intuitive update policy which is neither conservative nor blind is presented. The algorithm succeeds in extracting the moving foreground and isolating an accurate background.
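One way to realize a per-pixel adaptive threshold from a sampled background model can be sketched with synthetic data (the factor k and the scene below are illustrative assumptions, not the paper's exact measure):

```python
import numpy as np

def adaptive_mask(frame, bg_mean, bg_var, k=9.0):
    """Foreground where the squared difference from the background model
    exceeds k times the per-pixel background variance (an adaptive,
    per-pixel threshold instead of a fixed global one)."""
    return (frame - bg_mean) ** 2 > k * np.maximum(bg_var, 1e-6)

rng = np.random.default_rng(1)
bg = 50 + rng.normal(0, 2, (40, 40))             # true static background
samples = bg + rng.normal(0, 2, (10, 40, 40))    # sampled background model
bg_mean, bg_var = samples.mean(0), samples.var(0)

frame = bg + rng.normal(0, 2, (40, 40))
frame[10:20, 10:20] += 40                        # a bright moving object
mask = adaptive_mask(frame, bg_mean, bg_var)
print(mask[10:20, 10:20].mean())                 # ~1.0: object detected
```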
Olfactory Detection Thresholds and Adaptation in Adults with Autism Spectrum Condition
ERIC Educational Resources Information Center
Tavassoli, T.; Baron-Cohen, S.
2012-01-01
Sensory issues have been widely reported in Autism Spectrum Conditions (ASC). Since olfaction is one of the least investigated senses in ASC, the current studies explore olfactory detection thresholds and adaptation to olfactory stimuli in adults with ASC. 80 participants took part, 38 (18 females, 20 males) with ASC and 42 control participants…
Low-Threshold Active Teaching Methods for Mathematic Instruction
ERIC Educational Resources Information Center
Marotta, Sebastian M.; Hargis, Jace
2011-01-01
In this article, we present a large list of low-threshold active teaching methods categorized so the instructor can efficiently access and target the deployment of conceptually based lessons. The categories include teaching strategies for lecture on large and small class sizes; student action individually, in pairs, and groups; games; interaction…
A Threshold-Adaptive Reputation System on Mobile Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Tsai, Hsiao-Chien; Lo, Nai-Wei; Wu, Tzong-Chen
In recent years, the huge potential benefits of novel applications in mobile ad hoc networks (MANET) have been discussed extensively. However, without robust security mechanisms and systems to provide a safety shell over the MANET infrastructure, MANET applications can be vulnerable and easily compromised by malicious attackers. To detect misbehaved message routing and identify malicious attackers in MANET, reputation-based schemes have shown advantages in this area in terms of good scalability and a simple threshold-based detection strategy. We observed that previous reputation schemes generally use predefined thresholds which do not take into account the dynamics of behavior between nodes over a period of time. In this paper, we propose a Threshold-Adaptive Reputation System (TARS) to overcome the shortcomings of the static threshold strategy and improve overall MANET performance under misbehaved-routing attack. A fuzzy-based inference engine is introduced to evaluate the trustiness of a node's one-hop neighbors. Malicious nodes whose trust values are lower than the adaptive threshold are detected and filtered out by their honest neighbors during the trustiness evaluation process. Network simulation results show that TARS outperforms the other compared schemes under security attacks in most cases, while reducing the decrease of total packet delivery ratio by 67% in comparison with a MANET without a reputation system.
Adaptive RTS threshold for maximum network throughput in IEEE 802.11 DCF
NASA Astrophysics Data System (ADS)
Yan, Shaohu; Zhuo, Yongning; Wu, Shiqi; Guo, Wei
2004-04-01
The IEEE 802.11 medium access control (MAC) protocol provides shared access to the wireless channel. Its primary MAC technique, the distributed coordination function (DCF), includes two packet transmission schemes, namely the basic access and RTS/CTS access mechanisms. In a "hybrid" network combining the two schemes, packets with payloads longer than a given threshold (the RTS threshold) are transmitted using the RTS/CTS mechanism. Based on a detailed mathematical model, the average time spent in successful and unsuccessful transmissions is analyzed under the assumption of an ideal channel. The relation between network saturation throughput and the RTS threshold is then derived and expressed as a theoretical formula. We present numerical techniques to find the optimum RTS threshold that maximizes network capacity. An adaptive RTS threshold adjustment algorithm (ARTA), with which a station can automatically adjust its RTS threshold to the current optimum value, is also presented in detail, together with a special procedure that helps ARTA determine the number of stations. All theoretical analyses and algorithms are validated through computer simulation.
Pattern recognition with adaptive-thresholds for sleep spindle in high density EEG signals.
Gemignani, Jessica; Agrimi, Jacopo; Cheli, Enrico; Gemignani, Angelo; Laurino, Marco; Allegrini, Paolo; Landi, Alberto; Menicucci, Danilo
2015-01-01
Sleep spindles are electroencephalographic oscillations peculiar to non-REM sleep, related to the neuronal mechanisms underlying sleep restoration and learning consolidation. Based on their very distinctive morphology, sleep spindles can be visually recognized and detected, even though this approach can lead to significant mis-detections. For this reason, much effort has been put into developing a reliable algorithm for automatic spindle detection, and a number of methods, based on different techniques, have been tested via visual validation. This work aims at improving current pattern recognition procedures for sleep spindle detection by taking into account their physiological sources of variability. We provide a method, as a synthesis of the current state of the art, that improves dynamic threshold adaptation and is thus able to follow modifications of spindle characteristics as a function of sleep depth and inter-subject variability. The algorithm has been applied to physiological data recorded with high-density EEG in order to perform a validation based on visual inspection and on evaluation of expected results from normal night sleep in healthy subjects.
Future temperature in southwest Asia projected to exceed a threshold for human adaptability
NASA Astrophysics Data System (ADS)
Pal, Jeremy S.; Eltahir, Elfatih A. B.
2016-02-01
A human body may be able to adapt to extremes of dry-bulb temperature (commonly referred to simply as temperature) through perspiration and associated evaporative cooling, provided that the wet-bulb temperature (a combined measure of temperature and humidity, or degree of 'mugginess') remains below a threshold of 35 °C. This threshold defines a limit of survivability for a fit human under well-ventilated outdoor conditions and is lower for most people. Using an ensemble of high-resolution regional climate model simulations, we project that extremes of wet-bulb temperature in the region around the Arabian Gulf are likely to approach and exceed this critical threshold under the business-as-usual scenario of future greenhouse gas concentrations. Our results expose a specific regional hotspot where climate change, in the absence of significant mitigation, is likely to severely impact human habitability in the future.
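The wet-bulb temperature itself can be approximated from air temperature and relative humidity. The sketch below uses Stull's (2011) empirical fit, which is independent of this paper's climate-model methodology and valid only over a limited range of conditions at standard pressure:

```python
import math

def wet_bulb_stull(T_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (%) via Stull's (2011) empirical fit, valid
    roughly for RH 5-99% at sea-level pressure."""
    return (T_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(T_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

print(round(wet_bulb_stull(20.0, 50.0), 1))   # ~13.7, Stull's own example
print(round(wet_bulb_stull(46.0, 50.0), 1))   # hot, humid Gulf-like case
```

At moderate conditions the fit is accurate to a few tenths of a degree; near the 35 °C survivability threshold it should be treated only as a rough indicator.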
Milne, Roger Brent
1995-12-01
This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.
[Curvelet denoising algorithm for medical ultrasound image based on adaptive threshold].
Zhuang, Zhemin; Yao, Weike; Yang, Jinyao; Li, FenLan; Yuan, Ye
2014-11-01
Traditional denoising algorithms for ultrasound images lose many details and much weak edge information when suppressing speckle noise. A new adaptive-threshold denoising algorithm based on the curvelet transform is proposed in this paper. The algorithm utilizes the differences in the coefficients' local variance between textured and smooth regions in each layer of the ultrasound image to define fuzzy regions and membership functions, and then denoises the image using the adaptive threshold determined by the membership function. Experimental tests show that the algorithm reduces speckle noise effectively while retaining the detail information of the original image, and can thus greatly enhance the performance of B-mode ultrasound instruments.
NASA Astrophysics Data System (ADS)
Krasichkov, Alexander S.; Grigoriev, Eugene B.; Bogachev, Mikhail I.; Nifontov, Eugene M.
2015-10-01
We suggest an analytical approach to the adaptive thresholding in a shape anomaly detection problem. We find an analytical expression for the distribution of the cosine similarity score between a reference shape and an observational shape hindered by strong measurement noise that depends solely on the noise level and is independent of the particular shape analyzed. The analytical treatment is also confirmed by computer simulations and shows nearly perfect agreement. Using this analytical solution, we suggest an improved shape anomaly detection approach based on adaptive thresholding. We validate the noise robustness of our approach using typical shapes of normal and pathological electrocardiogram cycles hindered by additive white noise. We show explicitly that under high noise levels our approach considerably outperforms the conventional tactic that does not take into account variations in the noise level.
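The calibration-from-noise idea can be sketched empirically; this simulates the score distribution rather than using the paper's analytical expression, and the shapes and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 200)
reference = np.sin(t)                          # reference "shape"

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Calibrate an adaptive threshold from the noise level alone: simulate
# noisy copies of the reference and take a low percentile of their scores.
noise_sigma = 0.5
scores = [cos_sim(reference, reference + rng.normal(0, noise_sigma, t.size))
          for _ in range(500)]
threshold = np.percentile(scores, 1)           # ~1% false-alarm target

anomaly = np.sin(2 * t)                        # a genuinely different shape
s_anom = cos_sim(anomaly + rng.normal(0, noise_sigma, t.size), reference)
print(s_anom < threshold)                      # anomaly falls below threshold
```

Because the score distribution of noisy copies depends on the noise level, the threshold rises or falls with measurement quality, which is the adaptive behaviour the abstract describes.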
Method of adaptive artificial viscosity
NASA Astrophysics Data System (ADS)
Popov, I. V.; Fryazinov, I. V.
2011-09-01
A new finite-difference method for the numerical solution of the gas dynamics equations is proposed. The method is a uniform monotone finite-difference scheme of second-order approximation in time and space outside the domains of shock and compression waves, and is based on introducing an adaptive artificial viscosity (AAV) into the gas dynamics equations. In this paper the method is analyzed for 2D geometry. Test computations of the movement of contact discontinuities and shock waves and of the breakup of discontinuities are demonstrated.
An efficient threshold dynamics method for wetting on rough surfaces
NASA Astrophysics Data System (ADS)
Xu, Xianmin; Wang, Dong; Wang, Xiao-Ping
2017-02-01
The threshold dynamics method developed by Merriman, Bence and Osher (MBO) is an efficient method for simulating the motion by mean curvature flow when the interface is away from the solid boundary. Direct generalization of MBO-type methods to the wetting problem with interfaces intersecting the solid boundary is not easy because solving the heat equation in a general domain with a wetting boundary condition is not as efficient as it is with the original MBO method. The dynamics of the contact point also follows a different law compared with the dynamics of the interface away from the boundary. In this paper, we develop an efficient volume preserving threshold dynamics method for simulating wetting on rough surfaces. This method is based on minimization of the weighted surface area functional over an extended domain that includes the solid phase. The method is simple, stable with O(N log N) complexity per time step and is not sensitive to the inhomogeneity or roughness of the solid boundary.
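For intuition on the scheme being generalized, a minimal MBO step is: diffuse the characteristic function of the current set, then threshold at 1/2. Iterating approximates motion by mean curvature, so a disc should shrink. This sketch shows plain MBO only, without the wetting boundary condition or volume conservation of the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# One MBO step: diffuse the indicator function, then threshold at 1/2.
n = 200
y, x = np.mgrid[:n, :n]
phase = ((x - n / 2) ** 2 + (y - n / 2) ** 2 <= 60 ** 2).astype(float)

areas = [phase.sum()]
for _ in range(20):
    phase = (gaussian_filter(phase, sigma=3.0) > 0.5).astype(float)
    areas.append(phase.sum())
print(areas[0] > areas[-1])   # True: the disc shrinks under curvature flow
```

The Gaussian blur plays the role of a short heat-equation solve; the O(N log N) cost quoted in the abstract comes from performing that convolution with the FFT.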
Impact of sub and supra-threshold adaptation currents in networks of spiking neurons.
Colliaux, David; Yger, Pierre; Kaneko, Kunihiko
2015-12-01
Neuronal adaptation is the intrinsic capacity of the brain to change, by various mechanisms, its dynamical responses as a function of context. Such a phenomenon, widely observed in vivo and in vitro, is known to be crucial in homeostatic regulation of activity and gain control. The effects of adaptation have already been studied at the single-cell level, resulting from either voltage- or calcium-gated channels, both activated by spiking activity and modulating the dynamical responses of the neurons. In this study, by disentangling those effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional role of these two distinct components of adaptation on neuronal activity at various scales, from single-cell responses up to recurrent network dynamics, under stationary or non-stationary stimulation. The effects of slow currents on collective dynamics, such as modulation of population oscillations and reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks.
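A toy leaky integrate-and-fire neuron with a spike-triggered (supra-threshold) adaptation current illustrates the basic single-cell effect; all parameters are illustrative assumptions, not values from the study:

```python
def lif_spikes(adapt_b=0.0, t_max=1.0, dt=1e-4):
    """Leaky integrate-and-fire neuron (Euler integration) with an optional
    spike-triggered adaptation current w; adapt_b is the increment added
    to w at each spike (a simple supra-threshold adaptation mechanism)."""
    tau_m, tau_w = 0.02, 0.2      # membrane / adaptation time constants (s)
    v, w, I = 0.0, 0.0, 1.5       # state variables and constant drive
    n_spikes = 0
    for _ in range(int(t_max / dt)):
        v += dt * (-v + I - w) / tau_m   # membrane dynamics, drive reduced by w
        w += dt * (-w / tau_w)           # adaptation current decays slowly
        if v >= 1.0:                     # threshold crossing -> spike
            v = 0.0
            w += adapt_b                 # spike-triggered adaptation increment
            n_spikes += 1
    return n_spikes

print(lif_spikes(adapt_b=0.0), lif_spikes(adapt_b=0.5))
```

With the increment enabled, the slow current accumulates with each spike and throttles the firing rate, the hallmark of spike-frequency adaptation.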
NASA Astrophysics Data System (ADS)
Amanda, A. R.; Widita, R.
2016-03-01
The aim of this research is to compare several lung image segmentation methods based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR)). The methods compared were connected threshold, neighborhood connected, and threshold level set segmentation applied to lung images. These three methods require one important parameter, i.e. the threshold; the threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed, and the results were compared using the performance evaluation parameters computed in MATLAB. A segmentation method is considered to be of good quality if it has the smallest MSE value and the highest PSNR. The results show that four sample images match the criteria with connected threshold, while one sample favors threshold level set segmentation. It can therefore be concluded that the connected threshold method is better than the other two methods for these cases.
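The two evaluation metrics can be computed directly from image arrays; a minimal sketch with synthetic 8-bit images (the study used MATLAB, but the definitions are the same):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

gt = np.zeros((64, 64), dtype=np.uint8)
seg = np.full((64, 64), 16, dtype=np.uint8)   # constant error of 16 grey levels
print(round(mse(gt, seg), 1), round(psnr(gt, seg), 2))   # 256.0 24.05
```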
Evaluating tactile sensitivity adaptation by measuring the differential threshold of archers.
Kotani, Kentaro; Ito, Seiji; Miura, Toshihiro; Horii, Ken
2007-03-01
This study investigated the relationship between the force applied to a finger and the differential threshold of that force, and examined adaptation of tactile perception in archers who, by practicing archery daily, are exposed to circumstances requiring enhanced tactile perception and finger dexterity. For this purpose, a tactile display using an air jet was developed. The air was aimed at the center of the fingertip of the index finger, and the inner diameter of the nozzle was set to 3 mm. A psychophysical experiment was conducted to obtain the differential threshold from two subject groups, an archery athlete group and a control group, using six levels of standard stimuli ranging from 2.0 gf to 7.0 gf. As a result, the differential threshold of the archery group was significantly higher than that of the control group: the Weber ratio of the archery group remained around 0.13 while that of the control group was 0.10. The experiment also revealed that the differential threshold for archers exhibited less fluctuation between trials and between days, which implies that the tactile perception of archery athletes may be more stable than that of non-experienced subjects. This may be a plasticity property of tactile perception.
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem; the optimality condition is used to derive the modification via the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
Chang, L; He, S
2014-01-03
Adaptation is an important process by which sensory systems adjust their sensitivity to ensure appropriate information encoding. The sensitivity and kinetics of retinal ganglion cell (RGC) responses have been studied extensively using a brief flash superimposed on different but steady backgrounds. However, it is still unclear whether light adaptation exerts any effect on more complex response properties, such as response nonlinearity. In this study, we found that the latency of spike responses to a repeated flashing-spot stimulation increased by 30 ms in mouse ON α RGCs (an ON-type RGC is excited when a spot is turned on in the center of its receptive field). A single dimming event preceding the test flash on a steady adapting background could produce a similar increase in the latency of light responses. A simple computational model with a linear transformation of the light stimulus and a threshold-like nonlinearity could account for the experimental data. Moreover, the strength of the measured nonlinearity and the response latency were affected by the duration of light adaptation. The possible biological processes underlying this nonlinearity were explored. Voltage-clamp recording revealed the presence of the increased latency and threshold-like nonlinearity in the excitatory input of RGCs; however, no comparable nonlinearity was observed in the light responses of the ON cone bipolar cells. We further excluded GABAergic and glycinergic inhibition, N-methyl-D-aspartate receptor rectification and voltage-gated Na⁺ channels as potential sources of this nonlinearity by pharmacological experiments. Our results indicate the bipolar cell terminals as the potential site of the nonlinearity. Computational modeling constrained by the experimental data supports that conclusion and suggests voltage-sensitive Ca²⁺ channels and Ca²⁺-dependent vesicle release in the bipolar cell terminals as the mechanistic basis.
Perthame, Benoît; Gauduchon, Mathias
2010-09-01
Deterministic population models for adaptive dynamics are derived mathematically from individual-centred stochastic models in the limit of large populations. However, it is common that numerical simulations of the two models agree poorly, giving rather different behaviours in terms of evolution speeds and branching patterns. Stochastic simulations involve an extinction phenomenon operating through demographic stochasticity when the number of individual 'units' is small. Focusing on the class of integro-differential adaptive models, we include a similar notion in the deterministic formulations, a survival threshold, which allows phenotypical traits in the population to vanish when represented by few 'individuals'. Based on numerical simulations, we show that the survival threshold changes the solution drastically: (i) the evolution speed is much slower, (ii) the branching patterns are reduced continuously and (iii) these patterns are comparable to those obtained with stochastic simulations. The rescaled models can also be analysed theoretically; one can recover the concentration phenomena on well-separated Dirac masses through the constrained Hamilton-Jacobi equation in the limit of small mutations and large observation times.
Karmali, Faisal; Chaudhuri, Shomesh E; Yi, Yongwoo; Merfeld, Daniel M
2016-03-01
When measuring thresholds, careful selection of stimulus amplitude can increase efficiency by increasing the precision of psychometric fit parameters (e.g., decreasing the fit parameter error bars). To find efficient adaptive algorithms for psychometric threshold ("sigma") estimation, we combined analytic approaches, Monte Carlo simulations, and human experiments for a one-interval, binary forced-choice, direction-recognition task. To our knowledge, this is the first time analytic results have been combined and compared with either simulation or human results. Human performance was consistent with theory and not significantly different from simulation predictions. Our analytic approach provides a bound on efficiency, which we compared against the efficiency of standard staircase algorithms, a modified staircase algorithm with asymmetric step sizes, and a maximum likelihood estimation (MLE) procedure. Simulation results suggest that optimal efficiency at determining threshold is provided by the MLE procedure targeting a fraction correct level of 0.92, an asymmetric 4-down, 1-up staircase targeting between 0.86 and 0.92 or a standard 6-down, 1-up staircase. Psychometric test efficiency, computed by comparing simulation and analytic results, was between 41 and 58% for 50 trials for these three algorithms, reaching up to 84% for 200 trials. These approaches were 13-21% more efficient than the commonly used 3-down, 1-up symmetric staircase. We also applied recent advances to reduce accuracy errors using a bias-reduced fitting approach. Taken together, the results lend confidence that the assumptions underlying each approach are reasonable and that human threshold forced-choice decision making is modeled well by detection theory models and mimics simulations based on detection theory models.
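A small simulation illustrates why an n-down 1-up staircase targets a specific fraction correct: it converges to the stimulus level where p^n = 0.5, i.e. p = 0.5^(1/n), which is about 0.794 for 3-down 1-up. The psychometric model, step size and trial count below are illustrative, not the paper's exact simulation setup:

```python
import numpy as np
from math import erf, sqrt

def p_correct(stim, sigma=1.0):
    """Cumulative-Gaussian psychometric function for a binary
    direction-recognition task."""
    return 0.5 * (1 + erf(stim / (sigma * sqrt(2))))

def staircase(n_down=3, n_trials=400, start=2.0, step=0.1, seed=3):
    """n-down 1-up staircase: step down after n_down consecutive correct
    responses, step up after any error."""
    rng = np.random.default_rng(seed)
    level, run, levels = start, 0, []
    for _ in range(n_trials):
        levels.append(level)
        if rng.random() < p_correct(level):
            run += 1
            if run == n_down:
                level, run = max(level - step, 1e-6), 0
        else:
            level, run = level + step, 0
    return float(np.mean(levels[n_trials // 2:]))   # average of late levels

print(round(0.5 ** (1 / 3), 3))   # target fraction correct, ~0.794
print(round(staircase(), 2))       # hovers near the 79.4%-correct level
```

For a unit-sigma psychometric function, the 79.4%-correct point sits near stimulus level 0.82, so the late-trial average should settle in that neighbourhood.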
Boegel, Marco; Hoelter, Philip; Redel, Thomas; Maier, Andreas; Hornegger, Joachim; Doerfler, Arnd
2015-01-01
Subarachnoid hemorrhage due to a ruptured cerebral aneurysm is still a devastating disease. Planning of endovascular aneurysm therapy is increasingly based on hemodynamic simulations, necessitating reliable vessel segmentation and accurate assessment of vessel diameters. In this work, we propose a fully automatic, locally adaptive, gradient-based thresholding algorithm. Our approach consists of two steps: first, we estimate the parameters of a global thresholding algorithm using an iterative process; then, a locally adaptive version of the approach is applied using the estimated parameters. We evaluated both methods on 8 clinical 3D DSA cases, and additionally propose a way to select a reference segmentation based on 2D DSA measurements. For large vessels such as the internal carotid artery, our results show very high sensitivity (97.4%), precision (98.7%) and Dice coefficient (98.0%) with respect to our reference segmentation. Similar results (sensitivity: 95.7%, precision: 88.9%, Dice coefficient: 90.7%) are achieved for smaller vessels of approximately 1 mm diameter.
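The reported overlap metrics are straightforward to compute from binary masks; a minimal sketch with toy masks (not the paper's data):

```python
import numpy as np

def seg_scores(pred, gt):
    """Sensitivity, precision and Dice coefficient for binary masks."""
    tp = float(np.logical_and(pred, gt).sum())
    sens = tp / gt.sum()                       # fraction of truth recovered
    prec = tp / pred.sum()                     # fraction of prediction correct
    dice = 2 * tp / (pred.sum() + gt.sum())    # harmonic-mean-style overlap
    return sens, prec, dice

gt = np.zeros((8, 8), bool); gt[2:4, 2:4] = True       # 4 true voxels
pred = np.zeros((8, 8), bool); pred[2:4, 2:5] = True   # 6 predicted, 4 overlap
print(seg_scores(pred, gt))   # (1.0, 0.666..., 0.8)
```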
Seidel, David Ulrich; Flemming, Tobias Angelo; Park, Jonas Jae-Hyun; Remmert, Stephan
2015-01-01
Objective hearing threshold estimation by auditory steady-state responses (ASSR) can be accelerated by the use of narrow-band chirps and adaptive stimulus patterns. This modification has been examined in only a few clinical studies. In this study, clinical data are validated and extended, and the applicability of the method in routine audiological diagnostics is examined. In 60 patients (normal hearing and hearing impaired), ASSR and pure-tone audiometry (PTA) thresholds were compared. ASSR were evoked by binaural multi-frequency narrow-band chirps with adaptive stimulus patterns, and the precision and required testing time for hearing threshold estimation were determined. The average differences between ASSR and PTA thresholds were 18, 12, 17 and 19 dB for normal hearing (PTA ≤ 20 dB) and 5, 9, 9 and 11 dB for hearing impaired (PTA > 20 dB) at 500, 1,000, 2,000 and 4,000 Hz, respectively; the differences were significant at all frequencies with the exception of 1 kHz. Correlation coefficients between ASSR and PTA thresholds were 0.36, 0.47, 0.54 and 0.51 for normal hearing and 0.73, 0.74, 0.72 and 0.71 for hearing impaired at 500, 1,000, 2,000 and 4,000 Hz, respectively. Mean ASSR testing time was 33 ± 8 min. In conclusion, ASSR with narrow-band chirps and adaptive stimulus patterns is an efficient method for objective frequency-specific hearing threshold estimation. The precision of threshold estimation is most limited for milder hearing loss at 500 Hz. The required testing time is acceptable for application in everyday clinical routine.
An Active Contour Model Based on Adaptive Threshold for Extraction of Cerebral Vascular Structures
Wang, Jiaxin; Zhao, Shifeng; Liu, Zifeng; Tian, Yun; Duan, Fuqing; Pan, Yutong
2016-01-01
Cerebral vessel segmentation is essential and helpful for clinical diagnosis and related research. However, automatic segmentation of brain vessels remains challenging because of the variable vessel shape and the high complexity of vessel geometry. This study proposes a new active contour model (ACM) implemented by the level-set method for segmenting vessels from TOF-MRA data. The energy function of the new model, combining both region intensity and boundary information, is composed of two region terms, one boundary term and one penalty term. The global threshold representing the lower gray boundary of the target object by maximum intensity projection (MIP) is defined in the first region term, and it is used to guide the segmentation of the thick vessels. In the second term, a dynamic intensity threshold is employed to extract the tiny vessels. The boundary term is used to drive the contours to evolve towards boundaries with high gradients, and the penalty term is used to avoid reinitialization of the level-set function. Experimental results on 10 clinical brain data sets demonstrate that our method not only achieves a better Dice similarity coefficient than the global-threshold-based method and a localized hybrid level-set method, but is also able to extract whole cerebral vessel trees, including the thin vessels. PMID:27597878
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.
2016-01-01
With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold-adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants, no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
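The classical threshold component (Shamir-style secret sharing via Lagrange interpolation over a prime field) can be sketched as follows; this illustrates only the generic threshold mechanism, not the quantum OAM or m-bonacci coding parts of the scheme:

```python
import random

P = 2 ** 61 - 1   # a Mersenne prime; all arithmetic is in GF(P)

def make_shares(secret, k, n, seed=4):
    """Split `secret` into n shares so that any k of them reconstruct it:
    evaluate a random degree-(k-1) polynomial with f(0) = secret."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):        # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Modular inverse of den via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]) == 123456789)    # any 3 of the 5 shares suffice
print(reconstruct(shares[1:4]) == 123456789)
```

Fewer than k shares reveal nothing about the secret, which is the "threshold-value number of participants" property the abstract refers to.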
Adaptive windowed range-constrained Otsu method using local information
NASA Astrophysics Data System (ADS)
Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie
2016-01-01
An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reason why traditional thresholding methods do not perform well in the segmentation of complicated images is analyzed, and the influences of global and local thresholding on image segmentation are compared. Second, two methods that adaptively change the size of the local window according to local information are proposed and their characteristics are analyzed; the number of edge pixels in the local window of the binarized variance image is employed to adaptively change the local window size. Finally, the superiority of the proposed method over other methods, such as the range-constrained Otsu, the active contour model, the double Otsu, Bradley's method, and distance-regularized level-set evolution, is demonstrated. The experiments validate that the proposed method keeps more details and achieves a much more satisfactory area overlap measure than the other conventional methods.
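The local-window idea in the abstract above can be illustrated with a minimal local-mean binarization sketch (not the authors' algorithm; the window half-width `w` and `bias` are illustrative assumptions): each pixel is compared against the mean of its border-clipped neighborhood.

```python
def local_mean_binarize(img, w=3, bias=0.0):
    """Binarize a 2-D list of intensities by comparing each pixel to the
    mean of the (2w+1)x(2w+1) window around it (clipped at the borders)."""
    h, n = len(img), len(img[0])
    out = [[0] * n for _ in range(h)]
    for i in range(h):
        for j in range(n):
            # gather the window values, clipping at the image borders
            vals = [img[r][c]
                    for r in range(max(0, i - w), min(h, i + w + 1))
                    for c in range(max(0, j - w), min(n, j + w + 1))]
            out[i][j] = 1 if img[i][j] > sum(vals) / len(vals) + bias else 0
    return out

img = [[10, 10, 200, 200],
       [10, 10, 200, 200]]
assert local_mean_binarize(img) == [[0, 0, 1, 1], [0, 0, 1, 1]]
```

Adaptive variants like the one in the abstract grow or shrink `w` per pixel based on local cues (e.g., edge-pixel counts) instead of fixing it globally.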
Effects of yellow, orange and red filter glasses on the thresholds of a dark-adapted human eye.
Aarnisalo, E; Pehkonen, P
1990-04-01
Effects of 13 different yellow, orange and red (Schott) longpass filter glasses on the extrafoveal thresholds obtained by 3 normal subjects after dark adaptation were measured using a Goldmann-Weekers adaptometer. When filters GG400, GG420, GG435, GG455, GG475, GG495, OG515 and OG530 (cutting off radiation up to 527 nm) were used, there was no significant change in the threshold value. However, significantly higher threshold values were obtained with the filters OG550, OG570, OG590, RG610 and RG630.
NASA Astrophysics Data System (ADS)
Fan, C.; Zheng, B.; Myint, S. W.; Aggarwal, R.
2014-12-01
Cropping intensity is the number of crops grown per year per unit area of cropland. Since the 1970s, the Phoenix Active Management Area (AMA) has undergone rapid urbanization, mostly via land conversion from prime agricultural land to urban use. Agricultural intensification, or multiple cropping, has been observed globally as a response to the growing land pressure caused by urbanization and an exploding population. Nevertheless, increased cropping intensity has local, regional, and global environmental consequences such as degradation of water quality and soil fertility. Quantifying spatio-temporal patterns of cropping intensity is a first step towards understanding these environmental problems and developing effective and sustainable cropping strategies. In this study, an adaptive threshold method was developed to measure cropping intensity in the Phoenix AMA from 1995 to 2010 at five-year intervals. The method has several advantages: (1) it minimizes errors arising from missing data and noise; (2) it can distinguish growing cycles from multiple small false peaks in a vegetation index time series; and (3) it is flexible when dealing with temporal profiles with differing numbers of observations. The adaptive threshold approach measures cropping intensity effectively, with overall accuracies higher than 97%. Results indicate a dramatic decline in the area of total croplands, single crops, and double crops. A small land conversion from single crops to double crops was witnessed from 1995 to 2000, whereas the reverse trend was observed from 2005 to 2010. Changes in cropping intensity can affect local water consumption. Therefore, joint investigation of cropping patterns and agricultural water use can provide implications for future water demand, which is an increasingly critical issue in this rapidly expanding desert city.
t-Tests, F-tests and Otsu's methods for image thresholding.
Xue, Jing-Hao; Titterington, D Michael
2011-08-01
Otsu's binarization method is one of the most popular image-thresholding methods; Student's t-test is one of the most widely used statistical tests to compare two groups. This paper aims to stress the equivalence between Otsu's binarization method and the search for an optimal threshold that provides the largest absolute Student's t-statistic. It is then naturally demonstrated that the extension of Otsu's binarization method to multi-level thresholding is equivalent to the search for optimal thresholds that provide the largest F-statistic through one-way analysis of variance (ANOVA). Furthermore, general equivalences between some parametric image-thresholding methods and the search for optimal thresholds with the largest likelihood-ratio test statistics are briefly discussed.
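The claimed equivalence can be checked numerically with a small sketch (illustrative, not the paper's code): an exhaustive search over cut points shows that the between-class-variance maximizer and the |t|-statistic maximizer coincide when population-pooled variances are used, since |t| is a monotone function of the between-class variance for a fixed total variance.

```python
from statistics import mean, pvariance

def otsu_threshold(pixels):
    """Exhaustive search for the cut maximizing between-class variance."""
    best_t, best_sb = None, -1.0
    for t in sorted(set(pixels))[:-1]:          # each cut leaves both groups non-empty
        g0 = [p for p in pixels if p <= t]
        g1 = [p for p in pixels if p > t]
        w0, w1 = len(g0) / len(pixels), len(g1) / len(pixels)
        sb = w0 * w1 * (mean(g0) - mean(g1)) ** 2   # between-class variance
        if sb > best_sb:
            best_t, best_sb = t, sb
    return best_t

def t_test_threshold(pixels):
    """Exhaustive search for the cut maximizing |t| (population-pooled variance)."""
    best_t, best_abs_t = None, -1.0
    n = len(pixels)
    for t in sorted(set(pixels))[:-1]:
        g0 = [p for p in pixels if p <= t]
        g1 = [p for p in pixels if p > t]
        pooled = (len(g0) * pvariance(g0) + len(g1) * pvariance(g1)) / n
        if pooled == 0:
            return t    # perfectly separated groups: |t| is unbounded at this cut
        abs_t = abs(mean(g0) - mean(g1)) / (pooled * (1 / len(g0) + 1 / len(g1))) ** 0.5
        if abs_t > best_abs_t:
            best_t, best_abs_t = t, abs_t
    return best_t

pixels = [12, 13, 14, 15, 16, 40, 42, 45, 47, 50]
assert otsu_threshold(pixels) == t_test_threshold(pixels) == 16
```

Both searches pick the cut between the two intensity clusters, illustrating the paper's point without relying on any image library.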
A new stereo matching method based on threshold constrained minimum spanning tree
NASA Astrophysics Data System (ADS)
Cao, Hai; Ding, Yan; Du, Ming; Zhao, Liangjin; Yuan, Yating
2017-01-01
This paper proposes a novel dense stereo matching method based on TC-MST (Threshold Constrained Minimum Spanning Tree), which aims to improve the accuracy of distance measurement. Because the threshold has a great impact on the results of image segmentation, we adopt an iterative threshold method to select a better threshold. We then use the MST to calculate the cost aggregation and apply the winner-take-all algorithm to the aggregated costs to obtain the disparity. Finally, the proposed method is used in a distance-measuring system. The experimental results show that this method improves distance-measurement accuracy compared with BM (block matching).
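The iterative threshold selection mentioned above is commonly realized as the classic intermeans scheme; the sketch below is an assumed variant of that scheme, not the authors' exact implementation.

```python
def iterative_threshold(pixels, eps=0.5):
    """Intermeans iteration: start at the global mean, then repeatedly set the
    threshold to the midpoint of the two class means until it stabilizes."""
    t = sum(pixels) / len(pixels)              # start at the global mean
    while True:
        lo = [p for p in pixels if p <= t]
        hi = [p for p in pixels if p > t]
        if not lo or not hi:                   # degenerate split: stop
            return t
        new_t = 0.5 * (sum(lo) / len(lo) + sum(hi) / len(hi))
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

levels = [10, 12, 14, 200, 205, 210]
t = iterative_threshold(levels)
assert 14 < t < 200                            # lands between the two clusters
```

On bimodal data the iteration typically converges in a few steps to a cut between the modes.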
NASA Astrophysics Data System (ADS)
Guo, Dongchao; Trajanovski, Stojan; van de Bovenkamp, Ruud; Wang, Huijuan; Van Mieghem, Piet
2013-10-01
The interplay between disease dynamics on a network and the dynamics of the structure of that network characterizes many real-world systems of contacts. A continuous-time adaptive susceptible-infectious-susceptible (ASIS) model is introduced in order to investigate this interaction, where a susceptible node avoids infections by breaking its links to its infected neighbors while it enhances the connections with other susceptible nodes by creating links to them. When the initial topology of the network is a complete graph, an exact solution to the average metastable-state fraction of infected nodes is derived without resorting to any mean-field approximation. A linear scaling law of the epidemic threshold τc as a function of the effective link-breaking rate ω is found. Furthermore, the bifurcation nature of the metastable fraction of infected nodes of the ASIS model is explained. The metastable-state topology shows high connectivity and low modularity in two regions of the τ,ω plane for any effective infection rate τ>τc: (i) a “strongly adaptive” region with very high ω and (ii) a “weakly adaptive” region with very low ω. These two regions are separated from the other half-open elliptical-like regions of low connectivity and high modularity in a contour-line-like way. Our results indicate that the adaptation of the topology in response to disease dynamics suppresses the infection, while it promotes the network evolution towards a topology that exhibits assortative mixing, modularity, and a binomial-like degree distribution.
Arman, A Cyrus; Sampath, Alapakkam P
2012-05-01
The nervous system frequently integrates parallel streams of information to encode a broad range of stimulus strengths. In mammalian retina it is generally believed that signals generated by rod and cone photoreceptors converge onto cone bipolar cells prior to reaching the retinal output, the ganglion cells. Near absolute visual threshold a specialized mammalian retinal circuit, the rod bipolar pathway, pools signals from many rods and converges on depolarizing (AII) amacrine cells. However, whether subsequent signal flow to OFF ganglion cells requires OFF cone bipolar cells near visual threshold remains unclear. Glycinergic synapses between AII amacrine cells and OFF cone bipolar cells are believed to subsequently relay rod-driven signals to OFF ganglion cells. However, AII amacrine cells also make glycinergic synapses directly with OFF ganglion cells. To determine the route for signal flow near visual threshold, we measured the effect of the glycine receptor antagonist strychnine on response threshold in fully dark-adapted retinal cells. As shown previously, we found that the response threshold for OFF ganglion cells was elevated by strychnine. Surprisingly, strychnine did not elevate the response threshold in any subclass of OFF cone bipolar cell. Instead, in every OFF cone bipolar subclass strychnine suppressed tonic glycinergic inhibition without altering response threshold. Consistent with this lack of influence of strychnine, we found that the dominant input to OFF cone bipolar cells in darkness was excitatory and the response threshold of the excitatory input varied by subclass. Thus, in the dark-adapted mouse retina, the high absolute sensitivity of OFF ganglion cells cannot be explained by signal transmission through OFF cone bipolar cells.
A new orientation-adaptive interpolation method.
Wang, Qing; Ward, Rabab Kreidieh
2007-04-01
We propose an isophote-oriented, orientation-adaptive interpolation method. The proposed method employs an interpolation kernel that adapts to the local orientation of isophotes, and the pixel values are obtained through an oriented, bilinear interpolation. We show that, by doing so, the curvature of the interpolated isophotes is reduced, and, thus, zigzagging artifacts are largely suppressed. Analysis and experiments show that images interpolated using the proposed method are visually pleasing and almost artifact free.
The Method of Adaptive Comparative Judgement
ERIC Educational Resources Information Center
Pollitt, Alastair
2012-01-01
Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better.…
Nuismer, S L; MacPherson, A; Rosenblum, E B
2012-12-01
Genetic architecture plays an important role in the process of adaptation to novel environments. One example is the role of allelic dominance, where advantageous recessive mutations have a lower probability of fixation than advantageous dominant mutations. This classic observation, termed 'Haldane's sieve', has been well explored theoretically for single isolated populations adapting to new selective regimes. However, the role of dominance is less well understood for peripheral populations adapting to novel environments in the face of recurrent and maladaptive gene flow. Here, we use a combination of analytical approximations and individual-based simulations to explore how dominance influences the likelihood of adaptation to novel peripheral environments. We demonstrate that in the face of recurrent maladaptive gene flow, recessive alleles can fuel adaptation only when their frequency exceeds a critical threshold within the ancestral range.
Willingham, David G.; Naes, Benjamin E.; Heasler, Patrick G.; Zimmer, Mindy M.; Barrett, Christopher A.; Addleman, Raymond S.
2016-05-31
A novel approach to particle identification and particle isotope ratio determination has been developed for nuclear safeguards applications. This particle search approach combines an adaptive thresholding algorithm with a marker-controlled watershed segmentation (MCWS) transform, which improves the secondary ion mass spectrometry (SIMS) isotopic analysis of uranium-containing particle populations for nuclear safeguards applications. The Niblack-assisted MCWS approach (a.k.a. SEEKER) developed for this work has improved the identification of isotopically unique uranium particles under conditions that have historically presented significant challenges for SIMS image data processing techniques. Particles obtained from five NIST uranium certified reference materials (CRM U129A, U015, U150, U500 and U850) were successfully identified in regions of SIMS image data (1) where high variability in image intensity existed, (2) where particles were touching or in close proximity to one another and/or (3) where the magnitude of the ion signal for a given region was count limited. Analysis of the isotopic distributions of uranium-containing particles identified by SEEKER showed four distinct, accurately identified 235U enrichment distributions, corresponding to the NIST certified 235U/238U isotope ratios for CRM U129A/U015 (not statistically differentiated), U150, U500 and U850. Additionally, comparison of the minor uranium isotope (234U, 235U and 236U) atom percent values verified that, even in the absence of high-precision isotope ratio measurements, SEEKER could be used to segment isotopically unique uranium particles from SIMS image data. Although demonstrated specifically for SIMS analysis of uranium-containing particles for nuclear safeguards, SEEKER has application to a broad set of image processing challenges.
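The Niblack rule that SEEKER builds on computes a local threshold T = m + k·s from the mean m and standard deviation s of the pixel values in a window; a minimal sketch (the value of k is an illustrative default, not taken from the work above):

```python
def niblack_threshold(window, k=-0.2):
    """Niblack local threshold T = m + k*s for one window's pixel values.
    Negative k pulls the threshold below the local mean, which favors
    keeping bright foreground in noisy dark regions."""
    n = len(window)
    m = sum(window) / n
    s = (sum((v - m) ** 2 for v in window) / n) ** 0.5   # population std dev
    return m + k * s

# window [0, 0, 10, 10]: m = 5, s = 5, so T = 5 + (-0.2) * 5 = 4
assert abs(niblack_threshold([0, 0, 10, 10]) - 4.0) < 1e-9
```

In a full pipeline this is evaluated per sliding window, producing a threshold surface rather than a single global value.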
Restrictive Stochastic Item Selection Methods in Cognitive Diagnostic Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wang, Chun; Chang, Hua-Hua; Huebner, Alan
2011-01-01
This paper proposes two new item selection methods for cognitive diagnostic computerized adaptive testing: the restrictive progressive method and the restrictive threshold method. They are built upon the posterior weighted Kullback-Leibler (KL) information index but include additional stochastic components either in the item selection index or in…
Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases
Archibald, Richard K; Fann, George I; Shelton Jr, William Allison
2011-01-01
We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.
NASA Astrophysics Data System (ADS)
Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun
2016-05-01
The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other human-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of the GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window-width fitting. First, the white noise in the measured data is filtered using the wavelet threshold method. Then, the data are segmented using windows whose step lengths follow even logarithmic intervals. Within each window, data polluted by electromagnetic noise are identified according to the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm fits the attenuation curve of each window, and data polluted by non-stationary electromagnetic noise are replaced with the fitting results, so the non-stationary electromagnetic noise can be effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in the GREATEM signal can be effectively filtered using the wavelet threshold-exponential adaptive window-width fitting algorithm, which enhances the imaging quality.
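The wavelet-threshold step can be illustrated with a one-level Haar transform plus soft thresholding (a minimal sketch under assumed parameters; the wavelet family, decomposition depth and threshold rule used in the study above are not reproduced here):

```python
def haar_soft_denoise(x, thresh):
    """One-level Haar transform, soft-threshold the detail coefficients,
    then invert. len(x) must be even."""
    s2 = 2 ** 0.5
    # forward transform: pairwise averages (approx) and differences (detail)
    approx = [(x[i] + x[i + 1]) / s2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s2 for i in range(0, len(x), 2)]
    # soft thresholding: shrink each detail toward zero by `thresh`
    detail = [max(abs(d) - thresh, 0.0) * (1.0 if d >= 0 else -1.0) for d in detail]
    # inverse transform
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s2, (a - d) / s2])
    return out

# small pairwise jitter (details ~0.14) is below thresh=0.5, so each pair
# collapses to its local mean
y = haar_soft_denoise([1.0, 1.2, 0.9, 1.1], thresh=0.5)
assert all(abs(a - b) < 1e-9 for a, b in zip(y, [1.1, 1.1, 1.0, 1.0]))
```

Practical pipelines use deeper decompositions and data-driven thresholds (e.g., noise-level estimates), but the shrink-and-invert structure is the same.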
NASA Astrophysics Data System (ADS)
Sung, J. H.; Chung, E.-S.
2014-09-01
This study developed a streamflow drought severity-duration-frequency (SDF) curve that is analogous to the well-known depth-duration-frequency (DDF) curve used for rainfall. Severity was defined as the total water-deficit volume below a target threshold for a given drought duration. Furthermore, this study compared the SDF curves of four threshold level methods: fixed, monthly, daily, and desired yield for water use. The fixed threshold level in this study is the 70th percentile value (Q70) of the flow duration curve (FDC), which is compiled using all available daily streamflows. The monthly threshold level is the monthly varying Q70 value of the monthly FDC. The daily variable threshold is the Q70 of the FDC obtained from the antecedent 365 daily streamflows. The desired-yield threshold, determined by the central government, consists of domestic, industrial, and agricultural water uses and environmental in-stream flow. As a result, the durations and severities from the desired-yield threshold level were completely different from those for the fixed, monthly and daily levels. In other words, the desired-yield threshold can identify streamflow droughts using the total water deficit relative to hydrological and socioeconomic targets, whereas the fixed, monthly, and daily streamflow thresholds derive deficiencies or anomalies from the average of the historical streamflow. Based on individual frequency analyses, the SDF curves for the four thresholds were developed to quantify the relation among severities, durations, and frequencies. The SDF curves from the fixed, daily, and monthly thresholds have comparatively short durations because the annual maximum durations vary from 30 to 96 days, whereas those from the desired-yield threshold have much longer durations of up to 270 days. For the additional analysis, the return-period-duration curve was also derived to quantify the extent of the drought duration. These curves can be an effective tool to identify
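Given a chosen threshold level, the severity-duration pairs underlying an SDF curve can be extracted with a simple deficit-accumulation sketch (illustrative; the pooling of minor inter-event interruptions used in practice is omitted):

```python
def drought_events(flows, threshold):
    """Split a daily flow series into drought events: consecutive days below
    the threshold form one event; severity is the accumulated deficit volume
    (threshold minus flow, summed over the event)."""
    events, dur, sev = [], 0, 0.0
    for q in flows:
        if q < threshold:
            dur += 1
            sev += threshold - q
        elif dur:                       # event just ended
            events.append((dur, sev))
            dur, sev = 0, 0.0
    if dur:                             # series ends mid-drought
        events.append((dur, sev))
    return events

# two events: days [3, 2] (deficit 1 + 2) and day [1] (deficit 3)
assert drought_events([5, 3, 2, 5, 1, 5], threshold=4) == [(2, 3.0), (1, 3.0)]
```

The annual maxima of these (duration, severity) pairs are what then enter the frequency analysis for each threshold definition.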
Twelve automated thresholding methods for segmentation of PET images: a phantom study
NASA Astrophysics Data System (ADS)
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.
2012-06-01
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming, while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms that are classical in the fields of optical character recognition, tissue engineering and non-destructive testing of high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information about the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on a clinical PET/CT and on a small-animal PET scanner, with three different signal-to-background ratios. Images were segmented with the 12 automatic thresholding algorithms and the results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. The Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
Cao, Yihui; Yao, Di
2016-01-01
We propose a dual-threshold method based on a strategic combination of the RGB and HSV color spaces for white blood cell (WBC) segmentation. The proposed method consists of three main parts: preprocessing, threshold segmentation, and postprocessing. In the preprocessing part, we obtain two images for further processing: a contrast-stretched gray image and an H component image from the transformed HSV color space. In the threshold segmentation part, a dual-threshold method is proposed to improve conventional single-threshold approaches, and a golden section search method is used to determine the optimal thresholds. In the postprocessing part, mathematical morphology and median filtering are utilized to denoise and remove incomplete WBCs. The proposed method was tested on segmenting the lymphoblasts in a public Acute Lymphoblastic Leukemia (ALL) image dataset. The results show that the performance of the proposed method is better than that of single-threshold approaches performed independently in RGB and HSV color space, and the overall single-WBC segmentation accuracy reaches 97.85%, showing good prospects for subsequent lymphoblast classification and ALL diagnosis. PMID:27313659
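One plausible reading of the dual-threshold combination is a logical AND of a gray-level test and a hue test; the sketch below is an illustrative assumption (including the comparison directions and threshold names), not the authors' exact rule or values:

```python
def dual_threshold_mask(gray, hue, t_gray, t_hue):
    """Keep a pixel when the contrast-stretched gray value is at most t_gray
    AND the HSV hue component is at least t_hue (assumed convention).
    gray and hue are same-shaped 2-D lists."""
    return [[1 if g <= t_gray and h >= t_hue else 0
             for g, h in zip(g_row, h_row)]
            for g_row, h_row in zip(gray, hue)]

# dark pixel with high hue passes; bright pixel is rejected
mask = dual_threshold_mask([[50, 200]], [[0.7, 0.7]], t_gray=100, t_hue=0.5)
assert mask == [[1, 0]]
```

The two thresholds would then be tuned jointly, e.g. by the golden section search the abstract mentions, against a segmentation quality criterion.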
Threshold selection for classification of MR brain images by clustering method
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita
2015-12-01
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known methods for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy subjects and those with multiple sclerosis disease. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (or the area of white objects in the binary image) has been determined. These pixel numbers represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
Kawamoto, Tatsuro; Kabashima, Yoshiyuki
2015-06-01
Investigating the performance of different methods is a fundamental problem in graph partitioning. In this paper, we estimate the so-called detectability threshold for the spectral method with both un-normalized and normalized Laplacians in sparse graphs. The detectability threshold is the critical point at which the result of the spectral method is completely uncorrelated to the planted partition. We also analyze whether the localization of eigenvectors affects the partitioning performance in the detectable region. We use the replica method, which is often used in the field of spin-glass theory, and focus on the case of bisection. We show that the gap between the estimated threshold for the spectral method and the threshold obtained from Bayesian inference is considerable in sparse graphs, even without eigenvector localization. This gap closes in a dense limit.
NASA Astrophysics Data System (ADS)
Kim, H. W.; Yeom, J. M.; Woo, S. H.; Kim, Y. S.; Chae, T. B.
2015-12-01
The Geostationary Ocean Color Imager (GOCI), launched on 27 June 2010, was developed to detect, monitor, and predict ocean phenomena around Korea. Although GOCI was designed to observe the ocean environment, it also collects an enormous amount of scientific data over the land surface. However, to utilize these data for land applications it is extremely important to identify the cloud pixels over the land surface. Over land, the reflectance variation is higher and the surface characteristics are more varied than over the ocean. Furthermore, the 8 GOCI bands include no infrared (IR) channel, which would be useful for detecting thin cloud and water vapor via the cloud-top temperature. Nevertheless, GOCI has the potential to detect cloud using temporal variation, owing to the observation characteristics of a geostationary satellite. The purpose of this study is to estimate cloud masking maps over the Korean Peninsula. For cloud masking with GOCI, the following methods are used: simple thresholds on reflectance and band ratios, adaptive thresholds on multi-temporal images, and a stable multi-temporal vegetation image. For the adaptive threshold, the high variability of cloudy pixels relative to surface reflectance is exploited by comparing against surface reflectance in a temporally based analysis. In this study, multi-temporal NDVI data processed by bidirectional reflectance distribution function (BRDF) modeling are also used to account for the relative solar-target-sensor geometry during the daytime. This result will play a substantial role in land applications using GOCI data.
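A multi-temporal adaptive threshold of this kind can be sketched as flagging a pixel cloudy when its reflectance departs strongly from its clear-sky temporal statistics (the k-sigma rule and all values here are assumptions for illustration, not taken from the study):

```python
def cloud_flag(clear_history, today, k=3.0):
    """Flag a pixel as cloudy when today's reflectance sits more than k
    standard deviations above the pixel's clear-sky temporal mean."""
    n = len(clear_history)
    m = sum(clear_history) / n
    s = (sum((v - m) ** 2 for v in clear_history) / n) ** 0.5
    return today > m + k * s

history = [0.10, 0.12, 0.11, 0.09, 0.10, 0.08]   # past clear-sky reflectances
assert cloud_flag(history, 0.50) is True          # bright cloud: flagged
assert cloud_flag(history, 0.11) is False         # within normal variation
```

Because clouds are much brighter and more variable than most land surfaces in the visible bands, even this simple per-pixel statistic separates many cloudy observations from clear ones.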
Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars
NASA Astrophysics Data System (ADS)
Ruml, Mirjana; Vuković, Ana; Milatović, Dragan
2010-07-01
The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a greater number of apricot (Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods to determine the threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) the regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD and two methods for calculating daily mean air temperatures were tested to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) gave very good results. The limitations of the widely used method (1) and of methods (5) and (6), which generally performed worst, are discussed in the paper.
Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A
2015-02-01
Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
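The ingredients the abstract names (a majorizing surrogate step, momentum acceleration, and adaptive momentum restarting) can be illustrated on a toy l1-regularized least-squares problem. This is a generic FISTA-with-restart sketch using a single Lipschitz majorizer, not the BARISTA majorizing matrices themselves; the problem data are assumed.

```python
import numpy as np

def soft(x, t):
    """Proximal operator of t*||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_restart(A, b, lam, iters=300):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 using a Lipschitz
    majorizer, Nesterov momentum, and function-value restarting --
    a stand-in for the momentum + adaptive-restart machinery the
    abstract describes (not the paper's shift-variant majorizers)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t, f_prev = 1.0, np.inf
    for _ in range(iters):
        x_new = soft(z - A.T @ (A @ z - b) / L, lam / L)
        f = 0.5 * np.sum((A @ x_new - b) ** 2) + lam * np.sum(np.abs(x_new))
        if f > f_prev:                     # adaptive restart: drop momentum
            t, z = 1.0, x
            continue
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t, f_prev = x_new, t_new, f
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true                             # noiseless sparse recovery problem
x_hat = fista_restart(A, b, lam=0.05)
print(np.round(x_hat[:3], 1))
```

The restart test (objective increased, so momentum is discarded) is one common choice of restart rule; the step size 1/L is the "loose bound" behavior the paper improves upon with majorizing matrices.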
NASA Astrophysics Data System (ADS)
Hansen, Anja; Krueger, Alexander; Ripken, Tammo
2013-03-01
In ophthalmic microsurgery, tissue dissection is achieved using femtosecond laser pulses to create an optical breakdown. For vitreo-retinal applications, the irradiance distribution in the focal volume is distorted by the anterior components of the eye, raising the threshold energy for breakdown. In this work, an adaptive optics system enables spatial beam shaping to compensate for aberrations and to investigate the influence of the wave front on optical breakdown. An eye model was designed to allow for aberration correction as well as detection of optical breakdown. The eye model consists of an achromatic lens modeling the eye's refractive power, a water chamber modeling the tissue properties, and a PTFE sample modeling the retina's scattering properties. Aberration correction was performed using a deformable mirror in combination with a Hartmann-Shack sensor. The influence of adaptive optics aberration correction on the pulse energy required for photodisruption was investigated using transmission measurements to determine the breakdown threshold and video imaging of the focal region to study the gas bubble dynamics. The threshold energy is considerably reduced when correcting for the aberrations of the system and the model eye. A rise in irradiance at constant pulse energy was also shown for the aberration-corrected case. The reduced pulse energy lowers the potential risk of collateral damage, which is especially important for retinal safety. This offers new possibilities for vitreo-retinal surgery using femtosecond laser pulses.
Test beam results of ATLAS DBM pCVD diamond detectors using a novel threshold tuning method
NASA Astrophysics Data System (ADS)
Janssen, J.
2017-03-01
Threshold Baseline Tuning is a novel threshold tuning method meant to increase the hit efficiency of a pixel detector. The tuning method is applicable to any pixel readout ASIC with an adjustable threshold for individual pixels. The method is based on counting noise hits and allows for tuning to very low thresholds. The Threshold Baseline Tuning was successfully tested with ATLAS Diamond Beam Monitor (DBM) polycrystalline chemical vapour deposited (pCVD) diamond detectors in a 120 GeV pion beam at CERN SPS in 2015/2016. Efficiency measurements show the advantage of the Threshold Baseline Tuning over the regular tuning method.
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Xiong, Zhihua
2016-10-01
Denoising the photoacoustic signals of glucose is one of the most important steps in fruit quality identification, because the real-time photoacoustic signals of glucose are easily contaminated by various kinds of noise. To remove the noise and useless information, an improved wavelet threshold function is proposed. Compared with the traditional hard and soft wavelet threshold functions, the improved function, being continuous, overcomes the pseudo-oscillation effect in the denoised photoacoustic signals and reduces the error between the denoised and original signals. To validate the feasibility of the improved wavelet threshold function, denoising simulation experiments based on MATLAB programming were performed using a standard test signal, and three other denoising methods were compared with the improved wavelet threshold function. The signal-to-noise ratio (SNR) and root-mean-square error (RMSE) were used to evaluate performance. The experimental results demonstrate that the improved wavelet threshold function attains the largest SNR and the smallest RMSE, verifying that the improved wavelet threshold denoising is feasible. Finally, the improved wavelet threshold function was used to denoise the photoacoustic signals of glucose solutions, again with very good effect. The improved wavelet threshold denoising proposed in this paper therefore has potential value in denoising photoacoustic signals.
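Since the abstract does not give the paper's exact formula, the sketch below contrasts the classical hard and soft rules with one commonly used continuous "improved" threshold function (an exponential compromise between the two, an assumption here). It is continuous at |w| = t, the property credited with suppressing pseudo-oscillation, and its shrinkage bias vanishes for large coefficients.

```python
import numpy as np

def hard_threshold(w, t):
    """Hard thresholding: unbiased for large |w| but discontinuous at |w| = t."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    """Soft thresholding: continuous but biased (shrinks every kept
    coefficient by t)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def improved_threshold(w, t, a=2.0):
    """One common 'improved' form interpolating the hard and soft rules
    (an illustrative choice, not the paper's exact formula): continuous
    at |w| = t, and approaching the hard rule (no bias) as |w| grows."""
    shrink = w - np.sign(w) * t * np.exp(-a * (np.abs(w) - t))
    return np.where(np.abs(w) > t, shrink, 0.0)

t = 1.0
# Continuity at the threshold: just above |w| = t the output is near 0,
# unlike hard thresholding, which jumps from 0 to t there.
print(improved_threshold(np.array([1.001]), t))   # ≈ 0
print(improved_threshold(np.array([3.0]), t))     # ≈ 2.98 (low bias)
```

In a full denoiser these functions are applied to the detail coefficients of a wavelet decomposition before inverse transformation; the continuity at the threshold is what removes the oscillation artifacts of the hard rule.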
A multiple threshold method for fitting the generalized Pareto distribution to rainfall time series
NASA Astrophysics Data System (ADS)
Deidda, R.
2010-12-01
Previous studies indicate the generalized Pareto distribution (GPD) as a suitable distribution function to reliably describe the exceedances of daily rainfall records above a proper optimum threshold, which should be selected as small as possible to retain the largest sample while assuring an acceptable fit. Such an optimum threshold may differ from site to site, consequently affecting not only the GPD scale parameter but also the probability of threshold exceedance. Thus a first objective of this paper is to derive some expressions to parameterize a simple threshold-invariant three-parameter distribution function which assures a perfect overlap with the GPD fitted on the exceedances over any threshold larger than the optimum one. Since the proposed distribution does not depend on the local thresholds adopted for fitting the GPD, it is expected to reflect the on-site climatic signature and thus appears particularly suitable for hydrological applications and regional analyses. A second objective is to develop and test the Multiple Threshold Method (MTM) to infer the parameters of interest by using exceedances over a wide range of thresholds, again applying the concept of threshold invariance of the parameters. We show the ability of the MTM in fitting historical daily rainfall time series recorded with different resolutions and with a significant percentage of heavily quantized data. Finally, we demonstrate the superiority of the MTM fit over the standard single-threshold fit, often adopted for partial duration series, by evaluating and comparing the performances on Monte Carlo samples drawn from GPDs with different shape and scale parameters and different discretizations.
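The threshold-invariance property the MTM exploits can be checked numerically: if exceedances above the optimum threshold are GPD with shape xi and scale sigma, then exceedances above any higher threshold u are again GPD with the same xi and scale sigma + xi*u, so the "modified scale" sigma(u) - xi*u is flat in u. The sketch below uses a simple method-of-moments estimator and synthetic data (both assumptions; the paper's estimator combines a whole range of thresholds).

```python
import numpy as np

def gpd_moment_fit(exc):
    """Method-of-moments GPD fit to threshold exceedances, from
    mean = sigma/(1-xi) and var = sigma^2/((1-xi)^2 (1-2 xi))."""
    m, v = exc.mean(), exc.var()
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return xi, sigma

# Synthetic GPD sample via inverse-CDF sampling (assumed parameters).
rng = np.random.default_rng(2)
xi_true, sigma_true = 0.1, 10.0
u01 = rng.random(200_000)
x = sigma_true / xi_true * ((1.0 - u01) ** -xi_true - 1.0)

# The modified scale sigma(u) - xi*u should be flat across thresholds --
# the invariance the multiple threshold method is built on.
for u in (0.0, 5.0, 10.0, 20.0):
    exc = x[x > u] - u
    xi_hat, sig_hat = gpd_moment_fit(exc)
    print(f"u={u:5.1f}  xi={xi_hat:6.3f}  modified scale={sig_hat - xi_hat * u:6.2f}")
```

Each row should report xi near 0.1 and a modified scale near 10 regardless of u; a drift in these quantities would signal that the chosen threshold is below the optimum one.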
Domain adaptive boosting method and its applications
NASA Astrophysics Data System (ADS)
Geng, Jie; Miao, Zhenjiang
2015-03-01
Differences in data distribution widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, a decrease in performance caused by the domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach, with extensions to cover the domain differences between the source and target domains. This approach contains two main stages: source-domain clustering and source-domain sample selection. By iteratively adding the selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend for multisource adaptation. We implement this method on three computer vision systems: the skin detection model in single images, the video concept detection model, and the object classification model. In the experiments, we compare the performances of several commonly used methods with the proposed DAB. Under most situations, the DAB is superior.
Structured adaptive grid generation using algebraic methods
NASA Technical Reports Server (NTRS)
Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.
1993-01-01
The accuracy of a numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively costly, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach in which a function containing a measure of grid smoothness, orthogonality, and volume variation is minimized by using a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. The algebraic method, on the other hand, requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm in which the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring the points in large-error regions to attract other points and points in low-error regions to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three-step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial, step is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
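The three-step process above can be sketched in one dimension: build a weight from the solution (a gradient-based weight is assumed here), place new points at equal increments of the cumulative weight per the equidistribution law, and re-evaluate the solution on the new grid by interpolation.

```python
import numpy as np

def equidistribute(x, w, n_new=None):
    """Redistribute 1-D grid points by the equidistribution law: place
    new points at equal increments of the cumulative weight
    W(x) = integral of w, so high-weight (high-error) regions attract
    points. A minimal sketch of the algebraic approach."""
    if n_new is None:
        n_new = len(x)
    # cumulative weight by the trapezoidal rule
    W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    levels = np.linspace(0.0, W[-1], n_new)
    return np.interp(levels, W, x)       # invert W(x) at equal levels

# Step 1: uniform grid, solution with a sharp layer near x = 0.5,
# and a gradient-based adaptive weight (an assumed choice).
x = np.linspace(0.0, 1.0, 101)
u = np.tanh(50.0 * (x - 0.5))
w = 1.0 + np.abs(np.gradient(u, x))
# Step 2: redistribute the points according to the weighting mesh.
x_new = equidistribute(x, w)
# Step 3: re-evaluate the flow property on the new grid (interpolation
# standing in for the search/interpolate scheme).
u_new = np.interp(x_new, x, u)

# Spacing shrinks near the layer and grows in the smooth regions.
h = np.diff(x_new)
print(h.min(), h.max())
```

The clustering of points near the layer is exactly the attract/repel behavior described in the abstract, obtained here purely algebraically with no Euler-Lagrange system to solve.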
Optimizing Biosurveillance Systems that Use Threshold-based Event Detection Methods
2009-06-01
Ronald D. Fricker, Jr. and David Banschbach. June 1, 2009. Abstract: We describe a methodology for optimizing a threshold detection-based biosurveillance system. The goal is to maximize the system-wide probability of... Using this approach, public health officials can "tune" their biosurveillance systems to optimally detect various threats, thereby allowing
Adaptive Method for Nonsmooth Nonnegative Matrix Factorization.
Yang, Zuyuan; Xiang, Yong; Xie, Kan; Lai, Yue
2017-04-01
Nonnegative matrix factorization (NMF) is an emerging tool for meaningful low-rank matrix representation. In NMF, explicit constraints are usually required, such that NMF generates desired products (or factorizations), especially when the products have significant sparseness features. It is known that the ability of NMF in learning sparse representation can be improved by embedding a smoothness factor between the products. Motivated by this result, we propose an adaptive nonsmooth NMF (Ans-NMF) method in this paper. In our method, the embedded factor is obtained by using a data-related approach, so it matches well with the underlying products, implying a superior faithfulness of the representations. Besides, due to the usage of an adaptive selection scheme to this factor, the sparseness of the products can be separately constrained, leading to wider applicability and interpretability. Furthermore, since the adaptive selection scheme is processed through solving a series of typical linear programming problems, it can be easily implemented. Simulations using computer-generated data and real-world data show the advantages of the proposed Ans-NMF method over the state-of-the-art methods.
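The baseline factorization that Ans-NMF builds on can be sketched with the standard Lee-Seung multiplicative updates for plain NMF; the paper's adaptive smoothness factor between the products is not reproduced here, and the toy data are assumed.

```python
import numpy as np

def nmf(V, rank, iters=500, eps=1e-9, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates minimizing
    ||V - W H||_F^2. Updates preserve nonnegativity because each
    factor is multiplied by a ratio of nonnegative terms."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Recover an exactly rank-2 nonnegative matrix.
rng = np.random.default_rng(3)
V = rng.random((30, 2)) @ rng.random((2, 20))
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative error: {err:.4f}")
```

The abstract's contribution sits on top of this scheme: a data-driven smoothing factor is inserted between W and H and selected adaptively by solving a series of linear programs, so the sparseness of each factor can be constrained separately.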
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong
2016-11-01
To improve the denoising of glucose photoacoustic signals, a modified wavelet thresholding algorithm combined with shift-invariance was used in this paper. To verify the feasibility of the modified shift-invariant wavelet threshold denoising algorithm, simulation experiments were performed. Results show that the denoising effect of the modified shift-invariant wavelet thresholding algorithm is better than that of the others: its signal-to-noise ratio is the largest and its root-mean-square error the smallest. Finally, the modified shift-invariant wavelet threshold denoising was used to remove the noise from the photoacoustic signals of glucose aqueous solutions.
Parallel adaptive wavelet collocation method for PDEs
Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
A simple and valid method to determine thermoregulatory sweating threshold and sensitivity.
Cheuvront, Samuel N; Bearden, Shawn E; Kenefick, Robert W; Ely, Brett R; Degroot, David W; Sawka, Michael N; Montain, Scott J
2009-07-01
Sweating threshold temperature and sweating sensitivity responses are measured to evaluate thermoregulatory control. However, analytic approaches vary, and no standardized methodology has been validated. This study validated a simple and standardized method, segmented linear regression (SReg), for determination of sweating threshold temperature and sensitivity. Archived data were extracted for analysis from studies in which local arm sweat rate (m(sw); ventilated dew-point temperature sensor) and esophageal temperature (T(es)) were measured under a variety of conditions. The relationship m(sw)/T(es) from 16 experiments was analyzed by seven experienced raters (Rater), using a variety of empirical methods, and compared against SReg for the determination of sweating threshold temperature and sweating sensitivity values. Individual interrater differences (n = 324 comparisons) and differences between Rater and SReg (n = 110 comparisons) were evaluated within the context of biologically important limits of magnitude (LOM) via a modified Bland-Altman approach. The average Rater and SReg outputs for threshold temperature and sensitivity were compared (n = 16) using inferential statistics. Rater employed a very diverse set of criteria to determine the sweating threshold temperature and sweating sensitivity for the 16 data sets, but interrater differences were within the LOM for 95% (threshold) and 73% (sensitivity) of observations, respectively. Differences between mean Rater and SReg were within the LOM 90% (threshold) and 83% (sensitivity) of the time, respectively. Rater and SReg were not different by conventional t-test (P > 0.05). SReg provides a simple, valid, and standardized way to determine sweating threshold temperature and sweating sensitivity values for thermoregulatory studies.
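SReg as described above can be realized as a two-segment least-squares fit: a flat baseline up to a breakpoint (the sweating threshold temperature) followed by a rising line whose slope is the sensitivity, with the breakpoint chosen by grid search over the data points. The continuity constraint, the grid-search breakpoint, and the synthetic temperature/sweat-rate values below are illustrative assumptions, not the study's archived data.

```python
import numpy as np

def sreg(temp, sweat):
    """Segmented regression for sweating onset: flat baseline up to a
    breakpoint (threshold temperature), then a rising line through the
    breakpoint whose slope is the sensitivity. The breakpoint is found
    by least-squares grid search over candidate data points."""
    order = np.argsort(temp)
    t, s = temp[order], sweat[order]
    best = (np.inf, None, None)
    for i in range(2, len(t) - 2):           # candidate breakpoints
        t0 = t[i]
        base = s[:i].mean()                  # flat-segment level
        dt = t[i:] - t0
        slope = np.sum(dt * (s[i:] - base)) / np.sum(dt * dt)
        resid = np.concatenate((s[:i] - base, s[i:] - (base + slope * dt)))
        sse = np.sum(resid ** 2)
        if sse < best[0]:
            best = (sse, t0, slope)
    return best[1], best[2]                  # threshold, sensitivity

# Synthetic esophageal-temperature / sweat-rate data (assumed values).
rng = np.random.default_rng(4)
t = np.linspace(36.5, 38.0, 60)
true_thr, true_sens = 37.0, 0.8
s = np.where(t > true_thr, true_sens * (t - true_thr), 0.0)
s += rng.normal(0.0, 0.01, t.size)
thr, sens = sreg(t, s)
print(f"threshold ~ {thr:.2f} C, sensitivity ~ {sens:.2f}")
```

Because the breakpoint and slope fall out of one least-squares criterion, the procedure is reproducible across raters, which is the standardization argument the abstract makes.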
NASA Astrophysics Data System (ADS)
Gromczak, Kamila; Gąska, Adam; Kowalski, Marek; Ostrowska, Ksenia; Sładek, Jerzy; Gruza, Maciej; Gąska, Piotr
2017-01-01
The following paper presents a practical approach to the validation process of coordinate measuring methods at an accredited laboratory, using a statistical model of metrological compatibility. The statistical analysis of measurement results obtained using a highly accurate system was intended to determine the permissible validation threshold values. The threshold value constitutes the primary criterion for the acceptance or rejection of the validated method, and depends on both the differences between measurement results with corresponding uncertainties and the individual correlation coefficient. The article specifies and explains the types of measuring methods that were subject to validation and defines the criterion value governing their acceptance or rejection in the validation process.
Dimeo, M.J.; Glenn, M.G.; Holtzman, M.J.; Sheller, J.R.; Nadel, J.A.; Boushey, H.A.
1981-09-01
To determine the lowest concentration of ozone that causes an increase in bronchial reactivity to histamine, and to determine whether adaptation to this effect of ozone develops with repeated exposures, we studied 19 healthy adult subjects. Bronchial reactivity was assessed by measuring the rise in specific airway resistance (delta SRaw) produced by inhalation of 10 breaths of histamine aerosol (1.6% solution). Results indicate that the threshold concentration of ozone causing an increase in bronchial reactivity in healthy human subjects is between 0.2 and 0.4 ppm, and that adaptation to this effect of ozone develops with repeated exposures. The threshold concentration of ozone identified in other studies as causing changes in symptoms, lung volumes, or airway resistance was also between 0.2 and 0.4 ppm, and the time course of the development of tolerance to ozone in these other studies was similar to that observed in our study. We propose that the appearance of symptoms, the changes in pulmonary function, and the increase in bronchial reactivity may be caused by a change in the activity of afferent nerve endings in the airway epithelium.
Determination of rainfall thresholds for shallow landslides by a probabilistic and empirical method
NASA Astrophysics Data System (ADS)
Huang, J.; Ju, N. P.; Liao, Y. J.; Liu, D. D.
2015-12-01
Rainfall-induced landslides not only cause property loss, but also kill and injure large numbers of people every year in mountainous areas in China. These losses and casualties may be avoided to some extent with rainfall threshold values used in an early warning system at a regional scale for the occurrence of landslides. However, the limited availability of data always causes difficulties. In this paper we present a method to calculate rainfall threshold values with limited data sets for two rainfall parameters: hourly rainfall intensity and accumulated precipitation. The method has been applied to the Huangshan region, in the province of Anhui, China. Four early warning levels (zero, outlook, attention, and warning) have been adopted and the corresponding rainfall threshold values have been defined by probability lines. A validation procedure showed that this method can significantly enhance the effectiveness of a warning system, and finally reduce and mitigate the risk of shallow landslides in mountainous regions.
Adaptive envelope protection methods for aircraft
NASA Astrophysics Data System (ADS)
Unnikrishnan, Suraj
Carefree handling refers to the ability of a pilot to operate an aircraft without the need to continuously monitor aircraft operating limits. At the heart of all carefree handling or maneuvering systems, also referred to as envelope protection systems, are algorithms and methods for predicting future limit violations. Recently, envelope protection methods that have gained more acceptance translate limit proximity information into its equivalent in the control channel. Existing envelope protection algorithms either use a very small prediction horizon or are static methods with no capability to adapt to changes in system configuration. Adaptive approaches that maximize the prediction horizon, such as dynamic trim, are only applicable to limit parameters whose critical response is at steady state. In this thesis, a new adaptive envelope protection method is developed that is applicable to limit parameters whose critical response is either steady-state or transient. The approach is based upon devising the most aggressive optimal control profile to the limit boundary and using it to compute control limits. Pilot-in-the-loop evaluations of the proposed approach are conducted at the Georgia Tech Carefree Maneuver lab for transient longitudinal hub moment limit protection. Carefree maneuvering is the dual of carefree handling in the realm of autonomous Uninhabited Aerial Vehicles (UAVs). Designing a flight control system to fully and effectively utilize the operational flight envelope is very difficult. With the increasing role of and demands for extreme maneuverability, there is a need to develop envelope protection methods for autonomous UAVs. In this thesis, a full-authority automatic envelope protection method is proposed for limit protection in UAVs. The approach uses an adaptive estimate of the limit parameter dynamics and finite-time-horizon predictions to detect impending limit boundary violations. Limit violations are prevented by treating the limit boundary as an obstacle and by correcting nominal control
Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection
NASA Astrophysics Data System (ADS)
Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai
2017-02-01
Axles are an important part of railway locomotives and vehicles. Periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. However, in the axle press-fit zone, the complex interface contact condition reduces the signal-to-noise ratio (SNR), so the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in axle ultrasonic defect detection. The novel wavelet threshold function has two variables, designed to ensure the precision of the optimum searching process. Based on the positive correlation between the correlation coefficient and SNR, and on the experimental observation that the defect echo and the press-fit interface echo have different axle-circumferential correlation characteristics, a discrete optimum search for the two undetermined variables of the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods on real data. The statistical results for the amplitude and the peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also achieves a higher peak SNR.
Adaptive threshold device for detection of reflections based visible light communication
NASA Astrophysics Data System (ADS)
Amini, Changeez; Taherpour, Abbas
2017-04-01
One of the major restrictions of existing visible light communication (VLC) systems is the limited channel transmission bandwidth available to such systems. In this paper, an optimal and a suboptimal receiver are proposed to increase the on-off keying (OOK) transmission rate, and hence the bandwidth efficiency, of a VLC system when a multiple-reflections channel model is used to characterize the impact of reflections on VLC signal propagation. The optimal detector consists of a simple receiver with a memory that finds the optimal threshold based on previously detected data. The error probability of the proposed detector is derived in closed form and compared with simulation results. It is demonstrated that the proposed detectors can extend the transmission bandwidth close to the 3-dB bandwidth of the LOS channel model (several hundred MHz) while the bit-error-rate (BER) remains low, in particular when optimal detection is utilized.
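The memory-based adaptive threshold idea can be sketched as decision feedback: track the running means of samples previously decided as "1" and "0", and threshold each new sample at their midpoint. This is a simplified stand-in for the abstract's detector; the multiple-reflection channel is replaced by a toy drifting-DC channel (an assumption).

```python
import numpy as np

def adaptive_ook_detect(r, window=32):
    """OOK detection with a memory-based adaptive threshold: the
    threshold is the midpoint of the running means of samples
    previously decided as '1' and '0' over a sliding window."""
    bits = np.zeros(r.size, dtype=int)
    ones, zeros = [np.max(r)], [np.min(r)]   # crude initial level estimates
    for k, sample in enumerate(r):
        thr = 0.5 * (np.mean(ones[-window:]) + np.mean(zeros[-window:]))
        bit = int(sample > thr)
        bits[k] = bit
        (ones if bit else zeros).append(sample)
    return bits

rng = np.random.default_rng(5)
tx = rng.integers(0, 2, 2000)
# Received OOK samples with a slowly drifting DC level (e.g. ambient
# light) plus noise; a fixed threshold would degrade as the level drifts.
drift = 0.5 * np.sin(np.linspace(0.0, 6.0, tx.size))
r = tx + drift + rng.normal(0.0, 0.1, tx.size)
rx = adaptive_ook_detect(r)
print("bit errors:", int(np.sum(rx != tx)))
```

Because the threshold is re-estimated from the most recent decisions, it tracks slow channel variations automatically, which is the role the memory plays in the proposed optimal receiver.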
Ensemble transform sensitivity method for adaptive observations
NASA Astrophysics Data System (ADS)
Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan
2016-01-01
The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.
A method for cell image segmentation using both local and global threshold techniques
NASA Astrophysics Data System (ADS)
Li, Yuexiang; Cho, Siu-Yeung
2013-10-01
The paper proposes a segmentation method combining local and global threshold techniques to efficiently segment cell images. First, the image is divided into several parts and the Otsu operation is applied to each of them to detect details. Second, the main body of the objects is filtered out by a global threshold algorithm. Finally, the results of the previous steps are combined to achieve a more refined segmentation. The experimental results show that this algorithm performs better at detail recognition, such as the cell antennas, which should be very helpful and important in the medical area.
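The two-stage scheme described in the abstract can be sketched in a few lines of numpy; the tile grid, the flat-tile skip, and the union rule for merging local and global masks are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the gray level maximizing between-class variance."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w_b = sum_b = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b, m_f = sum_b / w_b, (sum_all - sum_b) / w_f
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def local_global_segment(img, tiles=2, min_contrast=10):
    """Union of per-tile (local) and whole-image (global) Otsu masks."""
    H, W = img.shape
    th, tw = H // tiles, W // tiles
    local_mask = np.zeros((H, W), bool)
    for i in range(tiles):
        for j in range(tiles):
            part = img[i*th:(i+1)*th, j*tw:(j+1)*tw]
            if int(part.max()) - int(part.min()) < min_contrast:
                continue  # flat tile: no local detail to recover
            local_mask[i*th:(i+1)*th, j*tw:(j+1)*tw] = part > otsu_threshold(part)
    return local_mask | (img > otsu_threshold(img))
```

Per-tile Otsu recovers faint details that a single global cut would miss, while the global pass keeps the main body of each object.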
Automation of a center pivot using the temperature-time-threshold method of irrigation scheduling
Technology Transfer Automated Retrieval System (TEKTRAN)
A center pivot was completely automated using the temperature-time-threshold (TTT) method of irrigation scheduling. An array of infrared thermometers was mounted on the center pivot and these were used to remotely determine the crop leaf temperature as an indicator of crop water stress. We describ...
Development of a Method to Determine the Audiogram of the Guinea Pig for Threshold Shift Studies,
1984-01-01
... 52 kHz by using a positive reinforcement training method. In this procedure, tones served as discriminative stimuli for a report response. Cited reference: Stebbins, W. C. 1978. Auditory thresholds and kanamycin-induced hearing loss in the guinea pig assessed by a positive reinforcement procedure.
Adaptive method with intercessory feedback control for an intelligent agent
Goldsmith, Steven Y.
2004-06-22
An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.
Adaptive Accommodation Control Method for Complex Assembly
NASA Astrophysics Data System (ADS)
Kang, Sungchul; Kim, Munsang; Park, Shinsuk
Robotic systems have been used to automate assembly tasks in manufacturing and in teleoperation. Conventional robotic systems, however, have been ineffective in controlling contact force in the multiple contact states of complex assembly that involves interactions between complex-shaped parts. Unlike robots, humans excel at complex assembly tasks by utilizing their intrinsic impedance, force and torque sensation, and tactile contact cues. By examining human behavior in assembling complex parts, this study proposes a novel geometry-independent control method for robotic assembly using an adaptive accommodation (or damping) algorithm. Two important conditions for complex assembly, target approachability and bounded contact force, can be met by the proposed control scheme. It generates target-approachable motion that leads the object closer to a desired target position while contact force is kept under a predetermined value. Experimental results from complex assembly tests have confirmed the feasibility and applicability of the proposed method.
Direct comparison of two statistical methods for determination of evoked-potential thresholds
NASA Astrophysics Data System (ADS)
Langford, Ted L.; Patterson, James H., Jr.
1994-07-01
Several statistical procedures have been proposed as objective methods for determining evoked-potential thresholds. Data have been presented to support each of the methods, but there have not been direct comparisons using the same data. The goal of the present study was to evaluate correlation and variance ratio statistics using common data. A secondary goal was to evaluate the utility of a derived potential for determining thresholds. Chronic, bipolar electrodes were stereotaxically implanted in the inferior colliculi of six chinchillas. Evoked potentials were obtained at 0.25, 0.5, 1.0, 2.0, 4.0 and 8.0 kHz using 12-ms tone bursts and 12-ms tone bursts superimposed on 120-ms pedestal tones which were of the same frequency as the bursts, but lower in amplitude by 15 dB. Alternate responses were averaged in blocks of 200 to 4000 depending on the size of the response. Correlations were calculated for the pairs of averages. A response was deemed present if the correlation coefficient reached the 0.05 level of significance in 4000 or fewer averages. Threshold was defined as the mean of the level at which the correlation was significant and a level 5 dB below that at which it was not. Variance ratios were calculated as described by Elberling and Don (1984) using the same data. Averaged tone burst and tone burst-plus pedestal data were differenced and the resulting waveforms subjected to the same statistical analyses described above. All analyses yielded thresholds which were essentially the same as those obtained using behavioral methods. When the difference between stimulus durations is taken into account, however, evoked-potential methods produced lower thresholds than behavioral methods.
NASA Astrophysics Data System (ADS)
Sung, J. H.; Chung, E.-S.; Lee, K. S.
2013-12-01
This study developed a comprehensive method to quantify streamflow drought severity and magnitude based on a traditional frequency analysis. Two types of curve were developed: the streamflow drought severity-duration-frequency (SDF) curve and the streamflow drought magnitude-duration-frequency (MDF) curve (e.g., a rainfall intensity-duration-frequency curve). Severity was represented as the total water deficit volume for the specific drought duration, and magnitude was defined as the daily average water deficit. The variable threshold level method was introduced to set the target instream flow requirement, which can significantly affect the streamflow drought severity and magnitude. The four threshold levels utilized were fixed, monthly, daily, and desired yield for water use. The threshold levels for the desired yield differed considerably from the other levels and represented more realistic conditions because real water demands were considered. The streamflow drought severities and magnitudes from the four threshold methods could be derived at any frequency and duration from the generated SDF and MDF curves. These SDF and MDF curves are useful in designing water resources systems for streamflow drought and water supply management.
Scene sketch generation using mixture of gradient kernels and adaptive thresholding
NASA Astrophysics Data System (ADS)
Paheding, Sidike; Essa, Almabrok; Asari, Vijayan
2016-04-01
This paper presents a simple but effective algorithm for scene sketch generation from input images. The proposed algorithm combines the edge magnitudes of directional Prewitt differential gradient kernels with Kirsch kernels at each pixel position, and then encodes them into an eight-bit binary code which encompasses local edge and texture information. In this binary encoding step, relative variance is employed to determine the object shape in each local region. Using relative variance makes object sketch extraction fully adaptive to any shape structure. Moreover, the proposed technique requires no parameters to adjust the output and is robust to edge density and noise. Two standard databases are used to show the effectiveness of the proposed framework.
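A minimal numpy sketch of the directional-kernel encoding: eight Kirsch responses are computed per pixel and packed into an eight-bit code. The bit rule used here (response above the per-pixel mean response) is an illustrative stand-in for the paper's relative-variance criterion, and the Prewitt branch is omitted:

```python
import numpy as np

def conv3(img, k):
    """'Same' 3x3 correlation with edge padding."""
    H, W = img.shape
    p = np.pad(img.astype(float), 1, mode='edge')
    out = np.zeros((H, W))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy+H, dx:dx+W]
    return out

def kirsch_kernels():
    """Eight 3x3 Kirsch kernels, generated by rotating the border ring."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [5, 5, 5, -3, -3, -3, -3, -3]
    ks = []
    for r in range(8):
        k = np.zeros((3, 3))
        for (y, x), v in zip(ring, vals[r:] + vals[:r]):
            k[y, x] = v
        ks.append(k)
    return ks

def directional_code(img):
    """Eight-bit code per pixel: bit k set where the k-th directional
    response exceeds the mean response at that pixel (assumed rule)."""
    g = np.stack([conv3(img, k) for k in kirsch_kernels()])
    bits = (g > g.mean(axis=0)).astype(int)
    return np.sum(bits << np.arange(8)[:, None, None], axis=0)
```

Because each Kirsch kernel sums to zero, flat regions produce a zero code, so the code is nonzero only where edge or texture structure is present.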
Adapting implicit methods to parallel processors
Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.
1994-12-31
When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g. larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems, where it is common to take the computational domain and distribute the grid points over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor. In order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this will result in idle processors during part of the computation, and as the number of idle processors increases, the effective speedup gained from using a parallel processor decreases.
Variable Threshold Method for Determining the Boundaries of Imaged Subvisible Particles.
Cavicchi, R E; Collett, Cayla; Telikepalli, Srivalli; Hu, Zhishang; Carrier, Michael; Ripple, Dean C
2017-02-13
An accurate assessment of particle characteristics and concentrations in pharmaceutical products by flow imaging requires accurate particle sizing and morphological analysis. Analysis of images begins with the definition of particle boundaries. Commonly a single threshold defines the level for a pixel in the image to be included in the detection of particles, but depending on the threshold level, this results in either missing translucent particles or oversizing of less transparent particles due to the halos and gradients in intensity near the particle boundaries. We have developed an imaging analysis algorithm that sets the threshold for a particle based on the maximum gray value of the particle. We show that this results in tighter boundaries for particles with high contrast, while conserving the number of highly translucent particles detected. The method is implemented as a plugin for FIJI, an open-source image analysis software. The method is tested for calibration beads in water and glycerol/water solutions, a suspension of microfabricated rods, and stir-stressed aggregates made from IgG. The result is that appropriate thresholds are automatically set for solutions with a range of particle properties, and that improved boundaries will allow for more accurate sizing results and potentially improved particle classification studies.
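The abstract's idea, setting each particle's boundary threshold from its own maximum gray value, can be sketched as follows; the loose detection threshold, 4-connectivity flood fill, and the fixed fraction of the maximum are assumptions for illustration (the exact rule used by the FIJI plugin is not given in the abstract):

```python
import numpy as np
from collections import deque

def variable_threshold_particles(img, detect_thr, frac=0.5):
    """Label candidate particles above a loose detection threshold, then
    re-threshold each particle at frac * its own maximum gray value."""
    H, W = img.shape
    mask = img >= detect_thr
    labels = np.zeros((H, W), int)
    n = 0
    for y in range(H):
        for x in range(W):
            if mask[y, x] and labels[y, x] == 0:
                n += 1                       # flood-fill a new particle
                q = deque([(y, x)])
                labels[y, x] = n
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = n
                            q.append((ny, nx))
    refined = np.zeros((H, W), bool)
    for lab in range(1, n + 1):
        region = labels == lab
        refined |= region & (img >= frac * img[region].max())
    return refined
```

A high-contrast particle's dim halo falls below half its maximum and is trimmed away, while a highly translucent particle keeps all of its pixels, which is the behavior the paper reports.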
An NMR log echo data de-noising method based on the wavelet packet threshold algorithm
NASA Astrophysics Data System (ADS)
Meng, Xiangning; Xie, Ranhong; Li, Changxi; Hu, Falong; Li, Chaoliu; Zhou, Cancan
2015-12-01
To improve the de-noising effects of low signal-to-noise ratio (SNR) nuclear magnetic resonance (NMR) log echo data, this paper applies the wavelet packet threshold algorithm to the data. The principle of the algorithm is elaborated in detail. By comparing the properties of a series of wavelet packet bases and the relevance between them and the NMR log echo train signal, ‘sym7’ is found to be the optimal wavelet packet basis of the wavelet packet threshold algorithm to de-noise the NMR log echo train signal. A new method is presented to determine the optimal wavelet packet decomposition scale; this is within the scope of its maximum, using the modulus maxima and the Shannon entropy minimum standards to determine the global and local optimal wavelet packet decomposition scales, respectively. The results of applying the method to the simulated and actual NMR log echo data indicate that compared with the wavelet threshold algorithm, the wavelet packet threshold algorithm, which shows higher decomposition accuracy and better de-noising effect, is much more suitable for de-noising low SNR-NMR log echo data.
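As a minimal, numpy-only illustration of the threshold rule itself (the paper uses the 'sym7' wavelet packet basis and optimized decomposition scales, which would require a wavelet library), here is a one-level Haar decomposition with soft thresholding at the universal threshold:

```python
import numpy as np

def haar_denoise(x, thr=None):
    """One-level Haar wavelet soft-threshold denoising (sketch).
    Default threshold is the universal rule sigma * sqrt(2 ln N),
    with sigma estimated from the MAD of the detail coefficients."""
    x = np.asarray(x, float)
    n = len(x) - len(x) % 2
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)   # detail coefficients
    sigma = np.median(np.abs(d)) / 0.6745
    t = thr if thr is not None else sigma * np.sqrt(2 * np.log(len(x)))
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0)  # soft threshold
    out = np.empty(n)
    out[0::2] = (a + d) / np.sqrt(2)          # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    if len(x) > n:
        out = np.append(out, x[-1])
    return out
```

A wavelet packet version applies the same shrinkage rule to the packet coefficients of the chosen basis at the chosen decomposition scale.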
Woodbury, C Jeffery; Koerber, H Richard
2007-12-10
Despite intensive study, our understanding of the neuronal structures responsible for transducing the broad spectrum of environmental energies that impinge upon the skin has rested on inference and conjecture. This major shortcoming motivated the development of ex vivo somatosensory system preparations in neonatal mice in the hope that their small size might allow the peripheral terminals of physiologically identified sensory neurons to be labeled intracellularly for direct study. The present report describes the first such study of the peripheral terminals of four slowly adapting type I low-threshold mechanoreceptors (SAIs) that innervated the back skin of neonatal mice. In addition, this report includes information on the central anatomy of the same SAI afferents that were identified peripherally with both physiological and anatomical means, providing an essentially complete view of the central and peripheral morphology of individual SAI afferents in situ. Our findings reveal that SAIs in neonates are strikingly adult-like in all major respects. Afferents were exquisitely sensitive to mechanical stimuli and exhibited a distinctly irregular, slowly adapting discharge to stimulation of 1-4 punctate receptive fields in the skin. Their central collaterals formed transversely oriented and largely nonoverlapping arborizations limited to regions of the dorsal horn corresponding to laminae III-V. Their peripheral arborizations were restricted entirely within miniaturized touch domes, where they gave rise to expanded disc-like endings in close apposition to putative Merkel cells in basal epidermis. These findings therefore provide the first direct confirmation of the functional morphology of this physiologically unique afferent class.
A multi-threshold sampling method for TOF PET signal processing
Kim, Heejong; Kao, Chien-Min; Xie, Q.; Chen, Chin-Tu; Zhou, L.; Tang, F.; Frisch, Henry; Moses, William W.; Choong, Woon-Seng
2009-02-02
As an approach to realizing all-digital data acquisition for positron emission tomography (PET), we have previously proposed and studied a multithreshold sampling method to generate samples of a PET event waveform with respect to a few user-defined amplitudes. In this sampling scheme, one can extract both the energy and timing information for an event. In this paper, we report our prototype implementation of this sampling method and the performance results obtained with this prototype. The prototype consists of two multi-threshold discriminator boards and a time-to-digital converter (TDC) board. Each of the multi-threshold discriminator boards takes one input and provides up to 8 threshold levels, which can be defined by users, for sampling the input signal. The TDC board employs the CERN HPTDC chip that determines the digitized times of the leading and falling edges of the discriminator output pulses. We connect our prototype electronics to the outputs of two Hamamatsu R9800 photomultiplier tubes (PMTs) that are individually coupled to a 6.25 x 6.25 x 25mm{sup 3} LSO crystal. By analyzing waveform samples generated by using four thresholds, we obtain a coincidence timing resolution of about 340 ps and an {approx}18% energy resolution at 511 keV. We are also able to estimate the decay-time constant from the resulting samples and obtain a mean value of 44 ns with an {approx}9 ns FWHM. In comparison, using digitized waveforms obtained at a 20 GSps sampling rate for the same LSO/PMT modules we obtain {approx}300 ps coincidence timing resolution, {approx}14% energy resolution at 511 keV, and {approx}5 ns FWHM for the estimated decay-time constant. Details of the results on the timing and energy resolutions by using the multi-threshold method indicate that it is a promising approach for implementing digital PET data acquisition.
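The sampling scheme can be illustrated with a small sketch: for each user-defined amplitude, the leading- and falling-edge crossing times of the waveform are found by linear interpolation between samples (the prototype does this in hardware with discriminators and a TDC; the interpolation here is only a software stand-in):

```python
import numpy as np

def threshold_crossings(t, v, levels):
    """For each threshold level, return (rise, fall) crossing times of
    waveform v(t), linearly interpolated between adjacent samples."""
    out = []
    for level in levels:
        above = (v >= level).astype(int)
        idx = np.flatnonzero(np.diff(above) != 0)  # sign-change sample pairs
        times = []
        for i in idx:
            frac = (level - v[i]) / (v[i + 1] - v[i])
            times.append(t[i] + frac * (t[i + 1] - t[i]))
        out.append((times[0], times[-1]) if len(times) >= 2 else (None, None))
    return out
```

The lowest-threshold rising edge provides the event timing, while the time-over-threshold widths across the levels approximate the pulse area and hence the deposited energy.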
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo
2014-04-15
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-11-18
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Adaptive filtering for the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Marié, Simon; Gloerfelt, Xavier
2017-03-01
In this study, a new selective filtering technique is proposed for the lattice Boltzmann method. This technique is based on an adaptive implementation of the selective filter coefficient σ. The proposed model makes this coefficient dependent on the shear stress in order to restrict the spatial filtering to shear-stress regions where numerical instabilities may occur. Different parameters are tested on 2D test cases sensitive to numerical stability and on a 3D decaying Taylor-Green vortex. The results are compared to the classical static filtering technique and to the use of a standard subgrid-scale model, and show significant improvements, in particular for low-order filters consistent with the LBM stencil.
NASA Astrophysics Data System (ADS)
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto
2016-04-01
Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that extent, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e. on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2-12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the
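The graphical approach mentioned above is commonly based on the mean excess (mean residual life) plot: for a GP tail the mean excess is linear in u, so one looks for the lowest threshold above which the estimated curve is approximately linear. A minimal sketch:

```python
import numpy as np

def mean_excess(x, thresholds):
    """Sample estimate of the mean excess e(u) = E[X - u | X > u]."""
    x = np.asarray(x, float)
    return np.array([(x[x > u] - u).mean() if (x > u).any() else np.nan
                     for u in thresholds])
```

For an exponential tail (GP shape ξ = 0) the curve is flat at the scale parameter; a positive slope indicates ξ > 0, a negative slope ξ < 0.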
Estimating errors in fractional cloud cover obtained with infrared threshold methods
Chang, Fu-Lung; Coakley, J.A. Jr. )
1993-05-20
The authors address the question of detecting cloud coverage from satellite imagery. The International Satellite Cloud Climatology Project (ISCCP) and NIMBUS-7 have constructed cloud climatologies, but these differ substantially in global mean cloud cover. Here the authors address problems in the application of threshold methods to the infrared detection of cloud cover. They look in particular at single-layered cloud cover and compare the threshold IR detection method with a spatial coherence method. One of the problems is that the pixel size of satellite imagery, namely 4-8 km on a side, is not particularly small compared to cloud features, or even breaks in cloud cover, and this can cause severe errors in estimated cloud cover.
A Simple and Valid Method to Determine Thermoregulatory Sweating Threshold and Sensitivity
2009-01-01
Journal article, Journal of Applied Physiology, 2009. U.S. Army Research Institute of Environmental Medicine, Natick, MA 01760-5007 (report M09-27; approved for public release, distribution unlimited). Cited reference: ...S, Inoue Y, Crandall CG. Function of human eccrine sweat glands during dynamic exercise and passive heat stress. J Appl Physiol 90: 1877-1881, 2001.
NASA Astrophysics Data System (ADS)
Akiyama, R.; Kinoshita, A.; Uchida, T.; Takahara, T.; Ishizuka, T.
2014-12-01
It is important to predict the time of landslide occurrence for the mitigation of landslide disasters. Several physically-based models have been applied to assess the spatial pattern of landslide susceptibility. However, it is still difficult to predict both the time and location of landslide occurrence with a physically-based model. A new model called "idH-SLIDER" (revised H-SLIDER for assessing rainfall intensity-duration thresholds) has been proposed to assess the time and location of landslide occurrence. We combine the hillslope hydrology model proposed by Rosso et al. [2006] with infinite slope stability analysis to assess rainfall intensity-duration thresholds for each grid cell. The hillslope hydrology is modeled by coupling the conservation of mass of soil water with Darcy's law describing seepage flow. The model was applied to the rainfall event of 21 July 2009 in Hofu City, Japan. Several parameters, including soil depth and geometry, were obtained from our detailed field survey. By integrating the field survey data and the collected rainfall data, the rainfall intensity-duration thresholds can be calculated. The results are: (1) with an appropriate value for soil cohesion, the model reproduces the time and location of shallow landslides during the rainfall event; (2) only a few grid cells had observed rainfall exceeding the calculated rainfall intensity-duration thresholds, as verified against 37 years of rainfall records; (3) these results are consistent with the historical landslide patterns evaluated by aerial photograph interpretation; (4) a sensitivity analysis of soil thickness and of the soil mechanical and hydraulic parameters shows that thicker soil or weaker soil cohesion increases the missing ratio. According to these results, the proposed method is suitable for reproducing the spatial
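The slope stability half of such a model is the standard infinite-slope factor of safety; a sketch under common assumptions (effective-stress formulation, pore pressure from a water table occupying a saturated fraction m of the soil column; the parameter values below are illustrative, not from the study):

```python
import numpy as np

def factor_of_safety(c, phi_deg, gamma, gamma_w, z, m, beta_deg):
    """Infinite-slope factor of safety.
    c: effective cohesion [kPa], phi_deg: friction angle [deg],
    gamma/gamma_w: soil/water unit weight [kN/m^3], z: soil depth [m],
    m: saturated fraction of the soil column (0..1), beta_deg: slope angle.
    FS = [c + (gamma - gamma_w*m) * z * cos^2(beta) * tan(phi)]
         / [gamma * z * sin(beta) * cos(beta)]"""
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    num = c + (gamma - gamma_w * m) * z * np.cos(beta) ** 2 * np.tan(phi)
    den = gamma * z * np.sin(beta) * np.cos(beta)
    return num / den
```

Coupling this with a hillslope hydrology model that maps rainfall intensity and duration to the saturated fraction m yields, per grid cell, the rainfall threshold at which FS drops below 1.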
A high-throughput method to measure NaCl and acid taste thresholds in mice.
Ishiwatari, Yutaka; Bachmanov, Alexander A
2009-05-01
To develop a technique suitable for measuring NaCl taste thresholds in genetic studies, we conducted a series of experiments with outbred CD-1 mice using conditioned taste aversion (CTA) and two-bottle preference tests. In Experiment 1, we compared conditioning procedures involving either oral self-administration of LiCl or pairing NaCl intake with LiCl injections and found that thresholds were the lowest after LiCl self-administration. In Experiment 2, we compared different procedures (30-min and 48-h tests) for testing conditioned mice and found that the 48-h test is more sensitive. In Experiment 3, we examined the effects of varying strength of conditioned (NaCl or LiCl taste intensity) and unconditioned (LiCl toxicity) stimuli and concluded that 75-150 mM LiCl or its mixtures with NaCl are the optimal stimuli for conditioning by oral self-administration. In Experiment 4, we examined whether this technique is applicable for measuring taste thresholds for other taste stimuli. Results of these experiments show that conditioning by oral self-administration of LiCl solutions or its mixtures with other taste stimuli followed by 48-h two-bottle tests of concentration series of a conditioned stimulus is an efficient and sensitive method to measure taste thresholds. Thresholds measured with this technique were 2 mM for NaCl and 1 mM for citric acid. This approach is suitable for simultaneous testing of large numbers of animals, which is required for genetic studies. These data demonstrate that mice, like several other species, generalize CTA from LiCl to NaCl, suggesting that they perceive taste of NaCl and LiCl as qualitatively similar, and they also can generalize CTA of a binary mixture of taste stimuli to mixture components.
Kendall, Kristina L; Smith, Abbie E; Graef, Jennifer L; Walter, Ashley A; Moon, Jordan R; Lockwood, Christopher M; Beck, Travis W; Cramer, Joel T; Stout, Jeffrey R
2010-01-01
The submaximal electromyographic fatigue threshold test (EMG(FT)) has been shown to be highly correlated to ventilatory threshold (VT) as determined from maximal graded exercise tests (GXTs). Recently, a prediction equation was developed using the EMG(FT) value to predict VT. The aim of this study, therefore, was to determine if this new equation could accurately track changes in VT after high-intensity interval training (HIIT). Eighteen recreationally trained men (mean +/- SD; age 22.4 +/- 3.2 years) performed a GXT to determine maximal oxygen consumption rate (V(O2)peak) and VT using breath-by-breath spirometry. Participants also completed a discontinuous incremental cycle ergometer test to determine their EMGFT value. A total of four 2-minute work bouts were completed to obtain 15-second averages of the electromyographic amplitude. The resulting slopes from each successive work bout were used to calculate EMG(FT). The EMG(FT) value from each participant was used to estimate VT from the recently developed equation. All participants trained 3 days a week for 6 weeks. Training consisted of 5 sets of 2-minute work bouts with 1 minute of rest in between. Repeated-measures analysis of variance indicated no significant difference between actual and predicted VT values after 3 weeks of training. However, there was a significant difference between the actual and predicted VT values after 6 weeks of training. These findings suggest that the EMG(FT) may be useful when tracking changes in VT after 3 weeks of HIIT in recreationally trained individuals. However, the use of EMG(FT) to predict VT does not seem to be valid for tracking changes after 6 weeks of HIIT. At this time, it is not recommended that EMG(FT) be used to predict and track changes in VT.
Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo
2013-06-01
In femtosecond laser ophthalmic surgery tissue dissection is achieved by photodisruption based on laser induced optical breakdown. In order to minimize collateral damage to the eye laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as eye model and determined breakdown threshold in single pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B
2015-10-06
Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
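A stripped-down sketch of the segment-statistics idea behind SFT: compute per-segment mean and standard deviation, take the low-variance segments as background, and derive the signal threshold from the background statistics. The 50th-percentile background rule and the mean + k·std threshold are simplifying assumptions; SFT itself fits best-fit trends between the segment statistics:

```python
import numpy as np

def segment_threshold(img, seg=8, k=3.0):
    """Split the image into seg x seg blocks, treat low-variance blocks as
    background, and set the signal threshold from background statistics."""
    H, W = img.shape
    means, sds = [], []
    for y in range(0, H - seg + 1, seg):
        for x in range(0, W - seg + 1, seg):
            block = img[y:y+seg, x:x+seg].astype(float)
            means.append(block.mean())
            sds.append(block.std())
    means, sds = np.array(means), np.array(sds)
    bg = sds <= np.percentile(sds, 50)   # low-variance blocks ~ background
    thr = means[bg].mean() + k * sds[bg].mean()
    return thr, img > thr
```

Because the threshold is derived from each image's own background statistics, it adapts across images with different backgrounds and signal strengths without manual parameter readjustment.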
Deng, Liang-Jian; Huang, Ting-Zhu; Zhao, Xi-Le; Zhao, Liang; Wang, Si
2013-05-01
Singular value decomposition (SVD)-based approaches, e.g., truncated SVD and Tikhonov regularization methods, are effective ways to solve problems of small or moderate size. However, the SVD is computationally expensive when applied to large-sized cases. A multilevel method (MLM) combining SVD-based methods with a thresholding technique for signal restoration is proposed in this paper. Our MLM transfers large-sized problems to small- or moderate-sized problems in order to make the SVD-based methods applicable. The linear systems on the coarsest level of the multilevel process are solved by the Tikhonov regularization method. No presmoothers are implemented in the multilevel process, to avoid damaging the parameter choice on the coarsest level. Furthermore, the soft-thresholding denoising technique is employed for the postsmoothers, aiming to eliminate the high-frequency information remaining due to the lack of presmoothers. Finally, computational experiments show that our method outperforms other SVD-based methods in signal restoration ability at a shorter CPU-time consumption.
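The two building blocks named above, the coarsest-level Tikhonov solve via SVD and the soft-thresholding postsmoother, can each be written in a few lines of numpy (a sketch of the components, not of the full multilevel cycle):

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov-regularized solution via SVD:
    x = V diag(s_i / (s_i^2 + lam^2)) U^T b, damping small singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s / (s ** 2 + lam ** 2)          # regularized inverse filter factors
    return Vt.T @ (f * (U.T @ b))

def soft_threshold(x, t):
    """Soft-thresholding denoiser: shrink toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

In the multilevel scheme, the restriction keeps the coarsest system small enough for the SVD, and soft thresholding after each coarse-grid correction suppresses the high-frequency noise that presmoothers would otherwise handle.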
Lucía, A; Vaquero, A F; Pérez, M; Sánchez, O; Sánchez, V; Gómez, M A; Chicharro, J L
1997-06-01
The purpose of this study was to investigate the possible use of integrated surface electromyography (iEMG) in cardiac transplant patients (CTPs) as a new noninvasive determinant of the metabolic response to exercise by studying the relationship between the iEMG threshold (iEMGT) and other more conventional methods for anaerobic threshold (AT) determination, such as the lactate threshold (LT) and the ventilatory threshold (VT). Thirteen patients (age: 57+/-7 years, mean+/-SD; height: 163+/-7 cm; body mass: 70.5+/-8.6 kg; posttransplant time: 87+/-49 weeks) were selected as subjects. Each of them performed a ramp protocol on a cycle ergometer (starting at 0 W, the workload was increased in 10 W/min). During the tests, gas exchange data, blood lactate levels, and iEMG of the vastus lateralis were collected to determine VT, LT, and iEMGT, respectively. The results evidenced no significant difference between mean values of VT, LT, or iEMGT, when expressed either as oxygen uptake (11.1+/-2.4, 11.7+/-2.3, and 11.0+/-2.8 mL/kg/min, respectively) or as percent maximum oxygen uptake (61.6+/-7.5, 62.2+/-7.7, and 59.6+/-8.2%, respectively). In conclusion, our findings suggest that iEMG might be used as a complementary, noninvasive method for AT determination in CTPs. In addition, since the aerobic impairment of these patients is largely due to peripheral limitation, determination of iEMGT could be used to assess the effectiveness of an exercise rehabilitation program to improve muscle aerobic capacity in CTPs.
NASA Astrophysics Data System (ADS)
Chen, Hung-Ming; Chen, Po-Hung; Lin, Cheng-Tso; Liu, Ching-Chung
2012-11-01
An efficient algorithm, named modified directional gradient descent search, is presented to enhance the directional gradient descent search (DGDS) algorithm and reduce computation. A modified search pattern with an adaptive threshold for early termination is applied to DGDS to avoid needless calculation once the search point is good enough. The distribution of best motion vectors is analyzed statistically to determine the modified search pattern. A statistical model based on the block distortion information of the previously coded frame then guides the selection of the early-termination parameters, allowing a trade-off between video quality and computational complexity. Simulation results show that the proposed algorithm significantly reduces the motion estimation (ME) effort, saving 17.81% of the average search points and 20% of ME time compared with the fast DGDS algorithm implemented in the H.264/AVC JM 18.2 reference software across different types of sequences, while maintaining a similar bit rate without loss of picture quality.
NASA Astrophysics Data System (ADS)
Benda, Jakub; Houfek, Karel
2017-04-01
For total energies below the ionization threshold it is possible to dramatically reduce the computational burden of solving the electron-atom scattering problem with grid methods combined with exterior complex scaling. As in the R-matrix method, the problem can be split into an inner and an outer problem, where the outer problem considers only the energetically accessible asymptotic channels. The (N + 1)-electron inner problem is coupled to one-electron outer problems for every channel, resulting in a matrix that scales only linearly with the size of the outer grid.
Adaptive numerical methods for partial differential equations
Colella, P.
1995-07-01
This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
Amador Carrascal, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F; Urban, Matthew W
2017-04-01
Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index, ultrasound scanners, scanning protocols, and ultrasound image quality. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this paper, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time [spatiotemporal peak (STP)]; the second method applies an amplitude filter [spatiotemporal thresholding (STTH)] to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared with TTP in phantom. Moreover, in a cohort of 14 healthy subjects, STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared with conventional TTP.
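The conventional TTP idea, find the peak arrival time at each lateral position and regress position on arrival time, can be sketched as follows. This is a toy on noiseless synthetic data; the paper's STP and STTH variants additionally search or filter in space.

```python
def ttp_speed(positions_mm, profiles, dt_ms):
    """Time-to-peak (TTP) shear wave group speed estimate.

    For each lateral position, find the sample index of maximum motion,
    then least-squares fit position against arrival time; the slope
    (mm/ms, numerically equal to m/s) is the group speed.
    """
    times = [max(range(len(p)), key=p.__getitem__) * dt_ms for p in profiles]
    n = len(positions_mm)
    mx = sum(positions_mm) / n
    mt = sum(times) / n
    num = sum((t - mt) * (x - mx) for x, t in zip(positions_mm, times))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Synthetic plane wave at 2 m/s: the peak arrives 0.5 ms later per mm.
dt = 0.1                                  # ms per sample
positions = [0.0, 1.0, 2.0, 3.0]          # lateral positions, mm
profiles = []
for x in positions:
    arrival = int(round(x * 0.5 / dt))    # sample index of the peak
    p = [0.0] * 40
    p[arrival] = 1.0
    profiles.append(p)
speed = ttp_speed(positions, profiles, dt)   # ≈ 2.0 m/s
```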
Elsayed, Alaaeldin M.; Hunter, Lisa L.; Keefe, Douglas H.; Feeney, M. Patrick; Brown, David K.; Meinzen-Derr, Jareen K.; Baroch, Kelly; Sullivan-Mahoney, Maureen; Francis, Kara; Schaid, Leigh G.
2015-01-01
Objective: To study normative thresholds and latencies for click and tone-burst auditory brainstem response (TB-ABR) for air and bone conduction in normal infants and those discharged from neonatal intensive care units (NICU), who passed newborn hearing screening and follow-up DPOAE. An evoked potential system (Vivosonic Integrity™) that incorporates Bluetooth electrical isolation and Kalman-weighted adaptive processing to improve signal-to-noise ratios was employed for this study. Results were compared with other published data. Research Design: One hundred forty-five infants who passed two-stage hearing screening with transient-evoked otoacoustic emission (OAE) or automated ABR were assessed with clicks at 70 dB nHL and threshold TB-ABR. Tone bursts at frequencies between 500 and 4000 Hz were employed for air- and bone-conduction ABR testing using a specified staircase threshold search to establish threshold levels and Wave V peak latencies. Results: Median air-conduction hearing thresholds using TB-ABR ranged from 0-20 dB nHL, depending on stimulus frequency. Median bone-conduction thresholds were 10 dB nHL across all frequencies, and median air-bone gaps were 0 dB across all frequencies. There was no significant threshold difference between left and right ears and no significant relationship between thresholds and hearing-loss risk factors, ethnicity, or gender. Older age was related to decreased latency for air conduction. Compared to previous studies, mean air-conduction thresholds were found at slightly lower (better) levels, while bone-conduction levels were better at 2000 Hz and higher at 500 Hz. Latency values were longer at 500 Hz than in previous studies using other instrumentation. Sleep state did not affect air- or bone-conduction thresholds. Conclusions: This study demonstrated slightly better Wave V thresholds for air conduction than previous infant studies. The differences found in the current study, while statistically significant, were within the test
Cakmak, S.; Burnett, R.T.; Krewski, D.
1999-06-01
The association between daily fluctuations in ambient particulate matter and daily variations in nonaccidental mortality has been extensively investigated. Although it is now widely recognized that such an association exists, the form of the concentration-response model is still in question. Linear no-threshold and linear-threshold models have been most commonly examined. In this paper the authors considered methods to detect and estimate threshold concentrations using time series data of daily mortality rates and air pollution concentrations. Because exposure is measured with error, they also considered the influence of measurement error in distinguishing between these two competing model specifications. The methods were illustrated on a 15-year daily time series of nonaccidental mortality and particulate air pollution data in Toronto, Canada. Nonparametric smoothed representations of the association between mortality and air pollution were adequate to graphically distinguish between these two forms. Weighted nonlinear regression methods for relative risk models were adequate to give nearly unbiased estimates of threshold concentrations even under conditions of extreme exposure measurement error. The uncertainty in the threshold estimates increased with the degree of exposure error. Regression models incorporating threshold concentrations could be clearly distinguished from linear relative risk models in the presence of exposure measurement error. The assumption of a linear model given that a threshold model was the correct form usually resulted in overestimates of the number of averted premature deaths, except for low threshold concentrations and large measurement error.
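A minimal version of a linear-threshold (hockey-stick) concentration-response fit can be sketched by grid search over candidate thresholds. This illustrative Python stands in for the paper's weighted nonlinear regression and ignores measurement error:

```python
def fit_threshold_model(conc, response, candidates):
    """Grid-search fit of a linear threshold (hockey-stick) model:
    response = b * max(0, conc - c0).  Returns (c0, b) minimizing SSE.
    """
    best = None
    for c0 in candidates:
        x = [max(0.0, c - c0) for c in conc]
        sxx = sum(v * v for v in x)
        if sxx == 0:
            continue                      # no exceedances: slope undefined
        b = sum(v * r for v, r in zip(x, response)) / sxx
        sse = sum((r - b * v) ** 2 for v, r in zip(x, response))
        if best is None or sse < best[2]:
            best = (c0, b, sse)
    return best[0], best[1]

# Data generated with a true threshold at 20 and slope 0.3:
conc = list(range(0, 60, 5))
resp = [0.3 * max(0, c - 20) for c in conc]
c0, b = fit_threshold_model(conc, resp, candidates=range(0, 55, 5))
# recovers c0 = 20, b = 0.3
```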
A Method for Severely Constrained Item Selection in Adaptive Testing.
ERIC Educational Resources Information Center
Stocking, Martha L.; Swanson, Len
1993-01-01
A method is presented for incorporating a large number of constraints on adaptive item selection in the construction of computerized adaptive tests. The method, which emulates practices of expert test specialists, is illustrated for verbal and quantitative measures. Its foundation is application of a weighted deviations model and algorithm. (SLD)
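A toy emulation of the weighted-deviations idea: choose the next item that most reduces the weighted deviation from content-constraint targets. The item names and attributes below are hypothetical, and the real algorithm also handles item information and many simultaneous constraints:

```python
def pick_item(items, selected, targets, weights):
    """Greedy step of a weighted-deviations-style item selection.

    Each item has content attributes; `targets` gives the desired count of
    each attribute in the final test.  The next item chosen is the one
    whose addition minimizes the weighted sum of deviations from targets.
    """
    def deviation(counts):
        return sum(weights[a] * abs(targets[a] - counts.get(a, 0))
                   for a in targets)

    counts = {}
    for it in selected:
        for a in items[it]:
            counts[a] = counts.get(a, 0) + 1

    def score(it):
        c = dict(counts)
        for a in items[it]:
            c[a] = c.get(a, 0) + 1
        return deviation(c)

    return min((it for it in items if it not in selected), key=score)

# Item pool with content attributes; we still need one more "algebra" item.
items = {"i1": ["algebra"], "i2": ["geometry"], "i3": ["algebra", "geometry"]}
targets = {"algebra": 2, "geometry": 1}
weights = {"algebra": 1.0, "geometry": 1.0}
nxt = pick_item(items, {"i3"}, targets, weights)   # picks "i1"
```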
Solution-adaptive finite element method in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1993-01-01
Some recent results obtained using a solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issues of the adaptive finite element method, validating the application of the new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.
Adaptive method for electron bunch profile prediction
Scheinker, Alexander; Gessner, Spencer
2015-10-01
We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which are important for the plasma wakefield acceleration experiments being explored at FACET. © 2015 authors. Published by the American Physical Society.
Adaptive finite element methods in electrochemistry.
Gavaghan, David J; Gillow, Kathryn; Süli, Endre
2006-12-05
In this article, we review some of our previous work that considers the general problem of numerical simulation of the currents at microelectrodes using an adaptive finite element approach. Microelectrodes typically consist of an electrode embedded (or recessed) in an insulating material. For all such electrodes, numerical simulation is made difficult by the presence of a boundary singularity at the electrode edge (where the electrode meets the insulator), manifested by the large increase in the current density at this point, often referred to as the edge effect. Our approach to overcoming this problem has involved the derivation of an a posteriori bound on the error in the numerical approximation for the current that can be used to drive an adaptive mesh-generation algorithm, allowing calculation of the quantity of interest (the current) to within a prescribed tolerance. We illustrate the generic applicability of the approach by considering a broad range of steady-state applications of the technique.
Adaptive methods, rolling contact, and nonclassical friction laws
NASA Technical Reports Server (NTRS)
Oden, J. T.
1989-01-01
Results and methods on three different areas of contemporary research are outlined. These include adaptive methods, the rolling contact problem for finite deformation of a hyperelastic or viscoelastic cylinder, and non-classical friction laws for modeling dynamic friction phenomena.
Reducing error vector magnitude of OFDM signals using threshold vector circle method
NASA Astrophysics Data System (ADS)
Wang, Jingqi; Wu, Qingqing; Wang, Dong; Zhang, Chunlei; Wu, Wen
2016-10-01
The main disadvantage of Orthogonal Frequency Division Multiplexing (OFDM) signals is their high peak-to-average power ratio (PAPR), which degrades power efficiency and system performance in the presence of nonlinearities in the high power amplifier (HPA). The error vector magnitude (EVM) is one of the performance metrics specified by communications standards for OFDM systems. In this paper, a novel PAPR reduction method based on geometric analysis is proposed that preserves EVM and bit-error-rate (BER) performance. In our method, a threshold vector circle is designed in the frequency domain to adjust the amplitude and phase of the OFDM constellation points toward the ideal points. Simulation results show that the PAPR of a QPSK-modulated OFDM signal is reduced from 10.98 dB to 7.502 dB with an EVM reduction of 2.57%. This technique should substantially improve the performance of OFDM signals in communication systems.
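For reference, PAPR itself is simple to compute. The sketch below builds one OFDM symbol with a naive inverse DFT and shows the worst-case coherent peak; it is a generic illustration, unrelated to the proposed threshold-vector-circle method:

```python
import cmath
import math

def papr_db(samples):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def ofdm_symbol(symbols):
    """Naive inverse DFT: map N frequency-domain constellation points
    (e.g. QPSK) onto N time-domain samples of one OFDM symbol."""
    n = len(symbols)
    return [sum(symbols[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

# Worst case: the same QPSK symbol on all 16 subcarriers adds coherently,
# producing a single large time-domain peak, so PAPR = 10*log10(16) ≈ 12.04 dB.
qpsk = [1 + 1j] * 16
papr = papr_db(ofdm_symbol(qpsk))
```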
An Adaptive Discontinuous Galerkin Method for Modeling Atmospheric Convection (Preprint)
2011-04-13
Giraldo and Volkmar Wirth. 5 Sensitivity Studies: One important question for each adaptive numerical model is: how accurate is the adaptive method? A criterion is defined that is used later for some sensitivity studies. These studies include a comparison between a simulation on an adaptive mesh and a simulation on a uniform mesh, and a sensitivity study concerning the size of the refinement region. 5.1 Comparison Criterion: For comparing different
Cerda-Kohler, Hugo; Burgos-Jara, Carlos; Ramírez-Campillo, Rodrigo; Valdés-Cerda, Miguel; Báez, Eduardo; Zapata-Gómez, Daniel; Andrade, David C; Izquierdo, Mikel
2016-10-01
Cerda-Kohler, H, Burgos-Jara, C, Ramírez-Campillo, R, Valdés-Cerda, B, Báez, E, Zapata-Gómez, D, Cristóbal Andrade, D, and Izquierdo, M. Analysis of agreement between 4 lactate threshold measurements methods in professional soccer players. J Strength Cond Res 30(10): 2864-2870, 2016-Lactate threshold (LT) represents the inflection point of blood lactate values from rest to high-intensity exercise during an incremental test, is commonly used to determine exercise intensity, and is related to different positional roles of elite soccer players. Different methodologies have been adopted to determine the LT; however, the agreement between these methodologies in professional soccer players is unclear. Seventeen professional soccer players were recruited (age 24.7 ± 3.7 years, body mass 70.1 ± 5.3 kg, height 172.8 ± 7.3 cm) and performed an incremental treadmill test until volitional fatigue. Speed at LT (LTspeed), heart rate at LT (LTHR), and lactate values from capillary blood samples obtained at 3-minute intervals were analyzed using 4 LT measurement methods: visual inspection (VI), maximum distance (Dmax), modified Dmax (DmaxM), and logarithmic (log-log). Only Bland-Altman analysis for LTHR showed agreement between VI and Dmax, between VI and DmaxM, and between Dmax and DmaxM methods. No agreement between methods was observed after intraclass correlation coefficient and 95% one-sided lower-limit analysis. Comparative results showed that LTspeed was lower (p < 0.01) with the log-log method compared with the Dmax method and lower (p < 0.01) with the latter compared with the VI and DmaxM methods. Regarding LTHR, higher (p < 0.01) values were observed using the VI, DmaxM, and Dmax methods compared with the log-log method. Therefore, VI, Dmax, DmaxM, and log-log methods should not be used interchangeably for LT measurement. More studies are needed to determine a gold standard for LT detection in professional soccer players.
NASA Astrophysics Data System (ADS)
Deidda, R.
2010-07-01
Previous studies indicate the generalized Pareto distribution (GPD) as a suitable distribution function to reliably describe the exceedances of daily rainfall records above a proper optimum threshold, which should be selected as small as possible to retain the largest sample while assuring an acceptable fit. Such an optimum threshold may differ from site to site, consequently affecting not only the GPD scale parameter but also the probability of threshold exceedance. Thus a first objective of this paper is to derive expressions to parameterize a simple threshold-invariant three-parameter distribution function which describes zero and non-zero values of rainfall time series while assuring perfect overlap with the GPD fitted on the exceedances of any threshold larger than the optimum one. Since the proposed distribution does not depend on the local thresholds adopted for fitting the GPD, it reflects only the on-site climatic signature and thus appears particularly suitable for hydrological applications and regional analyses. A second objective is to develop and test the Multiple Threshold Method (MTM), which infers the parameters of interest from the exceedances of a wide range of thresholds, again using the concept of threshold-invariance of the parameters. We show the ability of the MTM to fit historical daily rainfall time series recorded with different resolutions. Finally, we demonstrate the superiority of the MTM fit over the standard single-threshold fit, often adopted for partial duration series, by evaluating and comparing performance on Monte Carlo samples drawn from GPDs with different shape and scale parameters and different discretizations.
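The threshold-invariance property that the MTM exploits is easy to verify numerically: if exceedances of a base threshold u0 follow a GPD with shape xi and scale sigma0, then exceedances of any higher threshold u follow a GPD with the same shape and scale sigma0 + xi*(u - u0). A small Python check using generic GPD formulas (not the authors' fitting code):

```python
def gpd_scale_at(u, u0, sigma0, xi):
    """Re-scaled GPD scale parameter for a higher threshold u >= u0."""
    return sigma0 + xi * (u - u0)

def gpd_survival(x, sigma, xi):
    """P(X > x) for a GPD exceedance x >= 0 (xi != 0 case)."""
    return (1 + xi * x / sigma) ** (-1 / xi)

# Consistency check: the conditional exceedance probability computed
# directly from the GPD at u0 matches the re-scaled GPD at u.
u0, sigma0, xi, u, x = 0.0, 10.0, 0.2, 5.0, 3.0
direct = (gpd_survival(u + x - u0, sigma0, xi)
          / gpd_survival(u - u0, sigma0, xi))
rescaled = gpd_survival(x, gpd_scale_at(u, u0, sigma0, xi), xi)
# the two probabilities agree
```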
Adaptable radiation monitoring system and method
Archer, Daniel E.; Beauchamp, Brock R.; Mauger, G. Joseph; Nelson, Karl E.; Mercer, Michael B.; Pletcher, David C.; Riot, Vincent J.; Schek, James L.; Knapp, David A.
2006-06-20
A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.
Adaptive computational methods for aerothermal heating analysis
NASA Technical Reports Server (NTRS)
Price, John M.; Oden, J. Tinsley
1988-01-01
The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.
An adaptive pseudospectral method for discontinuous problems
NASA Technical Reports Server (NTRS)
Augenbaum, Jeffrey M.
1988-01-01
The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic pde's by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.
Moving and adaptive grid methods for compressible flows
NASA Technical Reports Server (NTRS)
Trepanier, Jean-Yves; Camarero, Ricardo
1995-01-01
This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.
NASA Astrophysics Data System (ADS)
Rosen, A. L.; Krumholz, M. R.; Oishi, J. S.; Lee, A. T.; Klein, R. I.
2017-02-01
We present a highly parallel multi-frequency hybrid radiation hydrodynamics algorithm that combines a spatially adaptive long characteristics method for the radiation field from point sources with a moment method that handles the diffuse radiation field produced by a volume-filling fluid. Our Hybrid Adaptive Ray-Moment Method (HARM2) operates on patch-based adaptive grids, is compatible with asynchronous time stepping, and works with any moment method. In comparison to previous long characteristics methods, we have greatly improved the parallel performance of the adaptive long-characteristics method by developing a new completely asynchronous and non-blocking communication algorithm. As a result of this improvement, our implementation achieves near-perfect scaling up to O(10^3) processors on distributed memory machines. We present a series of tests to demonstrate the accuracy and performance of the method.
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burger equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
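One refinement pass of the flag-and-refine idea can be sketched in one dimension: split any interval whose endpoint jump exceeds a tolerance. This is a toy h-refinement indicator, much cruder than the spectral error estimators described above:

```python
def adapt_mesh(f, xs, tol):
    """One pass of indicator-driven refinement: split any interval whose
    endpoint jump |f(b) - f(a)| exceeds tol by inserting its midpoint."""
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        if abs(f(b) - f(a)) > tol:
            out.append((a + b) / 2)   # refine where the indicator flags
        out.append(b)
    return out

# A steep front at x = 0.5 attracts points; smooth regions stay coarse.
f = lambda x: 0.0 if x < 0.5 else 1.0
mesh = [0.0, 0.25, 0.5, 0.75, 1.0]
for _ in range(3):                    # three refinement passes
    mesh = adapt_mesh(f, mesh, tol=0.5)
# mesh clusters points just left of the front:
# [0.0, 0.25, 0.375, 0.4375, 0.46875, 0.5, 0.75, 1.0]
```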
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and to expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
Li, Xue; Xu, Yuan; Zhao, Gang; Shi, Chunli; Wang, Zhong-Liang; Wang, Yuqiu
2015-04-01
The eutrophication problem of a drinking water source is directly related to the security of the urban water supply, and phosphorus has been shown to be an important element for the water quality of most Northern Hemisphere lakes and reservoirs. In this paper, 15 years of monitoring records (1990∼2004) of Yuqiao Reservoir were used to model the changing trend of total phosphorus (TP), analyze the uncertainty of nutrient parameters, and estimate the threshold for eutrophication management at a specific water quality goal by applying a Bayesian method through a chemical material balance (CMB) model. The results revealed that Yuqiao Reservoir is a P-controlled water ecosystem, and the inner concentration of TP in the reservoir was significantly correlated with TP loading concentration, hydraulic retention coefficient, and bottom-water dissolved oxygen concentration. With the water quality goal for TP in the reservoir set to 0.05 mg L(-1) (the third level of the national surface water standard for reservoirs according to GB3838-2002), management measures could be taken to improve water quality in the reservoir by controlling the highest inflow phosphorus concentration (0.15∼0.21 mg L(-1)) and the lowest DO concentration (3.76∼5.59 mg L(-1)) to the threshold. An inverse method was applied to evaluate the joint management measures, and the results revealed that controlling the lowest dissolved oxygen concentration and adjusting the inflow and outflow of the reservoir is a valuable way to avoid eutrophication.
NASA Technical Reports Server (NTRS)
Smith, Paul L.; VonderHaar, Thomas H.
1996-01-01
The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) one attempt involved choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) another involved a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.
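The fixed-threshold ATI computation itself is straightforward: for each IR image, count the pixels colder than the temperature threshold, convert to area, and accumulate area times time over the image sequence. A toy Python sketch with made-up numbers:

```python
def area_time_integral(ir_images, threshold_k, pixel_area_km2, dt_h):
    """Fixed-threshold area-time integral (ATI) over an image sequence.

    For each satellite IR image, count pixels colder than the brightness
    temperature threshold, convert to area, and accumulate area x time.
    Units: km^2 * h."""
    ati = 0.0
    for img in ir_images:
        cold = sum(1 for row in img for t in row if t < threshold_k)
        ati += cold * pixel_area_km2 * dt_h
    return ati

# Two toy 3x3 IR scenes (brightness temperature, K), 16 km^2 pixels,
# 0.5 h apart; a 235 K threshold flags the cold cloud-cluster pixels:
scene1 = [[250, 230, 250], [228, 220, 250], [250, 250, 250]]
scene2 = [[250, 250, 250], [230, 225, 250], [250, 232, 250]]
ati = area_time_integral([scene1, scene2], 235, 16.0, 0.5)
# 3 cold pixels + 3 cold pixels → (3 + 3) * 16 * 0.5 = 48 km^2 h
```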
NASA Astrophysics Data System (ADS)
Zhang, Jiyan; Dong, Jinxin
2016-09-01
Color discrimination is a powerful tool for the detection of eye diseases, and it is necessary to produce different kinds of color rapidly and precisely for testing the color discrimination thresholds of human eyes. Three-channel pulse-width modulation (PWM) and light-mixing technology is a new way of mixing color, and a new measurement method for color discrimination thresholds of human eyes based on PWM light-mixing technology can generate various color stimuli. In this study, 5 young volunteers were measured with this equipment after a test of the stability of the device's illumination and chrominance. Through the theory of MacAdam ellipses and the interleaved staircase method, a psychophysical experiment was made to study the color discrimination threshold of the human eye around a basic color center. By analyzing the data of the chromatic ellipse and the color discrimination threshold, the result shows that each color is not uniform within a single color region and that the color difference threshold of normal human observers is around the third MacAdam ellipse. The experimental results show that the repeatability and accuracy of the observer can meet the accuracy requirements of the relevant experiments, and that the data are reliable and effective, which means the measurement method is an effective way to measure the color discrimination thresholds of the human visual system.
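A single one-up/one-down staircase, the building block of the interleaved staircase method, can be sketched as follows. This uses a deterministic toy observer; real psychophysical responses are stochastic, and several staircases are interleaved in practice:

```python
def staircase_threshold(respond, start, step, reversals_needed=6):
    """One-up/one-down staircase: decrease the stimulus after a detection,
    increase it after a miss, and estimate the threshold as the mean
    stimulus level at the reversal points."""
    level, going_down, reversals = start, True, []
    while len(reversals) < reversals_needed:
        detected = respond(level)
        if detected != going_down:        # direction change = reversal
            reversals.append(level)
            going_down = detected
        level += -step if detected else step
    return sum(reversals) / len(reversals)

# Deterministic observer with a true threshold of 5.0 (an assumed value
# for the demo).  The staircase settles within one step of the truth:
threshold = staircase_threshold(lambda x: x >= 5.0, start=10.0, step=1.0)
```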
NASA Technical Reports Server (NTRS)
Hirsch, David
2009-01-01
Spacecraft fire safety emphasizes fire prevention, which is achieved primarily through the use of fire-resistant materials. Materials selection for spacecraft is based on conventional flammability acceptance tests, along with prescribed quantity limitations and configuration control for items that are non-pass or questionable. ISO 14624-1 and -2 are the major methods used to evaluate flammability of polymeric materials intended for use in the habitable environments of spacecraft. The methods are upward flame-propagation tests initiated in static environments and using a well-defined igniter flame at the bottom of the sample. The tests are conducted in the most severe flaming combustion environment expected in the spacecraft. The pass/fail test logic of ISO 14624-1 and -2 does not allow a quantitative comparison with reduced gravity or microgravity test results; therefore their use is limited, and possibilities for in-depth theoretical analyses and realistic estimates of spacecraft fire extinguishment requirements are practically eliminated. To better understand the applicability of laboratory test data to actual spacecraft environments, a modified ISO 14624 protocol has been proposed that, as an alternative to qualifying materials as pass/fail in the worst-expected environments, measures the actual upward flammability limit for the material. A working group established by NASA to provide recommendations for exploration spacecraft internal atmospheres realized the importance of correlating laboratory data with real-life environments and recommended NASA to develop a flammability threshold test method. The working group indicated that for the Constellation Program, the flammability threshold information will allow NASA to identify materials with increased flammability risk from oxygen concentration and total pressure changes, minimize potential impacts, and allow for development of sound requirements for new spacecraft and extravehicular landers and habitats
Adaptive Kernel Based Machine Learning Methods
2012-10-15
multiscale collocation method with a matrix compression strategy to discretize the system of integral equations, and then use the multilevel augmentation method to solve the resulting discrete system. A priori and a posteriori parameter choice strategies are developed for these methods. The ... performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed
Adaptive upscaling with the dual mesh method
Guerillot, D.; Verdiere, S.
1997-08-01
The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer, even in homogeneous cases, because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.
Goldberg, J M; Lindblom, U
1979-01-01
Vibration threshold determinations were made by means of an electromagnetic vibrator at three sites (carpal, tibial, and tarsal), which were primarily selected for examining patients with polyneuropathy. Because of the vast variation demonstrated for both vibrator output and tissue damping, the thresholds were expressed in terms of the amplitude of stimulator movement measured by means of an accelerometer, instead of the applied voltage which is commonly used. Statistical analysis revealed a higher power of discrimination for amplitude measurements at all three stimulus sites. Digital read-out gave the best statistical result and was also most practical. Reference values obtained from 110 healthy males, 10 to 74 years of age, were highly correlated with age for both the upper and lower extremities. The variance of the vibration perception threshold was less than that of the disappearance threshold, and determination of the perception threshold alone may be sufficient in most cases. PMID:501379
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way, by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time-step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in the accurate prediction of damage levels and failure time.
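The indicator-driven h-refinement loop described above can be sketched in one dimension. The function name, the bisection rule, and the tolerance are illustrative assumptions rather than the paper's implementation:

```python
def refine_1d(nodes, indicators, tol):
    """One pass of h-adaptive refinement on a 1-D mesh: bisect every
    element whose error indicator exceeds tol. The indicator could be any
    per-element estimate (e.g. based on stresses, non-elastic strains, or
    damage); here it is simply a supplied list of numbers."""
    new_nodes = [nodes[0]]
    for i in range(len(nodes) - 1):
        if indicators[i] > tol:
            # flagged element: insert its midpoint
            new_nodes.append(0.5 * (nodes[i] + nodes[i + 1]))
        new_nodes.append(nodes[i + 1])
    return new_nodes

# two elements, only the first is flagged for refinement
mesh = refine_1d([0.0, 1.0, 2.0], [0.5, 0.01], tol=0.1)  # [0.0, 0.5, 1.0, 2.0]
```

In practice this pass is repeated inside the nonlinear solve, with the indicator recomputed on each new mesh.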
Stutz, William E; Bolnick, Daniel I
2014-01-01
Genes of the vertebrate major histocompatibility complex (MHC) are of great interest to biologists because of their important role in immunity and disease, and their extremely high levels of genetic diversity. Next generation sequencing (NGS) technologies are quickly becoming the method of choice for high-throughput genotyping of multi-locus templates like MHC in non-model organisms. Previous approaches to genotyping MHC genes using NGS technologies suffer from two problems: 1) a "gray zone" where low-frequency alleles and high-frequency artifacts can be difficult to disentangle, and 2) a similar-sequence problem, where very similar alleles can be difficult to distinguish as two distinct alleles. Here we present a new method for genotyping MHC loci--Stepwise Threshold Clustering (STC)--that addresses these problems by taking full advantage of the increase in sequence data provided by NGS technologies. Unlike previous approaches for genotyping MHC with NGS data that attempt to classify individual sequences as alleles or artifacts, STC uses a quasi-Dirichlet clustering algorithm to cluster similar sequences at increasing levels of sequence similarity. By applying frequency- and similarity-based criteria to clusters rather than individual sequences, STC is able to successfully identify clusters of sequences that correspond to individual or similar alleles present in the genomes of individual samples. Furthermore, STC does not require duplicate runs of all samples, increasing the number of samples that can be genotyped in a given project. We show how the STC method works using a single sample library. We then apply STC to 295 threespine stickleback (Gasterosteus aculeatus) samples from four populations and show that neighboring populations differ significantly in MHC allele pools. We show that STC is a reliable, accurate, efficient, and flexible method for genotyping MHC that will be of use to biologists interested in a variety of downstream applications.
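The cluster-then-filter idea behind STC can be sketched as follows. This is a toy illustration, not the authors' algorithm: it substitutes single-linkage clustering over a simple per-position identity score for STC's quasi-Dirichlet clustering, and the similarity steps and frequency cutoff are invented placeholders:

```python
from itertools import combinations

def identity(a, b):
    # toy similarity: fraction of matching positions (equal-length reads)
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def cluster_at(seqs, threshold):
    # single-linkage clustering via union-find: join reads whose
    # pairwise identity reaches the threshold
    parent = list(range(len(seqs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(seqs)), 2):
        if identity(seqs[i], seqs[j]) >= threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(seqs)):
        groups.setdefault(find(i), []).append(seqs[i])
    return list(groups.values())

def stepwise_threshold_clustering(reads, steps=(0.80, 0.90, 0.95), min_frac=0.1):
    # at each similarity step, cluster the surviving reads and apply the
    # frequency criterion to whole clusters (not individual sequences),
    # discarding low-frequency clusters as likely artifacts
    kept = reads
    for t in steps:
        clusters = cluster_at(kept, t)
        clusters = [c for c in clusters if len(c) / len(reads) >= min_frac]
        kept = [s for c in clusters for s in c]
    return cluster_at(kept, steps[-1])
```

Rare singleton reads drop out at the first step, while genuinely similar alleles remain merged until a step fine enough to separate them.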
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Mamalakis, Antonios; Puliga, Michelangelo; Deidda, Roberto
2016-04-01
In extreme excess modeling, one fits a generalized Pareto (GP) distribution to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as nonparametric methods that are intended to locate the changing point between extreme and nonextreme regions of the data, graphical methods where one studies the dependence of GP-related metrics on the threshold level u, and Goodness-of-Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u above which a GP distribution model is applicable. Here we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 overcentennial daily rainfall records from the NOAA-NCDC database. We find that nonparametric methods are generally not reliable, while methods that are based on GP asymptotic properties lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low; i.e., on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on preasymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results.
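One graphical, preasymptotic diagnostic of the kind reviewed here is the mean-residual-life (mean-excess) plot: for a GP tail the mean excess is linear in u, so an approximately linear region suggests a usable threshold. The exponential data and the 8 mm scale below are synthetic assumptions for the sketch, not NOAA-NCDC values:

```python
import numpy as np

def mean_residual_life(data, thresholds):
    # mean excess e(u) = E[X - u | X > u]; linear in u under a GP tail
    return np.array([(data[data > u] - u).mean() for u in thresholds])

rng = np.random.default_rng(0)
# synthetic "daily rainfall" tail: exponential = GP with shape parameter 0
data = rng.exponential(scale=8.0, size=20000)
us = np.linspace(0, 30, 16)
me = mean_residual_life(data, us)
# for an exponential, the mean excess stays flat at the scale (about 8)
```

For a real record one would look for the lowest u beyond which the plotted mean excess is approximately linear, then fit the GP to the excesses above that u.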
Cheled-Shoval, Shira L; Reicher, Naama; Niv, Masha Y; Uni, Zehava
2017-03-02
The sense of taste has a key role in nutrient sensing and food intake in animals. A standardized and simple method for determination of tastant-detection thresholds is required for chemosensory research in poultry. We established a 24-h, 2-alternative, forced-choice solution-consumption method and applied it to measure detection thresholds for three G-protein-coupled-receptor-mediated taste modalities (bitter, sweet, and umami) in chicken. Four parameters were used to determine a significant response: 1) tastant-solution consumption; 2) water (tasteless) consumption; 3) total consumption (tastant and water together); 4) ratio of tastant consumption to total consumption. Our results showed that assignment of the taste solutions and a water control to 2 bottles on random sides of the pen can be reliably used for broiler chicks, even though 47% of the chick groups demonstrated a consistently preferred side. The detection thresholds for quinine (bitter), L-monosodium glutamate (MSG) (umami), and sucrose (sweet) were determined to be 0.3 mM, 300 mM, and 1 M, respectively. The threshold results for quinine were similar to those for humans and rodents, but the chicks were found to be less sensitive to sucrose and MSG. The described method is useful for studying detection thresholds for tastants that have the potential to affect feed and water consumption in chickens.
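The fourth parameter, the consumption ratio, lends itself to a simple threshold rule: the lowest dose at which intake departs from indifference. The `0.35` avoidance cutoff and the lowest-departing-dose rule below are illustrative assumptions, not the statistics used in the paper:

```python
def consumption_ratio(tastant_ml, water_ml):
    """Ratio of tastant-solution intake to total intake in a 24-h
    two-bottle test; about 0.5 indicates indifference, markedly lower
    values indicate avoidance (e.g. of a bitter tastant)."""
    total = tastant_ml + water_ml
    if total == 0:
        raise ValueError("no consumption recorded")
    return tastant_ml / total

def detection_threshold(doses, ratios, cutoff=0.35):
    # lowest dose whose consumption ratio departs from indifference by
    # the chosen (assumed) cutoff -- a minimal stand-in for the paper's
    # four-parameter response criterion
    for dose, r in sorted(zip(doses, ratios)):
        if r <= cutoff:
            return dose
    return None
```

A group drinking 30 ml of quinine solution against 70 ml of water has a ratio of 0.3, which would count as avoidance under this toy cutoff.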
Dependence of the L-H power threshold on magnetic balance and heating method in NSTX
NASA Astrophysics Data System (ADS)
Maingi, R.; Biewer, T.; Meyer, H.; Bell, R.; Leblanc, B.; Chang, C. S.
2007-11-01
H-mode access is a critical issue for next-step devices, such as the International Thermonuclear Experimental Reactor (ITER), which is projected to have a modest heating power margin over the projected L-H power threshold (PLH). The importance of a second X-point in setting the value of PLH has been clarified in recent experiments on several tokamaks. Specifically, a reduction of PLH was observed when the magnetic configuration was changed from single null (SN) to double null (DN) in the MAST, NSTX, and ASDEX-Upgrade devices [1]. Motivated by these results, detailed PLH studies on NSTX have compared discharges with neutral beam and rf heating, as a function of drsep. Similar PLH values and edge parameters are observed with the two heating methods in the same magnetic configuration, with PLH ~ 0.6 MW lowest in DN and increasing to ~ 1.1 MW and 2-4 MW in lower-SN and upper-SN configurations respectively (ion grad-B drift towards the lower X-point). The evolution of the experimental profiles of parameters in L-mode before the L/H transition will be compared with simulations using the XGC code (C.S. Chang). [1] MEYER, H. et al., Nucl. Fusion 46 (2006) 64.
NASA Astrophysics Data System (ADS)
Susrama, I. G.; Purnama, K. E.; Purnomo, M. H.
2016-01-01
Oligospermia is a male fertility issue defined as a low sperm concentration in the ejaculate. A normal sperm concentration is 20-120 million/ml, while oligospermia patients have a sperm concentration below 20 million/ml. Sperm tests are done in the fertility laboratory to determine oligospermia by checking fresh sperm according to the 2010 WHO standards [9]. The sperm are viewed under a microscope using a Neubauer improved counting chamber, and the number of sperm is counted manually. To automate this count, this research developed a system to analyse and count sperm concentration, called Automated Analysis of Sperm Concentration Counters (A2SC2), using Otsu threshold segmentation and morphological processing. The sperm data used were fresh samples from 10 people, analyzed directly in the laboratory. Test results using the A2SC2 method showed an accuracy of 91%. Thus, in this study, A2SC2 can be used to calculate the number and concentration of sperm automatically.
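A minimal sketch of the Otsu-plus-counting idea follows. This is not the authors' A2SC2 pipeline: the morphological cleanup step is omitted, and the synthetic image stands in for a Neubauer-chamber micrograph:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img):
    # classic Otsu: pick the gray level maximizing between-class variance
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    w = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))  # unnormalized class-0 mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w - mu) ** 2 / (w * (1 - w))
    return int(np.nanargmax(sigma_b))

def count_objects(img):
    # binarize with Otsu, then count connected bright regions ("cells")
    binary = img > otsu_threshold(img)
    _, n = ndimage.label(binary)
    return n

# synthetic stand-in: three bright blobs on a dark background
img = np.full((50, 50), 30, dtype=np.uint8)
img[5:10, 5:10] = 200
img[5:10, 20:25] = 200
img[20:25, 5:10] = 200
```

A real pipeline would add a morphological opening between thresholding and labeling to split touching cells before converting the count to a concentration.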
Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes
2016-01-01
The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is
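The augmentation stage that distinguishes the AEKF from a classical EKF — growing the state when a new landmark centroid is first observed — might look like this for a planar robot with range-bearing observations. The state layout and noise values are assumptions for illustration, not the authors' formulation:

```python
import numpy as np

def augment_state(x, P, z, R):
    """Augmentation stage of an (A)EKF-SLAM filter, sketched for a robot
    state x = [px, py, theta] and a range-bearing observation z = [r, b]
    of a new landmark. Appends the landmark position to the state and
    expands the covariance accordingly."""
    px, py, th = x[:3]
    r, b = z
    # landmark position in the world frame
    lx = px + r * np.cos(th + b)
    ly = py + r * np.sin(th + b)
    # Jacobians of the landmark w.r.t. robot state and measurement
    Gx = np.array([[1.0, 0.0, -r * np.sin(th + b)],
                   [0.0, 1.0,  r * np.cos(th + b)]])
    Gz = np.array([[np.cos(th + b), -r * np.sin(th + b)],
                   [np.sin(th + b),  r * np.cos(th + b)]])
    n = len(x)
    x_new = np.append(x, [lx, ly])
    P_new = np.zeros((n + 2, n + 2))
    P_new[:n, :n] = P
    Prr = P[:3, :3]
    P_new[n:, :3] = Gx @ Prr
    P_new[:3, n:] = (Gx @ Prr).T
    P_new[n:, n:] = Gx @ Prr @ Gx.T + Gz @ R @ Gz.T
    if n > 3:  # cross-covariances with any already-mapped landmarks
        P_new[n:, 3:n] = Gx @ P[:3, 3:n]
        P_new[3:n, n:] = P_new[n:, 3:n].T
    return x_new, P_new
```

Each segmented centroid triggers one such augmentation the first time it is observed; subsequent sightings go through the usual EKF update instead.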
Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar
2012-10-01
Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time.
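The core idea — shifting a dark-spot cutoff with the estimated wind field — can be sketched as below. The offset and gain are made-up placeholders, not the values calibrated for the Galician coast in the paper:

```python
import numpy as np

def adaptive_threshold_db(scene_mean_db, wind_speed_ms,
                          base_offset_db=-3.0, wind_gain_db=0.2):
    """Illustrative wind-adaptive dark-spot threshold for SAR oil-spill
    detection: slicks appear as low-backscatter (dark) regions, and higher
    wind raises the surrounding sea-clutter level, so the cutoff is shifted
    with wind speed."""
    return scene_mean_db + base_offset_db - wind_gain_db * wind_speed_ms

# pixels darker than the wind-dependent cutoff become candidate spill pixels
scene = np.array([[-8.0, -20.0],
                  [-9.0, -22.0]])  # toy backscatter image in dB
mask = scene < adaptive_threshold_db(scene.mean(), wind_speed_ms=5.0)
```

In an operational system the threshold would be evaluated locally (per window) against the co-located wind estimate rather than once per scene.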
NASA Astrophysics Data System (ADS)
Butler, John S.; Molloy, Anna; Williams, Laura; Kimmich, Okka; Quinlivan, Brendan; O'Riordan, Sean; Hutchinson, Michael; Reilly, Richard B.
2015-08-01
Objective. Recent studies have proposed that the temporal discrimination threshold (TDT), the shortest detectable time period between two stimuli, is a possible endophenotype for adult onset idiopathic isolated focal dystonia (AOIFD). Patients with AOIFD, the third most common movement disorder, and their first-degree relatives have been shown to have abnormal visual and tactile TDTs. For this reason it is important to fully characterize each participant’s data. To date the TDT has only been reported as a single value. Approach. Here, we fit individual participant data with a cumulative Gaussian to extract the mean and standard deviation of the distribution. The mean represents the point of subjective equality (PSE), the inter-stimulus interval at which participants are equally likely to respond that two stimuli are one stimulus (synchronous) or two different stimuli (asynchronous). The standard deviation represents the just noticeable difference (JND) which is how sensitive participants are to changes in temporal asynchrony around the PSE. We extended this method by submitting the data to a non-parametric bootstrapped analysis to get 95% confidence intervals on individual participant data. Main results. Both the JND and PSE correlate with the TDT value but are independent of each other. Hence this suggests that they represent different facets of the TDT. Furthermore, we divided groups by age and compared the TDT, PSE, and JND values. The analysis revealed a statistical difference for the PSE which was only trending for the TDT. Significance. The analysis method will enable deeper analysis of the TDT to leverage subtle differences within and between control and patient groups, not apparent in the standard TDT measure.
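Fitting a cumulative Gaussian to per-participant response proportions to recover the PSE (the mean) and JND (the standard deviation) takes only a few lines of SciPy. The synthetic noiseless data below are only to show the mechanics; the paper additionally bootstraps resampled trials to obtain 95% confidence intervals:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_psychometric(isi_ms, p_async):
    # cumulative Gaussian psychometric function: its mean is the PSE and
    # its standard deviation is the JND, as defined in the abstract
    popt, _ = curve_fit(lambda x, mu, sd: norm.cdf(x, mu, sd),
                        isi_ms, p_async, p0=[np.mean(isi_ms), 10.0])
    return popt[0], popt[1]  # (PSE, JND)

isi = np.array([0, 10, 20, 30, 40, 50, 60, 70], dtype=float)
p = norm.cdf(isi, 35, 12)  # synthetic observer: PSE 35 ms, JND 12 ms
pse, jnd = fit_psychometric(isi, p)
```

With real binary responses one would fit proportions per inter-stimulus interval and wrap `fit_psychometric` in a resampling loop for the bootstrap intervals.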
NASA Astrophysics Data System (ADS)
Liu, Lixin; Bian, Hongyu; Yagi, Shin-ichi; Yang, Xiaodong
2016-07-01
Raw sonar images may not be usable for underwater detection or recognition directly, because disturbances such as grating-lobe and multi-path effects alter the gray-level distribution of sonar images and cause phantom echoes. To search for a more robust segmentation method with a reasonable computational cost, a prior-knowledge-based threshold segmentation method for underwater linear object detection is discussed. The possibility of guiding the segmentation threshold evolution of forward-looking sonar images using prior knowledge is verified by experiment. During the threshold evolution, the collinear relation of two lines that correspond to double peaks in the voting space of the edged image is used as the criterion of termination. The interaction is reflected in the sense that the Hough transform contributes the basis of the collinear relation of lines, while the binary image generated from the current threshold provides the input of the Hough transform. The experimental results show that the proposed method maintains a good tradeoff between segmentation quality and computational time in comparison with conventional segmentation methods. The proposed method facilitates further processing for unsupervised underwater visual understanding.
Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling
NASA Astrophysics Data System (ADS)
Davis, B. N.; LeVeque, R. J.
2016-12-01
One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.
Studies of an Adaptive Kaczmarz Method for Electrical Impedance Imaging
NASA Astrophysics Data System (ADS)
Li, Taoran; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.
2013-04-01
We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term JTJ which could be expensive in terms of memory storage in large scale problems, we propose to solve the inverse problem by adaptively updating both the optimal current pattern with improved distinguishability and the conductivity estimate at each iteration. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation and the Kaczmarz method can produce accurate and stable solutions adaptively compared to traditional Kaczmarz and Gauss-Newton type methods. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results.
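The row-action core of any Kaczmarz-type solver is a cyclic projection onto one measurement hyperplane at a time, which is what makes it memory-efficient (the Jacobian-related term JTJ is never formed). This is only the classical iteration, not the paper's adaptive current-pattern scheme:

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    """Classical cyclic Kaczmarz for A x = b: project the current iterate
    onto the hyperplane of one row at a time. Each step touches a single
    row of A, so no normal-equation matrix is ever assembled."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x = x + (b[i] - ai @ x) / (ai @ ai) * ai
    return x

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = kaczmarz(A, b)  # exact solution is [2, 3]
```

The adaptive method interleaves such sweeps with re-selection of the optimal current pattern and with its subset scheme; the projection step itself is unchanged.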
NASA Astrophysics Data System (ADS)
Gong, He; Fan, Yubo; Zhang, Ming
2008-04-01
The objective of this paper is to identify the effects of mechanical disuse and basic multi-cellular unit (BMU) activation threshold on the form of trabecular bone during menopause. A bone adaptation model with mechanical-biological factors at the BMU level was integrated with finite element analysis to simulate the changes of trabecular bone structure during menopause. Mechanical disuse and changes in the BMU activation threshold were applied to the model for the period from 4 years before to 4 years after menopause. The changes in bone volume fraction, trabecular thickness and fractal dimension of the trabecular structures were used to quantify the changes of trabecular bone in three different cases associated with mechanical disuse and BMU activation threshold. It was found that the changes in the simulated bone volume fraction were highly correlated and consistent with clinical data; that the trabecular thickness reduced significantly during menopause and was highly linearly correlated with the bone volume fraction; and that the trend in the fractal dimension of the simulated trabecular structure corresponded with clinical observations. The numerical simulation in this paper may help to better understand the relationship between bone morphology and the mechanical and biological environments, and can provide a quantitative computational model and methodology for the numerical simulation of bone structural morphological changes caused by the mechanical and/or biological environment.
Johansson, R S; Vallbo, A B
1979-01-01
1. Psychophysical thresholds were determined at 162 points in the glabrous skin area of the human hand when slowly rising, triangular indentations of controlled amplitudes were delivered with a small probe. The method of constant stimuli was used with either the two alternative forced choice or the yes-no procedure. It was found that the distribution of the psychophysical thresholds varied with the skin region. Thresholds from the volar aspect of the fingers and the peripheral parts of the palm were low and their distribution was unimodal with a median of 11.2 micrometers. In contrast, there was an over-representation of high thresholds when observations from the centre of the palm, the lateral aspects of the fingers and the regions of the creases were pooled, and the distribution was slightly bimodal with a median of 36.0 micrometers. 2. Nerve impulses were recorded from single fibres in the median nerve of human subjects with percutaneously inserted tungsten needle electrodes. The thresholds of 128 mechanosensitive afferent units in the glabrous skin area of the hand were determined when stimuli were delivered to partly the same points as stimulated for the assessment of the psychophysical thresholds. Of the four types of units present in this area the Pacinian corpuscle (PC) and rapidly adapting (RA) units had the lowest thresholds with medians of 9.2 and 13.8 micrometers, followed by the slowly adapting type I and slowly adapting type II units with medians of 56.5 and 33.1 micrometers. There was no indication of a difference between thresholds of units located in different skin areas. 3. In the region of low psychophysical thresholds there was good agreement between the thresholds of the rapidly adapting and Pacinian corpuscle units and the psychophysical thresholds, particularly at the lower ends of the samples. In the skin regions of high thresholds, on the other hand, practically all psychophysical thresholds were higher than the thresholds of the most
A multigrid method for steady Euler equations on unstructured adaptive grids
NASA Technical Reports Server (NTRS)
Riemslagh, Kris; Dick, Erik
1993-01-01
A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first order accurate inner iteration and a second-order correction performed only on the finest grid, is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured a Jacobi type is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removal. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.
1998-12-10
Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
NASA Astrophysics Data System (ADS)
Chang, Shihui; Xue, Fanfan; Zhou, Wenzheng; Zhang, Ji; Jian, Xiqi
2017-03-01
Usually, numerical simulation is used to predict the acoustic field and temperature distribution of high intensity focused ultrasound (HIFU). In this paper, the simulated lesion volumes obtained by a temperature threshold (TRT) of 60 °C and an equivalent thermal dose (ETD) of 240 min were compared with experimental results obtained from animal tissue experiments in vitro. In the simulation, the model was established according to the in vitro tissue experiment, and the Finite Difference Time Domain (FDTD) method was used to calculate the acoustic field and temperature distribution in bovine liver via the Westervelt formula and the Pennes bio-heat transfer equation, taking the non-linear characteristics of the ultrasound into account. In the experiment, fresh bovine liver was exposed for 8 s, 10 s, and 12 s under different power conditions (150 W, 170 W, 190 W, 210 W), and each exposure was repeated 6 times at the same dose. After the exposures, the liver was sliced and photographed every 0.2 mm, and the area of the lesion region in every photo was calculated. Each area was then multiplied by 0.2 mm, and the products were summed to approximate the volume of the lesion region. The comparison shows that the lesion volume calculated by the 60 °C TRT in simulation was much closer to the lesion volume obtained in experiment; the volume of the region above 60 °C was larger than the experimental result, but the volume deviation did not exceed 10%. The lesion volume calculated by the 240-min ETD was larger than that calculated by the 60 °C TRT in simulation, with volume deviations ranging from 4.9% to 23.7%.
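The two lesion criteria being compared are easy to state in code. The cumulative-equivalent-minutes form below (R = 0.5 at or above 43 °C, 0.25 below) is the standard thermal-dose formulation consistent with the abstract's 240-min ETD criterion; the temperature histories are hypothetical:

```python
def cem43(temps_c, dt_s):
    """Cumulative equivalent minutes at 43 degC: t43 = sum R**(43 - T) * dt,
    with R = 0.5 for T >= 43 degC and 0.25 below. temps_c is a sampled
    temperature history for one voxel; dt_s is the sample spacing in
    seconds. Returns equivalent minutes."""
    total = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        total += r ** (43.0 - t) * (dt_s / 60.0)
    return total

def lesioned(temps_c, dt_s, etd_min=240.0, trt_c=60.0):
    # the two criteria compared in the abstract, applied per voxel:
    # equivalent thermal dose (240 min at 43 degC) vs. a simple
    # temperature threshold (60 degC)
    return cem43(temps_c, dt_s) >= etd_min, max(temps_c) >= trt_c
```

Summing the voxels that satisfy each criterion over the simulated field gives the two lesion volumes whose deviations from experiment the paper reports.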
An improved adaptive IHS method for image fusion
NASA Astrophysics Data System (ADS)
Wang, Ting
2015-12-01
An improved adaptive intensity-hue-saturation (IHS) method is proposed for image fusion in this paper, based on the adaptive IHS (AIHS) method and its improved variant (IAIHS). In the improved method, the weighting matrix, which decides how much spatial detail from the panchromatic (Pan) image should be injected into the multispectral (MS) image, is defined on the basis of the linear relationship between the edges of the Pan and MS images. At the same time, a modulation parameter t is used to balance the spatial and spectral resolution of the fused image. Experiments showed that the improved method can improve spectral quality and maintain spatial resolution compared with the AIHS and IAIHS methods.
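A generalized-IHS fusion step with the modulation parameter t can be sketched as follows. Uniform band weights stand in for the edge-based adaptive weighting matrix, so this is the baseline scheme, not the proposed method:

```python
import numpy as np

def ihs_fuse(ms, pan, t=1.0):
    """Generalized IHS pansharpening sketch: the intensity component is
    the band mean; the spatial detail (Pan - I) is injected into every MS
    band, scaled by a modulation parameter t that trades spatial against
    spectral quality. ms has shape (bands, rows, cols); pan (rows, cols)."""
    intensity = ms.mean(axis=0)
    detail = pan - intensity
    return ms + t * detail[None, :, :]
```

With t = 0 the MS image is returned unchanged (maximal spectral fidelity); t = 1 injects the full Pan detail. The adaptive methods replace the uniform mean with per-band, per-pixel weights.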
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics
Anderson, R W; Pember, R B; Elliott, N S
2002-10-19
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.
A new method for the re-implementation of threshold logic functions with cellular neural networks.
Bénédic, Y; Wira, P; Mercklé, J
2008-08-01
A new strategy is presented for the implementation of threshold logic functions with binary-output Cellular Neural Networks (CNNs). The objective is to optimize the CNNs weights to develop a robust implementation. Hence, the concept of generative set is introduced as a convenient representation of any linearly separable Boolean function. Our analysis of threshold logic functions leads to a complete algorithm that automatically provides an optimized generative set. New weights are deduced and a more robust CNN template assuming the same function can thus be implemented. The strategy is illustrated by a detailed example.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Jones, Jeffrey A.
1997-01-01
One of the TRMM radar products of interest is the monthly-averaged rain rate over 5 x 5 degree cells. Clearly, the most direct way of calculating these and similar statistics is to compute them from the individual estimates made over the instantaneous field of view of the instrument (4.3 km horizontal resolution). An alternative approach is the use of a threshold method. It has been established that over sufficiently large regions the fractional area above a rain rate threshold and the area-average rain rate are well correlated for particular choices of the threshold [e.g., Kedem et al., 1990]. A straightforward application of this method to the TRMM data would consist of converting the individual reflectivity factors to rain rates and then calculating the fraction of these that exceed a particular threshold. Previous results indicate that for thresholds near or at 5 mm/h, the correlation between this fractional area and the area-average rain rate is high. There are several drawbacks to this approach, however. At the TRMM radar frequency of 13.8 GHz the signal suffers attenuation, so the negative bias of the high-resolution rain rate estimates will increase as the path attenuation increases. To establish a quantitative relationship between fractional area and area-average rain rate, an independent means of calculating the area-average rain rate is needed, such as an array of rain gauges. This type of calibration procedure, however, is difficult for a spaceborne radar such as TRMM. To estimate a statistic other than the mean of the distribution requires, in general, a different choice of threshold and a different set of tuning parameters.
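The threshold statistic described here is straightforward to compute once rain rates are in hand. A minimal sketch with hypothetical footprint rain rates and the 5 mm/h threshold mentioned above:

```python
def fractional_area_above(rain_rates, threshold=5.0):
    """Fraction of footprints whose rain rate exceeds the threshold (mm/h)."""
    return sum(1 for r in rain_rates if r > threshold) / len(rain_rates)

def area_average(rain_rates):
    """Unconditional area-average rain rate over the same footprints."""
    return sum(rain_rates) / len(rain_rates)

rates = [0.0, 0.0, 2.0, 6.0, 12.0, 0.0, 8.0, 0.0]  # mm/h, hypothetical
frac = fractional_area_above(rates)  # 3 of 8 footprints exceed 5 mm/h -> 0.375
```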
Wavelet methods in multi-conjugate adaptive optics
NASA Astrophysics Data System (ADS)
Helin, T.; Yudytskiy, M.
2013-08-01
The next generation ground-based telescopes rely heavily on adaptive optics for overcoming the limitation of atmospheric turbulence. In the future adaptive optics modalities, like multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on the official end-to-end simulation tool OCTOPUS of European Southern Observatory.
Adaptive computational methods for SSME internal flow analysis
NASA Technical Reports Server (NTRS)
Oden, J. T.
1986-01-01
Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods) in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given and some pertinent conclusions of the study are listed.
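Of the three families above, the h-method is the simplest to sketch: cells whose local error indicator exceeds a tolerance are split. A toy 1-D illustration (an idealization, not the project's SSME solver):

```python
def h_refine(cells, errors, tol):
    """One pass of 1-D h-refinement: bisect any cell whose error indicator
    exceeds tol; keep the remaining cells unchanged."""
    refined = []
    for (a, b), err in zip(cells, errors):
        if err > tol:
            mid = 0.5 * (a + b)
            refined += [(a, mid), (mid, b)]  # split the flagged cell in two
        else:
            refined.append((a, b))
    return refined

# Only cell (0, 1) has a large error indicator, so only it is bisected
mesh = h_refine([(0.0, 1.0), (1.0, 2.0)], errors=[0.5, 0.01], tol=0.1)
```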
Cox-Davenport, Rebecca A; Phelan, Julia C
2015-05-01
First-time NCLEX-RN pass rates are an important indicator of nursing school success and quality. Nursing schools use different methods to anticipate NCLEX outcomes and help prevent student failure and possible threat to accreditation. This study evaluated the impact of a shift in NCLEX preparation policy at a BSN program in the southeast United States. The policy shifted from the use of predictor score thresholds to determine graduation eligibility to a more proactive remediation strategy involving adaptive quizzing. A descriptive correlational design evaluated the impact of an adaptive quizzing system designed to give students ongoing active practice and feedback and explored the relationship between predictor examinations and NCLEX success. Data from student usage of the system as well as scores on predictor tests were collected for three student cohorts. Results revealed a positive correlation between adaptive quizzing system usage and content mastery. Two of the 69 students in the sample did not pass the NCLEX. With so few students failing the NCLEX, predictability of any course variables could not be determined. The power of predictor examinations to predict NCLEX failure could also not be supported. The most consistent factor among students, however, was their content mastery level within the adaptive quizzing system. Implications of these findings are discussed.
Adaptive clustering and adaptive weighting methods to detect disease associated rare variants.
Sha, Qiuying; Wang, Shuaicheng; Zhang, Shuanglin
2013-03-01
Current statistical methods to test association between rare variants and phenotypes are essentially group-wise methods that collapse or aggregate all variants in a predefined group into a single variant. Compared with variant-by-variant methods, the group-wise methods have their advantages. However, two factors may affect their power. One is that some of the causal variants may be protective. When both risk and protective variants are present, collapsing or aggregating all variants loses power because the effects of risk and protective variants counteract each other. The other is that not all variants in the group are causal; rather, a large proportion are believed to be neutral. When a large proportion of variants are neutral, collapsing or aggregating all variants may not be an optimal solution. We propose two alternative methods, the adaptive clustering (AC) method and the adaptive weighting (AW) method, aiming to test rare variant association in the presence of neutral and/or protective variants. Both AC and AW are applicable to quantitative as well as qualitative traits. Results of extensive simulation studies show that AC and AW have similar power, and both have clear advantages in power and computational efficiency compared with existing group-wise methods and existing data-driven methods that allow for neutral and protective variants. We recommend the AW method because it is computationally more efficient than the AC method.
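For context, the group-wise collapsing baseline the authors improve on can be sketched in a few lines (a CAST-style indicator, not the proposed AC/AW methods): each individual is scored 1 if they carry any rare variant in the group, and that single indicator is then tested against the phenotype.

```python
def collapse(genotypes):
    """CAST-style collapsing: score 1 if an individual carries at least one
    rare variant in the group (genotypes: per-person minor-allele counts)."""
    return [1 if any(g > 0 for g in person) else 0 for person in genotypes]

# Three individuals x three rare variants (hypothetical minor-allele counts)
carriers = collapse([[0, 0, 1], [0, 0, 0], [2, 0, 0]])  # -> [1, 0, 1]
```

A risk variant and a protective variant both set the same indicator to 1, which is exactly the counteracting effect described in the abstract.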
Nastasi, Michael Anthony; Wang, Yongqiang; Fraboni, Beatrice; Cosseddu, Piero; Bonfiglio, Annalisa
2013-06-11
Organic thin film devices that included an organic thin film subjected to a selected dose of ions at a selected energy exhibited a stabilized mobility (μ) and threshold voltage (VT), a decrease in contact resistance (RC), and an extended operational lifetime that did not degrade after 2000 hours of operation in air.
Wilczek, Rajmund; Swiątkowski, Maciej; Czepiel, Aleksandra; Sterliński, Maciej; Makowska, Ewa; Kułakowski, Piotr
2011-01-01
We report a case of successful implantation of an additional defibrillation lead into the coronary sinus due to high defibrillation threshold (DFT) in a seriously ill patient with a history of extensive myocardial infarction referred for implantable cardioverter-defibrillator implantation after an episode of unstable ventricular tachycardia. All previous attempts to reduce DFT, including subcutaneous electrode implantation, had been unsuccessful.
New developments in adaptive methods for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Oden, J. T.; Bass, Jon M.
1990-01-01
New developments in a posteriori error estimates, smart algorithms, and h- and h-p adaptive finite element methods are discussed in the context of two- and three-dimensional compressible and incompressible flow simulations. Applications to rotor-stator interaction, rotorcraft aerodynamics, shock and viscous boundary layer interaction and fluid-structure interaction problems are discussed.
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
A Conditional Exposure Control Method for Multidimensional Adaptive Testing
ERIC Educational Resources Information Center
Finkelman, Matthew; Nering, Michael L.; Roussos, Louis A.
2009-01-01
In computerized adaptive testing (CAT), ensuring the security of test items is a crucial practical consideration. A common approach to reducing item theft is to define maximum item exposure rates, i.e., to limit the proportion of examinees to whom a given item can be administered. Numerous methods for controlling exposure rates have been proposed…
Threshold estimation in two-alternative forced-choice (2AFC) tasks: the Spearman-Kärber method.
Ulrich, Rolf; Miller, Jeff
2004-04-01
The Spearman-Kärber method can be used to estimate the threshold value or difference limen in two-alternative forced-choice tasks. This method yields a simple estimator for the difference limen and its standard error, so that both can be calculated with a pocket calculator. In contrast to previous estimators, the present approach does not require any assumptions about the shape of the true underlying psychometric function. The performance of this new nonparametric estimator is compared with the standard technique of probit analysis. The Spearman-Kärber method appears to be a valuable addition to the toolbox of psychophysical methods, because it is most accurate for estimating the mean (i.e., absolute and difference thresholds) and dispersion of the psychometric function, although it is not optimal for estimating percentile-based parameters of this function.
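The classic yes/no form of the estimator really is pocket-calculator simple: the mean is a weighted average of mid-levels, with weights equal to the jumps in response proportion (for 2AFC data one first rescales the proportion correct p to q = 2p - 1; see the paper for that variant). A sketch under the assumption that the proportions are monotone with endpoints near 0 and 1:

```python
def spearman_karber_mean(levels, p):
    """Classic Spearman-Karber estimate of the psychometric-function mean.
    Assumes p is non-decreasing with p[0] ~ 0 and p[-1] ~ 1."""
    return sum((p[i + 1] - p[i]) * (levels[i] + levels[i + 1]) / 2.0
               for i in range(len(levels) - 1))

levels = [1.0, 2.0, 3.0, 4.0, 5.0]    # stimulus intensities
p = [0.0, 0.1, 0.5, 0.9, 1.0]         # hypothetical response proportions
mean = spearman_karber_mean(levels, p)  # symmetric data -> mean of 3.0
```

No assumption about the shape of the psychometric function enters the formula, which is the point of the method.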
NASA Astrophysics Data System (ADS)
Grossman, Zvi; Paul, William E.
1992-11-01
A major challenge for immunologists is to explain how the immune system adjusts its responses to the microenvironmental context in which antigens are recognized. We propose that lymphocytes achieve this by tuning and updating their responsiveness to recurrent signals. In particular, cellular anergy in vivo is a dynamic state in which the threshold for a stereotypic mode of activation has been elevated. Anergy is associated with other forms of cellular activity, not paralysis. Cells engaged in such subthreshold interactions mediate functions such as maintenance of immunological memory and control of infections. In such interactions, patterns of signals are recognized and classified and evoke selective responses. The robust mechanism proposed for segregation of suprathreshold and subthreshold immune responses allows lymphocytes to use recognition of self-antigens in executing physiological functions. Autoreactivity is allowed where it is dissociated from uncontrolled aggression.
Sperling, Milena P. R.; Simões, Rodrigo P.; Caruso, Flávia C. R.; Mendes, Renata G.; Arena, Ross; Borghi-Silva, Audrey
2016-01-01
Background: Recent studies have shown that the magnitude of the metabolic and autonomic responses during progressive resistance exercise (PRE) is associated with the determination of the anaerobic threshold (AT). AT is an important parameter to determine intensity in dynamic exercise. Objectives: To investigate the metabolic and cardiac autonomic responses during dynamic resistance exercise in patients with Coronary Artery Disease (CAD). Method: Twenty men (age = 63±7 years) with CAD [Left Ventricular Ejection Fraction (LVEF) = 60±10%] underwent a PRE protocol on a leg press until maximal exertion. The protocol began at 10% of the One Repetition Maximum Test (1-RM), with subsequent increases of 10% until maximal exhaustion. Heart Rate Variability (HRV) indices from Poincaré plots (SD1, SD2, SD1/SD2) and the time domain (rMSSD and RMSM), and blood lactate, were determined at rest and during PRE. Results: Significant alterations in HRV and blood lactate were observed starting at 30% of 1-RM (p<0.05). Bland-Altman plots revealed consistent agreement between the blood lactate threshold (LT) and the rMSSD threshold (rMSSDT), and between LT and the SD1 threshold (SD1T). Relative values of 1-RM at LT, rMSSDT and SD1T did not differ (29%±5 vs 28%±5 vs 29%±5 Kg, respectively). Conclusion: HRV during PRE could be a feasible noninvasive method of determining AT in CAD patients to plan intensities during cardiac rehabilitation. PMID:27556384
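The Poincaré indices used here derive from successive RR-interval pairs: SD1 is the dispersion of (RR(n+1) - RR(n))/√2 across the identity line and SD2 the dispersion of (RR(n+1) + RR(n))/√2 along it. A minimal sketch on hypothetical RR data:

```python
import math
import statistics

def poincare_sd1_sd2(rr):
    """SD1/SD2 from a Poincare plot of successive RR intervals (ms):
    SD1 = spread across the identity line, SD2 = spread along it."""
    diffs = [(b - a) / math.sqrt(2) for a, b in zip(rr, rr[1:])]
    sums = [(a + b) / math.sqrt(2) for a, b in zip(rr, rr[1:])]
    return statistics.pstdev(diffs), statistics.pstdev(sums)

# Hypothetical short RR-interval series (ms)
sd1, sd2 = poincare_sd1_sd2([800.0, 810.0, 790.0, 805.0, 795.0])
```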
Adaptive reconnection-based arbitrary Lagrangian Eulerian method
Bo, Wurigen; Shashkov, Mikhail
2015-07-21
We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.
Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.
2012-01-01
We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas
2015-04-01
One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using approaches that can be grouped into three basic classes: a) non-parametric methods that locate the changing point between the extreme and non-extreme regions of the data, b) graphical methods, where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and c) Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u above which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General
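Once a threshold u is chosen, fitting the GPD to the excesses is routine. A minimal method-of-moments sketch (an illustration only, not one of the selection methods reviewed above): with m and v the mean and variance of the excesses, ξ = (1 - m²/v)/2 and σ = m(1 - ξ).

```python
import statistics

def gpd_moment_fit(data, u):
    """Method-of-moments GPD fit to the excesses over threshold u.
    Returns (shape xi, scale sigma); the moment estimator assumes xi < 1/2."""
    excesses = [x - u for x in data if x > u]
    m = statistics.mean(excesses)
    v = statistics.pvariance(excesses)
    xi = 0.5 * (1.0 - m * m / v)   # from m^2 / v = 1 - 2*xi
    sigma = m * (1.0 - xi)         # from m = sigma / (1 - xi)
    return xi, sigma
```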
Method and system for environmentally adaptive fault tolerant computing
NASA Technical Reports Server (NTRS)
Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)
2010-01-01
A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
Workshop on adaptive grid methods for fusion plasmas
Wiley, J.C.
1995-07-01
The author describes a general 'hp' finite element method with adaptive grids. The code was based on the work of Oden et al. The term 'hp' refers to the method of spatial refinement (h) in conjunction with the order of the polynomials used as part of the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.
ICASE/LaRC Workshop on Adaptive Grid Methods
NASA Technical Reports Server (NTRS)
South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)
1995-01-01
Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.
Singh, Amritpal; Saini, Barjinder Singh; Singh, Dilbag
2016-06-01
Multiscale approximate entropy (MAE) is used to quantify the complexity of a time series as a function of time scale τ. The approximate entropy (ApEn) tolerance threshold 'r' is selected either: (1) arbitrarily in the recommended range (0.1-0.25) times the standard deviation of the time series; (2) by finding the maximum ApEn (ApEnmax), i.e., the point where self-matches start to prevail over other matches, and choosing the corresponding 'r' (rmax) as the threshold; or (3) by computing rchon, empirically finding the relation between rmax, the SD1/SD2 ratio, and N using curve fitting, where SD1 and SD2 are the short-term and long-term variability of the time series, respectively. None of these methods is a gold standard for the selection of 'r'. In our previous study [1], an adaptive procedure for the selection of 'r' was proposed for approximate entropy (ApEn). In this paper, this is extended to multiple time scales using MAEbin and multiscale cross-MAEbin (XMAEbin). We applied this to simulations, i.e. 50 realizations (n = 50) of random number series, fractional Brownian motion (fBm) and MIX(P) [1] series of data length N = 300, and to short-term recordings of HRV and SBPV performed under postural stress from supine to standing. MAEbin and XMAEbin analysis was performed on laboratory-recorded data of 50 healthy young subjects experiencing postural stress from supine to upright. The study showed that (i) ApEnbin of HRV is higher than that of SBPV in the supine position but lower than that of SBPV in the upright position; (ii) ApEnbin of HRV decreases from supine, i.e. 1.7324 ± 0.112 (mean ± SD), to upright, 1.4916 ± 0.108, due to vagal inhibition; (iii) ApEnbin of SBPV increases from supine, i.e. 1.5535 ± 0.098, to upright, i.e. 1.6241 ± 0.101, due to sympathetic activation; (iv) individual and cross complexities of RRi and systolic blood pressure (SBP) series depend on the time scale under consideration; (v) XMAEbin calculated using ApEnmax is correlated with cross-MAE calculated using ApEn (0.1-0.26) in steps of 0
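For reference, the quantity whose tolerance r is being tuned can be sketched directly from its definition (plain ApEn at a single scale; the paper's MAEbin adds coarse-graining over scales τ and the adaptive r selection):

```python
import math

def apen(x, m=2, r=0.2):
    """Approximate entropy: phi(m) - phi(m+1), where phi averages the log
    fraction of length-m templates matching within tolerance r (Chebyshev)."""
    def phi(mm):
        n = len(x) - mm + 1
        total = 0.0
        for i in range(n):
            # count templates within r of template i (self-match included)
            c = sum(1 for j in range(n)
                    if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r)
            total += math.log(c / n)
        return total / n
    return phi(m) - phi(m + 1)

regular = apen([1.0] * 20)  # a perfectly regular series gives 0.0
```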
Free energy calculations: an efficient adaptive biasing potential method.
Dickson, Bradley M; Legoll, Frédéric; Lelièvre, Tony; Stoltz, Gabriel; Fleurat-Lessard, Paul
2010-05-06
We develop an efficient sampling and free energy calculation technique within the adaptive biasing potential (ABP) framework. By mollifying the density of states we obtain an approximate free energy and an adaptive bias potential that is computed directly from the population along the coordinates of the free energy. Because of the mollifier, the bias potential is "nonlocal", and its gradient admits a simple analytic expression. A single observation of the reaction coordinate can thus be used to update the approximate free energy at every point within a neighborhood of the observation. This greatly reduces the equilibration time of the adaptive bias potential. This approximation introduces two parameters: the strength of mollification and the zero of energy of the bias potential. While we observe that the approximate free energy is a very good estimate of the actual free energy for a large range of mollification strengths, we demonstrate that the errors associated with the mollification may be removed via deconvolution. The zero of energy of the bias potential, which is easy to choose, influences the speed of convergence but not the limiting accuracy. This method is simple to apply to free energy or mean force computation in multiple dimensions and does not involve second derivatives of the reaction coordinates, matrix manipulations, or on-the-fly adaptation of parameters. For the alanine dipeptide test case, the new method is found to gain as much as a factor of 10 in efficiency compared to two basic implementations of the adaptive biasing force methods, and it is shown to be as efficient as well-tempered metadynamics, with the postprocess deconvolution giving a clear advantage to the mollified density of states method.
An Adaptive Cross-Architecture Combination Method for Graph Traversal
You, Yang; Song, Shuaiwen; Kerbyson, Darren J.
2014-06-18
Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
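The top-down/bottom-up combination can be sketched compactly; here the switch uses a simple frontier-size heuristic (a fixed fraction alpha of the vertices) rather than the regression model the authors propose:

```python
def hybrid_bfs(adj, src, alpha=0.05):
    """Combined BFS on an adjacency list: expand the frontier top-down while
    it is small, and sweep unvisited vertices bottom-up once it grows past
    alpha * |V|. Returns BFS levels (-1 for unreachable vertices)."""
    n = len(adj)
    dist = [-1] * n
    dist[src] = 0
    frontier, level = [src], 0
    while frontier:
        level += 1
        nxt = []
        if len(frontier) > alpha * n:
            # bottom-up: each unvisited vertex looks for a parent in the frontier
            for v in range(n):
                if dist[v] == -1 and any(dist[u] == level - 1 for u in adj[v]):
                    dist[v] = level
                    nxt.append(v)
        else:
            # top-down: each frontier vertex pushes to its unvisited neighbors
            for u in frontier:
                for v in adj[u]:
                    if dist[v] == -1:
                        dist[v] = level
                        nxt.append(v)
        frontier = nxt
    return dist

# 4-cycle 0-1-3-2-0: levels from vertex 0 are [0, 1, 1, 2]
levels = hybrid_bfs([[1, 2], [0, 3], [0, 3], [1, 2]], 0)
```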
NASA Technical Reports Server (NTRS)
Smith, Stephen W.; Seshadri, Banavara R.; Newman, John A.
2015-01-01
The experimental methods to determine near-threshold fatigue crack growth rate data are prescribed in ASTM standard E647. To produce near-threshold data at a constant stress ratio (R), the applied stress-intensity factor (K) is decreased as the crack grows, based on a specified K-gradient. Consequently, as the fatigue crack growth rate threshold is approached and the crack tip opening displacement decreases, remote crack wake contact may occur due to the plastically deformed crack wake surfaces and shield the growing crack tip, resulting in a reduced crack tip driving force and non-representative crack growth rate data. If such data are used to establish the life of a component, the evaluation could yield highly non-conservative predictions. Although this anomalous behavior has been shown to be affected by K-gradient, starting K level, residual stresses, environmentally assisted cracking, specimen geometry, and material type, the specifications within the standard to avoid this effect are limited to a maximum fatigue crack growth rate and a suggestion for the K-gradient value. This paper provides parallel experimental and computational simulations of the K-decreasing method for two materials (an aluminum alloy, AA 2024-T3, and a titanium alloy, Ti 6-2-2-2-2) to aid in establishing a clear understanding of appropriate testing requirements. These simulations investigate the effect of K-gradient, the maximum value of stress-intensity factor applied, and material type. A material-independent term is developed to guide the selection of appropriate test conditions for most engineering alloys. With the use of such a term, near-threshold fatigue crack growth rate tests can be performed at accelerated rates, near-threshold data can be acquired in days instead of weeks without having to establish testing criteria through trial and error, and these data can be acquired for most engineering materials, even those that are produced in relatively small product forms.
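The K-decreasing schedule itself follows the exponential form given in ASTM E647, K = K0·exp[C(a - a0)], where C = (1/K)(dK/da) is the normalized K-gradient (negative for load shedding). A minimal sketch with hypothetical test values:

```python
import math

def k_applied(k0, c, a0, a):
    """K-decreasing schedule per ASTM E647: K = K0 * exp(C * (a - a0)),
    where C = (1/K) dK/da is the normalized K-gradient (C < 0 sheds load)."""
    return k0 * math.exp(c * (a - a0))

# Hypothetical test: start at K0 = 10 MPa*sqrt(m) and shed load with
# C = -0.08 / mm over 5 mm of crack growth
k = k_applied(10.0, -0.08, 0.0, 5.0)  # 10 * exp(-0.4), about 6.70
```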
Adaptive Kaczmarz Method for Image Reconstruction in Electrical Impedance Tomography
Li, Taoran; Kao, Tzu-Jen; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.
2013-01-01
We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term J^T J, which could be expensive in terms of computation cost and memory in large-scale problems, we propose solving the inverse problem by applying the optimal current patterns for distinguishing the actual conductivity from the conductivity estimate between each iteration of the block Kaczmarz algorithm. With a novel subset scheme, the memory-efficient reconstruction algorithm, which appropriately combines the optimal current pattern generation with the Kaczmarz method, can produce more accurate and stable solutions adaptively as compared to traditional Kaczmarz and Gauss-Newton type methods. Choices of initial current pattern estimates are discussed in the paper. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results. PMID:23718952
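The classical (non-adaptive) Kaczmarz iteration that underlies the method projects the current estimate onto one measurement hyperplane at a time, which is what lets it avoid forming or inverting J^T J. A minimal dense sketch for a consistent system Ax = b:

```python
def kaczmarz(A, b, sweeps=100):
    """Classical Kaczmarz: cyclically project the estimate onto each
    hyperplane  row . x = b_i  of a consistent system A x = b."""
    x = [0.0] * len(A[0])
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            residual = bi - sum(r, xi_prod := 0) if False else bi - sum(r * xi for r, xi in zip(row, x))
            step = residual / sum(r * r for r in row)
            x = [xi + step * r for xi, r in zip(x, row)]
    return x

# Converges to the solution (1, 3) of  2x + y = 5,  x + 3y = 10
x = kaczmarz([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```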
Final Report: Symposium on Adaptive Methods for Partial Differential Equations
Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron
1998-12-08
Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.
Ferguson, Karen J.; Chappell, Francesca M.; Wardlaw, Joanna M.
2010-01-01
Objective Brain tissue segmentation by conventional threshold-based techniques may have limited accuracy and repeatability in older subjects. We present a new multispectral magnetic resonance (MR) image analysis approach for segmenting normal and abnormal brain tissue, including white matter lesions (WMLs). Methods We modulated two 1.5T MR sequences in the red/green colour space and calculated the tissue volumes using minimum variance quantisation. We tested it on 14 subjects, mean age 73.3 ± 10 years, representing the full range of WMLs and atrophy. We compared the results of WML segmentation with those using FLAIR-derived thresholds, examined the effect of sampling location, WML amount and field inhomogeneities, and tested observer reliability and accuracy. Results FLAIR-derived thresholds were significantly affected by the location used to derive the threshold (P = 0.0004) and by WML volume (P = 0.0003), and had higher intra-rater variability than the multispectral technique (mean difference ± SD: 759 ± 733 versus 69 ± 326 voxels respectively). The multispectral technique misclassified 16 times fewer WMLs. Conclusion Initial testing suggests that the multispectral technique is highly reproducible and accurate with the potential to be applied to routinely collected clinical MRI data. Electronic supplementary material The online version of this article (doi:10.1007/s00330-010-1718-6) contains supplementary material, which is available to authorized users. PMID:20157814
Adaptive Set-Based Methods for Association Testing.
Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo
2016-02-01
With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test.
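The ARTP idea can be illustrated with a small permutation sketch: score each candidate truncation point k by a permutation p-value and keep the best. The truncation points and permutation count below are illustrative, and the published ARTP adds a second permutation layer to calibrate the minimum over k, which is omitted here:

```python
import math, random

def rtp_stat(pvals, k):
    """Rank truncated product statistic: sum of -log of the k smallest
    p-values (equivalent to -log of their product)."""
    return sum(-math.log(p) for p in sorted(pvals)[:k])

def adaptive_rtp(pvals, ks=(1, 5, 10), n_perm=200, rng=None):
    """ARTP-style sketch: permutation p-value per truncation point k
    (here against uniform nulls), then take the best over k."""
    rng = rng or random.Random(0)
    n = len(pvals)
    per_k = []
    for k in ks:
        obs = rtp_stat(pvals, k)
        null = [rtp_stat([rng.random() for _ in range(n)], k)
                for _ in range(n_perm)]
        per_k.append(sum(s >= obs for s in null) / n_perm)
    return min(per_k)

# Strong signal in 5 of 20 SNP p-values
p_adaptive = adaptive_rtp([0.001] * 5 + [0.5] * 15)
```

Because the minimum over k is itself selected, reporting it without the second calibration layer would be anti-conservative, which is why the full ARTP re-permutes.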
Advanced numerical methods in mesh generation and mesh adaptation
Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A
2010-01-01
Numerical solution of partial differential equations requires appropriate meshes, efficient solvers, and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to build simplicial meshes efficiently. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an inaccessible CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology, and significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
NASA Technical Reports Server (NTRS)
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks.
NASA Astrophysics Data System (ADS)
Tan, Kok Liang; Tanaka, Toshiyuki; Nakamura, Hidetoshi; Shirahata, Toru; Sugiura, Hiroaki
Chronic Obstructive Pulmonary Disease is a disease in which the airways and tiny air sacs (alveoli) inside the lung are partially obstructed or destroyed. Emphysema is what occurs as more and more of the walls between air sacs are destroyed. The goal of this paper is to produce a more practical emphysema-quantification algorithm that has higher correlation with the parameters of pulmonary function tests compared to classical methods. The use of thresholds in the range from approximately -900 Hounsfield Units to -990 Hounsfield Units for extracting emphysema from CT has been reported in many papers. From our experiments, we realize that a threshold which is optimal for a particular CT data set might not be optimal for other CT data sets due to the subtle radiographic variations in the CT images. Consequently, we propose a multi-threshold method that utilizes ten thresholds between and including -900 Hounsfield Units and -990 Hounsfield Units for identifying the different potential emphysematous regions in the lung. Subsequently, we divide the lung into eight sub-volumes. From each sub-volume, we calculate the ratio of the voxels with intensity below a certain threshold. The respective ratios of the voxels below the ten thresholds are employed as the features for classifying the sub-volumes into four emphysema severity classes. A neural network is used as the classifier. The neural network is trained using 80 training sub-volumes. The performance of the classifier is assessed by classifying 248 test sub-volumes of the lung obtained from 31 subjects. Actual diagnoses of the sub-volumes are hand-annotated and consensus-classified by radiologists. The four-class classification accuracy of the proposed method is 89.82%. The sub-volumetric classification results produced in this study encompass not only the information of emphysema severity but also the distribution of emphysema severity from the top to the bottom of the lung. We hypothesize that besides emphysema severity, the
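The multi-threshold feature extraction step can be sketched as follows (toy HU values for illustration; in the paper these ratios feed a neural-network classifier):

```python
def emphysema_features(hu_values, thresholds=tuple(range(-990, -899, 10))):
    """Feature vector for one lung sub-volume: the fraction of voxels
    whose intensity falls below each of ten thresholds between
    -990 HU and -900 HU (inclusive)."""
    n = len(hu_values)
    return [sum(v < t for v in hu_values) / n for t in thresholds]

# Toy sub-volume: half near air density (-1000 HU), half normal lung (-850 HU)
sub = [-1000] * 50 + [-850] * 50
feats = emphysema_features(sub)
```

Using all ten ratios sidesteps the choice of a single "optimal" threshold, which the abstract notes varies between CT data sets.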
Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping
2016-01-01
Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from shorter waiting times. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
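The shrinkage-thresholding core of such methods can be sketched as plain ISTA for the lasso problem min ||Ax − b||²/2 + λ||x||₁. This omits the exponential wavelet transform and random shift that EWISTARS adds on top:

```python
def soft_threshold(x, t):
    """Soft-thresholding (shrinkage) operator, the proximal map of t*|.|."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def ista(A, b, lam, n_iter=500):
    """Plain ISTA: gradient step on the data term, then shrinkage.
    Uses a crude Lipschitz bound L = ||A||_F^2 for the step size."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    L = sum(a * a for row in A for a in row)
    for _ in range(n_iter):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft_threshold(xj - gj / L, lam / L) for xj, gj in zip(x, g)]
    return x

# Identity A: the minimizer is soft-thresholding of b directly
x = ista([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.1], lam=1.0)
```

Each iteration is one matrix-vector product pair plus a pointwise shrinkage, which is why wavelet-domain variants of this scheme are cheap per step.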
Methods for prismatic/tetrahedral grid generation and adaptation
NASA Technical Reports Server (NTRS)
Kallinderis, Y.
1995-01-01
The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.
Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.
2008-01-01
This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.
Space-time adaptive numerical methods for geophysical applications.
Castro, C E; Käser, M; Toro, E F
2009-11-28
In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems, with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher-order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem, and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen in a locally adaptive way, such that the solution is evolved explicitly in time with an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves, comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed, and a new mesh partition approach is proposed and tested to further reduce computational cost.
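The gain from local time stepping can be illustrated with a per-element CFL bound. This is a simplified sketch with illustrative numbers; the paper's criterion comes from the scheme's own local stability analysis:

```python
def local_time_steps(dx, wave_speed, cfl=0.9):
    """Per-element stable time steps dt_i = cfl * dx_i / c_i. A global
    time-stepping scheme must advance every element with min(dt_i);
    local time stepping lets coarse/slow elements take larger steps."""
    return [cfl * h / c for h, c in zip(dx, wave_speed)]

dx = [0.1, 0.1, 0.01]   # the third element is locally refined
c  = [1.0, 2.0, 1.0]    # local wave speeds
dts = local_time_steps(dx, c)
dt_global = min(dts)    # what a classical global scheme would use
```

Here a single refined element forces the global step to be ten times smaller than what most of the mesh needs, which is exactly the cost that locally adaptive stepping avoids.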
Developing new online calibration methods for multidimensional computerized adaptive testing.
Chen, Ping; Wang, Chun; Xin, Tao; Chang, Hua-Hua
2017-02-01
Multidimensional computerized adaptive testing (MCAT) has received increasing attention over the past few years in educational measurement. Like all other formats of CAT, item replenishment is an essential part of MCAT for its item bank maintenance and management, which governs retiring overexposed or obsolete items over time and replacing them with new ones. Moreover, calibration precision of the new items will directly affect the estimation accuracy of examinees' ability vectors. In unidimensional CAT (UCAT) and cognitive diagnostic CAT, online calibration techniques have been developed to effectively calibrate new items. However, there has been very little discussion of online calibration in MCAT in the literature. Thus, this paper proposes new online calibration methods for MCAT based upon some popular methods used in UCAT. Three representative methods, Method A, the 'one EM cycle' method and the 'multiple EM cycles' method, are generalized to MCAT. Three simulation studies were conducted to compare the three new methods by manipulating three factors (test length, item bank design, and level of correlation between coordinate dimensions). The results showed that all the new methods were able to recover the item parameters accurately, and the adaptive online calibration designs showed some improvements compared to the random design under most conditions.
A simplified self-adaptive grid method, SAGE
NASA Technical Reports Server (NTRS)
Davies, C.; Venkatapathy, E.
1989-01-01
The formulation of the Self-Adaptive Grid Evolution (SAGE) code, based on the work of Nakahashi and Deiwert, is described in the first section of this document. The second section is presented in the form of a user guide which explains the input and execution of the code, and provides many examples. Application of the SAGE code, by Ames Research Center and by others, in the solution of various flow problems has been an indication of the code's general utility and success. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for single, zonal, and multiple grids. Modifications to the methodology and the simplified input options make this current version a flexible and user-friendly code.
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector is analyzed, which is based on classical or fast projection algorithms and estimates the background using median filtering or the method of bilateral spatial contrast.
NASA Astrophysics Data System (ADS)
Wilson, Mark; Mitra, Sunanda; Roberson, Glenn H.; Shieh, Yao-Yang
1997-10-01
Currently early detection of breast cancer is primarily accomplished by mammography, and suspicious findings may lead to a decision to perform a biopsy. Digital enhancement and pattern recognition techniques may aid in early detection of some patterns such as microcalcification clusters indicating onset of DCIS (ductal carcinoma in situ), which accounts for 20% of all mammographically detected breast cancers and can be treated when detected early. These individual calcifications are hard to detect due to size and shape variability and inhomogeneous background texture. Our study addresses only early detection of microcalcifications, allowing the radiologist to interpret the x-ray findings in a computer-aided, enhanced form more easily than evaluating the x-ray film directly. We present an algorithm which locates microcalcifications based on local grayscale variability, tissue structure, and image statistics. Threshold filters with lower and upper bounds computed from the image statistics of the entire image and selected subimages were designed to enhance the entire image. This enhanced image was used as the initial image for identifying the microcalcifications based on variable box threshold filters at different resolutions. The test images came from the Texas Tech University Health Sciences Center and the MIAS mammographic database, which are classified into various categories including microcalcifications. Classification of other types of abnormalities in mammograms based on their characteristic features is addressed in later studies.
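A statistics-derived threshold filter of the kind described can be sketched as follows. The mean ± k·std bounds are an illustrative assumption; the paper derives its bounds from both the full image and selected subimages:

```python
def threshold_bounds(pixels, k=3.0):
    """Lower/upper intensity bounds from image statistics: mean +/- k*std.
    A simple stand-in for bounds derived from image and subimage stats."""
    n = len(pixels)
    mu = sum(pixels) / n
    var = sum((p - mu) ** 2 for p in pixels) / n
    sd = var ** 0.5
    return mu - k * sd, mu + k * sd

def bright_spots(pixels, k=3.0):
    """Flag pixels above the upper bound as candidate microcalcifications."""
    _, hi = threshold_bounds(pixels, k)
    return [i for i, p in enumerate(pixels) if p > hi]

# Toy "image": uniform background with one bright outlier pixel
pixels = [100] * 99 + [255]
spots = bright_spots(pixels)
```

Running the same bounds at several box sizes (resolutions) is what lets the variable box filters pick up calcifications of different scales.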
NASA Astrophysics Data System (ADS)
Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen
2016-11-01
To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method in this paper. Firstly, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop our proposed iterative solutions for AO image restoration, addressing the joint deconvolution issue. Image restoration experiments are performed to verify the restoration effect of our proposed algorithm. The experimental results show that, compared with the RL-IBD algorithm and the Wiener-IBD algorithm, the GMG measures (for a real AO image) from our algorithm are increased by 36.92% and 27.44% respectively, the computation times are decreased by 7.2% and 3.4% respectively, and the estimation accuracy is significantly improved.
Grid adaptation and remapping for arbitrary lagrangian eulerian (ALE) methods
Lapenta, G. M.
2002-01-01
Methods to include automatic grid adaptation tools within the Arbitrary Lagrangian Eulerian (ALE) method are described. Two main developments are presented. First, a new grid adaptation approach is described, based on an automatic and accurate estimate of the local truncation error. Second, a new method to remap the information between two grids is presented, based on the MPDATA approach. The Arbitrary Lagrangian Eulerian (ALE) method solves hyperbolic equations by splitting the operators into two phases. First, in the Lagrangian phase, the equations under consideration are written in a Lagrangian frame and are discretized. In this phase, the grid moves with the solution, the velocity of each node being the local fluid velocity. Second, in the Eulerian phase, a new grid is generated and the information is transferred to the new grid. The advantage of this second step is the possibility of avoiding the mesh distortion and tangling typical of pure Lagrangian methods. The second phase of the ALE method is the primary topic of the present communication. In the Eulerian phase two tasks need to be completed. First, a new grid needs to be created (we will refer to this task as rezoning). Second, the information is transferred from the grid available at the end of the Lagrangian phase to the new grid (we will refer to this task as remapping). New techniques are presented for the two tasks of the Eulerian phase: rezoning and remapping.
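The conservation requirement of the remapping step can be illustrated with a first-order overlap-based remap in 1-D. This is far simpler than the MPDATA-based remap the record describes, but it shares the defining property that total mass is preserved exactly:

```python
def conservative_remap(old_edges, old_vals, new_edges):
    """First-order conservative remap of cell averages between two 1-D
    grids: each new cell collects mass from old cells in proportion to
    their geometric overlap, so total mass is preserved exactly."""
    new_vals = []
    for i in range(len(new_edges) - 1):
        lo, hi = new_edges[i], new_edges[i + 1]
        mass = 0.0
        for j in range(len(old_edges) - 1):
            a, b = old_edges[j], old_edges[j + 1]
            overlap = max(0.0, min(hi, b) - max(lo, a))
            mass += old_vals[j] * overlap
        new_vals.append(mass / (hi - lo))
    return new_vals

# Remap cell averages [1, 3] from grid {0,1,2} onto grid {0,0.5,2}
old_edges, old_vals = [0.0, 1.0, 2.0], [1.0, 3.0]
new_edges = [0.0, 0.5, 2.0]
new_vals = conservative_remap(old_edges, old_vals, new_edges)
```

Higher-order remaps such as MPDATA reduce the numerical diffusion of this donor-cell approach while keeping the same conservation property.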
A novel adaptive force control method for IPMC manipulation
NASA Astrophysics Data System (ADS)
Hao, Lina; Sun, Zhiyong; Li, Zhi; Su, Yunquan; Gao, Jianchao
2012-07-01
IPMC is a type of electro-active polymer material, also called artificial muscle, which can generate a relatively large deformation under a relatively low input voltage (generally speaking, less than 5 V) and can operate in a water environment. Due to these advantages, IPMC can be used in many fields such as biomimetics, service robots, bio-manipulation, etc. Until now, most existing methods for IPMC manipulation have been based on displacement control rather than direct force control; however, under most conditions the success rate of manipulating tiny fragile objects is limited by the contact force, for example when using an IPMC gripper to fix cells. Like most EAPs, IPMC exhibits a creep phenomenon, in which the generated force changes with time, and the creep model is influenced by changes in water content and other environmental factors, so a proper force control method is urgently needed. This paper presents a novel adaptive force control method (AIPOF control, adaptive integral periodic output feedback control), based on a creep model whose parameters are obtained using the FRLS on-line identification method. The AIPOF control method can achieve an arbitrary pole configuration as long as the plant is controllable and observable. This paper also designs POF and IPOF controllers to compare their test results. Simulation and experiments of micro-force-tracking tests are carried out, with results confirming that the proposed control method is viable.
Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems
NASA Technical Reports Server (NTRS)
Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.
1979-01-01
The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.
A two-dimensional adaptive mesh generation method
NASA Astrophysics Data System (ADS)
Altas, Irfan; Stephenson, John W.
1991-05-01
The present two-dimensional adaptive mesh-generation method allows selective modification of a small portion of the mesh without affecting large areas of adjacent mesh points, and is applicable with or without boundary-fitted coordinate-generation procedures. Discretization of differential equations, both by classical difference formulas designed for uniform meshes and by the present difference formulas, is illustrated through the application of the method to the Hiemenz flow, for which the exact solution of the Navier-Stokes equations is known, as well as to a two-dimensional viscous internal flow problem.
An adaptive penalty method for DIRECT algorithm in engineering optimization
NASA Astrophysics Data System (ADS)
Vilaça, Rita; Rocha, Ana Maria A. C.
2012-09-01
The most common approach for solving constrained optimization problems is based on penalty functions, where the constrained problem is transformed into a sequence of unconstrained problems by penalizing the objective function when constraints are violated. In this paper, we analyze the implementation of an adaptive penalty method, within the DIRECT algorithm, in which the constraints that are more difficult to satisfy will have relatively higher penalty values. In order to assess the applicability and performance of the proposed method, some benchmark problems from engineering design optimization are considered.
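The adaptive penalty idea (larger penalty parameters for constraints that remain violated) can be sketched as follows; the function names and the growth factor are illustrative assumptions, not the paper's specific update rule:

```python
def penalized_objective(f, constraints, penalties):
    """Penalized objective F(x) = f(x) + sum_i mu_i * max(0, g_i(x))**2
    for inequality constraints g_i(x) <= 0, with a separate penalty
    parameter mu_i per constraint."""
    def F(x):
        return f(x) + sum(mu * max(0.0, g(x)) ** 2
                          for mu, g in zip(penalties, constraints))
    return F

def adapt_penalties(penalties, violations, grow=10.0):
    """Adaptive update: raise the penalty of each constraint that is
    still violated; leave satisfied constraints alone."""
    return [mu * grow if v > 0 else mu
            for mu, v in zip(penalties, violations)]

# Minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
F = penalized_objective(lambda x: x * x, [lambda x: 1.0 - x], [100.0])
```

A derivative-free solver such as DIRECT then minimizes F directly, with the per-constraint penalties re-adapted between outer iterations.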
Adaptive Current Control Method for Hybrid Active Power Filter
NASA Astrophysics Data System (ADS)
Chau, Minh Thuyen
2016-09-01
This paper proposes an adaptive current control method for a Hybrid Active Power Filter (HAPF). It consists of a fuzzy-neural controller, an identification and prediction model, and a cost function. The fuzzy-neural controller parameters are adjusted according to the cost-function minimum criterion. For this reason, the proposed control method has an on-line control capability that tracks variations of the load harmonic currents. Compared to the single fuzzy logic control method, the proposed control method shows the advantages of better dynamic response, smaller steady-state compensation error, better on-line control capability, and more effective harmonic cancellation. Simulation and experimental results have demonstrated the effectiveness of the proposed control method.
Parallel, adaptive finite element methods for conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.
1994-01-01
We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
A novel adaptive noise filtering method for SAR images
NASA Astrophysics Data System (ADS)
Li, Weibin; He, Mingyi
2009-08-01
In most applications, a signal or image is corrupted by additive noise. As a result, there are many methods to remove additive noise, while few approaches work well for multiplicative noise. This paper presents an improved MAP-based filter for multiplicative noise using an adaptive-window denoising technique. A Gamma noise model is discussed, and a preprocessing technique to differentiate mature and immature pixels is applied to obtain an accurate estimate of the Equivalent Number of Looks. Adaptive local window growth and three different denoising strategies are applied to smooth noise while preserving subtle information according to local statistical features. Simulation results show that the performance is better than that of existing filters. Several image experiments demonstrate its theoretical performance.
Cutler, Timothy D; Wang, Chong; Hoff, Steven J; Zimmerman, Jeffrey J
2013-04-01
In aerobiology, dose-response studies are used to estimate the risk of infection to a susceptible host presented by exposure to a specific dose of an airborne pathogen. In the research setting, host- and pathogen-specific factors that affect the dose-response continuum can be accounted for by experimental design, but the requirement to precisely determine the dose of infectious pathogen to which the host was exposed is often challenging. By definition, quantification of viable airborne pathogens is based on the culture of micro-organisms, but some airborne pathogens are transmissible at concentrations below the threshold of quantification by culture. In this paper we present an approach to the calculation of exposure dose at microbiologically unquantifiable levels using an application of the "continuous-stirred tank reactor (CSTR) model" and the validation of this approach using rhodamine B dye as a surrogate for aerosolized microbial pathogens in a dynamic aerosol toroid (DAT).
The method of subliminal psychodynamic activation: do individual thresholds make a difference?
Malik, R; Paraherakis, A; Joseph, S; Ladd, H
1996-12-01
The present experiment investigated the effects of subliminal psychodynamic stimuli on anxiety as measured by heart rate. Following an anxiety-inducing task, male and female subjects were tachistoscopically shown, at their subjective thresholds, one of five subliminal stimuli: MOMMY AND I ARE ONE, DADDY AND I ARE ONE (symbiotic messages), MOMMY HAS LEFT ME (abandonment message), I AM HAPPY AND CALM (a positively toned but nonsymbiotic phrase), or MYMMO NAD I REA ENO (control stimulus). It was hypothesized that men would exhibit a greater decrease in heart rate after exposure to the MOMMY stimulus than after the control message. No definitive predictions were made for women. The abandonment phrase was expected to increase heart rate. A positively toned message was included to assess whether its effects would be comparable to those hypothesized for the MOMMY message. The results yielded no significant effects for stimulus or gender and so provided no support for the hypotheses.
Planetary gearbox fault diagnosis using an adaptive stochastic resonance method
NASA Astrophysics Data System (ADS)
Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia
2013-07-01
Planetary gearboxes are widely used in aerospace, automotive and heavy industry applications due to their large transmission ratio, strong load-bearing capacity and high transmission efficiency. The tough operating conditions of heavy duty and intensive impact loads may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include selection of sensitive measurement locations, investigation of vibration transmission paths, and weak feature extraction. One of them is how to effectively discover the weak characteristics of faulty components from noisy planetary gearbox signals. To address this issue, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms and adaptively realizes the optimal stochastic resonance system matching the input signals. Using the ASR method, the noise may be weakened and weak characteristics highlighted, so that faults can be diagnosed accurately. A planetary gearbox test rig is established and experiments with sun gear faults, including a chipped tooth and a missing tooth, are conducted. The vibration signals are collected under loaded conditions and various motor speeds. The proposed method is used to process the collected signals, and the results of feature extraction and fault diagnosis demonstrate its effectiveness.
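A minimal sketch of the stochastic resonance stage is shown below, assuming the classic bistable system dx/dt = a·x - b·x³ + s(t); a small grid search over (a, b) stands in for the paper's ant colony optimization, and the signal parameters are illustrative, not the test-rig data.

```python
import numpy as np

def bistable_sr(signal, dt, a, b):
    """Euler integration of the bistable SR system dx/dt = a*x - b*x^3 + s(t)."""
    x = np.zeros(len(signal))
    for i in range(1, len(signal)):
        x[i] = x[i-1] + dt * (a * x[i-1] - b * x[i-1] ** 3 + signal[i-1])
    return x

def output_snr(x, dt, f0):
    """Crude SNR: spectral power at the driving frequency over mean background power."""
    X = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), dt)
    k = np.argmin(np.abs(freqs - f0))
    background = (np.sum(X[1:]) - X[k]) / (len(X) - 2)
    return X[k] / background

rng = np.random.default_rng(0)
dt, f0, n = 0.01, 0.1, 20000
t = np.arange(n) * dt
weak = 0.3 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 1.0, n)  # weak tone in noise

# Stand-in for the ant-colony search: pick (a, b) maximizing output SNR on a grid.
best = max(((a, b) for a in (0.5, 1.0, 2.0) for b in (0.5, 1.0, 2.0)),
           key=lambda ab: output_snr(bistable_sr(weak, dt, *ab), dt, f0))
print("best (a, b):", best)
```

The adaptively selected system parameters are those for which the weak periodic component is most strongly expressed in the output spectrum.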
Adaptation of fast marching methods to intracellular signaling
NASA Astrophysics Data System (ADS)
Chikando, Aristide C.; Kinser, Jason M.
2006-02-01
Imaging of signaling phenomena within the intracellular domain is a well-studied field. Signaling is the process by which all living cells communicate with their environment and with each other. In the case of signaling calcium waves, numerous computational models based on solving homogeneous reaction diffusion equations have been developed. Typically, the reaction diffusion approach consists of solving systems of partial differential equations at each update step. The traditional methods used to solve these reaction diffusion equations are very computationally expensive, since they must employ small time steps in order to reduce the computational error. The presented research suggests the application of fast marching methods to imaging signaling calcium waves, more specifically fertilization calcium waves, in Xenopus laevis eggs. The fast marching approach provides a fast and efficient means of tracking the evolution of monotonically advancing fronts. A model is presented that employs biophysical properties of intracellular calcium signaling and adapts fast marching methods to tracking the propagation of signaling calcium waves. The developed model is used to reproduce simulation results obtained with a reaction diffusion based model. Results obtained with our model agree with both the results obtained with reaction diffusion based models and confocal microscopy observations during in vivo experiments. The adaptation of fast marching methods to intracellular protein or macromolecule trafficking is also briefly explored.
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering
NASA Astrophysics Data System (ADS)
Schaefer, Andreas; Daniell, James; Wenzel, Friedemann
2016-04-01
Earthquake declustering is an essential part of almost any statistical analysis of the spatial and temporal properties of seismic activity, with common applications including probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its spatial variation. Various methods have been developed to address this issue, ranging in complexity from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. An adaptive search algorithm for data point clusters is adopted, using the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focussing on a strong correlation along the rupture plane, and the search space is adjusted with respect to these directional properties. In the case of rapid subsequent ruptures, like the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure is applied to disassemble subsequent ruptures that may have been grouped into an individual cluster, using near-field searches, support vector machines and temporal splitting. The steering parameters of the search behaviour are linked to local earthquake properties such as magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand. As a result of the cluster identification process, each event in
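The cluster identification step can be illustrated with a deliberately simplified spatio-temporal linkage, assuming fixed search windows; the actual SCM adapts these windows to local earthquake density, rupture-plane anisotropy and magnitude of completeness, none of which is modeled in this sketch.

```python
import numpy as np

def link_clusters(events, d_km=50.0, t_days=30.0):
    """Greedy spatio-temporal single-linkage: an event joins the cluster of any
    earlier event closer than (d_km, t_days).  events: rows of (t_day, x_km, y_km)."""
    labels = -np.ones(len(events), dtype=int)
    next_label = 0
    for i, (t, x, y) in enumerate(events):
        for j in range(i):
            tj, xj, yj = events[j]
            if abs(t - tj) <= t_days and np.hypot(x - xj, y - yj) <= d_km:
                labels[i] = labels[j]
                break
        if labels[i] < 0:          # no neighbour found: start a new cluster
            labels[i] = next_label
            next_label += 1
    return labels

# Two well-separated bursts of seismicity plus one isolated event.
events = np.array([
    [0.0,     0.0,   0.0],   # cluster 0
    [2.0,    10.0,   5.0],   # cluster 0
    [3.0,    -8.0,  12.0],   # cluster 0
    [200.0, 500.0, 500.0],   # cluster 1
    [205.0, 510.0, 495.0],   # cluster 1
    [400.0,   0.0,   0.0],   # isolated -> its own cluster
])
print(link_clusters(events))  # → [0 0 0 1 1 2]
```

Declustering then amounts to keeping one representative (e.g. the largest event) per identified cluster.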
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods
Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
A decentralized adaptive robust method for chaos control.
Kobravi, Hamid-Reza; Erfanian, Abbas
2009-09-01
This paper presents a control strategy, which is based on sliding mode control, adaptive control, and fuzzy logic system for controlling the chaotic dynamics. We consider this control paradigm in chaotic systems where the equations of motion are not known. The proposed control strategy is robust against the external noise disturbance and system parameter variations and can be used to convert the chaotic orbits not only to the desired periodic ones but also to any desired chaotic motions. Simulation results of controlling some typical higher order chaotic systems demonstrate the effectiveness of the proposed control method.
Adaptive grid methods for RLV environment assessment and nozzle analysis
NASA Technical Reports Server (NTRS)
Thornburg, Hugh J.
1996-01-01
Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surface, temporally varying geometries, and fluid structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation
Woodbury, C J; Ritter, A M; Koerber, H R
2001-07-30
Adult skin sensory neurons exhibit characteristic projection patterns in the dorsal horn of the spinal gray matter that are tightly correlated with modality. However, little is known about how these patterns come about during the ontogeny of the distinct subclasses of skin sensory neurons. To this end, we have developed an intact ex vivo somatosensory system preparation in neonatal mice, allowing single, physiologically identified cutaneous afferents to be iontophoretically injected with Neurobiotin for subsequent histological analyses. The present report, centered on rapidly adapting mechanoreceptors, represents the first study of the central projections of identified skin sensory neurons in neonatal animals. Cutaneous afferents exhibiting rapidly adapting responses to sustained natural stimuli were encountered as early as recordings were made. Well-stained representatives of coarse (tylotrich and guard) and fine-diameter (down) hair follicle afferents, along with a putative Pacinian corpuscle afferent, were recovered from 2-7-day-old neonates. All were characterized by narrow, uninflected somal action potentials and generally low mechanical thresholds, and many could be activated via deflection of recently erupted hairs. The central collaterals of hair follicle afferents formed recurrent, flame-shaped arbors that were essentially miniaturized replicas of their adult counterparts, with identical laminar terminations. The terminal arbors of down hair afferents, previously undescribed in rodents, were distinct and consistently occupied a more superficial position than tylotrich and guard hair afferents. Nevertheless, the former extended no higher than the middle of the incipient substantia gelatinosa, leaving a clear gap more dorsally. In all major respects, therefore, hair follicle afferents display the same laminar specificity in neonates as they do in adults. The widely held misperception that their collaterals extend exuberant projections into pain
Claxton, Karl; Martin, Steve; Soares, Marta; Rice, Nigel; Spackman, Eldon; Hinde, Sebastian; Devlin, Nancy; Smith, Peter C; Sculpher, Mark
2015-01-01
BACKGROUND Cost-effectiveness analysis involves the comparison of the incremental cost-effectiveness ratio of a new technology, which is more costly than existing alternatives, with the cost-effectiveness threshold. This indicates whether or not the health expected to be gained from its use exceeds the health expected to be lost elsewhere as other health-care activities are displaced. The threshold therefore represents the additional cost that has to be imposed on the system to forgo 1 quality-adjusted life-year (QALY) of health through displacement. There are no empirical estimates of the cost-effectiveness threshold used by the National Institute for Health and Care Excellence. OBJECTIVES (1) To provide a conceptual framework to define the cost-effectiveness threshold and to provide the basis for its empirical estimation. (2) Using programme budgeting data for the English NHS, to estimate the relationship between changes in overall NHS expenditure and changes in mortality. (3) To extend this mortality measure of the health effects of a change in expenditure to life-years and to QALYs by estimating the quality-of-life (QoL) associated with effects on years of life and the additional direct impact on QoL itself. (4) To present the best estimate of the cost-effectiveness threshold for policy purposes. METHODS Earlier econometric analysis estimated the relationship between differences in primary care trust (PCT) spending, across programme budget categories (PBCs), and associated disease-specific mortality. This research is extended in several ways including estimating the impact of marginal increases or decreases in overall NHS expenditure on spending in each of the 23 PBCs. Further stages of work link the econometrics to broader health effects in terms of QALYs. RESULTS The most relevant 'central' threshold is estimated to be £12,936 per QALY (2008 expenditure, 2008-10 mortality). Uncertainty analysis indicates that the probability that the threshold is < £20
Turbulence profiling methods applied to ESO's adaptive optics facility
NASA Astrophysics Data System (ADS)
Valenzuela, Javier; Béchet, Clémentine; Garcia-Rissmann, Aurea; Gonté, Frédéric; Kolb, Johann; Le Louarn, Miska; Neichel, Benoît; Madec, Pierre-Yves; Guesalaga, Andrés
2014-07-01
Two algorithms were recently studied for C2n profiling from wide-field Adaptive Optics (AO) measurements on GeMS (Gemini Multi-Conjugate AO system). They both rely on the Slope Detection and Ranging (SLODAR) approach, using spatial covariances of the measurements issued from various wavefront sensors. The first algorithm estimates the C2n profile by applying the truncated least-squares inverse of a matrix modeling the response of slopes covariances to various turbulent layer heights. In the second method, the profile is estimated by deconvolution of these spatial cross-covariances of slopes. We compare these methods in the new configuration of ESO Adaptive Optics Facility (AOF), a high-order multiple laser system under integration. For this, we use measurements simulated by the AO cluster of ESO. The impact of the measurement noise and of the outer scale of the atmospheric turbulence is analyzed. The important influence of the outer scale on the results leads to the development of a new step for outer scale fitting included in each algorithm. This increases the reliability and robustness of the turbulence strength and profile estimations.
An adaptive stepsize method for the chemical Langevin equation.
Ilie, Silvana; Teslya, Alexandra
2012-05-14
Mathematical and computational modeling are key tools in analyzing important biological processes in cells and living organisms. In particular, stochastic models are essential to accurately describe the cellular dynamics, when the assumption of the thermodynamic limit can no longer be applied. However, stochastic models are computationally much more challenging than the traditional deterministic models. Moreover, many biochemical systems arising in applications have multiple time-scales, which lead to mathematical stiffness. In this paper we investigate the numerical solution of a stochastic continuous model of well-stirred biochemical systems, the chemical Langevin equation. The chemical Langevin equation is a stochastic differential equation with multiplicative, non-commutative noise. We propose an adaptive stepsize algorithm for approximating the solution of models of biochemical systems in the Langevin regime, with small noise, based on estimates of the local error. The underlying numerical method is the Milstein scheme. The proposed adaptive method is tested on several examples arising in applications and it is shown to have improved efficiency and accuracy compared to the existing fixed stepsize schemes.
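A minimal sketch of an adaptive-stepsize Milstein integrator is given below for the scalar test equation dX = μX dt + σX dW (geometric Brownian motion); the step-doubling error estimate, control constants and parameter values are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

mu, sigma = 0.05, 0.2  # illustrative drift and diffusion coefficients

def milstein_step(x, h, dw):
    """One Milstein step for dX = mu*X dt + sigma*X dW (here g'(x)g(x) = sigma^2 * x)."""
    return x + mu * x * h + sigma * x * dw + 0.5 * sigma ** 2 * x * (dw ** 2 - h)

def adaptive_milstein(x0, t_end, h0=0.1, tol=1e-4, seed=1):
    """Step-doubling error control: compare one step of size h against two of h/2
    driven by the same Brownian increments, and shrink/grow h accordingly."""
    rng = np.random.default_rng(seed)
    t, x, h = 0.0, x0, h0
    while t < t_end:
        h = min(h, t_end - t)
        dw1 = rng.normal(0.0, np.sqrt(h / 2))
        dw2 = rng.normal(0.0, np.sqrt(h / 2))
        coarse = milstein_step(x, h, dw1 + dw2)
        fine = milstein_step(milstein_step(x, h / 2, dw1), h / 2, dw2)
        err = abs(coarse - fine)
        if err <= tol or h < 1e-8:
            t, x = t + h, fine
            h *= 1.5 if err < tol / 4 else 1.0
        else:
            # Reject and retry with a smaller step.  NOTE: this sketch redraws the
            # Brownian increments on rejection; a faithful implementation would
            # store and reuse the Brownian path.
            h /= 2.0
    return x

x_end = adaptive_milstein(1.0, 1.0)
print(round(x_end, 4))
```

The step size contracts automatically wherever the local error estimate exceeds the tolerance, which is the mechanism that pays off on stiff, multiple-time-scale biochemical systems.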
Chiu, Chuan-Hung; Wen, Tzai-Hung; Chien, Lung-Chang; Yu, Hwa-Lung
2014-01-01
Understanding the spatial characteristics of dengue fever (DF) incidence is crucial for governmental agencies to implement effective disease control strategies. We investigated the associations between environmental and socioeconomic factors and the geographic distribution of DF, and propose a probabilistic risk assessment approach that uses threshold-based quantile regression to identify the significant risk factors for DF transmission and to estimate the spatial distribution of DF risk in terms of full probability distributions. To interpret risk, the return period was also included to characterize the frequency pattern of DF geographic occurrences. The study area included old Kaohsiung City and Fongshan District, two areas in Taiwan that have been affected by severe DF infections in recent decades. Results indicated that water-related facilities, including canals and ditches, and various types of residential area, as well as the interactions between them, were significant factors that elevated DF risk. By contrast, increases in per capita income and its associated interactions with residential areas mitigated DF risk in the study area. Nonlinear associations between these factors and DF risk were present in various quantiles, implying that water-related factors characterized the underlying spatial patterns of DF, and high-density residential areas indicated the potential for high DF incidence (e.g., clustered infections). The spatial distributions of DF risk were assessed in terms of three distinct map presentations: expected incidence rates, incidence rates at various return periods, and return periods at distinct incidence rates. These probability-based spatial risk maps exhibited distinct DF risks associated with environmental factors, expressed as various DF magnitudes and occurrence probabilities across Kaohsiung, and can serve as a reference for local governmental agencies.
Increment Threshold Functions in Retinopathy of Prematurity
Hansen, Ronald M.; Moskowitz, Anne; Bush, Jennifer N.; Fulton, Anne B.
2016-01-01
Purpose To assess scotopic background adaptation in subjects with a history of preterm birth and retinopathy of prematurity (ROP). Retinopathy of prematurity is known to have long-term effects on rod photoreceptor and rod-mediated postreceptor retinal function. Methods Rod-mediated thresholds for detection of 3° diameter, 50 ms stimuli presented 20° from fixation were measured using a spatial forced-choice method in 36 subjects (aged 9–17 years) with a history of preterm birth and 11 age-similar term-born subjects. Thresholds were measured first in the dark-adapted condition and then in the presence of six steady background lights (−2.8 to +2.0 log scot td). A model of the increment threshold function was fit to each subject's thresholds to estimate the dark-adapted threshold (TDA) and the Eigengrau (A0, the background that elevates threshold 0.3 log unit above TDA). Results In subjects with a history of severe ROP, both TDA and A0 were significantly elevated relative to those in former preterms who never had ROP and term-born control subjects. Subjects who had mild ROP had normal TDA but elevated A0. Neither TDA nor A0 differed significantly between former preterms who never had ROP and term-born controls. Conclusions The results suggest that in severe ROP, threshold is affected at a preadaptation site, possibly the rod outer segment. In mild ROP, changes in the Eigengrau may reflect increased intrinsic noise in the photoreceptor or postreceptor circuitry or both. PMID:27145476
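The increment threshold function referred to above is conventionally written T(A) = TDA·(1 + A/A0), so that a background of A0 raises threshold by a factor of 2 (0.3 log unit) above TDA. A minimal fitting sketch, with synthetic data standing in for the study's measurements and hypothetical parameter values:

```python
import numpy as np

def increment_threshold(A, T_DA, A0):
    """Classic increment-threshold form: T(A) = T_DA * (1 + A / A0).
    T_DA: dark-adapted threshold; A0 ("Eigengrau"): background that raises
    threshold by a factor of 2 (0.3 log unit) above T_DA."""
    return T_DA * (1.0 + A / A0)

def fit(backgrounds, thresholds):
    """Least-squares fit in log threshold via a simple grid search."""
    best, best_err = None, np.inf
    for T_DA in np.logspace(-4, 0, 200):
        for A0 in np.logspace(-3, 3, 200):
            pred = increment_threshold(backgrounds, T_DA, A0)
            err = np.sum((np.log10(pred) - np.log10(thresholds)) ** 2)
            if err < best_err:
                best, best_err = (T_DA, A0), err
    return best

# Synthetic observer: T_DA = 0.01, A0 = 1.0 (arbitrary units), plus log jitter.
rng = np.random.default_rng(3)
A = np.logspace(-2.8, 2.0, 7)   # backgrounds spanning the study's range
T = increment_threshold(A, 0.01, 1.0) * 10 ** rng.normal(0, 0.02, A.size)
T_DA_hat, A0_hat = fit(A, T)
print(round(np.log10(T_DA_hat), 1), round(np.log10(A0_hat), 1))
```

An elevated fitted T_DA shifts the whole function upward, while an elevated A0 shifts only the light-adapted limb, which is how the two parameters separate pre- and post-adaptation sites of dysfunction.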
NASA Astrophysics Data System (ADS)
Tereshchenko, S. A.; Savelyev, M. S.; Podgaetsky, V. M.; Gerasimenko, A. Yu.; Selishchev, S. V.
2016-09-01
A threshold model is described which permits one to determine the properties of limiters for high-powered laser light. It takes into account the threshold characteristics of the nonlinear optical interaction between the laser beam and the limiter working material. The traditional non-threshold model is a particular case of the threshold model when the limiting threshold is zero. The nonlinear characteristics of carbon nanotubes in liquid and solid media are obtained from experimental Z-scan data. Specifically, the nonlinear threshold effect was observed for aqueous dispersions of nanotubes, but not for nanotubes in solid polymethylmethacrylate. The threshold model fits the experimental Z-scan data better than the non-threshold model. Output characteristics were obtained that integrally describe the nonlinear properties of the optical limiters.
The dynamic time-over-threshold method for multi-channel APD based gamma-ray detectors
NASA Astrophysics Data System (ADS)
Orita, T.; Shimazoe, K.; Takahashi, H.
2015-03-01
Recent advances in manufacturing technology have enabled the use of multi-channel pixelated detectors in gamma-ray imaging applications. When obtaining gamma-ray measurements, it is important to obtain pulse height information in order to reject unnecessary events such as scattering. However, as the number of channels increases, more electronics are needed to process each channel's signal, and the corresponding increases in circuit size and power consumption can cause practical problems. The time-over-threshold (ToT) method, which has recently become popular in the medical field, is a signal processing technique that can effectively avoid such problems. However, ToT suffers from poor linearity and its dynamic range is limited. We therefore propose a new ToT technique called the dynamic time-over-threshold (dToT) method [4]. A new signal processing system using dToT and CR-RC shaping demonstrated much better linearity than that of a conventional ToT. Using a test circuit with a new Gd3Al2Ga3O12 (GAGG) scintillator and an avalanche photodiode, the pulse height spectra of 137Cs and 22Na sources were measured with high linearity. Based on these results, we designed a new application-specific integrated circuit (ASIC) for this multi-channel dToT system, measured the spectra of a 22Na source, and investigated the linearity of the system.
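The poor linearity that motivates the dynamic ToT can be reproduced with a conventional ToT applied to an ideal CR-RC-shaped pulse; the shaper form, time constant and threshold below are illustrative assumptions, not the paper's circuit values.

```python
import numpy as np

def cr_rc_pulse(t, amplitude, tau=1.0):
    """CR-RC shaper response to an impulse: A * (t/tau) * exp(1 - t/tau),
    normalized so the peak equals `amplitude` at t = tau."""
    return amplitude * (t / tau) * np.exp(1.0 - t / tau)

def time_over_threshold(t, pulse, threshold):
    """Width of the interval during which the pulse exceeds the threshold."""
    above = pulse > threshold
    if not above.any():
        return 0.0
    idx = np.flatnonzero(above)
    return t[idx[-1]] - t[idx[0]]

t = np.linspace(0.0, 20.0, 20001)
threshold = 0.1
for amp in (0.5, 1.0, 2.0, 4.0):
    tot = time_over_threshold(t, cr_rc_pulse(t, amp), threshold)
    print(f"amplitude {amp:>3}: ToT = {tot:.2f}")
# ToT grows far more slowly than the amplitude itself: the nonlinearity
# that the dynamic-ToT method is designed to correct.
```

With a fixed threshold, ToT compresses the pulse-height scale roughly logarithmically; the dToT idea is to vary the threshold during the pulse so that the measured width tracks amplitude linearly.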
NASA Technical Reports Server (NTRS)
Kantor, A. V.; Timonin, V. G.; Azarova, Y. S.
1974-01-01
The method of adaptive discretization is the most promising for elimination of redundancy from telemetry messages characterized by signal shape. Adaptive discretization with associative sorting was considered as a way to avoid the shortcomings of adaptive discretization with buffer smoothing and adaptive discretization with logical switching in on-board information compression devices (OICD) in spacecraft. Mathematical investigations of OICD are presented.
Robust image registration using adaptive coherent point drift method
NASA Astrophysics Data System (ADS)
Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong
2016-04-01
The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, it considers only the global spatial structure of the point sets, without other forms of additional attribute information. The equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed that automatically determines the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and automatically determined in the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. The experimental results on optical images and remote sensing images show that the proposed method can significantly improve the matching performance.
Calancie, Blair
2017-01-01
The motor evoked potential (MEP) is used in the operating room to gauge, and ultimately protect, the functional integrity of the corticospinal tract (CST). However, there is no consensus as to how best to interpret the MEP to maximize its sensitivity and specificity to CST compromise. The most common way is to use criteria associated with response magnitude (response amplitude, waveform complexity, etc.). With this approach, should an MEP in response to a fixed stimulus intensity diminish below some predetermined cutoff, suggesting CST dysfunction, the surgical team is warned. An alternative approach is to examine the minimum stimulus energy, the threshold, needed to elicit a minimal response from a given target muscle. Threshold increases can then be used as an alternative basis for evaluating CST functional integrity. As the original proponent of this Threshold-Level alarm criterion for MEP monitoring during surgery, I have been asked to summarize the basis for this method. In so doing, I have included justification for what might seem to be arbitrary recommendations. Special emphasis is placed on anesthetic considerations, because these issues are especially important when weak stimulus intensities are called for. Finally, it is important to emphasize that all the alarm criteria currently in use for interpreting intraoperative MEPs have been shown to be effective for protecting CST axons during surgery. Although the differences between approaches are more than academic, overall it is much better for patient welfare to use some form of MEP monitoring than none at all while waiting for a consensus on alarm criteria to emerge.
Research on PGNAA adaptive analysis method with BP neural network
NASA Astrophysics Data System (ADS)
Peng, Ke-Xin; Yang, Jian-Bo; Tuo, Xian-Guo; Du, Hua; Zhang, Rui-Xue
2016-11-01
A new method for dealing with the puzzle of spectral analysis in prompt gamma neutron activation analysis (PGNAA) is developed and demonstrated. It applies a BP neural network to PGNAA energy spectrum analysis based on Monte Carlo (MC) simulation. The main tasks accomplished are as follows: (1) completing the MC simulation of the PGNAA spectrum library, in which the mass fractions of the elements Si, Ca and Fe are each set from 0.00 to 0.45 in steps of 0.05 and each sample is simulated using MCNP; (2) establishing the BP model for adaptive quantitative analysis of the PGNAA energy spectrum, calculating the peak areas of the eight characteristic gamma rays that correspond to eight elements in each of 1000 samples and in the standard sample; and (3) verifying the viability of the adaptive quantitative analysis algorithm on a further 68 samples. Results show that the precision of the element contents calculated with the neural network is significantly higher than that obtained with the MCLLS method.
Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method
NASA Astrophysics Data System (ADS)
Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony
Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.
NASA Technical Reports Server (NTRS)
Alov, N. V.; Dadayan, K. A.
1988-01-01
The feasibility of measuring metal work functions using the secondary emission threshold method and an electron spectrometer is demonstrated. Measurements are reported for Nb, Mo, Ta, and W bombarded by Ar(+) ions.
A locally adaptive kernel regression method for facies delineation
NASA Astrophysics Data System (ADS)
Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.
2015-12-01
Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology that uses kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest-neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough-curve performance.
Sparse diffraction imaging method using an adaptive reweighting homotopy algorithm
NASA Astrophysics Data System (ADS)
Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Qiu, Zhen
2017-02-01
Seismic diffractions carry valuable information about subsurface small-scale geologic discontinuities, such as faults, cavities and other features associated with hydrocarbon reservoirs. However, seismic imaging methods mainly use reflection theory for constructing imaging models, which imposes a smoothness constraint on imaging conditions. In fact, diffractors occupy only a small fraction of an imaging model and possess discontinuous characteristics. In mathematics, this kind of phenomenon can be described by sparse optimization theory. Therefore, we propose a diffraction imaging method based on a sparsity-constrained model for studying diffractors. A reweighted L2-norm and L1-norm minimization model is investigated, where the L2 term enforces a least-squares misfit between modeled diffractions and observed diffractions and the L1 term imposes sparsity on the solution. In order to solve this model efficiently, we use an adaptive reweighting homotopy algorithm that updates the solutions by tracking a path along inexpensive homotopy steps. Numerical examples and a field data application demonstrate the feasibility of the proposed method and show its significance for detecting small-scale discontinuities in a seismic section. The proposed method improves the focusing ability of diffractions and reduces migration artifacts.
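The reweighted L2-L1 model above can be sketched with a generic iteratively reweighted soft-thresholding (ISTA) solver; the paper's homotopy path-tracking algorithm is not reproduced here, and the matrix, sparsity pattern and parameter values below are illustrative assumptions only.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_ista(A, b, lam=0.05, n_reweights=3, n_iters=200, eps=1e-3):
    """Approximately solve min 0.5||Ax - b||^2 + lam * sum(w_i |x_i|)
    with iterative reweighting of the L1 penalty (ISTA inner solver)."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    w = np.ones(n)
    for _ in range(n_reweights):
        for _ in range(n_iters):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, lam * w / L)
        w = 1.0 / (np.abs(x) + eps)    # small coefficients get heavier penalties
    return x

# Toy sparse-recovery problem standing in for a diffraction imaging model.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[5, 30, 62]] = [1.5, -2.0, 1.0]  # three "diffractors"
b = A @ x_true
x_hat = reweighted_ista(A, b)
```

The reweighting step is what sharpens the solution: coefficients that stay small are penalized ever more strongly, mimicking the sparsity-promoting effect of the paper's adaptive reweighting.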
An adaptive Cartesian grid generation method for Dirty geometry
NASA Astrophysics Data System (ADS)
Wang, Z. J.; Srinivasan, Kumar
2002-07-01
Traditional structured and unstructured grid generation methods need a water-tight boundary surface grid to start. Therefore, these methods are named boundary to interior (B2I) approaches. Although these methods have achieved great success in fluid flow simulations, the grid generation process can still be very time consuming if non-water-tight geometries are given. Significant user time can be taken to repair or clean a dirty geometry with cracks, overlaps or invalid manifolds before grid generation can take place. In this paper, we advocate a different approach to grid generation, namely the interior to boundary (I2B) approach. With an I2B approach, the computational grid is first generated inside the computational domain. This grid is then intelligently connected to the boundary, and the boundary grid is a result of this connection. A significant advantage of the I2B approach is that dirty geometries can be handled without cleaning or repairing, dramatically reducing grid generation time. An I2B adaptive Cartesian grid generation method is developed in this paper to handle dirty geometries without geometry repair. Compared with a B2I approach, the grid generation time with the I2B approach for a complex automotive engine can be reduced by three orders of magnitude.
A forward method for optimal stochastic nonlinear and adaptive control
NASA Technical Reports Server (NTRS)
Bayard, David S.
1988-01-01
A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
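Indicator-driven local refinement of the kind described above can be sketched in one dimension; the |f''|-based indicator below is a cheap stand-in for the adjoint-based, goal-oriented estimator discussed in the abstract, and the function, mesh and marking fraction are illustrative assumptions.

```python
import numpy as np

def refine(mesh, indicator, frac=0.3):
    """Split the cells whose error indicator is in the top `frac` fraction."""
    n_mark = max(1, int(frac * len(indicator)))
    marked = np.argsort(indicator)[-n_mark:]
    new = list(mesh)
    # Insert midpoints from the highest index down so earlier positions stay valid.
    for i in sorted(marked, reverse=True):
        new.insert(i + 1, 0.5 * (mesh[i] + mesh[i + 1]))
    return np.array(new)

# Illustrative target: resolve the sharp layer of tanh(50x) near x = 0,
# using |f''| * h^2 as a cheap local error indicator.
mesh = np.linspace(-1.0, 1.0, 21)
for _ in range(4):
    centers = 0.5 * (mesh[:-1] + mesh[1:])
    h = np.diff(mesh)
    fpp = -5000.0 * np.tanh(50 * centers) / np.cosh(50 * centers) ** 2
    mesh = refine(mesh, np.abs(fpp) * h ** 2)
```

After a few sweeps the mesh is fine only where the indicator is large, which is the mechanism by which AMR "reduces the number of computational variables by several orders of magnitude".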
NASA Astrophysics Data System (ADS)
Kurihara, Yosuke; Watanabe, Kajiro; Kobayashi, Kazuyuki; Tanaka, Hiroshi
General anesthesia used for surgical operations may leave patients in unstable condition afterwards, which can lead to respiratory arrest. Under such circumstances, nurses may fail to notice the change in a patient's condition, and other lapses in care can also occur. Such incidents are particularly likely while a patient is being transferred from the ICU to a room on a stretcher. Monitoring changes in blood oxygen saturation and other vital signs to detect a respiratory arrest is not easy while transferring a patient on a stretcher. Here we present several noise-reduction systems and an algorithm to detect respiratory arrest during patient transfer, based on the unconstrained air-pressure method that the authors presented previously. As a result, when the acceleration level of the stretcher noise was 0.5 G, the respiratory-arrest detection ratio with this novel method was 65%, while that of the conventional method was 0%.
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. The method has the oracle property, meaning that the nonzero parameters are estimated with their standard limiting distribution and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
Evaluation of Adaptive Subdivision Method on Mobile Device
NASA Astrophysics Data System (ADS)
Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila
2013-06-01
Recently, there have been significant improvements in the capabilities of mobile devices, but rendering a large 3D object is still tedious because of the resource constraints of mobile devices. To reduce storage requirements, the 3D object is simplified, but certain areas of curvature are compromised and the surface will not be smooth. Therefore, a method to smooth selected areas of curvature is implemented. One popular method is the adaptive subdivision method. Experiments are performed using two data sets, with results based on processing time, rendering speed and the appearance of the object on the devices. The results show a drop in frame-rate performance due to the increase in the number of triangles with each level of iteration, while the processing time for generating the new mesh also increases significantly. Since there is a difference in screen size between the devices, the surface on the iPhone appears to have more triangles and to be more compact than the surface displayed on the iPad.
On the basis of this assumption (a) the relationships between the method of limits and the method of constant stimuli are derived, (b) a procedure for...comparing data obtained by the two methods is recommended, (c) a procedure for comparing ascending and descending series within the method of limits is given. (Author)
NASA Astrophysics Data System (ADS)
Yang, R. X.; Li, C.; Sun, Y. J.; Heng, Y. K.; Sun, S. S.; Dai, H. L.; Wu, Z.; Liu, Z.; Wang, X. Z.; An, F. F.
2017-01-01
The Beijing Spectrometer (BESIII) has just upgraded its end-cap time-of-flight (ETOF) system, using multi-gap resistive plate chambers (MRPCs) to replace the previous scintillator detectors. These MRPCs show multi-peak phenomena in their time-over-threshold (TOT) distribution, which were also observed in the long-strip MRPCs built for the RHIC-STAR Muon Telescope Detector (MTD). After carefully investigating the correlation between the multi-peak distribution and the incident hit positions along the strips, we find that it can be semi-quantitatively explained by signal reflections at the ends of the readout strips. A new offline calibration method was therefore implemented on the MRPC ETOF data in BESIII, significantly improving the T-TOT correlation used to evaluate the time resolution.
Two methods of tuning threshold voltage of bulk FinFETs with replacement high-k metal-gate stacks
NASA Astrophysics Data System (ADS)
Xu, Miao; Zhu, Huilong; Zhang, Yanbo; Xu, Qiuxia; Zhang, Yongkui; Qin, Changliang; Zhang, Qingzhu; Yin, Huaxiang; Xu, Hao; Chen, Shuai; Luo, Jun; Li, Chunlong; Zhao, Chao; Ye, Tianchun
2017-03-01
In this work, we propose two threshold voltage (VTH) tuning methods for bulk FinFETs with a replacement high-k metal gate. The first method is to perform a vertical implantation into the fin structure after dummy gate removal, forming a self-aligned halo & punch-through stop pocket (halo & PTSP) doping profile. The second method is to perform P+/BF2+ ion implantations into the single common work function (WF) layer of the N-/P-FinFETs, respectively. These two methods were investigated by TCAD simulations and MOS-capacitor experiments, respectively, and then successfully integrated into FinFET fabrication. Experimental results show that the halo & PTSP doping profile can reduce VTH roll-off and total variation. With the P+/BF2+ doped WF layer, VTH-sat shifts by -0.43 V/+1.26 V for N-FinFETs and -0.75 V/+0.11 V for P-FinFETs, respectively, at a gate length of 500 nm. The two proposed methods are simple and effective for FinFET VTH tuning and have potential for future mass production.
Bayesian approach to color-difference models based on threshold and constant-stimuli methods.
Brusola, Fernando; Tortajada, Ignacio; Lengua, Ismael; Jordá, Begoña; Peris, Guillermo
2015-06-15
An alternative approach based on statistical Bayesian inference is presented to deal with the development of color-difference models and the precision of parameter estimation. The approach was applied to simulated data and real data, the latter published by selected authors involved with the development of color-difference formulae using traditional methods. Our results show very good agreement between the Bayesian and classical approaches. Among other benefits, our proposed methodology allows one to determine the marginal posterior distribution of each random individual parameter of the color-difference model. In this manner, it is possible to analyze the effect of individual parameters on the statistical significance calculation of a color-difference equation.
Method for removing tilt control in adaptive optics systems
Salmon, Joseph Thaddeus
1998-01-01
A new adaptive optics system and method of operation, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^(-1) X^T) G (I - A)
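The modified gain matrix equation can be sketched directly in NumPy; the dimensions, the tilt-mode matrix X and the coupling matrix A below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def tilt_removed_gain(G, X, A):
    """Compute G' = (I - X (X^T X)^-1 X^T) G (I - A).

    The left factor projects the actuator commands onto the subspace
    orthogonal to the tilt modes (the columns of X), so the deformable
    mirror never reproduces tip/tilt handled by the steering mirror.
    """
    n = G.shape[0]
    P = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)  # orthogonal projector
    return P @ G @ (np.eye(G.shape[1]) - A)

# Illustrative sizes: 6 actuators, 6 sensor gradients, 2 tilt modes.
rng = np.random.default_rng(1)
G = rng.standard_normal((6, 6))
X = rng.standard_normal((6, 2))    # tip and tilt mode shapes (hypothetical)
A = 0.1 * np.eye(6)                # hypothetical coupling matrix
G_prime = tilt_removed_gain(G, X, A)
```

By construction X^T G' = 0, i.e. the modified gain matrix produces no component along the tilt modes.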
Method for removing tilt control in adaptive optics systems
Salmon, J.T.
1998-04-28
A new adaptive optics system and method of operation are disclosed, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^(-1) X^T) G (I - A). 3 figs.
Adapted G-mode Clustering Method applied to Asteroid Taxonomy
NASA Astrophysics Data System (ADS)
Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.
2013-11-01
The original G-mode is a clustering method developed by A. I. Gavrishin in the late 1960s for the geochemical classification of rocks; it has also been applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, reimplemented in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator, and numpy.histogramdd was applied to find the initial seeds from which clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
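The seed-finding and distance steps named in the abstract (numpy.histogramdd for initial seeds, scipy.spatial.distance.mahalanobis as the distance estimator) can be sketched as follows; the two-cluster toy data stand in for the SDSS photometry, and the bin count and membership radius are illustrative assumptions, not the full G-mode statistical test.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Two illustrative 2-D clusters standing in for SDSS colour data.
rng = np.random.default_rng(2)
data = np.vstack([rng.normal([0.0, 0.0], 0.1, (200, 2)),
                  rng.normal([2.0, 2.0], 0.1, (200, 2))])

# Seed search as in the abstract: the densest bin of a multidimensional histogram.
hist, edges = np.histogramdd(data, bins=10)
peak = np.unravel_index(np.argmax(hist), hist.shape)
seed = np.array([0.5 * (edges[d][i] + edges[d][i + 1])
                 for d, i in enumerate(peak)])

# Grow the cluster: keep points within a Mahalanobis radius of the seed.
VI = np.linalg.inv(np.cov(data, rowvar=False))
dists = np.array([mahalanobis(p, seed, VI) for p in data])
members = data[dists < 1.0]
```

The densest-bin seed lands inside one of the normal distributions, and the Mahalanobis radius then collects that cluster's members while excluding the other cluster.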
A Self-Adaptive Projection and Contraction Method for Linear Complementarity Problems
Liao Lizhi Wang Shengli
2003-10-15
In this paper we develop a self-adaptive projection and contraction method for the linear complementarity problem (LCP). This method improves the practical performance of the modified projection and contraction method by adopting a self-adaptive technique. The global convergence of our new method is proved under mild assumptions. Our numerical tests clearly demonstrate the necessity and effectiveness of our proposed method.
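A minimal projection iteration for the LCP can be sketched as follows; the paper's self-adaptive step-size rule is replaced here by a constant beta, and the small positive definite test problem is illustrative.

```python
import numpy as np

def projection_lcp(M, q, beta=0.1, tol=1e-10, max_iter=10000):
    """Basic projection iteration for LCP(M, q):
    find x >= 0 with Mx + q >= 0 and x . (Mx + q) = 0.

    Simplified sketch with a fixed step beta; the paper adapts beta
    during the iteration to improve practical performance.
    """
    x = np.zeros(len(q))
    for _ in range(max_iter):
        x_new = np.maximum(0.0, x - beta * (M @ x + q))  # project onto x >= 0
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
q = np.array([-1.0, -1.0])
x = projection_lcp(M, q)                  # solution is x = [1/3, 1/3]
```

For this problem the fixed point is interior (Mx + q = 0), so the iteration converges linearly; the self-adaptive variant adjusts beta so that a good rate is obtained without tuning by hand.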
Adaptable Metadata Rich IO Methods for Portable High Performance IO
Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten
2009-01-01
Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small
Principles and Methods of Adapted Physical Education and Recreation.
ERIC Educational Resources Information Center
Arnheim, Daniel D.; And Others
This text is designed for the elementary and secondary school physical educator and the recreation specialist in adapted physical education and, more specifically, as a text for college courses in adapted and corrective physical education and therapeutic recreation. The text is divided into four major divisions: scope, key teaching and therapy…
Broom, Donald M
2006-01-01
The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism that has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and
Gao, Hong; Xu, Yuntao; Yang, Lei; Lam, Chow-Shing; Wang, Hailing; Zhou, Jingang; Ng, C Y
2011-12-14
By employing the vacuum ultraviolet (VUV) laser velocity-map imaging (VMI) photoelectron scheme to discriminate energetic photoelectrons, we have measured the VUV-VMI-threshold photoelectrons (VUV-VMI-TPE) spectra of propargyl radical [C(3)H(3)(X̃(2)B(1))] near its ionization threshold at photoelectron energy bandwidths of 3 and 7 cm(-1) (full-width at half-maximum, FWHM). The simulation of the VUV-VMI-TPE spectra thus obtained, along with the Stark shift correction, has allowed the determination of a precise value 70 156 ± 4 cm(-1) (8.6982 ± 0.0005 eV) for the ionization energy (IE) of C(3)H(3). In the present VMI-TPE experiment, the Stark shift correction is determined by comparing the VUV-VMI-TPE and VUV laser pulsed field ionization-photoelectron (VUV-PFI-PE) spectra for the origin band of the photoelectron spectrum of the X̃(+)-X̃ transition of chlorobenzene. The fact that the FWHMs for this origin band observed using the VUV-VMI-TPE and VUV-PFI-PE methods are nearly the same indicates that the energy resolutions achieved in the VUV-VMI-TPE and VUV-PFI-PE measurements are comparable. The IE(C(3)H(3)) value obtained based on the VUV-VMI-TPE measurement is consistent with the value determined by the VUV laser PIE spectrum of supersonically cooled C(3)H(3)(X̃(2)B(1)) radicals, which is also reported in this article.
NASA Astrophysics Data System (ADS)
Nakamura, Y.; Shimazoe, K.; Takahashi, H.
2016-02-01
Silicon photomultipliers (SiPMs), a relatively new type of photon detector, have received increasing attention in the fields of nuclear medicine and high-energy physics because of their compactness and high gain of up to 10^6. In this work, a SiPM-based multi-channel gamma-ray detector with individual readout based on the dynamic time-over-threshold (dToT) method is implemented and demonstrated as an elemental module for large-area gamma-ray imager applications. The detector consists of 64 channels of KETEK SiPM PM6660 (6 × 6 mm2, containing 10,000 micro-cells of 60 × 60 μm2) coupled to an 8 × 8 array of high-energy-resolution Gd3(Al,Ga)5O12(Ce) (HR-GAGG) crystals (10 × 10 × 10 mm3) segmented by a 1 mm thick BaSO4 reflector. To produce a digital pulse containing linear energy information, the dToT-based readout circuit consists of a CR-RC shaping amplifier (2.2 μs) and a comparator with a feedback component. By modelling the SiPM pulse, the light output, and the CR-RC shaping amplifier, the integral non-linearity (INL) was numerically calculated in terms of the delay time and the time constant of the dynamic threshold movement. The experimental results were an averaged INL of 5.8±1.6% and a full-width-at-half-maximum (FWHM) energy resolution of 7.4±0.9% at 662 keV. The 64-channel single-mode detector module was successfully implemented, demonstrating its potential as an elemental module for large-area gamma-ray imaging applications.
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
A hybrid method for optimization of the adaptive Goldstein filter
NASA Astrophysics Data System (ADS)
Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue
2014-12-01
The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is applied as a power of the filtering function; depending on its value, areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using indicators such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in practice, and the optimal model for accurately determining the functional relationship between the indicators and alpha is also unclear. As a result, the filter tends to under- or over-filter and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous-pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is merged into the filtering procedure to suppress the high noise over incoherent areas. Experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance than existing approaches.
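The underlying Goldstein filter that the paper tunes can be sketched as follows; the adaptive alpha estimation with bias correction and bootstrapping is not reproduced, and the patch size, fringe ramp and noise level are illustrative assumptions.

```python
import numpy as np

def goldstein_filter(ifg, alpha=0.8, smooth=3):
    """Classic Goldstein frequency-domain filter on a complex interferogram patch:
    weight the spectrum by a smoothed version of its own magnitude raised to alpha."""
    Z = np.fft.fft2(ifg)
    S = np.abs(Z)
    offs = range(-(smooth // 2), smooth // 2 + 1)
    # Boxcar-smooth the spectrum magnitude (periodic boundaries via roll).
    Ssm = sum(np.roll(S, (i, j), axis=(0, 1)) for i in offs for j in offs) / smooth**2
    H = (Ssm / Ssm.max()) ** alpha   # alpha controls the filtering strength
    return np.fft.ifft2(Z * H)

# Illustrative patch: a smooth fringe ramp buried in phase noise.
rng = np.random.default_rng(3)
x, y = np.meshgrid(np.arange(64), np.arange(64))
phase = 0.2 * x
noisy = np.exp(1j * (phase + 0.8 * rng.standard_normal((64, 64))))
filtered = goldstein_filter(noisy, alpha=0.8)
```

Because the fringe energy concentrates in a few spectral bins while the noise spreads over all of them, the |spectrum|^alpha weighting suppresses the noise bins and the residual phase scatter drops.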
LDRD Final Report: Adaptive Methods for Laser Plasma Simulation
Dorr, M R; Garaizar, F X; Hittinger, J A
2003-01-29
The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are ''hydrodynamically large'', i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an
Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru
2012-01-01
Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to reduction of the radiation dose. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR produces an NPS drop at all spatial frequencies (similar to the NPS change produced by a dose increase), the conventional method cannot evaluate the noise property correctly, because it does not account for the volume-data nature of CT images. The purpose of our study was to develop a new method for NPS measurement that can be adapted to IRs. Our method utilizes thick multi-planar reconstruction (MPR) images. Thick images are generally made by averaging CT volume data in the direction perpendicular to the MPR plane (e.g. the z-direction for an axial MPR plane). By using this averaging technique as a cutter for the 3D NPS, we can obtain an adequate 2D-extracted NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR 3D, Toshiba) to investigate its validity. A water phantom with a 24-cm diameter was scanned at 120 kV and 200 mAs with a 320-row CT scanner (Aquilion ONE, Toshiba). The results of the study showed that the adequate thickness of MPR images for the eNPS was more than 25.0 mm. Our new NPS measurement method utilizing thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
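As an illustration of the thick-MPR idea, the sketch below averages groups of axial slices and takes the ensemble 2D NPS of the result; for white noise, averaging N slices should scale the integrated NPS down by about 1/N. This is a simplified stand-in on a synthetic noise volume with an assumed 0.5 mm pixel, not the authors' implementation:

```python
import numpy as np

def nps_2d(rois, pixel_mm=0.5):
    """Ensemble 2D noise power spectrum of square noise-only ROIs."""
    rois = np.asarray(rois, dtype=float)
    n, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)  # remove DC per ROI
    spectra = np.abs(np.fft.fft2(detrended)) ** 2
    return spectra.mean(axis=0) * (pixel_mm * pixel_mm) / (nx * ny)

def thick_mpr_nps(volume, thickness_slices, pixel_mm=0.5):
    """Average groups of adjacent axial slices (thick MPR), then take
    the 2D NPS of the averaged slices."""
    volume = np.asarray(volume, dtype=float)
    nz = (volume.shape[0] // thickness_slices) * thickness_slices
    thick = volume[:nz].reshape(-1, thickness_slices,
                                volume.shape[1], volume.shape[2]).mean(axis=1)
    return nps_2d(thick, pixel_mm)

rng = np.random.default_rng(0)
vol = rng.normal(0.0, 10.0, size=(40, 64, 64))   # stand-in white-noise volume
nps_thin = thick_mpr_nps(vol, 1)                 # conventional 2D NPS
nps_thick = thick_mpr_nps(vol, 10)               # eNPS from 10-slice-thick MPR
ratio = float(nps_thick.sum() / nps_thin.sum())  # ~1/10 for white noise
```

For real CT noise, which is correlated along z, the ratio deviates from 1/N, which is exactly the effect the eNPS is designed to capture.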
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
NASA Astrophysics Data System (ADS)
Khamwan, Kitiwat; Krisanachinda, Anchali; Pluempitiwiriyawej, Charnchai
2012-10-01
This study presents an automatic method to trace the boundary of the tumour in positron emission tomography (PET) images. It has been discovered that Otsu's threshold value is biased when the within-class variances of the object and the background are significantly different. To solve the problem, a double-stage threshold search that minimizes the energy between the first Otsu threshold and the maximum intensity value is introduced. This shifted-optimal thresholding is embedded into a region-based active contour so that both algorithms are performed consecutively. The efficiency of the method is validated using six sphere inserts (0.52-26.53 cc volume) of the IEC/2001 torso phantom. Both the spheres and the phantom were filled with 18F solution, and PET images were measured at four source-to-background ratios (SBR). The results illustrate that the tumour volumes segmented by the combined algorithm are more accurate than those from the traditional active contour. The method was clinically implemented in ten oesophageal cancer patients. The results are evaluated and compared with manual tracing by an experienced radiation oncologist. The advantage of the algorithm is the reduced erroneous delineation, which improves the precision and accuracy of PET tumour contouring. Moreover, the combined method is robust, independent of the SBR threshold-volume curves, and does not require prior lesion size measurement.
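The double-stage threshold search can be sketched as follows; the synthetic image, the bin count, and the omission of the active-contour stage are simplifications of the published method:

```python
import numpy as np

def otsu(values, bins=256):
    """Classic Otsu threshold: maximize the between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    p = hist.astype(float) / hist.sum()
    w0 = np.cumsum(p)                       # class-0 (background) weight
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)             # cumulative weighted mean
    mu_t = mu[-1]
    valid = (w0 > 1e-12) & (w1 > 1e-12)
    mu0 = mu / np.where(valid, w0, 1.0)
    mu1 = (mu_t - mu) / np.where(valid, w1, 1.0)
    sigma_b = np.where(valid, w0 * w1 * (mu0 - mu1) ** 2, 0.0)
    return float(centers[int(np.argmax(sigma_b))])

def double_stage_threshold(image):
    """Stage 1: Otsu on the whole image (biased when the within-class
    variances differ). Stage 2: repeat the search restricted to the
    range [t1, max], shifting the threshold toward the hot object."""
    t1 = otsu(image.ravel())
    t2 = otsu(image[image >= t1])
    return t1, t2

rng = np.random.default_rng(0)
background = rng.normal(10.0, 2.0, size=20000)  # large, tight background class
tumour = rng.normal(100.0, 25.0, size=500)      # small, broad uptake region
img = np.concatenate([background, tumour])
t1, t2 = double_stage_threshold(img)            # t2 sits well above t1
```

With very unequal class variances, t1 lands far from the object; the second-stage search pushes the threshold into the uptake region, which is the bias correction described in the abstract.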
NASA Technical Reports Server (NTRS)
Wang, Ray (Inventor)
2009-01-01
A method and system for spatial data manipulation input and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both the short and long distances. The wireless transceiver is automatically adaptive and wireless devices can send and receive wireless digital and analog data from various sources rapidly in real-time via available networks and network services.
Adaptive L₁/₂ shooting regularization method for survival analysis using gene expression data.
Liu, Xiao-Ying; Liang, Yong; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak
2013-01-01
A new adaptive L₁/₂ shooting regularization method for variable selection based on Cox's proportional hazards model is proposed. This adaptive L₁/₂ shooting algorithm can be easily obtained by optimizing a reweighted iterative series of L₁ penalties with a shooting strategy for the L₁/₂ penalty. Simulation results based on high-dimensional artificial data show that the adaptive L₁/₂ shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. The results from a real gene expression dataset (DLBCL) also indicate that the L₁/₂ regularization method performs competitively.
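A toy version of the reweighted shooting idea, applied to plain least squares rather than the Cox partial likelihood used in the paper; the substitution, the reweighting formula, and all parameter values are illustrative assumptions:

```python
import numpy as np

def soft(z, g):
    """Soft-thresholding operator."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def shooting_weighted_l1(X, y, lam, w, n_sweeps=200):
    """Coordinate-descent ('shooting') solver for
    0.5*||y - X b||^2 + lam * sum_j w_j * |b_j|."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]  # partial residual
            z = X[:, j] @ r_j
            beta[j] = soft(z, lam * w[j]) / col_sq[j]
    return beta

def adaptive_l_half_shooting(X, y, lam, n_outer=5, eps=1e-3):
    """Approximate the L1/2 penalty by iteratively reweighted L1 shooting:
    w_j = 1 / (sqrt(|beta_j|) + eps)."""
    w = np.ones(X.shape[1])
    beta = np.zeros(X.shape[1])
    for _ in range(n_outer):
        beta = shooting_weighted_l1(X, y, lam, w)
        w = 1.0 / (np.sqrt(np.abs(beta)) + eps)
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
true_beta = np.zeros(10)
true_beta[0], true_beta[3] = 3.0, -2.0
y = X @ true_beta + 0.1 * rng.normal(size=100)
beta = adaptive_l_half_shooting(X, y, lam=5.0)   # sparse, near-true estimate
```

The reweighting step is what pushes small coefficients to exactly zero while leaving large ones nearly unpenalized, mimicking the stronger sparsity of the L₁/₂ penalty.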
Adaptation of a-Stratified Method in Variable Length Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai
Test security has often been a problem in computerized adaptive testing (CAT) because the traditional wisdom of item selection overly exposes high discrimination items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses items of less discrimination in earlier stages of testing, has been shown to be very…
Systems and Methods for Derivative-Free Adaptive Control
NASA Technical Reports Server (NTRS)
Yucelen, Tansel (Inventor); Kim, Kilsoo (Inventor); Calise, Anthony J. (Inventor)
2015-01-01
An adaptive control system is disclosed. The control system can control uncertain dynamic systems. The control system can employ one or more derivative-free adaptive control architectures. The control system can further employ one or more derivative-free weight update laws. The derivative-free weight update laws can comprise a time-varying estimate of an ideal vector of weights. The control system of the present invention can therefore quickly stabilize systems that undergo sudden changes in dynamics, caused by, for example, sudden changes in weight. Embodiments of the present invention can also provide a less complex control system than existing adaptive control systems. The control system can control aircraft and other dynamic systems, such as, for example, those with non-minimum phase dynamics.
Study of adaptive methods for data compression of scanner data
NASA Technical Reports Server (NTRS)
1977-01-01
The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.
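A minimal, non-adaptive 2D DPCM loop illustrates the predictor/quantizer structure that the adaptive variant builds on; the fixed quantizer levels and the smooth test image are arbitrary choices for this sketch:

```python
import numpy as np

def dpcm_2d(image, levels=(-12.0, -4.0, 0.0, 4.0, 12.0)):
    """Minimal 2D DPCM codec: predict each pixel from its causal
    neighbours (west, north), quantize the prediction error, and keep
    the quantizer inside the loop so errors do not accumulate. A fixed
    quantizer is used here; an adaptive variant would rescale the levels
    based on local image activity."""
    img = np.asarray(image, dtype=float)
    levels = np.asarray(levels, dtype=float)
    recon = np.zeros_like(img)
    resid = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            west = recon[i, j - 1] if j > 0 else 128.0
            north = recon[i - 1, j] if i > 0 else 128.0
            pred = 0.5 * (west + north)
            err = img[i, j] - pred
            q = levels[int(np.argmin(np.abs(levels - err)))]
            resid[i, j] = q           # this is what would be entropy-coded
            recon[i, j] = pred + q    # decoder-identical reconstruction
    return recon, resid

x = np.linspace(0.0, np.pi, 32)
smooth = 128.0 + 20.0 * np.outer(np.sin(x), np.sin(x))  # smooth test image
recon, resid = dpcm_2d(smooth)
rmse = float(np.sqrt(np.mean((recon - smooth) ** 2)))
```

With five residual levels, the entropy of `resid` is bounded by log2(5) ≈ 2.3 bits per pixel before entropy coding, which is the mechanism behind rates like the 1.84 bits per pixel quoted above.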
Oliver, R; Bjoertomt, O; Driver, J; Greenwood, R; Rothwell, J
2010-01-01
There is considerable inter-study and inter-individual variation in the scalp location of parietal sites where transcranial magnetic stimulation (TMS) may modulate visuospatial behaviours (see Ryan, Bonilha, & Jackson 2006); and no clear consensus on methods for identifying such sites. Here we introduce a novel TMS “hunting paradigm” that allows rapid, reliable identification of a site over right anterior intraparietal sulcus (IPS), where short trains (at 10 Hz for 0.5s) of TMS disrupt performance of a task in which subjects judge the presence or absence of a small peripheral gap (at 14 degrees eccentricity), on one or other (known) side of an extended (29 degrees) horizontal line centred on fixation. Signal detection analysis confirmed that TMS at this site reduced sensitivity (d’) for gap targets in the left visual hemifield. A further experiment showed that the same right-parietal TMS increased sensitivity instead for gaps in the right hemifield. Comparing TMS across a grid of scalp locations around the identified ‘hotspot’ confirmed the spatial specificity. Assessment of the TMS intensity required to produce the phenomena found this was linearly related to individuals’ resting motor TMS threshold over hand M1. Our approach provides a systematic new way to identify an effective site and intensity in individuals, at which TMS over right parietal cortex reliably changes visuospatial sensitivity. PMID:19651149
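The signal-detection analysis mentioned above reduces to computing d' from hit and false-alarm rates; the trial counts below are hypothetical, and the log-linear correction is one common convention, not necessarily the one used in the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate),
    with a log-linear correction so rates never reach exactly 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

baseline = d_prime(45, 5, 10, 40)    # good left-hemifield gap detection
with_tms = d_prime(35, 15, 15, 35)   # parietal TMS lowers sensitivity
```

A drop in d' under TMS, with the criterion free to vary, is what distinguishes a genuine sensitivity change from a mere response-bias shift.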
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data Format (CDF) served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
Inner string cementing adapter and method of use
Helms, L.C.
1991-08-20
This patent describes an inner string cementing adapter for use on a work string in a well casing having floating equipment therein. It comprises mandrel means for connecting to a lower end of the work string; and sealing means adjacent to the mandrel means for substantially flatly sealing against a surface of the floating equipment without engaging a central opening in the floating equipment.
An adaptive precision gradient method for optimal control.
NASA Technical Reports Server (NTRS)
Klessig, R.; Polak, E.
1973-01-01
This paper presents a gradient algorithm for unconstrained optimal control problems. The algorithm is stated in terms of numerical integration formulas, the precision of which is controlled adaptively by a test that ensures convergence. Empirical results show that this algorithm is considerably faster than its fixed-precision counterpart.
A New Method to Cancel RFI---The Adaptive Filter
NASA Astrophysics Data System (ADS)
Bradley, R.; Barnbaum, C.
1996-12-01
An increasing amount of precious radio frequency spectrum in the VHF, UHF, and microwave bands is being utilized each year to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Some radio spectral lines of astronomical interest occur outside the protected radio astronomy bands and are unobservable due to heavy interference. Conventional approaches to deal with RFI include legislation, notch filters, RF shielding, and post-processing techniques. Although these techniques are somewhat successful, each suffers from insufficient interference cancellation. One concept of interference excision that has not been used before in radio astronomy is adaptive interference cancellation. The concept of adaptive interference canceling was first introduced in the mid-1970s as a way to reduce unwanted noise in low frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartment of automobiles. Only recently have high-speed digital filter chips made adaptive filtering possible in a bandwidth as large as a few megahertz, finally opening the door to astronomical uses. The system consists of two receivers: the main beam of the radio telescope receives the desired signal corrupted by RFI coming in the sidelobes, and the reference antenna receives only the RFI. The reference antenna signal is processed using a digital adaptive filter and then subtracted from the signal in the main beam, thus producing the system output. The weights of the digital filter are adjusted by way of an algorithm that minimizes, in a least-squares sense, the power output of the system. Through an adaptive-iterative process, the interference canceler will lock onto the RFI and the filter will adjust itself to minimize the effect of the RFI at the system output. We are building a prototype 100 MHz receiver and will measure the cancellation
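The canceller described here is essentially the classic two-channel LMS structure. The sketch below uses a synthetic narrowband interferer and assumed parameters (8 taps, step size mu = 0.01); it illustrates the principle, not the prototype receiver's design:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """LMS adaptive noise canceller: an FIR filter on the reference-antenna
    signal is adapted to match the RFI in the primary channel; the system
    output is the primary minus the filtered reference."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]  # most recent reference samples
        e = primary[n] - w @ x             # output: desired signal + residual RFI
        w += 2.0 * mu * e * x              # LMS weight update (minimizes E[e^2])
        out[n] = e
    return out

rng = np.random.default_rng(2)
t = np.arange(4000)
rfi = np.sin(2 * np.pi * 0.05 * t)         # narrowband interferer
signal = 0.1 * rng.normal(size=t.size)     # weak noise-like astronomical signal
primary = signal + 2.0 * rfi               # main beam: signal + sidelobe RFI
reference = np.roll(rfi, 3)                # reference antenna: RFI only, delayed
out = lms_cancel(primary, reference)
residual_power = float(np.mean(out[2000:] ** 2))   # after convergence
rfi_power = float(np.mean((2.0 * rfi) ** 2))
```

Because minimizing the output power cannot cancel the (uncorrelated) astronomical signal, the filter converges to removing only the RFI component.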
The use of the spectral method within the fast adaptive composite grid method
McKay, S.M.
1994-12-31
The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on different grids with varying discretizations and using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the resulting accuracy of this hybrid method outside of the subdomain will be investigated.
Adaptive finite element methods for two-dimensional problems in computational fracture mechanics
NASA Technical Reports Server (NTRS)
Min, J. B.; Bass, J. M.; Spradley, L. W.
1994-01-01
Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.
Method and apparatus for adaptive force and position control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1989-01-01
The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time control employs adaptive force/position control by use of feedforward and feedback controllers, with the feedforward controller being the inverse of the linearized model of robot dynamics and containing only proportional-double-derivative terms. The feedback controller, of the proportional-integral-derivative type, ensures that manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture. The adaptive force controller achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.
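A one-degree-of-freedom sketch of the feedforward-plus-feedback idea (inverse-model feedforward, PID feedback); the mass values and gains are invented for illustration, and the patent's adaptation laws are not modeled:

```python
import math

def track(m_true=2.0, m_model=1.8, kp=400.0, kd=40.0, ki=300.0,
          dt=1e-3, t_end=2.0):
    """1D point mass tracking x_ref(t) = sin(t).
    Feedforward: inverse of the (imperfect) linearized model, m_model * a_ref.
    Feedback: PID on the tracking error, absorbing the model mismatch."""
    x = v = integ = 0.0
    worst_late_error = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        x_ref, v_ref, a_ref = math.sin(t), math.cos(t), -math.sin(t)
        e, de = x_ref - x, v_ref - v
        integ += e * dt
        u = m_model * a_ref + kp * e + kd * de + ki * integ  # feedforward + PID
        a = u / m_true                                       # true plant dynamics
        v += a * dt                                          # semi-implicit Euler
        x += v * dt
        if t > 1.0:                                          # after the transient
            worst_late_error = max(worst_late_error, abs(x_ref - x))
    return worst_late_error

err = track()   # small steady tracking error despite the 10% model mismatch
```

The feedforward term supplies most of the required force from the model, so the feedback only has to correct the residual model error, which is why the combination tracks tightly even with moderate gains.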
A new adaptive time step method for unsteady flow simulations in a human lung.
Fernández-Tena, Ana; Marcos, Alfonso C; Martínez, Cristina; Keith Walters, D
2017-04-07
The innovation presented is a method for adaptive time-stepping that allows clustering of time steps in portions of the cycle for which flow variables are rapidly changing, based on the concept of using a uniform step in a relevant dependent variable rather than a uniform step in the independent variable time. A user-defined function was developed to adapt the magnitude of the time step (adaptive time step) to a defined rate of change in inlet velocity. Quantitative comparison indicates that the new adaptive time stepping method significantly improves accuracy for simulations using an equivalent number of time steps per cycle.
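The core rule, a uniform step in the dependent variable rather than in time, can be sketched as dt = du_target / |dU/dt|, clamped to user bounds; the sinusoidal inlet waveform and all bounds below are only illustrative:

```python
import math

def adaptive_time_steps(velocity, du_target, dt_min=1e-4, dt_max=0.05, t_end=1.0):
    """Choose each time step so the inlet velocity changes by roughly
    du_target per step: dt = du_target / |dU/dt|, clamped to [dt_min, dt_max]."""
    eps = 1e-12
    t, steps = 0.0, []
    while t < t_end:
        h = 1e-6                     # centered finite-difference estimate of dU/dt
        dudt = (velocity(t + h) - velocity(t - h)) / (2 * h)
        dt = min(dt_max, max(dt_min, du_target / (abs(dudt) + eps)))
        dt = min(dt, t_end - t)      # do not overshoot the end of the cycle
        t += dt
        steps.append(dt)
    return steps

U = lambda t: math.sin(2 * math.pi * t)   # breathing-like inlet velocity
steps = adaptive_time_steps(U, du_target=0.05)
```

Small steps cluster where the inlet velocity changes fastest (near the zero crossings of the sine) and the step relaxes to `dt_max` near the flow peaks, which is exactly the clustering behaviour the abstract describes.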
NASA Astrophysics Data System (ADS)
Bussetta, Philippe; Marceau, Daniel; Ponthot, Jean-Philippe
2012-02-01
The aim of this work is to propose a new numerical method for solving the mechanical frictional contact problem in the general case of multiple bodies in three-dimensional space. This method is called the adapted augmented Lagrangian method (AALM) and can be used in a multi-physical context (such as thermo-electro-mechanical field problems). This paper presents this new method and its advantages over classical methods such as the penalty method (PM), the adapted penalty method (APM), and the augmented Lagrangian method (ALM). In addition, the efficiency and reliability of the AALM are demonstrated on some academic problems and an industrial thermo-electro-mechanical problem.
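For intuition, here is an augmented-Lagrangian outer loop on a one-dimensional frictionless contact (a spring pushed against a rigid wall); this toy problem and its parameters are illustrative and are not the paper's multi-body formulation:

```python
def contact_aalm(k=100.0, u_free=2.0, wall=1.0, r=50.0, tol=1e-10, max_iter=200):
    """Augmented-Lagrangian treatment of a 1D frictionless contact:
    a spring of stiffness k pulled toward u_free, blocked by a wall.
    Outer loop (Uzawa update): lambda <- max(0, lambda + r * gap)."""
    lam = 0.0
    u = u_free
    for _ in range(max_iter):
        # inner solve of k*(u - u_free) + lam + r*(u - wall) = 0 (active contact)
        u = (k * u_free + r * wall - lam) / (k + r)
        gap = u - wall
        lam = max(0.0, lam + r * gap)   # multiplier accumulates the contact force
        if abs(gap) < tol:
            break
    return u, lam

u, lam = contact_aalm()   # u -> wall position, lam -> contact force k*(u_free - wall)
```

Unlike a pure penalty method, the multiplier update enforces the constraint essentially exactly without driving the penalty parameter r to infinity, which is the usual motivation for (adapted) augmented Lagrangian schemes.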
Ly, Sovann; Arashiro, Takeshi; Ieng, Vanra; Tsuyuoka, Reiko; Parry, Amy; Horwood, Paul; Heng, Seng; Hamid, Sarah; Vandemaele, Katelijn; Chin, Savuth; Sar, Borann
2017-01-01
Objective: To establish seasonal and alert thresholds and transmission intensity categories for influenza to provide timely triggers for preventive measures or upscaling control measures in Cambodia. Methods: Using Cambodia’s influenza-like illness (ILI) and laboratory-confirmed influenza surveillance data from 2009 to 2015, three parameters were assessed to monitor influenza activity: the proportion of ILI patients among all outpatients, the proportion of ILI samples positive for influenza, and the product of the two. With these parameters, four threshold levels (seasonal, moderate, high and alert) were established and transmission intensity was categorized based on a World Health Organization alignment method. Parameters were compared against their respective thresholds. Results: Distinct seasonality was observed using the two parameters that incorporated laboratory data. Thresholds established using the composite parameter, combining syndromic and laboratory data, had the fewest false alarms in declaring season onset and were most useful in monitoring intensity. Unlike in temperate regions, the syndromic parameter was less useful in monitoring influenza activity or for setting thresholds. Conclusion: Influenza thresholds based on appropriate parameters have the potential to provide timely triggers for public health measures in a tropical country where monitoring and assessing influenza activity has been challenging. Based on these findings, the Ministry of Health plans to raise general awareness regarding influenza among the medical community and the general public. Our findings have important implications for countries in the tropics/subtropics and in resource-limited settings, and categorized transmission intensity can be used to assess the severity of potential pandemic influenza as well as seasonal influenza.
Okubo, Mitsuru; Nishimura, Yasumasa; Nakamatsu, Kiyoshi; Okumura, Masahiko R.T.; Shibata, Toru; Kanamori, Shuichi; Hanaoka, Kouhei R.T.; Hosono, Makoto
2010-06-01
Purpose: Clinical applicability of a multiple-threshold method for [¹⁸F]fluoro-2-deoxyglucose (FDG) activity in radiation treatment planning was evaluated. Methods and Materials: A total of 32 patients who underwent positron emission and computed tomography (PET/CT) simulation were included; 18 patients had lung cancer, and 14 patients had pharyngeal cancer. For tumors of ≤2 cm, 2 to 5 cm, and >5 cm, thresholds were defined as 2.5 standardized uptake value (SUV), 35%, and 20% of the maximum FDG activity, respectively. The cervical and mediastinal lymph nodes with the shortest axial diameter of ≥10 mm were considered to be metastatic on CT (LNCT). The retropharyngeal lymph nodes with the shortest axial diameter of ≥5 mm on CT and MRI were also defined as metastatic. Lymph nodes showing maximum FDG activity greater than the adopted thresholds for radiation therapy planning were designated LNPET-RTP, and lymph nodes with a maximum FDG activity of ≥2.5 SUV were regarded as malignant and were designated LNPET-2.5 SUV. Results: The sizes of gross tumor volumes on PET (GTVPET) with the adopted thresholds in the axial plane were visually well fitted to those of GTV on CT (GTVCT). However, the volumes of GTVPET were larger than those of GTVCT, with significant differences (p < 0.0001) for lung cancer, due to respiratory motion. For lung cancer, the numbers of LNCT, LNPET-RTP, and LNPET-2.5 SUV were 29, 28, and 34, respectively. For pharyngeal cancer, the numbers of LNCT, LNPET-RTP, and LNPET-2.5 SUV were 14, 9, and 15, respectively. Conclusions: Our multiple thresholds were applicable for delineating the primary target on PET/CT simulation. However, these thresholds were inaccurate for depicting malignant lymph nodes.
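The size-dependent threshold rule can be written directly as a small function. The numeric values are taken from the abstract; the function name and interface are invented for illustration:

```python
def fdg_threshold(tumour_diameter_cm, suv_max):
    """Size-dependent FDG threshold for GTV delineation:
    <= 2 cm  -> absolute 2.5 SUV
    2-5 cm   -> 35% of the maximum FDG activity
    > 5 cm   -> 20% of the maximum FDG activity"""
    if tumour_diameter_cm <= 2.0:
        return 2.5
    if tumour_diameter_cm <= 5.0:
        return 0.35 * suv_max
    return 0.20 * suv_max

t_small = fdg_threshold(1.5, suv_max=8.0)   # absolute 2.5 SUV
t_mid = fdg_threshold(3.5, suv_max=12.0)    # 35% of SUVmax
t_large = fdg_threshold(6.0, suv_max=10.0)  # 20% of SUVmax
```

The rationale for relaxing the relative threshold with size is that partial-volume effects depress the apparent maximum activity in small lesions, so a fixed percentage would over-segment small tumours and under-segment large ones.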
Surface estimation methods with phased-arrays for adaptive ultrasonic imaging in complex components
NASA Astrophysics Data System (ADS)
Robert, S.; Calmon, P.; Calvo, M.; Le Jeune, L.; Iakovleva, E.
2015-03-01
Immersion ultrasonic testing of structures with complex geometries may be significantly improved by using phased arrays and specific adaptive algorithms that make it possible to image flaws under a complex and unknown interface. In this context, this paper presents a comparative study of the different Surface Estimation Methods (SEM) available in the CIVA software and used for adaptive imaging. These methods are based either on time-of-flight measurements or on image processing. We also introduce a generalized adaptive method in which flaws may be fully imaged with half-skip modes. In this method, both the surface and the back wall of a complex structure are estimated before the flaws are imaged.
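A time-of-flight surface estimate in its simplest form: each array element's echo time gives a stand-off distance z = c t / 2 below the element. Real SEMs handle oblique incidence and curved interfaces; this normal-incidence sketch with invented array geometry is only illustrative:

```python
import numpy as np

def surface_points(element_x, tof_us, c_water=1.48):
    """Time-of-flight surface estimation for a linear array in immersion,
    assuming each element receives its own normal-incidence echo.
    c_water in mm/us, time of flight in us -> (x, z) points in mm."""
    z = c_water * np.asarray(tof_us) / 2.0   # round trip -> one-way distance
    return np.column_stack([np.asarray(element_x), z])

# hypothetical 8-element array above a tilted flat interface
x = np.linspace(0.0, 14.0, 8)
true_z = 10.0 + 0.2 * x                  # 0.2 mm/mm tilt, 10 mm stand-off
tof = 2.0 * true_z / 1.48                # simulated echo times
pts = surface_points(x, tof)
slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)  # recovered tilt
```

Once such a point cloud is available, the estimated interface is used to refract the imaging delay laws, which is the "adaptive" step in adaptive immersion imaging.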
Lingel, Christian; Haist, Tobias; Osten, Wolfgang
2016-12-20
We propose an adaptive optical setup using a spatial light modulator (SLM), which is suitable for performing different phase retrieval methods with varying optical features and without mechanical movement. With this approach, it is possible to test many different phase retrieval methods and their parameters (optical and algorithmic) using one stable setup and without hardware adaptation. We show exemplary results for the well-known transport-of-intensity equation (TIE) method and a new iterative adaptive phase retrieval method, in which the object phase is canceled by an inverse phase written into part of the SLM. The measurement results are compared to white-light interferometric measurements.
NASA Astrophysics Data System (ADS)
Aver'ianov, N. E.; Baloshin, Iu. A.; Martiukhina, L. I.; Pavlishin, I. V.; Sud'Enkov, Iu. V.
1987-09-01
The amplitudes of the acoustic signals excited in metal reflectors by laser pulses are analyzed as a function of the energy density of target irradiation. It is shown that the slope of the resulting plot is related to the threshold of plasma generation near the specimen surface. Results are presented for the emission wavelengths of Nd-glass and CO2 lasers.
Nonlinear mode decomposition: A noise-robust, adaptive decomposition method
NASA Astrophysics Data System (ADS)
Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta
2015-09-01
The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.
Investigating Item Exposure Control Methods in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Ozturk, Nagihan Boztunc; Dogan, Nuri
2015-01-01
This study aims to investigate the effects of item exposure control methods on measurement precision and on test security under various item selection methods and item pool characteristics. In this study, the Randomesque (with item group sizes of 5 and 10), Sympson-Hetter, and Fade-Away methods were used as item exposure control methods. Moreover,…
An examination of an adapter method for measuring the vibration transmitted to the human arms.
Xu, Xueyan S; Dong, Ren G; Welcome, Daniel E; Warren, Christopher; McDowell, Thomas W
2015-09-01
The objective of this study is to evaluate an adapter method for measuring the vibration on the human arms. Four instrumented adapters with different weights were used to measure the vibration transmitted to the wrist, forearm, and upper arm of each subject. Each adapter was attached at each location on the subjects using an elastic cloth wrap. Two laser vibrometers were also used to measure the transmitted vibration at each location to evaluate the validity of the adapter method. The apparent mass at the palm of the hand along the forearm direction was also measured to enhance the evaluation. This study found that the adapter and laser-measured transmissibility spectra were comparable with some systematic differences. While increasing the adapter mass reduced the resonant frequency at the measurement location, increasing the tightness of the adapter attachment increased the resonant frequency. However, the use of lightweight (≤15 g) adapters under medium attachment tightness did not change the basic trends of the transmissibility spectrum. The resonant features observed in the transmissibility spectra were also correlated with those observed in the apparent mass spectra. Because the local coordinate systems of the adapters may be significantly misaligned relative to the global coordinates of the vibration test systems, large errors were observed for the adapter-measured transmissibility in some individual orthogonal directions. This study, however, also demonstrated that the misalignment issue can be resolved by either using the total vibration transmissibility or by measuring the misalignment angles to correct the errors. Therefore, the adapter method is acceptable for understanding the basic characteristics of the vibration transmission in the human arms, and the adapter-measured data are acceptable for approximately modeling the system. PMID:26834309
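The misalignment workaround via total transmissibility can be illustrated as follows; treating the three component transmissibilities as signed vector components of a single response is a simplification, and the misalignment angle is hypothetical:

```python
import numpy as np

def total_transmissibility(txyz):
    """Total (vector) transmissibility from three orthogonal components.
    The magnitude is invariant under a rotation of the sensor's local axes."""
    txyz = np.asarray(txyz, dtype=float)
    return np.sqrt((txyz ** 2).sum(axis=0))

theta = np.deg2rad(25.0)                      # hypothetical misalignment angle
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
# component transmissibilities at two frequencies, in the global frame
global_t = np.array([[1.20, 0.90],
                     [0.10, 0.20],
                     [0.05, 0.10]])
adapter_t = R @ global_t                      # what the misaligned adapter reports
tot_global = total_transmissibility(global_t)
tot_adapter = total_transmissibility(adapter_t)
```

The rotation changes the individual x and y components substantially, but the total transmissibility is identical in both frames, which is why it sidesteps the coordinate-misalignment errors noted above.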
Pinchi, Vilma; Pradella, Francesco; Vitale, Giulia; Rugo, Dario; Nieri, Michele; Norelli, Gian-Aristide
2016-01-01
The age threshold of 14 years is relevant in Italy as the minimum age for criminal responsibility. It is of utmost importance to evaluate the diagnostic accuracy of every odontological method for age evaluation considering the sensitivity, or the ability to estimate the true positive cases, and the specificity, or the ability to estimate the true negative cases. The research aims to compare the specificity and sensitivity of four commonly adopted methods of dental age estimation - Demirjian, Haavikko, Willems and Cameriere - in a sample of Italian children aged between 11 and 16 years, with an age threshold of 14 years, using receiver operating characteristic curves and the area under the curve (AUC). In addition, new decision criteria are developed to increase the accuracy of the methods. Among the four odontological methods for age estimation adopted in the research, the Cameriere method showed the highest AUC in both female and male cohorts. The Cameriere method shows a high degree of accuracy at the age threshold of 14 years. To adopt the Cameriere method to estimate the 14-year age threshold more accurately, however, it is suggested - according to the Youden index - that the decision criterion be set at the lower value of 12.928 for females and 13.258 years for males, obtaining a sensitivity of 85% and specificity of 88% in females, and a sensitivity of 77% and specificity of 92% in males. If a specificity level >90% is needed, the cut-off point should be set at 12.959 years (82% sensitivity) for females.
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2012-11-01
The Formosat-2 image is a type of high-spatial-resolution (2 m GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of an image using an Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistics are subsequently recorded as important metadata for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. In the pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel re-examination, and a cross-band filter method are applied in sequence to determine the cloud statistics. In the post-processing analysis, a box-counting fractal method is applied. In other words, the cloud statistics are first determined via the pre-processing analysis, and their correctness across the different spectral bands is then cross-examined qualitatively and quantitatively via the post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, we first conduct a series of experiments comparing the performance of clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding step, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
NASA Astrophysics Data System (ADS)
Wang, Changsheng; Chen, Jun; Xia, Cedric; Ren, Feng; Chen, Jieshi
2014-04-01
A new approach is presented in this paper to calculate the critical threshold value for fracture initiation. It is based on experimental data for forming limit curves and fracture forming limit curves. The deformation path of a material point that ultimately fractures is assumed to consist of two-stage proportional loading: biaxial loading from the beginning of deformation to the onset of incipient necking, followed by plane-strain deformation within the incipient neck until final fracture. The fracture threshold value is determined by analytical integration and validated by numerical simulation. Four phenomenological models for ductile fracture are selected in this study, i.e., the Brozzo, McClintock, Rice-Tracey, and Oyane models. The threshold value for each model is obtained by best-fitting the experimental data. The results are compared with each other and with test data. These fracture criteria are implemented in ABAQUS/EXPLICIT through the user subroutine VUMAT to simulate the blanking process of advanced high-strength steels. The simulated fracture surfaces are examined to determine the initiation of ductile fracture during the process, and compared with experimental results for DP780 sheet steel blanking. Comparisons between the FE results obtained with the different fracture models and the experimental results show good agreement on punched-edge quality. The study demonstrates that the proposed approach to calculating the threshold values of fracture models is efficient and reliable. The results also suggest that the McClintock and Oyane fracture models are more accurate than the Rice-Tracey and Brozzo models in predicting load-stroke curves, although the predicted blanking-edge quality shows no appreciable differences.
NASA Astrophysics Data System (ADS)
Yenn Chong, See; Lee, Jung-Ryul; Yik Park, Chan
2013-03-01
The conventional threshold-crossing technique generally has difficulty setting a common threshold level when extracting the respective times-of-flight (ToFs) and amplitudes from guided waves acquired at many different points by spatial scanning. We therefore propose a statistical threshold determination method based on noise-map generation to automatically process numerous guided waves with different propagation distances. First, a two-dimensional (2-D) noise map is generated using the one-dimensional (1-D) wavelet transform (WT) magnitudes at time zero of the acquired waves. Then, the probability density functions (PDFs) of the Gamma, Weibull, and exponential distributions are used to model the measured 2-D noise map. Graphical goodness-of-fit measures are used to find the best fit among the three theoretical distributions. The threshold level is then automatically determined by selecting the desired confidence level of noise rejection in the cumulative distribution function of the best-fit PDF. Based on this threshold level, the amplitudes and ToFs are extracted and mapped into a 2-D matrix array. The threshold level determined by the noise statistics may cross the noise signal after time zero; these crossings appear as salt-and-pepper noise in the ToF and amplitude maps but are finally removed by a 1-D median filter. The proposed method was verified on a thick stainless steel hollow cylinder, where guided waves were acquired over a 180 mm×126 mm area of the cylinder using a laser ultrasonic scanning system and an ultrasonic sensor. The proposed algorithm estimated the Gamma distribution as the best fit to the experimental data, and the statistical parameters of the Gamma distribution were used to determine a threshold level appropriate for most of the guided waves. The ToFs and amplitudes of the first arrival mode were mapped into a 2-D matrix array. Each map included 447 noisy points out of 90
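The threshold-selection step described in this abstract amounts to fitting a PDF to the noise-map magnitudes and reading off a quantile of its CDF. A minimal sketch of that step using SciPy's Gamma fit (the function name and the synthetic noise data are illustrative assumptions, not the authors' implementation):

```python
import numpy as np
from scipy import stats

def noise_threshold(noise_magnitudes, confidence=0.999):
    """Fit a Gamma PDF to noise-map magnitudes and return the level below
    which `confidence` of the noise is expected to fall."""
    # loc fixed at 0: WT magnitudes are non-negative
    shape, loc, scale = stats.gamma.fit(noise_magnitudes, floc=0.0)
    return stats.gamma.ppf(confidence, shape, loc=loc, scale=scale)

rng = np.random.default_rng(1)
noise = rng.gamma(shape=2.0, scale=0.5, size=20000)   # synthetic noise map
thr = noise_threshold(noise, confidence=0.999)
frac_rejected = np.mean(noise < thr)   # fraction of noise samples below thr
```

Signal samples whose WT magnitude exceeds `thr` would then be kept as genuine wave arrivals; the small fraction of noise crossings that survive corresponds to the salt-and-pepper points the paper removes with a median filter.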
A new and efficient method to obtain benzalkonium chloride adapted cells of Listeria monocytogenes.
Saá Ibusquiza, Paula; Herrera, Juan J R; Vázquez-Sánchez, Daniel; Parada, Adelaida; Cabo, Marta L
2012-10-01
A new method to obtain benzalkonium chloride (BAC)-adapted L. monocytogenes cells was developed. A factorial design was used to assess the effects of inoculum size and BAC concentration on the adaptation (measured in terms of lethal dose 50, LD50) of 6 strains of Listeria monocytogenes after only one exposure. The proposed method could be applied successfully to the L. monocytogenes strains with higher adaptive capacity to BAC. In those cases, a significant empirical equation was obtained showing a positive effect of inoculum size and a positive interaction between the effects of BAC and inoculum size on the level of adaptation achieved; a slight negative effect of the BAC concentration was also significant. The proposed method improves on the classical method based on successive stationary-phase cultures in sublethal BAC concentrations because it is less time-consuming and more effective. For the laboratory strain L. monocytogenes 5873, the new procedure increased BAC adaptation 3.69-fold in only 33 h, whereas the classical procedure reached a 2.61-fold increase only after 5 days. Moreover, with the new method, the maximum level of adaptation was determined for all the strains, surprisingly reaching almost the same BAC concentration (mg/l) for 5 out of 6 strains. Thus, a good reference for establishing the effective concentrations of biocides to ensure the maximum level of adaptation was also determined.
Analysis of modified SMI method for adaptive array weight control
NASA Technical Reports Server (NTRS)
Dilsavor, R. L.; Moses, R. L.
1989-01-01
An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
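The modification described above amounts to negative diagonal loading of the sample covariance: a fraction F of the noise power is subtracted from its diagonal before inversion, which deepens the nulls placed on weak interferers. A hedged NumPy sketch (array geometry, signal model, and function name are illustrative assumptions, not Gupta's formulation verbatim):

```python
import numpy as np

def modified_smi_weights(snapshots, steering, noise_power, F):
    """Modified SMI: subtract a fraction F of the noise power from the
    diagonal of the estimated covariance before solving for the weights.
    snapshots: (N_elements, N_snapshots) complex array of received data."""
    N, K = snapshots.shape
    R_hat = snapshots @ snapshots.conj().T / K        # sample covariance
    R_mod = R_hat - F * noise_power * np.eye(N)       # negative diagonal loading
    w = np.linalg.solve(R_mod, steering)              # MVDR-style weight solve
    return w / (steering.conj() @ w)                  # unit gain on desired signal

# Toy example: 4-element array, broadside desired signal, noise-only snapshots.
rng = np.random.default_rng(2)
N, K = 4, 500
s = np.ones(N, dtype=complex)                         # broadside steering vector
noise = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)
w = modified_smi_weights(noise, s, noise_power=1.0, F=0.5)
```

Note that F must be kept small enough that `R_mod` stays positive definite; the trade-off against the number of snapshots K is exactly the one the abstract analyzes.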
Parallel architectures for iterative methods on adaptive, block structured grids
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1983-01-01
A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extracting this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions; all local parallelism can be extracted by this approach. Second, though the constructed grids may lack a regular global structure, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.
Mixed Methods in Intervention Research: Theory to Adaptation
ERIC Educational Resources Information Center
Nastasi, Bonnie K.; Hitchcock, John; Sarkar, Sreeroopa; Burkholder, Gary; Varjas, Kristen; Jayasena, Asoka
2007-01-01
The purpose of this article is to demonstrate the application of mixed methods research designs to multiyear programmatic research and development projects whose goals include integration of cultural specificity when generating or translating evidence-based practices. The authors propose a set of five mixed methods designs related to different…
Adaptive Discontinuous Evolution Galerkin Method for Dry Atmospheric Flow
2013-04-02
Instead of a standard one-dimensional approximate Riemann solver, the flux integration within the discontinuous Galerkin method is realized by the evolution Galerkin approach. Comparisons with the standard one-dimensional approximate Riemann solver used for the flux integration demonstrate better stability, accuracy, and reliability of the adaptive discontinuous evolution Galerkin method for dry atmospheric convection.
Speckle reduction in optical coherence tomography by adaptive total variation method
NASA Astrophysics Data System (ADS)
Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun
2015-12-01
An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
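Total variation restoration minimizes a data-fidelity term plus a penalty on image gradients. The sketch below is a generic smoothed-TV gradient descent, not the adaptive speckle-statistics variant the paper proposes (which additionally measures the speckle mean and variance to set its parameters); all names and parameter values here are illustrative:

```python
import numpy as np

def tv_denoise(img, lam=0.2, n_iter=100, step=0.1, eps=1e-6):
    """Minimize ||u - img||^2 / 2 + lam * TV(u) by gradient descent on a
    smoothed total-variation term (generic sketch)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])      # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)             # smoothed gradient norm
        px, py = ux / mag, uy / mag
        # (negative) divergence of the normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)            # gradient step
    return u

# Synthetic test: a bright square corrupted by additive noise.
rng = np.random.default_rng(3)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + rng.normal(0, 0.3, clean.shape)
den = tv_denoise(noisy)
```

In an OCT setting the fidelity and regularization weights would be tuned from the measured speckle statistics rather than fixed, which is the adaptive element of the method described above.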
An adaptation of Krylov subspace methods to path following
Walker, H.F.
1996-12-31
Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the objective is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
Brodin, N. Patrik; Partanen, Ari; Asp, Patrik; Branch, Craig A.; Guha, Chandan; Tomé, Wolfgang A.
2016-01-01
Purpose: Tissue-mimicking thermal therapy phantoms that coagulate at specific temperatures are valuable tools for developing and evaluating treatment strategies related to thermal therapy. Here, the authors propose a simple and efficient method for determining the coagulation threshold temperature of transparent thermal therapy gel phantoms. Methods: The authors used a previously published gel phantom recipe with 2% (w/v) of bovine serum albumin as the temperature-sensitive protein. Using the programmable heating settings of a polymerase chain reaction (PCR) machine, the authors heated 50 μl gel samples to various temperatures for 3 min and then imaged them using the BioRad Gel Doc system to determine the coagulation temperature using an opacity quantification method. The estimated coagulation temperatures were then validated for gel phantoms prepared with different pH levels using high-intensity focused ultrasound (HIFU) heating and magnetic resonance imaging (MRI) thermometry methods on a clinical MR-HIFU system. Results: The PCR heating method produced consistent and reproducible coagulation of gel samples in precise correlation with the set incubation temperatures. The resulting coagulation threshold temperatures for gel phantoms of varying pH levels were found to be 44.1 ± 0.1, 53.4 ± 0.9, and 60.3 ± 0.9 °C for pH levels of 4.25, 4.50, and 4.75, respectively. This corresponded well with the coagulation threshold temperatures determined by MR-thermometry, with coagulation defined as a 95% decrease in T2 relaxation time, which were estimated at 53.6 ± 1.9 and 62.9 ± 2.4 °C for a pH of 4.50 and 4.75, respectively. Conclusions: The opacity quantification method provides a fast and reproducible estimate of the coagulation threshold temperature of transparent temperature-sensitive gel phantoms. The temperatures determined using this method were well within the range of temperatures estimated using MR-thermometry. Due to the specific heating capabilities
USEPA ambient air monitoring methods for volatile organic compounds (VOCs) using specially-prepared canisters and solid adsorbents are directly adaptable to monitoring for vapors in the indoor environment. The draft Method TO-15 Supplement, an extension of the USEPA Method TO-15,...
Adapting Western research methods to indigenous ways of knowing.
Simonds, Vanessa W; Christopher, Suzanne
2013-12-01
Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.
Automatic multirate methods for ordinary differential equations. [Adaptive time steps
Gear, C.W.
1980-01-01
A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.
Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control
NASA Technical Reports Server (NTRS)
Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)
2015-01-01
Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.
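A parameter-dependent Riccati approach re-solves an algebraic Riccati equation as the plant parameters vary. For a single fixed parameter value, the core computation can be sketched with SciPy's continuous-time ARE solver (a generic LQR-style example on a hypothetical double-integrator plant, not the patented adaptive law):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant x' = A x + B u; a parameter-dependent
# scheme would re-solve the ARE below as entries of A, B, Q, R change.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                 # state weighting
R = np.array([[1.0]])         # control weighting

P = solve_continuous_are(A, B, Q, R)     # solve A'P + PA - PBR^{-1}B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)          # state-feedback gain u = -K x
eigs = np.linalg.eigvals(A - B @ K)      # closed-loop eigenvalues
```

The closed-loop matrix `A - B K` is Hurwitz (all eigenvalues in the left half-plane), which is the boundedness property the abstract attributes to the controller; the adaptive augmentation in the patent operates on top of such a baseline design.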
ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve
Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk
2014-01-01
In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725
Adaptive error covariances estimation methods for ensemble Kalman filters
Zhen, Yicun; Harlim, John
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, for use in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes at different lags when the number of observations becomes large. When only products of innovation processes up to one lag are used, the computational cost is comparable to that of a recently proposed method by Berry and Sauer. However, our method is more flexible, since it allows the use of information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry-Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry-Sauer method on the L-96 example.
Wang, Ming-De; Zhu, Di; Bäckström, Torbjörn; Wahlström, Göran
2001-01-01
An anaesthesia threshold was used to investigate the pharmacodynamic and pharmacokinetic interactions between ethanol and pregnanolone in male rats. The criterion used to determine threshold doses of pregnanolone was the first burst suppression of 1 s in the EEG. Ethanol (0.5, 1.0, 1.5 and 2.0 g kg−1) was injected i.p. 15 min before pregnanolone infusion. Trunk blood, serum, cortex, cerebellum, hippocampus, striatum, brain stem, fat and muscle tissues obtained at criterion were used to determine ethanol (blood) and pregnanolone concentrations. Ethanol reduced threshold doses in a dose-dependent linear manner. A similar reduction of pregnanolone tissue concentrations was found only in brain stem and striatum. Deviations consisted of larger decreases in serum, cerebellum and hippocampus after 0.5 g kg−1 ethanol and in cerebellum, cortex and hippocampus after 2.0 g kg−1 of ethanol. Positive correlations between dose and concentration of pregnanolone were recorded in brain stem, hippocampus, cerebellum and cortex. A kinetic component influenced the concentration in cortex. There was a correlation between dose and serum concentration of pregnanolone only after ethanol. In muscle, 0.5 g kg−1 ethanol had no influence on pregnanolone concentration. The linear, additive pharmacodynamic interaction could involve the GABA ionophore. A pharmacokinetic interaction was found in cortex. The retained high uptake of pregnanolone in muscle (after 0.5 g kg−1) corresponded to losses in other tissues (including serum). The reduced uptake of pregnanolone in cerebellum, cortex and hippocampus (after 2.0 g kg−1) was not due to a corresponding change in serum concentration; it was probably due to a reduced blood flow. PMID:11724744
NASA Astrophysics Data System (ADS)
Fubelli, Giandomenico
2014-05-01
The assessment of landslide-triggering rainfall thresholds is a useful technique for predicting the occurrence of such phenomena and for providing public authorities with the critical rainfall values above which a state of alert should be considered. In this perspective, I investigated the urban area of San Vito Romano, a village of about 3500 inhabitants located in the Aniene River basin, about 50 km east of Rome, and heavily affected by landslides. This area extends over a calcarenitic-marly-arenaceous bedrock of Tortonian age, arranged in a monocline structure dipping 10-15 degrees eastward, parallel to the slope angle. Part of the village overlies a 500-m-wide translational rock slide that has damaged many buildings during the last decades. Boreholes drilled in the landslide area, some of them equipped with piezometers and inclinometers, have provided detailed information on the underlying bedrock (silico-clastic deposits of the Frosinone Formation of upper Tortonian age) and the covering near-surface materials. In particular, borehole data showed the existence of three different sliding surfaces located at different depths (6, 12 and 24 meters). In order to establish a relationship between landslide events and the triggering rainfall amounts, I carried out an inventory of all the slope movements that affected the study area in the last few decades on the basis of field surveys, stratigraphic analysis, archive research and piezometric/inclinometric data. I then calculated and mapped the cumulative rainfall amounts within 3 days, 10 days, 1 month and 3 months before each landslide occurrence. By comparing the landslide distribution with the rainfall maps, I calculated the rainfall thresholds for each event, also considering the depth of the related sliding surface. In this context, I observed that a 3-day pre-event precipitation of 100 mm mobilized the shallow material overlying the upper sliding surface only with at least 170 mm of rain in the
Adaptive entropy-constrained discontinuous Galerkin method for simulation of turbulent flows
NASA Astrophysics Data System (ADS)
Lv, Yu; Ihme, Matthias
2015-11-01
A robust and adaptive computational framework will be presented for high-fidelity simulations of turbulent flows based on the discontinuous Galerkin (DG) scheme. For this, an entropy-residual based adaptation indicator is proposed to enable adaptation in polynomial and physical space. The performance and generality of this entropy-residual indicator is evaluated through direct comparisons with classical indicators. In addition, a dynamic load balancing procedure is developed to improve computational efficiency. The adaptive framework is tested by considering a series of turbulent test cases, which include homogeneous isotropic turbulence, channel flow and flow-over-a-cylinder. The accuracy, performance and scalability are assessed, and the benefit of this adaptive high-order method is discussed. The funding from NSF CAREER award is greatly acknowledged.
A high-throughput multiplex method adapted for GMO detection.
Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique
2008-12-24
A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxon endogenous reference genes, from GMO constructs (screening, construct-specific, and event-specific targets), and from donor organisms. This assay avoids certain shortcomings of the multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity, and the results suggest that it is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
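The residual-tuning idea can be illustrated in its simplest form: for a well-tuned filter the innovation covariance is S = H P Hᵀ + R, so the sample covariance of the measurement residuals yields an estimate of the measurement noise R. A toy sketch of this relationship (a deliberately simplified illustration with a static 1-D state, not the WIRE flight implementation):

```python
import numpy as np

def estimate_measurement_noise(innovations, hpht):
    """Residual-based tuning sketch: innovation covariance S = H P H' + R,
    so R can be estimated as the sample residual variance minus H P H'."""
    s_hat = np.var(innovations)
    return max(s_hat - hpht, 0.0)

# 1-D example: the true state x = 1 is constant; measurements add noise
# with true variance R_true = 2.0.
rng = np.random.default_rng(4)
R_true, n = 2.0, 5000
z = 1.0 + rng.normal(0.0, np.sqrt(R_true), n)
# With a static, perfectly known state the steady-state covariance P ≈ 0,
# so the innovations reduce to the raw residuals z - x.
innov = z - 1.0
R_est = estimate_measurement_noise(innov, hpht=0.0)   # recovers ≈ R_true
```

In the full technique these residual statistics are accumulated sequentially alongside the filter, so the corrections to the tuning parameters are available in real time rather than requiring offline processing.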
The Pilates method and cardiorespiratory adaptation to training.
Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen
2016-01-01
Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis has been performed of its cardiorespiratory effects. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (135.4-124.2 beats/min), respiratory exchange ratio (1.1-0.9) and oxygen equivalent (30.7-27.6) values, among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities.
K-matrix method with B-splines: σ_nℓ, β_n and resonances in He photoionization below the N = 4 threshold
NASA Astrophysics Data System (ADS)
Argenti, Luca; Moccia, Roberto
2006-06-01
A B-spline based K-matrix method has been implemented to investigate the photoionization of atoms with simple valence shells. With a particular choice of knots, the method is able to reproduce all the essential features of the continuum wavefunctions, including up to 20-25 resonant multiplets below each ionization threshold. A detailed study is presented of the interval between the N = 3 and N = 4 thresholds, where the state labelled [031]+5 (parabolic quantum numbers [N1N2m]An), the first of a series converging to the higher N = 5 threshold, is known to fall. According to propensity rules this state cannot decay directly into the underlying continuum, but it interacts strongly with the [021]+n series and appreciably with the [030]-n series. As a result, all parameters of the two series are strongly modulated and, between 75.5 eV and 75.57 eV, the partial cross section and asymmetry parameter patterns change dramatically.
The Limits to Adaptation; A Systems Approach
The ability to adapt to climate change is delineated by capacity thresholds, beyond which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering...
Self-Adaptive Filon's Integration Method and Its Application to Computing Synthetic Seismograms
NASA Astrophysics Data System (ADS)
Zhang, Hai-Ming; Chen, Xiao-Fei
2001-03-01
Based on the principle of the self-adaptive Simpson integration method, and by incorporating the `fifth-order' Filon's integration algorithm [Bull. Seism. Soc. Am. 73(1983)913], we have proposed a simple and efficient numerical integration method, i.e., the self-adaptive Filon's integration method (SAFIM), for computing synthetic seismograms at large epicentral distances. With numerical examples, we have demonstrated that the SAFIM is not only accurate but also very efficient. This new integration method is expected to be very useful in seismology, as well as in computing similar oscillatory integrals in other branches of physics.
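The Filon-type building block of such schemes can be sketched with the classical Filon-Simpson weights for an oscillatory integrand (following Abramowitz & Stegun, Sec. 25.4); the adaptive panel-splitting logic of the SAFIM itself is omitted:

```python
import numpy as np

def filon_sin(f, a, b, k, n_panels):
    """Filon quadrature for the integral of f(x)*sin(k*x) over [a, b].

    Classical Filon-Simpson weights; n_panels must be even, and f must
    accept a NumPy array.  For very small k*h these weight formulas
    lose accuracy and a Taylor expansion should be used (omitted here).
    """
    assert n_panels % 2 == 0
    h = (b - a) / n_panels
    theta = k * h
    alpha = (theta**2 + theta*np.sin(theta)*np.cos(theta)
             - 2.0*np.sin(theta)**2) / theta**3
    beta = 2.0*(theta*(1.0 + np.cos(theta)**2)
                - 2.0*np.sin(theta)*np.cos(theta)) / theta**3
    gamma = 4.0*(np.sin(theta) - theta*np.cos(theta)) / theta**3
    x = a + h*np.arange(n_panels + 1)
    fx = f(x)
    s_even = np.sum(fx[::2]*np.sin(k*x[::2])) - 0.5*(
        fx[0]*np.sin(k*x[0]) + fx[-1]*np.sin(k*x[-1]))
    s_odd = np.sum(fx[1::2]*np.sin(k*x[1::2]))
    return h*(alpha*(fx[0]*np.cos(k*a) - fx[-1]*np.cos(k*b))
              + beta*s_even + gamma*s_odd)

# Example: the integral of x*sin(10x) over [0, pi] equals -pi/10.
approx = filon_sin(lambda x: x, 0.0, np.pi, 10.0, 20)
print(approx)  # close to -0.31416
```

Because f is interpolated by a parabola on each double panel, the rule handles the rapid oscillation of sin(kx) exactly, which is what makes Filon-type rules attractive for the wavenumber integrals in synthetic seismograms.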
NASA Astrophysics Data System (ADS)
Tanizawa, Ken; Hirose, Akira
Adaptive polarization mode dispersion (PMD) compensation is required for the speed-up and advancement of present optical communication systems. The combination of a tunable PMD compensator and an adaptive control method achieves adaptive PMD compensation. In this paper, we report an effective search-control algorithm for the feedback control of the PMD compensator. The algorithm is based on the hill-climbing method; however, unlike the conventional hill-climbing method, the step size changes randomly, following Gaussian probability density functions, to prevent the search from being trapped at a local maximum or on a flat region. We conducted transmission simulations at 160 Gb/s, and the results show that the proposed method provides compensator control closer to the optimum than the conventional hill-climbing method.
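The randomized-step hill climbing described above can be sketched on a toy one-dimensional objective. The objective, step distribution, and iteration count are illustrative, not the PMD-compensator feedback loop:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Hypothetical stand-in for the monitored signal quality: a smooth
    # global peak at x = 2 plus a small local bump at x = -1.
    return np.exp(-(x - 2.0)**2) + 0.3*np.exp(-40.0*(x + 1.0)**2)

def random_step_hill_climb(f, x0, sigma, n_iter):
    """Hill climbing whose step size is drawn from a Gaussian at each
    iteration, so the search can jump past local maxima and flat
    regions instead of stalling like a fixed-step climber."""
    x, best = x0, f(x0)
    for _ in range(n_iter):
        step = rng.normal(0.0, sigma)
        for cand in (x + step, x - step):   # try both directions
            val = f(cand)
            if val > best:                  # accept only improvements
                x, best = cand, val
    return x, best

# Start on the local bump; a fixed small step would stay trapped there.
x_opt, f_opt = random_step_hill_climb(objective, -1.0, 0.8, 2000)
print(x_opt, f_opt)
```

Occasional large Gaussian steps carry the search out of the local bump's basin, after which small steps refine the global peak.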
A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures
Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George
2012-01-01
We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.
Adaptive bit truncation and compensation method for EZW image coding
NASA Astrophysics Data System (ADS)
Dai, Sheng-Kui; Zhu, Guangxi; Wang, Yao
2003-09-01
The embedded zero-tree wavelet algorithm (EZW) is widely adopted to compress wavelet coefficients of images, with the property that the bit stream can be truncated at any point. The lower bit planes of the wavelet coefficients are verified to be less important than the higher bit planes, so they can be truncated and left unencoded. Based on experiments, a generalized function is deduced in this paper that provides an approximate guide for the EZW encoder to decide intelligently how many low bit planes to truncate. In the EZW decoder, a simple method is presented to compensate for the truncated wavelet coefficients; it markedly enhances the quality of the reconstructed image at almost no additional cost.
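The truncation-and-compensation idea can be sketched directly on integer coefficient magnitudes. A real EZW codec drops the planes from the embedded bit stream instead, and the paper's generalized truncation-depth function is not reproduced here:

```python
import numpy as np

def truncate_bit_planes(coeffs, n_drop):
    """Zero out the n_drop least-significant bit planes of integer
    wavelet-coefficient magnitudes (encoder-side truncation sketch)."""
    step = 1 << n_drop
    return np.sign(coeffs) * ((np.abs(coeffs) // step) * step)

def compensate(coeffs, n_drop):
    """Decoder-side compensation: add half of the truncated range to
    every surviving nonzero magnitude, which on average halves the
    truncation error."""
    half = 1 << (n_drop - 1)
    out = coeffs.copy()
    out[out > 0] += half
    out[out < 0] -= half
    return out

c = np.array([37, -85, 6, 0, 122, -3])
t = truncate_bit_planes(c, 3)   # magnitudes rounded down to multiples of 8
r = compensate(t, 3)
print(t.tolist())  # [32, -80, 0, 0, 120, 0]
print(r.tolist())  # [36, -84, 0, 0, 124, 0]
```

After compensation the surviving coefficients sit at the center of their quantization bin, so the expected reconstruction error drops without any extra encoded bits.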
An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
1999-01-01
An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.
Definition of temperature thresholds: the example of the French heat wave warning system.
Pascal, Mathilde; Wagner, Vérène; Le Tertre, Alain; Laaidi, Karine; Honoré, Cyrille; Bénichou, Françoise; Beaudeau, Pascal
2013-01-01
Heat-related deaths should be somewhat preventable. In France, some prevention measures are activated when minimum and maximum temperatures averaged over three days reach city-specific thresholds. The current thresholds were computed from a descriptive analysis of past heat waves and from local expert judgement. We tested whether a different method would confirm these thresholds. The study was set in the six cities of Paris, Lyon, Marseille, Nantes, Strasbourg and Limoges between 1973 and 2003. For each city, we estimated the excess mortality associated with different temperature thresholds, using a generalised additive model controlling for long-term trends, seasons and days of the week. These models were used to compute the mortality predicted by different percentiles of temperature. The thresholds were chosen as the percentiles associated with a significant excess mortality. In all cities, there was a good correlation between the current thresholds and the thresholds derived from the models, with 0°C to 3°C differences for averaged maximum temperatures. Both sets of thresholds were able to anticipate the main periods of excess mortality during the summers of 1973 to 2003. A simple method relying on descriptive analysis and expert judgement is therefore sufficient to define protective temperature thresholds and to prevent heat wave mortality. As temperatures increase with climate change and adaptation proceeds, more research is required to understand if and when the thresholds should be modified.
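The alerting rule itself (three-day moving averages of minimum and maximum temperature both reaching city-specific thresholds) can be sketched as follows. The temperature series and threshold values are synthetic; in practice the thresholds come from the mortality models, not from this sketch:

```python
import numpy as np

rng = np.random.default_rng(7)

def heat_alert_days(tmin, tmax, thr_min, thr_max, window=3):
    """Return indices of days whose `window`-day moving averages of
    minimum AND maximum temperature both reach their thresholds (the
    French warning-system rule described above)."""
    kernel = np.ones(window) / window
    m_min = np.convolve(tmin, kernel, mode="valid")
    m_max = np.convolve(tmax, kernel, mode="valid")
    # Attribute each window to its last day.
    return np.flatnonzero((m_min >= thr_min) & (m_max >= thr_max)) + window - 1

# Synthetic summer: a 5-day hot spell starting on day 40.
tmin = rng.normal(16.0, 2.0, 90)
tmax = rng.normal(28.0, 3.0, 90)
tmin[40:45] += 8.0
tmax[40:45] += 9.0

alerts = heat_alert_days(tmin, tmax, thr_min=21.0, thr_max=34.0)
print(alerts)
```

Requiring both averaged minimum and maximum temperatures to exceed their thresholds suppresses single hot afternoons and flags only sustained day-and-night heat.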
Impedance adaptation methods of the piezoelectric energy harvesting
NASA Astrophysics Data System (ADS)
Kim, Hyeoungwoo
In this study, the important issues of energy recovery were addressed and a comprehensive investigation was performed on harvesting electrical power from an ambient mechanical vibration source. Also discussed are the impedance matching methods used to increase the efficiency of energy transfer from the environment to the application. Initially, the mechanical impedance matching method was investigated to increase the mechanical energy transferred to the transducer from the environment. This was done by reducing the mechanical impedance, such as the damping factor and the energy reflection ratio. The vibration source and the transducer were modeled as a two-degree-of-freedom dynamic system with mass, spring constant, and damper. The transmissibility, employed to show how much mechanical energy was transferred in this system, was affected by the damping ratio and the stiffness of the elastic materials. The mechanical impedance of the system was described as an electrical system, using the analogy between the two, in order to simplify the total mechanical impedance. Secondly, the transduction rate from mechanical to electrical energy was improved by using a PZT material, which has a high figure of merit and a high electromechanical coupling factor for electrical power generation, and a piezoelectric transducer with a high transduction rate was designed and fabricated. A high-g material (g33 = 40 [10-3 Vm/N]) was developed to improve the figure of merit of the PZT ceramics. The cymbal composite transducer was found to be a promising structure for piezoelectric energy harvesting under high cyclic force (10-200 Hz), because its effective strain coefficient is almost 40 times higher than that of PZT ceramics. The endcap of the cymbal also enhances the endurance of the ceramic under ac load, along with stress amplification. In addition, a macro fiber composite (MFC) was employed as a strain component because of its flexibility and the high electromechanical coupling
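The role of damping and stiffness in the transmissibility can be illustrated with the classical single-degree-of-freedom formula. The thesis models a two-degree-of-freedom system, so this is only a simplified textbook proxy:

```python
import numpy as np

def transmissibility(r, zeta):
    """Classical SDOF transmissibility |X/Y| as a function of the
    frequency ratio r = omega/omega_n and damping ratio zeta."""
    num = 1.0 + (2.0*zeta*r)**2
    den = (1.0 - r**2)**2 + (2.0*zeta*r)**2
    return np.sqrt(num / den)

r = np.linspace(0.0, 3.0, 301)
for zeta in (0.05, 0.2, 0.5):
    T = transmissibility(r, zeta)
    # Higher damping lowers the resonance peak near r = 1.
    print(zeta, T.max())
```

Two well-known features appear: the resonance peak near r = 1 shrinks as the damping ratio grows, and all curves cross T = 1 at r = sqrt(2), beyond which softer mounting (lower stiffness) isolates rather than amplifies.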
Laska, Matthias; Grimm, Nina
2003-02-01
Recently, Olsson and Cain (2000, Chem. Senses, 25: 493) introduced a psychometric method which, for the first time, allows the standardized determination of odor quality discrimination (OQD) thresholds. The method defines a threshold value as the average fraction by which one odorant has to be substituted with another to reach a criterion level of discrimination. This measure of discrimination is reciprocal in the sense that it results from two separate psychometric functions involving two different standards but the same comparison stimuli. Using the same odor stimuli as Olsson and Cain, with six human subjects but a slightly different experimental design, we were able to replicate their finding that the proportion of correct discriminations changes monotonically with the proportion of adulterant in mixtures of eugenol and citral. As the SURE (SUbstitution-REciprocity) method is based on discriminative responses, it should also be applicable to nonhuman species that can be trained to give unequivocal discriminative responses at the behavioral level. Using an olfactory conditioning paradigm, we therefore trained four squirrel monkeys to discriminate between exactly the same pairs of odor stimuli as our human subjects. We found the psychometric functions of the monkeys to be similar to those of the human subjects. Our results show that the SURE method can successfully be employed with nonhuman primates and thus offers a new approach to studying the odor spaces of nonhuman species. Future studies should elucidate whether the SURE method allows direct comparisons of OQD thresholds and of the similarities and differences between the odor quality perception of different species.
A self-adaptive-grid method with application to airfoil flow
NASA Technical Reports Server (NTRS)
Nakahashi, K.; Deiwert, G. S.
1985-01-01
A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
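The spring-analogy redistribution can be sketched in one dimension. The weight function, stiffness clipping, and damped-Jacobi iteration below are illustrative choices, not the paper's variational formulation:

```python
import numpy as np

def redistribute(x, xw, w, n_sweeps=2000, k_min=0.2, k_max=5.0):
    """1D spring-analogy grid redistribution (a hedged sketch).  Each
    interval is a spring whose stiffness follows a solution-based
    weight w, so iterating toward force balance clusters points where
    w is large.  Clipping the stiffness to [k_min, k_max] bounds the
    spacing ratio, mimicking user-specified min/max spacing controls."""
    x = x.copy()
    for _ in range(n_sweeps):
        mid = 0.5*(x[:-1] + x[1:])
        k = np.clip(np.interp(mid, xw, w), k_min, k_max)
        # Force balance of the two springs on each interior node,
        # applied with under-relaxation for robustness.
        target = (k[:-1]*x[:-2] + k[1:]*x[2:]) / (k[:-1] + k[1:])
        x[1:-1] = 0.5*x[1:-1] + 0.5*target
    return x

# Hypothetical weight peaking at a sharp flow feature near x = 0.5.
xw = np.linspace(0.0, 1.0, 401)
w = 1.0 + 20.0*np.exp(-200.0*(xw - 0.5)**2)

x1 = redistribute(np.linspace(0.0, 1.0, 41), xw, w)
h = np.diff(x1)
print(h.min(), h.max())  # spacing shrinks near x = 0.5, grows elsewhere
```

At equilibrium each spring carries the same force, so spacing is inversely proportional to the (clipped) stiffness, which is the equidistribution property the self-adaptive method exploits.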
Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D
2012-10-01
Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting, and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.
Threshold Graph Limits and Random Threshold Graphs
Diaconis, Persi; Holmes, Susan; Janson, Svante
2010-01-01
We study the limit theory of large threshold graphs and apply this to a variety of models for random threshold graphs. The results give a nice set of examples for the emerging theory of graph limits. PMID:20811581
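Random threshold graphs are easy to sample and verify. The sketch below uses the standard weight model (an edge whenever two vertex weights sum past a threshold), which is one of the model families the paper studies, together with the isolated-or-dominating elimination characterization of threshold graphs:

```python
import itertools
import random

def random_threshold_graph(n, theta=1.0, seed=0):
    """Sample a random threshold graph: each vertex gets an i.i.d.
    uniform weight, and u, v are joined iff w_u + w_v > theta."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n)]
    edges = {(i, j) for i, j in itertools.combinations(range(n), 2)
             if w[i] + w[j] > theta}
    return w, edges

def is_threshold_graph(n, edges):
    """Check the defining elimination property: a graph is threshold
    iff it can be dismantled by repeatedly deleting a vertex that is
    currently isolated or dominating."""
    verts, E = set(range(n)), set(edges)
    while verts:
        deg = {v: 0 for v in verts}
        for a, b in E:
            deg[a] += 1
            deg[b] += 1
        pick = next((v for v in verts
                     if deg[v] == 0 or deg[v] == len(verts) - 1), None)
        if pick is None:
            return False        # stuck: not a threshold graph
        verts.remove(pick)
        E = {e for e in E if pick not in e}
    return True

w, E = random_threshold_graph(12, seed=42)
print(len(E), is_threshold_graph(12, E))  # the weight model is always threshold
```

The weight model always produces a threshold graph: either the minimum-weight vertex is isolated or the maximum-weight vertex is dominating, so the elimination never gets stuck. By contrast, the 4-vertex path fails the check, as it must.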
Anderson, R W; Pember, R B; Elliott, N S
2001-10-22
A new method combining staggered-grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. By focusing computational resources where they are required through dynamic adaption, this method facilitates the solution of problems at and beyond the limits of what traditional ALE methods can solve. Many of the core issues in developing the combined ALE-AMR method hinge upon the integration of AMR with a staggered-grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.
ERIC Educational Resources Information Center
Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy
2015-01-01
This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…
An adaptive, formally second order accurate version of the immersed boundary method
NASA Astrophysics Data System (ADS)
Griffith, Boyce E.; Hornung, Richard D.; McQueen, David M.; Peskin, Charles S.
2007-04-01
Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509-534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75-105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves
Adaptive methods: when and how should they be used in clinical trials?
Porcher, Raphaël; Lecocq, Brigitte; Vray, Muriel
2011-01-01
Adaptive clinical trial designs are defined as designs that use data accumulated during the trial to modify certain aspects of it without compromising the trial's validity and integrity. Compared to more traditional trials, adaptive designs in theory allow the same information to be generated in a more efficient manner. The advantages and limits of this type of design, together with the weight of the constraints (in particular logistic ones) that their use implies, differ depending on whether the trial is exploratory or confirmatory with a view to registration. One of the key elements ensuring trial integrity is the involvement of an independent committee to determine adaptations of the experimental design during the study. Adaptive methods for clinical trials are appealing and may be accepted by the relevant authorities. However, the constraints that they impose must be identified well in advance.
An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations
NASA Astrophysics Data System (ADS)
Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.
2016-08-01
In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.
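The gradient-based selection of candidate elements can be sketched as follows. The 1D profile, the fixed fractions, and the flagging rule are illustrative stand-ins, not the paper's criterion:

```python
import numpy as np

def flag_elements(grad_mag, refine_frac=0.1, coarsen_frac=0.3):
    """Flag elements for refinement/coarsening from the magnitude of a
    local solution gradient: the fraction with the largest gradients is
    refined and the fraction with the smallest is coarsened."""
    n = len(grad_mag)
    order = np.argsort(grad_mag)
    refine = np.zeros(n, bool)
    coarsen = np.zeros(n, bool)
    refine[order[int(np.ceil((1.0 - refine_frac)*n)):]] = True
    coarsen[order[:int(coarsen_frac*n)]] = True
    return refine, coarsen

# Hypothetical 1D density profile with a thin liquid-vapour interface.
x = np.linspace(0.0, 1.0, 100)
rho = np.tanh((x - 0.5) / 0.02)
grad = np.abs(np.gradient(rho, x))

refine, coarsen = flag_elements(grad)
print(refine.sum(), coarsen.sum())  # 10 refined, 30 coarsened
```

Because the density gradient concentrates in the thin interface, the refined elements cluster there while the smooth bulk regions are coarsened, which is the mechanism that keeps the adaptive LDG computation cheap.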
Adaptive remeshing method in 2D based on refinement and coarsening techniques
NASA Astrophysics Data System (ADS)
Giraud-Moreau, L.; Borouchaki, H.; Cherouat, A.
2007-04-01
The analysis of mechanical structures using the Finite Element Method, in the framework of large elastoplastic strains, needs frequent remeshing of the deformed domain during computation. Remeshing is necessary for two main reasons, the large geometric distortion of finite elements and the adaptation of the mesh size to the physical behavior of the solution. This paper presents an adaptive remeshing method to remesh a mechanical structure in two dimensions subjected to large elastoplastic deformations with damage. The proposed remeshing technique includes adaptive refinement and coarsening procedures, based on geometrical and physical criteria. The proposed method has been integrated in a computational environment using the ABAQUS solver. Numerical examples show the efficiency of the proposed approach.
NASA Astrophysics Data System (ADS)
Moore, F.; Burke, M.
2015-12-01
A wide range of studies using a variety of methods strongly suggest that climate change will have a negative impact on agricultural production in many areas. Farmers, though, should be able to learn about a changing climate and to adjust what they grow and how they grow it in order to reduce these negative impacts. However, it remains unclear how effective these private (autonomous) adaptations will be, or how quickly they will be adopted. Constraining the uncertainty on this adaptation is important for understanding the impacts of climate change on agriculture. Here we review a number of empirical methods that have been proposed for assessing the rate and effectiveness of private adaptation to climate change. We compare these methods using data on agricultural yields in the United States and western Europe.
Fast multipole and space adaptive multiresolution methods for the solution of the Poisson equation
NASA Astrophysics Data System (ADS)
Bilek, Petr; Duarte, Max; Nečas, David; Bourdon, Anne; Bonaventura, Zdeněk
2016-09-01
This work focuses on the conjunction of the fast multipole method (FMM) with the space-adaptive multiresolution (MR) technique for grid adaptation. Since both methods provide a priori error estimates, both achieve O(N) computational complexity, and both operate on the same hierarchical space division, their conjunction is a natural choice when designing a numerically efficient and robust strategy for time-dependent problems. Special attention is given to the use of these methods in the simulation of streamer discharges in air. We have designed an FMM Poisson solver on a multiresolution-adapted grid in 2D. The accuracy and computational complexity of the solver have been verified for a set of manufactured solutions. We confirmed that the developed solver attains the desired accuracy, and that this accuracy is controlled only by the number of terms in the multipole expansion in combination with the multiresolution accuracy tolerance. The implementation has linear computational complexity O(N).
The adaptive problems of female teenage refugees and their behavioral adjustment methods for coping
Mhaidat, Fatin
2016-01-01
This study aimed at identifying the levels of adaptive problems among teenage female refugees in government schools and explored the behavioral methods used to cope with these problems. The sample was composed of 220 Syrian female students (seventh to first secondary grades) enrolled at government schools within the Zarqa Directorate who came to Jordan due to the war in their home country. The study used a scale of adaptive problems consisting of four dimensions (depression, anger and hostility, low self-esteem, and feeling insecure) and a questionnaire on behavioral adjustment methods for dealing with the problems of asylum. The results indicated that the Syrian teenage female refugees suffer a moderate degree of adaptation problems, and that they used positive adjustment methods more often than negative ones. PMID:27175098
Lei, Xusheng; Li, Jingjing
2012-01-01
This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. Focusing on the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high frequency noises in the sensor output. Furthermore, to improve altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is proved by static tests, hovering flight and autonomous landing flight tests. PMID:23201993
A comparison of locally adaptive multigrid methods: LDC, FAC and FIC
NASA Technical Reports Server (NTRS)
Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul
1993-01-01
This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.
Software for the parallel adaptive solution of conservation laws by discontinuous Galerkin methods.
Flaherty, J. E.; Loy, R. M.; Shephard, M. S.; Teresco, J. D.
1999-08-17
The authors develop software tools for the solution of conservation laws using parallel adaptive discontinuous Galerkin methods. In particular, the Rensselaer Partition Model (RPM) provides parallel mesh structures within an adaptive framework to solve the Euler equations of compressible flow by a discontinuous Galerkin method (LOCO). Results are presented for a Rayleigh-Taylor flow instability for computations performed on 128 processors of an IBM SP computer. In addition to managing the distributed data and maintaining a load balance, RPM provides information about the parallel environment that can be used to tailor partitions to a specific computational environment.
The block adaptive multigrid method applied to the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Pantelelis, Nikos
1993-01-01
In the present study, a fast and robust scheme for solving complex nonlinear systems of equations is presented. The Block Adaptive Multigrid (BAM) solution method offers multigrid acceleration and adaptive grid refinement based on a prediction of the solution error. The proposed method was used with an implicit upwind Euler solver for the solution of complex transonic flows around airfoils. Very fast results were obtained (an 18-fold acceleration of the solution) using one fourth of the volumes of a global grid, with the same solution accuracy, for two test cases.
Adaptive-Anisotropic Wavelet Collocation Method on general curvilinear coordinate systems
NASA Astrophysics Data System (ADS)
Brown-Dymkoski, Eric; Vasilyev, Oleg V.
2017-03-01
A new general framework for an Adaptive-Anisotropic Wavelet Collocation Method (A-AWCM) for the solution of partial differential equations is developed. This proposed framework addresses two major shortcomings of existing wavelet-based adaptive numerical methodologies, namely the reliance on a rectangular domain and the "curse of anisotropy", i.e. drastic over-resolution of sheet- and filament-like features arising from the inability of the wavelet refinement mechanism to distinguish highly correlated directional information in the solution. The A-AWCM addresses both of these challenges by incorporating coordinate transforms into the Adaptive Wavelet Collocation Method for the solution of PDEs. The resulting integrated framework leverages the advantages of both the curvilinear anisotropic meshes and wavelet-based adaptive refinement in a complementary fashion, resulting in greatly reduced cost of resolution for anisotropic features. The proposed Adaptive-Anisotropic Wavelet Collocation Method retains the a priori error control of the solution and fully automated mesh refinement, while offering new abilities through the flexible mesh geometry, including body-fitting. The new A-AWCM is demonstrated for a variety of cases, including parabolic diffusion, acoustic scattering, and unsteady external flow.
A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES
Druckmueller, M.
2013-08-15
A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.
FLIP: A method for adaptively zoned, particle-in-cell calculations of fluid in two dimensions
Brackbill, J.U.; Ruppel, H.M.
1986-08-01
A method is presented for calculating fluid flow in two dimensions using a full particle-in-cell representation on an adaptively zoned grid. The method has many interesting properties, among them an almost total absence of numerical dissipation and the ability to represent large variations in the data. The method is described using a standard formalism and its properties are illustrated by supersonic flow over a step and the interaction of a shock with a thin foil.
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
Automatic off-body overset adaptive Cartesian mesh method based on an octree approach
NASA Astrophysics Data System (ADS)
Péron, Stéphanie; Benoit, Christophe
2013-01-01
This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, with each octree leaf node defining a structured Cartesian block. This makes it possible to account for the large discrepancies in resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first generates Adaptive Mesh Refinement (AMR) type grid systems, and the second generates an abutting or minimally overlapping Cartesian grid set. We also introduce an algorithm to control the number of points at each adaptation, which automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing-tip vortex computation assesses the capability of the method to capture flow features accurately.
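The leaves-as-blocks idea can be sketched in 2D with a quadtree; the near-body refinement indicator below is a hypothetical test, not the paper's criterion:

```python
def build_quadtree(x0, y0, size, level, max_level, needs_refine):
    """Recursively subdivide a square cell; each leaf that survives
    becomes one structured Cartesian block (the octree-to-grid idea,
    shown here in 2D)."""
    if level < max_level and needs_refine(x0, y0, size):
        half = size / 2.0
        leaves = []
        for dx in (0.0, half):
            for dy in (0.0, half):
                leaves += build_quadtree(x0 + dx, y0 + dy, half,
                                         level + 1, max_level, needs_refine)
        return leaves
    return [(x0, y0, size)]

# Hypothetical indicator: refine cells whose corner lies near a "body"
# at the origin, relative to the cell size.
def near_body(x0, y0, size):
    return (x0**2 + y0**2) ** 0.5 < 2.0 * size

leaves = build_quadtree(0.0, 0.0, 1.0, 0, 5, near_body)
print(len(leaves))  # each tuple (x0, y0, size) would seed one block
```

The leaf sizes grade smoothly away from the body, and because each leaf maps to a structured block, a per-leaf resolution can be chosen independently, which is what accommodates bodies of very different scales.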
A GPU-accelerated adaptive discontinuous Galerkin method for level set equation
NASA Astrophysics Data System (ADS)
Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.
2016-01-01
This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of the two- and three-dimensional level set (LS) equations on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. The small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass-conservative numerical scheme that preserves the simplicity of the LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.
Method study on fuzzy-PID adaptive control of electric-hydraulic hitch system
NASA Astrophysics Data System (ADS)
Li, Mingsheng; Wang, Liubu; Liu, Jian; Ye, Jin
2017-03-01
In this paper, a fuzzy-PID adaptive control method is applied to the control of a tractor electric-hydraulic hitch system. According to the characteristics of the system, a fuzzy-PID adaptive controller is designed and the electric-hydraulic hitch system model is established. Traction control and position control performance simulations are carried out and compared with the common PID control method. A field test rig was set up to test the electric-hydraulic hitch system. The test results showed that, after the fuzzy-PID adaptive control is adopted, when the tillage depth steps from 0.1 m to 0.3 m, the system transition process time is 4 s without overshoot, and when the tractive force steps from 3000 N to 7000 N, the system transition process time is 5 s with an overshoot of 25%.
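The fuzzy-PID idea — PID gains rescheduled online from fuzzy memberships of the error — can be illustrated with a minimal sketch. The two-rule schedule, the first-order plant, and every constant below are invented for illustration and are not the paper's controller:

```python
# Minimal fuzzy-PID sketch: gains are blended by the membership of 'error is big'.

def fuzzy_gains(err, base=(2.0, 0.5, 0.1)):
    """Large error -> boost P, cut I (limits windup); small error -> the reverse."""
    kp0, ki0, kd0 = base
    big = min(abs(err) / 0.2, 1.0)      # membership of 'error is big' in [0, 1]
    small = 1.0 - big
    kp = kp0 * (1.0 + 0.5 * big)
    ki = ki0 * (0.5 * big + 1.0 * small)
    kd = kd0
    return kp, ki, kd

def simulate(setpoint=0.3, steps=2000, dt=0.01):
    """First-order plant dy/dt = (u - y) / tau under the fuzzy-PID loop."""
    y, integ, prev_err, tau = 0.0, 0.0, setpoint, 0.5
    for _ in range(steps):
        err = setpoint - y
        kp, ki, kd = fuzzy_gains(err)
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += dt * (u - y) / tau
    return y

final = simulate()   # settles near the 0.3 setpoint
```

Real fuzzy controllers use full rule tables over error and error rate; the point here is only the gain-scheduling structure.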
Three-dimensional self-adaptive grid method for complex flows
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Deiwert, George S.
1988-01-01
A self-adaptive grid procedure for efficient computation of three-dimensional complex flow fields is described. The method is based on variational principles to minimize the energy of a spring system analogy which redistributes the grid points. Grid control parameters are determined by specifying maximum and minimum grid spacing. Multidirectional adaptation is achieved by splitting the procedure into a sequence of successive applications of a unidirectional adaptation. One-sided, two-directional constraints for orthogonality and smoothness are used to enhance the efficiency of the method. Feasibility of the scheme is demonstrated by application to a multinozzle, afterbody, plume flow field. Application of the algorithm for initial grid generation is illustrated by constructing a three-dimensional grid about a bump-like geometry.
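A 1D sketch conveys the spring-analogy redistribution idea: each grid interval is a spring whose stiffness follows a weight function, and relaxing the system clusters points where the weight (e.g. a solution gradient measure) is large. The Jacobi relaxation and hat-shaped weight below are illustrative assumptions, not the paper's variational formulation:

```python
# 1D spring-analogy grid adaptation: stiffer springs equilibrate shorter, so
# points cluster where the weight function is large.

def adapt_grid(x, weight, n_sweeps=200):
    """Jacobi relaxation of the equilibrium  k_l (x_i - x_{i-1}) = k_r (x_{i+1} - x_i)."""
    x = list(x)
    for _ in range(n_sweeps):
        new = x[:]
        for i in range(1, len(x) - 1):
            kl = weight(0.5 * (x[i - 1] + x[i]))   # stiffness of left spring
            kr = weight(0.5 * (x[i] + x[i + 1]))   # stiffness of right spring
            new[i] = (kl * x[i - 1] + kr * x[i + 1]) / (kl + kr)
        x = new
    return x

# Cluster points near x = 0.5, where the weight peaks (hat of width 0.1).
w = lambda s: 1.0 + 20.0 * max(0.0, 1.0 - abs(s - 0.5) / 0.1)
grid = adapt_grid([i / 10 for i in range(11)], w)
```

Maximum and minimum spacing control, as in the abstract, would enter through bounds on the weight; multidirectional adaptation applies the same 1D pass sweep-by-sweep.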
Method and system for training dynamic nonlinear adaptive filters which have embedded memory
NASA Technical Reports Server (NTRS)
Rabinowitz, Matthew (Inventor)
2002-01-01
Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
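A toy damped Gauss-Newton fit of a Wiener-type filter (linear stage followed by a static tanh nonlinearity) illustrates why near-Newton updates converge quickly for such structures. The model, data, damping constant, and iteration count are all invented for this sketch; it is not the patented adaptive algorithm:

```python
# Damped Gauss-Newton fit of  y = tanh(w . u)  from noise-free input/output data.

import numpy as np

rng = np.random.default_rng(0)
n_taps, n_samp = 4, 400
w_true = np.array([0.9, -0.5, 0.3, 0.1])

U = rng.standard_normal((n_samp, n_taps))      # tap-delay input vectors
d = np.tanh(U @ w_true)                        # desired output

w = np.zeros(n_taps)
for _ in range(20):
    y = np.tanh(U @ w)
    e = d - y
    J = (1.0 - y ** 2)[:, None] * U            # Jacobian of the model output wrt w
    H = J.T @ J + 1e-6 * np.eye(n_taps)        # damped Gauss-Newton normal matrix
    w = w + np.linalg.solve(H, J.T @ e)

err = np.linalg.norm(w - w_true)               # essentially zero after a few steps
```

A steepest-descent (LMS-style) update on the same problem typically needs orders of magnitude more iterations, which is the contrast the abstract draws.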
Statistical mechanics analysis of thresholding 1-bit compressed sensing
NASA Astrophysics Data System (ADS)
Xu, Yingying; Kabashima, Yoshiyuki
2016-08-01
The one-bit compressed sensing framework aims to reconstruct a sparse signal by using only the sign information of its linear measurements. To compensate for the loss of scale information, past studies in the area have proposed recovering the signal by imposing an additional constraint on the l2-norm of the signal. Recently, an alternative strategy that captures scale information by introducing a threshold parameter to the quantization process was advanced. In this paper, we analyze the typical behavior of thresholding 1-bit compressed sensing utilizing the replica method of statistical mechanics, so as to gain insight into properly setting the threshold value. Our result shows that fixing the threshold at a constant value yields better performance than varying it randomly when the constant is optimally tuned, statistically. Unfortunately, the optimal threshold value depends on the statistical properties of the target signal, which may not be known in advance. In order to handle this inconvenience, we develop a heuristic that adaptively tunes the threshold parameter based on the frequency of positive (or negative) values in the binary outputs. Numerical experiments show that the heuristic exhibits satisfactory performance while incurring low computational cost.
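One plausible reading of the tuning heuristic — adjust the threshold until the empirical rate of +1 outputs hits a target — can be sketched as a stochastic-approximation loop. The target rate, step size, and signal are invented here; the paper's actual update rule may differ:

```python
# Sketch: tune the quantizer threshold lam so that the fraction of +1 outputs
# among  y = sign(<a, x> - lam)  matches a target rate, via small corrections.

import random

def adapt_threshold(x, n_meas=2000, target=0.25, step=0.02, seed=0):
    """Drive the empirical +1 frequency toward `target` by nudging lam."""
    rng = random.Random(seed)
    lam = 0.0
    for _ in range(n_meas):
        a = [rng.gauss(0.0, 1.0) for _ in x]                       # Gaussian sensing row
        y = 1.0 if sum(ai * xi for ai, xi in zip(a, x)) - lam > 0 else 0.0
        lam += step * (y - target)     # too many +1s -> raise the threshold
    return lam

x = [1.0, 0.0, 0.0, -1.0] + [0.0] * 6     # sparse signal, ||x||^2 = 2
lam = adapt_threshold(x)
```

Since `<a, x>` is N(0, ||x||²) here, the fixed point of the loop is analytically `sqrt(2) * 0.674 ≈ 0.95` for a 25% target rate, so the threshold carries scale information about `x` even though each measurement is one bit.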
A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.
2015-06-24
This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations
Anderson, R W; Elliott, N S; Pember, R B
2003-02-14
A new method that combines staggered grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the methods are driven by the need to reconcile traditional AMR techniques with the staggered variables and moving, deforming meshes associated with Lagrange based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions first in the case of purely Lagrangian hydrodynamics, and then extend these ideas into an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test problem calculations are presented which demonstrate the utility and efficiency of the method.
Adaptive iteration method for star centroid extraction under highly dynamic conditions
NASA Astrophysics Data System (ADS)
Gao, Yushan; Qin, Shiqiao; Wang, Xingshu
2016-10-01
Star centroiding accuracy decreases significantly when a star sensor works under highly dynamic conditions or star images are corrupted by severe noise, reducing the output attitude precision. Herein, an adaptive iteration method is proposed to solve this problem. Firstly, initial star centroids are predicted by the traditional method; then, based on the initial reported star centroids and the angular velocities of the star sensor, adaptive centroiding windows are generated to cover the star area, and an iterative method optimizing the location of the centroiding window is used to obtain the final star spot extraction results. Simulation results show that, compared with the traditional star image restoration method and the Iteratively Weighted Center of Gravity method, the proposed algorithm maintains higher extraction accuracy as rotation velocities or noise levels increase.
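The window-recentring iteration at the heart of such methods is simple to sketch: compute the centre of gravity inside a window, recentre the window on the estimate, repeat. The fixed window size, synthetic Gaussian star, and iteration count are illustrative assumptions, not the paper's adaptive-window scheme:

```python
# Iterative windowed centre-of-gravity centroiding on a synthetic star image.

import math

def centroid_iterate(img, x0, y0, half=3, n_iter=5):
    """Recompute the CoG in a window recentred on the previous estimate."""
    x, y = x0, y0
    for _ in range(n_iter):
        xs, ys = int(round(x)), int(round(y))
        m = mx = my = 0.0
        for j in range(ys - half, ys + half + 1):
            for i in range(xs - half, xs + half + 1):
                w = img[j][i]
                m += w
                mx += w * i
                my += w * j
        x, y = mx / m, my / m
    return x, y

# Synthetic Gaussian star at (10.3, 12.7) on a 24x24 frame; start off-target.
img = [[math.exp(-((i - 10.3) ** 2 + (j - 12.7) ** 2) / 2.0)
        for i in range(24)] for j in range(24)]
cx, cy = centroid_iterate(img, 8.0, 14.0)
```

Under high slew rates the star smears into a streak, which is where sizing and orienting the window from the measured angular velocity (as the abstract describes) becomes essential.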
A numerical study of 2D detonation waves with adaptive finite volume methods on unstructured grids
NASA Astrophysics Data System (ADS)
Hu, Guanghui
2017-02-01
In this paper, a framework of adaptive finite volume solutions for the reactive Euler equations on unstructured grids is proposed. The main ingredients of the algorithm include a second-order total variation diminishing Runge-Kutta method for temporal discretization and a finite volume method with piecewise linear reconstruction of the conservative variables for the spatial discretization, in which the least-squares method is employed for the reconstruction and a weighted essentially nonoscillatory strategy is used to restrain potential numerical oscillations. To address the high demand on computational resources due to the stiffness of the system caused by the reaction term and the shock structure in the solutions, an h-adaptive method is introduced. OpenMP parallelization of the algorithm is also adopted to further improve the efficiency of the implementation. Several one- and two-dimensional benchmark tests on the ZND model are studied in detail, and numerical results successfully show the effectiveness of the proposed method.
Development and evaluation of a method of calibrating medical displays based on fixed adaptation
Sund, Patrik; Månsson, Lars Gunnar; Båth, Magnus
2015-04-15
Purpose: The purpose of this work was to develop and evaluate a new method for calibration of medical displays that includes the effect of fixed adaptation, using equipment and luminance levels typical for a modern radiology department. Methods: Low-contrast sinusoidal test patterns were derived at nine luminance levels from 2 to 600 cd/m² and used in a two-alternative forced-choice observer study, where the adaptation level was fixed at the logarithmic average of 35 cd/m². The contrast sensitivity at each luminance level was derived by establishing a linear relationship between the ten pattern contrast levels used at every luminance level and a detectability index (d′) calculated from the fraction of correct responses. A Gaussian function was fitted to the data and normalized to the adaptation level. The corresponding equation was used in a display calibration method that included the grayscale standard display function (GSDF) but compensated for fixed adaptation. In the evaluation study, the contrast of circular objects with a fixed pixel contrast was displayed using both calibration methods and was rated on a five-grade scale. Results were calculated using a visual grading characteristics method. Error estimations in both observer studies were derived using a bootstrap method. Results: The contrast sensitivities for the darkest and brightest patterns, relative to the contrast sensitivity at the adaptation luminance, were 37% and 56%, respectively. The obtained Gaussian fit corresponded well with similar studies. The evaluation study showed a higher degree of equally distributed contrast throughout the luminance range with the calibration method compensated for fixed adaptation than for the GSDF. The two lowest scores for the GSDF were obtained for the darkest and brightest patterns. These scores were significantly lower than the lowest score obtained for the compensated GSDF. For the GSDF, the scores for all luminance levels were statistically
Method for reducing the drag of blunt-based vehicles by adaptively increasing forebody roughness
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A. (Inventor); Saltzman, Edwin J. (Inventor); Moes, Timothy R. (Inventor); Iliff, Kenneth W. (Inventor)
2005-01-01
A method for reducing drag upon a blunt-based vehicle by adaptively increasing forebody roughness to increase drag at the roughened area of the forebody, which results in a decrease in drag at the base of the vehicle, and in total vehicle drag.
Kornilova, L N; Cowings, P S; Toscano, W B; Arlashchenko, N I; Korneev, D Iu; Ponomarenko, A V; Salagovich, S V; Sarantseva, A V; Kozlovskaia, I B
2000-01-01
Presented are results of testing the method of adaptive biocontrol during preflight training of cosmonauts. Within the MIR-25 crew, a high level of controllability of the autonomic reactions was characteristic of the Flight Commanders of MIR-23 and MIR-25 and the Flight Engineer of MIR-23, while the Flight Engineer of MIR-25 displayed a weak, intricate dependence of these reactions on the depth of relaxation or strain.
New cardiac MRI gating method using event-synchronous adaptive digital filter.
Park, Hodong; Park, Youngcheol; Cho, Sungpil; Jang, Bongryoel; Lee, Kyoungjoung
2009-11-01
When imaging the heart using MRI, an artefact-free electrocardiograph (ECG) signal is not only important for monitoring the patient's heart activity but also essential for cardiac gating to reduce noise in MR images induced by moving organs. The fundamental problem in conventional ECG is the distortion induced by electromagnetic interference. Here, we propose an adaptive algorithm for the suppression of MR gradient artefacts (MRGAs) in ECG leads of a cardiac MRI gating system. We have modeled MRGAs by assuming a source of strong pulses used for dephasing the MR signal. The modeled MRGAs are rectangular pulse-like signals. We used an event-synchronous adaptive digital filter whose reference signal is synchronous to the gradient peaks of MRI. The event detection processor for the event-synchronous adaptive digital filter was implemented using the phase space method-a sort of topology mapping method-and least-squares acceleration filter. For evaluating the efficiency of the proposed method, the filter was tested using simulation and actual data. The proposed method requires a simple experimental setup that does not require extra hardware connections to obtain the reference signals of adaptive digital filter. The proposed algorithm was more effective than the multichannel approach.
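The canceller structure the abstract describes — an adaptive filter whose reference is synchronous with the gradient events — can be sketched with a plain LMS loop. The signal model (sinusoidal stand-in "ECG", rectangular artefact bursts, known event times) and all constants are invented for illustration:

```python
# LMS noise canceller with an event-synchronous pulse-train reference.

import math

fs, n = 500, 4000
ecg = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(n)]      # stand-in cardiac signal
artefact = [2.0 if t % 250 < 20 else 0.0 for t in range(n)]         # rectangular MRGA bursts
ref = [1.0 if t % 250 < 20 else 0.0 for t in range(n)]              # event-synchronous reference
x = [e + a for e, a in zip(ecg, artefact)]                          # corrupted lead

taps, mu = 8, 0.01
w = [0.0] * taps
out = []
for t in range(n):
    u = [ref[t - k] if t - k >= 0 else 0.0 for k in range(taps)]
    y = sum(wi * ui for wi, ui in zip(w, u))          # artefact estimate
    e = x[t] - y                                      # cleaned ECG sample
    w = [wi + mu * e * ui for wi, ui in zip(w, u)]    # LMS update
    out.append(e)
```

Because the reference is zero between events, the filter only adapts (and only alters the signal) during the artefact bursts, which is what makes the event-synchronous arrangement attractive.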
An adaptive multiresolution gradient-augmented level set method for advection problems
NASA Astrophysics Data System (ADS)
Schneider, Kai; Kolomenskiy, Dmitry; Nave, Jean-Christophe
2014-11-01
Advection problems are encountered in many applications, such as transport of passive scalars modeling pollution or mixing in chemical engineering. In some problems, the solution develops small-scale features localized in a part of the computational domain. If the location of these features changes in time, the efficiency of the numerical method can be significantly improved by adapting the partition dynamically to the solution. We present a space-time adaptive scheme for solving advection equations in two space dimensions. The third order accurate gradient-augmented level set method using a semi-Lagrangian formulation with backward time integration is coupled with a point value multiresolution analysis using Hermite interpolation. Thus locally refined dyadic spatial grids are introduced which are efficiently implemented with dynamic quad-tree data structures. For adaptive time integration, an embedded Runge-Kutta method is employed. The precision of the new fully adaptive method is analysed and speed up of CPU time and memory compression with respect to the uniform grid discretization are reported.
ERIC Educational Resources Information Center
Zwick, Rebecca; And Others
1994-01-01
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel method of differential item functioning (DIF) analysis in computerized adaptive tests (CAT). Results indicate that CAT-based DIF procedures perform well and support the use of item response theory-based matching variables in DIF analysis. (SLD)
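The Mantel-Haenszel statistic behind these DIF procedures reduces to a common odds-ratio estimate over 2×2 tables, one per matched score level. The toy counts below are invented; in a CAT the matching variable would be an IRT-based ability estimate rather than a raw score:

```python
# Mantel-Haenszel common odds-ratio estimate across score strata for DIF screening.

def mantel_haenszel(strata):
    """strata: list of 2x2 tables (a, b, c, d) =
    (ref correct, ref wrong, focal correct, focal wrong) at each score level."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# No-DIF toy data: the two groups have identical odds at every score level,
# even though overall proportions differ across strata.
tables = [(40, 10, 20, 5), (30, 30, 15, 15), (10, 40, 5, 20)]
alpha = mantel_haenszel(tables)
```

A common odds ratio near 1 indicates no DIF; in practice the estimate is reported on the delta scale with an accompanying chi-square test.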
ERIC Educational Resources Information Center
Zwick, Rebecca; And Others
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel and standardization methods of differential item functioning (DIF) analysis in computer-adaptive tests (CATs). Each "examinee" received 25 items out of a 75-item pool. A three-parameter logistic item response model was assumed, and…
Thresholds of cutaneous afferents related to perceptual threshold across the human foot sole
Strzalkowski, Nicholas D. J.; Mildren, Robyn L.
2015-01-01
Perceptual thresholds are known to vary across the foot sole, despite a reported even distribution in cutaneous afferents. Skin mechanical properties have been proposed to account for these differences; however, a direct relationship between foot sole afferent firing, perceptual threshold, and skin mechanical properties has not been previously investigated. Using the technique of microneurography, we recorded the monofilament firing thresholds of cutaneous afferents and associated perceptual thresholds across the foot sole. In addition, receptive field hardness measurements were taken to investigate the influence of skin hardness on these threshold measures. Afferents were identified as fast adapting [FAI (n = 48) or FAII (n = 13)] or slowly adapting [SAI (n = 21) or SAII (n = 20)], and were grouped based on receptive field location (heel, arch, metatarsals, toes). Overall, perceptual thresholds were found to most closely align with firing thresholds of FA afferents. In contrast, SAI and SAII afferent firing thresholds were found to be significantly higher than perceptual thresholds and are not thought to mediate monofilament perceptual threshold across the foot sole. Perceptual thresholds and FAI afferent firing thresholds were significantly lower in the arch compared with other regions, and skin hardness was found to positively correlate with both FAI and FAII afferent firing and perceptual thresholds. These data support a perceptual influence of skin hardness, which is likely the result of elevated FA afferent firing threshold at harder foot sole sites. The close coupling between FA afferent firing and perceptual threshold across foot sole indicates that small changes in FA afferent firing can influence perceptual thresholds. PMID:26289466
Matthews, Devin A.; Stanton, John F.
2015-02-14
The theory of non-orthogonal spin-adaptation for closed-shell molecular systems is applied to coupled cluster methods with quadruple excitations (CCSDTQ). Calculations at this level of detail are of critical importance in describing the properties of molecular systems to an accuracy which can meet or exceed modern experimental techniques. Such calculations are of significant (and growing) importance in such fields as thermodynamics, kinetics, and atomic and molecular spectroscopies. With respect to the implementation of CCSDTQ and related methods, we show that there are significant advantages to non-orthogonal spin-adaption with respect to simplification and factorization of the working equations and to creating an efficient implementation. The resulting algorithm is implemented in the CFOUR program suite for CCSDT, CCSDTQ, and various approximate methods (CCSD(T), CC3, CCSDT-n, and CCSDT(Q)).
Cochard, E; Aubry, J F; Tanter, M; Prada, C
2011-08-01
An adaptive projection method for ultrasonic focusing through the rib cage, with minimal energy deposition on the ribs, was evaluated experimentally in 3D geometry. Adaptive projection is based on decomposition of the time-reversal operator (DORT method) and projection onto the "noise" subspace. It is shown that 3D implementation of this method is straightforward, and not more time-consuming than 2D. Comparisons are made between adaptive projection, spherical focusing, and a previously proposed time-reversal focusing method, by measuring pressure fields in the focal plane and rib region using the three methods. The ratio of the specific absorption rate at the focus to that at the ribs was found to be increased by a factor of up to eight, versus spherical emission. Beam steering out of the geometric focus was also investigated. For all configurations, projected steered emissions were found to deposit less energy on the ribs than steered time-reversed emissions; thus the non-invasive method presented here is more efficient than state-of-the-art invasive techniques. In fact, this method could be used for real-time treatment, because a single acquisition of back-scattered echoes from the ribs is enough to treat a large volume around the focus, thanks to real-time projection of the steered beams.
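The projection step can be sketched with linear algebra alone: the strongest singular vectors of the inter-element response matrix span the "rib" (strong-scatterer) subspace, and emissions are projected onto its orthogonal complement. The toy response matrix and dimensions below are my reconstruction of the idea, not the authors' processing chain:

```python
# Noise-subspace projection in the spirit of DORT: SVD the array response,
# remove the strong-scatterer subspace from the emission vector.

import numpy as np

rng = np.random.default_rng(1)
n_elem, n_scat = 16, 2

# Toy response matrix dominated by a few strong scatterers (the ribs).
g = rng.standard_normal((n_elem, n_scat))
K = g @ g.T + 0.01 * rng.standard_normal((n_elem, n_elem))

U, s, Vt = np.linalg.svd(K)
signal = U[:, :n_scat]                          # estimated rib subspace
P_noise = np.eye(n_elem) - signal @ signal.T    # projector onto its complement

e = rng.standard_normal(n_elem)                 # desired (steered) emission weights
e_proj = P_noise @ e                            # emission avoiding the ribs

back_raw = np.linalg.norm(g.T @ e)              # energy coupled into the scatterers
back_proj = np.linalg.norm(g.T @ e_proj)        # drops sharply after projection
</br>```

Because the projector is computed once from a single acquisition, any steered beam can then be projected in real time — the property the abstract highlights.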
An edge-based solution-adaptive method applied to the AIRPLANE code
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.
1995-01-01
Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.
Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics
NASA Technical Reports Server (NTRS)
Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy
2006-01-01
This viewgraph presentation reviews the use Adaptive Mesh Refinement (AMR) in simulating the Crustal Dynamics of Earth's Surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating/running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's Surface is spurred by future proposed NASA missions, such as InSAR for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computation fluid dynamics for predictive simulation of complex flows around complex structures.
Greenberg, A C
1997-06-01
Researchers using the method of subliminal psychodynamic activation need to consider the neutrality of their control messages. Anagrams or numbers are recommended as even benign-sounding phrases can produce nonneutral effects.
An Adaptive Instability Suppression Controls Method for Aircraft Gas Turbine Engine Combustors
NASA Technical Reports Server (NTRS)
Kopasakis, George; DeLaat, John C.; Chang, Clarence T.
2008-01-01
An adaptive controls method for instability suppression in gas turbine engine combustors has been developed and successfully tested with a realistic aircraft engine combustor rig. This testing was part of a program that demonstrated, for the first time, successful active combustor instability control in an aircraft gas turbine engine-like environment. The controls method is called Adaptive Sliding Phasor Averaged Control. Testing of the control method has been conducted in an experimental rig with different configurations designed to simulate combustors with instabilities of about 530 and 315 Hz. Results demonstrate the effectiveness of this method in suppressing combustor instabilities. In addition, a dramatic improvement in suppression of the instability was achieved by focusing control on the second harmonic of the instability. This is believed to be due to a phenomenon discovered and reported earlier, the so-called Intra-Harmonic Coupling. These results may have implications for future research in combustor instability control.
NASA Astrophysics Data System (ADS)
Chai, Runqi; Savvaris, Al; Tsourdos, Antonios
2016-06-01
In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicle (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions were calculated for each single-objective scenario. To obtain a compromise solution for each target, the FPP model is proposed. The preference function is established with consideration of the fuzzy factors of the system, such that a proper compromise trajectory can be acquired. In addition, NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible in dealing with the multi-objective skip trajectory optimization for the SMV.
NASA Technical Reports Server (NTRS)
Rosenfeld, Daniel; Short, David A.; Atlas, David
1990-01-01
A theory is developed which establishes the basis for the use of rainfall areas within preset thresholds as a measure of either the instantaneous areawide rain rate of convective storms or the total volume of rain from an individual storm over its lifetime. The method is based upon the existence of a well-behaved pdf of rain rate, either from the many storms at one instant or from a single storm during its life. The generality of the instantaneous areawide method was examined by applying it to quantitative radar data sets from the GARP Atlantic Tropical Experiment (GATE), South Africa, Texas, and Darwin (Australia). It is shown that the pdf's developed for each of these areas are consistent with the theory.
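The key consequence of a well-behaved rain-rate pdf is easy to demonstrate numerically: the ratio of areawide mean rain rate to the fractional area exceeding a threshold is nearly constant from storm to storm. The lognormal pdf, threshold value, and sample sizes below are invented for the demonstration and are not the paper's data:

```python
# Area-threshold demonstration: with a common rain-rate pdf, mean rate and the
# fractional area above a threshold stay in a fixed ratio across 'storms'.

import random

def areawide_stats(rates, tau):
    frac_above = sum(r > tau for r in rates) / len(rates)
    mean_rate = sum(rates) / len(rates)
    return frac_above, mean_rate

random.seed(3)
tau = 2.0                      # mm/h threshold (illustrative)
ratios = []
for _ in range(5):             # five storms drawing from the same lognormal pdf
    rates = [random.lognormvariate(0.0, 1.0) for _ in range(50000)]
    frac, mean = areawide_stats(rates, tau)
    ratios.append(mean / frac)

# Relative spread of the ratio across storms is small.
spread = (max(ratios) - min(ratios)) / (sum(ratios) / len(ratios))
```

This is why measuring only the area above a radar threshold can stand in for the full rain-rate field, which is the operational appeal of the method.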
CARA Risk Assessment Thresholds
NASA Technical Reports Server (NTRS)
Hejduk, M. D.
2016-01-01
Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).
Framework for Instructional Technology: Methods of Implementing Adaptive Training and Education
2014-01-01
business, or the military. With Role Adaptation, trainees select their role (e.g., tank driver vs. tank gunner) and are then presented with different...one-size-fits-all, non-mastery based methods (for a review see Durlach & Ray, 2011). After conducting a meta-analysis of various tutoring methods... verbal), and/or to challenge or stimulate learners with above average aptitude. Multiple versions might also be created to suit students with
Xia, Kelin; Zhan, Meng; Wan, Decheng; Wei, Guo-Wei
2011-01-01
Mesh deformation methods are a versatile strategy for solving partial differential equations (PDEs) with a vast variety of practical applications. However, these methods break down for elliptic PDEs with discontinuous coefficients, namely, elliptic interface problems. For this class of problems, additional interface jump conditions are required to maintain the well-posedness of the governing equation. Consequently, in order to achieve high accuracy and high-order convergence, additional numerical algorithms are required to enforce the interface jump conditions in solving elliptic interface problems. The present work introduces an interface-technique-based adaptively deformed mesh strategy for resolving elliptic interface problems. We take advantage of the high accuracy, flexibility and robustness of the matched interface and boundary (MIB) method to construct an adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients. The proposed method generates deformed meshes in the physical domain and solves the transformed governing equations in the computational domain, which maintains regular Cartesian meshes. The mesh deformation is realized by a mesh transformation PDE, which controls the mesh redistribution by a source term. The source term consists of a monitor function, which builds in mesh contraction rules. Both interface-geometry-based deformed meshes and solution-gradient-based deformed meshes are constructed to reduce the L∞ and L2 errors in solving elliptic interface problems. The proposed adaptively deformed mesh based interface method is extensively validated by many numerical experiments. Numerical results indicate that the adaptively deformed mesh based interface method outperforms the original MIB method for dealing with elliptic interface problems. PMID:22586356
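The monitor-function mechanism is easiest to see in 1D: equidistributing a gradient-based monitor makes cells contract where the solution varies fast. This sketch is illustrative only (the paper's MIB and mesh-transformation-PDE machinery is far richer); the steep-layer profile and node count are invented:

```python
# 1D monitor-driven mesh redistribution: place nodes so each cell carries an
# equal share of the integral of the monitor function.

import math

def equidistribute(n_nodes, monitor, n_quad=2000):
    xs = [i / n_quad for i in range(n_quad + 1)]
    w = [monitor(x) for x in xs]
    cum = [0.0]
    for i in range(n_quad):                     # trapezoidal cumulative integral
        cum.append(cum[-1] + 0.5 * (w[i] + w[i + 1]) / n_quad)
    total = cum[-1]
    mesh, j = [0.0], 0
    for k in range(1, n_nodes - 1):
        target = total * k / (n_nodes - 1)
        while cum[j + 1] < target:
            j += 1
        # Linear inversion of the cumulative monitor within quadrature cell j.
        frac = (target - cum[j]) / (cum[j + 1] - cum[j])
        mesh.append((j + frac) / n_quad)
    mesh.append(1.0)
    return mesh

# Arc-length-type monitor for a steep interior layer at x = 0.5.
u_x = lambda x: 200.0 * math.exp(-((x - 0.5) / 0.05) ** 2)   # large gradient near 0.5
mesh = equidistribute(21, lambda x: math.sqrt(1.0 + u_x(x) ** 2))
```

For interface problems the monitor would instead be built from interface geometry or solution-gradient estimates, as in the paper's two deformed-mesh variants.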
Refinement trajectory and determination of eigenstates by a wavelet based adaptive method
Pipek, Janos; Nagy, Szilvia
2006-11-07
The detail structure of the wave function is analyzed at various refinement levels using the methods of wavelet analysis. The eigenvalue problem of a model system is solved in granular Hilbert spaces, and the trajectory of the eigenstates is traced in terms of the resolution. An adaptive method is developed for identifying the fine structure localization regions, where further refinement of the wave function is necessary.
Sha, Sue; Ghosh, Atalanta; Plum-Mörschel, Leona; Heise, Tim; Rothenberg, Paul
2013-01-01
Context: The stepwise hyperglycemic clamp procedure (SHCP) is the gold standard for measuring the renal threshold for glucose excretion (RTG), but its use is limited to small studies in specialized laboratories. Objective: The objective of the study was to validate a new method for determining RTG using data obtained during a mixed-meal tolerance test (MMTT) in untreated and canagliflozin-treated subjects with type 2 diabetes mellitus (T2DM). Design: This was an open-label study with 2 sequential parts. Setting: The study was performed at a single center in Germany. Patients: Twenty-eight subjects with T2DM were studied. Interventions: No treatment intervention was given in part 1. In part 2, subjects were treated with canagliflozin 100 mg/d for 8 days. In each part, subjects underwent an MMTT and a 5-step SHCP on consecutive days. Main Outcome Measures: For both methods, RTG was estimated using measured blood glucose (BG) and urinary glucose excretion (UGE); estimated glomerular filtration rates were also used to determine RTG during the MMTT. The methods were compared using the concordance correlation coefficient and geometric mean ratios. Results: In untreated and canagliflozin-treated subjects, the relationship between UGE rate and BG was well described by a threshold relationship. Good agreement was obtained between the MMTT-based and SHCP-derived RTG values. The concordance correlation coefficient (for all subjects) was 0.94; geometric mean ratios (90% confidence intervals) for RTG values (MMTT/SHCP) were 0.93 (0.89–0.96) in untreated subjects and 1.03 (0.78–1.37) in canagliflozin-treated subjects. Study procedures and treatments were generally well tolerated in untreated and canagliflozin-treated subjects. Conclusions: In both untreated and canagliflozin-treated subjects with T2DM, RTG can be accurately estimated from measured BG, UGE, and estimated glomerular filtration rates using an MMTT-based method. PMID:23585665
NASA Astrophysics Data System (ADS)
Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.
2016-01-01
18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique to image cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies were validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), whereas the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV, feasible in clinical practice; 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ Phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and a non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
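A minimal sketch of the two-step scheme described above: a two-cluster k-means estimate of the background combined with a relative threshold between background and peak uptake. The calibration fraction `frac` is a hypothetical stand-in for the value a phantom calibration would provide:

```python
import numpy as np

def kmeans_1d(values, iters=50):
    """Plain two-cluster 1-D k-means on intensities, initialized at the
    intensity extremes so the run is deterministic."""
    centers = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() for j in range(2)])
    return np.sort(centers)

def segment_mtv(image, frac=0.42):
    """Threshold-based segmentation with k-means background estimation.
    `frac` is an illustrative calibration constant, not the paper's value."""
    vals = image.ravel().astype(float)
    background = kmeans_1d(vals)[0]
    thr = background + frac*(vals.max() - background)
    return image >= thr

# synthetic "lesion": a hot disc on a warm background
yy, xx = np.mgrid[0:64, 0:64]
img = 1.0 + 9.0*((xx - 32)**2 + (yy - 32)**2 < 8**2)
mask = segment_mtv(img)
```

On this synthetic slice the recovered mask is the hot disc; a realistic pipeline would of course operate on resampled clinical PET volumes.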
A wavelet-optimized, very high order adaptive grid and order numerical method
NASA Technical Reports Server (NTRS)
Jameson, Leland
1996-01-01
Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, followed by differentiation of this polynomial and finally evaluation of the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid, and this grid is refined locally based on wavelet analysis.
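The interpolate-differentiate-evaluate construction in the first sentence can be sketched directly with algebraic polynomials on Chebyshev nodes (the node choice that keeps high-order interpolation well behaved):

```python
import numpy as np

def poly_derivative(x_nodes, f_nodes, x0):
    """Approximate f'(x0): interpolate a polynomial through the data,
    differentiate it, and evaluate the derivative at x0."""
    coeffs = np.polyfit(x_nodes, f_nodes, deg=len(x_nodes) - 1)
    return np.polyval(np.polyder(coeffs), x0)

# Chebyshev(-Gauss-Lobatto) nodes on [-1, 1] avoid the Runge oscillations
# that equally spaced nodes exhibit at high polynomial order
n = 12
x = np.cos(np.pi*np.arange(n + 1)/n)
err = abs(poly_derivative(x, np.sin(x), 0.3) - np.cos(0.3))
```

With 13 Chebyshev nodes the degree-12 differencing operator recovers the derivative of sin to near machine precision on this interval.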
A Digitalized Gyroscope System Based on a Modified Adaptive Control Method.
Xia, Dunzhu; Hu, Yiwei; Ni, Peizhen
2016-03-04
In this work we investigate the possibility of applying the adaptive control algorithm to Micro-Electro-Mechanical System (MEMS) gyroscopes. Through comparing the gyroscope working conditions with the reference model, the adaptive control method can provide online estimation of the key parameters and the proper control strategy for the system. The digital second-order oscillators in the reference model are substituted for two phase locked loops (PLLs) to achieve a more steady amplitude and frequency control. The adaptive law is modified to satisfy the condition of unequal coupling stiffness and coupling damping coefficients. The rotation mode of the gyroscope system is considered in our work, and a rotation elimination section is added to the digitalized system. Before implementing the algorithm in the hardware platform, different simulations are conducted to ensure the algorithm can meet the requirements of the angular rate sensor, and some of the key adaptive law coefficients are optimized. The coupling components are detected and suppressed respectively, and the Lyapunov criterion is applied to prove the stability of the system. The modified adaptive control algorithm is verified in a digitalized gyroscope system; the control system is realized in the digital domain with a Field Programmable Gate Array (FPGA). Key structure parameters are measured and compared with the estimation results, validating that the algorithm is feasible in the setup. Extra gyroscopes are used in repeated experiments to prove the generality of the algorithm.
Adaptive Kalman filtering methods for tracking GPS signals in high noise/high dynamic environments
NASA Astrophysics Data System (ADS)
Zuo, Qiyao; Yuan, Hong; Lin, Baojun
2007-11-01
GPS C/A signal tracking algorithms have been developed based on adaptive Kalman filtering theory. In this research, an adaptive Kalman filter is used to substitute for the standard tracking loop filters. The goal is to improve estimation accuracy and tracking stability in high noise and high dynamic environments. The linear dynamics model and the measurement model are designed to estimate code phase, carrier phase, Doppler shift, and rate of change of Doppler shift. Two adaptive algorithms are applied to improve the robustness and adaptive capability of the tracking: one is the Sage adaptive filtering approach and the other is the strong tracking method. Both the new algorithms and the conventional tracking loop have been tested using simulated data. In the simulation experiment, the highest jerk of the receiver is set to 10G m/s³ with a lowest C/N0 of 30 dB-Hz. The results indicate that the Kalman filtering algorithms are more robust than the standard tracking loop, and the performance of the tracking loop using these algorithms is satisfactory even in such extremely adverse circumstances.
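The Sage-style adaptive idea (estimating the measurement-noise variance online from the innovation sequence) can be sketched with a two-state filter tracking carrier phase and Doppler; the noise levels, fading factor, and state dimension here are illustrative assumptions, not the paper's four-state tuning:

```python
import numpy as np

def adaptive_kf(zs, dt=1e-3, b=0.98):
    """Kalman filter with a Sage-style fading-memory estimate of the
    measurement-noise variance R (a sketch of the adaptive idea, not the
    authors' full tracking loop). State: [phase, Doppler]; measurement:
    noisy phase, i.e. H = [1, 0]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = np.diag([1e-8, 1e-4])                 # illustrative process noise
    x = np.zeros(2)
    P = np.eye(2)
    R = 1.0                                   # deliberately poor initial guess
    for k, z in enumerate(zs):
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        v = z - x[0]                          # innovation
        s_pred = P[0, 0]                      # H P Hᵀ
        d = (1 - b) / (1 - b**(k + 1))        # fading-memory weight
        R = (1 - d)*R + d*max(v*v - s_pred, 1e-12)
        S = s_pred + R
        K = P[:, 0] / S                       # Kalman gain
        x = x + K*v                           # update
        P = P - np.outer(K, P[0, :])
    return x, R

rng = np.random.default_rng(1)
t = np.arange(2000)*1e-3
true_phase = 0.5 + 3.0*t                      # constant Doppler of 3 rad/s
zs = true_phase + rng.normal(0.0, 0.05, size=t.size)   # true R = 0.0025
x_est, R_est = adaptive_kf(zs)
```

Despite starting from a measurement-noise guess wrong by more than two orders of magnitude, the fading-memory estimate settles near the true variance while the filter locks onto phase and Doppler.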
Huttunen, Sanna; Olsson, Sanna; Buchbender, Volker; Enroth, Johannes; Hedenäs, Lars; Quandt, Dietmar
2012-01-01
Adaptive evolution has often been proposed to explain correlations between habitats and certain phenotypes. In mosses, a high frequency of species with specialized sporophytic traits in exposed or epiphytic habitats was, already 100 years ago, suggested as due to adaptation. We tested this hypothesis by contrasting phylogenetic and morphological data from two moss families, Neckeraceae and Lembophyllaceae, both of which show parallel shifts to a specialized morphology and to exposed epiphytic or epilithic habitats. Phylogeny-based tests for correlated evolution revealed that evolution of four sporophytic traits is correlated with a habitat shift. For three of them, evolutionary rates of dual character-state changes suggest that habitat shifts appear prior to changes in morphology. This suggests that they could have evolved as adaptations to new habitats. Regarding the fourth correlated trait the specialized morphology had already evolved before the habitat shift. In addition, several other specialized "epiphytic" traits show no correlation with a habitat shift. Besides adaptive diversification, other processes thus also affect the match between phenotype and environment. Several potential factors such as complex genetic and developmental pathways yielding the same phenotypes, differences in strength of selection, or constraints in phenotypic evolution may lead to an inability of phylogeny-based comparative methods to detect potential adaptations.
Menkir, A; Bramel-Cox, P J; Witt, M D
1994-08-01
The association among six traits in the F2 lines derived from adapted × exotic backcrosses of sorghum developed via two introgression methods was studied using principal component analysis. The first principal component defined a hybrid index in matings of the wild accession ('12-26') but not in matings of the cultivated sorghum genotypes ('Segeolane' and 'SC408'), no matter which adapted parent was used. This component accounted for 27-42% of the total variation in each mating. The 'recombination spindle' was wide in all matings of CK60 and KP9B, which indicated that the relationships among traits were not strong enough to restrict recombination among the parental characters. The index scores of both CK60 and KP9B matings showed clear differentiation of the backcross generations only when the exotic parent was the undomesticated wild accession ('12-26'). None of the distributions of the first principal component scores in any backcross population was bimodal. The frequency of recombinant genotypes derived from a mating was determined by the level of domestication and adaptation of the exotic parent and the genetic background of the adapted parent. Backcrossing to a population (KP9B) was found to be superior to backcrossing to an inbred line (CK60) to produce lines with an improved adapted phenotype.
An h-adaptive finite element method for turbulent heat transfer
Carrington, David B
2009-01-01
A two-equation turbulence closure model (k-ω) using an h-adaptive grid technique and the finite element method (FEM) has been developed to simulate low Mach number flow and heat transfer. These flows are applicable to many flows in engineering and environmental sciences. Of particular interest in the engineering modeling areas are combustion, solidification, and heat exchanger design. Flows for indoor air quality modeling and atmospheric pollution transport are typical types of environmental flows modeled with this method. The numerical method is based on a hybrid finite element model using an equal-order projection process. The model includes thermal and species transport, localized mesh refinement (h-adaptive) and Petrov-Galerkin weighting for stabilizing the advection. This work develops the continuum model of a two-equation turbulence closure method. The fractional step solution method is stated along with the h-adaptive grid method (Carrington and Pepper, 2002). Solutions are presented for 2-D flow over a backward-facing step.
High-precision self-adaptive phase-calibration method for wavelength-tuning interferometry
NASA Astrophysics Data System (ADS)
Zhu, Xueliang; Zhao, Huiying; Dong, Longchao; Wang, Hongjun; Liu, Bingcai; Yuan, Daocheng; Tian, Ailing; Wang, Fangjie; Zhang, Chupeng; Ban, Xinxing
2017-03-01
We introduce a high-precision self-adaptive phase-calibration method for performing wavelength-tuning interferometry. Our method is insensitive to the nonlinearity of the phase shifter, even under random control. Intensity errors derived from laser voltage changes can be restrained by adopting this approach. Furthermore, this method can effectively overcome the influences from the background and modulation intensities in the interferogram, regardless of the phase structure. Numerical simulations and experiments are implemented to verify the validity of this high-precision calibration method.
Ultra-low threshold polariton condensation
NASA Astrophysics Data System (ADS)
Steger, Mark; Fluegel, Brian; Alberi, Kirstin; Snoke, David W.; Pfeiffer, Loren N.; West, Ken; Mascarenhas, Angelo
2017-03-01
We demonstrate condensation of microcavity polaritons with a very sharp threshold occurring at two orders of magnitude lower pump intensity than in previous demonstrations of condensation. The long cavity lifetime and the trapping and pumping geometries are crucial to the realization of this low threshold. Polariton condensation, or "polariton lasing", has long been proposed as a promising source of coherent light at lower threshold than traditional lasing, and these results suggest methods to bring this threshold even lower.
An adaptive subspace trust-region method for frequency-domain seismic full waveform inversion
NASA Astrophysics Data System (ADS)
Zhang, Huan; Li, Xiaofan; Song, Hanjie; Liu, Shaolin
2015-05-01
Full waveform inversion is currently considered a promising seismic imaging method to obtain high-resolution and quantitative images of the subsurface. It is a nonlinear, ill-posed inverse problem whose main difficulty, preventing widespread application to real data, is its sensitivity to incorrect initial models and noisy data. Local optimization methods, including Newton's method and gradient methods, tend to converge to local minima, while global optimization algorithms such as simulated annealing are computationally costly. To confront this issue, in this paper we investigate the possibility of applying the trust-region method to the full waveform inversion problem. Different from line search methods, trust-region methods force the new trial step to lie within a certain neighborhood of the current iterate. Theoretically, trust-region methods are reliable and robust, and they have very strong convergence properties. The capability of this inversion technique is tested with the synthetic Marmousi velocity model and the SEG/EAGE Salt model. Numerical examples demonstrate that the adaptive subspace trust-region method can provide solutions closer to the global minimum than the conventional approximate Hessian approach and the L-BFGS method, with a higher convergence rate. In addition, the match between the inverted model and the true model remains excellent even when the initial model deviates far from the true model. Inversion results with noisy data also exhibit the remarkable capability of the adaptive subspace trust-region method for low signal-to-noise data inversions. These promising numerical results suggest that the adaptive subspace trust-region method is well suited for full waveform inversion, as it has stronger convergence and a higher convergence rate.
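A trust-region iteration of the kind described (trial steps confined to a ball whose radius grows or shrinks with the model-agreement ratio) can be sketched with a standard dogleg step on a toy 2-D problem; this is a generic textbook variant, not the paper's adaptive subspace method:

```python
import numpy as np

def rosen(x):
    return (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2

def rosen_grad(x):
    return np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                     200*(x[1] - x[0]**2)])

def rosen_hess(x):
    return np.array([[2 + 1200*x[0]**2 - 400*x[1], -400*x[0]],
                     [-400*x[0], 200.0]])

def dogleg_step(g, H, delta):
    """Approximate minimizer of the quadratic model inside the trust region."""
    gHg = float(g @ H @ g)
    if gHg <= 0:                                   # negative curvature: boundary step
        return -delta/np.linalg.norm(g)*g
    pU = -float(g @ g)/gHg*g                       # Cauchy (steepest-descent) point
    try:
        pB = -np.linalg.solve(H, g)                # Newton point
    except np.linalg.LinAlgError:
        pB = pU
    if np.linalg.norm(pB) <= delta and float(pB @ g) < 0:
        return pB
    if np.linalg.norm(pU) >= delta:
        return -delta/np.linalg.norm(g)*g
    d = pB - pU                                    # walk the dogleg to the boundary
    a = float(d @ d)
    if a == 0.0:
        return pU
    b, c = 2*float(pU @ d), float(pU @ pU) - delta**2
    tau = (-b + np.sqrt(b*b - 4*a*c))/(2*a)
    return pU + tau*d

def trust_region(f, grad, hess, x0, delta=1.0, tol=1e-8, iters=500):
    x = np.asarray(x0, float)
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        p = dogleg_step(g, H, delta)
        pred = -(float(g @ p) + 0.5*float(p @ H @ p))    # predicted decrease
        rho = (f(x) - f(x + p))/pred if pred > 0 else -1.0
        if rho < 0.25:
            delta *= 0.25                          # poor agreement: shrink the region
        elif rho > 0.75 and np.linalg.norm(p) > 0.99*delta:
            delta = min(2*delta, 100.0)            # good agreement at the boundary: grow
        if rho > 0.1:
            x = x + p                              # accept the step
    return x

x_opt = trust_region(rosen, rosen_grad, rosen_hess, [-1.2, 1.0])
```

The constraint to the trust region is what keeps the iteration stable far from the minimum, the same robustness property the abstract contrasts with line search methods.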
Long-time simulations of the Kelvin-Helmholtz instability using an adaptive vortex method.
Sohn, Sung-Ik; Yoon, Daeki; Hwang, Woonjae
2010-10-01
The nonlinear evolution of an interface subject to a parallel shear flow is studied by the vortex sheet model. We perform long-time computations for the vortex sheet in density-stratified fluids by using the point vortex method and investigate the late-time dynamics of the Kelvin-Helmholtz instability. We apply an adaptive point insertion procedure and a high-order shock-capturing scheme to the vortex method to handle the nonuniform distribution of point vortices and enhance the resolution. Our adaptive vortex method successfully simulates chaotically distorted interfaces of the Kelvin-Helmholtz instability with fine resolution. The numerical results show that the Kelvin-Helmholtz instability develops a secondary instability at late times, distorting the internal rollup, and eventually evolves into a disordered structure.
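The periodic point-vortex discretization of a sheet can be sketched as follows, using Krasny's desingularized (vortex-blob) kernel; the blob parameter, resolution, and perturbation are illustrative choices, and the adaptive point insertion and shock-capturing ingredients of the paper are omitted:

```python
import numpy as np

def sheet_velocity(x, y, delta=0.1):
    """Velocity of a periodic vortex sheet discretized by point vortices,
    desingularized with the vortex-blob parameter delta (the j = k
    self-term vanishes automatically since sinh(0) = sin(0) = 0)."""
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    den = np.cosh(2*np.pi*dy) - np.cos(2*np.pi*dx) + delta**2
    n = x.size
    u = -np.sum(np.sinh(2*np.pi*dy)/den, axis=1)/(2*n)
    v = np.sum(np.sin(2*np.pi*dx)/den, axis=1)/(2*n)
    return u, v

def rk4_step(x, y, dt, delta=0.1):
    """One fourth-order Runge-Kutta step for the sheet positions."""
    u1, v1 = sheet_velocity(x, y, delta)
    u2, v2 = sheet_velocity(x + 0.5*dt*u1, y + 0.5*dt*v1, delta)
    u3, v3 = sheet_velocity(x + 0.5*dt*u2, y + 0.5*dt*v2, delta)
    u4, v4 = sheet_velocity(x + dt*u3, y + dt*v3, delta)
    return (x + dt*(u1 + 2*u2 + 2*u3 + u4)/6,
            y + dt*(v1 + 2*v2 + 2*v3 + v4)/6)

# single-mode perturbation of a flat sheet; the KH instability amplifies it
n = 128
s = np.arange(n)/n
x = s + 0.01*np.sin(2*np.pi*s)
y = -0.01*np.sin(2*np.pi*s)
for _ in range(200):                  # integrate to t = 2
    x, y = rk4_step(x, y, 0.01)
```

The antisymmetry of the kernel conserves the mean position of the sheet exactly, while the perturbation amplitude grows and the sheet begins to roll up.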
NASA Astrophysics Data System (ADS)
Lee, Sanghyun; Wheeler, Mary F.
2017-02-01
We present a novel approach to the simulation of miscible displacement by employing adaptive enriched Galerkin finite element methods (EG) coupled with entropy residual stabilization for transport. In particular, numerical simulations of viscous fingering instabilities in heterogeneous porous media and Hele-Shaw cells are illustrated. EG is formulated by enriching the conforming continuous Galerkin finite element method (CG) with piecewise constant functions. The method provides locally and globally conservative fluxes, which are crucial for coupled flow and transport problems. Moreover, EG has fewer degrees of freedom in comparison with discontinuous Galerkin (DG) and an efficient flow solver has been derived which allows for higher order schemes. Dynamic adaptive mesh refinement is applied in order to reduce computational costs for large-scale three dimensional applications. In addition, entropy residual based stabilization for high order EG transport systems prevents spurious oscillations. Numerical tests are presented to show the capabilities of EG applied to flow and transport.
An adaptive method for determining an acquisition parameter t0 in a modified CPMG sequence
NASA Astrophysics Data System (ADS)
Xing, Donghui; Fan, Yiren; Hao, Jianfei; Ge, Xinmin; Li, Chaoliu; Xiao, Yufeng; Wu, Fei
2017-03-01
The modified CPMG (Carr-Purcell-Meiboom-Gill) pulse sequence is a common sequence used for measuring the internal magnetic field gradient distribution of formation rocks, for which t0 (the duration of the first window) is a key acquisition parameter. In order to obtain the optimal t0, an adaptive method is proposed in this paper. By studying the factors influencing discriminant factor σ and its variation trend using T2-G forward numerical simulation, it is found that the optimal t0 corresponds to the maximum value of σ. Then combining the constraint condition of SNR (Signal Noise Ratio) of spin echo, an optimal t0 in modified CPMG pulse sequence is determined. This method can reduce the difficulties of operating T2-G experiments. Finally, the adaptive method is verified by the results of the T2-G experiments for four water-saturated sandstone samples.
A novel adaptive 3D medical image interpolation method based on shape
NASA Astrophysics Data System (ADS)
Chen, Jiaxin; Ma, Wei
2013-03-01
Image interpolation of cross-sections is one of the key steps of medical visualization. Aiming at the problems of fuzzy boundaries and the large amount of calculation brought by traditional interpolation, a novel adaptive 3-D medical image interpolation method is proposed in this paper. Firstly, the contour is obtained by edge interpolation, and the corresponding points are found according to the relation between the contour and points on the original images. Secondly, the algorithm utilizes volume relativity to find the best point pair adaptively. Finally, the grey value of the interpolated pixel is obtained by matching-point interpolation. The experimental results show that the method presented in this paper not only meets the requirements of interpolation accuracy, but also can be used effectively in 3D reconstruction of medical images.
[Adaptation of a method for determining serum iron after deproteinization on a parallel analyzer].
Pontézière, C; Meneguzzer, E; Succari, M; Miocque, M
1989-04-01
The determination of iron in sera by a bathophenanthroline method after deproteinization was studied according to the Valtec protocol conceived by the SFBC, after adaptation to an FP9 parallel analyzer. The critical study of this adaptation included trials of within-run precision (CV of 1.25%) and total precision (CV 2.29 to 4.66%), as well as an evaluation of the analytical range: the limit of linearity is 140 μmol/l. The evaluation of inaccuracy performed with patient specimens led to the establishment of follow-up norms and interpretation norms for the allometry line. All our results are in agreement with the performance standards of the protocol for the validation of methods published by the Société Française de Biologie Clinique. Finally, the described method is quick, reliable and very inexpensive.
Eiber, Calvin D; Dokos, Socrates; Lovell, Nigel H; Suaning, Gregg J
2016-08-19
The capacity to quickly and accurately simulate extracellular stimulation of neurons is essential to the design of next-generation neural prostheses. Existing platforms for simulating neurons are largely based on finite-difference techniques; due to the complex geometries involved, the more powerful spectral or differential quadrature techniques cannot be applied directly. This paper presents a mathematical basis for the application of a spectral element method to the problem of simulating the extracellular stimulation of retinal neurons, which is readily extensible to neural fibers of any kind. The activating function formalism is extended to arbitrary neuron geometries, and a segmentation method to guarantee an appropriate choice of collocation points is presented. Differential quadrature may then be applied to efficiently solve the resulting cable equations. The capacity for this model to simulate action potentials propagating through branching structures and to predict minimum extracellular stimulation thresholds for individual neurons is demonstrated. The presented model is validated against published values for extracellular stimulation threshold and conduction velocity for realistic physiological parameter values. This model suggests that convoluted axon geometries are more readily activated by extracellular stimulation than linear axon geometries, which may have ramifications for the design of neural prostheses.
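The classical (straight-fiber) activating function that this work generalizes is simply the second spatial difference of the extracellular potential along the fiber; a sketch with an idealized point-source potential (the geometry and source strength are illustrative, and the sign convention follows a positive source):

```python
import numpy as np

def activating_function(v_e, dx):
    """Second spatial difference of the extracellular potential along a
    straight fiber: positive values indicate regions driven toward
    depolarization (for the classical cable formalism)."""
    return np.diff(v_e, 2)/dx**2

# idealized point source 1 mm above a straight fiber; potential ~ 1/distance
x = np.linspace(-5e-3, 5e-3, 201)     # 10 mm of fiber, 50 um sampling
h = 1e-3                              # electrode height above the fiber
v_e = 1.0/np.sqrt(x**2 + h**2)        # up to a constant source factor
f = activating_function(v_e, x[1] - x[0])
```

For this source polarity the activating function is negative directly under the electrode and positive in the flanking lobes, the familiar tri-lobed pattern the paper's formalism extends to arbitrary geometries.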
Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Christian, Andrew
2016-01-01
Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in the resulting group-average psychoacoustic thresholds. In this presentation we examine the Delta Method of frequentist statistics, the Generalized Linear Model (GLM), the Nonparametric Bootstrap (a frequentist method), and Markov Chain Monte Carlo Posterior Estimation (a Bayesian approach). Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
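A sketch of the Nonparametric Bootstrap applied to group threshold data of the kind described (predetermined levels, several subjects). The psychometric model, levels, and sample sizes are simulated for illustration, not NASA's data; the simple 50%-crossing estimator assumes the pooled detection rate increases with level:

```python
import numpy as np

rng = np.random.default_rng(7)
levels = np.array([40.0, 45.0, 50.0, 55.0, 60.0])    # predetermined levels (hypothetical dB values)

# simulate yes/no detections for 30 subjects with a logistic psychometric function
true_thresh, slope = 51.0, 0.3
p = 1.0/(1.0 + np.exp(-slope*(levels - true_thresh)))
data = rng.random((30, levels.size)) < p             # True = signal detected

def group_threshold(responses):
    """Level at which the pooled detection rate crosses 50%, by linear
    interpolation across the tested levels."""
    rate = responses.mean(axis=0)
    return float(np.interp(0.5, rate, levels))

# nonparametric bootstrap: resample subjects with replacement
boot = np.array([group_threshold(data[rng.integers(0, 30, size=30)])
                 for _ in range(2000)])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
```

The percentile interval `(ci_lo, ci_hi)` is the bootstrap uncertainty estimate for the group-average threshold; its cost, one re-fit per resample, is why the abstract notes the Bootstrap is the slowest of the techniques compared.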
A simple and inexpensive method for determining cold sensitivity and adaptation in mice.
Brenner, Daniel S; Golden, Judith P; Vogt, Sherri K; Gereau, Robert W
2015-03-17
Cold hypersensitivity is a serious clinical problem, affecting a broad subset of patients and causing significant decreases in quality of life. The cold plantar assay allows the objective and inexpensive assessment of cold sensitivity in mice, and can quantify both analgesia and hypersensitivity. Mice are acclimated on a glass plate, and a compressed dry ice pellet is held against the glass surface underneath the hindpaw. The latency to withdrawal from the cooling glass is used as a measure of cold sensitivity. Cold sensation is also important for survival in regions with seasonal temperature shifts, and in order to maintain sensitivity animals must be able to adjust their thermal response thresholds to match the ambient temperature. The Cold Plantar Assay (CPA) also allows the study of adaptation to changes in ambient temperature by testing the cold sensitivity of mice at temperatures ranging from 30 °C to 5 °C. Mice are acclimated as described above, but the glass plate is cooled to the desired starting temperature using aluminum boxes (or aluminum foil packets) filled with hot water, wet ice, or dry ice. The temperature of the plate is measured at the center using a filament T-type thermocouple probe. Once the plate has reached the desired starting temperature, the animals are tested as described above. This assay allows testing of mice at temperatures ranging from innocuous to noxious. The CPA yields unambiguous and consistent behavioral responses in uninjured mice and can be used to quantify both hypersensitivity and analgesia. This protocol describes how to use the CPA to measure cold hypersensitivity, analgesia, and adaptation in mice.
Functional phase response curves: a method for understanding synchronization of adapting neurons.
Cui, Jianxia; Canavier, Carmen C; Butera, Robert J
2009-07-01
Phase response curves (PRCs) for a single neuron are often used to predict the synchrony of mutually coupled neurons. Previous theoretical work on pulse-coupled oscillators used single-pulse perturbations. We propose an alternate method in which functional PRCs (fPRCs) are generated using a train of pulses applied at a fixed delay after each spike, with the PRC measured when the phasic relationship between the stimulus and the subsequent spike in the neuron has converged. The essential information is the dependence of the recovery time from pulse onset until the next spike as a function of the delay between the previous spike and the onset of the applied pulse. Experimental fPRCs in Aplysia pacemaker neurons were different from single-pulse PRCs, principally due to adaptation. In the biological neuron, convergence to the fully adapted recovery interval was slower at some phases than that at others because the change in the effective intrinsic period due to adaptation changes the effective phase resetting in a way that opposes and slows the effects of adaptation. The fPRCs for two isolated adapting model neurons were used to predict the existence and stability of 1:1 phase-locked network activity when the two neurons were coupled. A stability criterion was derived by linearizing a coupled map based on the fPRC and the existence and stability criteria were successfully tested in two-simulated-neuron networks with reciprocal inhibition or excitation. The fPRC is the first PRC-based tool that can account for adaptation in analyzing networks of neural oscillators.
A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Eiseman, Peter R.
1990-01-01
A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
NASA Astrophysics Data System (ADS)
Han, Dongmei; Xu, Xinyi; Yan, Denghua
2016-04-01
In recent years, global climate change has caused a serious water resources crisis throughout the world. Mainly through variations in temperature, climate change affects crop water requirements: a rise in temperature directly affects the growing and phenological periods of a crop and thereby changes its water demand quota. Methods including the accumulated temperature threshold and the climatic tendency rate were adopted, making up for the weakness of phenological observations, to reveal the response of crop phenology during the growing period. Then, using the Penman-Monteith model and crop coefficients from the United Nations Food and Agriculture Organization (FAO), the paper first explored crop water requirements in different growth periods, and further forecasted quantitatively crop water requirements in the Heihe River Basin, China under different climate change scenarios. Results indicate that: (i) the crop phenological changes established with the accumulated temperature threshold method were in agreement with measured results; (ii) the impacts of climate warming on water requirements differed among crops, and the growth periods of wheat and corn tended to shorten; (iii) under the climate change scenarios, when temperature increased by 1 °C, the wheat growth period started 2 days earlier and the total growth period shortened by 2 days, while wheat water requirements increased by 1.4 mm; corn water requirements, in contrast, decreased by almost 0.9 mm for the same 1 °C increase, with the corn growth period starting 3 days earlier and the total growth period shortening by 4 days. Therefore, the contradiction between water supply and water demand will be more obvious under future climate warming in the Heihe River Basin, China.
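The accumulated-temperature-threshold idea used above can be sketched as a growing degree-day sum: a phenological stage is taken to begin once accumulated daily mean temperature above a base value crosses a threshold. The base temperature (10 °C) and threshold (120 degree-days) below are invented for illustration and are not values from the study.

```python
# Minimal growing degree-day (accumulated temperature) threshold sketch.
# Base temperature and threshold are illustrative assumptions.
def degree_days(daily_mean_temps, base=10.0):
    """Daily contributions of mean temperature above the base temperature."""
    return [max(t - base, 0.0) for t in daily_mean_temps]

def stage_onset_day(daily_mean_temps, threshold=120.0, base=10.0):
    """Return the first day index on which accumulated degree-days reach
    the threshold, or None if it is never reached."""
    total = 0.0
    for day, dd in enumerate(degree_days(daily_mean_temps, base)):
        total += dd
        if total >= threshold:
            return day
    return None
```

With these toy numbers, a uniformly warmer temperature series reaches the threshold earlier, mirroring the qualitative shift of growth-period onset reported above.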
A method for online verification of adapted fields using an independent dose monitor
Chang Jina; Norrlinger, Bernhard D.; Heaton, Robert K.; Jaffray, David A.; Cho, Young-Bin; Islam, Mohammad K.; Mahon, Robert
2013-07-15
Purpose: Clinical implementation of online adaptive radiotherapy requires generation of modified fields and a method of dosimetric verification in a short time. We present a method of treatment field modification to account for patient setup error, and an online method of verification using an independent monitoring system. Methods: The fields are modified by translating each multileaf collimator (MLC) defined aperture in the direction of the patient setup error, and magnifying to account for distance variation to the marked isocentre. A modified version of a previously reported online beam monitoring system, the integral quality monitoring (IQM) system, was investigated for validation of adapted fields. The system consists of a large-area ion chamber with a spatial gradient in electrode separation to provide a spatially sensitive signal for each beam segment, mounted below the MLC, and a calculation algorithm to predict the signal. IMRT plans of ten prostate patients were modified in response to six randomly chosen setup errors in three orthogonal directions. Results: A total of approximately 49 beams for the modified fields were verified by the IQM system, of which 97% of the measured IQM signals agreed with the predicted values to within 2%. Conclusions: The modified IQM system was found to be suitable for online verification of adapted treatment fields.
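The two ingredients described above, shifting each MLC-defined aperture by the setup error and screening segments against a 2% tolerance, can be sketched as follows. The data structures (leaf-pair tuples, per-segment scalar signals) are assumptions made for illustration, not the paper's software.

```python
# Sketch of aperture translation and 2% segment-level verification.
# Leaf-pair tuples and per-segment signals are illustrative assumptions.
def shift_aperture(leaf_positions, setup_error_mm):
    """Translate each (left, right) leaf pair by the in-plane setup error."""
    dx = setup_error_mm
    return [(left + dx, right + dx) for left, right in leaf_positions]

def verify_segments(measured, predicted, tolerance=0.02):
    """Fraction of segments whose measured monitor signal agrees with the
    prediction to within the relative tolerance (2%, as in the results)."""
    ok = sum(
        1 for m, p in zip(measured, predicted)
        if abs(m - p) <= tolerance * abs(p)
    )
    return ok / len(measured)
```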
Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement
NASA Astrophysics Data System (ADS)
Shervani-Tabar, Navid; Vasilyev, Oleg V.
2016-11-01
This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods, which rely on the unit normal vector, the Stabilized Conservative Level Set (SCLS) method uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the direction normal to the interface, thus preserving the conservative level set properties, while away from the interface the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for finer resolution in the vicinity of the interface than in the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt to steep gradients in the solution while retaining a predetermined order of accuracy.
Bradman, Matthew J G; Ferrini, Francesco; Salio, Chiara; Merighi, Adalberto
2015-11-30
Here, we reconsider the status quo in testing mechanical sensitivity with von Frey's hairs. The aim is to improve paw withdrawal estimates by integrating current psychometric theory, and to maximise the clinical relevance and statistical power of mechanosensory models. A wealth of research into human tactile stimulus perception may be extended to the quantification of laboratory animal behaviour. We start by reviewing each step of the test, from its design and application through to data analysis. Filament range is assessed as a whole; possible test designs are compared; techniques of filament application to mice and rats are considered; curve fitting software is introduced; possibilities for data pooling and curve fitting are evaluated. A rational update of classical methods in line with recent advances in psychometrics and supported by open source software is expected to improve data homogeneity, and Reduce and Refine animal use in accord with the '3R' principles.
Vivid Motor Imagery as an Adaptation Method for Head Turns on a Short-Arm Centrifuge
NASA Technical Reports Server (NTRS)
Newby, N. J.; Mast, F. W.; Natapoff, A.; Paloski, W. H.
2006-01-01
from one another. For the perceived duration of sensations, the CG group again exhibited the least amount of adaptation. However, the rates of adaptation of the PA and the MA groups were indistinguishable, suggesting that the imagined pseudostimulus appeared to be just as effective a means of adaptation as the actual stimulus. The MA group's rate of adaptation to motion sickness symptoms was also comparable to the PA group. The use of vivid motor imagery may be an effective method for adapting to the illusory sensations and motion sickness symptoms produced by cross-coupled stimuli. For space-based AG applications, this technique may prove quite useful in retaining astronauts considered highly susceptible to motion sickness as it reduces the number of actual CCS required to attain adaptation.
Adaptation of LASCA method for diagnostics of malignant tumours in laboratory animals
NASA Astrophysics Data System (ADS)
Ul'yanov, S. S.; Laskavyi, V. N.; Glova, Alina B.; Polyanina, T. I.; Ul'yanova, O. V.; Fedorova, V. A.; Ul'yanov, A. S.
2012-05-01
The LASCA method is adapted for diagnostics of malignant neoplasms in laboratory animals. Tumours are studied in mice of the Balb/c inbred line after inoculation of cells of the syngeneic myeloma cell line Sp.2/0-Ag.8. The appropriateness of using the tLASCA method in tumour investigations is substantiated; its advantages in comparison with the sLASCA method are demonstrated. It is found that the most informative characteristic, indicating the presence of a tumour, is the fractal dimension of LASCA images.
Adaptive stochastic resonance method for impact signal detection based on sliding window
NASA Astrophysics Data System (ADS)
Li, Jimeng; Chen, Xuefeng; He, Zhengjia
2013-04-01
To address outstanding problems in impact signal detection with stochastic resonance (SR) in the fault diagnosis of rotating machinery, such as the choice of an SR measurement index and the detection of impact signals with differing amplitudes, the present study proposes an adaptive SR method for impact signal detection based on a sliding window, developed by analyzing the SR characteristics of impact signals. The method achieves the optimal selection of system parameters by means of a weighted kurtosis index constructed from the kurtosis index and the correlation coefficient, and it detects weak impact signals through a data-segmentation algorithm based on a sliding window, even when the differences between impact amplitudes are great. The algorithm flow of the adaptive SR method is given, and the effectiveness of the method is verified by contrasting it with the traditional SR method in simulation experiments. Finally, the proposed method is applied to gearbox fault diagnosis in a hot strip finishing mill, where two local faults located on the pinion are successfully identified. It can therefore be concluded that the proposed method is of great practical value in engineering.
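A minimal sketch of the measurement-index side of the method: the abstract says the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient, but not how, so the product used below is an assumption; sliding-window segmentation is shown as plain overlapping slices.

```python
# Weighted kurtosis index and sliding-window segmentation sketch.
# The product form of the index is an assumption; the abstract does not
# specify the exact combination of kurtosis and correlation coefficient.
import math

def kurtosis(x):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    if var == 0:
        return 0.0
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / var ** 2

def corrcoef(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def weighted_kurtosis(output, reference):
    """Assumed form: kurtosis of the SR output scaled by its correlation
    with a reference signal, so impulsiveness and similarity both count."""
    return kurtosis(output) * abs(corrcoef(output, reference))

def sliding_windows(signal, width, step):
    """Segment data so impacts of different amplitude are treated locally."""
    return [signal[i:i + width]
            for i in range(0, len(signal) - width + 1, step)]
```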
NASA Astrophysics Data System (ADS)
Kim, Nakwan
Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability for a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced. It is especially useful when there is limited information concerning the plant model, and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. Design of a neural network adaptive control that ensures asymptotically stable tracking performance is also addressed.
NASA Astrophysics Data System (ADS)
Sheng, Qin; Sun, Hai-wei
2016-11-01
This study concerns the asymptotic stability of an eikonal, or ray, transformation based Peaceman-Rachford splitting method for solving the paraxial Helmholtz equation with high wave numbers. Arbitrary nonuniform grids are considered in transverse and beam propagation directions. The differential equation targeted has been used for modeling propagations of high intensity laser pulses over a long distance without diffractions. Self-focusing of high intensity beams may be balanced with the de-focusing effect of created ionized plasma channel in the situation, and applications of grid adaptations are frequently essential. It is shown rigorously that the fully discretized oscillation-free decomposition method on arbitrary adaptive grids is asymptotically stable with a stability index one. Simulation experiments are carried out to illustrate our concern and conclusions.
Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics
NASA Technical Reports Server (NTRS)
Stowers, S. T.; Bass, J. M.; Oden, J. T.
1993-01-01
A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies developed are classified as adaptive methods: they use error estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme that attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.
Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai
2015-01-01
The key problem of computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues fast and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and their neighborhood, and it effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms. PMID:25945120
An a posteriori-driven adaptive Mixed High-Order method with application to electrostatics
NASA Astrophysics Data System (ADS)
Di Pietro, Daniele A.; Specogna, Ruben
2016-12-01
In this work we propose an adaptive version of the recently introduced Mixed High-Order method and showcase its performance on a comprehensive set of academic and industrial problems in computational electromagnetism. The latter include, in particular, the numerical modeling of comb-drive and MEMS devices. Mesh adaptation is driven by newly derived, residual-based error estimators. The resulting method has several advantageous features: it supports fairly general meshes, it enables arbitrary approximation orders, and it has a moderate computational cost thanks to hybridization and static condensation. The a posteriori-driven mesh refinement is shown to significantly enhance performance on problems featuring singular solutions, making it possible to fully exploit the high order of approximation.
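The a posteriori-driven refinement loop has the generic shape sketched below on 1-D intervals. The error indicator and the marking rule (refine elements whose indicator exceeds a fraction of the largest one) are placeholders, since the paper's residual-based estimators are specific to the Mixed High-Order method.

```python
# Generic a posteriori-driven refinement sketch on a 1-D interval mesh.
# Indicator and marking fraction are illustrative placeholders.
def refine(mesh, indicator, frac=0.5):
    """Bisect every interval whose indicator exceeds frac * max indicator."""
    etas = [indicator(a, b) for a, b in mesh]
    threshold = frac * max(etas)
    new_mesh = []
    for (a, b), eta in zip(mesh, etas):
        if eta > threshold:
            mid = 0.5 * (a + b)
            new_mesh.extend([(a, mid), (mid, b)])  # mark and bisect
        else:
            new_mesh.append((a, b))                # keep element
    return new_mesh
```

With an indicator that is large near a singular point, repeated calls concentrate elements there, which is the behavior the abstract reports for singular solutions.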
An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems
NASA Astrophysics Data System (ADS)
Padgett, Jill M. A.; Ilie, Silvana
2016-03-01
Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
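Stripped of diffusion and spatial structure, the tau-leaping idea can be sketched on a single decay reaction A -> 0. The step-size rule below (bound the expected relative population change by eps) is a common textbook choice, not the paper's adaptive, path-preserving scheme, which targets the full Reaction-Diffusion Master Equation.

```python
# Tau-leaping sketch for one decay reaction A -> 0 with a simple adaptive
# step-size rule. Much simpler than the paper's RDME setting.
import math
import random

def poisson(lam, rng):
    """Knuth's method; adequate for the modest means used here."""
    if lam <= 0:
        return 0
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        p *= rng.random()
        k += 1
    return k - 1

def tau_leap_decay(n0, rate, t_end, eps=0.03, seed=1):
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > 0 and t < t_end:
        a = rate * n                        # total propensity of A -> 0
        tau = min(eps * n / a, t_end - t)   # bound expected relative loss
        n -= min(poisson(a * tau, rng), n)  # decays fired in [t, t + tau)
        t += tau
    return n
```

For n0 = 1000 and rate 1, the population after one time unit should scatter around 1000/e, about 368.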
The Limits to Adaptation: A Systems Approach
The ability to adapt to climate change is delineated by capacity thresholds, after which climate damages begin to overwhelm the adaptation response. Such thresholds depend upon physical properties (natural processes and engineering parameters), resource constraints (expressed th...
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord; Cornett, Frank N.
2008-10-07
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord; Cornett, Frank N
2011-10-04
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.
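A behavioral sketch of the deskew idea in both patents above: treat a known training pattern as the reference, detect each lane's skew as the best-aligning shift, and compensate with a matching delay. Real implementations use calibrated delay lines in hardware; the list-based model below is purely illustrative.

```python
# Behavioral deskew sketch: per-lane skew detection against a reference
# pattern, then delay compensation. Hardware details are abstracted away.
def detect_skew(reference, lane, max_skew=8):
    """Return the shift (in samples) of lane relative to the reference."""
    def score(shift):
        return sum(1 for i in range(len(reference) - max_skew)
                   if reference[i] == lane[i + shift])
    return max(range(max_skew + 1), key=score)

def deskew(reference, lanes, max_skew=8):
    """Delay every lane so all lanes line up with the reference clock."""
    out = []
    for lane in lanes:
        s = detect_skew(reference, lane, max_skew)
        out.append(lane[s:] + [0] * s)  # drop leading skewed samples
    return out
```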
A Mass Conservation Algorithm for Adaptive Unrefinement Meshes Used by Finite Element Methods
2012-01-01
International Conference on Computational Science, ICCS 2012. A Mass Conservation Algorithm For Adaptive Unrefinement Meshes Used By Finite Element Methods, Hung V. Nguyen. … velocity fields, and chemical distribution, as well as conserve mass, especially for water quality applications. Solution accuracy depends highly on mesh …
Anderson, R W; Pember, R B; Elliot, N S
2000-09-26
A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.
NASA Astrophysics Data System (ADS)
Pedretti, Daniele; Fernàndez-Garcia, Daniel
2013-09-01
Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each method is likely to estimate a specific portion of the BTCs. Although global methods offer a valid approach to estimating the early-time behavior and peak of BTCs, they exhibit important fluctuations at the tails, where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore a new method is proposed that combines the strengths of both KDE approaches. The proposed approach is universal and needs only one parameter (α), which depends slightly on the shape of the BTCs. Results show that, for the tested cases, heavily tailed BTCs are properly reconstructed with α ≈ 0.5.
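The global vs. locally adaptive kernel distinction can be sketched with a one-dimensional Gaussian KDE: a pilot estimate with one global bandwidth, then per-particle bandwidths scaled by the pilot density to the power -α, so sparse tails get wider kernels. This is the classic Abramson-style construction with the paper's α exposed as a parameter, not the authors' exact estimator.

```python
# Global vs. locally adaptive Gaussian KDE sketch (Abramson-style scaling).
# Not the paper's estimator; alpha plays the role of its single parameter.
import math

def gauss(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def kde_global(x, particles, h):
    """Fixed-bandwidth KDE: good near the peak, noisy in sparse tails."""
    return sum(gauss((x - p) / h) for p in particles) / (len(particles) * h)

def kde_adaptive(x, particles, h, alpha=0.5):
    """Per-particle bandwidths widen where the pilot density is small."""
    pilot = [kde_global(p, particles, h) for p in particles]
    gmean = math.exp(sum(math.log(d) for d in pilot) / len(pilot))
    local_h = [h * (d / gmean) ** -alpha for d in pilot]
    return sum(gauss((x - p) / hi) / hi
               for p, hi in zip(particles, local_h)) / len(particles)
```

With a cluster of particles plus one stray "tail" particle, the adaptive estimate is higher than the global one beyond the stray particle, illustrating the improved tail reconstruction discussed above.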
Adaptive reproducing kernel particle method for extraction of the cortical surface.
Xu, Meihe; Thompson, Paul M; Toga, Arthur W
2006-06-01
We propose a novel adaptive approach based on the Reproducing Kernel Particle Method (RKPM) to extract the cortical surfaces of the brain from three-dimensional (3-D) magnetic resonance images (MRIs). To formulate the discrete equations of the deformable model, a flexible particle shape function is employed in the Galerkin approximation of the weak form of the equilibrium equations. The proposed support generation method ensures that the support of all particles covers the entire computational domain. The deformable model is adaptively adjusted by dilating the shape function and by inserting or merging particles in high-curvature regions or regions stopped by the target boundary. The shape function of the particle, with a dilation parameter, is adaptively constructed in response to particle insertion or merging. The proposed method offers flexibility in representing highly convolved structures and in refining the deformable models. Self-intersection of the surface during evolution is prevented by tracing backward along the gradient descent direction from the crest interface of the distance field, which is computed by fast marching. These operations involve a significant computational cost. The initial model for the deformable surface is simple and requires no prior knowledge of the segmented structure. No specific template is required, e.g., an average cortical surface obtained from many subjects. The extracted cortical surface efficiently localizes the depths of the cerebral sulci, unlike some other active surface approaches that penalize regions of high curvature. Comparisons with manually segmented landmark data are provided to demonstrate the high accuracy of the proposed method. We also compare the proposed method to the finite element method, and to a commonly used cortical surface extraction approach, the CRUISE method. We also show that the independence of the shape functions of the RKPM from the underlying mesh enhances the convergence speed of the deformable model.
Threshold magnitudes for a multichannel correlation detector in background seismicity
Carmichael, Joshua D.; Hartse, Hans
2016-04-01
Colocated explosive sources often produce correlated seismic waveforms. Multichannel correlation detectors identify these signals by scanning template waveforms recorded from known reference events against "target" data to find similar waveforms. This screening problem is challenged at the thresholds required to monitor smaller explosions, often because non-target signals falsely trigger such detectors. Therefore, it is generally unclear what thresholds will reliably identify a target explosion while screening non-target background seismicity. Here, we estimate threshold magnitudes for hypothetical explosions located at the North Korean nuclear test site over six months of 2010, by processing International Monitoring System (IMS) array data with a multichannel waveform correlation detector. Our method (1) accounts for low-amplitude background seismicity that falsely triggers correlation detectors but is unidentifiable with conventional power beams, (2) adapts to diurnally variable noise levels, and (3) uses source-receiver reciprocity concepts to estimate thresholds for explosions spatially separated from the template source. Furthermore, we find that underground explosions with body wave magnitudes mb = 1.66 are detectable at the IMS array USRK with probability 0.99, when using template waveforms consisting only of P-waves, without false alarms. We conservatively find that these thresholds also increase by up to a magnitude unit for sources located 4 km or more from the Feb. 12, 2013 announced nuclear test.
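A single-channel sketch of the correlation-detector core described above: slide a template over target data, compute the normalized correlation at each lag, and report lags above a threshold. The multichannel array processing and magnitude estimation in the paper are omitted, and the 0.8 threshold is an arbitrary illustration.

```python
# Single-channel correlation detector sketch: normalized cross-correlation
# of a template against target data, thresholded to produce detections.
import math

def norm_corr(template, window):
    mt = sum(template) / len(template)
    mw = sum(window) / len(window)
    num = sum((a - mt) * (b - mw) for a, b in zip(template, window))
    den = math.sqrt(sum((a - mt) ** 2 for a in template) *
                    sum((b - mw) ** 2 for b in window))
    return num / den if den else 0.0

def detections(template, data, threshold=0.8):
    """Lags at which the data correlates with the template above threshold."""
    n = len(template)
    return [i for i in range(len(data) - n + 1)
            if norm_corr(template, data[i:i + n]) >= threshold]
```

Because the correlation is normalized, a scaled copy of the template (a colocated event of different size) still scores 1.0, which is exactly why such detectors reach low threshold magnitudes.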
Crowley, Stephanie J.; Suh, Christina; Molina, Thomas A.; Fogg, Louis F.; Sharkey, Katherine M.; Carskadon, Mary A.
2016-01-01
Objective/Background: Circadian rhythm sleep-wake disorders often manifest during the adolescent years. Measurement of circadian phase, such as the Dim Light Melatonin Onset (DLMO), improves diagnosis and treatment of these disorders, but financial and time costs limit the use of DLMO phase assessments in clinic. The current analysis aims to inform a cost-effective and efficient protocol to measure the DLMO in older adolescents by reducing the number of samples and the total sampling duration. Patients/Methods: A total of 66 healthy adolescents (26 males) aged 14.8 to 17.8 years participated in a study in which sleep was fixed for one week before they came to the laboratory for saliva collection in dim light (<20 lux). Two partial 6-h salivary melatonin profiles were derived for each participant. Both profiles began 5 h before bedtime and ended 1 h after bedtime, but one profile was derived from samples taken every 30 min (13 samples) and the other from samples taken every 60 min (7 samples). Three standard thresholds (the mean of the first 3 melatonin values + 2 SDs, 3 pg/mL, and 4 pg/mL) were used to compute the DLMO. Agreement between DLMOs derived from the 30-min and 60-min sampling rates was determined using a Bland-Altman analysis; agreement between sampling-rate DLMOs was defined as ± 1 h. Results and Conclusions: Within a 6-h sampling window, 60-min sampling provided DLMO estimates that were within ± 1 h of the DLMO from 30-min sampling, but only when an absolute threshold (3 pg/mL or 4 pg/mL) was used to compute the DLMO. Future analyses should be extended to include adolescents with circadian rhythm sleep-wake disorders. PMID:27318227
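The absolute-threshold DLMO computation compared above can be sketched as the first upward threshold crossing, linearly interpolated between the two bracketing samples. The sample times and melatonin values used in the example are invented.

```python
# Absolute-threshold DLMO sketch: first upward crossing of a fixed
# melatonin threshold (e.g. 4 pg/mL), linearly interpolated.
def dlmo(times_h, melatonin_pg_ml, threshold=4.0):
    """Clock time of the first upward threshold crossing; None if none."""
    samples = list(zip(times_h, melatonin_pg_ml))
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if v0 < threshold <= v1:
            return t0 + (threshold - v0) / (v1 - v0) * (t1 - t0)
    return None
```

Running the same function on a profile subsampled to hourly values gives the coarse-sampling estimate whose agreement with the 30-min estimate the study quantifies.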
Daneshmand, Saeed; Marathe, Thyagaraja; Lachapelle, Gérard
2016-10-31
The use of antenna arrays in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its superior capability to suppress both narrowband and wideband interference. However, the phase distortions resulting from array processing may limit the applicability of these methods for high precision applications using carrier phase based positioning techniques. This paper studies the phase distortions occurring with the adaptive blind beamforming method in which satellite angle of arrival (AoA) information is not employed in the optimization problem. To cater to non-stationary interference scenarios, the array weights of the adaptive beamformer are continuously updated. The effects of these continuous updates on the tracking parameters of a GNSS receiver are analyzed. The second part of this paper focuses on reducing the phase distortions during the blind beamforming process in order to allow the receiver to perform carrier phase based positioning by applying a constraint on the structure of the array configuration and by compensating the array uncertainties. Limitations of the previous methods are studied and a new method is proposed that keeps the simplicity of the blind beamformer structure and, at the same time, reduces tracking degradations while achieving millimetre level positioning accuracy in interference environments. To verify the applicability of the proposed method and analyze the degradations, array signals corresponding to the GPS L1 band are generated using a combination of hardware and software simulators. Furthermore, the amount of degradation and performance of the proposed method under different conditions are evaluated based on Monte Carlo simulations.
Atzberger, Paul J.
2010-05-01
Stochastic partial differential equations are introduced for the continuum concentration fields of reaction-diffusion systems. The stochastic partial differential equations account for fluctuations arising from the finite number of molecules which diffusively migrate and react. Spatially adaptive stochastic numerical methods are developed for approximation of the stochastic partial differential equations. The methods allow for adaptive meshes with multiple levels of resolution, Neumann and Dirichlet boundary conditions, and domains having geometries with curved boundaries. A key issue addressed by the methods is the formulation of consistent discretizations for the stochastic driving fields at coarse-refined interfaces of the mesh and at boundaries. Methods are also introduced for the efficient generation of the required stochastic driving fields on such meshes. As a demonstration of the methods, investigations are made of the role of fluctuations in a biological model for microorganism direction sensing based on concentration gradients. Also investigated is a mechanism for spatial pattern formation induced by fluctuations. The discretization approaches introduced for SPDEs have the potential to be widely applicable in the development of numerical methods for the study of spatially extended stochastic systems.
Threshold Concepts in Biochemistry
ERIC Educational Resources Information Center
Loertscher, Jennifer
2011-01-01
Threshold concepts can be identified for any discipline and provide a framework for linking student learning to curricular design. Threshold concepts represent a transformed understanding of a discipline, without which the learner cannot progress and are therefore pivotal in learning in a discipline. Although threshold concepts have been…
NASA Astrophysics Data System (ADS)
Abedini, Mohammad; Nojoumian, Mohammad Ali; Salarieh, Hassan; Meghdari, Ali
2015-08-01
In this paper, model reference control of a fractional order system is discussed. In order to control the fractional order plant, discrete-time approximation methods are applied. The plant and the reference model are discretized by the Grünwald-Letnikov definition of the fractional order derivative, using the "Short Memory Principle". Unknown parameters of the fractional order system appear in the discrete-time approximate model as combinations of the parameters of the main system. The discrete-time MRAC via RLS identification is modified to estimate the parameters and control the fractional order plant. Numerical results show the effectiveness of the proposed method of model reference adaptive control.
Souza-Junior, Eduardo José; de Souza-Régis, Marcos Ribeiro; Alonso, Roberta Caroline Bruschi; de Freitas, Anderson Pinheiro; Sinhoreti, Mario Alexandre Coelho; Cunha, Leonardo Gonçalves
2011-01-01
The aim of the present study was to evaluate the influence of curing methods and composite volumes on the marginal and internal adaptation of composite restoratives. Two cavities with different volumes (Lower volume: 12.6 mm(3); Higher volume: 24.5 mm(3)) were prepared on the buccal surface of 60 bovine teeth and restored using Filtek Z250 in bulk filling. For each cavity, specimens were randomly assigned into three groups according to the curing method (n=10): 1) continuous light (CL: 27 seconds at 600 mW/cm(2)); 2) soft-start (SS: 10 seconds at 150 mW/cm(2)+24 seconds at 600 mW/cm(2)); and 3) pulse delay (PD: five seconds at 150 mW/cm(2)+three minutes with no light+25 seconds at 600 mW/cm(2)). The radiant exposure for all groups was 16 J/cm(2). Marginal adaptation was measured with the dye staining gap procedure, using Caries Detector. Outer margins were stained for five seconds and the gap percentage was determined using digital images on a computer measurement program (Image Tool). Then, specimens were sectioned in slices and stained for five seconds, and the internal gaps were measured using the same method. Data were submitted to two-way analysis of variance and Tukey test (p<0.05). Composite volume had a significant influence on superficial and internal gap formation, depending on the curing method. For CL groups, restorations with higher volume showed higher marginal gap incidence than did the lower volume restorations. Additionally, the effect of the curing method depended on the volume. Regarding marginal adaptation, SS resulted in a significant reduction of gap formation, when compared to CL, for higher volume restorations. For lower volume restorations, there was no difference among the curing methods. For internal adaptation, the modulated curing methods SS and PD promoted a significant reduction of gap formation, when compared to CL, only for the lower volume restoration. Therefore, in similar conditions of the cavity configuration, the higher the
An Adaptive INS-Aided PLL Tracking Method for GNSS Receivers in Harsh Environments.
Cong, Li; Li, Xin; Jin, Tian; Yue, Song; Xue, Rui
2016-01-23
As the weak link in global navigation satellite system (GNSS) signal processing, the phase-locked loop (PLL) is prone to frequent cycle slips and loss of lock as a result of higher vehicle dynamics and lower signal-to-noise ratios. With inertial navigation system (INS) aid, PLL tracking performance can be improved. However, for harsh environments with high dynamics and signal attenuation, the traditional INS-aided PLL with fixed loop parameters has limited ability to improve tracking adaptability. In this paper, an adaptive INS-aided PLL capable of adjusting its noise bandwidth and coherent integration time is proposed. Through theoretical analysis, the relation between INS-aided PLL phase tracking error and carrier to noise density ratio (C/N₀), vehicle dynamics, aiding information update time, noise bandwidth, and coherent integration time is established. The relation formulae are used to choose the optimal integration time and bandwidth for a given application under the minimum tracking error criterion. Software and hardware simulation results verify the correctness of the theoretical analysis, and demonstrate that the adaptive tracking method can effectively improve the PLL tracking ability and integrated GNSS/INS navigation performance. For harsh environments, the tracking sensitivity is increased by 3 to 5 dB, velocity errors are decreased by 36% to 50% and position errors are decreased by 6% to 24% when compared with other INS-aided PLL methods.
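The abstract's minimum-tracking-error selection of bandwidth and integration time can be sketched with a textbook PLL error budget (thermal jitter plus second-order-loop dynamic stress, Kaplan-style rules of thumb); the paper's own relation formulae are not reproduced here, and the candidate grids and function names are illustrative assumptions.

```python
import math
from itertools import product

def pll_error_deg(bn, t, cn0_dbhz, accel_hz_s):
    """Rule-of-thumb 3-sigma PLL error budget in degrees.

    bn: loop noise bandwidth [Hz]; t: coherent integration time [s];
    cn0_dbhz: carrier-to-noise density ratio [dB-Hz];
    accel_hz_s: residual line-of-sight dynamics after INS aiding [Hz/s].
    """
    c = 10 ** (cn0_dbhz / 10.0)                      # C/N0, linear [Hz]
    # Thermal noise jitter of a PLL (radians -> degrees).
    thermal = (360 / (2 * math.pi)) * math.sqrt((bn / c) * (1 + 1 / (2 * t * c)))
    # Steady-state dynamic stress error of a 2nd-order loop (omega0 = Bn / 0.53).
    dynamic = 0.53 ** 2 * (accel_hz_s * 360) / bn ** 2
    return 3 * thermal + dynamic

def best_loop_settings(cn0_dbhz, accel_hz_s,
                       bands=(2, 5, 10, 15, 20),
                       times=(0.001, 0.005, 0.01, 0.02)):
    """Minimum-tracking-error choice over a candidate grid, keeping Bn*T small."""
    grid = [(b, t) for b, t in product(bands, times) if b * t <= 0.1]
    return min(grid, key=lambda p: pll_error_deg(p[0], p[1], cn0_dbhz, accel_hz_s))
```

Under this model a weak signal (low C/N₀) drives the search toward a narrower bandwidth and longer integration time than a strong one, which is the qualitative behavior the adaptive loop exploits.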
Classical FEM-BEM coupling methods: nonlinearities, well-posedness, and adaptivity
NASA Astrophysics Data System (ADS)
Aurada, Markus; Feischl, Michael; Führer, Thomas; Karkulik, Michael; Melenk, Jens Markus; Praetorius, Dirk
2013-04-01
We consider a (possibly) nonlinear interface problem in 2D and 3D, which is solved by use of various adaptive FEM-BEM coupling strategies, namely the Johnson-Nédélec coupling, the Bielak-MacCamy coupling, and Costabel's symmetric coupling. We provide a framework to prove that the continuous as well as the discrete Galerkin solutions of these coupling methods additionally solve an appropriate operator equation with a Lipschitz continuous and strongly monotone operator. Therefore, the original coupling formulations are well-defined, and the Galerkin solutions are quasi-optimal in the sense of a Céa-type lemma. For the respective Galerkin discretizations with lowest-order polynomials, we provide reliable residual-based error estimators. Together with an estimator reduction property, we prove convergence of the adaptive FEM-BEM coupling methods. A key ingredient in the proof of the estimator reduction is a set of novel inverse-type estimates for the involved boundary integral operators. Numerical experiments conclude the work and compare performance and effectivity of the three adaptive coupling procedures in the presence of generic singularities.
Parallel level-set methods on adaptive tree-based grids
NASA Astrophysics Data System (ADS)
Mirzadeh, Mohammad; Guittet, Arthur; Burstedde, Carsten; Gibou, Frederic
2016-10-01
We present scalable algorithms for the level-set method on dynamic, adaptive Quadtree and Octree Cartesian grids. The algorithms are fully parallelized and implemented using the MPI standard and the open-source p4est library. We solve the level set equation with a semi-Lagrangian method which, similar to its serial implementation, is free of any time-step restrictions. This is achieved by introducing a scalable global interpolation scheme on adaptive tree-based grids. Moreover, we present a simple parallel reinitialization scheme using the pseudo-time transient formulation. Both parallel algorithms scale on the Stampede supercomputer, where we are currently using up to 4096 CPU cores, the limit of our current account. Finally, a relevant application of the algorithms is presented in modeling a crystallization phenomenon by solving a Stefan problem, illustrating a level of detail that would be impossible to achieve without a parallel adaptive strategy. We believe that the algorithms presented in this article will be of interest and useful to researchers working with the level-set framework and modeling multi-scale physics in general.
Lee, W H; Kim, T-S; Cho, M H; Ahn, Y B; Lee, S Y
2006-12-07
In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems.
Pulse front adaptive optics: a new method for control of ultrashort laser pulses.
Sun, Bangshan; Salter, Patrick S; Booth, Martin J
2015-07-27
Ultrafast lasers enable a wide range of physics research, and the manipulation of short pulses is a critical part of the ultrafast tool kit. Current methods of laser pulse shaping are usually considered separately in either the spatial or the temporal domain, but laser pulses are complex entities existing in four dimensions, so full freedom of manipulation requires advanced forms of spatiotemporal control. We demonstrate that a combination of adaptable diffractive and reflective optical elements - a liquid crystal spatial light modulator (SLM) and a deformable mirror (DM) - enables decoupled spatial control over the pulse front (temporal group delay) and the phase front of an ultrashort pulse. Pulse front modulation was confirmed through autocorrelation measurements. This new adaptive optics technique, the first to enable in principle arbitrary shaping of the pulse front, promises a further level of control for ultrafast lasers.
A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method
NASA Astrophysics Data System (ADS)
Bush, I. J.; Todorov, I. T.; Smith, W.
2006-09-01
The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowtz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe software adaptations undertaken to import this functionality and provide a review of its performance.
An adaptive two-stage dose-response design method for establishing Proof of Concept
Franchetti, Yoko; Anderson, Stewart J.; Sampson, Allan R.
2013-01-01
We propose an adaptive two-stage dose-response design where a pre-specified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish ‘global’ PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs. PMID:23957520
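Stage-wise combination of PoC evidence is often sketched with an inverse-normal combination test; the paper uses a conditional error function, which is related but not identical, so the function name and equal weights below are illustrative assumptions.

```python
from statistics import NormalDist

def combine_stages(p1, p2, w1=0.5 ** 0.5, w2=0.5 ** 0.5):
    """Inverse-normal combination of one-sided stage-wise p-values.

    The weights must satisfy w1**2 + w2**2 = 1; the combined statistic is
    z = w1 * z1 + w2 * z2, which is standard normal under the global null.
    """
    nd = NormalDist()
    z = w1 * nd.inv_cdf(1 - p1) + w2 * nd.inv_cdf(1 - p2)
    return 1 - nd.cdf(z)
```

For example, two moderately significant stages such as p1 = 0.04 and p2 = 0.03 combine to a global p-value well below a one-sided 0.025 level, establishing 'global' PoC even though each stage alone would not at that level.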
Computation of variably saturated subsurface flow by adaptive mixed hybrid finite element methods
NASA Astrophysics Data System (ADS)
Bause, M.; Knabner, P.
2004-06-01
We present adaptive mixed hybrid finite element discretizations of the Richards equation, a nonlinear parabolic partial differential equation modeling the flow of water into a variably saturated porous medium. The approach simultaneously constructs approximations of the flux and the pressure head in Raviart-Thomas spaces. The resulting nonlinear systems of equations are solved by a Newton method. For the linear problems of the Newton iteration a multigrid algorithm is used. We consider two different kinds of error indicators for space adaptive grid refinement: superconvergence and residual based indicators. They can be calculated easily by means of the available finite element approximations. This seems attractive for computations since no additional (sub-)problems have to be solved. Computational experiments conducted for realistic water table recharge problems illustrate the effectiveness and robustness of the approach.
An adaptive segment method for smoothing lidar signal based on noise estimation
NASA Astrophysics Data System (ADS)
Wang, Yuzhao; Luo, Pingping
2014-10-01
An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is chosen for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead derives changing end points from each signal, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and an average smoothing method is then applied within each segment. An iterative process is required to reduce the end-point aberration effect of the average smoothing, and two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which avoids frequency-domain disturbance. In the experimental work, a lidar echo was simulated, assumed to be produced by a space-borne lidar (e.g., CALIOP), and white Gaussian noise was added to represent the random noise from the environment and the detector. The ASSM was applied to the noisy echo to filter the noise; in the test, N was set to 3 and two iterations were used. The results show that the signal can be smoothed adaptively by the ASSM, but N and the number of iterations may need to be optimized when the ASSM is applied to a different lidar.
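A minimal sketch of the segmentation-and-smoothing rule described above: 3σ noise level taken from a background stretch, segment end points placed where adjacent samples differ by more than 3Nσ, windows of half the segment length, and repeated averaging passes. The function name and NumPy realization are illustrative, not the authors' code.

```python
import numpy as np

def assm_smooth(signal, background, n=3, iterations=2):
    """Adaptive segmentation smoothing (ASSM-style sketch).

    `background` is a noise-only portion of the record; 3 * its std
    defines the noise level. Jumps larger than 3 * n * sigma split the
    signal into segments, each smoothed with a moving average whose
    window is half the segment length, repeated `iterations` times.
    """
    sigma = np.std(background)
    threshold = 3 * n * sigma

    # Segment end points: positions where adjacent samples jump strongly.
    jumps = np.where(np.abs(np.diff(signal)) > threshold)[0] + 1
    edges = np.concatenate(([0], jumps, [len(signal)]))

    out = signal.astype(float).copy()
    for start, stop in zip(edges[:-1], edges[1:]):
        seg = out[start:stop]
        win = max(1, len(seg) // 2)
        kernel = np.ones(win) / win
        for _ in range(iterations):
            # 'same'-mode moving average; iterating reduces the
            # end-point aberration the abstract mentions.
            seg = np.convolve(seg, kernel, mode="same")
        out[start:stop] = seg
    return out
```

Because the step between segments exceeds the 3Nσ threshold, the edge is preserved while each side is smoothed independently.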
Adaptive Projection Subspace Dimension for the Thick-Restart Lanczos Method
Yamazaki, Ichitaro; Bai, Zhaojun; Simon, Horst; Wang, Lin-Wang; Wu, K.
2008-10-01
The Thick-Restart Lanczos (TRLan) method is an effective method for solving large-scale Hermitian eigenvalue problems. However, its performance strongly depends on the dimension of the projection subspace. In this paper, we propose an objective function to quantify the effectiveness of a chosen subspace dimension, and then introduce an adaptive scheme to dynamically adjust the dimension at each restart. An open-source software package, nu-TRLan, which implements the TRLan method with this adaptive projection subspace dimension is available in the public domain. The numerical results of synthetic eigenvalue problems are presented to demonstrate that nu-TRLan achieves speedups of between 0.9 and 5.1 over the static method using a default subspace dimension. To demonstrate the effectiveness of nu-TRLan in a real application, we apply it to the electronic structure calculations of quantum dots. We show that nu-TRLan can achieve speedups of greater than 1.69 over the state-of-the-art eigensolver for this application, which is based on the Conjugate Gradient method with a powerful preconditioner.
Cen, Guanjun; Yu, Yonghao; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao
2015-01-01
In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks' rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby's growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods.
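The idea of separating instars at minima of a density whose bandwidth adapts to the local data can be sketched with a generic Abramson-style variable-bandwidth KDE; the paper's own Brooks'-rule-derived bandwidth selector is not reproduced, and the pilot bandwidth and function names below are assumptions.

```python
import numpy as np

def adaptive_kde(samples, grid, alpha=0.5):
    """Variable-bandwidth (Abramson-style) kernel density estimate.

    Local bandwidths widen where a pilot fixed-bandwidth estimate is
    sparse, strengthening the multimodal structure of the data.
    """
    x = np.asarray(samples, float)
    n = x.size
    h0 = 1.06 * x.std() * n ** (-0.2)            # Silverman pilot bandwidth
    k = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    pilot = k((x[:, None] - x[None, :]) / h0).sum(1) / (n * h0)
    g = np.exp(np.mean(np.log(pilot)))           # geometric mean density
    h = h0 * (g / pilot) ** alpha                # wide bandwidth where sparse
    return (k((grid[:, None] - x[None, :]) / h[None, :]) / h[None, :]).sum(1) / n

def instar_divisions(widths, npts=400):
    """Candidate divisions between instars: local minima of the density."""
    w = np.asarray(widths, float)
    grid = np.linspace(w.min(), w.max(), npts)
    d = adaptive_kde(w, grid)
    interior = (d[1:-1] < d[:-2]) & (d[1:-1] < d[2:])
    return grid[1:-1][interior]
```

On a bimodal measurement (two instar clusters), the valley between the modes is returned as the division between stages.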
Jokinen, Emma; Yrttiaho, Santeri; Pulakka, Hannu; Vainio, Martti; Alku, Paavo
2012-12-01
Post-filtering can be utilized to improve the quality and intelligibility of telephone speech. Previous studies have shown that energy reallocation with a high-pass type filter works effectively in improving the intelligibility of speech in difficult noise conditions. The present study introduces a signal-to-noise ratio adaptive post-filtering method that utilizes energy reallocation to transfer energy from the first formant to higher frequencies. The proposed method adapts to the level of the background noise so that, in favorable noise conditions, the post-filter has a flat frequency response and the effect of the post-filtering is increased as the level of the ambient noise increases. The performance of the proposed method is compared with a similar post-filtering algorithm and unprocessed speech in subjective listening tests which evaluate both intelligibility and listener preference. The results indicate that both of the post-filtering methods maintain the quality of speech in negligible noise conditions and are able to provide intelligibility improvement over unprocessed speech in adverse noise conditions. Furthermore, the proposed post-filtering algorithm performs better than the other post-filtering method under evaluation in moderate to difficult noise conditions, where intelligibility improvement is mostly required.
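As a rough illustration of noise-level-adaptive post-filtering, the sketch below scales a first-order pre-emphasis with the measured noise level: a flat response in quiet conditions and a stronger high-frequency tilt as noise grows. This is a simplified stand-in for the formant-energy reallocation described above; the ramp thresholds and parameter names are assumptions.

```python
import numpy as np

def adaptive_postfilter(frame, noise_rms, noise_floor=0.01,
                        noise_full=0.3, beta_max=0.9):
    """Noise-adaptive pre-emphasis: y[n] = x[n] - beta * x[n-1].

    beta ramps from 0 (flat response, negligible noise) up to beta_max
    as the ambient noise RMS grows from noise_floor to noise_full,
    shifting energy from low to high frequencies.
    """
    ramp = (noise_rms - noise_floor) / (noise_full - noise_floor)
    beta = beta_max * min(1.0, max(0.0, ramp))
    out = np.asarray(frame, float).copy()
    out[1:] -= beta * out[:-1].copy()    # first-order high-pass tilt
    return out
```

In quiet conditions the frame passes through unchanged; in heavy noise, low-frequency components (such as first-formant energy) are strongly attenuated relative to higher frequencies.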
A Parallel Adaptive Wavelet Method for the Simulation of Compressible Reacting Flows
NASA Astrophysics Data System (ADS)
Zikoski, Zachary; Paolucci, Samuel
2011-11-01
The Wavelet Adaptive Multiresolution Representation (WAMR) method provides a robust means of controlling spatial grid adaptation: fine grid spacing in regions of the solution requiring high resolution (i.e., near steep gradients, singularities, or near-singularities) and much coarser grid spacing where the solution is slowly varying. The sparse grids produced using the WAMR method exhibit very high compression ratios compared to uniform grids of equivalent resolution. Subsequently, a wide range of spatial scales often occurring in continuum physics models can be captured efficiently. Furthermore, the wavelet transform provides a direct measure of local error at each grid point, effectively producing automatically verified solutions. The algorithm is parallelized using an MPI-based domain decomposition approach suitable for a wide range of distributed-memory parallel architectures. The method is applied to the solution of the compressible, reactive Navier-Stokes equations and includes multi-component diffusive transport and chemical kinetics models. Results for the method's parallel performance are reported, and its effectiveness on several challenging compressible reacting flow problems is highlighted.
A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; ...
2015-06-24
This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
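The reduction to one-dimensional discontinuity detection along each direction can be sketched as a per-ray bisection in 2D; the indicator function, the jump circle, and the dense set of rays below are illustrative, not the paper's sparse-grid construction.

```python
import math

def jump_radius(f, theta, r_max=2.0, iters=40):
    """Locate the discontinuity radius of f along the ray at angle theta
    by bisection, assuming f jumps exactly once on [0, r_max]."""
    lo, hi = 0.0, r_max
    f0 = f(0.0, 0.0)                      # value on the inner side
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        x, y = mid * math.cos(theta), mid * math.sin(theta)
        if f(x, y) == f0:                 # still inside the jump surface
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical discontinuous quantity of interest: jumps across the circle r = 0.7.
f = lambda x, y: 1.0 if x * x + y * y < 0.7 ** 2 else 0.0
radii = [jump_radius(f, k * math.pi / 8) for k in range(16)]
```

In the actual method, a sparse grid in the angular variables replaces the dense set of rays, which is what keeps the cost manageable in high dimensions.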
Johansson, A Torbjorn; White, Paul R
2011-08-01
This paper proposes an adaptive filter-based method for detection and frequency estimation of whistle calls, such as the calls of birds and marine mammals, which are typically analyzed in the time-frequency domain using a spectrogram. The approach taken here is based on adaptive notch filtering, which is an established technique for frequency tracking. For application to automatic whistle processing, methods for detection and improved frequency tracking through frequency crossings as well as interfering transients are developed and coupled to the frequency tracker. Background noise estimation and compensation is accomplished using order statistics and pre-whitening. Using simulated signals as well as recorded calls of marine mammals and a human whistled speech utterance, it is shown that the proposed method can detect more simultaneous whistles than two competing spectrogram-based methods while not reporting any false alarms on the example datasets. In one example, it extracts complete 1.4 and 1.8 s bottlenose dolphin whistles successfully through frequency crossings. The method performs detection and estimates frequency tracks even at high sweep rates. The algorithm is also shown to be effective on human whistled utterances.
NASA Astrophysics Data System (ADS)
Bu, Guochao; Wang, Pei
2016-04-01
Terrestrial laser scanning (TLS) has been used to extract accurate forest biophysical parameters for inventory purposes. The diameter at breast height (DBH) is a key parameter for individual trees because it has the potential for modeling the height, volume, biomass, and carbon sequestration potential of the tree based on empirical allometric scaling equations. In order to extract the DBH from the single-scan data of TLS automatically and accurately within a certain range, we proposed an adaptive circle-ellipse fitting method based on the point cloud transect. The proposed method can correct the error caused by simple circle fitting when a tree is slanted. A slanted tree is detected by the circle-ellipse fitting analysis, and the corresponding slant angle is then found from the ellipse fitting result. With this information, the DBH of the tree can be recalculated by reslicing the point cloud data at breast height. Artificial stem data simulated by a cylindrical model of leaning trees and scanning data acquired with the RIEGL VZ-400 were used to test the proposed adaptive fitting method. The results show that the proposed method can detect leaning trees and accurately estimate their DBH.
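The geometric core of the correction: a leaning circular stem intersects a horizontal slice in an ellipse whose minor axis preserves the true diameter, and whose axis ratio gives the lean angle as arccos(minor/major). The PCA-based axis estimate below assumes a full, evenly covered ring of slice points and stands in for the paper's fitting procedure.

```python
import numpy as np

def dbh_from_slice(points):
    """Estimate (DBH, lean angle in degrees) from a 2D horizontal slice.

    For a ring sampled uniformly in the ellipse parameter, the variance
    along each principal axis is (semi-axis)^2 / 2, so the semi-axes are
    sqrt(2 * eigenvalue) of the point covariance.
    """
    pts = np.asarray(points, float)
    centered = pts - pts.mean(0)
    evals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    a, b = np.sqrt(2 * evals)                 # semi-major, semi-minor
    lean = np.degrees(np.arccos(np.clip(b / a, 0.0, 1.0)))
    return 2 * b, lean                        # minor axis keeps the diameter
```

For an upright tree the slice is a circle (a ≈ b, lean ≈ 0), so the same estimator reduces to ordinary circle-based DBH.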
A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection
Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D; Burkardt, John V
2014-03-01
This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
Locomotor adaptation to a powered ankle-foot orthosis depends on control method
Cain, Stephen M; Gordon, Keith E; Ferris, Daniel P
2007-01-01
Background We studied human locomotor adaptation to powered ankle-foot orthoses with the intent of identifying differences between two different orthosis control methods. The first orthosis control method used a footswitch to provide bang-bang control (a kinematic control) and the second orthosis control method used a proportional myoelectric signal from the soleus (a physiological control). Both controllers activated an artificial pneumatic muscle providing plantar flexion torque. Methods Subjects walked on a treadmill for two thirty-minute sessions spaced three days apart under either footswitch control (n = 6) or myoelectric control (n = 6). We recorded lower limb electromyography (EMG), joint kinematics, and orthosis kinetics. We compared stance phase EMG amplitudes, correlation of joint angle patterns, and mechanical work performed by the powered orthosis between the two controllers over time. Results During steady state at the end of the second session, subjects using proportional myoelectric control had much lower soleus and gastrocnemius activation than the subjects using footswitch control. The substantial decrease in triceps surae recruitment allowed the proportional myoelectric control subjects to walk with ankle kinematics close to normal and reduce negative work performed by the orthosis. The footswitch control subjects walked with substantially perturbed ankle kinematics and performed more negative work with the orthosis. Conclusion These results provide evidence that the choice of orthosis control method can greatly alter how humans adapt to powered orthosis assistance during walking. Specifically, proportional myoelectric control results in larger reductions in muscle activation and gait kinematics more similar to normal compared to footswitch control. PMID:18154649
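The two control laws under comparison can be contrasted in a minimal sketch. The gains, thresholds, and signal scaling below are illustrative assumptions, not the study's hardware values.

```python
def footswitch_control(stance_phase: bool, max_pressure: float = 1.0) -> float:
    # Kinematic bang-bang control: full artificial-muscle activation
    # whenever the footswitch detects stance, none otherwise.
    return max_pressure if stance_phase else 0.0

def myoelectric_control(soleus_emg: float, gain: float = 2.0,
                        max_pressure: float = 1.0) -> float:
    # Physiological proportional control: pressure tracks the rectified,
    # enveloped soleus EMG (here a normalized 0..1 signal), saturating
    # at the muscle's supply pressure. Gain value is hypothetical.
    return min(max_pressure, max(0.0, gain * soleus_emg))
```

The key difference the study exploits is visible here: the proportional controller lets the wearer scale assistance down by reducing soleus recruitment, while the bang-bang controller always delivers full torque during stance.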
The stochastic control of the F-8C aircraft using the Multiple Model Adaptive Control (MMAC) method
NASA Technical Reports Server (NTRS)
Athans, M.; Dunn, K. P.; Greene, E. S.; Lee, W. H.; Sandel, N. R., Jr.
1975-01-01
The purpose of this paper is to summarize results obtained for the adaptive control of the F-8C aircraft using the so-called Multiple Model Adaptive Control method. The discussion includes the selection of the performance criteria for both the lateral and the longitudinal dynamics, the design of the Kalman filters for different flight conditions, the 'identification' aspects of the design using hypothesis testing ideas, and the performance of the closed loop adaptive system.
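The "identification" step of MMAC, a Bayesian reweighting of a bank of models from their filter residuals, can be sketched for a scalar system. This is a toy stand-in, not the F-8C design: the candidate models, feedback gains, and noise level are all assumed, and the full method uses a Kalman filter per flight condition rather than one-step predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate models (scalar stand-in for the per-flight-condition bank);
# values are illustrative, not the paper's aircraft models.
a_models = np.array([0.5, 0.9])       # candidate dynamics x_{k+1} = a x_k + u + w
gains = np.array([0.3, 0.7])          # per-model feedback gains (assumed)
q = 0.05                              # process-noise variance

a_true = 0.9                          # the plant actually behaves like model 1
x, probs = 1.0, np.array([0.5, 0.5])  # state and equal prior over models
for _ in range(200):
    u = -np.dot(probs, gains) * x     # probability-weighted (blended) control
    x_pred = a_models * x + u         # each model's one-step prediction
    x = a_true * x + u + rng.normal(0.0, np.sqrt(q))
    resid = x - x_pred                # residuals drive hypothesis testing
    like = np.exp(-0.5 * resid**2 / q)    # Gaussian residual likelihoods
    probs = probs * like
    probs = probs / probs.sum()           # Bayes update of model posteriors
```

Over time the posterior concentrates on the model matching the true dynamics, and the blended control converges to that model's gain.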
Calvo, Juan Francisco; San José, Sol; Garrido, LLuís; Puertas, Enrique; Moragues, Sandra; Pozo, Miquel; Casals, Joan
2013-10-01
To introduce an approach for online adaptive replanning (i.e., dose-guided radiosurgery) in frameless stereotactic radiosurgery when a 6-dimensional (6D) robotic couch is not available in the linear accelerator (linac). Cranial radiosurgical treatments are planned in our department using an intensity-modulated technique. Patients are immobilized using a thermoplastic mask. A cone-beam computed tomography (CBCT) scan is acquired after the initial laser-based patient setup (CBCT_setup). The online adaptive replanning procedure we propose consists of a 6D registration-based mapping of the reference plan onto the actual CBCT_setup, followed by a reoptimization of the beam fluences (“6D plan”) to achieve a dose distribution similar to that originally intended, while the patient is lying on the linac couch and the original beam arrangement is kept. The proposed online adaptive method was retrospectively analyzed for 16 patients with 35 targets treated with the CBCT-based frameless intensity-modulated technique. A simulation of the reference plan onto the actual CBCT_setup, using only the 4 degrees of freedom supported by the linac couch, was also generated for each case (“4D plan”). Target coverage (D99%) and conformity index values of the 6D and 4D plans were compared with the corresponding values of the reference plans. Although the 4D-based approach does not always ensure target coverage (D99% between 72% and 103%), the proposed online adaptive method gave full coverage in all cases analyzed as well as conformity index values similar to those planned. The dose-guided radiosurgery approach is effective in ensuring the dose coverage and conformity of an intracranial target volume, avoiding resetting the patient inside the mask by “trial and error” to remove the pitch and roll errors when a robotic table is not available.
Self-adaptive method for high frequency multi-channel analysis of surface wave method
Technology Transfer Automated Retrieval System (TEKTRAN)
When the high frequency multi-channel analysis of surface waves (MASW) method is conducted to explore soil properties in the vadose zone, existing rules for selecting the near offset and spread lengths cannot satisfy the requirements of planar dominant Rayleigh waves for all frequencies of interest ...
Adaptation to environmental change is not a new concept. Humans have shown throughout history a capacity for adapting to different climates and environmental changes. Farmers, foresters, and civil engineers have all been forced to adapt to numerous challenges to overcome adversity...
Comparison of different automatic threshold algorithms for image segmentation in microscope images
NASA Astrophysics Data System (ADS)
Boecker, Wilfried; Muller, W.-U.; Streffer, Christian
1995-08-01
Image segmentation is almost always a necessary step in image processing. The employed threshold algorithms are based on the detection of local minima in the gray level histograms of the entire image. In automatic cell recognition equipment, like chromosome analysis or micronuclei counting systems, flexible and adaptive thresholds are required to account for variation in the gray level intensities of the background and of the specimen. We have studied three different methods of threshold determination: 1) a statistical procedure, which uses the interclass entropy maximization of the gray level histogram. The iterative algorithm can be used for multithreshold segmentation; iteration step i contributes 2^(i-1) thresholds; 2) a numerical approach, which detects local minima in the gray level histogram. The algorithm must be tailored and optimized for specific applications like cell recognition with two different thresholds for cell nuclei and cell cytoplasm segmentation; 3) an artificial neural network, which is trained with learning sets of image histograms and the corresponding interactively determined thresholds. We have investigated feed-forward networks with one and two layers, respectively. The gray level frequencies are used as inputs for the net. The number of different thresholds per image determines the output channels. We have tested and compared these different threshold algorithms for practical use in fluorescence microscopy as well as in bright field microscopy. The implementation and the results are presented and discussed.
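Method 1 (interclass entropy maximization) can be sketched for the single-threshold case as follows. This is a textbook Kapur-style implementation, not the authors' code, and the bimodal test histogram is synthetic.

```python
import numpy as np

def entropy_threshold(hist):
    # Interclass entropy maximization: choose the threshold t that
    # maximizes the sum of the entropies of the two classes split at t.
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1           # within-class distributions
        h = -sum(qi * np.log(qi) for qi in q0 if qi > 0) \
            - sum(qi * np.log(qi) for qi in q1 if qi > 0)
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Synthetic bimodal histogram: background around level 40, nuclei around 200
levels = np.arange(256)
hist = np.exp(-0.5 * ((levels - 40) / 10) ** 2) \
     + 0.5 * np.exp(-0.5 * ((levels - 200) / 15) ** 2)
t = entropy_threshold(hist)
```

For a well-separated bimodal histogram like this one, the selected threshold falls in the valley between the two modes.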
Adaptive method for quantifying uncertainty in discharge measurements using velocity-area method.
NASA Astrophysics Data System (ADS)
Despax, Aurélien; Favre, Anne-Catherine; Belleville, Arnaud
2015-04-01
Streamflow information provided by hydrometric services such as EDF-DTG allows real-time monitoring of rivers, streamflow forecasting, major hydrological studies and engineering design. In open channels, the traditional approach to measuring flow uses a rating curve, which is an indirect method to estimate the discharge in rivers based on water level and discrete discharge measurements. A large proportion of these discharge measurements are performed using the velocity-area method; it consists in integrating flow velocities and depths through the cross-section [1]. The velocity field is estimated by choosing a number m of verticals, distributed across the river, where the vertical velocity profile is sampled by a current-meter at ni different depths. Uncertainties coming from several sources are related to the measurement process. To date, the framework for assessing uncertainty in velocity-area discharge measurements is the method presented in the ISO 748 standard [2], which follows the GUM [3] approach. The equation for the combined uncertainty in measured discharge u(Q), at the 68% level of confidence, proposed by the ISO 748 standard is expressed as: u²(Q) = u²m + u²s + [ Σi q²i ( u²(Bi) + u²(Di) + u²p(Vi) + (1/ni) (u²c(Vi) + u²exp(Vi)) ) ] / ( Σi qi )²
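A direct implementation of this ISO 748 combination is straightforward. The sketch below assumes all u terms are relative standard uncertainties, and the gauging values (segment discharges, points per vertical, uncertainty components) are illustrative, not taken from the paper.

```python
import numpy as np

def iso748_uncertainty(q, u_B, u_D, u_p, n, u_c, u_exp, u_m=0.005, u_s=0.01):
    # Combined relative uncertainty of discharge following the ISO 748 /
    # GUM formulation: per-vertical width (B), depth (D), limited points
    # on the vertical (p), current-meter (c) and exposure-time (exp)
    # components, plus global terms u_m (number of verticals) and u_s
    # (systematic). q holds the per-segment discharges, n the number of
    # sampling points on each vertical.
    q = np.asarray(q, dtype=float)
    per_vertical = q**2 * (u_B**2 + u_D**2 + u_p**2
                           + (u_c**2 + u_exp**2) / n)
    return np.sqrt(u_m**2 + u_s**2 + per_vertical.sum() / q.sum()**2)

# Illustrative gauging with 5 verticals (values assumed)
q = [2.0, 3.5, 4.0, 3.0, 1.5]          # segment discharges, m^3/s
u = iso748_uncertainty(q, u_B=0.01, u_D=0.01, u_p=0.02,
                       n=np.array([2, 3, 3, 3, 2]), u_c=0.01, u_exp=0.03)
```

For these assumed inputs the combined relative uncertainty comes out at roughly 2% at the 68% confidence level.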
An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1994-01-01
This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. The plan of this work is to first and primarily focus on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and to then briefly explore some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.
A vertical parallax reduction method for stereoscopic video based on adaptive interpolation
NASA Astrophysics Data System (ADS)
Li, Qingyu; Zhao, Yan
2016-10-01
The existence of vertical parallax is the main factor affecting the viewing comfort of stereo video, and visual fatigue is gaining widespread attention with the booming development of 3D stereoscopic video technology. In order to reduce the vertical parallax without affecting the horizontal parallax, a self-adaptive image scaling algorithm is proposed, which uses edge characteristics efficiently. In addition, the nonlinear Levenberg-Marquardt (L-M) algorithm is introduced in this paper to improve the accuracy of the transformation matrix. Firstly, the self-adaptive scaling algorithm is used for the original image interpolation. When a pixel of the original image lies in an edge area, the interpolation is implemented adaptively along the edge direction obtained by the Sobel operator. Secondly, the SIFT algorithm, which is invariant to scaling, rotation and affine transformation, is used to detect the feature matching points in the binocular images. Then, according to the coordinate positions of the matching points, the transformation matrix that reduces the vertical parallax is calculated using the Levenberg-Marquardt algorithm. Finally, the transformation matrix is applied to the target image to calculate the new coordinate position of each pixel of the view image. The experimental results show that, compared with a method that reduces the vertical parallax using a linear algorithm to calculate the two-dimensional projective transformation, the proposed method improves the vertical parallax reduction obviously. At the same time, in terms of the impact on horizontal parallax, the proposed method yields a horizontal parallax more similar to that of the original image after vertical parallax reduction. Therefore, the proposed method can optimize the vertical parallax reduction.
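The vertical-disparity minimization step can be illustrated with synthetic matches. The paper fits a projective transform with Levenberg-Marquardt; as a minimal linear stand-in (an assumption, not the paper's model), the sketch below fits only the affine row mapping the y-coordinate, which already removes the vertical parallax in this synthetic case.

```python
import numpy as np

# Matched feature point coordinates (x, y) in the left/right views;
# synthetic data with a vertical offset plus a small rotation stands in
# for SIFT matches.
rng = np.random.default_rng(1)
left = rng.uniform(0, 100, size=(50, 2))
c, s = np.cos(0.02), np.sin(0.02)
right = left @ np.array([[c, -s], [s, c]]).T + np.array([3.0, 5.0])

# Fit the affine row y' = d*x + e*y + f that maps the right image's
# coordinates onto the left image's y, minimizing vertical disparity in
# a least-squares sense.
A = np.column_stack([right, np.ones(len(right))])
row, *_ = np.linalg.lstsq(A, left[:, 1], rcond=None)
y_corrected = A @ row
vert_before = np.abs(right[:, 1] - left[:, 1]).mean()
vert_after = np.abs(y_corrected - left[:, 1]).mean()
```

Because the synthetic misalignment is exactly affine, the fitted row drives the mean vertical disparity to numerical zero; real image pairs would need the full projective model and the L-M refinement.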
Investigation of self-adaptive LED surgical lighting based on entropy contrast enhancing method
NASA Astrophysics Data System (ADS)
Liu, Peng; Wang, Huihui; Zhang, Yaqin; Shen, Junfei; Wu, Rengmao; Zheng, Zhenrong; Li, Haifeng; Liu, Xu
2014-05-01
Investigation was performed to explore the possibility of enhancing contrast by varying the spectral power distribution (SPD) of the surgical lighting. Illumination scenes with different SPDs were generated by combining a self-adaptive white light optimization method with the LED ceiling system; images of a biological sample were taken by a CCD camera and then processed by an 'Entropy'-based contrast evaluation model proposed specifically for the surgical setting. Compared with the neutral white LED based and traditional algorithm based image enhancing methods, the illumination based enhancing method delivers better contrast enhancement, improving the average contrast value by about 9% and 6%, respectively. This low cost method is simple and practicable, and thus may provide an alternative solution to expensive visual-facility medical instruments.
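An entropy-based contrast score of the kind described can be sketched as the Shannon entropy of the gray-level histogram. The paper's exact evaluation model is not reproduced here; this is a generic stand-in with synthetic images, used the way the paper uses its score: rank candidate illumination SPDs by the score of the image each produces.

```python
import numpy as np

def image_entropy(img, bins=256):
    # Shannon entropy of the gray-level histogram, used as a contrast
    # score: a richer spread of gray levels gives higher entropy.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128)                    # uniform, low-contrast scene
textured = rng.integers(0, 256, size=(64, 64))   # high-contrast scene
# In the paper's loop, the SPD whose image scores highest would be kept.
```

A perfectly flat image scores zero bits, while a scene using the full gray range approaches the 8-bit maximum.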
Motion correction of magnetic resonance imaging data by using adaptive moving least squares method.
Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Park, Hae-Jeong; Yoon, Jungho
2015-06-01
Image artifacts caused by subject motion during the imaging sequence are one of the most common problems in magnetic resonance imaging (MRI) and often degrade the image quality. In this study, we develop a motion correction algorithm for interleaved MR acquisition. An advantage of the proposed method is that it requires neither additional equipment nor redundant over-sampling. The general framework of this study is similar to that of Rohlfing et al. [1], except for the following fundamental modification. A three-dimensional (3-D) scattered data approximation method is used to correct the motion-corrupted data as a post-processing step. In order to obtain a better match to the local structures of the given image, we use the data-adapted moving least squares (MLS) method, which can improve the performance of the classical method. Numerical results are provided to demonstrate the advantages of the proposed algorithm.
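The moving least squares idea, fitting a local weighted polynomial at every evaluation point, can be sketched in 1-D. The paper works on 3-D scattered MRI data and its data-adapted weights differ from the plain Gaussian kernel assumed here; the signal and bandwidth below are illustrative.

```python
import numpy as np

def mls_eval(x0, xs, ys, h=0.15, deg=2):
    # Moving least squares: at each query point x0, fit a local
    # polynomial with Gaussian weights centred at x0 and evaluate it
    # there (the constant term of the shifted basis).
    w = np.exp(-((xs - x0) / h) ** 2)
    V = np.vander(xs - x0, deg + 1)        # local polynomial basis
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(V * sw[:, None], ys * sw, rcond=None)
    return coef[-1]

# Noisy scattered samples of a smooth signal (stand-in for image slices)
rng = np.random.default_rng(2)
xs = np.sort(rng.uniform(0, 1, 200))
ys = np.sin(2 * np.pi * xs) + rng.normal(0, 0.05, xs.size)
grid = np.linspace(0.1, 0.9, 9)
recon = np.array([mls_eval(x, xs, ys, h=0.1) for x in grid])
err = np.max(np.abs(recon - np.sin(2 * np.pi * grid)))
```

The reconstruction stays close to the underlying signal despite the noise and the irregular sampling, which is the property the correction step relies on.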
A Cartesian Adaptive Level Set Method for Two-Phase Flows
NASA Technical Reports Server (NTRS)
Ham, F.; Young, Y.-N.
2003-01-01
In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problems. In section 5 we present some results from more complex cases, including the 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.
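Stripped of the adaptive Cartesian machinery, a level set method tracks an interface as the zero crossing of an advected signed-distance function. The 1-D first-order upwind sketch below is illustrative only (grid, speed, and time step are assumptions, far simpler than the paper's collocated scheme).

```python
import numpy as np

# The interface is the zero crossing of the signed-distance function phi,
# transported with constant speed u by first-order upwind differencing.
n, u, dt = 400, 1.0, 0.001
x = np.linspace(0.0, 2.0, n)
dx = x[1] - x[0]
phi = x - 0.5                      # interface initially at x = 0.5
for _ in range(500):               # advance to t = 0.5
    # upwind update, valid for u > 0 (CFL number u*dt/dx = 0.2)
    phi[1:] = phi[1:] - u * dt / dx * (phi[1:] - phi[:-1])
interface = x[np.argmin(np.abs(phi))]
```

After t = 0.5 the zero crossing has moved with the flow to x = 1.0, to within the grid spacing; the paper's contribution is doing this on the smallest possible anisotropically adapted Cartesian grid in 2D/3D.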