Science.gov

Sample records for adaptive statistical iterative

  1. Statistical iterative reconstruction using adaptive fractional order regularization

    PubMed Central

    Zhang, Yi; Wang, Yan; Zhang, Weihua; Lin, Feng; Pu, Yifei; Zhou, Jiliu

    2016-01-01

    To reduce the radiation dose of X-ray computed tomography (CT), low-dose CT has drawn much attention in both clinical and industrial fields. A fractional order model based on a statistical iterative reconstruction framework was proposed in this study. To further enhance the performance of the proposed model, an adaptive order selection strategy, determining the fractional order pixel by pixel, was given. Experiments, including numerical and clinical cases, showed better results than several existing methods, especially in structure and texture preservation. PMID:27231604
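
    A minimal sketch of the core ingredient, a discrete fractional-order derivative built from Grünwald-Letnikov weights that could serve as the regularizer in such a model, is shown below; the truncated sum, the 1D setting, and all names are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def gl_weights(alpha: float, n: int) -> np.ndarray:
        """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), by recurrence."""
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):
            w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
        return w

    def fractional_diff(x: np.ndarray, alpha: float) -> np.ndarray:
        """Truncated GL fractional derivative of a 1D signal."""
        w = gl_weights(alpha, len(x))
        return np.array([w[: i + 1] @ x[i::-1] for i in range(len(x))])

    x = np.linspace(0, 1, 64) ** 2
    print(fractional_diff(x, alpha=1.5)[:5])  # alpha=1 ~ 1st diff, alpha=2 ~ 2nd
    ```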

  2. The adaptive statistical iterative reconstruction-V technique for radiation dose reduction in abdominal CT: comparison with the adaptive statistical iterative reconstruction technique

    PubMed Central

    Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun

    2015-01-01

    Objective: To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically relevant features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. Methods: 27 consecutive patients (mean body mass index: 23.55 kg m−2) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned at a noise index level of 21.45 with automatic current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed at a noise index level of 30. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted. The CT dose index was also recorded. Results: At the follow-up study, the mean dose reduction relative to the currently used routine radiation dose was 35.37% (range: 19–49%). The overall subjective image quality and diagnostic acceptability scores of 50% ASIR-V at the reduced radiation dose were nearly identical to those recorded when using the initial routine-dose CT with 40% ASIR. Subjective ratings of the qualitative analysis revealed that, of all the reduced radiation dose CT series, 30% ASIR-V and 50% ASIR-V were associated with higher image quality, with lower noise and artefacts as well as good sharpness, when compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was considered to be worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than that for 40% ASIR and FBP, respectively. Conclusion: Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR. Advances in knowledge: This study represents the first

  3. Ultralow dose computed tomography attenuation correction for pediatric PET CT using adaptive statistical iterative reconstruction

    SciTech Connect

    Brady, Samuel L.; Shulkin, Barry L.

    2015-02-15

    Purpose: To develop ultralow dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultralow doses (10–35 mA s). CT quantitation: noise, low-contrast resolution, and CT numbers for 11 tissue substitutes were analyzed in-phantom. CT quantitation was analyzed down to a 90% reduction in volume computed tomography dose index (0.39/3.64 mGy) from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative dose reduction and noise control. Results: CT numbers were constant to within 10% of the non-dose-reduced CTAC image down to 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution was found for PET images reconstructed with CTAC protocols down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3–0.9/6.2 mSv). Noise magnitude in dose-reduced patient images increased but was not statistically different from pre-dose-reduction patient images. Conclusions: Using ASiR allowed for aggressive reduction in CT dose with no change in PET reconstructed images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.

  4. Adaptive iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Bruder, H.; Raupach, R.; Sunnegardh, J.; Sedlmair, M.; Stierstorfer, K.; Flohr, T.

    2011-03-01

    It is well known that, in CT reconstruction, maximum a posteriori (MAP) reconstruction based on a Poisson noise model can be well approximated by penalized weighted least squares (PWLS) minimization based on a data-dependent Gaussian noise model. We study minimization of the PWLS objective function using the gradient descent (GD) method, and show that if an exact inverse of the forward projector exists, the PWLS GD update equation can be translated into an update equation that operates entirely in the image domain. For non-linear regularization and an arbitrary noise model, this means that a non-linear image filter must exist which solves the optimization problem; in general, however, its analytical computation is not trivial and might lead to image filters which are computationally very expensive. We introduce a new iteration scheme in image space, based on a regularization filter with an anisotropic noise model. Basically, this approximates the statistical data weighting and regularization in PWLS reconstruction. If needed, e.g. to compensate for the non-exactness of the backprojector, the image-based regularization loop can be preceded by a raw-data-based loop without regularization and statistical data weighting. We call this combined iterative reconstruction scheme Adaptive Iterative Reconstruction (AIR). It will be shown that, in terms of low-contrast visibility, sharpness-to-noise and contrast-to-noise ratio, PWLS and AIR reconstructions agree to a high degree of accuracy. In clinical images the noise texture of AIR is also superior to the more artificial texture of PWLS.
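
    The PWLS objective and its gradient-descent update described here are standard and easy to demonstrate on a toy problem; the following sketch (random stand-in "projector", quadratic first-difference penalty, all sizes illustrative) is an assumption-laden illustration, not the AIR algorithm itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_rays = 32, 64
    A = rng.random((n_rays, n_pix)) / n_pix        # toy forward projector
    x_true = np.zeros(n_pix); x_true[10:20] = 1.0
    y = A @ x_true + rng.normal(0, 0.01, n_rays)   # noisy "sinogram"
    W = np.diag(1.0 / (np.abs(y) + 1e-3))          # statistical weights ~ 1/variance
    D = np.diff(np.eye(n_pix), axis=0)             # first-difference operator
    beta, step = 0.1, 0.2

    # Minimize 0.5*(y - Ax)' W (y - Ax) + 0.5*beta*||Dx||^2 by gradient descent
    x = np.zeros(n_pix)
    for _ in range(1000):
        grad = A.T @ W @ (A @ x - y) + beta * (D.T @ (D @ x))
        x -= step * grad
    print(np.round(x[8:22], 2))                    # approximately recovers the block
    ```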

  5. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    SciTech Connect

    Brady, S. L.; Yee, B. S.; Kaufman, R. A.

    2012-09-15

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects on image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%–100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using 1, 5, and 10 yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence with the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images had lower frequency noise, lower noise variance, and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more

  6. Image quality of CT angiography with model-based iterative reconstruction in young children with congenital heart disease: comparison with filtered back projection and adaptive statistical iterative reconstruction.

    PubMed

    Son, Sung Sil; Choo, Ki Seok; Jeon, Ung Bae; Jeon, Gye Rok; Nam, Kyung Jin; Kim, Tae Un; Yeom, Jeong A; Hwang, Jae Yeon; Jeong, Dong Wook; Lim, Soo Jin

    2015-06-01

    To retrospectively evaluate the image quality of CT angiography (CTA) reconstructed by model-based iterative reconstruction (MBIR) and to compare this with images obtained by filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR) in newborns and infants with congenital heart disease (CHD). Thirty-seven children (age 4.8 ± 3.7 months; weight 4.79 ± 0.47 kg) with suspected CHD underwent CTA on a 64-detector MDCT without ECG gating (80 kVp, 40 mA using tube current modulation). Total dose-length product was recorded in all patients. Images were reconstructed using FBP, ASIR, and MBIR. Objective image qualities (density, noise) were measured in the great vessels and heart chambers. The contrast-to-noise ratio (CNR) was calculated by measuring the density and noise of the myocardial walls. Two radiologists evaluated images for subjective noise, diagnostic confidence, and sharpness at the level prior to the first branch of the main pulmonary artery. Images were compared with respect to reconstruction method, and reconstruction times were measured. Images from all patients were diagnostic, and the effective dose was 0.22 mSv. The objective image noise of MBIR was significantly lower than that of FBP and ASIR in the great vessels and heart chambers (P < 0.05); however, with respect to attenuation in the four chambers, ascending aorta, descending aorta, and pulmonary trunk, no statistically significant difference was observed among the three methods (P > 0.05). Mean CNR values were 8.73 for FBP, 14.54 for ASIR, and 22.95 for MBIR. In addition, the subjective image noise of MBIR was significantly lower than that of the others (P < 0.01). Furthermore, while FBP had the highest score for image sharpness, ASIR had the highest score for diagnostic confidence (P < 0.05), and mean reconstruction times were 5.1 ± 2.3 s for FBP and ASIR and 15.1 ± 2.4 min for MBIR. While CTA with MBIR in newborns and infants with CHD can reduce image noise and

  7. Combining Automatic Tube Current Modulation with Adaptive Statistical Iterative Reconstruction for Low-Dose Chest CT Screening

    PubMed Central

    Chen, Jiang-Hong; Jin, Er-Hu; He, Wen; Zhao, Li-Qin

    2014-01-01

    Objective: To reduce radiation dose while maintaining image quality in low-dose chest computed tomography (CT) by combining adaptive statistical iterative reconstruction (ASIR) and automatic tube current modulation (ATCM). Methods: Patients undergoing cancer screening (n = 200) were subjected to 64-slice multidetector chest CT scanning with ASIR and ATCM. Patients were divided into groups 1, 2, 3, and 4 (n = 50 each), with a noise index (NI) of 15, 20, 30, and 40, respectively. Each image set was reconstructed at 4 ASIR levels (0% ASIR, 30% ASIR, 50% ASIR, and 80% ASIR) in each group. Two radiologists assessed subjective image noise, image artifacts, and visibility of the anatomical structures. Objective image noise and signal-to-noise ratio (SNR) were measured, and effective dose (ED) was recorded. Results: Increased NI was associated with increased subjective and objective image noise (P<0.001), and SNR decreased with increasing NI (P<0.001). These values improved with increased ASIR levels (P<0.001). Images from all 4 groups were clinically diagnosable. Images with NI = 30 and 50% ASIR had average subjective image noise scores and nearly average anatomical structure visibility scores, with a mean objective image noise of 23.42 HU. The EDs for groups 1, 2, 3 and 4 were 2.79±1.17, 1.69±0.59, 0.74±0.29, and 0.37±0.22 mSv, respectively. Compared to group 1 (NI = 15), the ED reductions were 39.43%, 73.48%, and 86.74% for groups 2, 3, and 4, respectively. Conclusions: Using NI = 30 with 50% ASIR in the chest CT protocol, we obtained average or above-average image quality at a reduced ED. PMID:24691208
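
    The objective measurements used throughout these dose studies (image noise as the SD in a region of interest, SNR, CNR) reduce to a few lines; the definitions below follow common CT usage and the masks are synthetic, so treat this as a generic helper rather than any one paper's measurement protocol.

    ```python
    import numpy as np

    def roi_stats(image, roi_mask, bg_mask):
        """Objective noise (SD), SNR, and CNR from ROI and background masks."""
        mean_roi, mean_bg = image[roi_mask].mean(), image[bg_mask].mean()
        noise = image[bg_mask].std()              # objective image noise
        return noise, mean_roi / noise, (mean_roi - mean_bg) / noise

    img = np.random.default_rng(1).normal(50, 10, (128, 128))  # synthetic slice (HU)
    img[40:60, 40:60] += 100                                   # high-attenuation patch
    roi = np.zeros_like(img, bool); roi[45:55, 45:55] = True
    bg = np.zeros_like(img, bool); bg[90:110, 90:110] = True
    print(roi_stats(img, roi, bg))                             # (noise, SNR, CNR)
    ```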

  8. Can use of adaptive statistical iterative reconstruction reduce radiation dose in unenhanced head CT? An analysis of qualitative and quantitative image quality

    PubMed Central

    Heggen, Kristin Livelten; Pedersen, Hans Kristian; Andersen, Hilde Kjernlie; Martinsen, Anne Catrine T

    2016-01-01

    Background: Iterative reconstruction can reduce image noise and thereby facilitate dose reduction. Purpose: To evaluate qualitative and quantitative image quality for full dose and dose-reduced head computed tomography (CT) protocols reconstructed using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR). Material and Methods: Fourteen patients undergoing follow-up head CT were included. All patients underwent a full dose (FD) exam and a subsequent 15% dose-reduced (DR) exam, reconstructed using FBP and 30% ASIR. Qualitative image quality was assessed using visual grading characteristics. Quantitative image quality was assessed using ROI measurements in cerebrospinal fluid (CSF), white matter, and peripheral and central gray matter. Additionally, quantitative image quality was measured in the Catphan and the vendor's water phantom. Results: There was no significant difference in qualitative image quality between FD FBP and DR ASIR. Comparing same-scan FBP versus ASIR, a noise reduction of 28.6% in CSF and of between −3.7% and 3.5% in brain parenchyma was observed. Comparing FD FBP versus DR ASIR, a noise reduction of 25.7% in CSF and of between −7.5% and 6.3% in brain parenchyma was observed. Image contrast increased in ASIR reconstructions. The contrast-to-noise ratio (CNR) was improved in DR ASIR compared to FD FBP. In phantoms, noise reduction was in the range of 3–28%, depending on image content. Conclusion: There was no significant difference in qualitative image quality between full dose FBP and dose-reduced ASIR. CNR improved in DR ASIR compared to FD FBP mostly due to increased contrast, not reduced noise. Therefore, we recommend using caution if reducing dose and applying ASIR to maintain image quality. PMID:27583169

  9. ASSESSMENT OF CLINICAL IMAGE QUALITY IN PAEDIATRIC ABDOMINAL CT EXAMINATIONS: DEPENDENCY ON THE LEVEL OF ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTION (ASiR) AND THE TYPE OF CONVOLUTION KERNEL.

    PubMed

    Larsson, Joel; Båth, Magnus; Ledenius, Kerstin; Caisander, Håkan; Thilander-Klang, Anne

    2016-06-01

    The purpose of this study was to investigate the effect of different combinations of convolution kernel and level of adaptive statistical iterative reconstruction (ASiR™) on diagnostic image quality, as well as on the visualisation of anatomical structures, in paediatric abdominal computed tomography (CT) examinations. Thirty-five paediatric patients with abdominal pain of non-specified pathology undergoing abdominal CT were included in the study. Transaxial stacks of 5-mm-thick images were retrospectively reconstructed at various ASiR levels, in combination with three convolution kernels. Four paediatric radiologists rated the diagnostic image quality and the delineation of six anatomical structures in a blinded randomised visual grading study. Image quality at a given ASiR level was found to depend on the kernel, and a more edge-enhancing kernel benefitted from a higher ASiR level. An ASiR level of 70% together with the Soft™ or Standard™ kernel was suggested to be the optimal combination for paediatric abdominal CT examinations.

  10. SU-E-I-86: Ultra-Low Dose Computed Tomography Attenuation Correction for Pediatric PET CT Using Adaptive Statistical Iterative Reconstruction (ASiR™)

    SciTech Connect

    Brady, S; Shulkin, B

    2015-06-15

    Purpose: To develop ultra-low dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultra-low doses (10–35 mAs). CT quantitation: noise, low-contrast resolution, and CT numbers for eleven tissue substitutes were analyzed in-phantom. CT quantitation was analyzed down to a 90% reduction in CTDIvol (0.39/3.64 mGy) from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation organ dose, as derived from patient exam size-specific dose estimate (SSDE), was converted to effective dose using the standard ICRP report 103 method. Effective dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative patient population dose reduction and noise control. Results: CT numbers were constant to within 10% of the non-dose-reduced CTAC image down to 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution was found for PET images reconstructed with ASiR-based CTAC protocols down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3–0.9/6.2 mSv). Noise magnitude in dose-reduced patient images increased but was not statistically different from pre-dose-reduction patient images. Conclusion: Using ASiR allowed for aggressive reduction in CTAC dose with no change in PET reconstructed images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.

  11. Adaptive self-calibrating iterative GRAPPA reconstruction.

    PubMed

    Park, Suhyung; Park, Jaeseok

    2012-06-01

    Parallel magnetic resonance imaging in k-space, such as generalized auto-calibrating partially parallel acquisition (GRAPPA), exploits spatial correlation among neighboring signals over multiple coils in calibration to estimate missing signals in reconstruction. It is often challenging to achieve accurate calibration due to data corruption with noise and spatially varying correlation. The purpose of this work is to address these problems simultaneously by developing a new, adaptive iterative GRAPPA with dynamic self-calibration. With increasing iterations, spatial correlation is estimated dynamically under a Kalman filter framework, updating calibration signals in a measurement model and using a fixed-point state transition in a process model, while missing signals outside the step-varying calibration region are reconstructed, leading to adaptive self-calibration and reconstruction. Noise statistics are incorporated in the Kalman filter models, yielding coil-weighted de-noising in reconstruction. Numerical and in vivo studies are performed, demonstrating that the proposed method yields highly accurate calibration and thus reduces artifacts and noise even at high acceleration. PMID:21994010

  12. Statistical Physics for Adaptive Distributed Control

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2005-01-01

    A viewgraph presentation on statistical physics for distributed adaptive control is shown. The topics include: 1) The Golden Rule; 2) Advantages; 3) Roadmap; 4) What is Distributed Control? 5) Review of Information Theory; 6) Iterative Distributed Control; 7) Minimizing L(q) Via Gradient Descent; and 8) Adaptive Distributed Control.

  13. Matched filter based iterative adaptive approach

    NASA Astrophysics Data System (ADS)

    Nepal, Ramesh; Zhang, Yan Rockee; Li, Zhengzheng; Blake, William

    2016-05-01

    Matched filter sidelobes arising from diversified LPI waveform design and limited sensor resolution are two important considerations in radars and active sensors in general. Matched filter sidelobes can potentially mask weaker targets, and low sensor resolution not only causes a high margin of error but also limits sensing in target-rich environments/sectors. Improvement in both factors depends, in part, on the transmitted waveform and consequently on the pulse compression technique. An adaptive pulse compression algorithm is hence desired that can mitigate the aforementioned limitations. A new matched-filter-based iterative adaptive approach, MF-IAA, has been developed as an extension of the traditional iterative adaptive approach (IAA). MF-IAA takes the matched filter output as its input. The motivation is to facilitate implementation of the iterative adaptive approach without disrupting the processing chain of the traditional matched filter. Like IAA, MF-IAA is a user-parameter-free, iterative, weighted-least-squares-based spectral identification algorithm. This work focuses on the implementation of MF-IAA. The feasibility of MF-IAA is studied using a realistic airborne radar simulator as well as actual measured airborne radar data. The performance of MF-IAA is measured with different test waveforms and different signal-to-noise ratio (SNR) levels. In addition, range-Doppler super-resolution using MF-IAA is investigated. Sidelobe reduction as well as super-resolution enhancement is validated. The robustness of MF-IAA with respect to different LPI waveforms and SNR levels is also demonstrated.
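
    The IAA core that MF-IAA builds on is compact enough to sketch: amplitudes on a scan grid are re-estimated through a model covariance built from the previous power estimates. The Fourier dictionary, sizes, and iteration count below are illustrative assumptions; the matched-filter front end that distinguishes MF-IAA is not reproduced here.

    ```python
    import numpy as np

    def iaa(y, A, n_iter=10):
        """Iterative Adaptive Approach: y (N,) snapshot, A (N, K) steering matrix."""
        alpha = (A.conj().T @ y) / np.sum(np.abs(A) ** 2, axis=0)  # matched-filter init
        for _ in range(n_iter):
            R = (A * np.abs(alpha) ** 2) @ A.conj().T              # R = A diag(P) A^H
            Rinv_y, Rinv_A = np.linalg.solve(R, y), np.linalg.solve(R, A)
            alpha = (A.conj().T @ Rinv_y) / np.einsum("nk,nk->k", A.conj(), Rinv_A)
        return alpha

    N, K = 32, 128
    A = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(K) / K))
    y = A[:, 20] + 0.1 * A[:, 23]                      # two closely spaced sinusoids
    print(np.sort(np.abs(iaa(y, A)).argsort()[-4:]))   # energy clusters at bins 20, 23
    ```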

  14. Low kilovoltage peak (kVp) with an adaptive statistical iterative reconstruction algorithm in computed tomography urography: evaluation of image quality and radiation dose

    PubMed Central

    Zhou, Zhiguo; Chen, Haixi; Wei, Wei; Zhou, Shanghui; Xu, Jingbo; Wang, Xifu; Wang, Qingguo; Zhang, Guixiang; Zhang, Zhuoli; Zheng, Linfeng

    2016-01-01

    Purpose: The purpose of this study was to evaluate the image quality and radiation dose of computed tomography urography (CTU) images acquired with a low kilovoltage peak (kVp) in combination with an adaptive statistical iterative reconstruction (ASiR) algorithm. Methods: A total of 45 subjects (18 women, 27 men) who underwent CTU with kV assist software for automatic selection of the optimal kVp were included and divided into two groups (A and B) based on the kVp and image reconstruction algorithm: group A consisted of patients who underwent CTU at 80 or 100 kVp and whose images were reconstructed with the 50% ASiR algorithm (n=32); group B consisted of patients who underwent CTU at 120 kVp and whose images were reconstructed with the filtered back projection (FBP) algorithm (n=13). The images were separately reconstructed with volume rendering (VR) and maximum intensity projection (MIP). Finally, image quality was evaluated using an image score, CT attenuation, image noise, the contrast-to-noise ratio (CNR) of the renal pelvis relative to abdominal visceral fat, and the signal-to-noise ratio (SNR) of the renal pelvis. The radiation dose was assessed using the volume CT dose index (CTDIvol), dose-length product (DLP) and effective dose (ED). Results: For groups A and B, the subjective image scores for the VR reconstruction images were 3.9±0.4 and 3.8±0.4, respectively, while those for the MIP reconstruction images were 3.8±0.4 and 3.6±0.6, respectively. No significant difference was found (p>0.05) between the two groups' image scores for either the VR or MIP reconstruction images. Additionally, the inter-reviewer image scores did not significantly differ (p>0.05). The mean attenuation of the bilateral renal pelvis in group A was significantly higher than that in group B (271.4±57.6 vs. 221.8±35.3 HU, p<0.05), whereas the image noise in group A was significantly lower than that in group B (7.9±2.1 vs. 10.5±2.3 HU, p<0.05). The CNR and SNR in group A were

  15. Electronic noise modeling in statistical iterative reconstruction.

    PubMed

    Xu, Jingyan; Tsui, Benjamin M W

    2009-06-01

    We consider electronic noise modeling in tomographic image reconstruction when the measured signal is the sum of a Gaussian distributed electronic noise component and another random variable whose log-likelihood function satisfies a certain linearity condition. Examples of such likelihood functions include the Poisson distribution and an exponential dispersion (ED) model that can approximate the signal statistics in integration mode X-ray detectors. We formulate the image reconstruction problem as a maximum-likelihood estimation problem. Using an expectation-maximization approach, we demonstrate that a reconstruction algorithm can be obtained following a simple substitution rule from the one previously derived without electronic noise considerations. To illustrate the applicability of the substitution rule, we present examples of a fully iterative reconstruction algorithm and a sinogram smoothing algorithm both in transmission CT reconstruction when the measured signal contains additive electronic noise. Our simulation studies show the potential usefulness of accurate electronic noise modeling in low-dose CT applications.
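
    The compound measurement model considered here, Poisson counts plus zero-mean Gaussian electronic noise, is straightforward to simulate, which makes the low-dose regime where the Gaussian term matters easy to see; all parameter values below are illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    I0, line_integral = 50.0, 2.0                # low incident intensity, attenuation
    mean_counts = I0 * np.exp(-line_integral)    # ~6.8 expected transmitted photons
    sigma_e = 3.0                                # electronic noise SD (detector units)

    counts = rng.poisson(mean_counts, 100_000)
    measured = counts + rng.normal(0.0, sigma_e, counts.shape)  # Poisson + Gaussian

    # For independent terms the variances add; here electronic noise dominates:
    print(measured.var(), mean_counts + sigma_e**2)
    ```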

  16. Adaptable Iterative and Recursive Kalman Filter Schemes

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N, the number of recursions, is a tuning parameter. This paper introduces an adaptable RUF algorithm to calculate N on the fly; a similar technique can be used for the IKF as well.

  17. Statistical properties of an iterated arithmetic mapping

    SciTech Connect

    Feix, M.R.; Rouet, J.L.

    1994-07-01

    We study the (3x + 1)/2 problem from a probabilistic viewpoint and show a forgetting mechanism for the last k binary digits of the seed after k iterations. The problem is subsequently generalized to a trifurcation process, the (lx + m)/3 problem. Finally, the sequence generated by a set of seeds is shown empirically, through computer simulations, to be equivalent to a random walk of the variable log₂ x (or log₃ x).
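
    The claimed random-walk behaviour of log₂ x is easy to reproduce empirically. A minimal simulation under the usual parity-branching form of the (3x + 1)/2 map (the authors' exact probabilistic setup is assumed, not reproduced):

    ```python
    import math
    import random

    def T(x: int) -> int:
        """One step of the (3x + 1)/2 map."""
        return x // 2 if x % 2 == 0 else (3 * x + 1) // 2

    random.seed(0)
    drifts = []
    for _ in range(1000):
        x = random.getrandbits(64) | 1            # large random seed
        lg0, steps = math.log2(x), 50
        for _ in range(steps):
            x = T(x)
        drifts.append((math.log2(x) - lg0) / steps)

    # Fair-coin prediction for the mean drift of log2(x): log2(3)/2 - 1 ~ -0.2075
    print(sum(drifts) / len(drifts))
    ```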

  18. Nuclear Forensic Inferences Using Iterative Multidimensional Statistics

    SciTech Connect

    Robel, M; Kristo, M J; Heller, M A

    2009-06-09

    Nuclear forensics involves the analysis of interdicted nuclear material for specific material characteristics (referred to as 'signatures') that imply specific geographical locations, production processes, culprit intentions, etc. Predictive signatures rely on expert knowledge of physics, chemistry, and engineering to develop inferences from these material characteristics. Comparative signatures, on the other hand, rely on comparison of the material characteristics of the interdicted sample (the 'questioned sample' in FBI parlance) with those of a set of known samples. In the ideal case, the set of known samples would be a comprehensive nuclear forensics database, a database which does not currently exist. In fact, our ability to analyze interdicted samples and produce an extensive list of precise material characteristics far exceeds our ability to interpret the results. Therefore, as we seek to develop the extensive databases necessary for nuclear forensics, we must also develop the methods needed to draw inferences from comparison of our analytical results with these large, multidimensional datasets. In the work reported here, we used a large, multidimensional dataset of results from quality control analyses of uranium ore concentrate (UOC, sometimes called 'yellowcake'). We have found that traditional multidimensional techniques, such as principal components analysis (PCA), are especially useful for understanding such datasets and drawing relevant conclusions. In particular, we have developed an iterative partial least squares-discriminant analysis (PLS-DA) procedure that has proven especially adept at identifying the production location of unknown UOC samples. By removing classes which fell far outside the initial decision boundary, and then rebuilding the PLS-DA model, we have consistently produced better and more definitive attributions than with a single-pass classification approach. Performance of the iterative PLS-DA method
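
    The remove-and-rebuild loop described above can be sketched with an ordinary PLS regression on one-hot class labels standing in for PLS-DA; the synthetic data, keep fraction, and score-based pruning rule are all assumptions for illustration, not the authors' tuned procedure.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def iterative_plsda(X, labels, x_q, n_rounds=3, keep_frac=0.5):
        """Refit after discarding classes that score weakly for the questioned sample."""
        classes = np.unique(labels)
        for r in range(n_rounds):
            Y = (labels[:, None] == classes[None, :]).astype(float)  # one-hot targets
            pls = PLSRegression(n_components=min(5, len(classes))).fit(X, Y)
            scores = pls.predict(x_q[None, :])[0]
            if r == n_rounds - 1 or len(classes) <= 2:
                break
            keep = np.argsort(scores)[-max(2, int(len(classes) * keep_frac)):]
            mask = np.isin(labels, classes[keep])
            X, labels, classes = X[mask], labels[mask], classes[keep]
        return classes[np.argmax(scores)]

    rng = np.random.default_rng(0)
    centers = rng.normal(0, 5, (8, 10))              # 8 sources, 10 analytes
    labels = np.repeat(np.arange(8), 20)
    X = centers[labels] + rng.normal(0, 1, (160, 10))
    x_q = centers[3] + rng.normal(0, 1, 10)          # "questioned" sample
    print(iterative_plsda(X, labels, x_q))           # expected attribution: 3
    ```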

  19. Statistical Physics of Adaptation

    NASA Astrophysics Data System (ADS)

    Perunov, Nikolay; Marsland, Robert A.; England, Jeremy L.

    2016-04-01

    Whether by virtue of being prepared in a slowly relaxing, high-free energy initial condition, or because they are constantly dissipating energy absorbed from a strong external drive, many systems subject to thermal fluctuations are not expected to behave in the way they would at thermal equilibrium. Rather, the probability of finding such a system in a given microscopic arrangement may deviate strongly from the Boltzmann distribution, raising the question of whether thermodynamics still has anything to tell us about which arrangements are the most likely to be observed. In this work, we build on past results governing nonequilibrium thermodynamics and define a generalized Helmholtz free energy that exactly delineates the various factors that quantitatively contribute to the relative probabilities of different outcomes in far-from-equilibrium stochastic dynamics. By applying this expression to the analysis of two examples—namely, a particle hopping in an oscillating energy landscape and a population composed of two types of exponentially growing self-replicators—we illustrate a simple relationship between outcome-likelihood and dissipative history. In closing, we discuss the possible relevance of such a thermodynamic principle for our understanding of self-organization in complex systems, paying particular attention to a possible analogy to the way evolutionary adaptations emerge in living things.

  1. Feasibility Study of Using Gemstone Spectral Imaging (GSI) and Adaptive Statistical Iterative Reconstruction (ASIR) for Reducing Radiation and Iodine Contrast Dose in Abdominal CT Patients with High BMI Values

    PubMed Central

    Zhu, Zheng; Zhao, Xin-ming; Zhao, Yan-feng; Wang, Xiao-yi; Zhou, Chun-wu

    2015-01-01

    Purpose: To prospectively investigate the effect of using gemstone spectral imaging (GSI) and adaptive statistical iterative reconstruction (ASIR) for reducing radiation and iodine contrast dose in abdominal CT patients with high BMI values. Materials and Methods: 26 patients (weight > 65 kg and BMI ≥ 22) underwent abdominal CT using GSI mode with 300 mgI/kg contrast material as the study group (group A). Another 21 patients (weight ≤ 65 kg and BMI ≥ 22) were scanned with a conventional 120 kVp tube voltage at a noise index (NI) of 11 with 450 mgI/kg contrast material as the control group (group B). GSI images were reconstructed at 60 keV with 50% ASIR, and the conventional 120 kVp images were reconstructed with FBP. The CT values, standard deviation (SD), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) of 26 landmarks were quantitatively measured, and image quality was qualitatively assessed using statistical analysis. Results: In the quantitative analysis, the differences in CNR between groups A and B were all significant except for the mesenteric vein. The SNR in group A was higher than that in group B except for the mesenteric artery and splenic artery. In the qualitative analysis, all images had diagnostic quality, and the agreement for image quality assessment between the reviewers was substantial (kappa = 0.684). CT dose index (CTDI) values for the non-enhanced, arterial phase and portal phase scans in group A were decreased by 49.04%, 40.51% and 40.54%, respectively, compared with group B (P = 0.000). The total dose and the injection rate for the contrast material were reduced by 14.40% and 14.95% in group A compared with group B. Conclusion: The use of GSI and ASIR provides similar enhancement in vessels and similar image quality with reduced radiation dose and contrast dose, compared with the conventional scan protocol. PMID:26079259

  2. Adaptive, template moderated, spatially varying statistical classification.

    PubMed

    Warfield, S K; Kaus, M; Jolesz, F A; Kikinis, R

    2000-03-01

    A novel image segmentation algorithm was developed to allow the automatic segmentation of both normal and abnormal anatomy from medical images. The new algorithm is a form of spatially varying statistical classification, in which an explicit anatomical template is used to moderate the segmentation obtained by statistical classification. The algorithm consists of an iterated sequence of spatially varying classification and nonlinear registration, which forms an adaptive, template moderated (ATM), spatially varying statistical classification (SVC). Classification methods and nonlinear registration methods are often complementary, both in the tasks where they succeed and in the tasks where they fail. By integrating these approaches the new algorithm avoids many of the disadvantages of each approach alone while exploiting the combination. The ATM SVC algorithm was applied to several segmentation problems, involving different image contrast mechanisms and different locations in the body. Segmentation and validation experiments were carried out for problems involving the quantification of normal anatomy (MRI of brains of neonates) and pathology of various types (MRI of patients with multiple sclerosis, MRI of patients with brain tumors, MRI of patients with damaged knee cartilage). In each case, the ATM SVC algorithm provided a better segmentation than statistical classification or elastic matching alone. PMID:10972320

  3. Investigation of statistical iterative reconstruction for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Makeev, Andrey; Das, Mini; Glick, Stephen J.

    2012-03-01

    Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. In this study, statistical iterative reconstruction with a penalized likelihood objective function and a Huber prior are investigated for use with breast CT. This prior has two free parameters, the penalty weight and the edge-preservation threshold, that need to be evaluated to determine those values that give optimal performance. Computer simulations with breast-like phantoms were used to study these parameters using various figures of merit that relate to performance in detecting microcalcifications. Results suggested that a narrow range of Huber prior parameters gives optimal performance. Furthermore, iterative reconstruction provided improved performance measures as compared to conventional filtered back-projection.
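
    The Huber prior named here is the standard edge-preserving potential; its two free parameters are the penalty weight multiplying the sum and the threshold δ at which the potential switches from quadratic to linear. A plain statement of it (values illustrative):

    ```python
    import numpy as np

    def huber(t, delta):
        """Huber potential: quadratic for |t| <= delta, linear beyond (edge-preserving)."""
        a = np.abs(t)
        return np.where(a <= delta, 0.5 * t**2, delta * a - 0.5 * delta**2)

    def huber_grad(t, delta):
        """Its derivative, as used in penalized-likelihood gradients: clips large jumps."""
        return np.clip(t, -delta, delta)

    # Applied to neighboring-voxel differences, as in a roughness penalty R(x):
    x = np.array([0.00, 0.10, 0.15, 1.20, 1.25])   # an "edge" between indices 2 and 3
    d = np.diff(x)
    print(huber(d, delta=0.2))      # the edge is penalized linearly, not quadratically
    print(huber_grad(d, delta=0.2))
    ```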

  4. Adaptive Strategies in the Iterated Exchange Problem

    NASA Astrophysics Data System (ADS)

    Baraov, Arthur

    2011-03-01

    We argue for clear separation of the exchange problem from the exchange paradox to avoid confusion about the subject matter of these two distinct problems. The exchange problem in its current format belongs to the domain of optimal decision making—it doesn't make any sense as a game of competition. But it takes just a tiny modification in the statement of the problem to breathe new life into it and make it a practicable and meaningful game of competition. In this paper, we offer an explanation for paradoxical priors and discuss adaptive strategies for both the house and the player in the restated exchange problem.

  5. A successive overrelaxation iterative technique for an adaptive equalizer

    NASA Technical Reports Server (NTRS)

    Kosovych, O. S.

    1973-01-01

    An adaptive strategy for the equalization of pulse-amplitude-modulated signals in the presence of intersymbol interference and additive noise is reported. The successive overrelaxation iterative technique is used as the algorithm for the iterative adjustment of the equalizer coefficients during a training period for the minimization of the mean square error. With 2-cyclic and nonnegative Jacobi matrices, substantial improvement is demonstrated in the rate of convergence over the commonly used gradient techniques. The Jacobi theorems are also extended to nonpositive Jacobi matrices. Numerical examples strongly indicate that the improvements obtained for the special cases are possible for general channel characteristics. The technique is analytically demonstrated to decrease the mean square error at each iteration for a large range of parameter values for light or moderate intersymbol interference, and for small intervals for general channels. Analytically, convergence of the relaxation algorithm was proven in a noisy environment, and the coefficient variance was demonstrated to be bounded.
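
    For an MMSE equalizer, training amounts to solving normal equations R c = p (R the received-signal autocorrelation matrix, p the cross-correlation with the training symbols); a generic SOR solver for that system, not the paper's exact formulation, looks like this:

    ```python
    import numpy as np

    def sor(R, p, omega=1.5, n_iter=100):
        """Successive overrelaxation for R c = p (in-place Gauss-Seidel + relaxation)."""
        c = np.zeros_like(p)
        for _ in range(n_iter):
            for i in range(len(p)):
                sigma = R[i] @ c - R[i, i] * c[i]        # off-diagonal contribution
                c[i] = (1 - omega) * c[i] + omega * (p[i] - sigma) / R[i, i]
        return c

    # Toy symmetric positive definite autocorrelation and cross-correlation vector
    R = np.array([[1.0, 0.4, 0.1], [0.4, 1.0, 0.4], [0.1, 0.4, 1.0]])
    p = np.array([0.2, 1.0, 0.2])
    print(sor(R, p), np.linalg.solve(R, p))              # SOR iterate vs direct solve
    ```

    For 0 < omega < 2 and a symmetric positive definite R, SOR converges; the overrelaxation factor omega is what buys the speedup over plain gradient adjustment that the abstract reports.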

  6. Adaptively Tuned Iterative Low Dose CT Image Denoising.

    PubMed

    Hashemi, SayedMasoud; Paul, Narinder S; Beheshti, Soosan; Cobbold, Richard S C

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues, a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of the block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972

  7. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues, a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of the block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972
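
    The NCRE idea, adjusting the regularization until the denoising residual is statistically consistent with the assumed noise, is close in spirit to the classical discrepancy principle. A toy version with Gaussian smoothing standing in for the BM3D-based iteration (the filter, update rule, and constants are all illustrative assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    clean = np.outer(np.hanning(64), np.hanning(64))
    sigma_n = 0.05
    noisy = clean + rng.normal(0, sigma_n, clean.shape)

    strength = 2.0
    for _ in range(20):
        denoised = gaussian_filter(noisy, sigma=strength)
        resid_var = (noisy - denoised).var()
        # Residual variance above the noise level means signal is being removed,
        # so shrink the filter; below it, smooth harder (damped multiplicative step).
        strength *= (sigma_n**2 / resid_var) ** 0.25

    print(strength, resid_var, sigma_n**2)   # residual statistics ~ noise statistics
    ```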

  8. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize the Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.

  9. Iterative Re-Weighted Instance Transfer for Domain Adaptation

    NASA Astrophysics Data System (ADS)

    Paul, A.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    Domain adaptation techniques in transfer learning try to reduce the amount of training data required for classification by adapting a classifier trained on samples from a source domain to a new data set (target domain) where the features may have different distributions. In this paper, we propose a new technique for domain adaptation based on logistic regression. Starting with a classifier trained on training data from the source domain, we iteratively include target domain samples for which class labels have been obtained from the current state of the classifier, while at the same time removing source domain samples. In each iteration the classifier is re-trained, so that the decision boundaries are slowly transferred to the distribution of the target features. To make the transfer procedure more robust we introduce weights as a function of distance from the decision boundary and a new way of regularisation. Our methodology is evaluated using a benchmark data set consisting of aerial images and digital surface models. The experimental results show that in the majority of cases our domain adaptation approach can lead to an improvement of the classification accuracy without additional training data, but also indicate remaining problems if the difference in the feature distributions becomes too large.
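
    The iteration described above maps naturally onto a weighted self-training loop with scikit-learn's LogisticRegression; the swap schedule, confidence proxy, and synthetic shifted data below are illustrative assumptions rather than the paper's tuned procedure and regularisation.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def adapt(Xs, ys, Xt, n_iter=5, batch=20):
        """Iteratively swap confident target samples in and source samples out."""
        X, y, w = Xs.copy(), ys.copy(), np.ones(len(ys))
        clf = LogisticRegression().fit(X, y, sample_weight=w)
        for _ in range(n_iter):
            proba = clf.predict_proba(Xt)
            conf = proba.max(axis=1)                # distance-from-boundary proxy
            pick = np.argsort(conf)[-batch:]        # most confident target samples
            X = np.vstack([X[batch:], Xt[pick]])    # drop source, add pseudo-labeled
            y = np.concatenate([y[batch:], clf.predict(Xt)[pick]])
            w = np.concatenate([w[batch:], conf[pick]])   # weight by confidence
            clf = LogisticRegression().fit(X, y, sample_weight=w)
        return clf

    rng = np.random.default_rng(0)
    Xs = rng.normal(0, 1, (200, 2)); ys = (Xs[:, 0] > 0).astype(int)
    Xt = rng.normal(0.8, 1, (200, 2))               # shifted target distribution
    print(adapt(Xs, ys, Xt).score(Xt, (Xt[:, 0] > 0).astype(int)))
    ```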

  10. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of the ability to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein
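
    The inner loop of an iterative-transform retriever is the classic Gerchberg-Saxton exchange between pupil and focal planes, which is worth seeing in its barest form; the defocus-diversity outer loop that is the point of this work is omitted, and all sizes and statistics below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    yy, xx = np.meshgrid(*[np.linspace(-1, 1, n)] * 2)
    aperture = np.hypot(xx, yy) <= 1.0
    true_phase = rng.normal(0, 0.3, (n, n)) * aperture
    psf_amp = np.abs(np.fft.fft2(aperture * np.exp(1j * true_phase)))  # "measured"

    phase = np.zeros((n, n))
    for _ in range(200):
        field = aperture * np.exp(1j * phase)            # enforce pupil amplitude
        focal = np.fft.fft2(field)
        focal = psf_amp * np.exp(1j * np.angle(focal))   # enforce measured amplitude
        phase = np.angle(np.fft.ifft2(focal))            # keep the phase, repeat

    err = np.abs(np.fft.fft2(aperture * np.exp(1j * phase))) - psf_amp
    print(np.sqrt((err**2).mean()))            # focal-plane amplitude residual shrinks
    ```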

  11. Adaptive restoration of river terrace vegetation through iterative experiments

    USGS Publications Warehouse

    Dela Cruz, Michelle P.; Beauchamp, Vanessa B.; Shafroth, Patrick B.; Decker, Cheryl E.; O’Neil, Aviva

    2014-01-01

    Restoration projects can involve a high degree of uncertainty and risk, which can ultimately result in failure. An adaptive restoration approach can reduce uncertainty through controlled, replicated experiments designed to test specific hypotheses and alternative management approaches. Key components of adaptive restoration include willingness of project managers to accept the risk inherent in experimentation, interest of researchers, availability of funding for experimentation and monitoring, and ability to restore sites as iterative experiments where results from early efforts can inform the design of later phases. This paper highlights an ongoing adaptive restoration project at Zion National Park (ZNP), aimed at reducing the cover of exotic annual Bromus on riparian terraces, and revegetating these areas with native plant species. Rather than using a trial-and-error approach, ZNP staff partnered with academic, government, and private-sector collaborators to conduct small-scale experiments to explicitly address uncertainties concerning biomass removal of annual bromes, herbicide application rates and timing, and effective seeding methods for native species. Adaptive restoration has succeeded at ZNP because managers accept the risk inherent in experimentation and ZNP personnel are committed to continue these projects over a several-year period. Techniques that result in exotic annual Bromus removal and restoration of native plant species at ZNP can be used as a starting point for adaptive restoration projects elsewhere in the region.

  12. Krylov iterative methods and synthetic acceleration for transport in binary statistical media

    SciTech Connect

    Fichtl, Erin D; Warsa, James S; Prinja, Anil K

    2008-01-01

    In particle transport applications there are numerous physical constructs in which heterogeneities are randomly distributed. The quantity of interest in these problems is the ensemble average of the flux, or the average of the flux over all possible material 'realizations.' The Levermore-Pomraning closure assumes Markovian mixing statistics and allows a closed, coupled system of equations to be written for the ensemble averages of the flux in each material. Generally, binary statistical mixtures are considered, in which there are two (homogeneous) materials and corresponding coupled equations. The solution process is iterative, but convergence may be slow as either or both materials approach the diffusion and/or atomic mix limits. A three-part acceleration scheme is devised to expedite convergence, particularly in the atomic mix-diffusion limit where computation is extremely slow. The iteration is first divided into a series of 'inner' material and source iterations to attenuate the diffusion and atomic mix error modes separately. Secondly, atomic mix synthetic acceleration is applied to the inner material iteration and S₂ synthetic acceleration to the inner source iterations to offset the cost of doing several inner iterations per outer iteration. Finally, a Krylov iterative solver is wrapped around each iteration, inner and outer, to further expedite convergence. A spectral analysis is conducted, and iteration counts and computing cost for the new two-step scheme are compared against those for a simple one-step iteration, to which a Krylov iterative method can also be applied.
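
    Wrapping a Krylov solver around a fixed-point iteration amounts to handing the equivalent linear system to, e.g., GMRES. A toy comparison for x = Mx + b with a slowly converging mode (the diagonal stand-in for the transport sweep is purely illustrative):

    ```python
    import numpy as np
    from scipy.sparse.linalg import gmres, LinearOperator

    n = 200
    M = 0.999 * np.diag(np.linspace(0.1, 1.0, n))   # "scattering ratio" near 1
    b = np.ones(n)
    x_star = np.linalg.solve(np.eye(n) - M, b)      # exact fixed point

    x = np.zeros(n)                                 # plain source (Richardson) iteration
    for _ in range(2000):
        x = M @ x + b

    op = LinearOperator((n, n), matvec=lambda v: v - M @ v, dtype=float)
    x_gmres, info = gmres(op, b)                    # Krylov solve of (I - M) x = b
    print(np.abs(x - x_star).max(), np.abs(x_gmres - x_star).max())
    ```

    Even after 2000 sweeps the plain iteration remains far from the fixed point, while GMRES reaches it to the solver's tolerance in far fewer matrix-vector products, which is the motivation for the Krylov wrapping described above.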

  13. Iterative Monte Carlo with bead-adapted sampling for complex-time correlation functions.

    PubMed

    Jadhao, Vikram; Makri, Nancy

    2010-03-14

    In a recent communication [V. Jadhao and N. Makri, J. Chem. Phys. 129, 161102 (2008)], we introduced an iterative Monte Carlo (IMC) path integral methodology for calculating complex-time correlation functions. This method constitutes a stepwise evaluation of the path integral on a grid selected by a Monte Carlo procedure, circumventing the exponential growth of statistical error with increasing propagation time, while realizing the advantageous scaling of importance sampling in the grid selection and integral evaluation. In the present paper, we present an improved formulation of IMC, which is based on a bead-adapted sampling procedure, leading to grid point distributions that closely resemble the absolute value of the integrand at each iteration. We show that the statistical error of IMC does not grow upon repeated iteration, in sharp contrast to the performance of the conventional path integral approach which leads to exponential increase in statistical uncertainty. Numerical results on systems with up to 13 degrees of freedom and propagation up to 30 times the "thermal" time ℏβ/2 illustrate these features.

  14. Iterative Monte Carlo with bead-adapted sampling for complex-time correlation functions

    NASA Astrophysics Data System (ADS)

    Jadhao, Vikram; Makri, Nancy

    2010-03-01

    In a recent communication [V. Jadhao and N. Makri, J. Chem. Phys. 129, 161102 (2008)], we introduced an iterative Monte Carlo (IMC) path integral methodology for calculating complex-time correlation functions. This method constitutes a stepwise evaluation of the path integral on a grid selected by a Monte Carlo procedure, circumventing the exponential growth of statistical error with increasing propagation time, while realizing the advantageous scaling of importance sampling in the grid selection and integral evaluation. In the present paper, we present an improved formulation of IMC, which is based on a bead-adapted sampling procedure, leading to grid point distributions that closely resemble the absolute value of the integrand at each iteration. We show that the statistical error of IMC does not grow upon repeated iteration, in sharp contrast to the performance of the conventional path integral approach which leads to exponential increase in statistical uncertainty. Numerical results on systems with up to 13 degrees of freedom and propagation up to 30 times the "thermal" time ℏβ/2 illustrate these features.

  15. Adaptive iterated extended Kalman filter and its application to autonomous integrated navigation for indoor robot.

    PubMed

    Xu, Yuan; Chen, Xiyuan; Li, Qinghua

    2014-01-01

    As the core of an integrated navigation system, the data fusion algorithm must be designed carefully. To improve the accuracy of data fusion, this work proposes an adaptive iterated extended Kalman filter (AIEKF), which uses a noise statistics estimator within the iterated extended Kalman filter (IEKF); the AIEKF is then used to deal with the nonlinear problem in an inertial navigation system (INS)/wireless sensor network (WSN)-integrated navigation system. A practical test has been performed to evaluate the performance of the proposed method. The results show that the proposed method is effective in reducing the root-mean-square error (RMSE) of the position, by about 92.53%, 67.93%, 55.97%, and 30.09% compared with INS only, WSN only, the EKF, and the IEKF, respectively.

  16. Adaptive Iterated Extended Kalman Filter and Its Application to Autonomous Integrated Navigation for Indoor Robot

    PubMed Central

    Chen, Xiyuan; Li, Qinghua

    2014-01-01

    As the core of an integrated navigation system, the data fusion algorithm must be designed carefully. To improve the accuracy of data fusion, this work proposes an adaptive iterated extended Kalman filter (AIEKF), which uses a noise statistics estimator within the iterated extended Kalman filter (IEKF); the AIEKF is then used to deal with the nonlinear problem in an inertial navigation system (INS)/wireless sensor network (WSN)-integrated navigation system. A practical test has been performed to evaluate the performance of the proposed method. The results show that the proposed method is effective in reducing the root-mean-square error (RMSE) of the position, by about 92.53%, 67.93%, 55.97%, and 30.09% compared with INS only, WSN only, the EKF, and the IEKF, respectively. PMID:24693225
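
    The IEKF measurement update that these papers build on simply relinearizes h(·) around each new iterate; the adaptive part (online noise statistics estimation) is omitted in this bare sketch, and the range-to-beacon measurement model is an illustrative assumption.

    ```python
    import numpy as np

    def iekf_update(x, P, z, h, H_jac, R, n_iter=5):
        """Iterated EKF measurement update, relinearizing around each iterate."""
        xi = x.copy()
        for _ in range(n_iter):
            H = H_jac(xi)
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            # IEKF form: innovation at xi, correction referenced to the prior x
            xi = x + K @ (z - h(xi) - H @ (x - xi))
        P_new = (np.eye(len(x)) - K @ H_jac(xi)) @ P
        return xi, P_new

    # 2D position state, scalar range measurement to a beacon at the origin
    h = lambda x: np.array([np.hypot(x[0], x[1])])
    H_jac = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])
    x0, P0 = np.array([3.0, 4.1]), np.eye(2) * 0.5
    z, R = np.array([5.2]), np.eye(1) * 0.01
    print(iekf_update(x0, P0, z, h, H_jac, R))
    ```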

  17. Investigation of statistical iterative reconstruction for dedicated breast CT

    SciTech Connect

    Makeev, Andrey; Glick, Stephen J.

    2013-08-15

    Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: the hyperbolic potential and the anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model the attenuation properties of the uncompressed breast in a pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge-preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low noise images with high contrast microcalcifications preserved. In terms of the numerical observer detectability index, the PML method with optimal parameters yielded substantially improved

  18. Finite-approximation-error-based discrete-time iterative adaptive dynamic programming.

    PubMed

    Wei, Qinglai; Wang, Fei-Yue; Liu, Derong; Yang, Xiong

    2014-12-01

    In this paper, a new iterative adaptive dynamic programming (ADP) algorithm is developed to solve optimal control problems for infinite horizon discrete-time nonlinear systems with finite approximation errors. First, a new generalized value iteration algorithm of ADP is developed to make the iterative performance index function converge to the solution of the Hamilton-Jacobi-Bellman equation. The generalized value iteration algorithm permits an arbitrary positive semi-definite function to initialize it, which overcomes the disadvantage of traditional value iteration algorithms. For the case where the iterative control law and iterative performance index function cannot be obtained accurately in each iteration, a new "design method of the convergence criteria" for the finite-approximation-error-based generalized value iteration algorithm is established for the first time. A suitable approximation error can be designed adaptively to make the iterative performance index function converge to a finite neighborhood of the optimal performance index function. Neural networks are used to implement the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the developed method. PMID:25265640
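
    The structure of generalized value iteration can be illustrated with a tabular stand-in, sketched below under simplifying assumptions: a small finite, discounted problem replaces the paper's undiscounted nonlinear system, the tolerance eps stands in for the designed approximation-error bound, and the neural-network approximation is omitted.

    ```python
    import numpy as np

    def generalized_value_iteration(P, U, gamma=0.95, V0=None, eps=1e-2, max_iter=500):
        """Tabular sketch: P[a] is the transition matrix under action a,
        U[s, a] the stage cost, V0 an arbitrary positive semi-definite
        initial value function (the relaxation the paper emphasizes)."""
        n, m = U.shape
        V = np.zeros(n) if V0 is None else V0.copy()
        for _ in range(max_iter):
            Q = np.stack([U[:, a] + gamma * (P[a] @ V) for a in range(m)], axis=1)
            V_next = Q.min(axis=1)                 # minimize cost-to-go
            if np.max(np.abs(V_next - V)) < eps:   # converged to a neighborhood of V*
                return V_next
            V = V_next
        return V
    ```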

  19. Performance Enhancement for a GPS Vector-Tracking Loop Utilizing an Adaptive Iterated Extended Kalman Filter

    PubMed Central

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-01-01

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system subject to model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error and, based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated extended Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Road tests show that the proposed method has a clear accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method effectively reduces the root-mean-square error (RMSE) of position (including longitude, latitude, and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9%, and 54.6% in the east, north, and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3%, and 35.7% in the east, north, and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5%, and 30.7% in the east, north, and up directions, respectively. PMID:25502124
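
    The noise statistics estimator that makes the filter adaptive is typically a Sage-Husa-type recursion; a minimal sketch of the measurement-noise part is given below. The fading factor b is a hypothetical value, and the authors' exact estimator may differ in detail.

    ```python
    import numpy as np

    def update_measurement_noise(R_prev, innovation, H, P_pred, k, b=0.96):
        """Sage-Husa-style recursive estimate of the measurement-noise
        covariance R; the fading weight d_k emphasizes recent innovations,
        letting the filter track time-varying noise statistics."""
        d_k = (1.0 - b) / (1.0 - b**(k + 1))
        resid = np.outer(innovation, innovation) - H @ P_pred @ H.T
        return (1.0 - d_k) * R_prev + d_k * resid
    ```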

  1. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. It is proven that, initialized by different initial functions, the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic, and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and to compute the iterative control law, respectively, to facilitate the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.

  2. ITER

    NASA Astrophysics Data System (ADS)

    Iotti, Robert

    2015-04-01

    ITER is an international experimental facility being built by seven Parties to demonstrate the long-term potential of fusion energy. The ITER Joint Implementation Agreement (JIA) defines the structure and governance model of such cooperation. There are a number of necessary conditions for such international projects to be successful: a complete design, strong systems engineering working with an agreed set of requirements, an experienced organization with systems and plans in place to manage the project, a cost estimate backed by industry, and someone in charge. Unfortunately for ITER, many of these conditions were not present. The paper discusses the priorities in the JIA which led to setting up the project with a Central Integrating Organization (IO) in Cadarache, France as the ITER HQ, and seven Domestic Agencies (DAs) located in the countries of the Parties, responsible for delivering 90%+ of the project hardware as Contributions-in-Kind and also financial contributions to the IO as "Contributions-in-Cash." Theoretically the Director General (DG) is responsible for everything. In practice the DG does not have the power to control the work of the DAs, and there is not an effective management structure enabling the IO and the DAs to arbitrate disputes, so the project is not really managed, but is a loose collaboration of competing interests. Any DA can effectively block a decision reached by the DG. Inefficiencies in completing the design while setting up a competent organization from scratch contributed to the delays and cost increases during the initial few years. So did the fact that the original estimate was not developed from industry input. Unforeseen inflation and market demand on certain commodities/materials further exacerbated the cost increases. Since then, improvements are debatable. Does this mean that the governance model of ITER is a wrong model for international scientific cooperation? I do not believe so. Had the necessary conditions for success

  3. Statistical iterative reconstruction using fast optimization transfer algorithm with successively increasing factor in Digital Breast Tomosynthesis

    NASA Astrophysics Data System (ADS)

    Xu, Shiyu; Zhang, Zhenxi; Chen, Ying

    2014-03-01

    Statistical iterative reconstruction is particularly promising because it provides the flexibility of accurate physical noise modeling and geometric system description in transmission tomography systems. However, solving the objective function is computationally intensive compared with analytical reconstruction methods, because multiple iterations are needed for convergence and each iteration involves forward/back-projections with a complex geometric system model. Optimization transfer (OT) is a general algorithm converting a high-dimensional optimization into parallel 1-D updates. OT-based algorithms provide monotonic convergence and a parallel computing framework, but a slower convergence rate, especially near the global optimum. Based on an indirect estimate of the spectrum of the OT convergence rate matrix, we proposed a successively increasing factor-scaled optimization transfer algorithm to seek an optimal step size for a faster rate. Compared to a representative OT-based method such as separable parabolic surrogates with precomputed curvature (PC-SPS), our algorithm provides comparable image quality (IQ) with fewer iterations, while each iteration retains a computational cost similar to that of PC-SPS. An initial experiment with a simulated Digital Breast Tomosynthesis (DBT) system shows that the proposed algorithm saves 40% of the total computing time. In general, the successively increasing factor-scaled OT shows great potential as an iterative method with parallel computation and monotonic, global convergence at a fast rate.
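
    The OT update itself is a parallel, per-voxel step: each voxel moves against the gradient, divided by a precomputed surrogate curvature, and a scale factor larger than one enlarges the step. The sketch below is illustrative only; the variable names and the geometric growth schedule are assumptions, not the paper's tuned values.

    ```python
    import numpy as np

    def scaled_sps_step(x, grad, curvature, factor):
        """One separable-surrogate update with a step-scaling factor.
        x: current image; grad: objective gradient at x;
        curvature: per-voxel surrogate curvatures (PC-SPS style);
        factor: successively increasing scale (> 1) for acceleration."""
        return np.maximum(x - factor * grad / curvature, 0.0)  # keep non-negativity

    # Hypothetical schedule: grow the factor geometrically across iterations
    factors = 1.05 ** np.arange(40)
    ```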

  4. Policy iteration adaptive dynamic programming algorithm for discrete-time nonlinear systems.

    PubMed

    Liu, Derong; Wei, Qinglai

    2014-03-01

    This paper is concerned with a new discrete-time policy iteration adaptive dynamic programming (ADP) method for solving the infinite horizon optimal control problem of nonlinear systems. The idea is to use an iterative ADP technique to obtain the iterative control law, which optimizes the iterative performance index function. The main contribution of this paper is to analyze the convergence and stability properties of the policy iteration method for discrete-time nonlinear systems for the first time. It is shown that the iterative performance index function converges nonincreasingly to the optimal solution of the Hamilton-Jacobi-Bellman equation. It is also proven that any of the iterative control laws can stabilize the nonlinear system. Neural networks are used to approximate the performance index function and to compute the optimal control law, respectively, to facilitate the implementation of the iterative ADP algorithm, and the convergence of the weight matrices is analyzed. Finally, numerical results and analysis are presented to illustrate the performance of the developed method. PMID:24807455
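
    The evaluate/improve structure of policy iteration can be shown with a tabular stand-in, sketched below under simplifying assumptions (a finite, discounted problem in place of the paper's neural-network implementation for nonlinear systems):

    ```python
    import numpy as np

    def policy_iteration(P, U, gamma=0.95):
        """Tabular policy iteration: P[a] is the transition matrix under
        action a, U[s, a] the stage cost. Evaluation solves a linear system;
        improvement is greedy, yielding the nonincreasing sequence of
        performance index functions described above."""
        n, m = U.shape
        policy = np.zeros(n, dtype=int)  # stands in for an initial admissible policy
        while True:
            P_pi = np.stack([P[policy[s]][s] for s in range(n)])
            U_pi = U[np.arange(n), policy]
            V = np.linalg.solve(np.eye(n) - gamma * P_pi, U_pi)  # policy evaluation
            Q = np.stack([U[:, a] + gamma * (P[a] @ V) for a in range(m)], axis=1)
            new_policy = Q.argmin(axis=1)                        # policy improvement
            if np.array_equal(new_policy, policy):
                return V, policy
            policy = new_policy
    ```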

  5. Statistical iterative reconstruction algorithm for X-ray phase-contrast CT

    PubMed Central

    Hahn, Dieter; Thibault, Pierre; Fehringer, Andreas; Bech, Martin; Koehler, Thomas; Pfeiffer, Franz; Noël, Peter B.

    2015-01-01

    Grating-based phase-contrast computed tomography (PCCT) is a promising imaging tool on the horizon for pre-clinical and clinical applications. Until now, PCCT has been plagued by strong artifacts when dense materials like bones are present. In this paper, we present a new statistical iterative reconstruction algorithm which overcomes this limitation. It makes use of the fact that an X-ray interferometer provides a conventional absorption signal as well as a dark-field signal in addition to the phase-contrast signal. The method is based on a statistical iterative reconstruction algorithm utilizing maximum-a-posteriori principles and integrating the statistical properties of the raw data as well as information on dense objects gained from the absorption signal. Reconstruction of a pre-clinical mouse scan illustrates that artifacts caused by bones are significantly reduced and image quality is improved when employing our approach. In particular, small structures, which are usually lost because of streaks, are recovered in our results. In comparison with current state-of-the-art algorithms, our approach provides significantly improved image quality with respect to quantitative and qualitative results. In summary, we expect our new statistical iterative reconstruction method to increase the general usability of PCCT imaging for medical diagnosis beyond applications focused solely on soft tissue visualization. PMID:26067714

  7. Direct adaptive iterative learning control of nonlinear systems using an output-recurrent fuzzy neural network.

    PubMed

    Wang, Ying-Chung; Chien, Chiang-Ju; Teng, Ching-Cheng

    2004-06-01

    In this paper, a direct adaptive iterative learning control (DAILC) based on a new output-recurrent fuzzy neural network (ORFNN) is presented for a class of repeatable nonlinear systems with unknown nonlinearities and variable initial resetting errors. In order to overcome the design difficulty due to initial state errors at the beginning of each iteration, a concept of a time-varying boundary layer is employed to construct an error equation. The learning controller is then designed by using the given ORFNN to approximate an optimal equivalent controller. Some auxiliary control components are applied to eliminate the approximation error and ensure learning convergence. Since the optimal ORFNN parameters for a best approximation are generally unavailable, an adaptive algorithm with a projection mechanism is derived to update all the consequent, premise, and recurrent parameters during the iteration process. Only one network is required to design the ORFNN-based DAILC, and the plant nonlinearities, especially the nonlinear input gain, are allowed to be totally unknown. Based on a Lyapunov-like analysis, we show that all adjustable parameters and internal signals remain bounded for all iterations. Furthermore, the norm of the state tracking error vector asymptotically converges to a tunable residual set as the iteration number goes to infinity. Finally, iterative learning control of two nonlinear systems, an inverted pendulum system and Chua's chaotic circuit, is performed to verify the tracking performance of the proposed learning scheme.
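
    The iteration-domain structure underlying such schemes can be sketched with a plain P-type learning update on a hypothetical linear plant; the ORFNN approximator, boundary layer, and projection mechanism of the paper are deliberately omitted.

    ```python
    import numpy as np

    def plant(u, a=0.9, b=0.5):
        """Hypothetical first-order discrete-time plant, used only for illustration."""
        y = np.zeros_like(u)
        x = 0.0
        for t in range(len(u) - 1):
            x = a * x + b * u[t]
            y[t + 1] = x
        return y

    def ilc_trial(u, reference, kp=0.8):
        """One repetition: reuse last trial's input, corrected by that trial's error."""
        e = reference - plant(u)
        u_next = u.copy()
        u_next[:-1] += kp * e[1:]   # one-step lead matches the plant's unit delay
        return u_next, e

    # Repeating the task drives the tracking error down along the iteration axis
    T = 50
    ref = np.sin(np.linspace(0.0, 2.0 * np.pi, T))
    u = np.zeros(T)
    for _ in range(30):
        u, e = ilc_trial(u, ref)
    ```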

  8. Non-iterative adaptive optical microscopy using wavefront sensing

    NASA Astrophysics Data System (ADS)

    Tao, X.; Azucena, O.; Kubby, J.

    2016-03-01

    This paper will review the development of wide-field and confocal microscopes with wavefront sensing and adaptive optics for correcting refractive aberrations and compensating scattering when imaging through thick tissues (Drosophila embryos and mouse brain tissue). To make wavefront measurements in biological specimens we have modified the laser guide-star techniques used in astronomy for measuring wavefront aberrations that occur as star light passes through Earth's turbulent atmosphere. Here sodium atoms in Earth's mesosphere, at an altitude of 95 km, are excited to fluoresce at resonance by a high-power sodium laser. The fluorescent light creates a guide-star reference beacon at the top of the atmosphere that can be used for measuring wavefront aberrations that occur as the light passes through the atmosphere. We have developed a related approach for making wavefront measurements in biological specimens using cellular structures labeled with fluorescent proteins as laser guide-stars. An example is a fluorescently labeled centrosome in a fruit fly embryo or neurons and dendrites in mouse brains. Using adaptive optical microscopy we show that the Strehl ratio, the ratio of the peak intensity of an aberrated point source relative to the diffraction limited image, can be improved by an order of magnitude when imaging deeply into live dynamic specimens, enabling near diffraction limited deep tissue imaging.

  9. Adaptive iterative learning control for a class of non-linearly parameterised systems with input saturations

    NASA Astrophysics Data System (ADS)

    Zhang, Ruikun; Hou, Zhongsheng; Ji, Honghai; Yin, Chenkun

    2016-04-01

    In this paper, an adaptive iterative learning control scheme is proposed for a class of non-linearly parameterised systems with unknown time-varying parameters and input saturations. By incorporating a saturation function, a new iterative learning control mechanism is presented which includes a feedback term and a parameter updating term. Through the use of the parameter separation technique, the non-linear parameters are separated from the non-linear function, and a saturated difference updating law is then designed in the iteration domain by combining the unknown parametric term of the locally Lipschitz continuous function and the unknown time-varying gain into an unknown time-varying function. The convergence analysis is based on a time-weighted Lyapunov-Krasovskii-like composite energy function which consists of time-weighted input, state and parameter estimation information. The proposed learning control mechanism guarantees L2[0, T] convergence of the tracking error sequence along the iteration axis. Simulation results are provided to illustrate the effectiveness of the adaptive iterative learning control scheme.

  10. GOSIM: A multi-scale iterative multiple-point statistics algorithm with global optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Hou, Weisheng; Cui, Chanjie; Cui, Jie

    2016-04-01

    Most current multiple-point statistics (MPS) algorithms are based on a sequential simulation procedure, during which grid values are updated according to local data events. Because the realization is updated only once during the sequential process, errors that occur while updating data events cannot be corrected, and error accumulation during simulation decreases the realization quality. Aimed at improving simulation quality, this study presents an MPS algorithm based on global optimization, called GOSIM. An objective function is defined in GOSIM to represent the dissimilarity between a realization and the training image (TI); it is minimized by a multi-scale EM-like iterative method that contains an E-step and an M-step in each iteration. The E-step searches for TI patterns that are most similar to the realization and match the conditioning data; a modified PatchMatch algorithm is used to accelerate this search. The M-step updates the realization based on the most similar patterns found in the E-step and matches the global statistics of the TI. For categorical data simulation, k-means clustering is used to transform the obtained continuous realization into a categorical realization. Qualitative and quantitative comparisons of GOSIM, MS-CCSIM and SNESIM suggest that GOSIM has better pattern reproduction ability for both unconditional and conditional simulations. A sensitivity analysis illustrates that pattern size significantly impacts time costs and simulation quality. In conditional simulations, the weights of conditioning data should be as small as possible to maintain good simulation quality. The study shows that large iteration counts at coarser scales increase simulation quality, while small iteration counts at finer scales significantly reduce simulation time.

  11. Iterative Robust Capon Beamforming with Adaptively Updated Array Steering Vector Mismatch Levels

    PubMed Central

    Sun, Liguo

    2014-01-01

    The performance of the conventional adaptive beamformer is sensitive to array steering vector (ASV) mismatch, and the output signal-to-interference-plus-noise ratio (SINR) deteriorates, especially in the presence of large direction-of-arrival (DOA) errors. To improve the robustness of the traditional approach, we propose a new approach that iteratively searches for the ASV of the desired signal based on the robust Capon beamformer (RCB) with adaptively updated uncertainty levels, which are derived as a quadratically constrained quadratic programming (QCQP) problem based on subspace projection theory. The estimated levels in this iterative beamformer exhibit a decreasing trend. Additionally, other array imperfections also degrade the performance of beamformers in practice. To cover several kinds of mismatches together, adaptive flat ellipsoid models are introduced in our method and kept as tight as possible. In the simulations, our beamformer is compared with other methods and its excellent performance is demonstrated via numerical examples. PMID:27355008

  12. Design of computer-generated beam-shaping holograms by iterative finite-element mesh adaption.

    PubMed

    Dresel, T; Beyerlein, M; Schwider, J

    1996-12-10

    Computer-generated phase-only holograms can be used for laser beam shaping, i.e., for focusing a given aperture with given intensity and phase distributions into a prescribed intensity pattern in the focal plane. A numerical approach based on iterative finite-element mesh adaption permits the design of appropriate phase functions for the task of focusing into two-dimensional reconstruction patterns. Both the hologram aperture and the reconstruction pattern are covered by mesh mappings. An iterative procedure delivers meshes with intensities equally distributed over the constituent elements. This design algorithm adds new elementary focuser functions to what we call object-oriented hologram design. Some design examples are discussed.

  13. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extracting this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions; all local parallelism can be extracted by this approach. Second, although there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  14. Practical improvements of multi-grid iteration for adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Miyashita, Hisashi; Yamada, Yoshiyuki

    2005-03-01

    Adaptive mesh refinement (AMR) is a powerful tool for efficiently solving multi-scaled problems. However, the vanilla AMR method has a well-known critical limitation: it cannot be applied to non-local problems. Although multi-grid iteration (MGI) can be regarded as a good remedy for a non-local problem such as the Poisson equation, we observed fundamental difficulties in applying the MGI technique within AMR to realistic problems under complicated mesh layouts, because it does not converge, or it requires too many iterations even when it does converge. To cope with this problem, when updating the next approximation in the MGI process, we calculate precise total corrections that are relatively accurate with respect to the current residual by introducing a new iteration for such total corrections. This procedure greatly accelerates the MGI convergence speed, especially under complicated mesh layouts.

  15. Bias in iterative reconstruction of low-statistics PET data: benefits of a resolution model

    NASA Astrophysics Data System (ADS)

    Walker, M. D.; Asselin, M.-C.; Julyan, P. J.; Feldmann, M.; Talbot, P. S.; Jones, T.; Matthews, J. C.

    2011-02-01

    Iterative image reconstruction methods such as ordered-subset expectation maximization (OSEM) are widely used in PET. Reconstructions via OSEM are however reported to be biased for low-count data. We investigated this and considered the impact for dynamic PET. Patient listmode data were acquired in [11C]DASB and [15O]H2O scans on the HRRT brain PET scanner. These data were subsampled to create many independent, low-count replicates. The data were reconstructed and the images from low-count data were compared to the high-count originals (from the same reconstruction method). This comparison enabled low-statistics bias to be calculated for the given reconstruction, as a function of the noise-equivalent counts (NEC). Two iterative reconstruction methods were tested, one with and one without an image-based resolution model (RM). Significant bias was observed when reconstructing data of low statistical quality, for both subsampled human and simulated data. For human data, this bias was substantially reduced by including a RM. For [11C]DASB the low-statistics bias in the caudate head at 1.7 M NEC (approx. 30 s) was -5.5% and -13% with and without RM, respectively. We predicted biases in the binding potential of -4% and -10%. For quantification of cerebral blood flow for the whole-brain grey- or white-matter, using [15O]H2O and the PET autoradiographic method, a low-statistics bias of <2.5% and <4% was predicted for reconstruction with and without the RM. The use of a resolution model reduces low-statistics bias and can hence be beneficial for quantitative dynamic PET.
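
    OSEM is built on the MLEM multiplicative update, and an image-based resolution model enters as a blur applied around the projector. A minimal dense-matrix sketch follows; the system matrix A, the counts y, and the (assumed symmetric) blur operator are placeholders, and OSEM simply applies the same update over subsets of the data.

    ```python
    import numpy as np

    def mlem_iteration(x, A, y, blur=None, eps=1e-12):
        """One MLEM update with an optional image-space resolution model."""
        x_eff = blur(x) if blur else x
        y_est = A @ x_eff + eps                        # forward projection
        ratio = A.T @ (y / y_est)                      # back projection of count ratio
        correction = blur(ratio) if blur else ratio    # transpose of blur (symmetric)
        sens = A.T @ np.ones_like(y)                   # sensitivity image
        sens = blur(sens) if blur else sens
        return x * correction / (sens + eps)           # multiplicative, keeps x >= 0
    ```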

  16. Distributed adaptive fuzzy iterative learning control of coordination problems for higher order multi-agent systems

    NASA Astrophysics Data System (ADS)

    Li, Jinsha; Li, Junmin

    2016-07-01

    In this paper, an adaptive fuzzy iterative learning control scheme is proposed for coordination problems of Mth order (M ≥ 2) distributed multi-agent systems. Every follower agent is a higher order integrator with unknown nonlinear dynamics and input disturbance. The dynamics of the leader are a higher order nonlinear system and are available to only a portion of the follower agents. With distributed initial state learning, the unified distributed protocols, combining time-domain and iteration-domain adaptive laws, guarantee that the follower agents track the leader uniformly on [0, T]. The proposed algorithm is then extended to achieve formation control. A numerical example and a multiple robotic system are provided to demonstrate the performance of the proposed approach.

  17. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis

    PubMed Central

    Xu, Shiyu; Lu, Jianping; Zhou, Otto; Chen, Ying

    2015-01-01

    Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications. PMID:26328987

  18. Adaptive approximation of higher order posterior statistics

    SciTech Connect

    Lee, Wonjung

    2014-02-01

    Filtering is an approach for incorporating observed data into time-evolving systems. Instead of the family of Dirac delta masses that is widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization of the conditioned probability distribution to solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables onto a fixed polynomial basis spanning the probability space might be a competitive representation in the presence of relatively frequent observations, because the Wiener chaos approach not only leads to accurate and efficient prediction for short-time uncertainty quantification, but also allows several data assimilation methods to be applied that can yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system, based on numerical simulations in which the uncertainty quantification method and the data assimilation method are adaptively selected according to whether the dynamics are driven by Brownian motion and to the near-Gaussianity of the measure to be updated, respectively.

  19. Comparison of image quality from filtered back projection, statistical iterative reconstruction, and model-based iterative reconstruction algorithms in abdominal computed tomography

    PubMed Central

    Kuo, Yu; Lin, Yi-Yang; Lee, Rheun-Chuan; Lin, Chung-Jung; Chiou, Yi-You; Guo, Wan-Yuo

    2016-01-01

    The purpose of this study was to compare the image noise-reducing abilities of iterative model reconstruction (IMR) with those of traditional filtered back projection (FBP) and statistical iterative reconstruction (IR) in abdominal computed tomography (CT) images. This institutional review board-approved retrospective study enrolled 103 patients; informed consent was waived. Urinary bladder (n = 83) and renal cysts (n = 44) were used as targets for evaluating imaging quality. Raw data were retrospectively reconstructed using FBP, statistical IR, and IMR. Objective image noise and signal-to-noise ratio (SNR) were calculated and analyzed using one-way analysis of variance. Subjective image quality was evaluated and analyzed using the Wilcoxon signed-rank test with Bonferroni correction. Objective analysis revealed a reduction in image noise for statistical IR compared with that for FBP, with no significant differences in SNR. In the urinary bladder group, IMR achieved up to 53.7% noise reduction, demonstrating a performance superior to that of statistical IR. IMR also yielded a significantly superior SNR to that of statistical IR. Similar results were obtained in the cyst group. Subjective analysis revealed reduced image noise for IMR, without inferior margin delineation or diagnostic confidence. IMR reduced noise and increased SNR to greater degrees than did FBP and statistical IR. Applying the IMR technique to abdominal CT imaging has potential for reducing the radiation dose without sacrificing imaging quality. PMID:27495078

  1. Enhancement and bias removal of optical coherence tomography images: An iterative approach with adaptive bilateral filtering.

    PubMed

    Sudeep, P V; Issac Niwas, S; Palanisamy, P; Rajan, Jeny; Xiaojun, Yu; Wang, Xianghong; Luo, Yuemei; Liu, Linbo

    2016-04-01

    Optical coherence tomography (OCT) has continually evolved and expanded as one of the most valuable routine tests in ophthalmology. However, speckle noise in the acquired images degrades image quality and makes the images difficult to analyze. In this paper, an iterative approach based on bilateral filtering is proposed for speckle reduction in multiframe OCT data. A gamma noise model is assumed for the observed OCT image. First, an adaptive version of the conventional bilateral filter is applied to enhance the multiframe OCT data, and the noise-induced bias is then removed from each of the filtered frames. These unbiased filtered frames are then refined using an iterative approach. Finally, the refined frames are averaged to produce the denoised OCT image. Experimental results on phantom images and real OCT retinal images demonstrate the effectiveness of the proposed filter. PMID:26907572
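
    The core of the scheme is a bilateral filter applied repeatedly across frames before averaging. A brute-force sketch is given below; the paper's adaptive parameter selection and gamma-noise bias correction are omitted, and the parameter values are placeholders.

    ```python
    import numpy as np

    def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
        """Brute-force bilateral filter: each weight combines spatial closeness
        and intensity similarity, so edges are preserved while noise is smoothed."""
        pad = np.pad(img, radius, mode='reflect')
        num = np.zeros_like(img, dtype=float)
        den = np.zeros_like(img, dtype=float)
        h, w = img.shape
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
                weight = (np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
                          * np.exp(-(shifted - img)**2 / (2 * sigma_r**2)))
                num += weight * shifted
                den += weight
        return num / den

    def iterative_multiframe_denoise(frames, n_iter=3):
        """Filter each frame repeatedly, then average the refined frames."""
        for _ in range(n_iter):
            frames = [bilateral_filter(f) for f in frames]
        return np.mean(frames, axis=0)
    ```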

  2. Paradigms for adaptive statistical information designs: practical experiences and strategies.

    PubMed

    Wang, Sue-Jane; Hung, H M James; O'Neill, Robert

    2012-11-10

    In the last decade or so, interest in adaptive design clinical trials has gradually been directed towards their use in regulatory submissions by pharmaceutical drug sponsors to evaluate investigational new drugs. Methodological advances in adaptive designs have been abundant in the statistical literature since the 1970s. The adaptive design paradigm has been enthusiastically perceived to increase efficiency and to be more cost-effective than the fixed design paradigm for drug development. Much of the interest in adaptive designs is in two-stage studies, where stage 1 is exploratory and stage 2 depends upon stage 1 results, but where the data of both stages will be combined to yield statistical evidence for use as that of a pivotal registration trial. It was not until the recent release of the US Food and Drug Administration Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics (2010) that the boundaries of flexibility for adaptive designs were specifically considered for regulatory purposes, including what are exploratory goals and what are the goals of adequate and well-controlled (A&WC) trials (2002). The guidance carefully described these distinctions in an attempt to minimize the confusion between the goals of the preliminary learning phases of drug development, which are inherently substantially uncertain, and the definitive inference-based phases of drug development. In this paper, in addition to discussing some aspects of adaptive designs in a confirmatory study setting, we underscore the value of adaptive designs when used in exploratory trials to improve the planning of subsequent A&WC trials. One type of adaptation that is receiving attention is the re-estimation of the sample size during the course of the trial. We refer to this type of adaptation as an adaptive statistical information design. Specifically, a case example is used to illustrate how challenging it is to plan a confirmatory adaptive statistical information

  3. Applying statistical process control to the adaptive rate control problem

    NASA Astrophysics Data System (ADS)

    Manohar, Nelson R.; Willebeek-LeMair, Marc H.; Prakash, Atul

    1997-12-01

    Due to the heterogeneity and shared resource nature of today's computer network environments, the end-to-end delivery of multimedia requires adaptive mechanisms to be effective. We present a framework for the adaptive streaming of heterogeneous media. We introduce the application of online statistical process control (SPC) to the problem of dynamic rate control. In SPC, the goal is to establish (and preserve) a state of statistical quality control (i.e., controlled variability around a target mean) over a process. We consider the end-to-end streaming of multimedia content over the internet as the process to be controlled. First, at each client, we measure process performance and apply statistical quality control (SQC) with respect to application-level requirements. Then, we guide an adaptive rate control (ARC) problem at the server based on the statistical significance of trends and departures on these measurements. We show this scheme facilitates handling of heterogeneous media. Last, because SPC is designed to monitor long-term process performance, we show that our online SPC scheme could be used to adapt to various degrees of long-term (network) variability (i.e., statistically significant process shifts as opposed to short-term random fluctuations). We develop several examples and analyze its statistical behavior and guarantees.
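
    A Shewhart-style control chart makes the idea concrete: only delay measurements falling outside the control limits (mean plus or minus three standard deviations over a recent window) are treated as statistically significant shifts that justify a rate change. The sketch below is a simplified illustration with hypothetical names and step sizes, not the paper's full scheme.

    ```python
    import numpy as np

    def spc_rate_control(delays, rate, step=0.1, window=30):
        """Adapt the sending rate only on statistically significant shifts."""
        recent = np.asarray(delays[-window:])
        mean, sigma = recent.mean(), recent.std(ddof=1)
        latest = delays[-1]
        if latest > mean + 3.0 * sigma:     # significant degradation: back off
            rate *= 1.0 - step
        elif latest < mean - 3.0 * sigma:   # significant improvement: probe upward
            rate *= 1.0 + step
        return rate                         # in-control points leave the rate alone
    ```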

  4. An adaptive inverse iteration algorithm using interpolating multiwavelets for structural eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Wang, Youming; Chen, Xuefeng; He, Zhengjia

    2011-02-01

    Structural eigenvalues are broadly applied in modal analysis, damage detection, vibration control, etc. In this paper, interpolating multiwavelets are custom designed based on the stable completion method to solve structural eigenvalue problems. The operator-orthogonality of the interpolating multiwavelets gives rise to highly sparse multilevel stiffness and mass matrices and permits the incremental computation of the eigenvalue solution in an efficient manner. An adaptive inverse iteration algorithm using the interpolating multiwavelets is presented to solve structural eigenvalue problems. Numerical examples validate the accuracy and efficiency of the proposed algorithm.
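
    Classical inverse iteration, which the paper accelerates with adaptive multiwavelet discretizations, is sketched below in plain dense linear algebra for the generalized problem K v = lambda M v; the shift and tolerance are illustrative values.

    ```python
    import numpy as np

    def inverse_iteration(K, M, shift=0.0, n_iter=50, tol=1e-10):
        """Inverse iteration for K v = lambda M v: solve, normalize, repeat;
        the Rayleigh quotient supplies the eigenvalue estimate."""
        rng = np.random.default_rng(0)
        v = rng.standard_normal(K.shape[0])
        v /= np.linalg.norm(v)
        lam_old = np.inf
        for _ in range(n_iter):
            w = np.linalg.solve(K - shift * M, M @ v)
            v = w / np.linalg.norm(w)
            lam = (v @ K @ v) / (v @ M @ v)   # Rayleigh quotient
            if abs(lam - lam_old) < tol * max(1.0, abs(lam)):
                break
            lam_old = lam
        return lam, v
    ```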

  5. Multigrid iterative method with adaptive spatial support for computed tomography reconstruction from few-view data

    NASA Astrophysics Data System (ADS)

    Lee, Ping-Chang

    2014-03-01

    Computed tomography (CT) plays a key role in modern medicine, whether for diagnosis or therapy. As an increased risk of cancer development is associated with exposure to radiation, reducing radiation exposure in CT has become an essential issue. Based on compressive sensing (CS) theory, iterative methods with total variation (TV) minimization are proven to be a powerful framework for few-view tomographic image reconstruction. The multigrid method is an iterative method for solving both linear and nonlinear systems, especially when the system contains a huge number of components. In medical imaging, the image background is often defined by zero intensity, which provides a spatial support for the image that is helpful for iterative reconstruction. In the proposed method, the image support is not considered a priori knowledge; rather, it evolves during the reconstruction process. Based on the CS framework, we propose a multigrid method with an adaptive spatial support constraint. Simultaneous algebraic reconstruction (SART) with TV minimization is implemented for comparison. The numerical results show that: (1) the multigrid method performs better when fewer than 60 projection views are used; (2) spatial support greatly improves the CS reconstruction; and (3) when few projection views are measured, our method outperforms the SART+TV method with the spatial support constraint.

  6. Adaptive switching detection algorithm for iterative-MIMO systems to enable power savings

    NASA Astrophysics Data System (ADS)

    Tadza, N.; Laurenson, D.; Thompson, J. S.

    2014-11-01

    This paper attempts to tackle one of the challenges faced in soft-input soft-output multiple input multiple output (MIMO) detection systems, which is to achieve optimal error rate performance with minimal power consumption. This is realized by proposing a new algorithm design that comprises multiple thresholds within the detector that, in real time, specify the receiver behavior according to the current channel in both slow and fast fading conditions, making the detector adaptive. This adaptivity enables energy savings within the system, since the receiver chooses whether to accept or reject the transmission according to the success rate of the detection thresholds. The thresholds are calculated using the mutual information of the instantaneous channel conditions between the transmitting and receiving antennas of iterative-MIMO systems. In addition, the power-saving technique Dynamic Voltage and Frequency Scaling helps to reduce the circuit power demands of the adaptive algorithm. This adaptivity has the potential to save up to 30% of the total energy when implemented on Xilinx Virtex-5 simulation hardware. Results indicate the benefits of having this "intelligence" in the adaptive algorithm, due to the promising performance-complexity tradeoff in both software and hardware co-design simulation.

  7. Adaptive mesh refinement and multilevel iteration for multiphase, multicomponent flow in porous media

    SciTech Connect

    Hornung, R.D.

    1996-12-31

    An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.

  8. Detection of fiducial points in ECG waves using iteration based adaptive thresholds.

    PubMed

    Wonjune Kang; Kyunguen Byun; Hong-Goo Kang

    2015-08-01

    This paper presents an algorithm for the detection of fiducial points in electrocardiogram (ECG) waves using iteration based adaptive thresholds. By setting the search range of the processing frame to the interval between two consecutive R peaks, the peaks of T and P waves are used as reference salient points (RSPs) to detect the fiducial points. The RSPs are selected from candidates whose slope variation factors are larger than iteratively defined adaptive thresholds. Considering the fact that the number of RSPs varies depending on whether the ECG wave is normal or not, the proposed algorithm proceeds with a different methodology for determining fiducial points based on the number of detected RSPs. Testing was performed using twelve records from the MIT-BIH Arrhythmia Database that were manually marked for comparison with the estimated locations of the fiducial points. The means of absolute distances between the true locations and the points estimated by the algorithm are 12.2 ms and 7.9 ms for the starting points of P and Q waves, and 9.3 ms and 13.9 ms for the ending points of S and T waves. Since the computational complexity of the proposed algorithm is very low, it is feasible for use in mobile devices. PMID:26736854

  9. GSHSite: Exploiting an Iteratively Statistical Method to Identify S-Glutathionylation Sites with Substrate Specificity

    PubMed Central

    Chen, Yi-Ju; Lu, Cheng-Tsung; Huang, Kai-Yao; Wu, Hsin-Yi; Chen, Yu-Ju; Lee, Tzong-Yi

    2015-01-01

    S-glutathionylation, the covalent attachment of a glutathione (GSH) to the sulfur atom of cysteine, is a selective and reversible protein post-translational modification (PTM) that regulates protein activity, localization, and stability. Despite its implication in the regulation of protein functions and cell signaling, the substrate specificity of cysteine S-glutathionylation remains unknown. Based on a total of 1783 experimentally identified S-glutathionylation sites from mouse macrophages, this work presents an informatics investigation of S-glutathionylation sites, including structural factors such as the composition of flanking amino acids and the accessible surface area (ASA). TwoSampleLogo analysis shows that positively charged amino acids flanking the S-glutathionylated cysteine may influence the formation of S-glutathionylation in a closed three-dimensional environment. A statistical method is further applied to iteratively detect conserved substrate motifs with statistical significance. A support vector machine (SVM) is then applied to generate a predictive model incorporating the substrate motifs. According to five-fold cross-validation, the SVMs trained with substrate motifs achieve enhanced sensitivity, specificity, and accuracy, and provide promising performance on an independent test set. The effectiveness of the proposed method is demonstrated by the correct identification of previously reported S-glutathionylation sites of mouse thioredoxin (TXN) and human protein tyrosine phosphatase 1b (PTP1B). Finally, the constructed models are adopted to implement an effective web-based tool, named GSHSite (http://csb.cse.yzu.edu.tw/GSHSite/), for identifying uncharacterized GSH substrate sites on protein sequences. PMID:25849935

  10. Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method

    SciTech Connect

    Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.

    2014-07-15

    Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, the radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high-dose (500 mA) and low-dose (25 mA) dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low-dose SIR and high-dose FBP reconstructions. CT perfusion maps were compared among the low-dose FBP, low-dose SIR and high-dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For the in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937

  11. Adaptive iterative learning control for nonlinearly parameterised systems with unknown time-varying delays and input saturations

    NASA Astrophysics Data System (ADS)

    Zhang, Ruikun; Hou, Zhongsheng; Chi, Ronghu; Ji, Honghai

    2015-06-01

    In this work, an adaptive iterative learning control (AILC) scheme is proposed to address a class of nonlinearly parameterised systems with both unknown time-varying delays and input saturations. By incorporating a saturation function, a novel iterative learning control mechanism is constructed with a feedback term in the time domain and a fully saturated adaptive learning term in the iteration domain, which is used to estimate the unknown time-varying system uncertainty. A new time-weighted Lyapunov-Krasovskii-like composite energy function (LKL-CEF) is designed for the convergence analysis, in which time-weighted inputs, states, and estimates of the system uncertainty are all considered. Despite the existence of time-varying parametric uncertainties, time-varying delays, input saturations, and local Lipschitz nonlinearities, learning convergence is guaranteed with rigorous mathematical analysis. Simulation results further verify the correctness and effectiveness of the proposed method.

  12. Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation

    NASA Technical Reports Server (NTRS)

    He, Yuning; Lee, Herbert K. H.; Davies, Misty D.

    2012-01-01

    Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in the loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller is changing during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to obtain an uncertainty estimate for the difference between the behaviors of the model and the system under test.

  13. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.

    PubMed

    Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher

    2013-10-01

    This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example. PMID:24808590

  14. Array model interpolation and subband iterative adaptive filters applied to beamforming-based acoustic echo cancellation.

    PubMed

    Bai, Mingsian R; Chi, Li-Wen; Liang, Li-Huang; Lo, Yi-Yang

    2016-02-01

    In this paper, an evolutionary exposition is given of strategies for enhancing acoustic echo cancellers (AECs). A fixed beamformer (FBF) is utilized to focus on the near-end speaker while suppressing the echo from the far end. In reality, the array steering vector can differ considerably from the ideal free-field plane wave model; therefore, an experimental procedure is developed to interpolate a practical array model from the measured frequency responses. Subband (SB) filtering with polyphase implementation is exploited to accelerate the cancellation process. A generalized sidelobe canceller (GSC), composed of an FBF and an adaptive blocking module, is combined with the AEC to maximize cancellation performance. Another enhancement is an internal iteration (IIT) procedure that enables efficient convergence of the adaptive SB filters within a sample time. Objective tests in terms of echo return loss enhancement (ERLE), perceptual evaluation of speech quality (PESQ), word recognition rate for automatic speech recognition (ASR), and subjective listening tests are conducted to validate the proposed AEC approaches. The results show that the GSC-SB-AEC-IIT approach attains the highest ERLE without speech quality degradation, even in double-talk scenarios. PMID:26936567

  15. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problem as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with an error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm.
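
    A hedged toy version of the alternating two-stage idea is easy to sketch: an ART sweep plus a non-negativity projection for data fidelity, followed by a few steepest-descent steps on TV. A small random matrix stands in for the real CT system geometry, and the step sizes are fixed constants rather than the data-driven choices the paper proposes:

```python
import numpy as np

# Toy alternating two-stage reconstruction: ART + non-negativity (POCS)
# followed by steepest descent on total variation. A small random matrix
# stands in for the CT system matrix; step sizes are fixed here, whereas
# the paper derives them adaptively from the data.
rng = np.random.default_rng(2)
nx = 16                                   # image is nx x nx
x_true = np.zeros((nx, nx)); x_true[4:12, 4:12] = 1.0
A = rng.normal(size=(120, nx * nx))       # stand-in projection matrix
b = A @ x_true.ravel()

def tv_grad(img, eps=1e-8):
    # Gradient of the (smoothed) TV norm: -div(grad u / |grad u|).
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    div = (np.diff(gx / mag, axis=0, prepend=0) +
           np.diff(gy / mag, axis=1, prepend=0))
    return -div

x = np.zeros(nx * nx)
for _ in range(50):
    for i in range(A.shape[0]):           # ART pass (data fidelity)
        ai = A[i]
        x += (b[i] - ai @ x) / (ai @ ai) * ai
    x = np.maximum(x, 0)                  # non-negativity projection
    img = x.reshape(nx, nx)
    for _ in range(5):                    # TV steepest-descent steps
        img -= 0.02 * tv_grad(img)
    x = img.ravel()

print("reconstruction error:", np.linalg.norm(x - x_true.ravel()))
```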

  16. Adapting iterative algorithms for solving large sparse linear systems for efficient use on the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Kincaid, D. R.; Young, D. M.

    1984-01-01

    Adapting and designing mathematical software to achieve optimum performance on the CYBER 205 is discussed. Comments and observations are made in light of recent work done on modifying the ITPACK software package and on writing new software for vector supercomputers. The goal was to develop very efficient vector algorithms and software for solving large sparse linear systems using iterative methods.
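
    The ITPACK-era software itself is Fortran written for vector hardware, but the flavor of a vectorizable iterative kernel carries over directly. A minimal sketch, assuming a diagonally dominant sparse test matrix (not taken from the report), in which the inner Jacobi update is a single vectorized sparse matrix-vector product:

```python
import numpy as np
from scipy.sparse import diags

# Minimal Jacobi iteration for a sparse, diagonally dominant system,
# written so the inner update is one vectorized sparse matrix-vector
# product -- the style of kernel the CYBER 205 work optimized. The
# tridiagonal test matrix below is just a convenient stand-in problem.
n = 1000
A = diags([-1, 4, -1], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

d = A.diagonal()
x = np.zeros(n)
for k in range(1000):
    r = b - A @ x                 # vectorized residual
    x += r / d                    # Jacobi update: x_new = x + D^{-1} r
    if np.linalg.norm(r) < 1e-8 * np.linalg.norm(b):
        break

print(f"converged in {k + 1} iterations, residual {np.linalg.norm(r):.2e}")
```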

  17. Efficient pulse compression for LPI waveforms based on a nonparametric iterative adaptive approach

    NASA Astrophysics Data System (ADS)

Li, Zhengzheng; Nepal, Ramesh; Zhang, Yan; Blake, William

    2015-05-01

    In order to achieve a low probability of intercept (LPI), radar waveforms are usually long and randomly generated. Due to this randomized nature, the matched-filter responses (autocorrelations) of those waveforms can have high sidelobes, which can mask weaker targets near a strong target and limit the radar's ability to distinguish close-by targets. To improve resolution and reduce sidelobe contamination, a waveform-independent pulse compression filter is desired. Furthermore, the pulse compression filter needs to be able to adapt to the received signal to achieve optimized performance. As many existing pulse compression techniques require intensive computation, real-time implementation is infeasible. This paper introduces a new adaptive pulse compression technique for LPI waveforms that is based on a nonparametric iterative adaptive approach (IAA). Due to its nonparametric nature, no parameter tuning is required for different waveforms. IAA can achieve super-resolution and sidelobe suppression in both the range and Doppler domains. It can also be extended to directly handle the matched filter (MF) output (called MF-IAA), which further reduces the computational load. The practical impact of LPI waveform operations on IAA and MF-IAA has not been carefully studied in previous work. Herein, typical LPI waveforms such as random phase codes, as well as non-LPI waveforms, are tested with both single-pulse and multi-pulse IAA processing. A realistic airborne radar simulator as well as actual measured radar data are used for the validation. It is shown that, despite noticeable differences among the test waveforms, the IAA algorithms and their extensions effectively achieve range-Doppler super-resolution on realistic data.
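
    A minimal single-pulse IAA range-profile sketch follows. The random-phase code, target scene, and problem sizes are synthetic; the Doppler processing and the MF-IAA variant described above are omitted:

```python
import numpy as np

# Minimal single-pulse IAA range-profile estimator for a random-phase
# LPI code: iteratively re-estimate per-bin amplitudes against the
# covariance implied by the current power estimates. All signals and
# dimensions here are synthetic placeholders.
rng = np.random.default_rng(3)
n, n_range = 64, 96                       # code length, range bins
code = np.exp(1j * 2 * np.pi * rng.random(n))

# Steering dictionary: column k is the code delayed by k samples.
S = np.zeros((n + n_range, n_range), dtype=complex)
for k in range(n_range):
    S[k:k + n, k] = code

x_true = np.zeros(n_range, dtype=complex)
x_true[[20, 24]] = [1.0, 0.05]            # strong target masking a weak one
noise = rng.normal(size=n + n_range) + 1j * rng.normal(size=n + n_range)
y = S @ x_true + 0.01 * noise

x = (S.conj().T @ y) / np.sum(np.abs(S)**2, axis=0)   # matched-filter init
for _ in range(10):
    # R = S diag(|x|^2) S^H, lightly loaded for numerical stability.
    R = (S * np.abs(x)**2) @ S.conj().T + 1e-6 * np.eye(len(y))
    Ri_S = np.linalg.solve(R, S)
    # x_k = s_k^H R^{-1} y / (s_k^H R^{-1} s_k) for every range bin k.
    x = (Ri_S.conj().T @ y) / np.einsum('ij,ij->j', S.conj(), Ri_S)

print("two largest bins:", np.argsort(np.abs(x))[-2:])
```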

  18. Statistical-uncertainty-based adaptive filtering of lidar signals

    SciTech Connect

    Fuehrer, P. L.; Friehe, C. A.; Hristov, T. S.; Cooper, D. I.; Eichinger, W. E.

    2000-02-10

    An adaptive filter signal processing technique is developed to overcome the problem of Raman lidar water-vapor mixing ratio (the ratio of the water-vapor density to the dry-air density) with a highly variable statistical uncertainty that increases with decreasing photomultiplier-tube signal strength and masks the true desired water-vapor structure. The technique, applied to horizontal scans, assumes only statistical horizontal homogeneity. The result is a variable spatial resolution water-vapor signal with a constant variance out to a range limit set by a specified signal-to-noise ratio. The technique was applied to Raman water-vapor lidar data obtained at a coastal pier site together with in situ instruments located 320 m from the lidar. The micrometeorological humidity data were used to calibrate the ratio of the lidar gains of the H{sub 2}O and the N{sub 2} photomultiplier tubes and set the water-vapor mixing ratio variance for the adaptive filter. For the coastal experiment the effective limit of the lidar range was found to be approximately 200 m for a maximum noise-to-signal variance ratio of 0.1 with the implemented data-reduction procedure. The technique can be adapted to off-horizontal scans with a small reduction in the constraints and is also applicable to other remote-sensing devices that exhibit the same inherent range-dependent signal-to-noise ratio problem. (c) 2000 Optical Society of America.

  19. Statistical-uncertainty-based adaptive filtering of lidar signals.

    PubMed

    Fuehrer, P L; Friehe, C A; Hristov, T S; Cooper, D I; Eichinger, W E

    2000-02-10

    An adaptive filter signal processing technique is developed to overcome the problem of Raman lidar water-vapor mixing ratio (the ratio of the water-vapor density to the dry-air density) with a highly variable statistical uncertainty that increases with decreasing photomultiplier-tube signal strength and masks the true desired water-vapor structure. The technique, applied to horizontal scans, assumes only statistical horizontal homogeneity. The result is a variable spatial resolution water-vapor signal with a constant variance out to a range limit set by a specified signal-to-noise ratio. The technique was applied to Raman water-vapor lidar data obtained at a coastal pier site together with in situ instruments located 320 m from the lidar. The micrometeorological humidity data were used to calibrate the ratio of the lidar gains of the H(2)O and the N(2) photomultiplier tubes and set the water-vapor mixing ratio variance for the adaptive filter. For the coastal experiment the effective limit of the lidar range was found to be approximately 200 m for a maximum noise-to-signal variance ratio of 0.1 with the implemented data-reduction procedure. The technique can be adapted to off-horizontal scans with a small reduction in the constraints and is also applicable to other remote-sensing devices that exhibit the same inherent range-dependent signal-to-noise ratio problem.
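
    The core of the adaptive filter described in these two records lends itself to a compact sketch: widen the averaging window around each range bin until the estimated variance of the windowed mean meets a target, trading spatial resolution for constant variance. The signal model and all numbers below are synthetic stand-ins, not the paper's lidar data:

```python
import numpy as np

# Toy statistical-uncertainty-based adaptive filtering: grow the
# averaging window around each range bin until the estimated variance
# of the windowed mean falls below a target, yielding constant variance
# at the cost of range-dependent resolution. Synthetic placeholders
# throughout.
rng = np.random.default_rng(4)
n = 500
r = np.arange(n)
truth = 8.0 + np.sin(r / 40.0)                  # "mixing ratio" structure
noise_sd = 0.05 * (1 + r / 60.0)                # uncertainty grows w/ range
signal = truth + noise_sd * rng.normal(size=n)

target_var = 0.02**2
filtered = np.full(n, np.nan)
for i in range(n):
    for half in range(1, 60):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        window = signal[lo:hi]
        # Variance of the window mean, estimated from the data itself.
        if window.var(ddof=1) / len(window) <= target_var:
            filtered[i] = window.mean()
            break
    # Bins where no window meets the target stay NaN: past the range limit.

print("usable range bins:", int(np.sum(~np.isnan(filtered))))
```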

  20. A Self-Adaptive Missile Guidance System for Statistical Inputs

    NASA Technical Reports Server (NTRS)

    Peery, H. Rodney

    1960-01-01

    A method of designing a self-adaptive missile guidance system is presented. The system inputs are assumed to be known in a statistical sense only. Newton's modified Wiener theory is utilized in the design of the system and to establish the performance criterion. The missile is assumed to be a beam rider, to have a g limiter, and to operate over a flight envelope where the open-loop gain varies by a factor of 20. It is shown that the percent of time that missile acceleration limiting occurs can be used effectively to adjust the coefficients of the Wiener filter. The result is a guidance system which adapts itself to a changing environment and gives essentially optimum filtering and minimum miss distance.

  1. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance

    PubMed Central

    Li, Ke; Tang, Jie; Chen, Guang-Hong

    2014-01-01

    Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Based on the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to be different from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; a lower dose led to a “redder” NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ (dose)−β with the exponent β ≈ 0.25, which violated the classical σ ∝ (dose)−0.5 power law in FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial uniformity when compared

  2. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance

    SciTech Connect

    Li, Ke; Tang, Jie; Chen, Guang-Hong

    2014-04-15

    Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Based on the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to be different from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo{sup ®}, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; a lower dose led to a “redder” NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ (dose){sup −β} with the exponent β ≈ 0.25, which violated the classical σ ∝ (dose){sup −0.5} power law in FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial
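
    The ensemble-based noise analysis in these two records can be sketched generically: estimate a 2D NPS from repeated scans of the same object, then fit the power law σ ∝ (dose)^−β on a log-log scale. Synthetic Gaussian noise whose variance scales as 1/dose stands in for the CT data, so the fit recovers the FBP-like β = 0.5 rather than the β ≈ 0.25 reported for MBIR:

```python
import numpy as np

# Ensemble noise analysis in the spirit of these records: estimate the
# 2-D noise power spectrum (NPS) from repeated scans and fit the power
# law sigma ~ dose^(-beta) across dose levels. White Gaussian noise with
# variance scaling as 1/dose stands in for real CT data, so the fit
# recovers beta = 0.5 (FBP-like), not MBIR's reported ~0.25.
rng = np.random.default_rng(5)
n_scans, nx = 50, 64
doses = np.array([5, 12.5, 25, 50, 100, 200, 300])

sigmas = []
for dose in doses:
    scans = rng.normal(scale=1.0 / np.sqrt(dose), size=(n_scans, nx, nx))
    noise = scans - scans.mean(axis=0)        # remove the ensemble mean
    # 2-D NPS estimate; its shape (flat here) is what the records examine
    # for the dose-dependent "reddening" under MBIR.
    nps2d = np.mean(np.abs(np.fft.fft2(noise))**2, axis=0) / (nx * nx)
    sigmas.append(noise.std())

# Log-log regression gives the exponent beta in sigma ~ dose^(-beta).
beta = -np.polyfit(np.log(doses), np.log(sigmas), 1)[0]
print(f"fitted beta = {beta:.3f} (expected 0.5 for this white-noise model)")
```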

  3. Competition and time-dependent behavior in spatial iterated prisoner’s dilemma incorporating adaptive zero-determinant strategies

    NASA Astrophysics Data System (ADS)

    Li, Yong; Xu, Chen; Liu, Jie; Hui, Pak Ming

    2016-10-01

    We propose and study the competitiveness of a class of adaptive zero-determinant strategies (ZDSs) in a population with spatial structure against four classic strategies in iterated prisoner’s dilemma. Besides strategy updating via a probabilistic mechanism by imitating the strategy of a better performing opponent, players using the ZDSs can also adapt their strategies to take advantage of their local competing environment with another probability. The adapted ZDSs could be extortionate-like to avoid being continually cheated by defectors or to take advantage of unconditional cooperators. The adapted ZDSs could also be a compliance strategy so as to cooperate with the conditionally cooperative players. This flexibility makes adaptive ZDSs more competitive than nonadaptive ZDSs. Results show that adaptive ZDSs can either dominate over other strategies or at least coexist with them when the ZDSs are allowed to adapt more readily than to imitate other strategies. The effectiveness of the adaptive ZDSs relies on how fast they can adapt to the competing environment before they are replaced by other strategies. The adaptive ZDSs generally work well as they could adapt gradually and make use of other strategies for suppressing their enemies. When adaptation happens more readily than imitation for the ZDSs, they outperform other strategies over a wide range of cost-to-benefit ratios.

  4. Iterative development and the scope for plasticity: contrasts among trait categories in an adaptive radiation

    PubMed Central

    Foster, S A; Wund, M A; Graham, M A; Earley, R L; Gardiner, R; Kearns, T; Baker, J A

    2015-01-01

    Phenotypic plasticity can influence evolutionary change in a lineage, ranging from facilitation of population persistence in a novel environment to directing the patterns of evolutionary change. As the specific nature of plasticity can impact evolutionary consequences, it is essential to consider how plasticity is manifested if we are to understand the contribution of plasticity to phenotypic evolution. Most morphological traits are developmentally plastic, irreversible, and generally considered to be costly, at least when the resultant phenotype is mis-matched to the environment. At the other extreme, behavioral phenotypes are typically activational (modifiable on very short time scales), and not immediately costly as they are produced by constitutive neural networks. Although patterns of morphological and behavioral plasticity are often compared, patterns of plasticity of life history phenotypes are rarely considered. Here we review patterns of plasticity in these trait categories within and among populations, comprising the adaptive radiation of the threespine stickleback fish Gasterosteus aculeatus. We immediately found it necessary to consider the possibility of iterated development, the concept that behavioral and life history trajectories can be repeatedly reset on activational (usually behavior) or developmental (usually life history) time frames, offering fine tuning of the response to environmental context. Morphology in stickleback is primarily reset only in that developmental trajectories can be altered as environments change over the course of development. As anticipated, the boundaries between the trait categories are not clear and are likely to be linked by shared, underlying physiological and genetic systems. PMID:26243135

  5. Pilot Study on Image Quality and Radiation Dose of CT Colonography with Adaptive Iterative Dose Reduction Three-Dimensional

    PubMed Central

    Shen, Hesong; Liang, Dan; Luo, Mingyue; Duan, Chaijie; Cai, Wenli; Zhu, Shanshan; Qiu, Jianping; Li, Wenru

    2015-01-01

    Objective To investigate image quality and radiation dose of CT colonography (CTC) with adaptive iterative dose reduction three-dimensional (AIDR3D). Methods Ten segments of porcine colon phantom were collected, and 30 pedunculate polyps with diameters ranging from 1 to 15 mm were simulated on each segment. Image data were acquired with a tube voltage of 120 kVp and tube current-time products of 10, 20, 30, 40, and 50 mAs. CTC images were reconstructed using filtered back projection (FBP) and AIDR3D. Two radiologists blindly evaluated image quality. Quantitative evaluation of image quality included image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Qualitative image quality was evaluated with a five-score scale. Radiation dose was calculated based on the dose-length product. Ten volunteers were examined supine at 50 mAs with FBP and prone at 20 mAs with AIDR3D, and image quality was assessed. A paired t test was performed for statistical analysis. Results For 20 mAs with AIDR3D and 50 mAs with FBP, image noise, SNRs, and CNRs were (16.4 ± 1.6) HU vs. (16.8 ± 2.6) HU, 1.9 ± 0.2 vs. 1.9 ± 0.4, and 62.3 ± 6.8 vs. 62.0 ± 6.2, respectively; qualitative image quality scores were 4.1 and 4.3, respectively; none of these differences was statistically significant. Compared with 50 mAs with FBP, the radiation dose of 20 mAs with AIDR3D (1.62 mSv) was decreased by 60.0%. There was no statistically significant difference in image noise, SNRs, CNRs, or qualitative image quality scores between prone 20 mAs with AIDR3D and supine 50 mAs with FBP in the 10 volunteers; the former reduced radiation dose by 61.1%. Conclusion Image quality of CTC using 20 mAs with AIDR3D could be comparable to standard 50 mAs with FBP, with the radiation dose reduced by about 60.0%, to only 1.62 mSv. PMID:25635839

  6. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.

  7. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

Rolland, Joran; Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good from bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories, for which no theoretical predictions exist. The most important aspect of this work concerns systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when non-optimal reaction coordinates are used. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
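
    A minimal AMS sketch, assuming an Ornstein-Uhlenbeck process with the running maximum as reaction coordinate (the paper's models and parameters are not reproduced): kill the worst trajectory at each level, branch a survivor from its crossing point, and multiply the estimator by (1 − 1/N) per level:

```python
import numpy as np

# Minimal adaptive multilevel splitting: estimate the small probability
# that an Ornstein-Uhlenbeck path started at 0 exceeds a level L before
# time T, using the running maximum as reaction coordinate. Dynamics,
# level, and particle count are illustrative choices.
rng = np.random.default_rng(6)
n_particles, n_steps, dt, L = 100, 400, 0.01, 2.0

def simulate(x0, steps):
    x = np.empty(steps + 1); x[0] = x0
    for t in range(steps):
        x[t + 1] = x[t] - x[t] * dt + np.sqrt(2 * dt) * rng.normal()
    return x

paths = [simulate(0.0, n_steps) for _ in range(n_particles)]
scores = np.array([p.max() for p in paths])
prob, n_killed = 1.0, 0
while scores.min() < L:
    worst = scores.argmin()
    level = scores[worst]
    # Branch a surviving particle from where it first reaches 'level'.
    donor = rng.choice([i for i in range(n_particles) if i != worst])
    cross = int(np.argmax(paths[donor] >= level))
    new = np.concatenate([paths[donor][:cross + 1],
                          simulate(paths[donor][cross], n_steps - cross)[1:]])
    paths[worst], scores[worst] = new, new.max()
    prob *= 1.0 - 1.0 / n_particles       # one particle killed per level
    n_killed += 1
    if n_killed > 5000:                   # safety cap for the sketch
        break

print(f"estimated P(max > {L}) ~ {prob:.2e} after {n_killed} levels")
```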

  8. Polychromatic Iterative Statistical Material Image Reconstruction for Photon-Counting Computed Tomography

    PubMed Central

    Weidinger, Thomas; Buzug, Thorsten M.; Flohr, Thomas; Kappler, Steffen; Stierstorfer, Karl

    2016-01-01

    This work proposes a dedicated statistical algorithm to perform a direct reconstruction of material-decomposed images from data acquired with photon-counting detectors (PCDs) in computed tomography. It is based on local approximations (surrogates) of the negative logarithmic Poisson probability function. Exploiting the convexity of this function allows for parallel updates of all image pixels. Parallel updates can compensate for the rather slow convergence that is intrinsic to statistical algorithms. We investigate the accuracy of the algorithm for ideal photon-counting detectors. Complementarily, we apply the algorithm to simulation data of a realistic PCD with its spectral resolution limited by K-escape, charge sharing, and pulse-pileup. For data from both an ideal and a realistic PCD, the proposed algorithm is able to correct beam-hardening artifacts and quantitatively determine the material fractions of the chosen basis materials. Via regularization we were able to achieve image noise for the realistic PCD that is up to 90% lower than in material images from a linear, image-based material decomposition using FBP images. Additionally, we find a dependence of the algorithm's convergence speed on the threshold selection within the PCD. PMID:27195003

  9. Polychromatic Iterative Statistical Material Image Reconstruction for Photon-Counting Computed Tomography.

    PubMed

    Weidinger, Thomas; Buzug, Thorsten M; Flohr, Thomas; Kappler, Steffen; Stierstorfer, Karl

    2016-01-01

    This work proposes a dedicated statistical algorithm to perform a direct reconstruction of material-decomposed images from data acquired with photon-counting detectors (PCDs) in computed tomography. It is based on local approximations (surrogates) of the negative logarithmic Poisson probability function. Exploiting the convexity of this function allows for parallel updates of all image pixels. Parallel updates can compensate for the rather slow convergence that is intrinsic to statistical algorithms. We investigate the accuracy of the algorithm for ideal photon-counting detectors. Complementarily, we apply the algorithm to simulation data of a realistic PCD with its spectral resolution limited by K-escape, charge sharing, and pulse-pileup. For data from both an ideal and a realistic PCD, the proposed algorithm is able to correct beam-hardening artifacts and quantitatively determine the material fractions of the chosen basis materials. Via regularization we were able to achieve image noise for the realistic PCD that is up to 90% lower than in material images from a linear, image-based material decomposition using FBP images. Additionally, we find a dependence of the algorithm's convergence speed on the threshold selection within the PCD. PMID:27195003

  10. Non-iterative adaptive time-stepping scheme with temporal truncation error control for simulating variable-density flow

    NASA Astrophysics Data System (ADS)

    Hirthe, Eugenia M.; Graf, Thomas

    2012-12-01

    The automatic non-iterative second-order time-stepping scheme based on the temporal truncation error proposed by Kavetski et al. [Kavetski D, Binning P, Sloan SW. Non-iterative time-stepping schemes with adaptive truncation error control for the solution of Richards equation. Water Resour Res 2002;38(10):1211, http://dx.doi.org/10.1029/2001WR000720.] is implemented in the HydroGeoSphere model code. This time-stepping scheme is applied for the first time to the low-Rayleigh-number thermal Elder problem of free convection in porous media [van Reeuwijk M, Mathias SA, Simmons CT, Ward JD. Insights from a pseudospectral approach to the Elder problem. Water Resour Res 2009;45:W04416, http://dx.doi.org/10.1029/2008WR007421.], and to the solutal problem of free convection in fractured-porous media [Shikaze SG, Sudicky EA, Schwartz FW. Density-dependent solute transport in discretely-fractured geological media: is prediction possible? J Contam Hydrol 1998;34:273-91]. Numerical simulations demonstrate that the proposed scheme efficiently limits the temporal truncation error to a user-defined tolerance by controlling the time-step size. The non-iterative second-order time-stepping scheme can be applied to (i) thermal and solutal variable-density flow problems, (ii) linear and non-linear density functions, and (iii) problems involving porous and fractured-porous media.
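
    The truncation-error controller generalizes beyond flow solvers. A hedged sketch on a scalar test ODE (dy/dt = −y, a stand-in for the variable-density flow equations, not the paper's solver): take a second-order Heun step, estimate the local error from the gap to the first-order Euler predictor, and choose the next step size from the tolerance without any iteration:

```python
import numpy as np

# Generic non-iterative adaptive time stepping with truncation error
# control: advance with a second-order (Heun) step, estimate the local
# truncation error from the difference to the first-order (Euler)
# predictor, and pick the next dt so the error tracks a user tolerance.
def f(t, y):
    return -y                                 # simple test ODE

tol, safety = 1e-5, 0.9
t, y, dt, t_end = 0.0, 1.0, 0.1, 5.0
steps = 0
while t < t_end:
    dt = min(dt, t_end - t)
    k1 = f(t, y)
    euler = y + dt * k1                            # first-order predictor
    heun = y + 0.5 * dt * (k1 + f(t + dt, euler))  # second-order step
    err = abs(heun - euler)                        # ~ local truncation error
    if err <= tol:                                 # accept; no iteration
        t, y = t + dt, heun
        steps += 1
    # Next step size: second-order controller, dt ~ (tol/err)^(1/2).
    dt *= safety * np.sqrt(tol / max(err, 1e-14))

print(f"finished in {steps} accepted steps, y({t_end:.1f}) = {y:.6f}, "
      f"exact = {np.exp(-t_end):.6f}")
```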

  11. Iterative adaptive radiations of fossil canids show no evidence for diversity-dependent trait evolution

    NASA Astrophysics Data System (ADS)

    Slater, Graham J.

    2015-04-01

    A long-standing hypothesis in adaptive radiation theory is that ecological opportunity constrains rates of phenotypic evolution, generating a burst of morphological disparity early in clade history. Empirical support for the early burst model is rare in comparative data, however. One possible reason for this lack of support is that most phylogenetic tests have focused on extant clades, neglecting information from fossil taxa. Here, I test for the expected signature of adaptive radiation using the outstanding 40-My fossil record of North American canids. Models implying time- and diversity-dependent rates of morphological evolution are strongly rejected for two ecologically important traits, body size and grinding area of the molar teeth. Instead, Ornstein-Uhlenbeck processes implying repeated, and sometimes rapid, attraction to distinct dietary adaptive peaks receive substantial support. Diversity-dependent rates of morphological evolution seem uncommon in clades, such as canids, that exhibit a pattern of replicated adaptive radiation. Instead, these clades might best be thought of as deterministic radiations in constrained Simpsonian subzones of a major adaptive zone. Support for adaptive peak models may be diagnostic of subzonal radiations. It remains to be seen whether early burst or ecological opportunity models can explain broader adaptive radiations, such as the evolution of higher taxa.

  12. Iterative adaptive radiations of fossil canids show no evidence for diversity-dependent trait evolution.

    PubMed

    Slater, Graham J

    2015-04-21

    A long-standing hypothesis in adaptive radiation theory is that ecological opportunity constrains rates of phenotypic evolution, generating a burst of morphological disparity early in clade history. Empirical support for the early burst model is rare in comparative data, however. One possible reason for this lack of support is that most phylogenetic tests have focused on extant clades, neglecting information from fossil taxa. Here, I test for the expected signature of adaptive radiation using the outstanding 40-My fossil record of North American canids. Models implying time- and diversity-dependent rates of morphological evolution are strongly rejected for two ecologically important traits, body size and grinding area of the molar teeth. Instead, Ornstein-Uhlenbeck processes implying repeated, and sometimes rapid, attraction to distinct dietary adaptive peaks receive substantial support. Diversity-dependent rates of morphological evolution seem uncommon in clades, such as canids, that exhibit a pattern of replicated adaptive radiation. Instead, these clades might best be thought of as deterministic radiations in constrained Simpsonian subzones of a major adaptive zone. Support for adaptive peak models may be diagnostic of subzonal radiations. It remains to be seen whether early burst or ecological opportunity models can explain broader adaptive radiations, such as the evolution of higher taxa.

  13. Iterative adaptive radiations of fossil canids show no evidence for diversity-dependent trait evolution

    PubMed Central

    Slater, Graham J.

    2015-01-01

    A long-standing hypothesis in adaptive radiation theory is that ecological opportunity constrains rates of phenotypic evolution, generating a burst of morphological disparity early in clade history. Empirical support for the early burst model is rare in comparative data, however. One possible reason for this lack of support is that most phylogenetic tests have focused on extant clades, neglecting information from fossil taxa. Here, I test for the expected signature of adaptive radiation using the outstanding 40-My fossil record of North American canids. Models implying time- and diversity-dependent rates of morphological evolution are strongly rejected for two ecologically important traits, body size and grinding area of the molar teeth. Instead, Ornstein–Uhlenbeck processes implying repeated, and sometimes rapid, attraction to distinct dietary adaptive peaks receive substantial support. Diversity-dependent rates of morphological evolution seem uncommon in clades, such as canids, that exhibit a pattern of replicated adaptive radiation. Instead, these clades might best be thought of as deterministic radiations in constrained Simpsonian subzones of a major adaptive zone. Support for adaptive peak models may be diagnostic of subzonal radiations. It remains to be seen whether early burst or ecological opportunity models can explain broader adaptive radiations, such as the evolution of higher taxa. PMID:25901311

  14. Rapid cytometric antibiotic susceptibility testing utilizing adaptive multidimensional statistical metrics.

    PubMed

    Huang, Tzu-Hsueh; Ning, Xinghai; Wang, Xiaojian; Murthy, Niren; Tzeng, Yih-Ling; Dickson, Robert M

    2015-02-01

    Flow cytometry holds promise to accelerate antibiotic susceptibility determinations; however, without robust multidimensional statistical analysis, general discrimination criteria have remained elusive. In this study, a new statistical method, probability binning signature quadratic form (PB-sQF), was developed and applied to analyze flow cytometric data of bacterial responses to antibiotic exposure. Both sensitive lab strains (Escherichia coli and Pseudomonas aeruginosa) and a multidrug resistant, clinically isolated strain (E. coli) were incubated with the bacteria-targeted dye, maltohexaose-conjugated IR786, and each of many bactericidal or bacteriostatic antibiotics to identify changes induced around the corresponding minimum inhibitory concentrations (MICs). The antibiotic-induced damage was monitored by flow cytometry after 1-h incubation through forward scatter, side scatter, and fluorescence channels. The 3-dimensional differences between the flow cytometric data of the antibiotic-free bacteria and the antibiotic-treated bacteria were characterized by PB-sQF into a 1-dimensional linear distance. A 99% confidence level was established by statistical bootstrapping for each antibiotic-bacteria pair. For the susceptible E. coli strain, statistically significant increments from this 99% confidence level were observed from 1/16x MIC to 1x MIC for all the antibiotics. The same increments were recorded for P. aeruginosa, which has been reported to cause difficulty in flow-based viability tests. For the multidrug resistant E. coli, significant distances from control samples were observed only when an effective antibiotic treatment was utilized. Our results suggest that a rapid and robust antimicrobial susceptibility test (AST) can be constructed by statistically characterizing the differences between sample and control flow cytometric populations, even in a label-free scheme with scattered light alone. These distances vs paired controls coupled with rigorous
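
    The full PB-sQF statistic is multidimensional, but the probability-binning idea reduces to a compact one-dimensional toy: choose bin edges so the control sample fills all bins equally, score a test sample with a normalized chi-square-like distance between bin fractions, and bootstrap a 99% threshold from control-vs-control comparisons. All samples below are synthetic, and the quadratic-form weighting of the real method is simplified away:

```python
import numpy as np

# 1-D toy of probability binning: equal-probability bin edges from the
# control sample, then a normalized chi-square-like distance between
# control and test bin fractions, with a bootstrapped 99% threshold.
# Synthetic samples throughout; the real PB-sQF is multidimensional.
rng = np.random.default_rng(7)

def pb_distance(control, test, n_bins=20):
    edges = np.quantile(control, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch all outliers
    f_c = np.histogram(control, edges)[0] / len(control)
    f_t = np.histogram(test, edges)[0] / len(test)
    return np.sum((f_c - f_t)**2 / (f_c + f_t + 1e-12))

control = rng.normal(0.0, 1.0, 5000)           # untreated population
treated = rng.normal(0.4, 1.3, 5000)           # antibiotic-shifted
same = rng.normal(0.0, 1.0, 5000)              # replicate control

# Bootstrap control-vs-control distances to set a 99% confidence level.
null = [pb_distance(control, rng.choice(control, 5000)) for _ in range(200)]
threshold = np.quantile(null, 0.99)
print(f"treated: {pb_distance(control, treated):.4f}, "
      f"replicate: {pb_distance(control, same):.4f}, "
      f"99% threshold: {threshold:.4f}")
```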

  15. Diversity of immune strategies explained by adaptation to pathogen statistics

    PubMed Central

    Mayer, Andreas; Mora, Thierry; Rivoire, Olivier; Walczak, Aleksandra M.

    2016-01-01

    Biological organisms have evolved a wide range of immune mechanisms to defend themselves against pathogens. Beyond molecular details, these mechanisms differ in how protection is acquired, processed, and passed on to subsequent generations—differences that may be essential to long-term survival. Here, we introduce a mathematical framework to compare the long-term adaptation of populations as a function of the pathogen dynamics that they experience and of the immune strategy that they adopt. We find that the two key determinants of an optimal immune strategy are the frequency and the characteristic timescale of the pathogens. Depending on these two parameters, our framework identifies distinct modes of immunity, including adaptive, innate, bet-hedging, and CRISPR-like immunities, which recapitulate the diversity of natural immune systems. PMID:27432970

  16. TH-C-18A-01: Is Automatic Tube Current Modulation Still Necessary with Statistical Iterative Reconstruction?

    SciTech Connect

    Li, K; Zhao, W; Gomez-Cardona, D; Chen, G

    2014-06-15

    Purpose: Automatic tube current modulation (TCM) has been widely used in modern multi-detector CT to reduce noise spatial nonuniformity and streaks to improve dose efficiency. With the advent of statistical iterative reconstruction (SIR), it is expected that the importance of TCM may diminish, since SIR incorporates statistical weighting factors to reduce the negative influence of photon-starved rays. The purpose of this work is to address the following questions: Does SIR offer the same benefits as TCM? If yes, are there still any clinical benefits to using TCM? Methods: An anthropomorphic CIRS chest phantom was scanned using a state-of-the-art clinical CT system equipped with an SIR engine (Veo™, GE Healthcare). The phantom was first scanned with TCM using a routine protocol and a low-dose (LD) protocol. It was then scanned without TCM using the same protocols. For each acquisition, both FBP and Veo reconstructions were performed. All scans were repeated 50 times to generate an image ensemble from which noise spatial nonuniformity (NSN) and streak artifact levels were quantified. Monte-Carlo experiments were performed to estimate skin dose. Results: For FBP, noise streaks were reduced by 4% using TCM for both routine and LD scans. NSN values were actually slightly higher with TCM (0.25) than without TCM (0.24) for both routine and LD scans. In contrast, for Veo, noise streaks became negligible (<1%) with or without TCM for both routine and LD scans, and the NSN was reduced to 0.10 (low dose) or 0.08 (routine). The overall skin dose was 2% lower at the shoulders and more uniformly distributed across the skin without TCM. Conclusion: SIR without TCM offers superior reduction in noise nonuniformity and streaks relative to FBP with TCM. For some clinical applications in which skin dose may be a concern, SIR without TCM may be a better option. K. Li, W. Zhao, D. Gomez-Cardona: Nothing to disclose; G.-H. Chen: Research funded, General Electric Company.

  17. Specificity and timescales of cortical adaptation as inferences about natural movie statistics

    PubMed Central

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-01-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation. PMID:27699416

  18. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.

  19. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

Muckley, Matthew J.; Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484
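
    The B1-based majorizing matrices that give BARISTA its speed cannot be reconstructed from these abstracts, but the momentum-plus-adaptive-restart mechanics they accelerate can be shown generically. A sketch of FISTA with function-value adaptive restart on a synthetic LASSO problem (sizes, noise level, and regularization weight are arbitrary choices):

```python
import numpy as np

# Generic FISTA with adaptive (function-value) restart on a synthetic
# LASSO problem: momentum accelerates the proximal gradient steps, and
# the momentum is reset whenever the objective increases. This only
# illustrates the acceleration mechanics the record builds on.
rng = np.random.default_rng(8)
m, n, lam = 80, 200, 0.1
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, 10, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=m)

L = np.linalg.norm(A, 2)**2                 # Lipschitz constant of the grad
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0)
obj = lambda x: 0.5 * np.sum((A @ x - b)**2) + lam * np.sum(np.abs(x))

x = z = np.zeros(n)
theta, prev = 1.0, np.inf
for k in range(500):
    x_new = soft(z - (A.T @ (A @ z - b)) / L, lam / L)   # prox-gradient step
    theta_new = 0.5 * (1 + np.sqrt(1 + 4 * theta**2))
    z = x_new + (theta - 1) / theta_new * (x_new - x)    # momentum
    if obj(x_new) > prev:            # adaptive restart: kill the momentum
        theta_new, z = 1.0, x_new
    x, theta, prev = x_new, theta_new, obj(x_new)

print(f"final objective {obj(x):.4f}, support errors: "
      f"{np.sum((np.abs(x) > 0.1) != (x_true > 0))}")
```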

  20. Statistical model based iterative reconstruction (MBIR) in clinical CT systems. Part II. Experimental assessment of spatial resolution performance

    SciTech Connect

    Li, Ke; Chen, Guang-Hong; Garrett, John; Ge, Yongshuai

    2014-07-15

    Purpose: Statistical model based iterative reconstruction (MBIR) methods have been introduced to clinical CT systems and are being used in some clinical diagnostic applications. The purpose of this paper is to experimentally assess the unique spatial resolution characteristics of this nonlinear reconstruction method and identify its potential impact on the detectabilities and the associated radiation dose levels for specific imaging tasks. Methods: The thoracic section of a pediatric phantom was repeatedly scanned 50 or 100 times using a 64-slice clinical CT scanner at four different dose levels [CTDI{sub vol} =4, 8, 12, 16 (mGy)]. Both filtered backprojection (FBP) and MBIR (Veo{sup ®}, GE Healthcare, Waukesha, WI) were used for image reconstruction and results were compared with one another. Eight test objects in the phantom with contrast levels ranging from 13 to 1710 HU were used to assess spatial resolution. The axial spatial resolution was quantified with the point spread function (PSF), while the z resolution was quantified with the slice sensitivity profile. Both were measured locally on the test objects and in the image domain. The dependence of spatial resolution on contrast and dose levels was studied. The study also features a systematic investigation of the potential trade-off between spatial resolution and locally defined noise and their joint impact on the overall image quality, which was quantified by the image domain-based channelized Hotelling observer (CHO) detectability index d′. Results: (1) The axial spatial resolution of MBIR depends on both radiation dose level and image contrast level, whereas it is supposedly independent of these two factors in FBP. The axial spatial resolution of MBIR always improved with an increasing radiation dose level and/or contrast level. (2) The axial spatial resolution of MBIR became equivalent to that of FBP at some transitional contrast level, above which MBIR demonstrated superior spatial resolution than

  1. The brain uses adaptive internal models of scene statistics for sensorimotor estimation and planning.

    PubMed

    Kwon, Oh-Sang; Knill, David C

    2013-03-12

    Because of uncertainty and noise, the brain should use accurate internal models of the statistics of objects in scenes to interpret sensory signals. Moreover, the brain should adapt its internal models to the statistics within local stimulus contexts. Consider the problem of hitting a baseball. The impoverished nature of the visual information available makes it imperative that batters use knowledge of the temporal statistics and history of previous pitches to accurately estimate pitch speed. Using a laboratory analog of hitting a baseball, we tested the hypothesis that the brain uses adaptive internal models of the statistics of object speeds to plan hand movements to intercept moving objects. We fit Bayesian observer models to subjects' performance to estimate the statistical environments in which subjects' performance would be ideal and compared the estimated statistics with the true statistics of stimuli in an experiment. A first experiment showed that subjects accurately estimated and used the variance of object speeds in a stimulus set to time hitting behavior but also showed serial biases that are suboptimal for stimuli that were uncorrelated over time. A second experiment showed that the strength of the serial biases depended on the temporal correlations within a stimulus set, even when the biases were estimated from uncorrelated stimulus pairs subsampled from the larger set. Taken together, the results show that subjects adapted their internal models of the variance and covariance of object speeds within a stimulus set to plan interceptive movements but retained a bias to positive correlations.

  2. The brain uses adaptive internal models of scene statistics for sensorimotor estimation and planning.

    PubMed

    Kwon, Oh-Sang; Knill, David C

    2013-03-12

    Because of uncertainty and noise, the brain should use accurate internal models of the statistics of objects in scenes to interpret sensory signals. Moreover, the brain should adapt its internal models to the statistics within local stimulus contexts. Consider the problem of hitting a baseball. The impoverished nature of the visual information available makes it imperative that batters use knowledge of the temporal statistics and history of previous pitches to accurately estimate pitch speed. Using a laboratory analog of hitting a baseball, we tested the hypothesis that the brain uses adaptive internal models of the statistics of object speeds to plan hand movements to intercept moving objects. We fit Bayesian observer models to subjects' performance to estimate the statistical environments in which subjects' performance would be ideal and compared the estimated statistics with the true statistics of stimuli in an experiment. A first experiment showed that subjects accurately estimated and used the variance of object speeds in a stimulus set to time hitting behavior but also showed serial biases that are suboptimal for stimuli that were uncorrelated over time. A second experiment showed that the strength of the serial biases depended on the temporal correlations within a stimulus set, even when the biases were estimated from uncorrelated stimulus pairs subsampled from the larger set. Taken together, the results show that subjects adapted their internal models of the variance and covariance of object speeds within a stimulus set to plan interceptive movements but retained a bias to positive correlations. PMID:23440185

  3. Radiation dose reduction for coronary artery calcium scoring at 320-detector CT with adaptive iterative dose reduction 3D.

    PubMed

    Tatsugami, Fuminari; Higaki, Toru; Fukumoto, Wataru; Kaichi, Yoko; Fujioka, Chikako; Kiguchi, Masao; Yamamoto, Hideya; Kihara, Yasuki; Awai, Kazuo

    2015-06-01

    To assess the possibility of reducing the radiation dose for coronary artery calcium (CAC) scoring by using adaptive iterative dose reduction 3D (AIDR 3D) on a 320-detector CT scanner. Fifty-four patients underwent routine- and low-dose CT for CAC scoring. Low-dose CT was performed at one-third of the tube current used for routine-dose CT. Routine-dose CT was reconstructed with filtered back projection (FBP) and low-dose CT was reconstructed with AIDR 3D. We compared the calculated Agatston-, volume-, and mass scores of these images. The overall percentage difference in the Agatston-, volume-, and mass scores between routine- and low-dose CT studies was 15.9, 11.6, and 12.6%, respectively. There were no significant differences in the routine- and low-dose CT studies irrespective of the scoring algorithms applied. The CAC measurements of both imaging modalities were highly correlated with respect to the Agatston- (r = 0.996), volume- (r = 0.996), and mass score (r = 0.997; p < 0.001, all); the Bland-Altman limits of agreement scores were -37.4 to 51.4, -31.2 to 36.4 and -30.3 to 40.9%, respectively, suggesting that AIDR 3D was a good alternative for FBP. The mean effective radiation dose for routine- and low-dose CT was 2.2 and 0.7 mSv, respectively. The use of AIDR 3D made it possible to reduce the radiation dose by 67% for CAC scoring without impairing the quantification of coronary calcification.

  4. Radiation dose reduction for coronary artery calcium scoring at 320-detector CT with adaptive iterative dose reduction 3D.

    PubMed

    Tatsugami, Fuminari; Higaki, Toru; Fukumoto, Wataru; Kaichi, Yoko; Fujioka, Chikako; Kiguchi, Masao; Yamamoto, Hideya; Kihara, Yasuki; Awai, Kazuo

    2015-06-01

    To assess the possibility of reducing the radiation dose for coronary artery calcium (CAC) scoring by using adaptive iterative dose reduction 3D (AIDR 3D) on a 320-detector CT scanner. Fifty-four patients underwent routine- and low-dose CT for CAC scoring. Low-dose CT was performed at one-third of the tube current used for routine-dose CT. Routine-dose CT was reconstructed with filtered back projection (FBP) and low-dose CT was reconstructed with AIDR 3D. We compared the calculated Agatston-, volume-, and mass scores of these images. The overall percentage difference in the Agatston-, volume-, and mass scores between routine- and low-dose CT studies was 15.9, 11.6, and 12.6%, respectively. There were no significant differences in the routine- and low-dose CT studies irrespective of the scoring algorithms applied. The CAC measurements of both imaging modalities were highly correlated with respect to the Agatston- (r = 0.996), volume- (r = 0.996), and mass score (r = 0.997; p < 0.001, all); the Bland-Altman limits of agreement scores were -37.4 to 51.4, -31.2 to 36.4 and -30.3 to 40.9%, respectively, suggesting that AIDR 3D was a good alternative for FBP. The mean effective radiation dose for routine- and low-dose CT was 2.2 and 0.7 mSv, respectively. The use of AIDR 3D made it possible to reduce the radiation dose by 67% for CAC scoring without impairing the quantification of coronary calcification. PMID:25754302
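
    Agatston scoring itself is standard and easy to sketch: threshold the image at 130 HU, label connected lesions, and accumulate lesion area times a density weight from the lesion's peak HU. The slice, pixel size, and minimum-lesion-area rule below are illustrative placeholders, not the study's acquisition parameters:

```python
import numpy as np
from scipy import ndimage

# Textbook Agatston scoring on one synthetic axial slice: threshold at
# 130 HU, find connected calcified lesions, and accumulate
# (lesion area in mm^2) x (density weight from the lesion's peak HU).
pixel_area_mm2 = 0.5 * 0.5
hu = np.zeros((64, 64))
hu[10:13, 10:13] = 250          # a small 9-pixel lesion, peak 250 HU
hu[40:44, 40:44] = 450          # a denser 16-pixel lesion, peak 450 HU

def density_weight(peak_hu):
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    return 1                    # 130-199 HU

mask = hu >= 130
labels, n_lesions = ndimage.label(mask)
score = 0.0
for lesion in range(1, n_lesions + 1):
    region = labels == lesion
    area = region.sum() * pixel_area_mm2
    if area < 1.0:              # typical minimum-lesion-area criterion
        continue
    score += area * density_weight(hu[region].max())

print(f"Agatston score for this slice: {score:.1f}")
```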

  5. Image Quality and Radiation Dose of CT Coronary Angiography with Automatic Tube Current Modulation and Strong Adaptive Iterative Dose Reduction Three-Dimensional (AIDR3D)

    PubMed Central

    Shen, Hesong; Dai, Guochao; Luo, Mingyue; Duan, Chaijie; Cai, Wenli; Liang, Dan; Wang, Xinhua; Zhu, Dongyun; Li, Wenru; Qiu, Jianping

    2015-01-01

    Purpose To investigate image quality and radiation dose of CT coronary angiography (CTCA) scanned using automatic tube current modulation (ATCM) and reconstructed by strong adaptive iterative dose reduction three-dimensional (AIDR3D). Methods Eighty-four consecutive CTCA patients were included in the study. All patients were scanned using ATCM, and the data were reconstructed with strong AIDR3D, standard AIDR3D, and filtered back-projection (FBP), respectively. Two radiologists who were blinded to the patients' clinical data and reconstruction methods evaluated image quality. Quantitative image quality evaluation included image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). For qualitative evaluation, the coronary arteries were classified into 15 segments based on the modified guidelines of the American Heart Association, and qualitative image quality was evaluated using a 4-point scale. Radiation dose was calculated based on the dose-length product. Results Compared with standard AIDR3D, strong AIDR3D had lower image noise and higher SNR and CNR; the differences were all statistically significant (P<0.05). Compared with FBP, strong AIDR3D decreased image noise by 46.1%, increased SNR by 84.7%, and improved CNR by 82.2%; the differences were all statistically significant (P<0.05 or 0.001). Segments with diagnostic image quality for strong AIDR3D were 336 (100.0%), 486 (96.4%), and 394 (93.8%) in the proximal, middle, and distal parts, respectively, whereas those for standard AIDR3D were 332 (98.8%), 472 (93.7%), and 378 (90.0%), and those for FBP were 217 (64.6%), 173 (34.3%), and 114 (27.1%); total segments with diagnostic image quality for strong AIDR3D (1216, 96.5%) were higher than those for standard AIDR3D (1182, 93.8%) and FBP (504, 40.0%); the differences between strong AIDR3D and standard AIDR3D, and between strong AIDR3D and FBP, were all statistically significant (P<0.05 or 0.001). The mean effective radiation dose was (2.55±1.21) mSv. Conclusion

  6. CUSUM-Based Person-Fit Statistics for Adaptive Testing. Research Report 99-05.

    ERIC Educational Resources Information Center

    van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.

    Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT), the use of person-fit analysis has hardly been…
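
    A minimal CUSUM person-fit sketch under a Rasch model follows; the item difficulties, ability estimate, reference value k, and decision bound h are all invented for illustration, not taken from the report. Standardized residuals between observed and model-expected item scores accumulate in upper and lower CUSUMs, flagging the examinee when either drifts past the bound:

```python
import numpy as np

# Minimal CUSUM person-fit check for an adaptive test under a Rasch
# model: accumulate standardized residuals in upper/lower CUSUMs and
# flag the examinee when either exceeds a bound. All numbers are
# illustrative placeholders.
rng = np.random.default_rng(9)
theta_hat = 0.0                         # ability estimate from the CAT
b = rng.normal(size=30)                 # difficulties of administered items
p = 1.0 / (1.0 + np.exp(-(theta_hat - b)))   # Rasch success probabilities

# A misfitting examinee: model-consistent responses, except the last
# 10 items are all answered incorrectly (e.g., fatigue).
x = (rng.random(30) < p).astype(float)
x[20:] = 0.0

k, h = 0.5, 2.5                         # reference value and decision bound
c_plus = c_minus = 0.0
flagged_at = None
for t in range(30):
    resid = (x[t] - p[t]) / np.sqrt(p[t] * (1 - p[t]))
    c_plus = max(0.0, c_plus + resid - k)
    c_minus = min(0.0, c_minus + resid + k)
    if flagged_at is None and (c_plus > h or -c_minus > h):
        flagged_at = t

print(f"C+ = {c_plus:.2f}, C- = {c_minus:.2f}, flagged at item: {flagged_at}")
```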

  7. Adaptive Perfectionism, Maladaptive Perfectionism and Statistics Anxiety in Graduate Psychology Students

    ERIC Educational Resources Information Center

    Comerchero, Victoria; Fortugno, Dominick

    2013-01-01

    The current study examined if correlations between statistics anxiety and dimensions of perfectionism (adaptive and maladaptive) were present amongst a sample of psychology graduate students (N = 96). Results demonstrated that scores on the APS-R Discrepancy scale, corresponding to maladaptive perfectionism, correlated with higher levels of…

  8. Statistical model based iterative reconstruction in myocardial CT perfusion: exploitation of the low dimensionality of the spatial-temporal image matrix

    NASA Astrophysics Data System (ADS)

    Li, Yinsheng; Niu, Kai; Chen, Guang-Hong

    2015-03-01

    Time-resolved CT imaging methods play an increasingly important role in clinical practice, particularly, in the diagnosis and treatment of vascular diseases. In a time-resolved CT imaging protocol, it is often necessary to irradiate the patients for an extended period of time. As a result, the cumulative radiation dose in these CT applications is often higher than that of the static CT imaging protocols. Therefore, it is important to develop new means of reducing radiation dose for time-resolved CT imaging. In this paper, we present a novel statistical model based iterative reconstruction method that enables the reconstruction of low noise time-resolved CT images at low radiation exposure levels. Unlike other well known statistical reconstruction methods, this new method primarily exploits the intrinsic low dimensionality of time-resolved CT images to regularize the reconstruction. Numerical simulations were used to validate the proposed method.
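
    The "low dimensionality of the spatial-temporal image matrix" can be illustrated without the full statistical reconstruction: stack the dynamic series as a pixels-by-time Casorati matrix and truncate its singular values, since contrast dynamics are well described by a few temporal basis functions. The data below are synthetic and the hard truncation is only a stand-in for the paper's regularizer:

```python
import numpy as np

# Low-rank illustration: a dynamic series whose temporal behavior lives
# in a 3-dimensional subspace is denoised by truncating the SVD of its
# (pixels x time) Casorati matrix. Synthetic data throughout.
rng = np.random.default_rng(10)
n_pix, n_t, rank = 400, 30, 3
U = rng.normal(size=(n_pix, rank))                 # spatial coefficient maps
t = np.linspace(0, 1, n_t)
V = np.stack([np.ones(n_t), t, np.exp(-3 * t)])    # temporal basis functions
clean = U @ V                                      # rank-3 dynamic series
noisy = clean + 0.5 * rng.normal(size=clean.shape)

u, s, vt = np.linalg.svd(noisy, full_matrices=False)
s[rank:] = 0.0                                     # keep only top components
denoised = (u * s) @ vt

err = lambda m: np.linalg.norm(m - clean) / np.linalg.norm(clean)
print(f"relative error: noisy {err(noisy):.3f} -> low-rank {err(denoised):.3f}")
```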

  9. Using iterative cluster merging with improved gap statistics to perform online phenotype discovery in the context of high-throughput RNAi screens

    PubMed Central

    Yin, Zheng; Zhou, Xiaobo; Bakal, Chris; Li, Fuhai; Sun, Youxian; Perrimon, Norbert; Wong, Stephen TC

    2008-01-01

    Background The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has been dependent upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods which aim to identify novel and biologically relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks, i.e., recovering pre-defined biologically meaningful phenotypes, differentiating novel phenotypes from known ones, and distinguishing novel phenotypes from each other. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens. Results Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian mixture model (GMM) is employed to estimate the distribution of each existing phenotype and is then used as the reference distribution in the gap statistics. This method is broadly applicable to a number of different types of image-based datasets

  10. Modified H-statistic with adaptive Winsorized mean in two groups test

    NASA Astrophysics Data System (ADS)

    Teh, Kian Wooi; Abdullah, Suhaida; Yahaya, Sharipah Soaad Syed; Yusof, Zahayu Md

    2014-06-01

    The t-test is a commonly used statistic for comparing two independent groups. It is simple to compute, yet powerful when the data are normally distributed with equal variances. In real-life data, however, these conditions are often not met. Violating the assumptions of normality and equal variances has a devastating effect on the t-test's control of the Type I error rate and, at the same time, reduces its statistical power. Therefore, in this study, the adaptive Winsorized mean with hinge estimator in the H-statistic (AWM-H) is proposed. The H-statistic is a robust statistic that is able to handle the problem of non-normality when comparing independent groups. The procedure originally used the Modified One-step M (MOM) estimator, which employs a trimming process. In the AWM-H procedure, the MOM estimator is replaced with the adaptive Winsorized mean (AWM) as the measure of central tendency of the test, with the Winsorization process based on the hinge estimator HQ or HQ1. Overall results showed that the proposed method performed better than the original method and the classical method, especially under heavy-tailed distributions.
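
    For readers unfamiliar with Winsorization, the sketch below shows a plain symmetric Winsorized mean. In the actual AWM-H procedure the Winsorization amounts are chosen adaptively from the hinge estimators HQ or HQ1 rather than fixed, so the fixed fractions here are a simplifying assumption.

```python
import numpy as np

def winsorized_mean(x, lower=0.1, upper=0.1):
    # Replace the lowest/highest fractions of the sorted sample with the
    # nearest retained order statistics, then average.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    g_lo, g_hi = int(lower * n), int(upper * n)
    x[:g_lo] = x[g_lo]
    if g_hi:                     # guard: x[-0:] would address the whole array
        x[-g_hi:] = x[-g_hi - 1]
    return x.mean()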

  11. Research and Teaching: Statistics across the Curriculum Using an Iterative, Interactive Approach in an Inquiry-Based Lab Sequence

    ERIC Educational Resources Information Center

    Remsburg, Alysa J.; Harris, Michelle A.; Batzli, Janet M.

    2014-01-01

    How can science instructors prepare students for the statistics needed in authentic inquiry labs? We designed and assessed four instructional modules with the goals of increasing student confidence, appreciation, and performance in both experimental design and data analysis. Using extensions from a just-in-time teaching approach, we introduced…

  12. Adaptation to Changes in Higher-Order Stimulus Statistics in the Salamander Retina

    PubMed Central

    Tkačik, Gašper; Ghosh, Anandamohan; Schneidman, Elad; Segev, Ronen

    2014-01-01

    Adaptation in the retina is thought to optimize the encoding of natural light signals into sequences of spikes sent to the brain. While adaptive changes in retinal processing to the variations of the mean luminance level and second-order stimulus statistics have been documented before, no such measurements have been performed when higher-order moments of the light distribution change. We therefore measured the ganglion cell responses in the tiger salamander retina to controlled changes in the second (contrast), third (skew) and fourth (kurtosis) moments of the light intensity distribution of spatially uniform temporally independent stimuli. The skew and kurtosis of the stimuli were chosen to cover the range observed in natural scenes. We quantified adaptation in ganglion cells by studying linear-nonlinear models that capture well the retinal encoding properties across all stimuli. We found that the encoding properties of retinal ganglion cells change only marginally when higher-order statistics change, compared to the changes observed in response to the variation in contrast. By analyzing optimal coding in LN-type models, we showed that neurons can maintain a high information rate without large dynamic adaptation to changes in skew or kurtosis. This is because, for uncorrelated stimuli, spatio-temporal summation within the receptive field averages away non-gaussian aspects of the light intensity distribution. PMID:24465742

  13. Adaptation to changes in higher-order stimulus statistics in the salamander retina.

    PubMed

    Tkačik, Gašper; Ghosh, Anandamohan; Schneidman, Elad; Segev, Ronen

    2014-01-01

    Adaptation in the retina is thought to optimize the encoding of natural light signals into sequences of spikes sent to the brain. While adaptive changes in retinal processing to the variations of the mean luminance level and second-order stimulus statistics have been documented before, no such measurements have been performed when higher-order moments of the light distribution change. We therefore measured the ganglion cell responses in the tiger salamander retina to controlled changes in the second (contrast), third (skew) and fourth (kurtosis) moments of the light intensity distribution of spatially uniform temporally independent stimuli. The skew and kurtosis of the stimuli were chosen to cover the range observed in natural scenes. We quantified adaptation in ganglion cells by studying linear-nonlinear models that capture well the retinal encoding properties across all stimuli. We found that the encoding properties of retinal ganglion cells change only marginally when higher-order statistics change, compared to the changes observed in response to the variation in contrast. By analyzing optimal coding in LN-type models, we showed that neurons can maintain a high information rate without large dynamic adaptation to changes in skew or kurtosis. This is because, for uncorrelated stimuli, spatio-temporal summation within the receptive field averages away non-gaussian aspects of the light intensity distribution.

  14. Drifter-based Predictions of the Spread of Surface Contamination Using Iterative Statistics: A Local Example with Global Applications

    NASA Astrophysics Data System (ADS)

    Fertitta, D. A.; Macdonald, A. M.; Rypina, I.

    2015-12-01

    In the aftermath of the 2011 Fukushima nuclear power plant accident, it became critical to determine how radionuclides, both from atmospheric deposition and direct ocean discharge, were spreading in the ocean. One successful method used drifter observations from the Global Drifter Program (GDP) to predict the timing of the spread of surface contamination. U.S. coasts are home to a number of nuclear power plants as well as other industries capable of leaking contamination into the surface ocean. Here, the spread of surface contamination from a hypothetical accident at the existing Pilgrim nuclear power plant on the coast of Massachusetts is used as an example to show how the historical drifter dataset can be used as a prediction tool. Our investigation uses a combined dataset of drifter tracks from the GDP and the NOAA Northeast Fisheries Science Center. Two scenarios are examined to estimate the spread of surface contamination: a local direct leakage scenario and a broader atmospheric deposition scenario that could result from an explosion. The local leakage scenario is used to study the spread of contamination within and beyond Cape Cod Bay, and the atmospheric deposition scenario is used to study the large-scale spread of contamination throughout the North Atlantic Basin. A multiple-iteration method of estimating probability makes best use of the available drifter data. This technique, which allows for direct observationally-based predictions, can be applied anywhere that drifter data are available to calculate estimates of the likelihood and general timing of the spread of surface contamination in the ocean.

  15. Adaptive statistic tracking control based on two-step neural networks with time delays.

    PubMed

    Yi, Yang; Guo, Lei; Wang, Hong

    2009-03-01

    This paper presents a new type of control framework for dynamical stochastic systems, called statistic tracking control (STC). The system considered is general and non-Gaussian, and the tracking objective is the statistical information of a given target probability density function (pdf) rather than a deterministic signal. The control aims to make the statistical information of the output pdfs follow that of the target pdf. For such a control framework, a variable structure adaptive tracking control strategy is first established using two-step neural network models. Following the B-spline neural network approximation of the integrated performance function, the problem concerned is transformed into tracking the given weights. A dynamic neural network (DNN) is employed to identify the unknown nonlinear dynamics between the control input and the weights related to the integrated function. To achieve the required control objective, an adaptive controller based on the proposed DNN is developed so as to track a reference trajectory. Stability analysis for both the identification and tracking errors is developed via the Lyapunov stability criterion. Simulations are given to demonstrate the efficiency of the proposed approach. PMID:19179249

  16. Statistical adaptive reversible steganographic technique using bicubic interpolation and difference expansion

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Chi; Tsai, Chwei-Shyong; Yang, Wen-Lung; Tsai, Yi-Chang; Yu, Shyr-Shen

    2010-08-01

    The reversible steganographic technique allows the extraction of secret messages and the restoration of the original image without any distortion from the embedded image. In this work, a statistical adaptive reversible steganographic technique is proposed to improve difference expansion (DE)-based schemes; it consists of two parts. First, bicubic interpolation is adopted for pixel prediction to obtain more embeddable pixels; since differences are generated between the predicted value and the original value, the quality of the differences is also considered. Second, a statistical adaptive reversible embedding algorithm is proposed to overcome the restriction on embedding capacity under single-layer embedding. The relationship between the complexity of the neighboring pixels and the difference distribution of the image is generalized as a conditional variance in statistics. With the maximum modifiable degree of the predicted pixel, the proposed scheme provides a suitable embedding capacity for all embeddable pixels with less additional information. The experimental results demonstrate the advantages of the proposed scheme and show that it provides high capacity with good visual quality for the embedded image.
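
    The difference-expansion core that such schemes build on is compact enough to show directly. Below is a sketch of Tian's classic DE embedding of one bit into a pixel pair; the proposed scheme differs in that it predicts pixels by bicubic interpolation and selects embeddable pixels adaptively, and overflow/underflow checks are omitted here.

```python
def de_embed(a, b, bit):
    # Tian's difference expansion on one pixel pair (overflow checks omitted).
    l = (a + b) // 2            # integer average, invariant under embedding
    h = a - b                   # difference
    h2 = 2 * h + bit            # expanded difference now carries one bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(a2, b2):
    l = (a2 + b2) // 2          # same integer average as before embedding
    h2 = a2 - b2
    bit = h2 & 1                # the embedded bit is the LSB of h2
    h = h2 // 2                 # floor division recovers the original difference
    return l + (h + 1) // 2, l - h // 2, bit

# e.g. de_extract(*de_embed(10, 6, 1)) == (10, 6, 1)
```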

  17. The effect of a coronagraph on the statistics of Adaptive Optics Pinned Speckles

    NASA Astrophysics Data System (ADS)

    Aime, C.; Soummer, R.

    In this communication we study the statistics of Adaptive Optics residual speckles, and we discuss how a coronagraph can defeat the noise associated with these speckles. In high Strehl ratio regimes, residual speckles are pinned on the diffraction rings of the Airy pattern. It can be shown that these speckles are due to small defects of the wavefront, amplified by the coherent part of the wave, and that the statistics of their intensity can be described by a modified Rice distribution. At low flux levels, a Poisson-Mandel transformation provides an analytical expression of the probability density function. We show the results of a numerical simulation and compare them to the theoretical model. Simple analytical expressions can be derived for the variance of the noise. We discuss the efficiency of a coronagraph in terms of signal-to-noise ratio, based on the analysis of the noise contributions that can be reduced by a coronagraph.
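
    The modified Rice (Rician) intensity distribution mentioned above has a simple closed form: with Ic the deterministic (coherent) intensity and Is the speckle halo intensity, p(I) = exp(-(I + Ic)/Is) I0(2 sqrt(I Ic)/Is) / Is. A numerically safe sketch, assuming this common parameterization:

```python
import numpy as np
from scipy.special import i0e

def modified_rician_pdf(I, Ic, Is):
    # p(I) = exp(-(I + Ic)/Is) * I0(2*sqrt(I*Ic)/Is) / Is, written with the
    # exponentially scaled Bessel function i0e to avoid overflow at large x.
    x = 2.0 * np.sqrt(I * Ic) / Is
    return np.exp(x - (I + Ic) / Is) * i0e(x) / Is
```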

  18. Weighted log-rank statistic to compare shared-path adaptive treatment strategies.

    PubMed

    Kidwell, Kelley M; Wahed, Abdus S

    2013-04-01

    Adaptive treatment strategies (ATSs) more closely mimic the reality of a physician's prescription process where the physician prescribes a medication to his/her patient, and based on that patient's response to the medication, modifies the treatment. Two-stage randomization designs, more generally, sequential multiple assignment randomization trial designs, are useful to assess ATSs where the interest is in comparing the entire sequence of treatments, including the patient's intermediate response. In this paper, we introduce the notion of shared-path and separate-path ATSs and propose a weighted log-rank statistic to compare overall survival distributions of multiple two-stage ATSs, some of which may be shared-path. Large sample properties of the statistic are derived and the type I error rate and power of the test are compared with the standard log-rank test through simulation. PMID:23178734

  19. Statistical analysis of multilook polarimetric SAR data and terrain classification with adaptive distribution

    NASA Astrophysics Data System (ADS)

    Liu, Guoqing; Huang, ShunJi; Torre, Andrea; Rubertone, Franco S.

    1995-11-01

    This paper deals with the analysis of the statistical properties of multi-look processed polarimetric SAR data. Based on the assumption that a multi-look polarimetric measurement is a product of a Gamma-distributed texture variable and a Wishart-distributed polarimetric speckle variable, it is shown that the multi-look polarimetric measurement from a nonhomogeneous region obeys a generalized K-distribution. To validate this statistical model, two of its variants, the multi-look intensity and amplitude K-distributions, are compared with histograms of observed multi-look SAR data for three terrain types (ocean, forest-like and city regions) and with four empirical distribution models (Gaussian, log-normal, gamma and Weibull). A qualitative relation between the degree of nonhomogeneity of a textured scene and the best-fitting statistical model is then empirically established. Finally, a classifier with adaptive distributions, guided by the order parameter of the texture distribution estimated from local statistics, is introduced to perform terrain classification; experimental results with both multi-look fully polarimetric data and multi-look single-channel intensity/amplitude data indicate its effectiveness.
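
    For reference, the multi-look intensity K-distribution described above (gamma texture of order nu modulating L-look gamma speckle with mean intensity mu) can be evaluated as sketched below; the parameterization is the common textbook one and is stated here as an assumption, not taken from the paper.

```python
import numpy as np
from scipy.special import gamma, kv

def k_intensity_pdf(I, mu, L, nu):
    # Multi-look intensity K-distribution: gamma texture (order nu)
    # modulating L-look gamma speckle, with mean intensity mu.
    c = L * nu / mu
    return (2.0 / (gamma(L) * gamma(nu))) * c ** ((L + nu) / 2.0) \
        * I ** ((L + nu) / 2.0 - 1.0) * kv(nu - L, 2.0 * np.sqrt(c * I))
```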

  20. Iterative adaption of the bidimensional wall of the French T2 wind tunnel around a C5 axisymmetrical model: Infinite variation of the Mach number at zero incidence and a test at increased incidence

    NASA Technical Reports Server (NTRS)

    Archambaud, J. P.; Dor, J. B.; Payry, M. J.; Lamarche, L.

    1986-01-01

    The top and bottom two-dimensional walls of the T2 wind tunnel are adapted through an iterative process; the adaptation calculation takes the three-dimensionality of the flow into account. This method makes it possible to start from walls of any shape. The tests were performed with a C5 axisymmetric model at ambient temperature. Comparisons are made with the results of a true three-dimensional adaptation.

  1. Small sample properties of an adaptive filter with application to low volume statistical process control

    SciTech Connect

    Crowder, S.V.; Eshleman, L.

    1998-08-01

    In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper the authors address the issue of low volume statistical process control. They investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. The authors develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, they study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. They show that far fewer data values are needed than is typically recommended for process control applications, and they demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.

  2. Small Sample Properties of an Adaptive Filter with Application to Low Volume Statistical Process Control

    SciTech Connect

    CROWDER, STEPHEN V.

    1999-09-01

    In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper we address the issue of low volume statistical process control. We investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. We develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, we study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. We show that far fewer data values are needed than is typically recommended for process control applications. We also demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.
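
    One concrete way to realize the adaptive filtering idea in both reports, updating unknown process parameters as each new autocorrelated observation arrives, is recursive least squares on an AR(1) model. The sketch below is illustrative rather than the authors' exact filter; the forgetting factor lam and the AR(1) structure are assumptions.

```python
import numpy as np

def rls_ar1(x, lam=0.99, delta=10.0):
    # Recursive least squares for x[t] = c + phi * x[t-1] + e[t]; the
    # parameter estimates are refined as each observation arrives.
    theta = np.zeros(2)               # [c, phi], unknown at start-up
    P = delta * np.eye(2)             # large initial uncertainty
    preds = np.full(len(x), np.nan)
    for t in range(1, len(x)):
        u = np.array([1.0, x[t - 1]])
        preds[t] = u @ theta          # one-step-ahead forecast
        k = P @ u / (lam + u @ P @ u)
        theta = theta + k * (x[t] - preds[t])
        P = (P - np.outer(k, u @ P)) / lam
    return theta, preds               # forecast errors drive the control chart
```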

  3. An Adaptive Association Test for Multiple Phenotypes with GWAS Summary Statistics.

    PubMed

    Kim, Junghi; Bai, Yun; Pan, Wei

    2015-12-01

    We study the problem of testing for single marker-multiple phenotype associations based on genome-wide association study (GWAS) summary statistics without access to individual-level genotype and phenotype data. For most published GWASs, obtaining summary data is substantially easier than accessing individual-level phenotype and genotype data, and multiple correlated traits have often been collected, so the problem studied here has become increasingly important. We propose a powerful adaptive test and compare its performance with some existing tests. We illustrate its applications to analyses of a meta-analyzed GWAS dataset with three blood lipid traits and another with sex-stratified anthropometric traits, and further demonstrate its potential power gain over some existing methods through realistic simulation studies. We start from the situation with only one set of (possibly meta-analyzed) genome-wide summary statistics, then extend the method to meta-analysis of multiple sets of genome-wide summary statistics, each from one GWAS. We expect the proposed test to be useful in practice as more powerful than or complementary to existing methods.
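
    Adaptive tests in this line of work combine sum-of-powered-score statistics across several powers and calibrate the minimum p-value by Monte Carlo simulation under the null. A self-contained sketch, under the assumption that one has a vector of per-phenotype Z-scores for a SNP and an estimate of their null correlation matrix (in practice obtained from genome-wide summary statistics):

```python
import numpy as np

def aspu(Z, Sigma, powers=(1, 2, 4, 8), n_sim=10000, seed=0):
    # Z: per-phenotype Z-scores for one SNP; Sigma: their null correlation.
    rng = np.random.default_rng(seed)
    Z0 = rng.multivariate_normal(np.zeros(len(Z)), Sigma, size=n_sim)
    T_obs = np.array([np.sum(np.asarray(Z) ** g) for g in powers])
    T_null = np.stack([np.sum(Z0 ** g, axis=1) for g in powers], axis=1)
    # p-value of each SPU(gamma) statistic by Monte Carlo
    p_obs = (np.sum(np.abs(T_null) >= np.abs(T_obs), axis=0) + 1) / (n_sim + 1)
    # approximate null p-values of every simulated statistic via ranks,
    # then calibrate the minimum p-value over the powers
    ranks = np.argsort(np.argsort(-np.abs(T_null), axis=0), axis=0) + 1
    p_min_null = (ranks / n_sim).min(axis=1)
    p_adaptive = (np.sum(p_min_null <= p_obs.min()) + 1) / (n_sim + 1)
    return p_obs, p_adaptive
```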

  4. Person Fit Based on Statistical Process Control in an Adaptive Testing Environment. Research Report 98-13.

    ERIC Educational Resources Information Center

    van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.

    Person-fit research in the context of paper-and-pencil tests is reviewed, and some specific problems regarding person fit in the context of computerized adaptive testing (CAT) are discussed. Some new methods are proposed to investigate person fit in a CAT environment. These statistics are based on Statistical Process Control (SPC) theory. A…

  5. WE-G-18A-04: 3D Dictionary Learning Based Statistical Iterative Reconstruction for Low-Dose Cone Beam CT Imaging

    SciTech Connect

    Bai, T; Yan, H; Shi, F; Jia, X; Jiang, Steve B.; Lou, Y; Xu, Q; Mou, X

    2014-06-15

    clinical application. A high z-resolution is preferred to stabilize statistical iterative reconstruction. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), the Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011), and the China Scholarship Council.

  6. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.

    PubMed

    Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G

    2013-05-20

    A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a common sexually transmitted infection with more than 100 currently known types, and the two types studied in this paper, 6 and 11, cause about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to jointly estimate the HPV-6 and HPV-11 epidemic model parameters as well as the unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.
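
    The adaptive MCMC ingredient can be illustrated with the classic Haario-style adaptive Metropolis sampler, in which the Gaussian proposal covariance is continually re-estimated from the chain history. The paper's algorithm additionally incorporates the forward-projection strategy for the ODE state trajectories, which is not reproduced in this minimal sketch.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=20000, adapt_start=1000, seed=0):
    rng = np.random.default_rng(seed)
    d = len(x0)
    sd = 2.4 ** 2 / d                         # Haario et al. scaling factor
    chain = np.empty((n_iter, d))
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    cov = 0.01 * np.eye(d)                    # fixed proposal during burn-in
    for t in range(n_iter):
        if t >= adapt_start:                  # adapt from the chain history
            cov = sd * (np.atleast_2d(np.cov(chain[:t].T)) + 1e-8 * np.eye(d))
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[t] = x
    return chain
```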

  7. Statistics

    Cancer.gov

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  8. Image Restoration Using the Damped Richardson-Lucy Iteration

    NASA Astrophysics Data System (ADS)

    White, R. L.

    The most widely used image restoration technique for optical astronomical data is the Richardson-Lucy (RL) iteration. The RL method is well-suited to optical and ultraviolet data because it converges to the maximum likelihood solution for Poisson statistics in the data, which is appropriate for astronomical images taken with CCD or photon-counting detectors. Images restored using the RL iteration have good photometric linearity and can be used for quantitative analysis, and typical RL restorations require a manageable amount of computer time. Despite its advantages, the RL method has some serious shortcomings. Noise amplification is a problem, as for all maximum likelihood techniques. If one performs many RL iterations on an image containing an extended object such as a galaxy, the extended emission develops a "speckled" appearance. The speckles are the result of fitting the noise in the data too closely. The only limit on the amount of noise amplification in the RL method is the requirement that the image not become negative. The usual practical approach to limiting noise amplification is simply to stop the iteration when the restored image appears to become too noisy. However, in most cases the number of iterations needed is different for different parts of the image. Hundreds of iterations may be required to get a good fit to the high signal-to-noise image of a bright star, while a smooth, extended object may be fitted well after only a few iterations. Thus, one would like to be able to slow or stop the iteration automatically in regions where a smooth model fits the data adequately, while continuing to iterate in regions where there are sharp features (edges or point sources). The need for a spatially adaptive convergence criterion is exacerbated when CCD readout noise is included in the RL algorithm (Snyder, Hammoud, & White, 1993, JOSA A, 10, 1014), because the rate of convergence is then slower for faint stars than for bright stars. This paper will
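
    The undamped RL iteration that the damped variant modifies is only a few lines. A sketch with FFT-based convolution is given below; the damping of White's method, which suppresses updates in regions where the model already fits the data to within the noise, is omitted here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    psf = psf / psf.sum()                     # normalized point spread function
    x = np.full(image.shape, image.mean(), dtype=float)   # flat initial estimate
    for _ in range(n_iter):
        model = fftconvolve(x, psf, mode="same")
        ratio = image / np.maximum(model, eps)            # data / blurred estimate
        # multiplicative update with the mirrored PSF (adjoint of the blur)
        x *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
    return x
```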

  9. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference (noise) signal and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined, and a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis; in this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
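
    As one plausible reading of the frequency-domain hypothesis-testing idea (not the paper's exact ASTF), the sketch below compares segment-averaged periodograms of the measured signal and a reference noise record with an F-test and suppresses spectral bins that are statistically indistinguishable from the noise; the segment count, test statistic, and significance level are all assumptions.

```python
import numpy as np
from scipy.stats import f as f_dist

def statistic_test_filter(sig, noise_ref, n_seg=8, alpha=0.05):
    # Use a common segment length so signal and reference share frequency bins.
    n = min(len(sig), len(noise_ref)) // n_seg
    S = np.asarray(sig, float)[: n * n_seg].reshape(n_seg, n)
    R = np.asarray(noise_ref, float)[: n * n_seg].reshape(n_seg, n)
    P_sig = np.mean(np.abs(np.fft.rfft(S, axis=1)) ** 2, axis=0)
    P_ref = np.mean(np.abs(np.fft.rfft(R, axis=1)) ** 2, axis=0)
    # Under equal power, the averaged-periodogram ratio is roughly
    # F(2*n_seg, 2*n_seg); keep only bins significantly above the noise.
    keep = P_sig / np.maximum(P_ref, 1e-30) > f_dist.ppf(1 - alpha, 2 * n_seg, 2 * n_seg)
    filt = np.fft.irfft(np.fft.rfft(S, axis=1) * keep, n=n, axis=1)
    return filt.ravel(), keep
```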

  10. Identifying minefields and verifying clearance: adapting statistical methods for UXO target detection

    NASA Astrophysics Data System (ADS)

    Gilbert, Richard O.; O'Brien, Robert F.; Wilson, John E.; Pulsipher, Brent A.; McKinstry, Craig A.

    2003-09-01

    It may not be feasible to completely survey large tracts of land suspected of containing minefields. It is desirable to develop a characterization protocol that will confidently identify minefields within these large land tracts if they exist. Naturally, surveying areas of greatest concern and most likely locations would be necessary but will not provide the needed confidence that an unknown minefield had not eluded detection. Once minefields are detected, methods are needed to bound the area that will require detailed mine detection surveys. The US Department of Defense Strategic Environmental Research and Development Program (SERDP) is sponsoring the development of statistical survey methods and tools for detecting potential UXO targets. These methods may be directly applicable to demining efforts. Statistical methods are employed to determine the optimal geophysical survey transect spacing to have confidence of detecting target areas of a critical size, shape, and anomaly density. Other methods under development determine the proportion of a land area that must be surveyed to confidently conclude that there are no UXO present. Adaptive sampling schemes are also being developed as an approach for bounding the target areas. These methods and tools will be presented and the status of relevant research in this area will be discussed.

  11. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference (noise) signal and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined, and a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis; in this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.

  12. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference (noise) signal and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined, and a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis; in this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  13. Adaptive contour-based statistical background subtraction method for moving target detection in infrared video sequences

    NASA Astrophysics Data System (ADS)

    Akula, Aparna; Khanna, Nidhi; Ghosh, Ripul; Kumar, Satish; Das, Amitava; Sardana, H. K.

    2014-03-01

    A robust contour-based statistical background subtraction method for the detection of non-uniform thermal targets in infrared imagery is presented. The first step of the method is the generation of a background frame using statistical information from an initial set of frames containing no targets. The background frame is made adaptive by continuously updating it using the motion information of the scene. The background subtraction method, followed by a clutter rejection stage, ensures the detection of foreground objects. The next step is the detection of contours and the separation of target boundaries from the noisy background. This is achieved by using the Canny edge detector to extract the contours, followed by a k-means clustering approach to differentiate object contours from background contours. The post-processing step uses a morphological edge-linking approach to close any broken contours, and finally flood fill is performed to generate the silhouettes of moving targets. The method is validated on infrared video data containing a variety of moving targets. Experimental results demonstrate a high detection rate with minimal false alarms, establishing the robustness of the proposed method.
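
    A condensed version of such a pipeline (statistical background model, adaptive update, morphological cleanup, and Canny-based contour extraction) is sketched below using OpenCV; the clutter-rejection and k-means contour-classification stages of the paper are omitted for brevity, and the thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_targets(frames, n_bg=30, alpha=0.02, k=3.0):
    # Pixelwise mean/std background built from initial target-free frames.
    stack = np.stack(frames[:n_bg]).astype(np.float32)
    bg, sigma = stack.mean(axis=0), stack.std(axis=0) + 1e-3
    all_contours = []
    for f in frames[n_bg:]:
        f = f.astype(np.float32)
        fg = np.abs(f - bg) > k * sigma                  # statistical subtraction
        bg = (1.0 - alpha) * bg + alpha * f              # adaptive background update
        mask = fg.astype(np.uint8) * 255
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
        edges = cv2.Canny(mask, 50, 150)                 # contour edges
        cnts, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
        all_contours.append(cnts)
    return all_contours
```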

  14. Performances of the fractal iterative method with an internal model control law on the ESO end-to-end ELT adaptive optics simulator

    NASA Astrophysics Data System (ADS)

    Béchet, C.; Le Louarn, M.; Tallon, M.; Thiébaut, É.

    2008-07-01

    Adaptive Optics systems under study for the Extremely Large Telescopes have given rise to a new generation of algorithms for both wavefront reconstruction and the control law. First, the large number of controlled actuators imposes the use of computationally efficient methods. Second, the performance criterion is no longer based solely on nulling residual measurements; priors on the turbulence must be incorporated. To satisfy these two requirements, we suggested associating the Fractal Iterative Method for the estimation step with an Internal Model Control. This combination has now been tested on an end-to-end adaptive optics numerical simulator at ESO, named Octopus. Results are presented here, and the performance of our method is compared to the classical Matrix-Vector Multiplication combined with a pure integrator. In the light of a theoretical analysis of our control algorithm, we investigate the influence of several error contributions in our simulations. The reconstruction error varies with the signal-to-noise ratio but is limited by the use of priors. The ratio between the system loop delay and the wavefront coherence time also affects the reachable Strehl ratio. While no instabilities are observed, correction quality is clearly affected at low flux, when subaperture extinctions are frequent. Last but not least, the simulations demonstrate the robustness of the method with respect to sensor modeling errors and actuator misalignments.

  15. Autonomous spatially adaptive sampling in experiments based on curvature, statistical error and sample spacing with applications in LDA measurements

    NASA Astrophysics Data System (ADS)

    Theunissen, Raf; Kadosh, Jesse S.; Allen, Christian B.

    2015-06-01

    Spatially varying signals are typically sampled by collecting uniformly spaced samples irrespective of the signal content. For signals with inhomogeneous information content, this leads to unnecessarily dense sampling in regions of low interest or insufficient sample density at important features, or both. A new adaptive sampling technique is presented directing sample collection in proportion to local information content, capturing adequately the short-period features while sparsely sampling less dynamic regions. The proposed method incorporates a data-adapted sampling strategy on the basis of signal curvature, sample space-filling, variable experimental uncertainty and iterative improvement. Numerical assessment has indicated a reduction in the number of samples required to achieve a predefined uncertainty level overall while improving local accuracy for important features. The potential of the proposed method has been further demonstrated on the basis of Laser Doppler Anemometry experiments examining the wake behind a NACA0012 airfoil and the boundary layer characterisation of a flat plate.

  16. Multiple solution of systems of linear algebraic equations by an iterative method with the adaptive recalculation of the preconditioner

    NASA Astrophysics Data System (ADS)

    Akhunov, R. R.; Gazizov, T. R.; Kuksenko, S. P.

    2016-08-01

    The mean time needed to solve a series of systems of linear algebraic equations (SLAEs), as a function of the number of SLAEs, is investigated. It is proved that this function has an extremum point. An algorithm is developed for adaptively determining when the preconditioner matrix should be recalculated while a series of SLAEs is solved. A numerical experiment is carried out in which a series of SLAEs is solved repeatedly with the proposed algorithm to compute 100 capacitance matrices for two different structures: a microstrip line with varying thickness and a modal filter with a varying gap between the conductors. The observed speedups are close to optimal.
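
    The adaptive-recalculation idea can be mimicked with SciPy: freeze an ILU preconditioner, watch the GMRES iteration count as the matrices drift, and refactor once solves become too expensive. The growth threshold below is a stand-in for the paper's extremum-based criterion, and sparse input matrices are assumed.

```python
from scipy.sparse.linalg import gmres, spilu, LinearOperator

def solve_series(matrices, rhss, growth=2.0):
    ilu, base_iters, sols = None, None, []
    for A, b in zip(matrices, rhss):
        A = A.tocsc()
        if ilu is None:
            ilu = spilu(A)                        # (re)compute the preconditioner
        M = LinearOperator(A.shape, ilu.solve)
        count = {"n": 0}
        def cb(_):
            count["n"] += 1                       # count GMRES inner iterations
        x, info = gmres(A, b, M=M, callback=cb, callback_type="pr_norm")
        sols.append(x)
        if base_iters is None:
            base_iters = max(count["n"], 1)       # baseline right after factorization
        elif count["n"] > growth * base_iters:
            ilu = base_iters = None               # stale: refactor for the next system
    return sols
```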

  17. Vibration-based structural health monitoring using adaptive statistical method under varying environmental condition

    NASA Astrophysics Data System (ADS)

    Jin, Seung-Seop; Jung, Hyung-Jo

    2014-03-01

    It is well known that the dynamic properties of a structure, such as its natural frequencies, depend not only on damage but also on environmental conditions (e.g., temperature). The variation in the dynamic characteristics of a structure due to environmental conditions may mask damage. Without taking changes in environmental conditions into account, false-positive or false-negative damage diagnoses may occur, making structural health monitoring unreliable. To address this problem, many researchers construct a regression model of the structural responses that accounts for environmental factors. The key to the success of this approach is formulating the input and output variables of the regression model so as to capture the environmental variations. However, it is quite challenging to determine the proper environmental variables and measurement locations in advance so as to fully represent the relationship between the structural responses and the environmental variations. One alternative (i.e., novelty detection) is to remove the variations caused by environmental factors from the structural responses by using multivariate statistical analysis (e.g., principal component analysis (PCA), factor analysis, etc.). The success of this method depends heavily on the accuracy of the description of the normal condition. Generally, there is no prior information on the normal condition during data acquisition, so the normal condition is determined subjectively, with human intervention. The proposed method is a novel adaptive multivariate statistical analysis for structural damage detection under environmental change. One advantage of this method is the ability of generative learning to capture the intrinsic characteristics of the normal condition. The proposed method is tested on numerically simulated data for a range of measurement noise levels under environmental variation. A comparative
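
    A baseline version of the multivariate-statistics alternative described above, removing environment-dominated directions with PCA and scoring test data by the residual norm, can be sketched as follows; the adaptive, generative-learning aspect of the proposed method is not reproduced, and the choice of two environmental components is an assumption.

```python
import numpy as np

def pca_novelty(train, test, n_env=2):
    # train: baseline (normal-condition) feature matrix, e.g. natural
    # frequencies over time; the leading principal components are assumed
    # to capture environmental variation such as temperature.
    mu = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
    V = Vt[:n_env]                                # environment-dominated subspace
    resid = (test - mu) - (test - mu) @ V.T @ V   # project out that subspace
    return np.linalg.norm(resid, axis=1)          # novelty index per sample
```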

  18. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels.

    PubMed

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.

  19. Adaptation of the human visual system to the statistics of letters and line configurations.

    PubMed

    Chang, Claire H C; Pallier, Christophe; Wu, Denise H; Nakamura, Kimihiro; Jobert, Antoinette; Kuo, W-J; Dehaene, Stanislas

    2015-10-15

    By adulthood, literate humans have been exposed to millions of visual scenes and pages of text. Does the human visual system become attuned to the statistics of its inputs? Using functional magnetic resonance imaging, we examined whether the brain responses to line configurations are proportional to their natural-scene frequency. To further distinguish prior cortical competence from adaptation induced by learning to read, we manipulated whether the selected configurations formed letters and whether they were presented on the horizontal meridian, the familiar location where words usually appear, or on the vertical meridian. While no natural-scene frequency effect was observed, we observed letter-status and letter frequency effects on bilateral occipital activation, mainly for horizontal stimuli. The findings suggest a reorganization of the visual pathway resulting from reading acquisition under genetic and connectional constraints. Even early retinotopic areas showed a stronger response to letters than to rotated versions of the same shapes, suggesting an early visual tuning to large visual features such as letters. PMID:26190404

  20. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels

    PubMed Central

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J.

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378

  1. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels.

    PubMed

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378

  2. Data-driven and adaptive statistical residual evaluation for fault detection with an automotive application

    NASA Astrophysics Data System (ADS)

    Svärd, Carl; Nyberg, Mattias; Frisk, Erik; Krysander, Mattias

    2014-03-01

    An important step in model-based fault detection is residual evaluation, where residuals are evaluated with the aim to detect changes in their behavior caused by faults. To handle residuals subject to time-varying uncertainties and disturbances, which indeed are present in practice, a novel statistical residual evaluation approach is presented. The main contribution is to base the residual evaluation on an explicit comparison of the probability distribution of the residual, estimated online using current data, with a no-fault residual distribution. The no-fault distribution is based on a set of a priori known no-fault residual distributions, and is continuously adapted to the current situation. As a second contribution, a method is proposed for estimating the required set of no-fault residual distributions off-line from no-fault training data. The proposed residual evaluation approach is evaluated with measurement data on a residual for fault detection in the gas-flow system of a Scania truck diesel engine. Results show that small faults can be reliably detected with the proposed approach in cases where regular methods fail.
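
    A toy version of the central idea, comparing the online-estimated residual distribution against a set of a priori no-fault distributions and alarming when even the closest match fits poorly, might look as follows; the histogram binning and KL divergence are illustrative choices, not the authors' exact evaluation statistic.

```python
import numpy as np

def residual_alarm(window, ref_probs, bin_edges, thresh):
    # window: recent residual samples; ref_probs: list of no-fault bin
    # probability vectors defined on the same bin_edges.
    counts, _ = np.histogram(window, bins=bin_edges)
    p = counts / max(counts.sum(), 1)             # online residual distribution
    divs = []
    for q in ref_probs:
        m = p > 0
        divs.append(np.sum(p[m] * np.log(p[m] / np.maximum(q[m], 1e-12))))
    return min(divs) > thresh    # alarm if even the best no-fault match fits poorly
```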

  3. The Impact of Different Levels of Adaptive Iterative Dose Reduction 3D on Image Quality of 320-Row Coronary CT Angiography: A Clinical Trial

    PubMed Central

    Feger, Sarah; Rief, Matthias; Zimmermann, Elke; Martus, Peter; Schuijf, Joanne Désirée; Blobel, Jörg; Richter, Felicitas; Dewey, Marc

    2015-01-01

    Purpose The aim of this study was the systematic image quality evaluation of coronary CT angiography (CTA), reconstructed with the 3 different levels of adaptive iterative dose reduction (AIDR 3D) and compared to filtered back projection (FBP) with quantum denoising software (QDS). Methods Standard-dose CTA raw data of 30 patients with mean radiation dose of 3.2 ± 2.6 mSv were reconstructed using AIDR 3D mild, standard, strong and compared to FBP/QDS. Objective image quality comparison (signal, noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), contour sharpness) was performed using 21 measurement points per patient, including measurements in each coronary artery from proximal to distal. Results Objective image quality parameters improved with increasing levels of AIDR 3D. Noise was lowest in AIDR 3D strong (p≤0.001 at 20/21 measurement points; compared with FBP/QDS). Signal and contour sharpness analysis showed no significant difference between the reconstruction algorithms for most measurement points. Best coronary SNR and CNR were achieved with AIDR 3D strong. No loss of SNR or CNR in distal segments was seen with AIDR 3D as compared to FBP. Conclusions On standard-dose coronary CTA images, AIDR 3D strong showed higher objective image quality than FBP/QDS without reducing contour sharpness. Trial Registration Clinicaltrials.gov NCT00967876 PMID:25945924

  4. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to a family of adaptive filter algorithms. The method uses prior knowledge of the statistics of the channel impulse response: the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms, the filter coefficients are only partially updated, which reduces the computational complexity. In the VSS-SR-APA, an optimal selection of input regressors is performed during adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
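
    The baseline affine projection algorithm that all of these variants extend is shown below with a fixed scalar step size; the paper's contribution, an optimal step-size vector derived from channel impulse response statistics by minimizing the MSD, would replace mu and is not reproduced here.

```python
import numpy as np

def apa(x, d, L=8, K=4, mu=0.5, delta=1e-3):
    # Baseline affine projection algorithm: each update uses the K most
    # recent length-L regressors jointly; K = 1 reduces to NLMS.
    w = np.zeros(L)
    err = np.zeros(len(d))
    for n in range(L + K - 2, len(d)):
        # rows of X: regressors [x[n-k], ..., x[n-k-L+1]] for k = 0..K-1
        X = np.array([x[n - k - L + 1:n - k + 1][::-1] for k in range(K)])
        e = d[n - K + 1:n + 1][::-1] - X @ w          # a priori errors
        w += mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(K), e)
        err[n] = e[0]
    return w, err
```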

  5. Cross-cultural adaptation of research instruments: language, setting, time and statistical considerations

    PubMed Central

    2010-01-01

    Background Research questionnaires are not always translated appropriately before they are used in new temporal, cultural or linguistic settings. The results based on such instruments may therefore not accurately reflect what they are supposed to measure. This paper aims to illustrate the process and required steps involved in the cross-cultural adaptation of a research instrument, using the adaptation of an attitudinal instrument as an example. Methods A questionnaire was needed for the implementation of a study in Norway in 2007. There were no appropriate instruments available in Norwegian, so an Australian-English instrument was cross-culturally adapted. Results The adaptation process included investigation of conceptual and item equivalence. Two forward and two back-translations were synthesized and compared by an expert committee. Thereafter the instrument was pretested and adjusted accordingly. The final questionnaire was administered to opioid maintenance treatment staff (n=140) and harm reduction staff (n=180). The overall response rate was 84%. The original instrument failed confirmatory analysis; instead, a new two-factor scale was identified and found valid in the new setting. Conclusions The failure of the original scale highlights the importance of adapting instruments to current research settings. It also emphasizes the importance of ensuring that the concepts within an instrument are equivalent between the original and target language, time and context. If the described stages in the cross-cultural adaptation process had been omitted, the findings would have been misleading, even if presented with apparent precision. Thus, it is important to consider possible barriers when making a direct comparison between different nations, cultures and times. PMID:20144247

  6. On the Adaptive Control of the False Discovery Rate in Multiple Testing with Independent Statistics.

    ERIC Educational Resources Information Center

    Benjamini, Yoav; Hochberg, Yosef

    2000-01-01

    Presents an adaptive approach to multiple significance testing based on the procedure of Y. Benjamini and Y. Hochberg (1995) that first estimates the number of true null hypotheses and then uses that estimate in the Benjamini and Hochberg procedure. Uses the new procedure in examples from educational and behavioral studies and shows its control of…
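
    The two-stage idea is easy to state in code: estimate the number of true null hypotheses m0, then run the 1995 step-up procedure at level q·m/m0. The sketch below uses a Storey-type estimate of m0 at lambda = 0.5 for concreteness; Benjamini and Hochberg's own estimator is based on the slope of the ordered p-values, so that substitution is an assumption.

```python
import numpy as np

def adaptive_bh(pvals, q=0.05, lam=0.5):
    p = np.sort(np.asarray(pvals, dtype=float))
    m = len(p)
    # Storey-type estimate of the number of true nulls (assumption; the
    # BH 2000 procedure estimates m0 from the slope of the ordered p-values).
    m0 = min(m, np.ceil((np.sum(p > lam) + 1) / (1 - lam)))
    below = np.nonzero(p <= q * np.arange(1, m + 1) / m0)[0]
    return p[below[-1]] if below.size else 0.0   # reject all p <= returned value
```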

  7. Research of adaptive threshold edge detection algorithm based on statistics canny operator

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Wang, Huaisuo; Huang, Hua

    2015-12-01

    The traditional Canny operator cannot determine an optimal threshold across different scenes. To address this, an improved Canny edge detection algorithm based on an adaptive threshold is proposed. Experimental results on test images indicate that the improved algorithm obtains a reasonable threshold and achieves better accuracy and precision in edge detection.
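
    A widely used heuristic in the same spirit (though not necessarily the paper's statistic) sets the Canny hysteresis thresholds from the median intensity of the image, so the thresholds adapt to each scene:

```python
import cv2
import numpy as np

def auto_canny(img, sigma=0.33):
    # Derive the low/high hysteresis thresholds from the image's median
    # intensity; sigma controls the width of the accepted gradient band.
    v = float(np.median(img))
    lo = int(max(0, (1.0 - sigma) * v))
    hi = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(img, lo, hi)
```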

  8. Statistical Indexes for Monitoring Item Behavior under Computer Adaptive Testing Environment.

    ERIC Educational Resources Information Center

    Zhu, Renbang; Yu, Feng; Liu, Su

    A computerized adaptive test (CAT) administration usually requires a large supply of items with accurately estimated psychometric properties, such as item response theory (IRT) parameter estimates, to ensure the precision of examinee ability estimation. However, an estimated IRT model of a given item in any given pool does not always correctly…

  9. Adaptive nonlocal means-based regularization for statistical image reconstruction of low-dose X-ray CT

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Ma, Jianhua; Wang, Jing; Liu, Yan; Han, Hao; Li, Lihong; Moore, William; Liang, Zhengrong

    2015-03-01

    To reduce the radiation dose in X-ray computed tomography (CT) imaging, one common strategy is to lower the milliampere-second (mAs) setting during projection data acquisition. However, this strategy inevitably increases the projection data noise, and the resulting image from the filtered back-projection (FBP) method may suffer from excessive noise and streak artifacts. Edge-preserving nonlocal means (NLM) filtering can help to reduce the noise-induced artifacts in the FBP-reconstructed image, but it sometimes cannot completely eliminate them, especially under very low-dose circumstances when the image is severely degraded. To deal with this situation, we proposed a statistical image reconstruction scheme using an NLM-based regularization, which can suppress the noise and streak artifacts more effectively. However, we noticed that using a uniform filtering parameter in the NLM-based regularization was rarely optimal for the entire image. Therefore, in this study, we further developed a novel approach for designing adaptive filtering parameters by considering local characteristics of the image; the resulting regularization is referred to as adaptive NLM-based regularization. Experimental results with physical phantom and clinical patient data validated the superiority of the proposed adaptive NLM-regularized statistical image reconstruction method for low-dose X-ray CT, in terms of noise/streak artifact suppression and edge/detail/contrast/texture preservation.

  10. A Unifying Framework for Adaptive Radar Detection in Homogeneous Plus Structured Interference—Part I: On the Maximal Invariant Statistic

    NASA Astrophysics Data System (ADS)

    Ciuonzo, D.; De Maio, A.; Orlando, D.

    2016-06-01

    This paper deals with the problem of adaptive multidimensional/multichannel signal detection in homogeneous Gaussian disturbance with unknown covariance matrix and structured deterministic interference. The aforementioned problem corresponds to a generalization of the well-known Generalized Multivariate Analysis of Variance (GMANOVA). In this first part of the work, we formulate the considered problem in canonical form and, after identifying a desirable group of transformations for the considered hypothesis testing, we derive a Maximal Invariant Statistic (MIS) for the problem at hand. Furthermore, we provide the MIS distribution in the form of a stochastic representation. Finally, strong connections to the MIS obtained in the open literature in simpler scenarios are underlined.

  11. Frame selection performance limits for statistical image reconstruction of adaptive optics compensated images

    NASA Astrophysics Data System (ADS)

    Ford, Stephen D.

    1994-12-01

    The U.S. Air Force uses adaptive optics systems to collect images of extended objects beyond the atmosphere. These systems use wavefront sensors and deformable mirrors to compensate for atmospheric turbulence induced aberrations. Adaptive optics greatly enhance image quality, however, wavefront aberrations are not completely eliminated. Therefore, post-detection processing techniques are employed to further improve the compensated images. Typically, many short exposure images are collected, recentered to compensate for tilt, and then averaged to overcome randomness in the images and improve signal-to-noise ratio. Experience shows that some short exposure images in a data set are better than others. Frame selection exploits this fact by using a quality metric to discard low quality frames. A composite image is then created by averaging only the best frames. Performance limits associated with the frame selection technique are investigated in this thesis. Limits imposed by photon noise result in a minimum object brightness of visual magnitude +8 for point sources and +4 for a typical satellite model. Effective average point spread functions for point source and extended objects after frame selection processing are almost identical across a wide range of conditions. This discovery allows the use of deconvolution techniques to sharpen images after using the frame selection technique. A new post-detection processing method, frame weighting, is investigated and may offer some improvement for dim objects during poor atmospheric seeing. Frame selection is demonstrated for the first time on actual imagery from an adaptive optics system. Data analysis indicates that signal-to-noise ratio improvements are degraded for exposure times longer than that allowed to 'freeze' individual realizations of the turbulence effects.
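
    The core of frame selection is simple to sketch: score each short-exposure frame with a quality metric, keep the best fraction, recenter, and average. The total-squared-intensity metric and the peak-based recentering below are simplifying assumptions, not the thesis's exact procedure.

```python
import numpy as np

def frame_select(frames, keep_frac=0.2):
    frames = np.asarray(frames, dtype=float)
    scores = (frames ** 2).sum(axis=(1, 2))       # simple sharpness proxy
    n_keep = max(1, int(keep_frac * len(frames)))
    best = np.argsort(scores)[::-1][:n_keep]      # indices of the best frames
    out = np.zeros_like(frames[0])
    cy, cx = out.shape[0] // 2, out.shape[1] // 2
    for i in best:
        # crude tilt compensation: shift the brightest pixel to the center
        py, px = np.unravel_index(np.argmax(frames[i]), frames[i].shape)
        out += np.roll(frames[i], (cy - py, cx - px), axis=(0, 1))
    return out / n_keep
```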

  12. Statistical learning and adaptive decision-making underlie human response time variability in inhibitory control

    PubMed Central

    Ma, Ning; Yu, Angela J.

    2015-01-01

    Response time (RT) is an oft-reported behavioral measure in psychological and neurocognitive experiments, but the high level of observed trial-to-trial variability in this measure has often limited its usefulness. Here, we combine computational modeling and psychophysics to examine the hypothesis that fluctuations in this noisy measure reflect dynamic computations in human statistical learning and corresponding cognitive adjustments. We present data from the stop-signal task (SST), in which subjects respond to a go stimulus on each trial, unless instructed not to by a subsequent, infrequently presented stop signal. We model across-trial learning of stop signal frequency, P(stop), and stop-signal onset time, SSD (stop-signal delay), with a Bayesian hidden Markov model, and within-trial decision-making with an optimal stochastic control model. The combined model predicts that RT should increase with both expected P(stop) and SSD. The human behavioral data (n = 20) bear out this prediction, showing P(stop) and SSD both to be significant, independent predictors of RT, with P(stop) being a more prominent predictor in 75% of the subjects, and SSD being more prominent in the remaining 25%. The results demonstrate that humans indeed readily internalize environmental statistics and adjust their cognitive/behavioral strategy accordingly, and that subtle patterns in RT variability can serve as a valuable tool for validating models of statistical learning and decision-making. More broadly, the modeling tools presented in this work can be generalized to a large body of behavioral paradigms, in order to extract insights about cognitive and neural processing from apparently quite noisy behavioral measures. We also discuss how this behaviorally validated model can then be used to conduct model-based analysis of neural data, in order to help identify specific brain areas for representing and encoding key computational quantities in learning and decision-making. PMID:26321966
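
    The flavor of the across-trial learning can be sketched with a simple exponential-forgetting beta-Bernoulli update, a deliberately simplified stand-in for the paper's Bayesian hidden Markov model:

        import numpy as np

        def predicted_pstop(stop_trials, alpha0=1.0, beta0=1.0, decay=0.9):
            # Trial-by-trial estimate of P(stop) with exponential forgetting:
            # counts decay each trial so the estimate tracks a drifting
            # stop-signal frequency.  Returns the prediction made *before*
            # each trial, which is what should modulate that trial's RT.
            a, b = alpha0, beta0
            predictions = []
            for is_stop in stop_trials:
                predictions.append(a / (a + b))   # expected P(stop) before the trial
                a = decay * a + is_stop           # then update on the outcome
                b = decay * b + (1 - is_stop)
            return np.array(predictions)

        # Per the paper's prediction, go-trial RTs should correlate positively
        # with this expectation: slower responses when a stop signal is likelier.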

  13. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
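
    A sketch of the leveling step, assuming a robust global polynomial-surface fit by iteratively reweighted least squares with Tukey biweights in place of the paper's local regression (all names and constants are illustrative):

        import numpy as np

        def robust_level(image, degree=2, n_iter=10, c=4.685):
            # Fit a smooth 2-d trend while downweighting features and outliers,
            # then subtract it to level the scan.
            h, w = image.shape
            yy, xx = np.mgrid[0:h, 0:w]
            x = xx.ravel() / w
            y = yy.ravel() / h
            z = image.ravel().astype(float)
            # Design matrix of monomials x^i * y^j with i + j <= degree.
            cols = [x**i * y**j for i in range(degree + 1)
                    for j in range(degree + 1 - i)]
            A = np.stack(cols, axis=1)
            wgt = np.ones_like(z)
            for _ in range(n_iter):
                sw = np.sqrt(wgt)
                coef, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
                r = z - A @ coef
                s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale
                u = np.clip(r / (c * s), -1.0, 1.0)
                wgt = (1.0 - u**2)**2            # Tukey biweight
            return image - (A @ coef).reshape(h, w)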

  14. The Use of Statistical Process Control-Charts for Person-Fit Analysis on Computerized Adaptive Testing. LSAC Research Report Series.

    ERIC Educational Resources Information Center

    Meijer, Rob R.; van Krimpen-Stoop, Edith M. L. A.

    In this study a cumulative-sum (CUSUM) procedure from the theory of Statistical Process Control was modified and applied in the context of person-fit analysis in a computerized adaptive testing (CAT) environment. Six person-fit statistics were proposed using the CUSUM procedure, and three of them could be used to investigate the CAT in online test…
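
    A minimal sketch of one possible CUSUM person-fit chart built from item-level residuals; the residual form and control limit are illustrative assumptions, not the six statistics actually proposed in the report:

        import numpy as np

        def cusum_person_fit(responses, p_expected, bound=1.0):
            # responses: 0/1 scored item responses in administration order.
            # p_expected: model-implied success probabilities for this examinee
            # (e.g., from a fitted IRT model; supplied by the caller).
            c_plus, c_minus = 0.0, 0.0
            path = []
            for u, p in zip(responses, p_expected):
                resid = u - p                         # item-level residual
                c_plus = max(0.0, c_plus + resid)     # drift toward overperformance
                c_minus = min(0.0, c_minus + resid)   # drift toward underperformance
                path.append((c_plus, c_minus))
            flagged = any(cp > bound or cm < -bound for cp, cm in path)
            return path, flagged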

  15. Dual adaptive statistical approach for quantitative noise reduction in photon-counting medical imaging: application to nuclear medicine images

    NASA Astrophysics Data System (ADS)

    Hannequin, Pascal Paul

    2015-06-01

    Noise reduction in photon-counting images remains challenging, especially at low count levels. We have developed an original procedure which associates two complementary filters using a Wiener-derived approach. This approach combines two statistically adaptive filters into a dual-weighted (DW) filter. The first one, a statistically weighted adaptive (SWA) filter, replaces the central pixel of a sliding window with a statistically weighted sum of its neighbors. The second one, a statistical and heuristic noise extraction (extended) (SHINE-Ext) filter, performs a discrete cosine transformation (DCT) using sliding blocks. Each block is reconstructed using its significant components, which are selected using tests derived from multiple linear regression (MLR). The two filters are weighted according to Wiener theory. This approach has been validated using a numerical phantom and a real planar Jaszczak phantom. It has also been illustrated using planar bone scintigraphy and myocardial single-photon emission computed tomography (SPECT) data. The performance of the filters has been tested using the mean normalized absolute error (MNAE) between the filtered images and the reference noiseless or high-count images. Results show that the proposed filters quantitatively decrease the MNAE in the images and thus increase the signal-to-noise ratio (SNR). This allows one to work with lower-count images. The SHINE-Ext filter is well suited to large images and low-variance areas. DW filtering is efficient for small images and in high-variance areas. The relative proportion of eliminated noise generally decreases as the count level increases. In practice, SHINE filtering alone is recommended when pixel spacing is less than one-quarter of the effective resolution of the system and/or the size of the objects of interest. It can also be used when the practical interest of high frequencies is low. In other cases, DW filtering will be preferable. The proposed filters have been applied to nuclear medicine images.
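
    A sketch of the block-DCT step in the spirit of SHINE-Ext, with a fixed keep-fraction standing in for the paper's MLR-derived significance tests (and assuming image dimensions divisible by the block size; edge strips are left untouched):

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_block_filter(image, block=8, keep=0.1):
            # Rebuild each block from its largest-magnitude DCT coefficients.
            out = image.astype(float).copy()
            h, w = image.shape
            for i in range(0, h - block + 1, block):
                for j in range(0, w - block + 1, block):
                    c = dctn(out[i:i+block, j:j+block], norm='ortho')
                    cutoff = np.quantile(np.abs(c), 1.0 - keep)
                    c[np.abs(c) < cutoff] = 0.0   # drop "non-significant" terms
                    out[i:i+block, j:j+block] = idctn(c, norm='ortho')
            return out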

  16. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored using the Common Data File (CDF) format and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files.
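
    A sketch of emitting one Granule-style description with Python's standard library; the element names follow the SPASE pattern but are illustrative assumptions, not the exact schema ADAPT produces:

        import xml.etree.ElementTree as ET

        def make_granule(parent_id, granule_id, url, start, stop):
            # Build a minimal SPASE-like Granule record for one CDF file.
            spase = ET.Element('Spase')
            g = ET.SubElement(spase, 'Granule')
            ET.SubElement(g, 'ResourceID').text = granule_id
            ET.SubElement(g, 'ParentID').text = parent_id   # high-level description
            ET.SubElement(g, 'StartDate').text = start
            ET.SubElement(g, 'StopDate').text = stop
            src = ET.SubElement(g, 'Source')
            ET.SubElement(src, 'URL').text = url            # access URL for the file
            return ET.tostring(spase, encoding='unicode')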

  17. Particle System Based Adaptive Sampling on Spherical Parameter Space to Improve the MDL Method for Construction of Statistical Shape Models

    PubMed Central

    Zhou, Xiangrong; Hirano, Yasushi; Tachibana, Rie; Hara, Takeshi; Kido, Shoji; Fujita, Hiroshi

    2013-01-01

    Minimum description length (MDL) based group-wise registration was a state-of-the-art method to determine the corresponding points of 3D shapes for the construction of statistical shape models (SSMs). However, it suffered from the problem that determined corresponding points did not uniformly spread on original shapes, since corresponding points were obtained by uniformly sampling the aligned shape on the parameterized space of unit sphere. We proposed a particle-system based method to obtain adaptive sampling positions on the unit sphere to resolve this problem. Here, a set of particles was placed on the unit sphere to construct a particle system whose energy was related to the distortions of parameterized meshes. By minimizing this energy, each particle was moved on the unit sphere. When the system became steady, particles were treated as vertices to build a spherical mesh, which was then relaxed to slightly adjust vertices to obtain optimal sampling-positions. We used 47 cases of (left and right) lungs and 50 cases of livers, (left and right) kidneys, and spleens for evaluations. Experiments showed that the proposed method was able to resolve the problem of the original MDL method, and the proposed method performed better in the generalization and specificity tests. PMID:23861721
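
    A minimal sketch of the particle relaxation, using plain Coulomb-like repulsion on the unit sphere; in the paper the energy additionally encodes parameterization distortion, which is omitted here:

        import numpy as np

        def relax_on_sphere(n=100, n_iter=200, step=0.01, seed=None):
            # Spread n particles over the unit sphere by pairwise repulsion,
            # re-projecting onto the sphere after each move.
            rng = np.random.default_rng(seed)
            p = rng.normal(size=(n, 3))
            p /= np.linalg.norm(p, axis=1, keepdims=True)
            for _ in range(n_iter):
                diff = p[:, None, :] - p[None, :, :]            # pairwise vectors
                d2 = np.sum(diff**2, axis=-1) + np.eye(n)       # avoid self-division
                force = np.sum(diff / d2[..., None]**1.5, axis=1)
                p += step * force
                p /= np.linalg.norm(p, axis=1, keepdims=True)   # back to the sphere
            return p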

  18. US ITER Moving Forward

    ScienceCinema

    US ITER / ORNL

    2016-07-12

    US ITER Project Manager Ned Sauthoff, joined by Wayne Reiersen, Team Leader Magnet Systems, and Jan Berry, Team Leader Tokamak Cooling System, discuss the U.S.'s role in the ITER international collaboration.

  19. Iterative consolidation of unorganized point clouds.

    PubMed

    Liu, Shengjun; Chan, Kwan-Chung; Wang, Charlie C L

    2012-01-01

    Unorganized point clouds obtained from 3D shape acquisition devices usually present noise, outliers, and nonuniformities. The proposed framework consolidates unorganized points through an iterative procedure of interlaced downsampling and upsampling. Selection operations remove outliers while preserving geometric details. The framework improves the uniformity of points by moving the downsampled particles and refining point samples. Surface extrapolation fills missed regions. Moreover, an adaptive sampling strategy speeds up the iterations. Experimental results demonstrate the framework's effectiveness.
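
    One of the selection operations can be sketched as statistical outlier removal on nearest-neighbor distances (brute-force distances and illustrative thresholds, not the framework's actual operator):

        import numpy as np

        def remove_outliers(points, k=8, n_std=2.0):
            # Drop points whose mean distance to their k nearest neighbors
            # is unusually large compared with the rest of the cloud.
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            d.sort(axis=1)
            mean_knn = d[:, 1:k+1].mean(axis=1)   # column 0 is the self-distance
            keep = mean_knn < mean_knn.mean() + n_std * mean_knn.std()
            return points[keep]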

  20. Statistical adaptation of ALADIN RCM outputs over the French alpine massifs - application to future climate and snow cover

    NASA Astrophysics Data System (ADS)

    Rousselot, M.; Durand, Y.; Giraud, G.; Mérindol, L.; Dombrowski-Etchevers, I.; Déqué, M.

    2012-01-01

    In this study, snowpack scenarios are modelled across the French Alps using dynamically downscaled variables from the ALADIN Regional Climate Model (RCM) for the control period (1961-1990) and three emission scenarios (SRES B1, A1B and A2) for the mid- and late 21st century (2021-2050 and 2071-2100). These variables are statistically adapted to the different elevations, aspects and slopes of the alpine massifs. For this purpose, we use a simple analogue criterion with ERA40 series as well as an existing detailed climatology of the French Alps (Durand et al., 2009a) that provides complete meteorological fields from the SAFRAN analysis model. The resulting scenarios of precipitation, temperature, wind, cloudiness, longwave and shortwave radiation, and humidity are used to run the physical snow model CROCUS and simulate snowpack evolution over the massifs studied. The seasonal and regional characteristics of the simulated climate and snow cover changes are explored, as is the influence of the scenarios on these changes. Preliminary results suggest that the Snow Water Equivalent (SWE) of the snowpack will decrease dramatically in the next century, especially in the Southern and Extreme Southern parts of the Alps. This decrease seems to result primarily from a general warming throughout the year, and possibly a deficit of precipitation in the autumn. The magnitude of the snow cover decline follows a marked altitudinal gradient, with the highest altitudes being less exposed to climate change. Scenario A2, with its high concentrations of greenhouse gases, results in a SWE reduction roughly twice as large as in the low-emission scenario B1 by the end of the century. This study needs to be completed using simulations from other RCMs, since a multi-model approach is essential for uncertainty analysis.

  1. Statistical adaptation of ALADIN RCM outputs over the French Alps - application to future climate and snow cover

    NASA Astrophysics Data System (ADS)

    Rousselot, M.; Durand, Y.; Giraud, G.; Mérindol, L.; Dombrowski-Etchevers, I.; Déqué, M.; Castebrunet, H.

    2012-07-01

    In this study, snowpack scenarios are modelled across the French Alps using dynamically downscaled variables from the ALADIN Regional Climate Model (RCM) for the control period (1961-1990) and three emission scenarios (SRES B1, A1B and A2) for the mid- and late 21st century (2021-2050 and 2071-2100). These variables are statistically adapted to the different elevations, aspects and slopes of the Alpine massifs. For this purpose, we use a simple analogue criterion with ERA40 series as well as an existing detailed climatology of the French Alps (Durand et al., 2009a) that provides complete meteorological fields from the SAFRAN analysis model. The resulting scenarios of precipitation, temperature, wind, cloudiness, longwave and shortwave radiation, and humidity are used to run the physical snow model CROCUS and simulate snowpack evolution over the massifs studied. The seasonal and regional characteristics of the simulated climate and snow cover changes are explored, as is the influence of the scenarios on these changes. Preliminary results suggest that the snow water equivalent (SWE) of the snowpack will decrease dramatically in the next century, especially in the Southern and Extreme Southern parts of the Alps. This decrease seems to result primarily from a general warming throughout the year, and possibly a deficit of precipitation in the autumn. The magnitude of the snow cover decline follows a marked altitudinal gradient, with the highest altitudes being less exposed to climate change. Scenario A2, with its high concentrations of greenhouse gases, results in a SWE reduction roughly twice as large as in the low-emission scenario B1 by the end of the century. This study needs to be completed using simulations from other RCMs, since a multi-model approach is essential for uncertainty analysis.

  3. Matching pollution with adaptive changes in mangrove plants by multivariate statistics. A case study, Rhizophora mangle from four neotropical mangroves in Brazil.

    PubMed

    Souza, Iara da Costa; Morozesk, Mariana; Duarte, Ian Drumond; Bonomo, Marina Marques; Rocha, Lívia Dorsch; Furlan, Larissa Maria; Arrivabene, Hiulana Pereira; Monferrán, Magdalena Victoria; Matsumoto, Silvia Tamie; Milanez, Camilla Rozindo Dias; Wunderlin, Daniel Alberto; Fernandes, Marisa Narciso

    2014-08-01

    Roots of mangrove trees have an important role in depurating water and sediments by retaining metals that may accumulate in different plant tissues, affecting physiological processes and anatomy. The present study aimed to evaluate adaptive changes in roots of Rhizophora mangle in response to different levels of chemical elements (metals/metalloids) in interstitial water and sediments from four neotropical mangroves in Brazil. What sets this study apart from other studies is that we not only investigate adaptive modifications in R. mangle but also changes in the environments where this plant grows, evaluating the correspondence between physical, chemical and biological issues by a combined set of multivariate statistical methods (pattern recognition). Thus, we looked to match changes in the environment with adaptations in plants. Multivariate statistics highlighted that the lignified periderm and the air gaps are directly related to environmental contamination. Current results provide new evidence of root anatomical strategies to deal with contaminated environments. Multivariate statistics greatly contributes to extrapolating results from the complex data matrices obtained when analyzing environmental issues, pointing out parameters involved in environmental changes and also evidencing the adaptive response of the exposed biota. PMID:24875920

  4. Comparison of Iterative and Non-Iterative Strain-Gage Balance Load Calculation Methods

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2010-01-01

    The accuracy of iterative and non-iterative strain-gage balance load calculation methods was compared using data from the calibration of a force balance. Two iterative and one non-iterative method were investigated. In addition, transformations were applied to balance loads in order to process the calibration data in both direct read and force balance format. NASA's regression model optimization tool BALFIT was used to generate optimized regression models of the calibration data for each of the three load calculation methods. This approach made sure that the selected regression models met strict statistical quality requirements. The comparison of the standard deviation of the load residuals showed that the first iterative method may be applied to data in both the direct read and force balance format. The second iterative method, on the other hand, implicitly assumes that the primary gage sensitivities of all balance gages exist. Therefore, the second iterative method only works if the given balance data is processed in force balance format. The calibration data set was also processed using the non-iterative method. Standard deviations of the load residuals for the three load calculation methods were compared. Overall, the standard deviations show very good agreement. The load prediction accuracies of the three methods appear to be compatible as long as regression models used to analyze the calibration data meet strict statistical quality requirements. Recent improvements of the regression model optimization tool BALFIT are also discussed in the paper.

  5. Designing Diagnostics to Survive in ITER

    NASA Astrophysics Data System (ADS)

    Watts, Christopher; ITER Team

    2014-10-01

    Adapting diagnostics to withstand the incredibly harsh environment of the ITER D-T plasma is a formidable engineering task. Hindrances include not only the nuclear environment, but also the high radiative heat fluxes, high particle fluxes and stray ECH radiation. Strategies to mitigate the impact of these run the gamut from shielding, through recessing, through appropriate materials selection, to refurbishment. Examples include the Langmuir probe system, where individual probes are protected by passive heat shields; retroreflectors recessed into the tokamak first wall in deep, baffled tunnels; plasma mirror cleaning systems; electronics components like piezo crystals and x-ray detectors vetted for the nuclear environment. These and other ITER diagnostic system designs will be highlighted to emphasize their strategies for dealing with the ITER environment. *The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.

  6. ITER test programme

    NASA Astrophysics Data System (ADS)

    Abdou, M.; Baker, C.; Casini, G.

    1991-07-01

    The International Thermonuclear Experimental Reactor (ITER) was designed to operate in two phases. The first phase, which lasts for 6 years, is devoted to machine checkout and physics testing. The second phase lasts for 8 years and is devoted primarily to technology testing. This report describes the technology test program development for ITER, the ancillary equipment outside the torus necessary to support the test modules, the international collaboration aspects of conducting the test program on ITER, the requirements on the machine major parameters and the R and D program required to develop the test modules for testing in ITER.

  7. Iteration, Not Induction

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2009-01-01

    The main purpose of this note is to present and justify proof via iteration as an intuitive, creative and empowering method that is often available and preferable as an alternative to proofs via either mathematical induction or the well-ordering principle. The method of iteration depends only on the fact that any strictly decreasing sequence of…

  8. Reducing the latency of the Fractal Iterative Method to half an iteration

    NASA Astrophysics Data System (ADS)

    Béchet, Clémentine; Tallon, Michel

    2013-12-01

    The fractal iterative method for atmospheric tomography (FRiM-3D) has been introduced to solve wavefront reconstruction at the dimensions of an ELT with a low computational cost. Previous studies reported the requirement of only 3 iterations of the algorithm in order to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one is completed. Iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach to avoid iterations in the computation of the commands with FRiM-3D, thus allowing low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of the "warm-start" strategy in adaptive optics. To our knowledge, this particular way to use the "warm-start" has not been reported before. Furthermore, by removing the requirement of iterating to compute the commands, the computational cost of the reconstruction with FRiM-3D can be simplified and reduced to at most half the computational cost of a classical iteration. Thanks to simulations of both single-conjugate and multi-conjugate AO for the E-ELT, with FRiM-3D on the Octopus ESO simulator, we demonstrate the benefit of this approach. We finally enhance the robustness of this new implementation with respect to increasing measurement noise, wind speed and even modeling errors.
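
    The warm-start idea can be sketched with an ordinary conjugate-gradient solver: in a closed AO loop the solution drifts slowly between frames, so the previous command vector seeds the next solve and a single iteration per frame may suffice (a toy SPD system, not FRiM-3D itself):

        import numpy as np
        from scipy.sparse.linalg import cg

        rng = np.random.default_rng(0)
        M = rng.normal(size=(200, 200))
        A = M @ M.T + 200 * np.eye(200)       # stand-in SPD reconstruction matrix
        x_prev = np.zeros(200)
        for t in range(10):
            # Slowly drifting right-hand side: the new solution stays close to
            # the previous one, which is exactly when warm starting pays off.
            b_t = A @ x_prev + 0.01 * rng.normal(size=200)
            x_prev, _ = cg(A, b_t, x0=x_prev, maxiter=1)   # one iteration per frame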

  9. Unsupervised iterative detection of land mines in highly cluttered environments.

    PubMed

    Batman, Sinan; Goutsias, John

    2003-01-01

    An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, a completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.

  10. Energy confinement scaling and the extrapolation to ITER

    SciTech Connect

    1997-11-01

    The fusion performance of ITER is predicted using three different techniques: statistical analysis of the global energy confinement data, a dimensionless physics parameter similarity method, and full 1-D modeling of the plasma profiles. Although the three methods give overlapping predictions for the performance of ITER, the confidence interval of all of the techniques is still quite wide.

  11. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern in which a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module, which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
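
    A Python analogue of the same pattern, with a fixed timedelta standing in for a full iCal/RFC 2445 recurrence rule:

        from datetime import datetime, timedelta

        def datetime_iterator(start, end=None, step=timedelta(days=1)):
            # Lazy series of datetimes defined by a start, an optional end,
            # and a recurrence step, in the spirit of Iterator::DateTime.
            current = start
            while end is None or current <= end:
                yield current
                current += step

        # Usage: values are pulled on demand, as with the Perl iterator API.
        for stamp in datetime_iterator(datetime(2009, 1, 1), datetime(2009, 1, 5)):
            print(stamp.isoformat())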

  12. Diagnostics for ITER

    SciTech Connect

    Donne, A. J. H.; Hellermann, M. G. von; Barnsley, R.

    2008-10-22

    After an introduction into the specific challenges in the field of diagnostics for ITER (specifically high level of nuclear radiation, long pulses, high fluxes of particles to plasma facing components, need for reliability and robustness), an overview will be given of the spectroscopic diagnostics foreseen for ITER. The paper will describe both active neutral-beam based diagnostics as well as passive spectroscopic diagnostics operating in the visible, ultra-violet and x-ray spectral regions.

  13. ITER convertible blanket evaluation

    SciTech Connect

    Wong, C.P.C.; Cheng, E.

    1995-09-01

    Proposed International Thermonuclear Experimental Reactor (ITER) convertible blankets were reviewed. Key design difficulties were identified. A new particle filter concept is introduced and key performance parameters estimated. Results show that this particle filter concept can satisfy all of the convertible blanket design requirements except the generic issue of Be blanket lifetime. If the convertible blanket is an acceptable approach for ITER operation, this particle filter option should be a strong candidate.

  14. Fusion Physics Toward ITER

    NASA Astrophysics Data System (ADS)

    Stambaugh, R. D.

    2006-04-01

    Stars are powered by fusion, the energy released by fusing together light nuclei, using gravitational confinement of plasma. Fusion on earth will be done in a 100 million degree plasma made of deuterium and tritium and confined by magnetic fields or inertia. The worldwide fusion research community will construct ITER, the first experiment that will burn a DT plasma by copious fusion reactions. ITER's nominal goal is to create 500 MW of fusion power. An energy gain of 10 will mean the plasma is dominantly self-heated by the fusion-produced alpha particles. ITER's all superconducting magnet technology and steady-state heat removal technology will enable nominal 400 s pulses to allow the study of burning plasmas on the longest intrinsic timescale of the confined plasma - diffusive redistribution of the electrical currents in the plasma. The advances in magnetic confinement physics that have led to this opportunity will be described, as well as the research opportunities afforded by ITER. The physics of confining stable plasmas and heating them will produce the high gain state in ITER. Sustained burn will come from the physics of controlling currents in plasmas and how the hot plasma is interfaced to its room temperature surroundings. ITER will provide our first experience with how fusion plasma self-heating will profoundly affect the complex, interlinked physical processes that occur in confined plasmas.

  15. Toward Construction of ITER

    NASA Astrophysics Data System (ADS)

    Shimomura, Yasuo

    The ITER Project has developed significantly in the past years in preparation for its construction. The ITER Negotiators have developed a draft Joint Implementation Agreement (JIA), ready for completion following the nomination of the Project’s Director General (DG). The ITER International Team and Participant Teams have continued technical and organizational preparations. The actual construction will be able to start immediately after the international ITER organization is established, following signature of the JIA. The Project is now strongly supported by all the participants as well as by the scientific community, with the final high-level negotiations, focused on siting and the concluding details of cost sharing, having started in December 2003. The EU, with Cadarache, and Japan, with Rokkasho, have both promised large contributions to the project to strongly support their construction site proposals. The extent to which they both wish to host the ITER facility is such that large contributions to a broader collaboration among the Parties are also proposed by them. This covers complementary activities to help accelerate fusion development towards a viable power source, and may also allow the Participants to reach a conclusion on ITER siting.

  16. Adaptive Management of Ecosystems

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management. As such, management may be treated as experiment, with replication, or management may be conducted in an iterative manner. Although the concept has resonated with many...

  17. Lanczos iterated time-reversal.

    PubMed

    Oberai, Assad A; Feijóo, Gonzalo R; Barbone, Paul E

    2009-02-01

    A new iterative time-reversal algorithm capable of identifying and focusing on multiple scatterers in a relatively small number of iterations is developed. It is recognized that the traditional iterated time-reversal method is based on utilizing power iterations to determine the dominant eigenpairs of the time-reversal operator. The convergence properties of these iterations are known to be suboptimal. Motivated by this, a new method based on Lanczos iterations is developed. In several illustrative examples it is demonstrated that, for the same number of transmitted and received signals, the Lanczos-based approach is substantially more accurate. PMID:19206835
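
    The contrast can be sketched with NumPy/SciPy, where eigsh wraps a Lanczos-type method; the random symmetric matrix below is only a stand-in for the time-reversal operator:

        import numpy as np
        from scipy.sparse.linalg import eigsh

        rng = np.random.default_rng(1)
        K = rng.normal(size=(300, 300))
        T = K @ K.T          # Hermitian stand-in for the time-reversal operator

        # Power iteration: converges to the single dominant eigenvector at a
        # rate set by the eigenvalue gap.
        v = rng.normal(size=300)
        for _ in range(50):
            v = T @ v
            v /= np.linalg.norm(v)

        # Lanczos (via eigsh) extracts several dominant eigenpairs -- one per
        # well-resolved scatterer -- from repeated operator applications,
        # typically far more accurately for the same work.
        vals, vecs = eigsh(T, k=3, which='LM')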

  18. Robust iterative methods

    SciTech Connect

    Saadd, Y.

    1994-12-31

    In spite of the tremendous progress achieved in recent years in the general area of iterative solution techniques, there are still a few obstacles to the acceptance of iterative methods in a number of applications. These applications give rise to very indefinite or highly ill-conditioned non Hermitian matrices. Trying to solve these systems with the simple-minded standard preconditioned Krylov subspace methods can be a frustrating experience. With the mathematical and physical models becoming more sophisticated, the typical linear systems which we encounter today are far more difficult to solve than those of just a few years ago. This trend is likely to accentuate. This workshop will discuss (1) these applications and the types of problems that they give rise to; and (2) recent progress in solving these problems with iterative methods. The workshop will end with a hopefully stimulating panel discussion with the speakers.

  19. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, produce modified schedules quickly, and exhibit 'anytime' behavior. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. We also show the anytime characteristics of the system. These experiments were performed within the domain of Space Shuttle ground processing.
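
    A skeleton of the repair loop, with caller-supplied violation, repair and cost functions; the names and the acceptance rule are illustrative, not the paper's exact heuristics:

        import random

        def iterative_repair(schedule, violations, repair, cost, max_iter=1000):
            # Pick a violated constraint, apply a local repair, keep the change
            # if it does not worsen the cost (which can fold in perturbation
            # penalties), and stop when conflict-free or out of iterations.
            # Returning the best schedule so far gives the 'anytime' behavior.
            best = schedule
            for _ in range(max_iter):
                conflicts = violations(best)
                if not conflicts:
                    break                         # conflict-free schedule found
                candidate = repair(best, random.choice(conflicts))
                if cost(candidate) <= cost(best):
                    best = candidate
            return best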

  20. From disease ontology to disease-ontology lite: statistical methods to adapt a general-purpose ontology for the test of gene-ontology associations.

    PubMed

    Du, Pan; Feng, Gang; Flatow, Jared; Song, Jie; Holko, Michelle; Kibbe, Warren A; Lin, Simon M

    2009-06-15

    Subjective methods have been reported to adapt a general-purpose ontology for a specific application. For example, Gene Ontology (GO) Slim was created from GO to generate a highly aggregated report of the human-genome annotation. We propose statistical methods to adapt the general-purpose OBO Foundry Disease Ontology (DO) for the identification of gene-disease associations. Thus, we need a simplified definition of disease categories derived from implicated genes. On the basis of the assumption that DO terms having similar associated genes are closely related, we group the DO terms based on the similarity of gene-to-DO mapping profiles. Two types of binary distance metrics are defined to measure the overall and subset similarity between DO terms. A compactness-scalable fuzzy clustering method is then applied to group similar DO terms. To reduce false clustering, the semantic similarities between DO terms are also used to constrain clustering results. As such, the DO terms are aggregated and the redundant DO terms are largely removed. Using these methods, we constructed a simplified vocabulary list from the DO called Disease Ontology Lite (DOLite). We demonstrated that DOLite results in more interpretable results than DO for gene-disease association tests. The resultant DOLite has been used in the Functional Disease Ontology (FunDO) Web application at http://www.projects.bioinformatics.northwestern.edu/fundo.
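
    Plausible forms of the two binary metrics, sketched as a Jaccard-style distance and an overlap-coefficient distance over each term's gene set (the paper's exact definitions may differ):

        def overall_distance(genes_a, genes_b):
            # Jaccard-style distance between the gene sets of two DO terms.
            a, b = set(genes_a), set(genes_b)
            return 1.0 - len(a & b) / len(a | b) if a | b else 0.0

        def subset_distance(genes_a, genes_b):
            # Overlap-coefficient distance: small when one term's genes are
            # nearly a subset of the other's (the "subset similarity" case).
            a, b = set(genes_a), set(genes_b)
            return 1.0 - len(a & b) / min(len(a), len(b)) if a and b else 1.0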

  1. ITER Fusion Energy

    ScienceCinema

    Dr. Norbert Holtkamp

    2016-07-12

    ITER (in Latin “the way”) is designed to demonstrate the scientific and technological feasibility of fusion energy. Fusion is the process by which two light atomic nuclei combine to form a heavier one and thus release energy. In the fusion process two isotopes of hydrogen – deuterium and tritium – fuse together to form a helium atom and a neutron. Thus fusion could provide large-scale energy production without greenhouse effects; essentially limitless fuel would be available all over the world. The principal goals of ITER are to generate 500 megawatts of fusion power for periods of 300 to 500 seconds with a fusion power multiplication factor, Q, of at least 10: Q ≥ 10 (input power 50 MW / output power 500 MW). The ITER Organization was officially established in Cadarache, France, on 24 October 2007. The seven members engaged in the project – China, the European Union, India, Japan, Korea, Russia and the United States – represent more than half the world’s population. The costs for ITER are shared by the seven members. The cost for the construction will be approximately 5.5 billion Euros; a similar amount is foreseen for the twenty-year phase of operation and the subsequent decommissioning.

  2. An Iterative Angle Trisection

    ERIC Educational Resources Information Center

    Muench, Donald L.

    2007-01-01

    The problem of angle trisection continues to fascinate people even though it has long been known that it can't be done with straightedge and compass alone. However, for practical purposes, a good iterative procedure can get you as close as you want. In this note, we present such a procedure. Using only straightedge and compass, our procedure…
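
    One standard iterative trisection, not necessarily the note's procedure, uses the identity 1/3 = 1/4 + 1/16 + 1/64 + ...: each round performs two bisections (a quartering, constructible with straightedge and compass) and accumulates a quarter of the remaining angle:

        def trisect(angle, n_iter=20):
            # Converges geometrically: the error after n rounds is angle / 4**n.
            result, remainder = 0.0, angle
            for _ in range(n_iter):
                remainder /= 4.0          # two successive bisections
                result += remainder       # accumulate angle/4, angle/16, ...
            return result

        print(trisect(90.0))              # -> 29.99999999997..., i.e. ~30 degrees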

  3. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  4. ITER global stability limits

    SciTech Connect

    Hogan, J.T.; Uckan, N.A.

    1990-01-01

    The MHD stability limits to the ITER operational space have been examined with the PEST ideal stability code. Constraints on ITER operation have been examined for the nominal operational scenarios and for possible design variants. Rather than rely on evaluation of a relatively small number of sample cases, the approach has been to construct an approximation to the overall operational space, and to compare this with the observed limits in high-β tokamaks. An extensive database with ~20,000 stability results has been compiled for use by the ITER design team. Results from these studies show that the design values of the Troyon factor (g ~ 2.5 for ignition studies, and g ~ 3 for the technology phase), which are based on present experiments, are also expected to be attainable for ITER conditions, for which the configuration and wall-stabilisation environment differ from those in present experiments. Strongly peaked pressure profiles lead to degraded high-β performance. Values of g ~ 4 are found for higher safety factor (q_ψ ≤ 4) than that of the present design (q_ψ ~ 3). Profiles with q(0) < 1 are shown to give g ~ 2.5, if the current density profile provides optimum shear. The overall operational spaces are presented for g-q_ψ, q_ψ-l_i, q-α_p and l_i-q_ψ.

  5. Adaptive management

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  6. Parallel iterative methods for sparse linear and nonlinear equations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    As three-dimensional models are gaining importance, iterative methods will become almost mandatory. Among these, preconditioned Krylov subspace methods have been viewed as the most efficient and reliable for solving linear as well as nonlinear systems of equations. Several different approaches have been taken to adapt iterative methods for supercomputers. Some of these approaches are discussed, and the methods that deal more specifically with general unstructured sparse matrices, such as those arising from finite element methods, are emphasized.

  7. Neutron activation for ITER

    SciTech Connect

    Barnes, C.W.; Loughlin, M.J.; Nishitani, Takeo

    1996-04-29

    There are three primary goals for the Neutron Activation system for ITER: maintain a robust relative measure of fusion power with stability and high dynamic range (7 orders of magnitude); allow an absolute calibration of fusion power (energy); and provide a flexible and reliable system for materials testing. The nature of the activation technique is such that stability and high dynamic range can be intrinsic properties of the system. It has also been the technique that demonstrated (on JET and TFTR) the highest-accuracy neutron measurements in DT operation. Since the gamma-ray detectors are not located on the tokamak and are therefore amenable to accurate characterization, and if material foils are placed very close to the ITER plasma with minimum scattering or attenuation, high overall accuracy in the fusion energy production (7-10%) should be achievable on ITER. In this paper, a conceptual design is presented. A system is shown to be capable of meeting these three goals, although detailed design issues remain to be solved.

  8. F-8C adaptive control law refinement and software development

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Stein, G.

    1981-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
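
    A toy stand-in for the parallel-channel scheme: a bank of scalar Kalman filters, one per candidate parameter value, scored by accumulated innovation log-likelihood (all names, dynamics and noise settings are illustrative, not the F-8C design values):

        import numpy as np

        def mmae_estimate(y, candidates, q=0.01, r=0.1):
            # One scalar Kalman filter per candidate AR(1) coefficient a; the
            # best-scoring channel gives the estimate -- no iterative search.
            logL = np.zeros(len(candidates))
            x = np.zeros(len(candidates))       # state estimate per channel
            P = np.ones(len(candidates))        # estimate variance per channel
            for yk in y:
                for i, a in enumerate(candidates):
                    x_pred = a * x[i]
                    P_pred = a * a * P[i] + q
                    S = P_pred + r              # innovation variance (H = 1)
                    innov = yk - x_pred
                    logL[i] += -0.5 * (np.log(2 * np.pi * S) + innov**2 / S)
                    K = P_pred / S
                    x[i] = x_pred + K * innov
                    P[i] = (1 - K) * P_pred
            return candidates[int(np.argmax(logL))]

        # Simulate data from a = 0.8 and recover it from the filter bank.
        rng = np.random.default_rng(2)
        x_true, ys = 0.0, []
        for _ in range(500):
            x_true = 0.8 * x_true + rng.normal(scale=0.1)
            ys.append(x_true + rng.normal(scale=np.sqrt(0.1)))
        print(mmae_estimate(ys, [0.2, 0.5, 0.8, 0.95]))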

  9. Statistical Engineering in Air Traffic Management Research

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.

    2015-01-01

    NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.

  10. Impossible expectations: fMRI adaptation in the lateral occipital complex (LOC) is modulated by the statistical regularities of 3D structural information.

    PubMed

    Freud, Erez; Ganel, Tzvi; Avidan, Galia

    2015-11-15

    fMRI adaptation (fMRIa), the attenuation of fMRI signal which follows repeated presentation of a stimulus, is a well-documented phenomenon. Yet, the underlying neural mechanisms supporting this effect are not fully understood. Recently, short-term perceptual expectations, induced by specific experimental settings, were shown to play an important modulating role in fMRIa. Here we examined the role of long-term expectations, based on 3D structural statistical regularities, in the modulation of fMRIa. To this end, human participants underwent fMRI scanning while performing a same-different task on pairs of possible (regular, expected) objects and spatially impossible (irregular, unexpected) objects. We hypothesized that given the spatial irregularity of impossible objects in relation to real-world visual experience, the visual system would always generate a prediction which is biased to the possible version of the objects. Consistently, fMRIa effects in the lateral occipital cortex (LOC) were found for possible, but not for impossible objects. Additionally, in alternating trials the order of stimulus presentation modulated LOC activity. That is, reduced activation was observed in trials in which the impossible version of the object served as the prime object (i.e. first object) and was followed by the possible version compared to the reverse order. These results were also supported by the behavioral advantage observed for trials that were primed by possible objects. Together, these findings strongly emphasize the importance of perceptual expectations in object representation and provide novel evidence for the role of real-world statistical regularities in eliciting fMRIa.

  11. Synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON): A statistical model based iterative image reconstruction method to eliminate limited-view artifacts and to mitigate the temporal-average artifacts in time-resolved CT

    PubMed Central

    Chen, Guang-Hong; Li, Yinsheng

    2015-01-01

    Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial–temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial–temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial–temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm. Reconstruction errors and temporal fidelity
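
    The nuclear-norm regularizer enters such a solver through its proximal operator, singular value soft-thresholding, sketched below for a spatial-temporal matrix with one column per time frame; this illustrates the building block, not the full SMART-RECON algorithm:

        import numpy as np

        def svt(M, tau):
            # Proximal operator of tau * ||M||_* : soft-threshold the singular
            # values, shrinking the matrix toward low rank.  An iterative
            # solver would alternate this with a data-fidelity update.
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            s = np.maximum(s - tau, 0.0)
            return (U * s) @ Vt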

  12. Iterative PET Image Reconstruction Using Translation Invariant Wavelet Transform

    PubMed Central

    Zhou, Jian; Senhadji, Lotfi; Coatrieux, Jean-Louis; Luo, Limin

    2009-01-01

    The present work describes a Bayesian maximum a posteriori (MAP) method using a statistical multiscale wavelet prior model. Rather than using the orthogonal discrete wavelet transform (DWT), this prior is built on the translation invariant wavelet transform (TIWT). The statistical modeling of wavelet coefficients relies on the generalized Gaussian distribution. Image reconstruction is performed in spatial domain with a fast block sequential iteration algorithm. We study theoretically the TIWT MAP method by analyzing the Hessian of the prior function to provide some insights on noise and resolution properties of image reconstruction. We adapt the key concept of local shift invariance and explore how the TIWT MAP algorithm behaves with different scales. It is also shown that larger support wavelet filters do not offer better performance in contrast recovery studies. These theoretical developments are confirmed through simulation studies. The results show that the proposed method is more attractive than other MAP methods using either the conventional Gibbs prior or the DWT-based wavelet prior. PMID:21869846
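
    Translation invariance can be sketched by cycle spinning with PyWavelets: average orthogonal-DWT denoising over circular shifts, with plain soft-thresholding standing in for the paper's generalized Gaussian MAP prior (wavelet, level and threshold are illustrative):

        import numpy as np
        import pywt

        def tiwt_denoise(img, wavelet='db4', level=2, thresh=0.1, shifts=4):
            # Averaging the denoised result over shifts approximates
            # thresholding an undecimated (translation-invariant) transform.
            acc = np.zeros_like(img, dtype=float)
            for dx in range(shifts):
                for dy in range(shifts):
                    shifted = np.roll(np.roll(img, dx, axis=0), dy, axis=1).astype(float)
                    coeffs = pywt.wavedec2(shifted, wavelet, level=level)
                    new = [coeffs[0]] + [
                        tuple(pywt.threshold(d, thresh, mode='soft') for d in detail)
                        for detail in coeffs[1:]
                    ]
                    rec = pywt.waverec2(new, wavelet)[:img.shape[0], :img.shape[1]]
                    acc += np.roll(np.roll(rec, -dx, axis=0), -dy, axis=1)
            return acc / shifts**2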

  13. Experimental investigation of iterative reconstruction techniques for high resolution mammography

    NASA Astrophysics Data System (ADS)

    Vengrinovich, Valery L.; Zolotarev, Sergei A.; Linev, Vladimir N.

    2014-02-01

    The further development of new iterative reconstruction algorithms to improve the quality of three-dimensional breast images restored from incomplete and noisy mammograms is provided. The algebraic reconstruction method with simultaneous iterations, the Simultaneous Algebraic Reconstruction Technique (SART), and the iterative method of statistical reconstruction, Bayesian Iterative Reconstruction (BIR), are considered here as the preferred iterative methods for improving image quality. For faster processing we use the Graphics Processing Unit (GPU). Total Variation (TV) minimization is used as a priori support to regularize the iteration process and to reduce the level of noise in the reconstructed image. Preliminary results with physical phantoms show that all examined methods are capable of reconstructing structures layer by layer and of separating layers whose images overlap in the Z-direction. It was found that the traditional Shift-And-Add (SAA) tomosynthesis method is worse than the iterative methods SART and BIR in terms of suppression of anatomical noise and image blurring between adjacent layers. Despite the fact that the measured contrast-to-noise ratio in the presence of low-contrast internal structures is higher for the SAA tomosynthesis method than for the SART and BIR methods, its effectiveness in the presence of a structured background is rather poor. In our opinion, optimal results can be achieved using Bayesian iterative reconstruction (BIR).
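
    A minimal SART sweep for a dense, nonnegative system matrix (illustrative only; practical tomosynthesis uses sparse projectors, GPU kernels and the TV regularization described above):

        import numpy as np

        def sart(A, b, n_iter=20, relax=1.0):
            # One simultaneous update per sweep: residuals normalized by row
            # sums, back-projections by column sums (A assumed nonnegative,
            # as for ray-length system matrices).
            row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
            col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                residual = (b - A @ x) / row_sums
                x = x + relax * (A.T @ residual) / col_sums
            return x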

  14. High contrast laminography using iterative algorithms

    NASA Astrophysics Data System (ADS)

    Kroupa, M.; Jakubek, J.

    2011-01-01

    3D X-ray imaging of the internal structure of large flat objects is often complicated by limited access to all viewing angles or by extremely high absorption in certain directions; therefore the standard method of computed tomography (CT) fails. This problem can be solved by the method of laminography. During a laminographic measurement the imaging detector is placed close to the sample while the X-ray source irradiates both sample and detector at different angles. The application of the state-of-the-art pixel detector Medipix in laminography, together with adapted tomographic iterative algorithms for 3D reconstruction of the sample structure, has been investigated. Iterative algorithms such as EM (Expectation Maximization) and OSEM (Ordered Subset Expectation Maximization) improve the quality of the reconstruction and allow the inclusion of more complex physical models. In this contribution, results and proposed future approaches for resolution enhancement are presented.
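
    The multiplicative EM update at the core of EM/OSEM can be sketched as follows; OSEM applies the same update over ordered subsets of the rows to accelerate convergence (nonnegative A and b assumed):

        import numpy as np

        def mlem(A, b, n_iter=50):
            # MLEM update: x <- x * A^T(b / Ax) / A^T 1.
            sens = A.T @ np.ones(A.shape[0])       # sensitivity image A^T 1
            sens[sens == 0] = 1.0
            x = np.ones(A.shape[1])
            for _ in range(n_iter):
                proj = A @ x
                proj[proj == 0] = 1e-12            # guard against division by zero
                x = x * (A.T @ (b / proj)) / sens
            return x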

  15. Cosmic statistics of statistics

    NASA Astrophysics Data System (ADS)

    Szapudi, István; Colombi, Stéphane; Bernardeau, Francis

    1999-12-01

    The errors on statistics measured in finite galaxy catalogues are exhaustively investigated. The theory of errors on factorial moments by Szapudi & Colombi is applied to cumulants via a series expansion method. All results are subsequently extended to the weakly non-linear regime. Together with previous investigations this yields an analytic theory of the errors for moments and connected moments of counts in cells from highly non-linear to weakly non-linear scales. For non-linear functions of unbiased estimators, such as the cumulants, the phenomenon of cosmic bias is identified and computed. Since it is subdued by the cosmic errors in the range of applicability of the theory, correction for it is inconsequential. In addition, the method of Colombi, Szapudi & Szalay concerning sampling effects is generalized, adapting the theory for inhomogeneous galaxy catalogues. While previous work focused on the variance only, the present article calculates the cross-correlations between moments and connected moments as well for a statistically complete description. The final analytic formulae representing the full theory are explicit but somewhat complicated. Therefore we have made available a fortran program capable of calculating the described quantities numerically (for further details e-mail SC at colombi@iap.fr). An important special case is the evaluation of the errors on the two-point correlation function, for which this should be more accurate than any method put forward previously. This tool will be immensely useful in the future for assessing the precision of measurements from existing catalogues, as well as aiding the design of new galaxy surveys. To illustrate the applicability of the results and to explore the numerical aspects of the theory qualitatively and quantitatively, the errors and cross-correlations are predicted under a wide range of assumptions for the future Sloan Digital Sky Survey. The principal results concerning the cumulants ξ, Q3 and Q4 is that

  16. Quantum iterated function systems.

    PubMed

    Łoziński, Artur; Zyczkowski, Karol; Słomczyński, Wojciech

    2003-10-01

    An iterated function system (IFS) is defined by specifying a set of functions in a classical phase space, which act randomly on an initial point. In an analogous way, we define a quantum IFS (QIFS), where functions act randomly with prescribed probabilities in the Hilbert space. In a more general setting, a QIFS consists of completely positive maps acting in the space of density operators. This formalism is designed to describe certain problems of nonunitary quantum dynamics. We present exemplary classical IFSs, the invariant measure of which exhibits fractal structure, and study properties of the corresponding QIFSs and their invariant states.
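
    A classical IFS whose invariant measure is fractal can be sampled with the chaos game; the three contraction maps below generate a Sierpinski gasket:

        import numpy as np

        def chaos_game(maps, probs, n_points=100000, seed=None):
            # Apply a randomly chosen affine map (A, t) at each step; the
            # visited points approximate the IFS invariant measure.
            rng = np.random.default_rng(seed)
            x = np.zeros(2)
            pts = np.empty((n_points, 2))
            for k in range(n_points):
                A, t = maps[rng.choice(len(maps), p=probs)]
                x = A @ x + t
                pts[k] = x
            return pts

        # Three contractions toward the corners of a triangle.
        half = 0.5 * np.eye(2)
        maps = [(half, np.array([0.0, 0.0])),
                (half, np.array([0.5, 0.0])),
                (half, np.array([0.25, 0.5]))]
        pts = chaos_game(maps, [1/3, 1/3, 1/3], n_points=20000)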

  17. Iterative Magnetometer Calibration

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph

    2006-01-01

    This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.

  18. MLP iterative construction algorithm

    NASA Astrophysics Data System (ADS)

    Rathbun, Thomas F.; Rogers, Steven K.; DeSimio, Martin P.; Oxley, Mark E.

    1997-04-01

    The MLP Iterative Construction Algorithm (MICA) designs a Multi-Layer Perceptron (MLP) neural network as it trains. MICA adds Hidden Layer Nodes one at a time, separating classes on a pair-wise basis, until the data is projected into a linearly separable space by class. Then MICA trains the Output Layer Nodes, which results in an MLP that achieves 100% accuracy on the training data. MICA, like Backprop, produces an MLP that is a minimum mean squared error approximation of the Bayes optimal discriminant function. Moreover, MICA's training technique yields a novel feature selection technique and a hidden node pruning technique.

  19. A holistic strategy for adaptive land management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Adaptive management is widely applied to natural resources management. Adaptive management can be generally defined as an iterative decision-making process that incorporates formulation of management objectives, actions designed to address these objectives, monitoring of results, and repeated adapta...

  20. ECRH System For ITER

    SciTech Connect

    Darbos, C.; Henderson, M.; Gandini, F.; Albajar, F.; Bomcelli, T.; Heidinger, R.; Saibene, G.; Chavan, R.; Goodman, T.; Hogge, J. P.; Sauter, O.; Denisov, G.; Farina, D.; Kajiwara, K.; Kasugai, A.; Kobayashi, N.; Oda, Y.; Ramponi, G.

    2009-11-26

    A 26 MW Electron Cyclotron Heating and Current Drive (EC H and CD) system is to be installed for ITER. The main objectives are to provide start-up assist, central H and CD, and control of MHD activity. These are achieved by a combination of two types of launchers, one located in an equatorial port and the second type in four upper ports. The physics applications are partitioned between the two launchers, based on the deposition location and driven current profiles. The equatorial launcher (EL) will access from the plasma axis to mid radius with a relatively broad profile useful for central heating and current drive applications, while the upper launchers (ULs) will access roughly the outer half of the plasma radius with a very narrow peaked profile for the control of the Neoclassical Tearing Modes (NTM) and sawtooth oscillations. The EC power can be switched between launchers on a time scale as needed by the immediate physics requirements. A revision of the injection angles of all launchers is under consideration for increased EC physics capabilities while relaxing the engineering constraints of both the EL and ULs. A series of design reviews is being planned with the five parties (EU, IN, JA, RF, US) procuring the EC system, the EC community and the ITER Organization (IO). The review meetings qualify the design and provide an environment for enhancing performance while reducing costs, simplifying interfaces, and predicting technology upgrades and commercial availability. In parallel, the test programs for critical components are being supported by the IO and performed by the Domestic Agencies (DAs) to minimize risks. The wide participation of the DAs provides a broad representation from the EC community, with the aim of collecting all expertise in guiding the EC system optimization. Still, a strong relationship between the IO and the DAs is essential for optimizing the design of the EC system and for the installation and commissioning of all ex-vessel components when several

  1. Modeling ITER ECH Waveguide Performance

    NASA Astrophysics Data System (ADS)

    Kaufman, M. C.; Lau, C. H.

    2014-10-01

    There are stringent requirements for mode purity and for on-target power as a percentage of source power for the ECH transmission lines on ITER. The design goal is less than 10% total power loss through the line and 95% HE11 mode at the diamond window. The dominant loss mechanism is mode conversion (MC) into higher order modes, and to maintain mode purity, these losses must be minimized. Miter bends and waveguide curvature are major sources of mode conversion. This work uses a code which calculates the mode conversion and attenuation of an arbitrary set of polarized waveguide modes in circular corrugated waveguide with non-zero axial curvature and miter bends. The transmission line is modeled as a structural beam with deformations due to misalignment of waveguide supports, tilts at the interfaces between waveguide sections, gravitational loading, and the extrusion and fabrication process. As these sources of curvature are statistical in nature, the resulting MC losses are found via Monte Carlo modeling. The results of this analysis will provide design guidance for waveguide support span lengths, requirements for minimum alignment offsets, and requirements for waveguide fabrication and quality control.
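
    The Monte Carlo step described above can be sketched schematically: draw random tilts and misalignments, evaluate a loss model per realization, and read design tolerances off the resulting distribution. The quadratic toy loss model and all numbers below are invented placeholders for the actual mode-conversion code.

      import numpy as np

      rng = np.random.default_rng(0)

      def line_loss_fraction(tilts_mrad, k=0.002):
          """Toy model: each joint converts ~k * tilt^2 of the power out of HE11."""
          return 1.0 - np.prod(1.0 - k * tilts_mrad**2)

      n_trials, n_joints = 10000, 40
      losses = np.array([line_loss_fraction(np.abs(rng.normal(0.0, 1.5, n_joints)))
                         for _ in range(n_trials)])

      # Design guidance: e.g. the loss level that 95% of realized lines stay under.
      print("95th percentile loss:", np.percentile(losses, 95))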

  2. Two-step iterative reconstruction of region-of-interest with truncated projection in computed tomography

    NASA Astrophysics Data System (ADS)

    Yamakawa, Keisuke; Kojima, Shinichi

    2014-03-01

    Iteratively reconstructing data only inside the region of interest (ROI) is widely used to acquire CT images in less computation time while maintaining high spatial resolution. A method that subtracts projected data outside the ROI from full-coverage measured data has been proposed. A serious problem with this method is that the accuracy of the measured data confined to the ROI decreases with the truncation error outside the ROI. We propose a two-step iterative method that reconstructs the image over the full coverage, in addition to the conventional iterative reconstruction inside the ROI, to reduce the truncation error in the full-coverage images. Statistical information (e.g., quantum-noise distributions) acquired from the detected X-ray photons is generally used in iterative methods as a photon weight to efficiently reduce image noise. Our proposed method applies one of two kinds of weights (photon or constant weights), chosen adaptively by taking into consideration the influence of the truncation error. The effectiveness of the proposed method compared with that of the conventional method was evaluated in terms of simulated CT values by using elliptical phantoms and an abdomen phantom. The standard deviation of error and the average absolute error of the proposed method on the profile curve were reduced from 3.4 to 0.4 HU and from 2.8 to 0.8 HU, respectively, compared with those of the conventional method. As a result, applying a suitable weight on the basis of the target object made it possible to effectively reduce the errors in CT images.

  3. Iterated crowdsourcing dilemma game

    NASA Astrophysics Data System (ADS)

    Oishi, Koji; Cebrian, Manuel; Abeliuk, Andres; Masuda, Naoki

    2014-02-01

    The Internet has enabled the emergence of collective problem solving, also known as crowdsourcing, as a viable option for solving complex tasks. However, the openness of crowdsourcing presents a challenge because solutions obtained by it can be sabotaged, stolen, and manipulated at a low cost for the attacker. To address this challenge, we extend a previously proposed crowdsourcing dilemma game to an iterated game. We enumerate pure evolutionarily stable strategies within the class of so-called reactive strategies, i.e., those depending on the last action of the opponent. Among the 4096 possible reactive strategies, we find 16 strategies, each of which is stable in some parameter region. Repeated encounters of the players can improve social welfare when the damage inflicted by an attack and the cost of attack are both small. Under the current framework, however, repeated interactions do not ameliorate the crowdsourcing dilemma in the majority of the parameter space.
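
    The long-run payoff of a pair of reactive strategies in an iterated two-action game can be computed exactly from the stationary distribution of the induced Markov chain on joint actions, which is the kind of calculation underlying such an enumeration. The sketch below uses a generic prisoner's-dilemma-like payoff matrix as a placeholder, not the crowdsourcing game's payoffs.

      import numpy as np

      def long_run_payoff(p1, p2, payoff):
          """p_i[b] = prob. that player i plays action 0 after the opponent played b."""
          states = [(a, b) for a in (0, 1) for b in (0, 1)]   # joint actions
          T = np.zeros((4, 4))
          for i, (a, b) in enumerate(states):
              for j, (a2, b2) in enumerate(states):
                  pa = p1[b] if a2 == 0 else 1 - p1[b]        # player 1 reacts to b
                  pb = p2[a] if b2 == 0 else 1 - p2[a]        # player 2 reacts to a
                  T[i, j] = pa * pb
          w, v = np.linalg.eig(T.T)                           # stationary distribution
          pi = np.real(v[:, np.argmin(np.abs(w - 1))])
          pi /= pi.sum()
          return sum(pi[i] * payoff[a][b] for i, (a, b) in enumerate(states))

      payoff = [[3, 0], [5, 1]]        # illustrative (my action, opponent action)
      print(long_run_payoff((0.9, 0.1), (0.9, 0.1), payoff))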

  4. Improving IRT Item Bias Detection with Iterative Linking and Ability Scale Purification.

    ERIC Educational Resources Information Center

    Park, Dong-Gun; Lautenschlager, Gary J.

    1990-01-01

    The effectiveness of two iterative methods of item response theory (IRT) item bias detection was examined in a simulation study. A modified form of the iterative item parameter linking method of F. Drasgow and an adaptation of the test purification procedure of F. M. Lord were compared. (SLD)

  5. Adaptive importance sampling for network growth models

    PubMed Central

    Holmes, Susan P.

    2016-01-01

    Network Growth Models such as Preferential Attachment and Duplication/Divergence are popular generative models with which to study complex networks in biology, sociology, and computer science. However, analyzing them within the framework of model selection and statistical inference is often complicated and computationally difficult, particularly when comparing models that are not directly related or nested. In practice, ad hoc methods are often used with uncertain results. If possible, the use of standard likelihood-based statistical model selection techniques is desirable. With this in mind, we develop an Adaptive Importance Sampling algorithm for estimating likelihoods of Network Growth Models. We introduce the use of the classic Plackett-Luce model of rankings as a family of importance distributions. Updates to importance distributions are performed iteratively via the Cross-Entropy Method with an additional correction for degeneracy/over-fitting inspired by the Minimum Description Length principle. This correction can be applied to other estimation problems using the Cross-Entropy method for integration/approximate counting, and it provides an interpretation of Adaptive Importance Sampling as iterative model selection. Empirical results for the Preferential Attachment model are given, along with a comparison to an alternative established technique, Annealed Importance Sampling. PMID:27182098
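
    The Cross-Entropy update at the heart of such a scheme is easy to state for a simple proposal family. The sketch below estimates a toy rare-event probability under N(0, 1) with an adaptively re-fitted Gaussian proposal; the Gaussian family, the elite fraction rho and the event are illustrative stand-ins for the Plackett-Luce construction in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      LEVEL = 4.0                       # estimate P(X > LEVEL) for X ~ N(0, 1)

      def lr(x, mu, sigma):
          """Likelihood ratio N(0,1)/N(mu, sigma); the sqrt(2*pi) factors cancel."""
          return np.exp(-0.5 * x**2) / (np.exp(-0.5 * ((x - mu) / sigma)**2) / sigma)

      mu, sigma, n, rho = 0.0, 1.0, 2000, 0.1
      for _ in range(10):
          x = rng.normal(mu, sigma, n)
          gamma = min(LEVEL, np.quantile(x, 1 - rho))   # raise the level gradually
          w = lr(x, mu, sigma) * (x >= gamma)           # weighted elite samples
          mu = (w * x).sum() / w.sum()                  # CE update of the proposal
          sigma = np.sqrt((w * (x - mu)**2).sum() / w.sum())
          if gamma >= LEVEL:
              break

      x = rng.normal(mu, sigma, n)
      print("P(X > 4) ~", (lr(x, mu, sigma) * (x > LEVEL)).mean())  # exact ~3.17e-5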

  6. ITER Diagnostic First Wall

    SciTech Connect

    G. Douglas Loesser, et al.

    2012-09-21

    The ITER Diagnostic Division is responsible for designing and procuring the First Wall Blankets that are mounted on the vacuum vessel port plugs at both the upper and equatorial levels. This paper will discuss the effects of the diagnostic aperture shape and configuration on the coolant circuit design. The diagnostic first wall (DFW) design is driven in large part by the need to conform the coolant arrangement to a wide variety of diagnostic apertures combined with the more severe heating conditions at the surface facing the plasma, the first wall. At the first wall, a radiant heat flux of 35 W/cm2 combines with approximate peak volumetric heating rates of 8 W/cm3 (equatorial ports) and 5 W/cm3 (upper ports). Here at the FW, a fast thermal response is desirable and leads to a thin element between the heat flux and coolant. This requirement is opposed by the wish for a thicker FW element to accommodate surface erosion and other off-normal plasma events.

  7. Mode conversion in ITER

    NASA Astrophysics Data System (ADS)

    Jaeger, E. F.; Berry, L. A.; Myra, J. R.

    2006-10-01

    Fast magnetosonic waves in the ion cyclotron range of frequencies (ICRF) can convert to much shorter wavelength modes such as ion Bernstein waves (IBW) and ion cyclotron waves (ICW) [1]. These modes are potentially useful for plasma control through the generation of localized currents and sheared flows. As part of the SciDAC Center for Simulation of Wave-Plasma Interactions project, the AORSA global-wave solver [2] has been ported to the new, dual-core Cray XT-3 (Jaguar) at ORNL, where it demonstrates excellent scaling with the number of processors. Preliminary calculations using 4096 processors have allowed the first full-wave simulations of mode conversion in ITER. Mode conversion from the fast wave to the ICW is observed in mixtures of deuterium, tritium and helium-3 at 53 MHz. The resulting flow velocity and electric field shear will be calculated. [1] F.W. Perkins, Nucl. Fusion 17, 1197 (1977). [2] E.F. Jaeger, L.A. Berry, J.R. Myra, et al., Phys. Rev. Lett. 90, 195001-1 (2003).

  8. Gamma ray spectrometer for ITER

    SciTech Connect

    Gin, D.; Chugunov, I.; Shevelev, A.; Khilkevitch, E.; Doinikov, D.; Naidenov, V.; Pasternak, A.; Polunovsky, I.; Kiptily, V.

    2014-08-21

    Gamma-ray diagnostics are considered primary for measurements of confined α-particles and runaway electrons on ITER. The gamma spectrometer will be embedded in a neutron dump of the ITER Neutral Particle Analyzer (NPA) diagnostic complex. It will supplement NPA measurements of the fuel isotope ratio and of confined alphas/fast ions. In this paper an update on ITER gamma spectrometer developments is given. A new geometry of the system is described and a detailed analysis of the expected signals for the spectrometer is presented.

  9. Channeled spectropolarimetry using iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Dennis J.; LaCasse, Charles F.; Craven, Julia M.

    2016-05-01

    Channeled spectropolarimeters (CSP) measure the polarization state of light as a function of wavelength. Conventional Fourier reconstruction suffers from noise, assumes the channels are band-limited, and requires uniformly spaced samples. To address these problems, we propose an iterative reconstruction algorithm. We develop a mathematical model of CSP measurements and minimize a cost function based on this model. We simulate a measured spectrum using example Stokes parameters, from which we compare conventional Fourier reconstruction and iterative reconstruction. Importantly, our iterative approach can reconstruct signals that contain more bandwidth, an advancement over Fourier reconstruction. Our results also show that iterative reconstruction mitigates noise effects, processes non-uniformly spaced samples without interpolation, and more faithfully recovers the ground truth Stokes parameters. This work offers a significant improvement to Fourier reconstruction for channeled spectropolarimetry.
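
    Stripped of the optics, the reconstruction amounts to fitting a forward model by iteratively decreasing a data-fidelity cost. A minimal Landweber-type sketch for a generic linear model y = A s is given below; the model matrix A, which would encode the CSP channel physics and the possibly non-uniform sample positions, is left abstract.

      import numpy as np

      def iterative_reconstruct(A, y, n_iter=500):
          """Minimize ||A s - y||^2 by gradient descent (Landweber iteration)."""
          s = np.zeros(A.shape[1])
          tau = 1.0 / np.linalg.norm(A, 2)**2    # step size from the spectral norm
          for _ in range(n_iter):
              s -= tau * (A.T @ (A @ s - y))     # gradient of the quadratic cost
          return s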

  10. The ITER project construction status

    NASA Astrophysics Data System (ADS)

    Motojima, O.

    2015-10-01

    The pace of the ITER project in St Paul-lez-Durance, France is accelerating rapidly into its peak construction phase. With the completion of the B2 slab in August 2014, which will support about 400 000 metric tons of tokamak complex structures and components, construction is advancing on a daily basis. Magnet, vacuum vessel, cryostat, thermal shield, first wall and divertor structures are under construction or in the prototype phase in the ITER member states of China, Europe, India, Japan, Korea, Russia, and the United States. Each of these member states has its own domestic agency (DA) to manage its procurements of components for ITER. Plant systems engineering is being transformed to fully integrate the tokamak and its auxiliary systems in preparation for the assembly and operations phase. CODAC, diagnostics, and the three main heating and current drive systems are also progressing, including the construction of the neutral beam test facility building in Padua, Italy. The conceptual design of the Chinese test blanket module system for ITER has been completed and those of the EU are well under way. Significant progress has been made addressing several outstanding physics issues, including disruption load characterization, prediction, avoidance, and mitigation; first wall and divertor shaping; edge pedestal and SOL plasma stability; fuelling and plasma behaviour during confinement transients; and W impurity transport. Further development of the ITER Research Plan has included a definition of the required plant configuration for 1st plasma and subsequent phases of ITER operation, as well as the major plasma commissioning activities and the needs of the R&D program accompanying ITER construction by the ITER parties.

  11. Application of Adaptive Design Methodology in Development of a Long-Acting Glucagon-Like Peptide-1 Analog (Dulaglutide): Statistical Design and Simulations

    PubMed Central

    Skrivanek, Zachary; Berry, Scott; Berry, Don; Chien, Jenny; Geiger, Mary Jane; Anderson, James H.; Gaydos, Brenda

    2012-01-01

    Background Dulaglutide (dula, LY2189265), a long-acting glucagon-like peptide-1 analog, is being developed to treat type 2 diabetes mellitus. Methods To foster the development of dula, we designed a two-stage adaptive, dose-finding, inferentially seamless phase 2/3 study. The Bayesian theoretical framework is used to adaptively randomize patients in stage 1 to 7 dula doses and, at the decision point, to either stop for futility or to select up to 2 dula doses for stage 2. After dose selection, patients continue to be randomized to the selected dula doses or comparator arms. Data from patients assigned the selected doses will be pooled across both stages and analyzed with an analysis of covariance model, using baseline hemoglobin A1c and country as covariates. The operating characteristics of the trial were assessed by extensive simulation studies. Results Simulations demonstrated that the adaptive design would identify the correct doses 88% of the time, compared to as low as 6% for a fixed-dose design (the latter value based on frequentist decision rules analogous to the Bayesian decision rules for adaptive design). Conclusions This article discusses the decision rules used to select the dula dose(s); the mathematical details of the adaptive algorithm—including a description of the clinical utility index used to mathematically quantify the desirability of a dose based on safety and efficacy measurements; and a description of the simulation process and results that quantify the operating characteristics of the design. PMID:23294775

  12. Iterative reconstruction of detector response of an Anger gamma camera.

    PubMed

    Morozov, A; Solovov, V; Alves, F; Domingos, V; Martins, R; Neves, F; Chepel, V

    2015-05-21

    Statistical event reconstruction techniques can give better results for gamma cameras than the traditional centroid method. However, implementation of such techniques requires detailed knowledge of the photomultiplier tube light-response functions. Here we describe an iterative method which allows one to obtain the response functions from flood irradiation data without imposing strict requirements on the spatial uniformity of the event distribution. A successful application of the method for medical gamma cameras is demonstrated using both simulated and experimental data. An implementation of the iterative reconstruction technique capable of operating in real time is presented. We show that this technique can also be used for monitoring photomultiplier gain variations. PMID:25951792

  13. Iterative reconstruction of detector response of an Anger gamma camera

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Solovov, V.; Alves, F.; Domingos, V.; Martins, R.; Neves, F.; Chepel, V.

    2015-05-01

    Statistical event reconstruction techniques can give better results for gamma cameras than the traditional centroid method. However, implementation of such techniques requires detailed knowledge of the photomultiplier tube light-response functions. Here we describe an iterative method which allows one to obtain the response functions from flood irradiation data without imposing strict requirements on the spatial uniformity of the event distribution. A successful application of the method for medical gamma cameras is demonstrated using both simulated and experimental data. An implementation of the iterative reconstruction technique capable of operating in real time is presented. We show that this technique can also be used for monitoring photomultiplier gain variations.

  14. ITER safety challenges and opportunities

    SciTech Connect

    Piet, S.J.

    1991-01-01

    Results of the Conceptual Design Activity (CDA) for the International Thermonuclear Experimental Reactor (ITER) suggest challenges and opportunities. ITER is capable of meeting anticipated regulatory dose limits, but proof is difficult because of large radioactive inventories needing stringent radioactivity confinement. We need much research and development (R&D) and design analysis to establish that ITER meets regulatory requirements. We have a further opportunity to do more to prove more of fusion's potential safety and environmental advantages and to maximize the amount of ITER technology on the path toward fusion power plants. To fulfill these tasks, we need to overcome three programmatic challenges and three technical challenges. The first programmatic challenge is to fund a comprehensive safety and environmental ITER R&D plan. The second is to strengthen safety and environment work and personnel in the international team. The third is to establish an external consultant group to advise the ITER Joint Team on designing ITER to meet safety requirements for siting by any of the Parties. The first of the three key technical challenges is plasma engineering -- burn control, plasma shutdown, disruptions, tritium burn fraction, and steady-state operation. The second is the divertor, including tritium inventory, activation hazards, chemical reactions, and coolant disturbances. The third technical challenge is optimization of design requirements considering safety risk, technical risk, and cost. Some design requirements are now too strict; some are too lax. Fuel cycle design requirements are presently too strict, mandating inappropriate T separation from H and D. Heat sink requirements are presently too lax; they should be strengthened to ensure that maximum loss-of-coolant-accident temperatures drop.

  15. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-01

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.

  16. ITER Construction--Plant System Integration

    SciTech Connect

    Tada, E.; Matsuda, S.

    2009-02-19

    This brief paper introduces how ITER will be built through international collaboration. The ITER Organization plays a central role in constructing ITER and leading it into operation. Since most of the ITER components are to be provided in kind by the member countries, integrated project management must be scoped in advance of the real work. This includes design, procurement, system assembly, testing, licensing and commissioning of ITER.

  17. Ordinal neural networks without iterative tuning.

    PubMed

    Fernández-Navarro, Francisco; Riccardi, Annalisa; Carloni, Sante

    2014-11-01

    Ordinal regression (OR) is an important branch of supervised learning, in between multiclass classification and regression. In this paper, the traditional classification scheme of neural networks is adapted to learn ordinal ranks. The proposed model imposes monotonicity constraints on the weights connecting the hidden layer with the output layer. To do so, the weights are transcribed using padding variables. This reformulation leads to the so-called inequality constrained least squares (ICLS) problem. Its numerical solution can be obtained by several iterative methods, for example, trust region or line search algorithms. In this proposal, the optimum is determined analytically according to the closed-form solution of the ICLS problem estimated from the Karush-Kuhn-Tucker conditions. Furthermore, following the guidelines of the extreme learning machine framework, the weights connecting the input and the hidden layers are randomly generated, so the final model estimates all its parameters without iterative tuning. The model proposed achieves competitive performance compared with state-of-the-art neural network methods for OR. PMID:25330430
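
    The extreme-learning-machine ingredient, random input-to-hidden weights plus a closed-form output layer, is compact enough to sketch. The monotonicity (ICLS) constraints of the proposed OR model are omitted here, so this is only the unconstrained baseline, with all names illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def elm_fit(X, y, n_hidden=50, reg=1e-3):
          """Random hidden layer; output weights solved in closed form (no tuning)."""
          W = rng.normal(size=(X.shape[1], n_hidden))
          b = rng.normal(size=n_hidden)
          H = np.tanh(X @ W + b)                        # hidden activations
          beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta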

  18. ITER project and fusion technology

    NASA Astrophysics Data System (ADS)

    Takatsu, H.

    2011-09-01

    In the sessions of ITR, FTP and SEE of the 23rd IAEA Fusion Energy Conference, 159 papers were presented in total, highlighted by the remarkable progress of the ITER project: ITER baseline has been established and procurement activities have been started as planned with a target of realizing the first plasma in 2019; ITER physics basis is sound and operation scenarios and operational issues have been extensively studied in close collaboration with the worldwide physics community; the test blanket module programme has been incorporated into the ITER programme and extensive R&D works are ongoing in the member countries with a view to delivering their own modules in a timely manner according to the ITER master schedule. Good progress was also reported in the areas of a variety of complementary activities to DEMO, including Broader Approach activities and long-term technology. This paper summarizes the highlights of the papers presented in the ITR, FTP and SEE sessions with a minimum set of background information.

  19. Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report

    SciTech Connect

    Saad, Yousef

    2014-01-16

    The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve robustness of parallel preconditioners in the specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, pARMS [the new version is version 3]. As part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the
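
    Items 1-3 have a standard computational shape: factor an incomplete LU preconditioner and hand it to a Krylov solver. A generic SciPy sketch is below; the 1D Poisson matrix stands in for a real system, and this is not the pARMS code itself.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 1000
      A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
      b = np.ones(n)

      ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)  # incomplete LU of A
      M = spla.LinearOperator(A.shape, ilu.solve)         # preconditioner M ~ A^{-1}

      x, info = spla.gmres(A, b, M=M, restart=50)
      print("info:", info, "residual:", np.linalg.norm(b - A @ x))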

  20. On pre-image iterations for speech enhancement.

    PubMed

    Leitner, Christina; Pernkopf, Franz

    2015-01-01

    In this paper, we apply kernel PCA for speech enhancement and derive pre-image iterations for speech enhancement. Both methods make use of a Gaussian kernel. The kernel variance serves as tuning parameter that has to be adapted according to the SNR and the desired degree of de-noising. We develop a method to derive a suitable value for the kernel variance from a noise estimate to adapt pre-image iterations to arbitrary SNRs. In experiments, we compare the performance of kernel PCA and pre-image iterations in terms of objective speech quality measures and automatic speech recognition. The speech data is corrupted by white and colored noise at 0, 5, 10, and 15 dB SNR. As a benchmark, we provide results of the generalized subspace method, of spectral subtraction, and of the minimum mean-square error log-spectral amplitude estimator. In terms of the scores of the PEASS (Perceptual Evaluation Methods for Audio Source Separation) toolbox, the proposed methods achieve a similar performance as the reference methods. The speech recognition experiments show that the utterances processed by pre-image iterations achieve a consistently better word recognition accuracy than the unprocessed noisy utterances and than the utterances processed by the generalized subspace method. PMID:26085973
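
    For a Gaussian kernel the pre-image iteration referred to above has a compact fixed-point form. The sketch assumes the kernel-PCA expansion coefficients gamma over the training frames X have already been computed; that derivation is the kernel PCA step proper and is omitted here.

      import numpy as np

      def preimage_iterate(z0, X, gamma, sigma2, n_iter=100, tol=1e-8):
          """Fixed point: z <- sum_i g_i k(z, x_i) x_i / sum_i g_i k(z, x_i)."""
          z = z0.copy()
          for _ in range(n_iter):
              k = gamma * np.exp(-np.sum((X - z)**2, axis=1) / (2.0 * sigma2))
              z_new = (k[:, None] * X).sum(axis=0) / k.sum()
              if np.linalg.norm(z_new - z) < tol:
                  break
              z = z_new
          return z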

  1. Construction Safety Forecast for ITER

    SciTech Connect

    cadwallader, lee charles

    2006-11-01

    The International Thermonuclear Experimental Reactor (ITER) project is poised to begin its construction activity. This paper gives an estimate of construction safety as if the experiment was being built in the United States. This estimate of construction injuries and potential fatalities serves as a useful forecast of what can be expected for construction of such a major facility in any country. These data should be considered by the ITER International Team as it plans for safety during the construction phase. Based on average U.S. construction rates, ITER may expect a lost workday case rate of < 4.0 and a fatality count of 0.5 to 0.9 persons per year.

  2. Error Field Correction in ITER

    SciTech Connect

    Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.

    2008-05-22

    A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.

  3. US--ITER activation analysis

    SciTech Connect

    Attaya, H.; Gohar, Y.; Smith, D.

    1990-09-01

    Activation analysis has been made for the US ITER design. The radioactivity and the decay heat have been calculated, during operation and after shutdown, for the two ITER phases, the Physics Phase and the Technology Phase. The Physics Phase operates for about 24 full power days (FPDs) at a fusion power level of 1100 MW, and the Technology Phase has 860 MW of fusion power and operates for about 1360 FPDs. The point-wise gamma sources have been calculated everywhere in the reactor at several times after shutdown of the two phases and are then used to calculate the biological dose everywhere in the reactor. Activation calculations have been made also for the ITER divertor. The results are presented for different continuous operation times and for only one pulse. The effect of the pulsed operation on the radioactivity is analyzed. 6 refs., 12 figs., 1 tab.

  4. The real mission of ITER

    SciTech Connect

    Wurden, G A

    2009-01-01

    For future machines, the plasma stored energy is going up by factors of 20-40x, and plasma currents by 2-3x, while the surface to volume ratio is at the same time decreasing. Therefore the disruption forces, even for constant B (which scale like IxB), and the associated possible localized heating on machine components, are more severe. Notably, Tore Supra has demonstrated removal of more than 1 GJ of input energy over nearly a 400 second period. However, the instantaneous stored energy in the Tore Supra system (which is most directly related to the potential for disruption damage) is quite small compared to other large tokamaks. The goal of ITER is routinely described as studying DT burning plasmas with Q ~ 10. In reality, ITER has a much more important first order mission. In fact, if it fails at this mission, the consequence is that ITER will never get to the eventual stated purpose of studying a burning plasma. The real mission of ITER is to study (and demonstrate successfully) plasma control with ~10-17 MA toroidal currents and ~100-400 MJ plasma stored energy levels in long-pulse scenarios. Before DT operation is ever given a go-ahead in ITER, the reality is that ITER must demonstrate routine and reliable control of high energy hydrogen (and deuterium) plasmas. The difficulty is that ITER must simultaneously deal with several technical problems: (1) heat removal at the plasma/wall interface, (2) protection of the wall components from off-normal events, and (3) generation of dust and redeposition of first wall materials. All previous tokamaks have encountered hundreds of major disruptions in the course of their operation. The consequences of a few MA of runaway electrons (at 20-50 MeV) being generated in ITER, and then being lost to the walls, are simply catastrophic. They will not be deposited globally, but will drift out (up, down, whatever, depending on the control system), and impact internal structures, unless 'ameliorated'. Basically, this

  5. Iterated binomial sums and their associated iterated integrals

    NASA Astrophysics Data System (ADS)

    Ablinger, J.; Blümlein, J.; Raab, C. G.; Schneider, C.

    2014-11-01

    We consider finite iterated generalized harmonic sums weighted by the central binomial coefficient binom(2k, k) in numerators and denominators. A large class of these functions emerges in the calculation of massive Feynman diagrams with local operator insertions starting at 3-loop order in the coupling constant and extends the classes of the nested harmonic, generalized harmonic, and cyclotomic sums. The binomially weighted sums are associated by the Mellin transform to iterated integrals over square-root valued alphabets. The values of the sums for N → ∞ and of the iterated integrals at x = 1 lead to new constants, extending the set of special numbers given by the multiple zeta values, the cyclotomic zeta values and the special constants which emerge in the limit N → ∞ of generalized harmonic sums. We develop algorithms to obtain the Mellin representations of these sums in a systematic way. They are of importance for the derivation of the asymptotic expansion of these sums and their analytic continuation to complex N. The associated convolution relations are derived for real parameters and can therefore be used in a wider context, as, e.g., for multi-scale processes. We also derive algorithms to transform iterated integrals over root-valued alphabets into binomial sums. Using generating functions we study a few aspects of infinite (inverse) binomial sums.
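
    As a concrete illustration of the objects involved (a classical identity, not a result of the paper above), the simplest inverse central-binomial sum already produces a zeta value in the limit:

      \sum_{k=1}^{N} \frac{1}{k^{2}\binom{2k}{k}}
        \xrightarrow[N \to \infty]{}
        \frac{\zeta(2)}{3} = \frac{\pi^{2}}{18} .

    The sums studied in the paper nest further (generalized) harmonic sums inside such binomially weighted terms.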

  6. Iterative CBCT reconstruction using Hessian penalty

    NASA Astrophysics Data System (ADS)

    Sun, Tao; Sun, Nanbo; Wang, Jing; Tan, Shan

    2015-03-01

    Statistical iterative reconstruction algorithms have shown potential to improve cone-beam CT (CBCT) image quality. Most iterative reconstruction algorithms utilize prior knowledge as a penalty term in the objective function. The penalty term greatly affects the performance of a reconstruction algorithm. The total variation (TV) penalty has demonstrated great ability in suppressing noise and improving image quality. However, being calculated from first-order derivatives, the TV penalty leads to the well-known staircase effect, which sometimes makes the reconstructed images look oversharpened and unnatural. In this study, we proposed to use a second-order derivative penalty that involves the Frobenius norm of the Hessian matrix of an image for CBCT reconstruction. The second-order penalty retains some of the most favorable properties of the TV penalty, such as convexity, homogeneity, and rotation and translation invariance, and has a better ability to preserve structures of gradual transition in the reconstructed images. An effective algorithm was developed to minimize the objective function with the majorization-minimization (MM) approach. Experiments on a digital phantom and two physical phantoms demonstrated the superiority of the proposed penalty, particularly in suppressing the staircase effect of the TV penalty.
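
    Schematically, the objective described takes the following form (notation assumed: u is the image, A the projector, w_i statistical weights, and H_j the Hessian matrix of u at voxel j):

      \min_{u \ge 0} \; \sum_i w_i \big([Au]_i - y_i\big)^2
        + \lambda \sum_j \lVert H_j u \rVert_F ,
      \qquad
      \lVert H_j u \rVert_F = \Big(\sum_{k,l} \big(\partial_k \partial_l u\big)_j^2\Big)^{1/2} .

    Penalizing second rather than first derivatives is what lets piecewise-linear (gradual) transitions pass without the staircase penalty that TV imposes on them.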

  7. Delayed Over-Relaxation for iterative methods

    NASA Astrophysics Data System (ADS)

    Antuono, M.; Colicchio, G.

    2016-09-01

    We propose a variant of the relaxation step used in the most widespread iterative methods (e.g. Jacobi Over-Relaxation, Successive Over-Relaxation) which combines the iteration at the predicted step, namely (n + 1), with the iteration at step (n - 1). We provide a theoretical analysis of the proposed algorithm by applying such a delayed relaxation step to a generic (convergent) iterative scheme. We prove that, under proper assumptions, this significantly improves the convergence rate of the initial iterative method. As a relevant example, we apply the proposed algorithm to the solution of the Poisson equation, highlighting the advantages in comparison with classical iterative models.
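
    A sketch of the idea on the 1D Poisson equation, with Jacobi as the base scheme: the authors' exact blending is not reproduced here, so the update x^(n+1) = (1 + beta) x_hat^(n+1) - beta x^(n-1), with an illustrative beta, should be read as an assumed form of a delayed relaxation step.

      import numpy as np

      def jacobi_step(x, f, h):
          """One Jacobi sweep for -u'' = f on a uniform grid, Dirichlet BCs."""
          x_new = x.copy()
          x_new[1:-1] = 0.5 * (x[:-2] + x[2:] + h * h * f[1:-1])
          return x_new

      def delayed_relaxation(f, n=100, beta=0.3, n_iter=5000):
          h = 1.0 / (n - 1)
          x_prev = np.zeros(n)                 # iterate at step (n - 1)
          x = np.zeros(n)                      # iterate at step n
          for _ in range(n_iter):
              x_hat = jacobi_step(x, f, h)     # predicted step (n + 1)
              x_prev, x = x, (1 + beta) * x_hat - beta * x_prev
          return x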

  8. ODE System Solver W. Krylov Iteration & Rootfinding

    1991-09-09

    LSODKR is a new initial value ODE solver for stiff and nonstiff systems. It is a variant of the LSODPK and LSODE solvers, intended mainly for large stiff systems. The main differences between LSODKR and LSODE are the following: (a) for stiff systems, LSODKR uses a corrector iteration composed of Newton iteration and one of four preconditioned Krylov subspace iteration methods; the user must supply routines for the preconditioning operations; (b) within the corrector iteration, LSODKR does automatic switching between functional (fixpoint) iteration and modified Newton iteration; (c) LSODKR includes the ability to find roots of given functions of the solution during the integration.

  9. Learning to improve iterative repair scheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene

    1992-01-01

    This paper presents a general learning method for dynamically selecting between repair heuristics in an iterative repair scheduling system. The system employs a version of explanation-based learning called Plausible Explanation-Based Learning (PEBL) that uses multiple examples to confirm conjectured explanations. The basic approach is to conjecture contradictions between a heuristic and statistics that measure the quality of the heuristic. When these contradictions are confirmed, a different heuristic is selected. To motivate the utility of this approach we present an empirical evaluation of the performance of a scheduling system with respect to two different repair strategies. We show that the scheduler that learns to choose between the heuristics outperforms the same scheduler with any one of two heuristics alone.

  10. Generating fracture networks using iterated function systems

    NASA Astrophysics Data System (ADS)

    Mohrlok, U.; Liedl, R.

    1996-03-01

    In order to model flow and transport in fractured rocks it is important to know the geometry of the fracture network. A stochastic approach is commonly used to generate a synthetic fracture network from the statistics measured at a natural fracture network. The approach presented herein is able to incorporate the structures found in a natural fracture network into the synthetic fracture network. These synthetic fracture networks are the images generated by Iterated Function Systems (IFS) as introduced by Barnsley (1988). The conditions these IFS have to fulfil to determine images resembling fracture networks and the effects of their parameters on the images are discussed. It is possible to define the parameters of the IFS in order to generate some properties of a fracture network. The image of an IFS consists of many single points and has to be suitably processed for further use.

  12. Continued Fractions and Iterative Processes.

    ERIC Educational Resources Information Center

    Bevis, Jean H.; Boal, Jan L.

    1982-01-01

    Continued fractions and associated sequences are viewed as constituting a rich area of study for mathematics students, supporting instruction on algebraic and computational skills, mathematical induction, convergence of sequences, and interpretation of function graphs. An iterative method of approximating square roots opens suggestions for…
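
    The square-root idea mentioned above fits in a few lines: for sqrt(2), iterating x -> 1 + 1/(1 + x) walks through the continued-fraction convergents, since the fixed point satisfies x^2 = 2.

      from fractions import Fraction

      x = Fraction(1)                # initial guess
      for _ in range(10):
          x = 1 + 1 / (1 + x)        # convergents 3/2, 7/5, 17/12, 41/29, ...
          print(x, float(x))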

  13. Energetic ions in ITER plasmas

    NASA Astrophysics Data System (ADS)

    Pinches, S. D.; Chapman, I. T.; Lauber, Ph. W.; Oliver, H. J. C.; Sharapov, S. E.; Shinohara, K.; Tani, K.

    2015-02-01

    This paper discusses the behaviour and consequences of the expected populations of energetic ions in ITER plasmas. It begins with a careful analytic and numerical consideration of the stability of Alfvén Eigenmodes in the ITER 15 MA baseline scenario. The stability threshold is determined by balancing the energetic ion drive against the dominant damping mechanisms, and it is found that only in the outer half of the plasma (r/a > 0.5) can the fast ions overcome the thermal ion Landau damping. This is in spite of the reduced numbers of alpha-particles and beam ions in this region, but means that any Alfvén Eigenmode-induced redistribution is not expected to influence the fusion burn process. The influence of energetic ions upon the main global MHD phenomena expected in ITER's primary operating scenarios, including sawteeth, neoclassical tearing modes and Resistive Wall Modes, is also reviewed. Fast ion losses due to the non-axisymmetric fields arising from the finite number of toroidal field coils, the inclusion of ferromagnetic inserts, the presence of test blanket modules containing ferromagnetic material, and the fields created by the Edge Localised Mode (ELM) control coils in ITER are discussed. The greatest losses and associated heat loads onto the plasma facing components arise due to the use of the ELM control coils and come from neutral beam ions that are ionised in the plasma edge.

  14. Networking Theories by Iterative Unpacking

    ERIC Educational Resources Information Center

    Koichu, Boris

    2014-01-01

    An iterative unpacking strategy consists of sequencing empirically-based theoretical developments so that at each step of theorizing one theory serves as an overarching conceptual framework, in which another theory, either existing or emerging, is embedded in order to elaborate on the chosen element(s) of the overarching theory. The strategy is…

  15. Active beam spectroscopy for ITER

    NASA Astrophysics Data System (ADS)

    von Hellermann, M. G.; Barnsley, R.; Biel, W.; Delabie, E.; Hawkes, N.; Jaspers, R.; Johnson, D.; Klinkhamer, F.; Lischtschenko, O.; Marchuk, O.; Schunke, B.; Singh, M. J.; Snijders, B.; Summers, H. P.; Thomas, D.; Tugarinov, S.; Vasu, P.

    2010-11-01

    Since the first feasibility studies of active beam spectroscopy on ITER in 1995, the proposed diagnostic has developed into a well advanced and mature system. Substantial progress has been achieved on the physics side, including comprehensive performance studies based on an advanced predictive code which simulates active and passive features of the expected spectral ranges. The simulation has enabled detailed specifications for optimized instrumentation and has helped to specify suitable diagnostic neutral beam parameters. Four ITER partners presently share the task of developing a suite of ITER active beam diagnostics, which make use of the two 0.5 MeV/amu, 18 MW heating neutral beams and a dedicated 0.1 MeV/amu, 3.6 MW diagnostic neutral beam (DNB). The IN ITER team is responsible for the DNB development and also for beam physics related aspects of the diagnostic. The RF will be responsible for the edge CXRS system covering the outer region of the plasma (1 > r/a > 0.4) using an equatorial observation port, and the EU will develop the core CXRS system for the very core (0 < r/a < 0.7) using a top observation port. Thus optimum radial resolution is ensured for each system, with better than a/30 resolution. Finally, the US will develop a dedicated MSE system making use of the HNBs and two equatorial ports. With appropriate modification, these systems could also potentially provide information on alpha particle slowing-down features. On the engineering side, comprehensive preparations were made involving the development of an observation periscope, a neutron labyrinth optical system and design studies for remote maintenance, including the exchange of the first mirror assembly, a critical issue for the operation of the CXRS diagnostic in the harsh ITER environment. Additionally, an essential change of the orientation of the DNB injection angle and the specification of a suitable blanket aperture have been made to avoid trapped particle damage to the first wall.

  16. A randomised trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome (PACE): statistical analysis plan

    PubMed Central

    2013-01-01

    Background The publication of protocols by medical journals is increasingly becoming an accepted means for promoting good quality research and maximising transparency. Recently, Finfer and Bellomo have suggested the publication of statistical analysis plans (SAPs). The aim of this paper is to make public and to report in detail the planned analyses that were approved by the Trial Steering Committee in May 2010 for the principal papers of the PACE (Pacing, graded Activity, and Cognitive behaviour therapy: a randomised Evaluation) trial, a treatment trial for chronic fatigue syndrome. It illustrates planned analyses of a complex intervention trial that allows for the impact of clustering by care providers, where multiple care providers are present for each patient in some but not all arms of the trial. Results The trial design, objectives and data collection are reported. Considerations relating to blinding, samples, adherence to the protocol, stratification, centre and other clustering effects, missing data, multiplicity and compliance are described. Descriptive, interim and final analyses of the primary and secondary outcomes are then outlined. Conclusions This SAP maximises transparency, providing a record of all planned analyses, and it may be a resource for those who are developing SAPs, acting as an illustrative example for teaching and methodological research. It is not the sum of the statistical analysis sections of the principal papers, as it was completed well before the individual papers were drafted. Trial registration ISRCTN54285094, assigned 22 May 2003; the first participant was randomised on 18 March 2005. PMID:24225069

  17. Fast iterative image reconstruction of 3D PET data

    SciTech Connect

    Kinahan, P.E.; Townsend, D.W.; Michel, C.

    1996-12-31

    For count-limited PET imaging protocols, two different approaches to reducing statistical noise are volume, or 3D, imaging to increase sensitivity, and statistical reconstruction methods to reduce noise propagation. These two approaches have largely been developed independently, likely due to the perception of the large computational demands of iterative 3D reconstruction methods. We present results of combining the sensitivity of 3D PET imaging with the noise reduction and reconstruction speed of 2D iterative image reconstruction methods. This combination is made possible by using the recently-developed Fourier rebinning technique (FORE), which accurately and noiselessly rebins 3D PET data into a 2D data set. The resulting 2D sinograms are then reconstructed independently by the ordered-subset EM (OSEM) iterative reconstruction method, although any other 2D reconstruction algorithm could be used. We demonstrate significant improvements in image quality for whole-body 3D PET scans by using the FORE+OSEM approach compared with the standard 3D Reprojection (3DRP) algorithm. In addition, the FORE+OSEM approach involves only 2D reconstruction and it therefore requires considerably less reconstruction time than the 3DRP algorithm, or any fully 3D statistical reconstruction algorithm.

  18. Boundary plasma modelling for ITER

    SciTech Connect

    Braams, B.J.

    1993-01-01

    Computer programs were developed to model the effect of nonaxisymmetric magnetic perturbations upon divertor heat load and to explore which kinds of externally applied perturbations are most effective for heat load reduction without destroying core plasma confinement. We find that a carefully tuned set of coils located about 0.3 m outside the separatrix can be used to spread the heat load over about 0.1 m perpendicular to flux surfaces at the ITER divertor plate, even at a very low level of anomalous cross-field heat transport. As such a spreading would significantly extend the permissible regime of operation for ITER, we recommend that this study be pursued at the level of detail required for engineering design. In other work under this grant we are in the process of modifying the B2 code to correctly handle a non-orthogonal geometry.

  19. Bioinspired Iterative Synthesis of Polyketides

    NASA Astrophysics Data System (ADS)

    Hong, Ran; Zheng, Kuan; Xie, Changmin

    2015-05-01

    A diverse array of biopolymers and secondary metabolites (particularly polyketide natural products) is manufactured in nature through the enzymatic iterative assembly of simple building blocks. Inspired by this strategy, molecules with inherent modularity can be efficiently synthesized by the repeated succession of similar reaction sequences. This privileged strategy has been widely adopted in synthetic supramolecular chemistry. Its value has also been recognized in natural product synthesis. A brief overview of this approach is given, with a particular emphasis on the total synthesis of polyol-embedded polyketides, a class of vastly diverse structures and biologically significant natural products. This viewpoint also illustrates the limits of known individual modules in terms of diastereoselectivity and enantioselectivity. More efficient and practical iterative strategies are anticipated to emerge in future developments.

  20. Bioinspired iterative synthesis of polyketides

    PubMed Central

    Zheng, Kuan; Xie, Changmin; Hong, Ran

    2015-01-01

    A diverse array of biopolymers and secondary metabolites (particularly polyketide natural products) is manufactured in nature through the enzymatic iterative assembly of simple building blocks. Inspired by this strategy, molecules with inherent modularity can be efficiently synthesized by the repeated succession of similar reaction sequences. This privileged strategy has been widely adopted in synthetic supramolecular chemistry. Its value has also been recognized in natural product synthesis. A brief overview of this approach is given, with a particular emphasis on the total synthesis of polyol-embedded polyketides, a class of vastly diverse structures and biologically significant natural products. This viewpoint also illustrates the limits of known individual modules in terms of diastereoselectivity and enantioselectivity. More efficient and practical iterative strategies are anticipated to emerge in future developments. PMID:26052510

  1. An Iterative Reweighted Method for Tucker Decomposition of Incomplete Tensors

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Li, Hongbin; Zeng, Bing

    2016-09-01

    We consider the problem of low-rank decomposition of incomplete multiway tensors. Since many real-world data lie on an intrinsically low dimensional subspace, tensor low-rank decomposition with missing entries has applications in many data analysis problems such as recommender systems and image inpainting. In this paper, we focus on Tucker decomposition, which represents an Nth-order tensor in terms of N factor matrices and a core tensor via multilinear operations. To exploit the underlying multilinear low-rank structure in high-dimensional datasets, we propose a group-based log-sum penalty functional to place structural sparsity over the core tensor, which leads to a compact representation with the smallest core tensor. The method for Tucker decomposition is developed by iteratively minimizing a surrogate function that majorizes the original objective function, which results in an iterative reweighted process. In addition, to reduce the computational complexity, an over-relaxed monotone fast iterative shrinkage-thresholding technique is adapted and embedded in the iterative reweighted process. The proposed method is able to determine the model complexity (i.e., the multilinear rank) in an automatic way. Simulation results show that the proposed algorithm offers competitive performance compared with other existing algorithms.
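
    The group log-sum penalty and the majorization that produces the reweighting can be written schematically as follows (notation assumed: G_g denotes a group of core-tensor entries, epsilon > 0 a small constant, and t the iteration index):

      P(\mathcal{G}) = \sum_g \log\big(\lVert \mathcal{G}_g \rVert_F^2 + \epsilon\big)
        \;\le\; \sum_g \frac{\lVert \mathcal{G}_g \rVert_F^2 + \epsilon}
                             {\lVert \mathcal{G}_g^{(t)} \rVert_F^2 + \epsilon}
          + \mathrm{const},

    by concavity of the logarithm, with equality at G = G^(t). Minimizing the surrogate therefore applies weights w_g^(t) = 1 / (||G_g^(t)||_F^2 + epsilon): groups that shrink are penalized ever more strongly, driving a compact core tensor.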

  2. ITER Plasma Control System Development

    NASA Astrophysics Data System (ADS)

    Snipes, Joseph; ITER PCS Design Team

    2015-11-01

    The development of the ITER Plasma Control System (PCS) continues with the preliminary design phase for 1st plasma and early plasma operation in H/He up to Ip = 15 MA in L-mode. The design is being developed through a contract between the ITER Organization and a consortium of plasma control experts from EU and US fusion laboratories, which is expected to be completed in time for a design review at the end of 2016. This design phase concentrates on breakdown including early ECH power and magnetic control of the poloidal field null, plasma current, shape, and position. Basic kinetic control of the heating (ECH, ICH, NBI) and fueling systems is also included. Disruption prediction, mitigation, and maintaining stable operation are also included because of the high magnetic and kinetic stored energy present already for early plasma operation. Support functions for error field topology and equilibrium reconstruction are also required. All of the control functions also must be integrated into an architecture that will be capable of the required complexity of all ITER scenarios. A database is also being developed to collect and manage PCS functional requirements from operational scenarios that were defined in the Conceptual Design with links to proposed event handling strategies and control algorithms for initial basic control functions. A brief status of the PCS development will be presented together with a proposed schedule for design phases up to DT operation.

  3. Progress on US ITER Diagnostics

    NASA Astrophysics Data System (ADS)

    Johnson, David; Feder, Russ

    2010-11-01

    There have been significant advances in the design concepts for the 8 ITER diagnostic systems being provided by the US. Concepts for integration of the diagnostics into the port plugs have also evolved. A prerequisite for the signoff of the procurement arrangements for each of these diagnostics is a Conceptual Design Review organized by the ITER Organization. US experts under contract with the USIPO have been assisting the IO in preparing for these reviews. In addition, a design team at PPPL has been working with these experts and designers from other ITER parties to package diagnostic front-ends into the 5 US plugs. Modular diagnostic shield modules are now being considered in order to simplify the interfaces between the diagnostics within each plug. Diagnostic first wall elements are envisioned to be integral with these shield modules. This simplifies the remote handling of the diagnostics and provides flexibility for future removal of one diagnostic while minimally affecting others. Front-end configurations will be presented, along with lists of issues needing resolution prior to the start of preliminary design.

  4. Flight data processing with the F-8 adaptive algorithm

    NASA Technical Reports Server (NTRS)

    Hartmann, G.; Stein, G.; Petersen, K.

    1977-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, and surface position) are telemetered to a ground computer, which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.
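
    The filter-bank idea (one Kalman filter per fixed point in parameter space, with candidates ranked by innovation likelihood rather than found by iterative search) is easy to demonstrate on a toy scalar system. The sketch below is a generic multiple-model illustration under assumed dynamics and noise values, not the F-8 implementation.

        import numpy as np

        rng = np.random.default_rng(1)
        a_true, q, r = 0.85, 0.01, 0.04        # true parameter, process/measurement noise variances
        candidates = [0.5, 0.7, 0.85, 0.95]    # fixed grid of parameter-space locations

        x, ys = 0.0, []
        for _ in range(200):                   # simulate x_{k+1} = a x_k + 0.1 + w,  y_k = x_k + v
            x = a_true * x + 0.1 + rng.normal(0, q**0.5)
            ys.append(x + rng.normal(0, r**0.5))

        loglik = np.zeros(len(candidates))
        xhat, P = np.zeros(len(candidates)), np.ones(len(candidates))
        for y in ys:
            for i, a in enumerate(candidates):             # one Kalman filter per candidate
                xp, Pp = a * xhat[i] + 0.1, a * a * P[i] + q   # predict
                s = Pp + r                                     # innovation variance
                k = Pp / s                                     # Kalman gain
                loglik[i] += -0.5 * (np.log(2 * np.pi * s) + (y - xp) ** 2 / s)
                xhat[i], P[i] = xp + k * (y - xp), (1 - k) * Pp  # update
        print("most likely parameter:", candidates[int(np.argmax(loglik))])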

  5. Three Dimensional Iterative Reconstruction Techniques in Positron Tomography.

    NASA Astrophysics Data System (ADS)

    Sloka, Scott

    The acquisition of positron tomographic data in three dimensions is an improvement over the two dimensional acquisition of data because the greater the number of measurements taken of a stochastic process, the more accurately the desired parameter may be determined. This research pursues the goal of three dimensional image reconstruction in Positron Tomography using an iterative approach. This thesis has followed a systematic approach to the exploration of a system for three dimensional iterative reconstruction. System design parameters were discussed, such as the advantages and disadvantages of iterative vs. analytic methods, the implementation of two three dimensional iterative algorithms, the selection of a ray passing method, and the choice of an analytic method for comparison to the iterative methods. Several qualitative and quantitative tests were developed and performed to analyse and compare the results. Three dimensional reconstruction in Positron Tomography using two iterative techniques (ART and ML-EM) was demonstrated. The ML-EM algorithm was adapted to satisfy the objective of equalizing the estimates with the measurements via division by the sampling density. A new multi-objective function methodology was developed for two dimensions and its extension to three dimensions discussed. A smoothly-varying Gaussian phantom was created for comparing artifacts from different ray passing methods. The analysis of voxel trends over many iterations was used. The use of the output from a two dimensional filtered backprojection algorithm as the seed for three dimensional algorithms to accelerate the reconstruction was explored. The importance of the selection of a good ray ordering in ART and its effects on the total squared error were explored. For the phantoms studied in this thesis, the ML-EM algorithm tended to perform better under most conditions. This algorithm is slower than ART to achieve both a low total squared error and good contrast, but the
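
    For reference, the ML-EM update used throughout this line of work has a standard multiplicative form. Below is a minimal dense-matrix sketch (a real PET system matrix would be sparse and far larger); the variable names are illustrative.

        import numpy as np

        def mlem(A, y, n_iter=50):
            # Standard ML-EM update for emission tomography:
            #   x <- (x / A^T 1) * A^T ( y / (A x) )
            # A: system matrix (rays x voxels), y: measured sinogram counts.
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])          # sensitivity (column sums)
            for _ in range(n_iter):
                proj = np.maximum(A @ x, 1e-12)       # forward projection
                x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
            return x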

  6. Liver recognition based on statistical shape model in CT images

    NASA Astrophysics Data System (ADS)

    Xiang, Dehui; Jiang, Xueqing; Shi, Fei; Zhu, Weifang; Chen, Xinjian

    2016-03-01

    In this paper, an automatic method is proposed to recognize the liver in clinical 3D CT images. The proposed method makes effective use of a statistical shape model of the liver. Our approach consists of three main parts: (1) model training, in which shape variability is detected using principal component analysis from the manual annotations; (2) model localization, in which a fast Euclidean distance transformation based method localizes the liver in CT images; (3) liver recognition, in which the initial mesh is locally and iteratively adapted to the liver boundary, constrained by the trained shape model. We validate our algorithm on a dataset consisting of 20 3D CT images obtained from different patients. The average ARVD was 8.99%, the average ASSD was 2.69 mm, the average RMSD was 4.92 mm, the average MSD was 28.841 mm, and the average MSD was 13.31%.
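
    Steps (1) and (3) have compact textbook forms: PCA over aligned training shapes gives a mean shape plus deformation modes, and the shape constraint projects a candidate mesh onto the model subspace with clipped mode coefficients. The sketch below is a generic point-distribution-model illustration, not the authors' exact pipeline; the names and the variance threshold are assumptions.

        import numpy as np

        def train_shape_model(shapes, var_keep=0.98):
            # shapes: (n_training, d) array, each row = flattened aligned landmarks.
            mean = shapes.mean(axis=0)
            U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
            var = s**2 / (len(shapes) - 1)
            k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
            return mean, Vt[:k], np.sqrt(var[:k])   # mean shape, modes, mode std devs

        def constrain(shape, mean, modes, sd, n_sd=3.0):
            # Project a candidate shape onto the model and clip each mode
            # coefficient to +/- n_sd standard deviations (the shape constraint
            # applied during iterative mesh adaptation).
            b = modes @ (shape - mean)
            b = np.clip(b, -n_sd * sd, n_sd * sd)
            return mean + modes.T @ b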

  7. Multimodal and Adaptive Learning Management: An Iterative Design

    ERIC Educational Resources Information Center

    Squires, David R.; Orey, Michael A.

    2015-01-01

    The purpose of this study is to measure the outcome of a comprehensive learning management system implemented at a Spinal Cord Injury (SCI) hospital in the Southeast United States. Specifically, this SCI hospital has been experiencing an evident volume of patients returning to seek more information about the nature of their injuries. Recognizing…

  8. A pleiotropy-informed Bayesian false discovery rate adapted to a shared control design finds new disease associations from GWAS summary statistics.

    PubMed

    Liley, James; Wallace, Chris

    2015-02-01

    Genome-wide association studies (GWAS) have been successful in identifying single nucleotide polymorphisms (SNPs) associated with many traits and diseases. However, at existing sample sizes, these variants explain only part of the estimated heritability. Leverage of GWAS results from related phenotypes may improve detection without the need for larger datasets. The Bayesian conditional false discovery rate (cFDR) constitutes an upper bound on the expected false discovery rate (FDR) across a set of SNPs whose p values for two diseases are both less than two disease-specific thresholds. Calculation of the cFDR requires only summary statistics and has several advantages over traditional GWAS analysis. However, existing methods require distinct control samples between studies. Here, we extend the technique to allow for some or all controls to be shared, increasing applicability. Several different SNP sets can be defined with the same cFDR value, and we show that the expected FDR across the union of these sets may exceed the expected FDR in any single set. We describe a procedure to establish an upper bound for the expected FDR among the union of such sets of SNPs. We apply our technique to pairwise analysis of p values from ten autoimmune diseases with variable sharing of controls, enabling discovery of 59 SNP-disease associations which do not reach GWAS significance after genomic control in individual datasets. Most of the SNPs we highlight have previously been confirmed using replication studies or larger GWAS, a useful validation of our technique; we report eight SNP-disease associations across five diseases not previously declared. Our technique extends and strengthens the previous algorithm, and establishes robust limits on the expected FDR. This approach can improve SNP detection in GWAS, and give insight into shared aetiology between phenotypically related conditions.
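
    The basic cFDR quantity has a simple empirical estimator that conveys the idea; the sketch below deliberately omits the shared-control adjustment that is this paper's actual contribution, so it is a simplified illustration only, with illustrative names.

        import numpy as np

        def cfdr(p_i, p_j):
            # Empirical conditional FDR at each SNP:
            #   cFDR(p_i | p_j) ~= p_i * #{p_j' <= p_j} / #{p_i' <= p_i and p_j' <= p_j}
            # p_i: p values for the disease of interest; p_j: conditioning trait.
            p_i, p_j = np.asarray(p_i), np.asarray(p_j)
            out = np.empty(len(p_i))
            for k in range(len(p_i)):
                denom = np.sum((p_i <= p_i[k]) & (p_j <= p_j[k]))
                out[k] = p_i[k] * np.sum(p_j <= p_j[k]) / max(denom, 1)
            return np.minimum(out, 1.0)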

  9. Adaptive management: Chapter 1

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.; Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  10. An iterative and regenerative method for DNA sequencing.

    PubMed

    Jones, D H

    1997-05-01

    This paper presents, to our knowledge, the first iterative DNA sequencing method that regenerates the product of interest during each iterative cycle, allowing it to overcome the critical obstacles that impede alternative iterative approaches to DNA sequencing: loss of product and the accumulation of background signal due to incomplete reactions. It can sequence numerous double-stranded (ds) DNA segments in parallel without gel resolution of DNA fragments and can sequence DNA that is almost entirely double-stranded, preventing the secondary structures that impede sequencing by hybridization. This method uses ligation of an adaptor containing the recognition domain for a class-IIS restriction endonuclease and digestion with a class-IIS restriction endonuclease that recognizes the adaptor's recognition domain. This generates a set of DNA templates that are each composed of a short overhang positioned at a fixed interval with respect to one end of the original dsDNA fragment. Adaptor ligation also appends a unique sequence during each iterative cycle, so that the polymerase chain reaction can be used to regenerate the desired template-precursor before class-IIS restriction endonuclease digestion. Following class-IIS restriction endonuclease digestion, sequencing of a nucleotide in each overhang occurs by template-directed ligation during adaptor ligation or through a separate template-directed polymerization step with labeled ddNTPs. DNA sequencing occurs in strides determined by the number of nucleotides separating the recognition and cleavage domains for the class-IIS restriction endonuclease encoded in the ligated adaptor, maximizing the span of DNA sequenced for a given number of iterative cycles. This method allows the concurrent sequencing of numerous dsDNA segments in a microplate format, and in the future it can be adapted to biochip format. PMID:9149879

  11. Rater variables associated with ITER ratings.

    PubMed

    Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin

    2013-10-01

    Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of Calgary and preceptors who completed online ITERs between February 2008 and July 2009. Our outcome variable was global rating on the ITER (rated 1-5), and we used a generalized estimating equation model to identify variables associated with this rating. Students were rated "above expected level" or "outstanding" on 66.4% of 1050 online ITERs completed during the study period. Two rater variables attenuated ITER ratings: the log-transformed time taken to complete the ITER [β = -0.06, 95% confidence interval (-0.10, -0.02), p = 0.002], and the number of ITERs that a preceptor completed over the time period of the study [β = -0.008 (-0.02, -0.001), p = 0.02]. In this study we found evidence of leniency bias that resulted in two thirds of students being rated above expected level of performance. This leniency bias appeared to be attenuated by delay in ITER completion, and was also blunted in preceptors who rated more students. As all biases threaten the internal validity of the assessment process, further research is needed to confirm these and other sources of rater bias in ITER ratings, and to explore ways of limiting their impact.

  12. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions in this paper: First, we provide a full derivation of the EM adaptation algorithm and demonstrate methods to improve its computational efficiency. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  13. The physics role of ITER

    SciTech Connect

    Rutherford, P.H.

    1997-04-01

    Experimental research on the International Thermonuclear Experimental Reactor (ITER) will go far beyond what is possible on present-day tokamaks to address new and challenging issues in the physics of reactor-like plasmas. First and foremost, experiments in ITER will explore the physics issues of burning plasmas--plasmas that are dominantly self-heated by alpha-particles created by the fusion reactions themselves. Such issues will include (i) new plasma-physical effects introduced by the presence within the plasma of an intense population of energetic alpha particles; (ii) the physics of magnetic confinement for a burning plasma, which will involve a complex interplay of transport, stability and an internal self-generated heat source; and (iii) the physics of very-long-pulse/steady-state burning plasmas, in which much of the plasma current is also self-generated and which will require effective control of plasma purity and plasma-wall interactions. Achieving and sustaining burning plasma regimes in a tokamak necessarily requires plasmas that are larger than those in present experiments and have higher energy content and power flow, as well as much longer pulse length. Accordingly, the experimental program on ITER will embrace the study of issues of plasma physics and plasma-materials interactions that are specific to a reactor-scale fusion experiment. Such issues will include (i) confinement physics for a tokamak in which, for the first time, the core-plasma and the edge-plasma are simultaneously in a reactor-like regime; (ii) phenomena arising during plasma transients, including so-called disruptions, in regimes of high plasma current and thermal energy; and (iii) physics of a radiative divertor designed for handling high power flow for long pulses, including novel plasma and atomic-physics effects as well as materials science of surfaces subject to intense plasma interaction. Experiments on ITER will be conducted by researchers in control rooms situated at major

  14. Iterates of maps with symmetry

    NASA Technical Reports Server (NTRS)

    Chossat, Pascal; Golubitsky, Martin

    1988-01-01

    Fixed-point bifurcation, period doubling, and Hopf bifurcation (HB) for iterates of equivariant mappings are investigated analytically, with a focus on HB in the presence of symmetry. An algebraic formulation for the hypotheses of the theorem of Ruelle (1973) is derived, and the case of standing waves in a system of ordinary differential equations with O(2) symmetry is considered in detail. In this case, it is shown that HB can lead directly to motion on an invariant 3-torus, with an unexpected third frequency due to drift of standing waves along the torus.

  15. A noise power spectrum study of a new model-based iterative reconstruction system: Veo 3.0.

    PubMed

    Li, Guang; Liu, Xinming; Dodge, Cristina T; Jensen, Corey T; Rong, X John

    2016-01-01

    The purpose of this study was to evaluate performance of the third generation of model-based iterative reconstruction (MBIR) system, Veo 3.0, based on noise power spectrum (NPS) analysis with various clinical presets over a wide range of clinically applicable dose levels. A CatPhan 600 surrounded by an oval, fat-equivalent ring to mimic patient size/shape was scanned 10 times at each of six dose levels on a GE HD 750 scanner. NPS analysis was performed on images reconstructed with various Veo 3.0 preset combinations for comparison with images reconstructed using Veo 2.0, filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASiR). The new Target Thickness setting resulted in higher noise in thicker axial images. The new Texture Enhancement function achieved a more isotropic noise behavior with fewer image artifacts. Veo 3.0 provides additional reconstruction options designed to allow the user choice of balance between spatial resolution and image noise, relative to Veo 2.0. Veo 3.0 provides more user-selectable options and in general improved isotropic noise behavior in comparison to Veo 2.0. The overall noise reduction performance of both versions of MBIR was improved in comparison to FBP and ASiR, especially at low-dose levels. PMID:27685118
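
    The NPS measurement itself follows a standard recipe: subtract the ensemble mean from repeated same-location ROIs to isolate noise, then average the squared 2D DFT magnitudes with a pixel-area normalization. The sketch below is a generic implementation of that recipe under assumed inputs, not the authors' exact processing chain (detrending and ROI handling details vary between labs).

        import numpy as np

        def nps_2d(noise_rois, pixel_mm):
            # noise_rois: (n_scans, ny, nx) stack of same-location ROIs from
            # repeated scans.  Standard estimator:
            #   NPS(fx, fy) = (dx*dy / (Nx*Ny)) * < |DFT2(ROI - ensemble mean)|^2 >
            rois = np.asarray(noise_rois, dtype=float)
            noise = rois - rois.mean(axis=0)      # remove deterministic structure
            ny, nx = noise.shape[1:]
            dft2 = np.abs(np.fft.fft2(noise, axes=(1, 2))) ** 2
            return (pixel_mm * pixel_mm / (nx * ny)) * dft2.mean(axis=0)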

  16. ETR/ITER systems code

    SciTech Connect

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  17. ITER Port Interspace Pressure Calculations

    SciTech Connect

    Carbajo, Juan J; Van Hove, Walter A

    2016-01-01

    The ITER Vacuum Vessel (VV) is equipped with 54 access ports. Each of these ports has an opening in the bioshield that communicates with a dedicated port cell. During Tokamak operation, the bioshield opening must be closed with a concrete plug to shield the radiation coming from the plasma. This port plug separates the port cell into a Port Interspace (between VV closure lid and Port Plug) on the inner side and the Port Cell on the outer side. This paper presents calculations of pressures and temperatures in the ITER (Ref. 1) Port Interspace after a double-ended guillotine break (DEGB) of a pipe of the Tokamak Cooling Water System (TCWS) with high temperature water. It is assumed that this DEGB occurs during the worst possible conditions, which are during water baking operation, with water at a temperature of 523 K (250 C) and at a pressure of 4.4 MPa. These conditions are more severe than during normal Tokamak operation, with the water at 398 K (125 C) and 2 MPa. Two computer codes are employed in these calculations: RELAP5-3D Version 4.2.1 (Ref. 2) to calculate the blowdown releases from the pipe break, and MELCOR, Version 1.8.6 (Ref. 3) to calculate the pressures and temperatures in the Port Interspace. A sensitivity study has been performed to optimize some flow areas.

  18. Iterated Stretching of Viscoelastic Jets

    NASA Technical Reports Server (NTRS)

    Chang, Hsueh-Chia; Demekhin, Evgeny A.; Kalaidin, Evgeny

    1999-01-01

    We examine, with asymptotic analysis and numerical simulation, the iterated stretching dynamics of FENE and Oldroyd-B jets of initial radius $r_0$, shear viscosity $\nu$, Weissenberg number $We$, retardation number $S$, and capillary number $Ca$. The usual Rayleigh instability stretches the local uniaxial extensional flow region near a minimum in jet radius into a primary filament of radius $[Ca(1-S)/We]^{1/2} r_0$ between two beads. The strain rate within the filament remains constant while its radius (elastic stress) decreases (increases) exponentially in time with a long elastic relaxation time $3We\, r_0^2/\nu$. Instabilities convected from the bead relieve the tension at the necks during this slow elastic drainage and trigger a filament recoil. Secondary filaments then form at the necks from the resulting stretching. This iterated stretching is predicted to occur successively, generating high-generation filaments of radius $r_n$ obeying $r_n/r_0 = \sqrt{2}\,[r_{n-1}/r_0]^{3/2}$, until finite-extensibility effects set in.
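
    The generation-to-generation recursion quoted above can be iterated directly; a tiny sketch (the primary-filament ratio passed in is an assumed example value, not one computed from the paper's parameters):

        import math

        def filament_radii(r0, n_gen, ratio0):
            # Radii of successive filament generations from the recursion
            # r_n / r_0 = sqrt(2) * (r_{n-1} / r_0)**1.5 quoted above;
            # ratio0 is the primary-filament ratio [Ca(1-S)/We]**0.5.
            ratios = [ratio0]
            for _ in range(n_gen - 1):
                ratios.append(math.sqrt(2.0) * ratios[-1] ** 1.5)
            return [r0 * q for q in ratios]

        # Example: a primary filament at 1% of the initial radius thins rapidly.
        print(filament_radii(r0=1.0, n_gen=4, ratio0=1e-2))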

  19. Experimental Evidence on Iterated Reasoning in Games

    PubMed Central

    Grehl, Sascha; Tutić, Andreas

    2015-01-01

    We present experimental evidence on two forms of iterated reasoning in games, i.e. backward induction and interactive knowledge. Besides reliable estimates of the cognitive skills of the subjects, our design allows us to disentangle two possible explanations for the observed limits in performed iterated reasoning: Restrictions in subjects’ cognitive abilities and their beliefs concerning the rationality of co-players. In comparison to previous literature, our estimates regarding subjects’ skills in iterated reasoning are quite pessimistic. Also, we find that beliefs concerning the rationality of co-players are completely irrelevant in explaining the observed limited amount of iterated reasoning in the dirty faces game. In addition, it is demonstrated that skills in backward induction are a solid predictor for skills in iterated knowledge, which points to some generalized ability of the subjects in iterated reasoning. PMID:26312486

  20. Iterative LQG Controller Design Through Closed-Loop Identification

    NASA Technical Reports Server (NTRS)

    Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.

    1996-01-01

    This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed from the previous cycle. Then the identified open-loop model is used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.
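
    One redesign cycle is mechanically simple once identification has produced a model and a Kalman gain. The sketch below shows only that step; the closed-loop identification is assumed to have already supplied (A, B, C) and L, and the weights Q, R and the predictor-form compensator structure are assumptions of this sketch, not details taken from the paper.

        import numpy as np
        from scipy.linalg import solve_discrete_are

        def lqg_redesign(A, B, C, L, Q, R):
            # One controller-redesign step: given an identified open-loop model
            # (A, B, C) and an identified Kalman filter gain L, recompute the
            # LQR state feedback and assemble the LQG compensator.
            P = solve_discrete_are(A, B, Q, R)
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # state-feedback gain
            # Predictor-form compensator dynamics:
            #   xhat+ = A xhat + B u + L (y - C xhat),  u = -K xhat
            Ac = A - B @ K - L @ C
            return K, Ac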

  1. Preconditioned iterations to calculate extreme eigenvalues

    SciTech Connect

    Brand, C.W.; Petrova, S.

    1994-12-31

    Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
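
    For orientation, the classical Davidson iteration that these methods build on expands a search subspace with a diagonally preconditioned residual at each step. The sketch below is the textbook symmetric-matrix version, not the truncated variant the authors examine.

        import numpy as np

        def davidson_smallest(A, n_iter=30, tol=1e-8):
            # Basic Davidson iteration for the smallest eigenpair of a
            # symmetric matrix A (dense toy version).
            n = A.shape[0]
            V = np.zeros((n, 0))
            t = np.ones(n) / np.sqrt(n)             # initial expansion vector
            d = np.diag(A)
            for _ in range(n_iter):
                t -= V @ (V.T @ t)                  # orthogonalize against subspace
                norm = np.linalg.norm(t)
                if norm < 1e-14:
                    break
                V = np.hstack([V, (t / norm)[:, None]])
                H = V.T @ A @ V                     # Rayleigh-Ritz projection
                theta, s = np.linalg.eigh(H)
                theta, s = theta[0], s[:, 0]        # smallest Ritz pair
                u = V @ s
                r = A @ u - theta * u               # residual
                if np.linalg.norm(r) < tol:
                    break
                # Davidson preconditioner: (diag(A) - theta)^-1 applied to r.
                t = r / np.where(np.abs(d - theta) > 1e-10, d - theta, 1e-10)
            return theta, u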

  2. A simple and flexible graphical approach for adaptive group-sequential clinical trials.

    PubMed

    Sugitani, Toshifumi; Bretz, Frank; Maurer, Willi

    2016-01-01

    In this article, we introduce a graphical approach to testing multiple hypotheses in group-sequential clinical trials allowing for midterm design modifications. It is intended for structured study objectives in adaptive clinical trials and extends the graphical group-sequential designs from Maurer and Bretz (Statistics in Biopharmaceutical Research 2013; 5: 311-320) to adaptive trial designs. The resulting test strategies can be visualized graphically and performed iteratively. We illustrate the methodology with two examples from our clinical trial practice. First, we consider a three-armed gold-standard trial with the option to reallocate patients to either the test drug or the active control group, while stopping the recruitment of patients to placebo, after having demonstrated superiority of the test drug over placebo at an interim analysis. Second, we consider a confirmatory two-stage adaptive design with treatment selection at interim.

  3. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    SciTech Connect

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-21

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements, are discussed.

  4. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    NASA Astrophysics Data System (ADS)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-01

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements, are discussed.

  5. Sequence analysis by iterated maps, a review.

    PubMed

    Almeida, Jonas S

    2014-05-01

    Among alignment-free methods, Iterated Maps (IMs) are at a particular extreme: they are also scale free (order free). The use of IMs for sequence analysis is also distinct from other alignment-free methodologies in being rooted in statistical mechanics instead of computational linguistics. Both of these roots go back over two decades to the use of fractal geometry in the characterization of phase-space representations. The time series analysis origin of the field is betrayed by the title of the manuscript that started this alignment-free subdomain in 1990, 'Chaos Game Representation'. The clash between the analysis of sequences as continuous series and the better established use of Markovian approaches to discrete series was almost immediate, with a defining critique published in the same journal two years later. The rest of that decade would go by before the scale-free nature of the IM space was uncovered. The ensuing decade saw this scalability generalized for non-genomic alphabets, as well as an interest in its use for graphic representation of biological sequences. Finally, in the past couple of years, in step with the emergence of BigData and MapReduce as a new computational paradigm, there is a surprising third act in the IM story. Multiple reports have described gains in computational efficiency of multiple orders of magnitude over more conventional sequence analysis methodologies. The stage now appears to be set for a recasting of IMs with a central role in processing next-generation sequencing results.

  6. Sequence analysis by iterated maps, a review.

    PubMed

    Almeida, Jonas S

    2014-05-01

    Among alignment-free methods, Iterated Maps (IMs) are at a particular extreme: they are also scale free (order free). The use of IMs for sequence analysis is also distinct from other alignment-free methodologies in being rooted in statistical mechanics instead of computational linguistics. Both of these roots go back over two decades to the use of fractal geometry in the characterization of phase-space representations. The time series analysis origin of the field is betrayed by the title of the manuscript that started this alignment-free subdomain in 1990, 'Chaos Game Representation'. The clash between the analysis of sequences as continuous series and the better established use of Markovian approaches to discrete series was almost immediate, with a defining critique published in the same journal two years later. The rest of that decade would go by before the scale-free nature of the IM space was uncovered. The ensuing decade saw this scalability generalized for non-genomic alphabets, as well as an interest in its use for graphic representation of biological sequences. Finally, in the past couple of years, in step with the emergence of BigData and MapReduce as a new computational paradigm, there is a surprising third act in the IM story. Multiple reports have described gains in computational efficiency of multiple orders of magnitude over more conventional sequence analysis methodologies. The stage now appears to be set for a recasting of IMs with a central role in processing next-generation sequencing results. PMID:24162172

  7. Mixed Confidence Estimation for Iterative CT Reconstruction.

    PubMed

    Perlmutter, David S; Kim, Soo Mee; Kinahan, Paul E; Alessio, Adam M

    2016-09-01

    Dynamic (4D) CT imaging is used in a variety of applications, but the two major drawbacks of the technique are its increased radiation dose and longer reconstruction time. Here we present a statistical analysis of our previously proposed Mixed Confidence Estimation (MCE) method that addresses both these issues. This method, where framed iterative reconstruction is only performed on the dynamic regions of each frame while static regions are fixed across frames to a composite image, was proposed to reduce computation time. In this work, we generalize the previous method to describe any application where a portion of the image is known with higher confidence (static, composite, lower-frequency content, etc.) and a portion of the image is known with lower confidence (dynamic, targeted, etc.). We show that by splitting the image space into higher and lower confidence components, MCE can lower the estimator variance in both regions compared to conventional reconstruction. We present a theoretical argument for this reduction in estimator variance and verify this argument with proof-of-principle simulations. We also propose a fast approximation of the variance of images reconstructed with MCE and confirm that this approximation is accurate compared to analytic calculations and multi-realization estimates of image variance. This MCE method requires less computation time and provides reduced image variance for imaging scenarios where portions of the image are known with more certainty than others, allowing for potentially reduced radiation dose and/or improved dynamic imaging. PMID:27008663

  8. Morbidity statistics

    PubMed Central

    Smith, Alwyn

    1969-01-01

    This paper is based on an analysis of questionnaires sent to the health ministries of Member States of WHO asking for information about the extent, nature, and scope of morbidity statistical information. It is clear that most countries collect some statistics of morbidity and many countries collect extensive data. However, few countries relate their collection to the needs of health administrators for information, and many countries collect statistics principally for publication in annual volumes which may appear anything up to 3 years after the year to which they refer. The desiderata of morbidity statistics may be summarized as reliability, representativeness, and relevance to current health problems. PMID:5306722

  9. Planning as an Iterative Process

    NASA Technical Reports Server (NTRS)

    Smith, David E.

    2012-01-01

    Activity planning for missions such as the Mars Exploration Rover mission presents many technical challenges, including oversubscription, consideration of time, concurrency, resources, preferences, and uncertainty. These challenges have all been addressed by the research community to varying degrees, but significant technical hurdles still remain. In addition, the integration of these capabilities into a single planning engine remains largely unaddressed. However, I argue that there is a deeper set of issues that needs to be considered, namely the integration of planning into an iterative process that begins before the goals, objectives, and preferences are fully defined. This introduces a number of technical challenges for planning, including the ability to more naturally specify and utilize constraints on the planning process, the ability to generate multiple qualitatively different plans, and the ability to provide deep explanation of plans.

  10. ITER Safety Analyses with ISAS

    NASA Astrophysics Data System (ADS)

    Gulden, W.; Nisan, S.; Porfiri, M.-T.; Toumi, I.; Boubée de Gramont, T.

    1997-06-01

    Detailed analyses of accident sequences for the International Thermonuclear Experimental Reactor (ITER), from an initiating event to the environmental release of activity, have involved in the past the use of different types of computer codes in a sequential manner. Since these codes were developed at different time scales in different countries, there is no common computing structure to enable automatic data transfer from one code to the other, and no possibility exists to model or to quantify the effect of coupled physical phenomena. To solve this problem, the Integrated Safety Analysis System of codes (ISAS) is being developed, which allows users to integrate existing computer codes in a coherent manner. This approach is based on the utilization of a command language (GIBIANE) acting as a “glue” to integrate the various codes as modules of a common environment. The present version of ISAS allows comprehensive (coupled) calculations of a chain of codes such as ATHENA (thermal-hydraulic analysis of transients and accidents), INTRA (analysis of in-vessel chemical reactions, pressure build-up, and distribution of reaction products inside the vacuum vessel and adjacent rooms), and NAUA (transport of radiological species within buildings and to the environment). In the near future, the integration of SAFALY (simultaneous analysis of plasma dynamics and thermal behavior of in-vessel components) is also foreseen. The paper briefly describes the essential features of ISAS development and the associated software architecture. It gives first results of a typical ITER accident sequence, a loss of coolant accident (LOCA) in the divertor cooling loop inside the vacuum vessel, amply demonstrating ISAS capabilities.

  11. Experimental adaptive Bayesian tomography

    NASA Astrophysics Data System (ADS)

    Kravtsov, K. S.; Straupe, S. S.; Radchenko, I. V.; Houlsby, N. M. T.; Huszár, F.; Kulik, S. P.

    2013-06-01

    We report an experimental realization of an adaptive quantum state tomography protocol. Our method takes advantage of a Bayesian approach to statistical inference and is naturally tailored for adaptive strategies. For pure states, we observe close to $N^{-1}$ scaling of infidelity with the overall number of registered events, while the best nonadaptive protocols allow for $N^{-1/2}$ scaling only. Experiments are performed for polarization qubits, but the approach is readily adapted to any dimension.

  12. Performance evaluation of iterative reconstruction algorithms for achieving CT radiation dose reduction - a phantom study.

    PubMed

    Dodge, Cristina T; Tamm, Eric P; Cody, Dianna D; Liu, Xinming; Jensen, Corey T; Wei, Wei; Kundra, Vikas; Rong, X John

    2016-01-01

    The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR), over a range of typical to low-dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat-equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back-projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low-contrast detectability were evaluated from noise and contrast-to-noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five-fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high-contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR
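
    The noise and CNR figures quoted above follow common phantom-ROI definitions; one standard form is sketched below (the paper's exact ROI protocol is not reproduced here, so treat this as a generic illustration).

        import numpy as np

        def noise_and_cnr(roi_object, roi_background):
            # noise = standard deviation of HU in a uniform background ROI;
            # CNR   = |mean HU difference| / background standard deviation.
            noise = np.std(roi_background)
            cnr = abs(np.mean(roi_object) - np.mean(roi_background)) / noise
            return noise, cnr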

  13. Statistical Diversions

    ERIC Educational Resources Information Center

    Petocz, Peter; Sowey, Eric

    2012-01-01

    The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the statistical…

  14. An accelerated subspace iteration for eigenvector derivatives

    NASA Technical Reports Server (NTRS)

    Ting, Tienko

    1991-01-01

    An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.

  15. Iterative methods for weighted least-squares

    SciTech Connect

    Bobrovnikova, E.Y.; Vavasis, S.A.

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.
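
    The textbook iterative route to this problem is conjugate gradients on the weighted normal equations, which is precisely where severe ill-conditioning of the weight matrix lets round-off destroy convergence. The sketch below implements that baseline (without the paper's reorthogonalization fix) so the failure mode being addressed is concrete; all names are illustrative.

        import numpy as np

        def wls_cg(A, b, w, n_iter=200, tol=1e-10):
            # Plain CG on (A^T W A) x = A^T W b with diagonal weights w.
            # With a very ill-conditioned W this is exactly the regime where
            # round-off degrades CG, motivating reorthogonalization.
            W = np.diag(w)
            M, rhs = A.T @ W @ A, A.T @ W @ b
            x = np.zeros(A.shape[1])
            r = rhs - M @ x
            p = r.copy()
            rs = r @ r
            for _ in range(n_iter):
                Mp = M @ p
                alpha = rs / (p @ Mp)
                x += alpha * p
                r -= alpha * Mp
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x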

  16. Rater Variables Associated with ITER Ratings

    ERIC Educational Resources Information Center

    Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin

    2013-01-01

    Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of…

  17. New concurrent iterative methods with monotonic convergence

    SciTech Connect

    Yao, Qingchuan

    1996-12-31

    This paper proposes new concurrent iterative methods, which use no derivatives, for finding all zeros of polynomials simultaneously. The new methods converge monotonically for both simple and multiple real zeros of polynomials and are quadratically convergent. The corresponding accelerated concurrent iterative methods are obtained too. The new methods are good candidates for application in solving symmetric eigenproblems.

  18. Fixed Point Transformations Based Iterative Control of a Polymerization Reaction

    NASA Astrophysics Data System (ADS)

    Tar, József K.; Rudas, Imre J.

    As a paradigm of strongly coupled non-linear multi-variable dynamic systems, the mathematical model of the free-radical polymerization of methyl methacrylate with azobisisobutyronitrile as an initiator and toluene as a solvent, taking place in a jacketed Continuous Stirred Tank Reactor (CSTR), is considered. In the adaptive control of this system only a single input variable is used as the control signal (the process input, i.e. the dimensionless volumetric flow rate of the initiator), and a single output variable is observed (the process output, i.e. the number-average molecular weight of the polymer). Simulation examples illustrate that, on the basis of a very rough and primitive model consisting of two scalar variables, various fixed-point-transformation-based convergent iterations result in a novel, sophisticated adaptive control.

  19. Statistics Clinic

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  20. TRANSP simulations of ITER plasmas

    SciTech Connect

    Budny, R.V.; McCune, D.C.; Redi, M.H.; Schivell, J.; Wieland, R.M.

    1995-12-01

    The TRANSP code is used to construct comprehensive, self-consistent models for ITER discharges. Plasma parameters are studied for two discharges from the ITER "Interim Design" database producing 1.5 GW fusion power with a plasma current of 21 MA and 20 toroidal field coils generating 5.7 T. Steady state profiles for T{sub ion}, T{sub e}, n{sub e}, Z{sub eff}, and P{sub rad} from the database are specified. TRANSP models the full up/down asymmetric plasma boundary within the separatrix. Effects of high-energy neutral beam injection, sawtooth mixing, toroidal field ripple, and helium ash transport are included. Results are given for the fusion rate profiles, and for parameters describing effects such as alpha particle slowing down, the heating of electrons and thermal ions, and the thermalization rates. The deposition of 1 MeV neutral beam ions is predicted to peak near the plasma center, and the average beam ion energy is predicted to be half the injected energy. Sawtooth mixing is predicted to broaden the fast alpha profile. The toroidal ripple loss rate of alpha energy is estimated to be 3% before sawtooth crashes and to increase by a factor of three to four immediately following sawtooth crashes. Assumptions for the thermal He transport and the He recycling coefficient at the boundary are discussed. If the ratio of helium and energy confinement times, {tau}*{sub He}/{tau}{sub E}, is less than 15, the steady state fusion power is predicted to be 1.5 GW or greater. The values of the transport coefficients required for this fusion power depend on the He recycling coefficient at the separatrix. If R{sub rec} is near 1, the required He diffusivity must be much larger than that measured in tokamaks. If R{sub rec} {le} 0.50, and if the inward pinch is small, values comparable to those measured are compatible with 1.5 GW.

  1. On the interplay between inner and outer iterations for a class of iterative methods

    SciTech Connect

    Giladi, E.

    1994-12-31

    Iterative algorithms for solving linear systems of equations often involve the solution of a subproblem at each step. This subproblem is usually another linear system of equations. For example, a preconditioned iteration involves the solution of a preconditioner at each step. In this paper, the author considers algorithms for which the subproblem is also solved iteratively. The subproblem is then said to be solved by "inner iterations", while the term "outer iteration" refers to a step of the basic algorithm. The cost of performing an outer iteration is dominated by the solution of the subproblem, and can be measured by the number of inner iterations. A good measure of the total amount of work needed to solve the original problem to some accuracy c is then the total number of inner iterations. To lower the amount of work, one can consider solving the subproblems "inexactly", i.e. not to full accuracy. Although this diminishes the cost of solving each subproblem, it usually slows down the convergence of the outer iteration. It is therefore interesting to study the effect of solving each subproblem inexactly on the total amount of work. Specifically, the author considers strategies in which the accuracy to which the inner problem is solved changes from one outer iteration to the other. The author seeks the 'optimal strategy', that is, the one that yields the lowest possible cost. Here, the author develops a methodology to find the optimal strategy, from the set of slowly varying strategies, for some iterative algorithms. This methodology is applied to the Chebychev iteration and it is shown that for the Chebychev iteration, a strategy in which the inner tolerance remains constant is optimal. The author also estimates this optimal constant. Then generalizations to other iterative procedures are discussed.
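
    The trade-off is easy to see in code: an outer correction scheme whose preconditioner solve is itself an iterative method accepts a per-step inner tolerance, and the total work is the sum of inner iterations. The sketch below uses a Richardson outer iteration for simplicity (the paper's analysis concerns the Chebychev iteration, not shown); all names and tolerances are illustrative.

        import numpy as np

        def cg_solve(A, b, rtol):
            # Plain CG to relative tolerance rtol: the "inner iteration".
            x = np.zeros_like(b); r = b.copy(); p = r.copy(); rs = r @ r
            b_norm, n_inner = np.linalg.norm(b), 0
            while np.sqrt(rs) > rtol * b_norm:
                Ap = A @ p; alpha = rs / (p @ Ap)
                x += alpha * p; r -= alpha * Ap
                rs_new = r @ r; p = r + (rs_new / rs) * p; rs = rs_new
                n_inner += 1
            return x, n_inner

        def inexact_richardson(A, M, b, inner_tols):
            # Outer iteration x <- x + z, where the preconditioner solve
            # M z = (b - A x) is performed inexactly by CG to a tolerance
            # that may vary per outer step (one entry of inner_tols each).
            x, total_inner = np.zeros_like(b), 0
            for tol in inner_tols:
                r = b - A @ x
                z, n = cg_solve(M, r, tol)    # inexact inner solve
                x += z
                total_inner += n
            return x, total_inner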

  2. DSC -- Disruption Simulation Code for Tokamaks and ITER applications

    NASA Astrophysics Data System (ADS)

    Galkin, S. A.; Grubert, J. E.; Zakharov, L. E.

    2010-11-01

    Arguably the most important issue facing the further development of magnetic fusion via advanced tokamaks is to predict, avoid, or mitigate disruptions. This has recently become one of the most challenging topics in fusion research because of several potentially damaging effects which could impact the ITER device. To address this issue, two versions of a new 3D adaptive Disruption Simulation Code (DSC) will be developed. The first version will solve the ideal reduced 3D MHD model in the real geometry with a thin conducting wall structure, utilizing the adaptive meshless technique. The second version will solve the resistive reduced 3D MHD model in the real geometry of the conducting structure of the tokamak vessel and will finally be parallelized. The DSC will be calibrated against the JET disruption data and will be capable of predicting the disruption effects in ITER, as well as contributing to the development of the disruption mitigation scheme and suppression of runaway electron (RE) generation. The progress on the first version of the 3D DSC development will be presented.

  3. SEER Statistics

    Cancer.gov

    The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute works to provide information on cancer statistics in an effort to reduce the burden of cancer among the U.S. population.

  4. Cancer Statistics

    MedlinePlus

    ... cancer statistics across the world. U.S. Cancer Mortality Trends The best indicator of progress against cancer is ... the number of cancer survivors has increased. These trends show that progress is being made against the ...

  5. Statistical Physics

    NASA Astrophysics Data System (ADS)

    Hermann, Claudine

    Statistical Physics bridges the properties of a macroscopic system and the microscopic behavior of its constituent particles, a connection otherwise impossible to establish due to the giant magnitude of Avogadro's number. Numerous systems of today's key technologies - such as semiconductors or lasers - are macroscopic quantum objects; only statistical physics allows for understanding their fundamentals. Therefore, this graduate text also focuses on particular applications, such as the properties of electrons in solids, and on radiation thermodynamics and the greenhouse effect.

  6. Progress on ITER Diagnostic Integration

    NASA Astrophysics Data System (ADS)

    Johnson, David; Feder, Russ; Klabacha, Jonathan; Loesser, Doug; Messineo, Mike; Stratton, Brentley; Wood, Rick; Zhai, Yuhu; Andrew, Phillip; Barnsley, Robin; Bertschinger, Guenter; Debock, Maarten; Reichle, Roger; Udintsev, Victor; Vayakis, George; Watts, Christopher; Walsh, Michael

    2013-10-01

    On ITER, front-end components must operate reliably in a hostile environment. Many will be housed in massive port plugs, which also shield the machine from radiation. Multiple diagnostics reside in a single plug, presenting new challenges for developers. Front-end components must tolerate thermally induced stresses, disruption-induced mechanical loads, stray ECH radiation, displacement damage, and degradation due to plasma-induced coatings. The impact of failures is amplified by the difficulty of performing robotic maintenance on these large structures. Motivated by the need to minimize disruption loads on the plugs, standardize the handling of shield modules, and decouple the parallel efforts of the many parties, the packaging strategy for diagnostics has recently focused on the use of 3 vertical shield modules inserted from the plasma side into each equatorial plug structure. At the front of each is a detachable first wall element with customized apertures. Progress on US equatorial and upper plugs will be used as examples, including the layout of components in the interspace and port cell regions. Supported by PPPL under contract DE-AC02-09CH11466 and UT-Battelle, LLC under contract DE-AC05-00OR22725 with the U.S. DOE.

  7. Iterants, Fermions and Majorana Operators

    NASA Astrophysics Data System (ADS)

    Kauffman, Louis H.

    Beginning with an elementary, oscillatory discrete dynamical system associated with the square root of minus one, we study both the foundations of mathematics and physics. Position and momentum do not commute in our discrete physics. Their commutator is related to the diffusion constant for a Brownian process and to the Heisenberg commutator in quantum mechanics. We take John Wheeler's idea of It from Bit as an essential clue and we rework the structure of that bit to a logical particle that is its own anti-particle, a logical Majorana particle. This is our key example of the amphibian nature of mathematics and the external world. We show how the dynamical system for the square root of minus one is essentially the dynamics of a distinction whose self-reference leads to both the fusion algebra and the operator algebra for the Majorana fermion. In the course of this, we develop an iterant algebra that supports all of matrix algebra and we end the essay with a discussion of the Dirac equation based on these principles.

  8. Magnetic fusion and project ITER

    SciTech Connect

    Park, H.K.

    1992-01-01

    It has already been demonstrated that our economies and international relationships are impacted by energy crises. For the continuing prosperity of the human race, a new and viable energy source must be developed within the next century. It is evident that the cost will be high and that achieving this goal will require a long-term commitment, given the high degree of technological and scientific knowledge involved. Energy from controlled nuclear fusion is safe, competitive, and environmentally attractive, but has not yet been completely conquered. Magnetic fusion is one of the most difficult technological challenges. In modern magnetic fusion devices, temperatures significantly higher than those of the sun have been achieved routinely, and the successful generation of tens of millions of watts as a result of scientific break-even is expected from the deuterium and tritium experiment within the next few years. For a practical future fusion reactor, we need to develop reactor-relevant materials and technologies. The international project called "International Thermonuclear Experimental Reactor (ITER)" will fulfill this need, and the success of this project will provide the most attractive long-term energy source for mankind.

  9. Magnetic fusion and project ITER

    SciTech Connect

    Park, H.K.

    1992-09-01

    It has already been demonstrated that our economies and international relationships are impacted by energy crises. For the continuing prosperity of the human race, a new and viable energy source must be developed within the next century. It is evident that the cost will be high and that achieving this goal will require a long-term commitment, given the high degree of technological and scientific knowledge involved. Energy from controlled nuclear fusion is safe, competitive, and environmentally attractive, but has not yet been completely conquered. Magnetic fusion is one of the most difficult technological challenges. In modern magnetic fusion devices, temperatures significantly higher than those of the sun have been achieved routinely, and the successful generation of tens of millions of watts as a result of scientific break-even is expected from the deuterium and tritium experiment within the next few years. For a practical future fusion reactor, we need to develop reactor-relevant materials and technologies. The international project called "International Thermonuclear Experimental Reactor (ITER)" will fulfill this need, and the success of this project will provide the most attractive long-term energy source for mankind.

  10. Adaptive management of watersheds and related resources

    USGS Publications Warehouse

    Williams, Byron K.

    2009-01-01

    The concept of learning about natural resources through the practice of management has been around for several decades and by now is associated with the term adaptive management. The objectives of this paper are to offer a framework for adaptive management that includes an operational definition, a description of conditions in which it can be usefully applied, and a systematic approach to its application. Adaptive decisionmaking is described as iterative, learning-based management in two phases, each with its own mechanisms for feedback and adaptation. The linkages between traditional experimental science and adaptive management are discussed.

  11. A component analysis based on serial results analyzing performance of parallel iterative programs

    SciTech Connect

    Richman, S.C.

    1994-12-31

    This research is concerned with the parallel performance of iterative methods for solving large, sparse, nonsymmetric linear systems. Most of the iterative methods are first presented with their time costs and convergence rates examined intensively on sequential machines, and then adapted to parallel machines. The analysis of parallel iterative performance is more complicated than that of serial performance, since the former can be affected by many new factors, such as data communication schemes, the number of processors used, and ordering and mapping techniques. Although the author is able to summarize results from data obtained after examining certain cases by experiments, two questions remain: (1) How to explain the results obtained? (2) How to extend the results from the certain cases to general cases? To answer these two questions quantitatively, the author introduces a tool called component analysis based on serial results. This component analysis is introduced because the iterative methods consist mainly of several basic functions such as linked triads, inner products, and triangular solves, which have different intrinsic parallelisms and are suitable for different parallel techniques. The parallel performance of each iterative method is first expressed as a weighted sum of the parallel performance of the basic functions that are the components of the method. Then, one separately examines the performance of the basic functions and the weighting distributions of the iterative methods, from which two independent sets of information are obtained when solving a given problem. In this component approach, all the weightings require only serial costs, not parallel costs, and each iterative method for solving a given problem is represented by its unique weighting distribution. The information given by the basic functions is independent of the iterative method, while that given by the weightings is independent of parallel technique, parallel machine, and number of processors.

  12. An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction

    PubMed Central

    Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua

    2015-01-01

    Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criterion can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing algorithms related to SIR unavoidably suffer from a heavy computation load and slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the above-mentioned issues of general SIR algorithms, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, termed “ALM-ANAD”. The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm can achieve noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index metrics. PMID:26495975

  13. Sequence analysis by iterated maps, a review

    PubMed Central

    2014-01-01

    Among alignment-free methods, Iterated Maps (IMs) are on a particular extreme: they are also scale free (order free). The use of IMs for sequence analysis is also distinct from other alignment-free methodologies in being rooted in statistical mechanics instead of computational linguistics. Both of these roots go back over two decades to the use of fractal geometry in the characterization of phase-space representations. The time series analysis origin of the field is betrayed by the title of the manuscript that started this alignment-free subdomain in 1990, ‘Chaos Game Representation’. The clash between the analysis of sequences as continuous series and the better established use of Markovian approaches to discrete series was almost immediate, with a defining critique published in the same journal 2 years later. The rest of that decade would go by before the scale-free nature of the IM space was uncovered. The ensuing decade saw this scalability generalized for non-genomic alphabets as well as an interest in its use for graphic representation of biological sequences. Finally, in the past couple of years, in step with the emergence of Big Data and MapReduce as a new computational paradigm, there is a surprising third act in the IM story. Multiple reports have described gains in computational efficiency of multiple orders of magnitude over more conventional sequence analysis methodologies. The stage now appears to be set for a recasting of IMs with a central role in processing next-generation sequencing results. PMID:24162172

  14. The Physics Basis of ITER Confinement

    SciTech Connect

    Wagner, F.

    2009-02-19

    ITER will be the first fusion reactor, and the 50-year-old dream of fusion scientists will become reality. The quality of magnetic confinement will determine the success of ITER, directly in the form of the confinement time and indirectly because it determines the plasma parameters and the fluxes which cross the separatrix and have to be handled externally by technical means. This lecture portrays some of the basic principles which govern plasma confinement, uses dimensionless scaling to set the limits for the predictions for ITER, an approach which also shows the limitations of those predictions, and briefly describes the major characteristics and physics behind the H-mode--the preferred confinement regime of ITER.

  15. Archimedes' Pi--An Introduction to Iteration.

    ERIC Educational Resources Information Center

    Lotspeich, Richard

    1988-01-01

    One method (attributed to Archimedes) of approximating pi offers a simple yet interesting introduction to one of the basic ideas of numerical analysis, an iteration sequence. The method is described and elaborated. (PK)
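
    The abstract only names the idea, so here is a minimal sketch of the polygon-doubling iteration usually attributed to Archimedes (the starting hexagon values and variable names are illustrative, not taken from the ERIC record):

```python
# Archimedes-style iteration for pi: a and b are the perimeter/diameter
# ratios of circumscribed and inscribed regular polygons, so b < pi < a.
# Doubling the side count applies a harmonic mean, then a geometric mean.
import math

a = 2 * math.sqrt(3.0)  # circumscribed hexagon
b = 3.0                 # inscribed hexagon

for n in range(6):                 # six doublings: hexagon -> 384-gon
    a = 2 * a * b / (a + b)        # circumscribed 2n-gon (harmonic mean)
    b = math.sqrt(a * b)           # inscribed 2n-gon (geometric mean of new a, old b)
    print(f"{6 * 2 ** (n + 1):4d}-gon: {b:.6f} < pi < {a:.6f}")
```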

  16. Anderson Acceleration for Fixed-Point Iterations

    SciTech Connect

    Walker, Homer F.

    2015-08-31

    The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
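
    The abstract describes the grant rather than the algorithm itself, but a minimal sketch of Anderson acceleration for a fixed-point iteration x = g(x) may help orient the reader (the window size m and all names are illustrative, and no breakdown safeguards are included):

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, maxit=100):
    """Anderson acceleration for x = g(x): extrapolate using a least-squares
    combination of the last m residual/iterate differences."""
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    f = gx - x                        # residual of the plain iteration
    X, G = [x], [gx]                  # iterate and g-value histories
    for _ in range(maxit):
        if np.linalg.norm(f) < tol:
            break
        if len(X) > 1:
            # Differences of residuals and of g-values over the window.
            dF = np.column_stack([(G[i + 1] - X[i + 1]) - (G[i] - X[i])
                                  for i in range(len(X) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(X) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma       # accelerated update
        else:
            x = gx                    # plain Picard step to start
        gx = g(x)
        f = gx - x
        X.append(x); G.append(gx)
        if len(X) > m + 1:            # sliding window of size m
            X.pop(0); G.pop(0)
    return x

# Example: solve x = cos(x).
print(anderson(np.cos, np.array([1.0])))
```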

  17. On the safety of ITER accelerators.

    PubMed

    Li, Ge

    2013-01-01

    Three 1 MV/40 A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MW of thermal power by 2026 and may serve as a green energy roadmap for the world. The accelerators will generate -1 MV, 1 h long-pulse ion beams to be neutralised for plasma heating. Because vacuum sparking occurs frequently in the accelerators, snubbers are used to limit the fault arc current and improve ITER safety. However, recent analyses of the reference design have raised concerns. A general nonlinear transformer theory is developed for the snubber, unifying the differing design models of earlier snubbers with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER.

  18. Novel aspects of plasma control in ITER

    NASA Astrophysics Data System (ADS)

    Humphreys, D.; Ambrosino, G.; de Vries, P.; Felici, F.; Kim, S. H.; Jackson, G.; Kallenbach, A.; Kolemen, E.; Lister, J.; Moreau, D.; Pironti, A.; Raupp, G.; Sauter, O.; Schuster, E.; Snipes, J.; Treutterer, W.; Walker, M.; Welander, A.; Winter, A.; Zabeo, L.

    2015-02-01

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  19. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  20. Iterative methods based upon residual averaging

    NASA Technical Reports Server (NTRS)

    Neuberger, J. W.

    1980-01-01

    Iterative methods for solving boundary value problems for systems of nonlinear partial differential equations are discussed. The methods involve subtracting an average of residuals from one approximation in order to arrive at a subsequent approximation. Two abstract methods in Hilbert space are given and application of these methods to quasilinear systems to give numerical schemes for such problems is demonstrated. Potential theoretic matters related to the iteration schemes are discussed.

  1. Threshold power and energy confinement for ITER

    SciTech Connect

    Takizuka, T.

    1996-12-31

    In order to predict the threshold power for the L-H transition and the energy confinement performance in ITER, the assembly and analysis of databases have progressed. The ITER Threshold Database includes data from 10 divertor tokamaks. Investigation of the database gives a scaling of the threshold power of the form $P_{\mathrm{thr}} \propto B_t n_e^{0.75} R^2 \times (n_e R^2)^{\pm 0.25}$, which predicts $P_{\mathrm{thr}} = 100 \times 2^{0 \pm 1}$ MW for ITER at $n_e = 5 \times 10^{19}\,\mathrm{m}^{-3}$. The ITER L-mode Confinement Database has also been expanded with data from 14 tokamaks. A scaling of the thermal energy confinement time in L-mode and ohmic phases is obtained: $\tau_{\mathrm{th}} \sim I_p R^{1.8} n_e^{0.4} P^{-0.73}$. At the ITER parameters it is about 2.2 s. For ignition in ITER, more than a 2.5-fold improvement over L-mode will be required. The ITER H-mode Confinement Database has been expanded from data of 6 tokamaks to data of 11 tokamaks. A $\tau_{\mathrm{th}}$ scaling for ELMy H-mode obtained by a standard regression analysis predicts an ITER confinement time of $\tau_{\mathrm{th}} = 6 \times (1 \pm 0.3)$ s. The degradation of $\tau_{\mathrm{th}}$ with increasing $n_e R^2$ (or decreasing $\rho_*$) is not found for ELMy H-mode. An offset-linear scaling law with a dimensionally correct form also predicts nearly the same $\tau_{\mathrm{th}}$ value.

  2. Novel aspects of plasma control in ITER

    SciTech Connect

    Humphreys, D.; Jackson, G.; Walker, M.; Welander, A.; Ambrosino, G.; Pironti, A.; Felici, F.; Kallenbach, A.; Raupp, G.; Treutterer, W.; Kolemen, E.; Lister, J.; Sauter, O.; Moreau, D.; Schuster, E.

    2015-02-15

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  3. Radiation Dose Reduction in Pediatric Body CT Using Iterative Reconstruction and a Novel Image-Based Denoising Method

    PubMed Central

    Yu, Lifeng; Fletcher, Joel G.; Shiung, Maria; Thomas, Kristen B.; Matsumoto, Jane M.; Zingula, Shannon N.; McCollough, Cynthia H.

    2016-01-01

    OBJECTIVE The objective of this study was to evaluate the radiation dose reduction potential of a novel image-based denoising technique in pediatric abdominopelvic and chest CT examinations and compare it with a commercial iterative reconstruction method. MATERIALS AND METHODS Data were retrospectively collected from 50 (25 abdominopelvic and 25 chest) clinically indicated pediatric CT examinations. For each examination, a validated noise-insertion tool was used to simulate half-dose data, which were reconstructed using filtered back-projection (FBP) and sinogram-affirmed iterative reconstruction (SAFIRE) methods. A newly developed denoising technique, adaptive nonlocal means (aNLM), was also applied. For each of the 50 patients, three pediatric radiologists evaluated four datasets: full dose plus FBP, half dose plus FBP, half dose plus SAFIRE, and half dose plus aNLM. For each examination, the order of preference for the four datasets was ranked. The organ-specific diagnosis and diagnostic confidence for five primary organs were recorded. RESULTS The mean (± SD) volume CT dose index for the full-dose scan was 5.3 ± 2.1 mGy for abdominopelvic examinations and 2.4 ± 1.1 mGy for chest examinations. For abdominopelvic examinations, there was no statistically significant difference between the half dose plus aNLM dataset and the full dose plus FBP dataset (3.6 ± 1.0 vs 3.6 ± 0.9, respectively; p = 0.52), and aNLM performed better than SAFIRE. For chest examinations, there was no statistically significant difference between the half dose plus SAFIRE and the full dose plus FBP (4.1 ± 0.6 vs 4.2 ± 0.6, respectively; p = 0.67), and SAFIRE performed better than aNLM. For all organs, there was more than 85% agreement in organ-specific diagnosis among the three half-dose configurations and the full dose plus FBP configuration. CONCLUSION Although a novel image-based denoising technique performed better than a commercial iterative reconstruction method in pediatric

  4. EDITORIAL: ECRH physics and technology in ITER

    NASA Astrophysics Data System (ADS)

    Luce, T. C.

    2008-05-01

    It is a great pleasure to introduce you to this special issue containing papers from the 4th IAEA Technical Meeting on ECRH Physics and Technology in ITER, which was held 6-8 June 2007 at the IAEA Headquarters in Vienna, Austria. The meeting was attended by more than 40 ECRH experts representing 13 countries and the IAEA. Presentations given at the meeting were placed into five separate categories: EC wave physics (current understanding and extrapolation to ITER); application of EC waves to confinement and stability studies, including active control techniques for ITER; transmission systems/launchers (state of the art and ITER-relevant techniques); gyrotron development towards ITER needs; and system integration and optimisation for ITER. It is notable that the participants took seriously the focal point of ITER, rather than simply contributing presentations on general EC physics and technology. The application of EC waves to ITER presents new challenges not faced in the current generation of experiments from both the physics and technology viewpoints. High electron temperatures and the nuclear environment have a significant impact on the application of EC waves. The needs of ITER have also strongly motivated source and launcher development. Finally, the demonstrated ability for precision control of instabilities or non-inductive current drive in addition to bulk heating to fusion burn has secured a key role for EC wave systems in ITER. All of the participants were encouraged to submit their contributions to this special issue, subject to the normal publication and technical merit standards of Nuclear Fusion. Almost half of the participants chose to do so; many of the others had been published in other publications and therefore could not be included in this special issue. The papers included here are a representative sample of the meeting. The International Advisory Committee also asked the three summary speakers from the meeting to supply brief written summaries (O. Sauter

  5. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles.

    PubMed

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-08-13

    In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. While the scalability advantage is kept, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF.

  6. Iterative Frequency Domain Decision Feedback Equalization and Decoding for Underwater Acoustic Communications

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Ge, Jian-Hua

    2012-12-01

    Single-carrier (SC) transmission with frequency-domain equalization (FDE) is today recognized as an attractive alternative to orthogonal frequency-division multiplexing (OFDM) for communication applications with inter-symbol interference (ISI) caused by multi-path propagation, especially in shallow water channels. In this paper, we investigate an iterative receiver based on a minimum mean square error (MMSE) decision feedback equalizer (DFE) with symbol-rate and fractional-rate sampling in the frequency domain (FD) and a serially concatenated trellis coded modulation (SCTCM) decoder. Based on sound speed profiles (SSP) measured in the lake and a finite-element ray tracing (Bellhop) method, the shallow water channel is constructed to evaluate the performance of the proposed iterative receiver. Performance results show that the proposed iterative receiver can significantly improve performance and achieve better data transmission than FD linear and adaptive decision feedback equalizers, especially when adopting fractional-rate sampling.
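
    As a point of reference for the equalization step only, here is a minimal sketch of single-carrier MMSE frequency-domain equalization over a known multipath channel (the BPSK symbols, toy 3-tap channel, and perfect channel knowledge are assumptions of this sketch; the paper's iterative DFE/SCTCM receiver is considerably more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

# One single-carrier block (cyclic prefix assumed removed, so the channel
# acts as a circular convolution) over a 3-tap multipath channel.
N = 64
h = np.array([1.0, 0.5, 0.25])             # channel impulse response (known)
x = rng.choice([-1.0, 1.0], size=N)        # BPSK symbols
snr_db = 15.0
sigma2 = 10 ** (-snr_db / 10)

y = np.fft.ifft(np.fft.fft(h, N) * np.fft.fft(x)).real   # circular convolution
y += rng.normal(scale=np.sqrt(sigma2), size=N)           # additive noise

# MMSE frequency-domain equalizer: W(f) = H*(f) / (|H(f)|^2 + sigma^2).
H = np.fft.fft(h, N)
X_hat = np.conj(H) / (np.abs(H) ** 2 + sigma2) * np.fft.fft(y)
x_hat = np.sign(np.fft.ifft(X_hat).real)

print("symbol errors:", int(np.sum(x_hat != x)))
```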

  7. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles

    PubMed Central

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-01-01

    In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. While the scalability advantage is kept, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF. PMID:26287194

  9. Parallel computing for simultaneous iterative tomographic imaging by graphics processing units

    NASA Astrophysics Data System (ADS)

    Bello-Maldonado, Pedro D.; López, Ricardo; Rogers, Colleen; Jin, Yuanwei; Lu, Enyue

    2016-05-01

    In this paper, we address the problem of accelerating inversion algorithms for nonlinear acoustic tomographic imaging by parallel computing on graphics processing units (GPUs). Nonlinear inversion algorithms for tomographic imaging often rely on iterative algorithms for solving an inverse problem and are thus computationally intensive. We study the simultaneous iterative reconstruction technique (SIRT) for the multiple-input-multiple-output (MIMO) tomography algorithm, which enables parallel computation over the grid points as well as parallel execution of multiple source excitations. Using graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA) programming model, an overall improvement of 26.33x was achieved when combining both approaches, compared with sequential algorithms. Furthermore, we propose an adaptive iterative relaxation factor and the use of non-uniform weights to improve the overall convergence of the algorithm. Using these techniques, fast computations can be performed in parallel without loss of image quality during the reconstruction process.
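
    For orientation, a minimal CPU/NumPy sketch of the SIRT update with a common row/column-sum normalization and a relaxation factor is shown below; the test matrix, relaxation value and iteration count are illustrative, and the paper's CUDA/MIMO specifics are not reproduced:

```python
import numpy as np

def sirt(A, b, iters=500, lam=1.0):
    """SIRT update: x <- x + lam * C A^T R (b - A x), where R and C hold
    inverse row and column sums of A (a common SIRT/SART normalization)."""
    A = np.asarray(A, float)
    r_inv = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    c_inv = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x += lam * c_inv * (A.T @ (r_inv * (b - A @ x)))
    return x

# Tiny consistency check on a random nonnegative overdetermined system:
# the iterates approach the true solution.
rng = np.random.default_rng(1)
A = rng.random((20, 5))
x_true = rng.random(5)
print(np.round(sirt(A, A @ x_true) - x_true, 4))
```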

  10. An efficient iterative algorithm for computation of scattering from dielectric objects.

    SciTech Connect

    Liao, L.; Gopalsami, N.; Venugopal, A.; Heifetz, A.; Raptis, A. C.

    2011-02-14

    We have developed an efficient iterative algorithm for electromagnetic scattering from arbitrary but relatively smooth dielectric objects. The algorithm iteratively adapts the equivalent surface currents until the electromagnetic fields inside and outside the dielectric objects match the boundary conditions. Theoretical convergence is analyzed for two examples that solve the scattering of plane waves incident upon air/dielectric slabs of semi-infinite and finite thicknesses. We applied the iterative algorithm to simulate a dielectric slab with a sinusoidal perturbation on one side, and the method converged even for such non-smooth surfaces. We next simulated the shift in the radiation pattern of a 6-inch dielectric lens for different offsets of the feed antenna on the focal plane. The result is compared to that of geometrical optics (GO).

  11. Adaptive Algebraic Multigrid Methods

    SciTech Connect

    Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J

    2004-04-09

    Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.

  12. Statistical Optics

    NASA Astrophysics Data System (ADS)

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I RIchard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

  13. Adaptive ILC algorithms of nonlinear continuous systems with non-parametric uncertainties for non-repetitive trajectory tracking

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Dong; Lv, Mang-Mang; Ho, John K. L.

    2016-07-01

    In this article, two adaptive iterative learning control (ILC) algorithms are presented for nonlinear continuous systems with non-parametric uncertainties. Unlike general ILC techniques, the proposed adaptive ILC algorithms allow both the initial error at each iteration and the reference trajectory to be iteration-varying in the ILC process, and can achieve non-repetitive trajectory tracking beyond a small initial time interval. Compared to neural network or fuzzy system-based adaptive ILC schemes and classical ILC methods, in which the number of iterative variables is generally larger than or equal to the number of control inputs, the first adaptive ILC algorithm proposed in this paper uses just two iterative variables, while the second uses even a single iterative variable provided that some bound information on the system dynamics is known. As a result, the memory space required in real-time ILC implementations is greatly reduced.
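
    For contrast with the adaptive schemes above, here is a minimal sketch of classical P-type ILC on a scalar discrete-time linear plant with a fixed reference and identical initial conditions on every trial, i.e., exactly the restrictions the paper's algorithms are designed to remove (the plant, gain and horizon are illustrative assumptions):

```python
import numpy as np

# Classical P-type ILC, u_{k+1}(t) = u_k(t) + L * e_k(t+1), applied to the
# scalar system x(t+1) = a*x(t) + b*u(t), y = x. Converges when |1 - L*b| < 1.
a, b, L = 0.8, 1.0, 0.5
T = 50
t = np.arange(T + 1)
y_ref = np.sin(2 * np.pi * t / T)       # desired trajectory (fixed here)

u = np.zeros(T)
for k in range(30):                     # ILC trials
    x = np.zeros(T + 1)                 # identical initial condition each trial
    for i in range(T):
        x[i + 1] = a * x[i] + b * u[i]
    e = y_ref - x                       # tracking error on this trial
    u = u + L * e[1:]                   # learning update uses the shifted error
print("final max |error|:", np.abs(e).max())
```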

  14. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge of the dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee obtained by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantees for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information about uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating
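
    KLSPI itself relies on Mercer kernels and ALD sparsification, but the least-squares policy iteration skeleton it builds on can be sketched with plain one-hot features on a toy chain MDP (the MDP, features and regularization below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Minimal LSPI with one-hot features on a 4-state deterministic chain;
# reward 1 for stepping into the goal state 3. KLSPI would replace phi
# with kernel features selected by ALD-based sparsification.
nS, nA, gamma = 4, 2, 0.9

def step(s, a):
    s2 = min(s + 1, nS - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == nS - 1 else 0.0

def phi(s, a):
    f = np.zeros(nS * nA)
    f[s * nA + a] = 1.0
    return f

policy = np.zeros(nS, dtype=int)          # start with "always move left"
for _ in range(10):
    # LSTD-Q: solve A w = b over all transitions, bootstrapping with the
    # current policy's action at the successor state.
    A = 1e-6 * np.eye(nS * nA)            # small ridge term for stability
    b = np.zeros(nS * nA)
    for s in range(nS):
        for a in range(nA):
            s2, r = step(s, a)
            f = phi(s, a)
            A += np.outer(f, f - gamma * phi(s2, policy[s2]))
            b += r * f
    w = np.linalg.solve(A, b)
    # Greedy policy improvement.
    new_policy = np.array([max(range(nA), key=lambda a: w @ phi(s, a))
                           for s in range(nS)])
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
print("learned policy (1 = move right):", policy)
```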

  15. PREFACE: Progress in the ITER Physics Basis

    NASA Astrophysics Data System (ADS)

    Ikeda, K.

    2007-06-01

    I would firstly like to congratulate all who have contributed to the preparation of the 'Progress in the ITER Physics Basis' (PIPB) on its publication and express my deep appreciation of the hard work and commitment of the many scientists involved. With the signing of the ITER Joint Implementing Agreement in November 2006, the ITER Members have now established the framework for construction of the project, and the ITER Organization has begun work at Cadarache. The review of recent progress in the physics basis for burning plasma experiments encompassed by the PIPB will be a valuable resource for the project and, in particular, for the current Design Review. The ITER design has been derived from a physics basis developed through experimental, modelling and theoretical work on the properties of tokamak plasmas and, in particular, on studies of burning plasma physics. The 'ITER Physics Basis' (IPB), published in 1999, has been the reference for the projection methodologies for the design of ITER, but the IPB also highlighted several key issues which needed to be resolved to provide a robust basis for ITER operation. In the intervening period scientists of the ITER Participant Teams have addressed these issues intensively. The International Tokamak Physics Activity (ITPA) has provided an excellent forum for scientists involved in these studies, focusing their work on the high priority physics issues for ITER. Significant progress has been made in many of the issues identified in the IPB and this progress is discussed in depth in the PIPB. In this respect, the publication of the PIPB symbolizes the strong interest and enthusiasm of the plasma physics community for the success of the ITER project, which we all recognize as one of the great scientific challenges of the 21st century. I wish to emphasize my appreciation of the work of the ITPA Coordinating Committee members, who are listed below. Their support and encouragement for the preparation of the PIPB were

  16. Adaptive wavelets and relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo

    2016-03-01

    We present a method for integrating the relativistic magnetohydrodynamics equations using iterated interpolating wavelets. These provide an adaptive implementation for simulations in multiple dimensions. A measure of the local approximation error for the solution is provided by the wavelet coefficients. They place collocation points in locations naturally adapted to the flow while providing the expected conservation. We present demanding 1D and 2D tests including the Kelvin-Helmholtz instability and the Rayleigh-Taylor instability. Finally, we consider an outgoing blast wave that models a GRB outflow.

  17. Iterative minimization algorithm for efficient calculations of transition states

    NASA Astrophysics Data System (ADS)

    Gao, Weiguo; Leng, Jing; Zhou, Xiang

    2016-03-01

    This paper presents an efficient algorithmic implementation of the iterative minimization formulation (IMF) for fast local search of transition states on a potential energy surface. The IMF is a second-order iterative scheme providing a general and rigorous description of the eigenvector-following (min-mode following) methodology. We offer a unified numerical interpretation via the IMF for existing eigenvector-following methods, such as the gentlest ascent dynamics, the dimer method and many other variants. We then propose our new algorithm based on the IMF. The main feature of our algorithm is that the translation step is replaced by solving an optimization subproblem associated with an auxiliary objective function which is constructed from the min-mode information. We show that by using an efficient scheme for the inexact solver and enforcing an adaptive stopping criterion for this subproblem, the overall computational cost is effectively reduced and a super-linear rate between the accuracy and the computational cost can be achieved. A series of numerical tests demonstrates the significant improvement in computational efficiency for the new algorithm.

  18. Current status of the ITER MSE diagnostic

    NASA Astrophysics Data System (ADS)

    Yuh, Howard; Levinton, F.; La Fleur, H.; Foley, E.; Feder, R.; Zakharov, L.

    2013-10-01

    The U.S. is providing ITER with a Motional Stark Effect (MSE) diagnostic to provide a measurement to guide reconstructions of the plasma q-profile. The diagnostic design has gone through many iterations, driven primarily by the evolution of the ITER port plug design and the steering of the heating beams. The present two port, three view design viewing both heating beams and the DNB has recently passed a conceptual design review at the IO. The traditional line polarization (MSE-LP) technique employed on many devices around the world faces many challenges in ITER, including strong background light and mirror degradation. To mitigate these effects, a multi-wavelength polarimeter and high resolution spectrometer will be used to subtract polarized background, while retroreflecting polarizers will provide mirror calibration concurrent with MSE-LP measurements. However, without a proven plasma-facing mirror cleaning technique, inherent risks to MSE-LP remain. The high field and high beam energy on ITER offers optimal conditions for a spectroscopic measurement of the electric field using line splitting (MSE-LS), a technique which does not depend on mirror polarization properties. The current design is presented with a roadmap of the R&D needed to address remaining challenges. This work is supported by DOE contracts S009627-R and S012380-F.

  19. Iterative scatter correction based on artifact assessment

    NASA Astrophysics Data System (ADS)

    Wiegert, Jens; Hohmann, Steffen; Bertram, Matthias

    2008-03-01

    In this paper we propose a novel scatter correction methodology for X-ray cone-beam CT that combines the advantages of projection-based and volume-based correction approaches. The basic idea is to use a potentially non-optimal projection-based scatter correction method and to iteratively optimize its performance by repeatedly assessing the remaining scatter-induced artifacts in intermediately reconstructed volumes. The novel approach exploits the fact that, due to the flatness of the scatter background, compensation itself is most easily performed in the projection domain, while the scatter-induced artifacts are better observed in the reconstructed volume. The presented method evaluates the scatter correction efficiency after each iteration by means of a quantitative measure characterizing the amount of residual cupping, and adjusts the parameters of the projection-based scatter correction for the next iteration accordingly. The potential of this iterative scatter correction approach is demonstrated using voxelized Monte Carlo scatter simulations as ground truth. Using the proposed iterative scatter correction method, remarkable scatter correction performance was achieved both with simple parametric heuristic techniques and by optimizing previously published scatter estimation schemes. For the human head, scatter-induced artifacts were reduced from initially 148 HU to between 8.1 HU and 9.1 HU for the different methods studied, corresponding to an artifact reduction exceeding 93%.

  20. [Statistical materials].

    PubMed

    1986-01-01

    Official population data for the USSR are presented for 1985 and 1986. Part 1 (pp. 65-72) contains data on capitals of union republics and cities with over one million inhabitants, including population estimates for 1986 and vital statistics for 1985. Part 2 (p. 72) presents population estimates by sex and union republic, 1986. Part 3 (pp. 73-6) presents data on population growth, including birth, death, and natural increase rates, 1984-1985; seasonal distribution of births and deaths; birth order; age-specific birth rates in urban and rural areas and by union republic; marriages; age at marriage; and divorces. PMID:12178831

  1. Adaptive techniques in electrical impedance tomography reconstruction.

    PubMed

    Li, Taoran; Isaacson, David; Newell, Jonathan C; Saulnier, Gary J

    2014-06-01

    We present an adaptive algorithm for solving the inverse problem in electrical impedance tomography. To strike a balance between the accuracy of the reconstructed images and the computational efficiency of the forward and inverse solvers, we propose to combine an adaptive mesh refinement technique with the adaptive Kaczmarz method. The iterative algorithm adaptively generates the optimal current patterns and a locally-refined mesh given the conductivity estimate and solves for the unknown conductivity distribution with the block Kaczmarz update step. Simulation and experimental results with numerical analysis demonstrate the accuracy and the efficiency of the proposed algorithm.
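
    A minimal sketch of the classic cyclic Kaczmarz row projection, the building block that the paper's adaptive block variant extends (the random test system and the relaxation parameter are illustrative; the EIT forward model, current-pattern adaptation and mesh refinement are not reproduced):

```python
import numpy as np

def kaczmarz(A, b, sweeps=200, lam=1.0):
    """Cyclic Kaczmarz: project the current estimate onto the hyperplane
    a_i . x = b_i of one row at a time, with relaxation lam."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += lam * (b[i] - a @ x) / (a @ a) * a
    return x

# Consistency check on a random consistent system.
rng = np.random.default_rng(2)
A = rng.normal(size=(30, 10))
x_true = rng.normal(size=10)
print(np.allclose(kaczmarz(A, A @ x_true), x_true, atol=1e-4))
```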

  2. A general mean-based iterative winner-take-all neural network.

    PubMed

    Yang, J F; Chen, C M; Wang, W C; Lee, J Y

    1995-01-01

    In this paper, a new iterative winner-take-all (WTA) neural network is developed and analyzed. The proposed WTA neural net with a one-layer structure is established under the concept of the statistical mean. For three typical distributions of initial activations, the convergence behaviors of the existing and the proposed WTA neural nets are evaluated by theoretical analyses and Monte Carlo simulations. We found that the suggested WTA neural network on average requires fewer than log2(M) iterations to complete a WTA process for the three distributed inputs, where M is the number of competitors. Furthermore, the fault tolerances of the iterative WTA nets are analyzed and simulated. From the viewpoints of convergence speed, hardware complexity, and robustness to errors, the proposed WTA is suitable for various applications.
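
    One plausible reading of the mean-based iteration is sketched below: each round discards competitors whose activation falls below the mean of the survivors, so for typical distributions roughly half drop out per iteration and about log2(M) iterations suffice (this is an illustrative reconstruction, not the paper's network implementation):

```python
import numpy as np

def mean_based_wta(activations):
    """Mean-based iterative WTA: repeatedly keep only competitors above the
    mean of the currently active set until a single winner remains."""
    alive = np.arange(len(activations))
    iters = 0
    while len(alive) > 1:
        vals = activations[alive]
        alive = alive[vals > vals.mean()]   # the maximum always survives
        iters += 1
    return alive[0], iters

rng = np.random.default_rng(3)
x = rng.uniform(size=1024)                  # M = 1024 competitors
winner, iters = mean_based_wta(x)
print(winner == np.argmax(x), "iterations:", iters)   # iters ~ log2(1024)
```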

  3. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace

    PubMed Central

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B. Montgomery

    2016-01-01

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace (DIIS). We give examples from a lattice model, a simple liquid and an aqueous protein solution. PMID:27453632
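
    For context, here is a minimal sketch of the direct (fixed-point) WHAM iteration that such acceleration targets, written in a binned-histogram form where bias[k, x] is assumed to hold the Boltzmann bias factor exp(-beta*U_k(x)) of window k in bin x; the DIIS/MBAR machinery itself is not reproduced:

```python
import numpy as np

def wham(hist, bias, tol=1e-10, maxit=10000):
    """Direct WHAM self-consistency iteration for K windows on shared bins.
    hist[k, x]: counts of window k in bin x; bias[k, x]: exp(-beta*U_k(x)).
    Returns window free energies f_k (in kT, f_0 = 0) and the unbiased
    bin probabilities. This plain iteration is what DIIS accelerates."""
    K, X = hist.shape
    N = hist.sum(axis=1)                  # samples per window
    f = np.zeros(K)
    for _ in range(maxit):
        # Optimal unbiased distribution given the current f estimates:
        # p(x) = sum_k n_k(x) / sum_j N_j c_j(x) exp(f_j)
        denom = (N[:, None] * bias * np.exp(f)[:, None]).sum(axis=0)
        p = hist.sum(axis=0) / denom
        # Self-consistency: exp(-f_i) = sum_x c_i(x) p(x)
        f_new = -np.log((bias * p[None, :]).sum(axis=1))
        f_new -= f_new[0]                 # fix the arbitrary offset
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return f, p / p.sum()

# Trivial demo: unbiased windows give zero free-energy differences.
hist = np.array([[5.0, 3.0, 2.0], [1.0, 4.0, 5.0]])
print(wham(hist, np.ones((2, 3))))
```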

  4. An iterative contractive framework for probe methods: LASSO

    NASA Astrophysics Data System (ADS)

    Potthast, R. W. E.

    2011-10-01

    We present a new iterative approach called Line Adaptation for the Singular Sources Objective (LASSO) to object or shape reconstruction, based on the singular sources method (or probe method), for the reconstruction of scatterers from the far-field pattern of scattered acoustic or electromagnetic waves. The scheme is based on the construction of an indicator function, given by the scattered field for incident point sources evaluated at the source point, from the given far-field patterns for plane waves. The indicator function is then used to drive the contraction of a surface which surrounds the unknown scatterers. A stopping criterion is formulated for those parts of the surfaces that touch the unknown scatterers. A splitting approach for the contracting surfaces is formulated, such that scatterers consisting of several separate components can be reconstructed. Convergence of the scheme is shown, and its feasibility is demonstrated in a numerical study with several examples.

  5. ITER Experts' meeting on density limits

    SciTech Connect

    Borrass, K.; Igitkhanov, Y.L.; Uckan, N.A.

    1989-12-01

    The necessity of achieving a prescribed wall load or fusion power essentially determines the plasma pressure in a device like ITER. The range of operating densities and temperatures compatible with this condition is constrained by the problems of power exhaust and the disruptive density limit. The maximum allowable heat loads on the divertor plates and the maximum allowable sheath edge temperature practically impose a lower limit on the operating densities, whereas the disruptive density limit imposes an upper limit. For most of the density limit scalings proposed in the past, an overlap of the two constraints, or at best a very narrow accessible density range, is predicted for ITER. Improved understanding of the underlying mechanisms is therefore a crucial issue in order to provide a more reliable basis for extrapolation to ITER and to identify possible ways of alleviating the problem.

  6. Accelerating an iterative process by explicit annihilation

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.; Buning, P. G.

    1985-01-01

    A slowly convergent stationary iterative process can be accelerated by explicitly annihilating (i.e., eliminating) the dominant eigenvector component of the error. The dominant eigenvalue or complex pair of eigenvalues can be estimated from the solution during the iteration. The corresponding eigenvector or complex pair of eigenvectors can then be annihilated by applying an explicit Richardson process over the basic iterative method. This can be done entirely in real arithmetic by analytically combining the complex conjugate annihilation steps. The technique is applied to an implicit algorithm for the calculation of two dimensional steady transonic flow over a circular cylinder using the equations of compressible inviscid gas dynamics. This demonstrates the use of explicit annihilation on a nonlinear problem.
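
    A minimal sketch of the real-eigenvalue case: run a few warm-up sweeps of the stationary iteration, estimate the dominant eigenvalue from successive iterate differences, and apply one extrapolation step to annihilate that error component (the damped-Jacobi demo system is illustrative, and the complex-conjugate-pair case described in the abstract is not handled):

```python
import numpy as np

def annihilate(G_apply, c, x0, n_warmup=10):
    """One explicit-annihilation step for the stationary iteration x <- Gx + c.

    After warm-up the error is dominated by the eigenvector of G with the
    largest (assumed real) eigenvalue lam, so successive differences satisfy
    d_{k+1} ~= lam * d_k. Since x_inf = x_{k+1} + lam/(1-lam) * d_k for a
    pure mode, estimating lam and extrapolating removes that component."""
    x_prev = np.asarray(x0, float)
    x = G_apply(x_prev) + c
    for _ in range(n_warmup):
        x_prev, x = x, G_apply(x) + c
    y = G_apply(x) + c
    d_prev, d = x - x_prev, y - x
    lam = (d @ d_prev) / (d_prev @ d_prev)   # dominant-eigenvalue estimate
    return y + lam / (1.0 - lam) * d         # annihilation (extrapolation)

# Demo: damped Jacobi on a small SPD system (dominant eigenvalue is real).
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.ones(3)
D_inv = 1.0 / np.diag(A)
G_apply = lambda x: x - 0.8 * D_inv * (A @ x)   # damped-Jacobi iteration map
c = 0.8 * D_inv * b
print(annihilate(G_apply, c, np.zeros(3)))      # close to the exact solution
print(np.linalg.solve(A, b))
```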

  8. Re-starting an Arnoldi iteration

    SciTech Connect

    Lehoucq, R.B.

    1996-12-31

    The Arnoldi iteration is an efficient procedure for approximating a subset of the eigensystem of a large sparse n x n matrix A. The iteration produces a partial orthogonal reduction of A into an upper Hessenberg matrix H{sub m} of order m. The eigenvalues of this small matrix H{sub m} are used to approximate a subset of the eigenvalues of the large matrix A. The eigenvalues of H{sub m} improve as estimates to those of A as m increases. Unfortunately, so does the cost and storage of the reduction. The idea of re-starting the Arnoldi iteration is motivated by the prohibitive cost associated with building a large factorization.
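
    A minimal sketch of an explicitly restarted Arnoldi iteration for the dominant eigenpair: build an m-step factorization, then restart from the leading Ritz vector instead of letting the factorization (and its cost) keep growing (restart strategy, sizes and the diagonal test matrix are illustrative; no breakdown handling is included):

```python
import numpy as np

def arnoldi(A, v, m):
    """m-step Arnoldi: A V_m = V_{m+1} H_bar with V orthonormal and H upper
    Hessenberg, built by modified Gram-Schmidt."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def restarted_arnoldi(A, m=10, restarts=30, tol=1e-10):
    """Restart from the Ritz vector of the largest-magnitude Ritz value."""
    v = np.random.default_rng(4).normal(size=A.shape[0])
    for _ in range(restarts):
        V, H = arnoldi(A, v, m)
        evals, evecs = np.linalg.eig(H[:m, :m])
        k = np.argmax(np.abs(evals))
        lam = evals[k].real
        v = (V[:, :m] @ evecs[:, k]).real
        v /= np.linalg.norm(v)
        if np.linalg.norm(A @ v - lam * v) < tol * abs(lam):
            break
    return lam, v

A = np.diag(np.arange(1.0, 101.0))     # eigenvalues 1..100
lam, v = restarted_arnoldi(A)
print(lam)                             # ~100
```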

  9. Translating relational queries into iterative programs

    SciTech Connect

    Freytag, J.C.

    1987-01-01

    This book investigates the problem of translating relational queries into iterative programs using methods and techniques from the areas of functional programming and program transformation. The first part presents two algorithms which generate iterative programs from algebra-based query specifications. While the first algorithm is based on the transformation of recursive programs, the second uses functional expressions to generate the final iterative form. In the second part, the same techniques are used to generate efficient programs for the evaluation of aggregate functions in relational database systems. In several steps, programs which perform aggregation after sorting are transformed into programs which perform aggregation while sorting. The third part then investigates the Lisp dialect T as a possible implementation language for database systems.
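
    The flavor of such a transformation can be suggested in miniature: a GROUP-BY-SUM evaluated first as sort-then-aggregate and then as a single fused scan (here fused via hashing rather than into the sort itself, so this is only an analogue of the book's aggregation-while-sorting derivation, not its algorithm):

```python
from itertools import groupby
from operator import itemgetter

rows = [("a", 3), ("b", 1), ("a", 2), ("b", 4)]   # a (key, value) relation

def sum_after_sort(rows):
    """Naive plan: sort the relation, then aggregate in a separate pass."""
    out = []
    for key, group in groupby(sorted(rows, key=itemgetter(0)),
                              key=itemgetter(0)):
        out.append((key, sum(v for _, v in group)))
    return out

def sum_while_scanning(rows):
    """Transformed plan: for an order-insensitive aggregate like SUM, the
    aggregation can be fused into a single scan (hash-based here)."""
    acc = {}
    for key, value in rows:
        acc[key] = acc.get(key, 0) + value
    return sorted(acc.items())

assert sum_after_sort(rows) == sum_while_scanning(rows)
```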

  10. The ITER in-vessel system

    SciTech Connect

    Lousteau, D.C.

    1994-09-01

    The overall programmatic objective, as defined in the ITER Engineering Design Activities (EDA) Agreement, is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER EDA Phase, due to last until July 1998, will encompass the design of the device and its auxiliary systems and facilities, including the preparation of engineering drawings. The EDA also incorporates validating research and development (R&D) work, including the development and testing of key components. The purpose of this paper is to review the status of the design, as it has been developed so far, emphasizing the design and integration of those components contained within the vacuum vessel of the ITER device. The components included in the in-vessel systems are divertor and first wall; blanket and shield; plasma heating, fueling, and vacuum pumping equipment; and remote handling equipment.

  11. A fast poly-energetic iterative FBP algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Samei, Ehsan

    2014-04-01

    The beam hardening (BH) effect can influence medical interpretations in two notable ways. First, high-attenuation materials, such as bones, can induce strong artifacts, which severely deteriorate the image quality. Second, voxel values can significantly deviate from their real values, which can lead to unreliable quantitative evaluation results. Some iterative methods have been proposed to eliminate the BH effect, but they cannot be widely applied in clinical practice because of their slow computational speed. The purpose of this study was to develop a new, fast and practical poly-energetic iterative filtered backward projection algorithm (piFBP). The piFBP is composed of a novel poly-energetic forward projection process and a robust FBP-type backward updating process. In the forward projection process, an adaptive base material decomposition method is presented, based on which diverse body tissues (e.g., lung, fat, breast, soft tissue, and bone) and metal implants can be incorporated to accurately evaluate poly-energetic forward projections. In the backward updating process, one robust and fast FBP-type backward updating equation with a smoothing kernel is introduced to avoid noise accumulation in the iteration process and to improve the convergence properties. Two phantoms were designed to quantitatively validate our piFBP algorithm in terms of the beam hardening index (BIdx) and the noise index (NIdx). The simulation results showed that piFBP possesses fast convergence speed, as the images could be reconstructed within four iterations. The variation range of the BIdx's of various tissues across phantom size and spectrum was reduced from [-7.5, 17.5] for FBP to [-0.1, 0.1] for piFBP, while the NIdx's were maintained at the same low level (about [0.3, 1.7]). When a metal implant was present in a complex phantom, piFBP still had excellent reconstruction performance, as the variation range of the BIdx's of body tissues was reduced from [-2.9, 15.9] for FBP to [-0

  12. Global Asymptotic Behavior of Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures for solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard explicit Runge-Kutta methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that, aside from exhibiting spurious asymptotes, all four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion, but less shrinkage, of the basin of attraction of the true solution than standard non-LMM explicit methods. The simple iteration procedure exhibits behavior similar to standard non-LMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data for the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used to explain possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.

  13. The Impact of Iterative Reconstruction on Computed Tomography Radiation Dosimetry: Evaluation in a Routine Clinical Setting

    PubMed Central

    Moorin, Rachael E.; Gibson, David A. J.; Forsyth, Rene K.; Fox, Richard

    2015-01-01

    Purpose To evaluate the effect of the introduction of iterative reconstruction, as a mandated software upgrade, on radiation dosimetry in routine clinical practice over a range of computed tomography examinations. Methods Random samples of scanning data were extracted from a centralised Picture Archiving Communication System pertaining to 10 commonly performed computed tomography examination types undertaken at two hospitals in Western Australia, before and after the introduction of iterative reconstruction. Changes in the mean dose length product and effective dose were evaluated, along with estimations of associated changes to annual cancer incidence. Results We observed statistically significant reductions in the effective radiation dose for head computed tomography (22–27%), consistent with those reported in the literature. In contrast, the reductions observed were 37–47% for non-contrast chest, 28% for chest pulmonary embolism studies, 16% for chest/abdominal/pelvic studies and 39% for thoracic spine computed tomography. Statistically significant reductions in radiation dose were not identified in angiographic computed tomography. Dose reductions translated to substantial lowering of the lifetime attributable risk, especially for younger females, and of the estimated numbers of incident cancers. Conclusion Reduction of CT dose is a priority. Iterative reconstruction algorithms have the potential to significantly assist with dose reduction across a range of protocols. However, this reduction in dose is achieved via reductions in image noise. Fully realising the potential dose reduction of iterative reconstruction requires the adjustment of image factors and forgoing the noise reduction potential of the iterative algorithm. Our study has demonstrated a reduction in radiation dose for some scanning protocols, but not to the extent experimental studies had previously shown or in all protocols expected, raising questions about the extent to which iterative reconstruction achieves dose

  14. Challenges and status of ITER conductor production

    NASA Astrophysics Data System (ADS)

    Devred, A.; Backbier, I.; Bessette, D.; Bevillard, G.; Gardner, M.; Jong, C.; Lillaz, F.; Mitchell, N.; Romano, G.; Vostner, A.

    2014-04-01

    Taking over from the Large Hadron Collider (LHC) at CERN, ITER has become the largest project in applied superconductivity. In addition to its technical complexity, ITER is also a management challenge, as it relies on an unprecedented collaboration of seven partners, representing more than half of the world population, who provide 90% of the components as in-kind contributions. The ITER magnet system is one of the most sophisticated superconducting magnet systems ever designed, with an enormous stored energy of 51 GJ. It involves six of the ITER partners. The coils are wound from cable-in-conduit conductors (CICCs) made up of superconducting and copper strands assembled into a multistage cable, inserted into a conduit of butt-welded austenitic steel tubes. The conductors for the toroidal field (TF) and central solenoid (CS) coils require about 600 t of Nb3Sn strands, while the poloidal field (PF), correction coil (CC) and busbar conductors need around 275 t of Nb-Ti strands. The required amount of Nb3Sn strands far exceeds pre-existing industrial capacity and has called for a significant worldwide production scale-up. The TF conductors are the first ITER components to be mass produced and are more than 50% complete. During its lifetime, the CS coil will have to sustain several tens of thousands of electromagnetic (EM) cycles to high current and field conditions, far beyond anything a large Nb3Sn coil has ever experienced. Following a comprehensive R&D program, a technical solution has been found for the CS conductor which ensures stable performance under EM and thermal cycling. Production of the PF, CC and busbar conductors is also underway. After an introduction to the ITER project and magnet system, we describe the ITER conductor procurements and the quality assurance/quality control programs that have been implemented to ensure production uniformity across numerous suppliers. Then, we provide examples of technical challenges that have been encountered and

  15. Tracking Solar Events through Iterative Refinement

    NASA Astrophysics Data System (ADS)

    Kempton, D. J.; Angryk, R. A.

    2015-11-01

    In this paper, we combine two approaches to multiple-target tracking: the first is a hierarchical approach that iteratively grows track fragments across gaps in detections; the second is a network-flow-based optimization method for data association, which is applied within the iterative growing process. The method is applied to solar data retrieved from the Heliophysics Event Knowledgebase (HEK) and utilizes precomputed image parameter values. These parameter values are used to compare the visual similarity of detected events and to determine the best-matching track fragment associations, leading to a globally optimal track fragment association hypothesis.

  16. Fatigue tests on the ITER PF jacket

    NASA Astrophysics Data System (ADS)

    Qin, Jinggang; Weiss, Klaus-Peter; Wu, Yu; Wu, Zhixiong; Li, Laifeng; Liu, Sheng

    2012-10-01

    This paper focuses on fatigue tests of the ITER Poloidal Field (PF) jacket, made of 316L stainless steel. During manufacture, the conductor will be compacted and spooled after cable insertion. Therefore, sample jackets were prepared under compaction, bending and straightening in order to simulate the state of the PF conductor during manufacturing and winding. The fatigue properties of the material were measured at T < 7 K, including S-N curves and the fatigue crack growth rate (FCGR). The test results show that the present Chinese PF jacket has good fatigue properties and meets the requirements of ITER.

  17. Iterative method for generating correlated binary sequences

    NASA Astrophysics Data System (ADS)

    Usatenko, O. V.; Melnik, S. S.; Apostolov, S. S.; Makarov, N. M.; Krokhin, A. A.

    2014-11-01

    We propose an efficient iterative method for generating random correlated binary sequences with a prescribed correlation function. The method is based on consecutive linear modulations of an initially uncorrelated sequence into a correlated one. Each step of modulation increases the correlations until the desired level has been reached. The robustness and efficiency of the proposed algorithm are tested by generating sequences with inverse power-law correlations. The substantial increase in the strength of correlation in the iterative method with respect to single-step filtering generation is shown for all studied correlation functions. Our results can be used for design of disordered superlattices, waveguides, and surfaces with selective transport properties.
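
    A loose Python sketch of the filter-and-binarize idea (the target power-law correlation, filter construction and number of passes below are illustrative; the authors' modulation step is constructed more carefully to hit the prescribed correlation function exactly):

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000
        s = rng.choice([-1.0, 1.0], size=N)      # initially uncorrelated sequence

        r = np.arange(1, 64)
        target = 0.4 * r ** -0.7                 # prescribed power-law correlations
        kernel = np.concatenate([target[::-1], [1.0], target])

        for _ in range(10):                      # consecutive linear modulations
            y = np.convolve(s, kernel, mode="same")
            s = np.where(y >= 0.0, 1.0, -1.0)    # project back to binary values

        # empirical check of the first few correlations against the target
        print([round(float(np.mean(s[:-k] * s[k:])), 3) for k in (1, 2, 4, 8)])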

  18. A novel resistance iterative algorithm for CCOS

    NASA Astrophysics Data System (ADS)

    Zheng, Ligong; Zhang, Xuejun

    2006-08-01

    CCOS (Computer Controlled Optical Surfacing) technology is widely used for making aspheric mirrors. Most manufacturers employ a dwell time algorithm to determine the route and dwell time of the small tools that converge the surface errors. In this article, a novel damped iterative algorithm is proposed: we choose the number of revolutions of the small tool, rather than dwell time, to determine the fabrication strategy, and solve for these revolutions using the resistance iterative algorithm. Several mirrors have been manufactured by this method; all of them met the designers' requirements, and a 1 m aspheric mirror was finished within 3 months.

  19. Stability of a radiative mantle in ITER

    SciTech Connect

    Mahdavi, M.A.; Staebler, G.M.; Wood, R.D.; Whyte, D.G.; West, W.P.

    1996-12-01

    We report results of a study to evaluate the efficacy of various impurities for heat dispersal by a radiative mantle and radiative divertor (including the SOL). We have derived a stability criterion for the mantle radiation which favors low-Z impurities and low ratios of edge to core thermal conductivities. Since, on the other hand, the relative strength of boundary line radiation to core bremsstrahlung favors high-Z impurities, we find that argon is the best gaseous impurity for mantle radiation in the ITER physics phase. For the engineering phase of ITER, more detailed analysis is needed to select between krypton and argon.

  20. Scheduling and rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper describes the GERRY scheduling and rescheduling system being applied to coordinate Space Shuttle Ground Processing. The system uses constraint-based iterative repair, a technique that starts with a complete but possibly flawed schedule and iteratively improves it by using constraint knowledge within repair heuristics. In this paper we explore the tradeoff between the informedness and the computational cost of several repair heuristics. We show empirically that some knowledge can greatly improve the convergence speed of a repair-based system, but that too much knowledge, such as the knowledge embodied within the MIN-CONFLICTS lookahead heuristic, can overwhelm a system and result in degraded performance.
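
    The repair loop itself is simple; below is a minimal min-conflicts sketch on a toy resource-constrained schedule (the task names, single no-double-booking constraint and slot count are invented stand-ins for GERRY's much richer constraint knowledge):

        import random

        SLOTS = range(8)
        resource = {"t1": "crane", "t2": "crane", "t3": "bay",
                    "t4": "bay", "t5": "crane"}
        schedule = {t: 0 for t in resource}      # complete but flawed schedule

        def conflicts(task, slot, sched):
            """Number of same-resource tasks sharing this slot."""
            return sum(1 for u, s in sched.items()
                       if u != task and s == slot
                       and resource[u] == resource[task])

        def repair(sched, max_steps=1000, seed=0):
            rng = random.Random(seed)
            for _ in range(max_steps):
                bad = [t for t in sched if conflicts(t, sched[t], sched) > 0]
                if not bad:
                    return sched                 # all constraints satisfied
                t = rng.choice(bad)              # pick a violated task ...
                sched[t] = min(SLOTS,            # ... min-conflicts repair move
                               key=lambda slot: conflicts(t, slot, sched))
            return sched

        print(repair(schedule))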

  1. Tritium Pumps for ITER Roughing System

    SciTech Connect

    Antipenkov, Alexander; Day, Christian; Mack, August; Wagner, Robert; Laesser, Rainer

    2005-07-15

    The ITER roughing system provides for both the initial pump-down of the vessel itself and the regular pump-out of the batch-regenerating cryopumps. This system must have a large pumping speed and cope with the radioactive gas tritium at the same time. The present paper shall highlight the results of the ITER roughing train optimization, discuss the modification of a Roots pump for tritium, and present the results of a ferrofluidic seal test and the first tests of a tailor-made tritium-proof Roots pump with inactive gases.

  2. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without assumptions about the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
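
    For reference, a minimal one-dimensional sketch of the Richardson-Lucy iteration mentioned above, which preserves non-negativity and, for a normalized PSF, total flux (the Gaussian PSF and two-point test object are illustrative; the scale-invariant-divergence algorithm of Lanteri et al. is not reproduced here):

        import numpy as np

        def richardson_lucy(g, h, n_iter=200):
            """Richardson-Lucy deconvolution of 1D data g with PSF h."""
            f = np.full_like(g, g.mean())                # flat positive start
            h_adj = h[::-1]                              # adjoint = correlation
            for _ in range(n_iter):
                est = np.convolve(f, h, mode="same")
                f = f * np.convolve(g / np.maximum(est, 1e-12), h_adj, mode="same")
            return f

        x = np.arange(-10, 11)
        h = np.exp(-0.5 * (x / 2.0) ** 2); h /= h.sum()  # normalized Gaussian PSF
        f_true = np.zeros(200); f_true[60] = 1.0; f_true[90] = 0.6
        g = np.convolve(f_true, h, mode="same")
        f_hat = richardson_lucy(g, h)                    # sharpens the two peaks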

  3. Series Supply of Cryogenic Venturi Flowmeters for the ITER Project

    NASA Astrophysics Data System (ADS)

    André, J.; Poncet, J. M.; Ercolani, E.; Clayton, N.; Journeaux, J. Y.

    2015-12-01

    In the framework of the ITER project, the CEA-SBT has been contracted to supply 277 venturi tube flowmeters to measure the distribution of helium in the superconducting magnets of the ITER tokamak. Six sizes of venturi tube have been designed so as to span a measurable helium flowrate range from 0.1 g/s to 400 g/s. They operate, in nominal conditions, either at 4 K or at 300 K, and in a nuclear and magnetic environment. Due to the cryogenic conditions and the large number of venturi tubes to be supplied, an individual calibration of each venturi tube would be too expensive and time consuming. Studies have been performed to produce a design which offers high repeatability in manufacture, reduces the geometrical uncertainties and improves the final helium flowrate measurement accuracy. On the instrumentation side, technologies for differential and absolute pressure transducers able to operate in applied magnetic fields need to be identified and validated. The complete helium mass flow measurement chain will be qualified in four test benches:
    - a helium loop at room temperature to ensure the qualification of a statistically relevant number of venturi tubes operating at 300 K;
    - a supercritical helium loop for the qualification of venturi tubes operating at cryogenic temperature (a modification of the HELIOS test bench);
    - a dedicated vacuum vessel to check the helium leak tightness of all the venturi tubes;
    - a magnetic test bench to qualify different technologies of pressure transducer in applied magnetic fields up to 100 mT.
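
    The abstract does not reproduce the venturi relation itself; for orientation, a sketch of the standard ISO 5167-type formula that such flowmeters rely on (the discharge coefficient, expansibility factor and the helium example values are placeholders, not CEA-SBT design numbers):

        import math

        def venturi_mass_flow(dp, rho, d, D, C=0.985, eps=1.0):
            """ISO 5167-type venturi relation:
            qm = C / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho)."""
            beta = d / D
            return (C / math.sqrt(1.0 - beta ** 4) * eps
                    * math.pi / 4.0 * d ** 2 * math.sqrt(2.0 * dp * rho))

        # placeholder numbers: supercritical helium (~130 kg/m^3), 5 mm throat
        # in a 10 mm line, 2 kPa differential pressure -> mass flow in kg/s
        print(venturi_mass_flow(dp=2000.0, rho=130.0, d=0.005, D=0.010))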

  4. An Iterative Uncertainty Assessment Technique for Environmental Modeling

    SciTech Connect

    Engel, David W.; Liebetrau, Albert M.; Jarman, Kenneth D.; Ferryman, Thomas A.; Scheibe, Timothy D.; Didier, Brett T.

    2004-06-28

    The reliability of and confidence in predictions from model simulations are crucial: these predictions can significantly affect risk assessment decisions. For example, the fate of contaminants at the U.S. Department of Energy's Hanford Site has critical impacts on long-term waste management strategies. In the uncertainty estimation efforts for the Hanford Site-Wide Groundwater Modeling program, computational issues severely constrain both the number of uncertain parameters that can be considered and the degree of realism that can be included in the models. Substantial improvements in the overall efficiency of uncertainty analysis are needed to fully explore and quantify significant sources of uncertainty. We have combined state-of-the-art statistical and mathematical techniques in a unique iterative, limited sampling approach to efficiently quantify both local and global prediction uncertainties resulting from model input uncertainties. The approach is designed for application to widely diverse problems across multiple scientific domains. Results are presented for both an analytical model where the response surface is "known" and a simplified contaminant fate transport and groundwater flow model. The results show that our iterative method for approximating a response surface (for subsequent calculation of uncertainty estimates) of specified precision requires less computing time than traditional approaches based upon noniterative sampling methods.

  5. Adaptive Management for Urban Watersheds: The Slavic Village Pilot Project

    EPA Science Inventory

    Adaptive management is an environmental management strategy that uses an iterative process of decision-making to reduce the uncertainty in environmental management via system monitoring. A central tenet of adaptive management is that management involves a learning process that ca...

  6. Bayesian classification of polarimetric SAR images using adaptive a priori probabilities

    NASA Technical Reports Server (NTRS)

    Van Zyl, J. J.; Burnette, C. F.

    1992-01-01

    The problem of classifying earth terrain from observed polarimetric scattering properties is tackled with an iterative Bayesian scheme that uses a priori probabilities adaptively. The first classification is based on fixed, not necessarily equal, a priori probabilities, and successive iterations update the a priori probabilities adaptively. The approach is applied to an SAR image in which a single water body covers 10 percent of the image area. The classification accuracies for ocean, urban, vegetated, and total area increase, and the percentage of reclassified pixels decreases greatly as the iteration number increases. The iterative scheme is found to improve the a posteriori classification accuracy of maximum likelihood classifiers by iteratively exploiting the local homogeneity in polarimetric SAR images. A few iterations can improve the classification accuracy significantly without sacrificing key high-frequency detail or edges in the image.
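
    A compact sketch of the adaptive-prior loop (the window size, iteration count and uniform-filter prior estimate are illustrative choices, not those of Van Zyl and Burnette):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def adaptive_bayes_classify(likelihood, n_iter=4, size=7):
            """likelihood: (H, W, K) array of per-pixel class likelihoods.
            Iteratively reclassify, re-estimating per-pixel priors from
            local class frequencies (sketch of the adaptive-prior idea)."""
            H, W, K = likelihood.shape
            priors = np.full((H, W, K), 1.0 / K)   # first pass: fixed equal priors
            for _ in range(n_iter):
                post = likelihood * priors          # unnormalized posterior
                labels = post.argmax(axis=2)        # maximum a posteriori labels
                onehot = (labels[..., None] == np.arange(K)).astype(float)
                for k in range(K):                  # local homogeneity -> new priors
                    priors[..., k] = uniform_filter(onehot[..., k], size=size)
                priors = np.clip(priors, 1e-3, None)  # keep all classes possible
            return labels

        # toy usage: two-class likelihoods from a noisy half-and-half image
        rng = np.random.default_rng(0)
        truth = np.zeros((64, 64), int); truth[:, 32:] = 1
        noisy = truth + 0.8 * rng.standard_normal((64, 64))
        lik = np.stack([np.exp(-0.5 * (noisy - m) ** 2) for m in (0.0, 1.0)], axis=2)
        labels = adaptive_bayes_classify(lik)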

  7. Students' attitudes towards learning statistics

    NASA Astrophysics Data System (ADS)

    Ghulami, Hassan Rahnaward; Hamid, Mohd Rashid Ab; Zakaria, Roslinazairimah

    2015-05-01

    A positive attitude towards learning is vital in order to master the core content of the subject matter under study. This is no exception in learning a statistics course, especially at the university level. Therefore, this study investigates students' attitudes towards learning statistics. Six variables or constructs were identified: affect, cognitive competence, value, difficulty, interest, and effort. The instrument used for the study was a questionnaire adopted and adapted from the reliable Survey of Attitudes Towards Statistics (SATS©). The study was conducted among engineering undergraduate students at a university on the East Coast of Malaysia. The respondents were students taking the applied statistics course from different faculties. The results are analysed descriptively and contribute to the descriptive understanding of students' attitudes towards the teaching and learning process of statistics.

  8. Particle migration analysis in iterative classification of cryo-EM single-particle data.

    PubMed

    Chen, Bo; Shen, Bingxin; Frank, Joachim

    2014-12-01

    Recently developed classification methods have enabled resolving multiple biological structures from cryo-EM data collected on heterogeneous biological samples. However, there remains the problem of how to base the decisions in the classification on the statistics of the cryo-EM data, to reduce the subjectivity in the process. Here, we propose a quantitative analysis to determine the iteration of convergence and the number of distinguishable classes, based on the statistics of the single particles in an iterative classification scheme. We start the classification with a larger number of classes than anticipated from prior knowledge, and then combine the classes that yield similar reconstructions. The classes yielding similar reconstructions can be identified from the migrating particles (jumpers) during consecutive iterations after the iteration of convergence. We therefore termed the method "jumper analysis" and applied it to the output of RELION 3D classification of a benchmark experimental dataset. This work is a step toward fully automated single-particle reconstruction and classification of cryo-EM data.

  9. Iterative Monte Carlo analysis of spin-dependent parton distributions

    NASA Astrophysics Data System (ADS)

    Sato, Nobuo; Melnitchouk, W.; Kuhn, S. E.; Ethier, J. J.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration

    2016-04-01

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳0.1 . The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  10. Iterative solution of the Helmholtz equation

    SciTech Connect

    Larsson, E.; Otto, K.

    1996-12-31

    We have shown that the numerical solution of the two-dimensional Helmholtz equation can be obtained in a very efficient way by using a preconditioned iterative method. We discretize the equation with second-order accurate finite difference operators and take special care to obtain non-reflecting boundary conditions. We solve the large, sparse system of equations that arises with the preconditioned restarted GMRES iteration. The preconditioner is of "fast Poisson type" and is derived as a direct solver for a modified PDE problem. The arithmetic complexity of the preconditioner is O(n log₂ n), where n is the number of grid points. As a test problem we use the propagation of sound waves in water in a duct with a curved bottom. Numerical experiments show that the preconditioned iterative method is very efficient for this type of problem. The convergence rate does not decrease dramatically when the frequency increases. Compared to banded Gaussian elimination, which is a standard solution method for this type of problem, the iterative method shows significant gains in both storage requirement and arithmetic complexity. Furthermore, the relative gain increases when the frequency increases.
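
    A runnable miniature of the setup with SciPy (the ILU factorization here is only a stand-in for the paper's fast-Poisson preconditioner, and the grid, wavenumber and Dirichlet boundaries are illustrative; the non-reflecting boundary conditions are omitted):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n, k = 64, 10.0                           # grid size and wavenumber
        h = 1.0 / (n + 1)
        lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
        I = sp.identity(n)
        A = (sp.kron(lap1d, I) + sp.kron(I, lap1d)
             - k**2 * sp.identity(n * n)).tocsc() # -Laplacian - k^2 I
        b = np.ones(n * n)

        ilu = spla.spilu(A, drop_tol=1e-4)        # stand-in preconditioner
        M = spla.LinearOperator(A.shape, ilu.solve)
        u, info = spla.gmres(A, b, M=M, restart=30)
        print(info)                               # 0 on convergence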

  11. Constructing Easily Iterated Functions with Interesting Properties

    ERIC Educational Resources Information Center

    Sprows, David J.

    2009-01-01

    A number of schools have recently introduced new courses dealing with various aspects of iteration theory or at least have found ways of including topics such as chaos and fractals in existing courses. In this note, we will consider a family of functions whose members are especially well suited to illustrate many of the concepts involved in these…

  12. First mirrors for diagnostic systems of ITER

    NASA Astrophysics Data System (ADS)

    Litnovsky, A.; Voitsenya, V. S.; Costley, A.; Donné, A. J. H.; SWG on First Mirrors of the ITPA Topical Group on Diagnostics

    2007-08-01

    The majority of optical diagnostics presently foreseen for ITER will implement in-vessel metallic mirrors as plasma-viewing components. Mirrors are used for the observation of the plasma radiation in a very wide wavelength range: from about 1 nm up to a few mm. In the hostile ITER environment, mirrors are subject to erosion, deposition, particle implantation and other adverse effects which will change their optical properties, affecting the entire performance of the respective diagnostic systems. The Specialists Working Group (SWG) on first mirrors was established under the auspices of the International Tokamak Physics Activity (ITPA) Topical Group (TG) on Diagnostics to coordinate and guide the investigations on diagnostic mirrors towards the development of optimal, robust and durable solutions for ITER diagnostic systems. The results of tests of various ITER-candidate mirror materials, performed in Tore-Supra, TEXTOR, DIII-D, TCV, T-10, TRIAM-1M and LHD under various plasma conditions, as well as an overview of laboratory investigations of mirror performance and mirror cleaning techniques are presented in the paper. The current tasks in the R&D of diagnostic mirrors will be addressed.

  13. Asymptotic iteration approach to supersymmetric bistable potentials

    NASA Astrophysics Data System (ADS)

    Ciftci, H.; Özer, O.; Roy, P.

    2012-01-01

    We examine quasi-exactly solvable bistable potentials and their supersymmetric partners within the framework of the asymptotic iteration method (AIM). It is shown that the AIM produces excellent approximate spectra and that it is sometimes more useful to use the partner potential for computation. We also discuss the direct application of the AIM to the Fokker-Planck equation.

  14. Iteration of Complex Functions and Newton's Method

    ERIC Educational Resources Information Center

    Dwyer, Jerry; Barnard, Roger; Cook, David; Corte, Jennifer

    2009-01-01

    This paper discusses some common iterations of complex functions. The presentation is such that similar processes can easily be implemented and understood by undergraduate students. The aim is to illustrate some of the beauty of complex dynamics in an informal setting, while providing a couple of results that are not otherwise readily available in…

  15. On the safety of ITER accelerators

    PubMed Central

    Li, Ge

    2013-01-01

    Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate −1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER. PMID:24008267

  16. Solving Differential Equations Using Modified Picard Iteration

    ERIC Educational Resources Information Center

    Robin, W. A.

    2010-01-01

    Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…

  17. ITER faces further five-year delay

    NASA Astrophysics Data System (ADS)

    Clery, Daniel

    2016-06-01

    The €14bn ITER fusion reactor currently under construction in Cadarache, France, will require an additional cash injection of €4.6bn if it is to start up in 2025 - a target date that is already five years later than currently scheduled.

  18. Microtearing Instability In The ITER Pedestal

    SciTech Connect

    Wong, K. L.; Mikkelsen, D. R.; Rewoldt, G. M.; Budny, R.

    2010-12-01

    Unstable microtearing modes are discovered by the GS2 gyrokinetic simulation code in the pedestal region of a simulated ITER H-mode plasma with approximately 400 MW of DT fusion power. Existing nonlinear theory indicates that these instabilities should produce stochastic magnetic fields and broaden the pedestal. The resulting electron thermal conductivity is estimated and the implications of these findings are discussed.

  19. Iteration and Anxiety in Mathematical Literature

    ERIC Educational Resources Information Center

    Capezzi, Rita; Kinsey, L. Christine

    2016-01-01

    We describe our experiences in team-teaching an honors seminar on mathematics and literature. We focus particularly on two of the texts we read: Georges Perec's "How to Ask Your Boss for a Raise" and Alain Robbe-Grillet's "Jealousy," both of which make use of iterative structures.

  20. Development of advanced inductive scenarios for ITER

    NASA Astrophysics Data System (ADS)

    Luce, T. C.; Challis, C. D.; Ide, S.; Joffrin, E.; Kamada, Y.; Politzer, P. A.; Schweinzer, J.; Sips, A. C. C.; Stober, J.; Giruzzi, G.; Kessel, C. E.; Murakami, M.; Na, Y.-S.; Park, J. M.; Polevoi, A. R.; Budny, R. V.; Citrin, J.; Garcia, J.; Hayashi, N.; Hobirk, J.; Hudson, B. F.; Imbeaux, F.; Isayama, A.; McDonald, D. C.; Nakano, T.; Oyama, N.; Parail, V. V.; Petrie, T. W.; Petty, C. C.; Suzuki, T.; Wade, M. R.; the ITPA Integrated Operation Scenario Topical Group Members; the ASDEX-Upgrade Team; the DIII-D Team; EFDA Contributors, JET; the JT-60U Team

    2014-01-01

    Since its inception in 2002, the International Tokamak Physics Activity topical group on Integrated Operational Scenarios (IOS) has coordinated experimental and modelling activity on the development of advanced inductive scenarios for applications in the ITER tokamak. The physics basis and the prospects for applications in ITER have been advanced significantly during that time, especially with respect to experimental results. The principal findings of this research activity are as follows. Inductive scenarios capable of higher normalized pressure (βN ⩾ 2.4) than the ITER baseline scenario (βN = 1.8) with normalized confinement at or above the standard H-mode scaling are well established under stationary conditions on the four largest diverted tokamaks (AUG, DIII-D, JET, JT-60U), demonstrated in a database of more than 500 plasmas from these tokamaks analysed here. The parameter range where high performance is achieved is broad in q95 and density normalized to the empirical density limit. MHD modes can play a key role in reaching stationary high performance, but also define the limits to achieved stability and confinement. Projection of performance in ITER from existing experiments uses empirical scalings and theory-based modelling. The status of the experimental validation of both approaches is summarized here. The database shows significant variation in the energy confinement normalized to standard H-mode confinement scalings, indicating the possible influence of additional physics variables absent from the scalings. Tests using the available information on rotation and the ratio of the electron and ion temperatures indicate neither of these variables in isolation can explain the variation in normalized confinement observed. Trends in the normalized confinement with the two dimensionless parameters that vary most from present-day experiments to ITER, gyroradius and collision frequency, are significant. Regression analysis on the multi-tokamak database has been

  1. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Various length filters are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise are added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
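
    The inverse-filter construction described above is direct to express; a small sketch follows (the Gaussian response, grid size and test peaks are illustrative; truncating the inverse filter to various lengths, as optimized in the paper, trades fidelity against noise amplification):

        import numpy as np

        n = 256
        x = np.arange(n)
        h = np.exp(-0.5 * ((x - n // 2) / 1.5) ** 2)   # narrow Gaussian response
        h /= h.sum()

        H = np.fft.fft(np.fft.ifftshift(h))            # system transfer function
        inv_filter = np.fft.ifft(1.0 / H).real         # IDFT of the reciprocal

        f_true = np.zeros(n); f_true[100] = 1.0; f_true[130] = 0.5  # peak-type input
        g = np.fft.ifft(np.fft.fft(f_true) * H).real                # convolved data
        f_rec = np.fft.ifft(np.fft.fft(g) * np.fft.fft(inv_filter)).real
        print(np.max(np.abs(f_rec - f_true)))          # ~0 for noise-free data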

  2. Testing Short Samples of ITER Conductors and Projection of Their Performance in ITER Magnets

    SciTech Connect

    Martovetsky, N N

    2007-08-20

    Qualification of the ITER conductor is absolutely necessary, but testing large-scale conductors is expensive and time consuming. Testing straight 3-4 m long samples in the bore of a split solenoid is relatively economical compared with fabricating a coil to be tested in the bore of a background-field solenoid. However, testing short samples may give ambiguous results due to different constraints on current redistribution in the cable, or other end effects that are not present in a large magnet. This paper discusses the processes taking place in the ITER conductor, the conditions under which conductor performance could be distorted, and possible signal processing to deduce the behavior of ITER conductors in ITER magnets from the test data.

  3. Overview on Experiments On ITER-like Antenna On JET And ICRF Antenna Design For ITER

    SciTech Connect

    Nightingale, M. P. S.; Blackman, T.; Edwards, D.; Fanthome, J.; Graham, M.; Hamlyn-Harris, C.; Hancock, D.; Jacquet, P.; Mayoral, M.-L.; Monakhov, I.; Nicholls, K.; Stork, D.; Whitehurst, A.; Wilson, D.; Wooldridge, E.

    2009-11-26

    Following an overview of the ITER Ion Cyclotron Resonance Frequency (ICRF) system, the JET ITER-like antenna (ILA) will be described. The ILA was designed to test the following ITER issues: (a) reliable operation at power densities of order 8 MW/m² at voltages up to 45 kV using a close-packed array of straps; (b) powering through ELMs using an internal (in-vacuum) conjugate-T junction; (c) protection from arcing in a conjugate-T configuration, using both existing and novel systems; and (d) resilience to disruption forces. ITER-relevant results have been achieved: operation at high coupled power density; control of the antenna matching elements in the presence of high inter-strap coupling; use of four conjugate-T systems (as would be used in ITER, should a conjugate-T approach be used); operation with RF voltages on the antenna structures up to 42 kV; achievement of ELM tolerance with a conjugate-T configuration by operating at 3 Ω real impedance at the conjugate-T point; and validation of arc detection systems on conjugate-T configurations in ELMy H-mode plasmas. The impact of these results on the predicted performance and design of the ITER antenna will be reviewed. In particular, the implications of the RF coupling measured on JET will be discussed.

  4. Communication: Iteration-free, weighted histogram analysis method in terms of intensive variables

    PubMed Central

    Kim, Jaegil; Keyes, Thomas; Straub, John E.

    2011-01-01

    We present an iteration-free weighted histogram method in terms of intensive variables that directly determines the inverse statistical temperature, βS = ∂S/∂E, with S the microcanonical entropy. The method eliminates iterative evaluations of the partition functions intrinsic to the conventional approach and leads to a dramatic acceleration of the posterior analysis of combining statistically independent simulations with no loss in accuracy. The synergistic combination of the method with generalized ensemble weights provides insights into the nature of the underlying phase transitions via signatures in βS characteristic of finite size systems. The versatility and accuracy of the method is illustrated for the Ising and Potts models. PMID:21842919

  5. Iterative performance of various formulations of the SPN equations

    NASA Astrophysics Data System (ADS)

    Zhang, Yunhuang; Ragusa, Jean C.; Morel, Jim E.

    2013-11-01

    In this paper, the Standard, Composite, and Canonical forms of the Simplified PN (SPN) equations are reviewed and their corresponding iterative properties are compared. The Gauss-Seidel (FLIP), Explicit, and preconditioned Source Iteration iterative schemes have been analyzed for both isotropic and highly anisotropic (Fokker-Planck) scattering. The iterative performance of the various SPN forms is assessed using Fourier analysis, corroborated with numerical experiments.

  6. Illustrating the practice of statistics

    SciTech Connect

    Hamada, Christina A; Hamada, Michael S

    2009-01-01

    The practice of statistics involves analyzing data and planning data collection schemes to answer scientific questions. Issues often arise with the data that must be dealt with and can lead to new procedures. In analyzing data, these issues can sometimes be addressed through the statistical models that are developed. Simulation can also be helpful in evaluating a new procedure. Moreover, simulation coupled with optimization can be used to plan a data collection scheme. The practice of statistics as just described is much more than just using a statistical package. In analyzing the data, it involves understanding the scientific problem and incorporating the scientist's knowledge. In modeling the data, it involves understanding how the data were collected and accounting for limitations of the data where possible. Moreover, the modeling is likely to be iterative, considering a series of models and evaluating the fit of these models. Designing a data collection scheme involves understanding the scientist's goal and staying within his/her budget in terms of time and the available resources. Consequently, a practicing statistician is faced with such tasks and requires skills and tools to do them quickly. We have written this article for students to provide a glimpse of the practice of statistics. To illustrate the practice of statistics, we consider a problem motivated by some precipitation data that our relative, Masaru Hamada, collected some years ago. We describe his rain gauge observational study in Section 2. We describe modeling and an initial analysis of the precipitation data in Section 3. In Section 4, we consider alternative analyses that address potential issues with the precipitation data. In Section 5, we consider the impact of incorporating additional information. We design a data collection scheme to illustrate the use of simulation and optimization in Section 6. We conclude this article in Section 7 with a discussion.

  7. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  8. He's iteration formulation for solving nonlinear algebraic equations

    NASA Astrophysics Data System (ADS)

    Qian, W.-X.; Ye, Y.-H.; Chen, J.; Mo, L.-F.

    2008-02-01

    The Newton iteration method is sensitive to the initial guess and to the slope of the function there. To overcome this shortcoming, He's iteration method is used to solve nonlinear algebraic equations where the Newton iteration method becomes invalid. Some examples are given, showing that the method is effective.
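
    The sensitivity being addressed is easy to demonstrate (the cubic below is a textbook example of Newton cycling, not one of the paper's test equations; He's iteration itself is not reproduced here):

        import numpy as np

        def newton(f, fp, x0, n=50):
            x = x0
            for _ in range(n):
                x -= f(x) / fp(x)            # fails where fp(x) ~ 0 (flat slope)
            return x

        f  = lambda x: x**3 - 2.0 * x + 2.0  # classic cycling example
        fp = lambda x: 3.0 * x**2 - 2.0
        print(newton(f, fp, -2.0))           # converges to the root near -1.769
        print(newton(f, fp, 0.0))            # starting at 0 cycles between 0 and 1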

  9. Development and benchmarking of TASSER(iter) for the iterative improvement of protein structure predictions.

    PubMed

    Lee, Seung Yup; Skolnick, Jeffrey

    2007-07-01

    To improve the accuracy of TASSER models especially in the limit where threading-provided template alignments are of poor quality, we have developed the TASSER(iter) algorithm which uses the templates and contact restraints from TASSER generated models for iterative structure refinement. We apply TASSER(iter) to a large benchmark set of 2,773 nonhomologous single domain proteins that are ≤200 residues in length and that cover the PDB at the level of 35% pairwise sequence identity. Overall, TASSER(iter) models have a smaller global average RMSD of 5.48 Å compared to 5.81 Å RMSD of the original TASSER models. Classifying the targets by the level of prediction difficulty (where Easy targets have a good template with a corresponding good threading alignment, Medium targets have a good template but a poor alignment, and Hard targets have an incorrectly identified template), TASSER(iter) (TASSER) models have an average RMSD of 4.15 Å (4.35 Å) for the Easy set and 9.05 Å (9.52 Å) for the Hard set. The largest reduction of average RMSD is for the Medium set where the TASSER(iter) models have an average global RMSD of 5.67 Å compared to 6.72 Å of the TASSER models. Seventy percent of the Medium set TASSER(iter) models have a smaller RMSD than the TASSER models, while 63% of the Easy and 60% of the Hard TASSER models are improved by TASSER(iter). For the foldable cases, where the targets have a RMSD to the native <6.5 Å, TASSER(iter) shows obvious improvement over TASSER models: for the Medium set, it improves the success rate from 57.0 to 67.2%, followed by the Hard targets where the success rate improves from 32.0 to 34.8%, with the smallest improvement in the Easy targets from 82.6 to 84.0%. These results suggest that TASSER(iter) can provide more reliable predictions for targets of Medium difficulty, a range that had resisted improvement in the quality of protein structure predictions.

  10. RESEARCH NOTE FROM COLLABORATION: Adaptive vertex fitting

    NASA Astrophysics Data System (ADS)

    Waltenberger, Wolfgang; Frühwirth, Rudolf; Vanlaer, Pascal

    2007-12-01

    Vertex fitting frequently has to deal with both mis-associated tracks and mis-measured track errors. A robust, adaptive method is presented that is able to cope with contaminated data. The method is formulated as an iterative re-weighted Kalman filter. Annealing is introduced to avoid local minima in the optimization. For the initialization of the adaptive filter a robust algorithm is presented that turns out to perform well in a wide range of applications. The tuning of the annealing schedule and of the cut-off parameter is described using simulated data from the CMS experiment. Finally, the adaptive property of the method is illustrated in two examples.

  11. A holistic strategy for adaptive land management

    USGS Publications Warehouse

    Herrick, Jeffrey E.; Duniway, Michael C.; Pyke, David A.; Bestelmeyer, Brandon T.; Wills, Skye A.; Brown, Joel R.; Karl, Jason W.; Havstad, Kris M.

    2012-01-01

    Adaptive management is widely applied to natural resources management (Holling 1973; Walters and Holling 1990). Adaptive management can be generally defined as an iterative decision-making process that incorporates formulation of management objectives, actions designed to address these objectives, monitoring of results, and repeated adaptation of management until desired results are achieved (Brown and MacLeod 1996; Savory and Butterfield 1999). However, adaptive management is often criticized because very few projects ever complete more than one cycle, resulting in little adaptation and little knowledge gain (Lee 1999; Walters 2007). One significant criticism is that adaptive management is often used as a justification for undertaking actions with uncertain outcomes or as a surrogate for the development of specific, measurable indicators and monitoring programs (Lee 1999; Ruhl 2007).

  12. Starting with the Familiar: An Element in Climate Change Adaptation

    NASA Astrophysics Data System (ADS)

    Redmond, K. T.

    2008-12-01

    A practical strategy for adaptation would be most effective if it began with what is familiar. In many ways, the uses for climate data and information in different sectors have been to provide ever more refined adaptation to what is perceived as the current climate. This perspective offers a starting point for adaptation to a climate with different properties. A widely shared traditional assumption is that the statistics of the future will be like the statistics of the past. A large number of tools and applications have been developed that have taken this presumption as nearly an article of faith. Approaches that can continue to make use of these familiar tools, but at the same time allow for the data upon which they operate to slowly begin to deviate from present or past values, are likely to have greater acceptance. Climate change in most circumstances will be experienced not as an abrupt transition (though this may sometimes happen), but rather as a gradual departure from what has characterized the past. A major need is for the climate modeling community to express the output of future climate projections in terms that users are accustomed to working with. At present there is a very large disconnect between the modeling and user communities in this regard. As one example, there are few easy ways of obtaining a two-hundred year time series of daily or hourly data for a site of interest, spanning the past and future century, from a model or set of models, that can be readily put into a form tailored to such existing software. Many models do not even save data of interest to, and familiar to, users. Because models are not likely to have the correct statistics for the present and recent climate, a step is needed to adjust for inaccuracies, which will probably be the rule rather than the exception. For most models these errors have been diminishing with each new iteration, and are likely to be generally tolerable. At present there is no set of procedures for helping

  13. Weighted iterative reconstruction for magnetic particle imaging

    NASA Astrophysics Data System (ADS)

    Knopp, T.; Rahmer, J.; Sattel, T. F.; Biederer, S.; Weizenecker, J.; Gleich, B.; Borgert, J.; Buzug, T. M.

    2010-03-01

    Magnetic particle imaging (MPI) is a new imaging technique capable of imaging the distribution of superparamagnetic particles at high spatial and temporal resolution. For the reconstruction of the particle distribution, a system of linear equations has to be solved. The mathematical solution to this linear system can be obtained using a least-squares approach. In this paper, it is shown that the quality of the least-squares solution can be improved by incorporating a weighting matrix using the reciprocal of the matrix-row energy as weights. A further benefit of this weighting is that iterative algorithms, such as the conjugate gradient method, converge rapidly yielding the same image quality as obtained by singular value decomposition in only a few iterations. Thus, the weighting strategy in combination with the conjugate gradient method improves the image quality and substantially shortens the reconstruction time. The performance of weighting strategy and reconstruction algorithms is assessed with experimental data of a 2D MPI scanner.
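
    A small sketch of the weighted normal equations solved by conjugate gradients (the random system matrix and phantom below stand in for a measured MPI system function):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        rng = np.random.default_rng(1)
        S = rng.standard_normal((400, 100)) * np.linspace(1.0, 0.01, 400)[:, None]
        c_true = np.zeros(100); c_true[40:60] = 1.0
        u = S @ c_true + 1e-3 * rng.standard_normal(400)

        w = 1.0 / np.sum(S * S, axis=1)        # reciprocal of matrix-row energy
        # weighted least squares via its normal equations: S^T W S c = S^T W u
        A = LinearOperator((100, 100), matvec=lambda c: S.T @ (w * (S @ c)),
                           dtype=np.float64)
        c_hat, info = cg(A, S.T @ (w * u), maxiter=20)  # few iterations suffice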

  14. Iterative most likely oriented point registration.

    PubMed

    Billings, Seth; Taylor, Russell

    2014-01-01

    A new algorithm for model based registration is presented that optimizes both position and surface normal information of the shapes being registered. This algorithm extends the popular Iterative Closest Point (ICP) algorithm by incorporating the surface orientation at each point into both the correspondence and registration phases of the algorithm. For the correspondence phase an efficient search strategy is derived which computes the most probable correspondences considering both position and orientation differences in the match. For the registration phase an efficient, closed-form solution provides the maximum likelihood rigid body alignment between the oriented point matches. Experiments by simulation using human femur data demonstrate that the proposed Iterative Most Likely Oriented Point (IMLOP) algorithm has a strong accuracy advantage over ICP and has increased ability to robustly identify a successful registration result.
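
    A condensed sketch of the two phases (the probabilistic match criterion and noise parameters of IMLOP are simplified here to a weighted position/normal score, the registration step uses a Kabsch/SVD fit on positions only, and the toy sphere data are illustrative):

        import numpy as np

        def match_oriented(P, NP, Q, NQ, sigma2=0.05, kappa=2.0):
            """Correspondence phase: most probable matches under a combined
            position-distance and normal-alignment score (simplified)."""
            d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1) / sigma2
            cosn = NP @ NQ.T                     # alignment of surface normals
            return (d2 - kappa * cosn).argmin(axis=1)

        def rigid_align(P, Q):
            """Registration phase: closed-form least-squares rigid body fit."""
            cP, cQ = P.mean(0), Q.mean(0)
            U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                   # D guards against reflections
            return R, cQ - R @ cP

        rng = np.random.default_rng(0)
        Q = rng.standard_normal((200, 3))
        Q /= np.linalg.norm(Q, axis=1, keepdims=True)   # toy shape: unit sphere
        NQ = Q.copy()                                    # outward normals
        th = 0.3
        R0 = np.array([[np.cos(th), -np.sin(th), 0.0],
                       [np.sin(th),  np.cos(th), 0.0],
                       [0.0, 0.0, 1.0]])
        P, NP = Q @ R0.T, NQ @ R0.T                      # misaligned copy

        for _ in range(20):                              # iterate match + align
            m = match_oriented(P, NP, Q, NQ)
            R, t = rigid_align(P, Q[m])
            P, NP = P @ R.T + t, NP @ R.T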

  15. The Cryostat and Subsystems Development at ITER

    NASA Astrophysics Data System (ADS)

    Sekachev, Igor; Meekins, Michael; Sborchia, Carlo; Vitupier, Guillaume; Xie, Han; Zhou, Caipin

    ITER is a large experimental tokamak being built to research fusion power. The ITER cryostat is a multifunctional system which provides vacuum insulation for the superconducting magnets operating at 4.5 K and for the thermal shield operating at 80 K. It also serves as a structural support for the tokamak and provides access ways and corridors to the vacuum vessel for diagnostic lines of sight, additional heating beams and the deployment of remote handling equipment. The cryostat has feed-through penetrations for all the equipment connecting elements of systems outside the cryostat to the corresponding elements inside the cryostat. The cryostat is a vacuum containment vessel with a very large volume of ~16,000 m³, designed to be evacuated to a base pressure of 10⁻⁴ Pa. Design details of the cryostat and associated systems, including the Torus Cryopump Housing (TCPH), are discussed. A status report on the cryostat developments is presented.

  16. Main challenges for ITER optical diagnostics

    SciTech Connect

    Vukolov, K. Yu.; Orlovskiy, I. I.; Alekseev, A. G.; Borisov, A. A.; Andreenko, E. N.; Kukushkin, A. B.; Lisitsa, V. S.; Neverov, V. S.

    2014-08-21

    A review is made of the problems of ITER optical diagnostics. Most of these problems will be related to the intense neutron radiation from the hot plasma. At high radiation loads, most types of materials gradually change their properties. This effect is most critical for optical diagnostics because of the degradation of optical glasses and mirrors. The degradation of the mirrors that collect the light from the plasma will be induced mainly by impurity deposition and/or sputtering by charge-exchange atoms. Main attention is paid to the search for glasses for vacuum windows and achromatic lenses which are stable under ITER irradiation conditions. The latest results of irradiation tests in a nuclear reactor of the candidate silica glasses KU-1, KS-4V and TF 200 are presented. An additional problem is discussed that concerns the stray light produced by multiple reflections from the first wall of the intense light emitted in the divertor plasma.

  17. New iterative solvers for the NAG Libraries

    SciTech Connect

    Salvini, S.; Shaw, G.

    1996-12-31

    The purpose of this paper is to introduce the work which has been carried out at NAG Ltd to update the iterative solvers for sparse systems of linear equations, both symmetric and unsymmetric, in the NAG Fortran 77 Library. Our current plans to extend this work and include it in our other numerical libraries in our range are also briefly mentioned. We have added to the Library the new Chapter F11, entirely dedicated to sparse linear algebra. At Mark 17, the F11 Chapter includes sparse iterative solvers, preconditioners, utilities and black-box routines for sparse symmetric (both positive-definite and indefinite) linear systems. Mark 18 will add solvers, preconditioners, utilities and black-boxes for sparse unsymmetric systems: the development of these has already been completed.

  19. A fast iterated conditional modes algorithm for water-fat decomposition in MRI.

    PubMed

    Huang, Fangping; Narayan, Sreenath; Wilson, David; Johnson, David; Zhang, Guo-Qiang

    2011-08-01

    Decomposition of water and fat in magnetic resonance imaging (MRI) is important for biomedical research and clinical applications. In this paper, we propose a two-phase approach for the three-point water-fat decomposition problem. Our contribution consists of two components: 1) a background-masked Markov random field (MRF) energy model to formulate the local smoothness of the field inhomogeneity; 2) a new iterated conditional modes (ICM) algorithm for high-performance optimization of the MRF energy model. The MRF energy model is integrated with background masking to prevent error propagation from background estimates as well as to improve efficiency. The central component of our new ICM algorithm is the stability tracking (ST) mechanism, intended to dynamically track iterative stability on pixels so that computation in each iteration is performed only on unstable pixels. The ST mechanism significantly improves the efficiency of ICM. We also develop a median-based initialization algorithm to provide good initial guesses for the ICM iterations, and an adaptive gradient-based scheme for parametric configuration of the MRF model. We evaluate the robustness of our approach with high-resolution mouse datasets acquired from 7T MRI. PMID:21402510
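
    A sketch of ICM with a stability-tracking sweep (the Potts smoothness term stands in for the paper's field-inhomogeneity MRF energy, and the re-activation rule is a simplified version of ST):

        import numpy as np

        def icm_st(unary, beta=1.0, max_sweeps=50):
            """ICM for a Potts-smoothed MRF with stability tracking: only
            pixels flagged unstable are revisited each sweep.
            unary: (H, W, K) per-pixel label costs."""
            H, W, K = unary.shape
            labels = unary.argmin(axis=2)
            unstable = np.ones((H, W), bool)          # all pixels active at first
            for _ in range(max_sweeps):
                if not unstable.any():
                    break
                nxt = np.zeros((H, W), bool)
                for i, j in zip(*np.nonzero(unstable)):
                    cost = unary[i, j].copy()
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:  # Potts penalty
                            cost += beta * (np.arange(K) != labels[ni, nj])
                    new = int(cost.argmin())
                    if new != labels[i, j]:
                        labels[i, j] = new               # re-activate neighbors
                        nxt[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] = True
                unstable = nxt
            return labels

        # toy usage: noisy two-class unary costs
        rng = np.random.default_rng(0)
        labels = icm_st(rng.random((32, 32, 2)), beta=0.8)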

  20. Helicopter trim analysis by shooting and finite element methods with optimally damped Newton iterations

    NASA Technical Reports Server (NTRS)

    Achar, N. S.; Gaonkar, G. H.

    1993-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used, and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
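
    A minimal damped Newton sketch, with a residual-based halving rule standing in for the optimally selected damping parameter (the 2 × 2 test system is illustrative, not a trim problem):

        import numpy as np

        def damped_newton(F, J, x0, max_iter=50, tol=1e-10):
            """Newton iteration with damping: accept the largest step fraction
            lam in {1, 1/2, 1/4, ...} that reduces the residual norm."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                f = F(x)
                if np.linalg.norm(f) < tol:
                    break
                dx = np.linalg.solve(J(x), -f)
                lam = 1.0
                while lam > 1e-4 and np.linalg.norm(F(x + lam * dx)) >= np.linalg.norm(f):
                    lam *= 0.5                  # damp until the residual decreases
                x = x + lam * dx
            return x

        F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[1] - x[0]**2])
        J = lambda x: np.array([[2 * x[0], 2 * x[1]], [-2 * x[0], 1.0]])
        print(damped_newton(F, J, np.array([3.0, -2.0])))  # far-from-solution start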

  1. Design studies for ITER x-ray diagnostics

    SciTech Connect

    Hill, K.W.; Bitter, M.; von Goeler, S.; Hsuan, H.

    1995-01-01

    Concepts for adapting conventional tokamak x-ray diagnostics to the harsh radiation environment of ITER include use of grazing-incidence (GI) x-ray mirrors or man-made Bragg multilayer (ML) elements to remove the x-ray beam from the neutron beam, or use of bundles of glass-capillary x-ray "light pipes" embedded in radiation shields to reduce the neutron/gamma-ray fluxes onto the detectors while maintaining usable x-ray throughput. The x-ray optical element with the broadest bandwidth and highest throughput, the GI mirror, can provide adequate lateral deflection (10 cm for a deflected-path length of 8 m) at x-ray energies up to 12, 22, or 30 keV for one, two, or three deflections, respectively. This element can be used with the broad band, high intensity x-ray imaging system (XIS), the pulseheight analysis (PHA) survey spectrometer, or the high resolution Johann x-ray crystal spectrometer (XCS), which is used for ion-temperature measurement. The ML mirrors can isolate the detector from the neutron beam with a single deflection for energies up to 50 keV, but have much narrower bandwidth and lower x-ray power throughput than do the GI mirrors; they are unsuitable for use with the XIS or PHA, but they could be used with the XCS; in particular, these deflectors could be used between ITER and the biological shield to avoid direct plasma neutron streaming through the biological shield. Graded-d ML mirrors have good reflectivity from 20 to 70 keV, but still at grazing angles (<3 mrad). The efficiency at 70 keV for double reflection (10 percent), as required for adequate separation of the x-ray and neutron beams, is high enough for PHA requirements, but not for the XIS. Further optimization may be possible.

  2. Iterative solutions to the Dirac equation

    SciTech Connect

    Ciftci, Hakan; Hall, Richard L.; Saad, Nasser

    2005-08-15

    We consider a single particle which is bound by a central potential and obeys the Dirac equation in d dimensions. We first apply the asymptotic iteration method to recover the known exact solutions for the pure Coulomb case. For a screened Coulomb potential and for a Coulomb plus linear potential with linear scalar confinement, the method is used to obtain accurate approximate solutions for both eigenvalues and wave functions.

  3. Iterative solution of high order compact systems

    SciTech Connect

    Spotz, W.F.; Carey, G.F.

    1996-12-31

    We have recently developed a class of finite difference methods which provide higher accuracy and greater stability than standard central or upwind difference methods, but still reside on a compact patch of grid cells. In the present study we investigate the performance of several gradient-type iterative methods for solving the associated sparse systems. Both serial and parallel performance studies have been made. Representative examples are taken from elliptic PDEs for diffusion, convection-diffusion, and viscous flow applications.

  4. Iterates of a Berezin-type transform

    NASA Astrophysics Data System (ADS)

    Liu, Congwen

    2007-05-01

    Let B denote the open unit ball and let dV denote the Lebesgue measure on B, normalized so that the measure of B equals 1. For a suitable integrable function f, a Berezin-type transform Tf is defined by integrating f against a Berezin-type kernel on B. We prove that the iterates T^k f converge to the Poisson extension of the boundary values of f as k → ∞. This can be viewed as a higher-dimensional generalization of a previous result obtained independently by Engliš and Zhu.

  5. Iterative pass optimization of sequence data

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendant information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure, the iterative improvement method, to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed.

  6. Iterative solution of the semiconductor device equations

    SciTech Connect

    Bova, S.W.; Carey, G.F.

    1996-12-31

    Most semiconductor device models can be described by a nonlinear Poisson equation for the electrostatic potential coupled to a system of convection-reaction-diffusion equations for the transport of charge and energy. These equations are typically solved in a decoupled fashion and e.g. Newton`s method is used to obtain the resulting sequences of linear systems. The Poisson problem leads to a symmetric, positive definite system which we solve iteratively using conjugate gradient. The transport equations lead to nonsymmetric, indefinite systems, thereby complicating the selection of an appropriate iterative method. Moreover, their solutions exhibit steep layers and are subject to numerical oscillations and instabilities if standard Galerkin-type discretization strategies are used. In the present study, we use an upwind finite element technique for the transport equations. We also evaluate the performance of different iterative methods for the transport equations and investigate various preconditioners for a few generalized gradient methods. Numerical examples are given for a representative two-dimensional depletion MOSFET.
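
    As an illustration of the symmetric positive definite solve mentioned above, here is a minimal conjugate gradient sketch in Python (the tridiagonal 1-D Poisson test matrix is a stand-in, not the study's MOSFET problem):

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            """Solve A x = b for symmetric positive definite A."""
            x = np.zeros_like(b, dtype=float)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # toy 1-D Poisson problem: tridiagonal SPD matrix
        n = 50
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        print(np.linalg.norm(A @ conjugate_gradient(A, np.ones(n)) - np.ones(n)))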

  7. Perturbation resilience and superiorization of iterative algorithms

    NASA Astrophysics Data System (ADS)

    Censor, Y.; Davidi, R.; Herman, G. T.

    2010-06-01

    Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a 'superiorized version' of it that retains its computational efficiency but nevertheless goes a long way toward solving an optimization problem. This is possible to do if the original algorithm is 'perturbation resilient', which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image.
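
    The superiorization idea above can be made concrete with a small sketch: a basic feasibility-seeking projection algorithm is perturbed by summable steps that reduce a target function (here the squared norm, a hypothetical choice; the paper's image-reconstruction application uses total variation):

        import numpy as np

        def project_halfspace(x, a, b):
            """Orthogonal projection of x onto the halfspace {y : a.y <= b}."""
            viol = a @ x - b
            return x - (max(viol, 0.0) / (a @ a)) * a

        def superiorized_projections(halfspaces, phi_grad, x0, n_iter=200):
            """Projections onto convex sets with summable perturbations that
            steer toward smaller phi (the superiorization heuristic)."""
            x = x0.astype(float)
            beta = 1.0
            for _ in range(n_iter):
                g = phi_grad(x)                      # nonascending direction for phi
                if np.linalg.norm(g) > 0:
                    x -= beta * g / np.linalg.norm(g)
                beta *= 0.9                          # summable steps preserve resilience
                for a, b in halfspaces:              # one sweep of the basic algorithm
                    x = project_halfspace(x, a, b)
            return x

        # small-norm point in the intersection of two halfspaces
        H = [(np.array([1.0, 1.0]), 1.0), (np.array([-1.0, 2.0]), 2.0)]
        print(superiorized_projections(H, lambda x: 2 * x, np.array([5.0, 5.0])))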

  8. Recent ADI iteration analysis and results

    SciTech Connect

    Wachspress, E.L.

    1994-12-31

    Some recent ADI iteration analysis and results are discussed. Discovery that the Lyapunov and Sylvester matrix equations are model ADI problems stimulated much research on ADI iteration with complex spectra. The ADI rational Chebyshev analysis parallels the classical linear Chebyshev theory. Two distinct approaches have been applied to these problems. First, parameters which were optimal for real spectra were shown to be nearly optimal for certain families of complex spectra. In the linear case these were spectra bounded by ellipses in the complex plane. In the ADI rational case these were spectra bounded by "elliptic-function regions". The logarithms of the latter appear like ellipses, and the logarithms of the optimal ADI parameters for these regions are similar to the optimal parameters for linear Chebyshev approximation over superimposed ellipses. W.B. Jordan's bilinear transformation of real variables to reduce the two-variable problem to one variable was generalized into the complex plane. This was needed for ADI iterative solution of the Sylvester equation.
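
    A minimal sketch of the ADI iteration for the Sylvester equation A X + X B = C referred to above (Python; the real shifts here are a heuristic geometric ladder, not the optimal parameters the abstract analyzes):

        import numpy as np

        def adi_sylvester(A, B, C, shifts):
            """One ADI double sweep per shift p for A X + X B = C."""
            n, m = A.shape[0], B.shape[0]
            X = np.zeros((n, m))
            I_n, I_m = np.eye(n), np.eye(m)
            for p in shifts:
                # half step: (A + p I) X = C - X_old (B - p I)
                X = np.linalg.solve(A + p * I_n, C - X @ (B - p * I_m))
                # full step: X (B + p I) = C - (A - p I) X_half
                X = np.linalg.solve((B + p * I_m).T, (C - (A - p * I_n) @ X).T).T
            return X

        # toy check on diagonal spectra in [1, 10]
        rng = np.random.default_rng(0)
        A = np.diag(rng.uniform(1, 10, 5)); B = np.diag(rng.uniform(1, 10, 4))
        X_true = rng.standard_normal((5, 4))
        C = A @ X_true + X_true @ B
        print(np.abs(adi_sylvester(A, B, C, np.geomspace(1, 10, 8)) - X_true).max())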

  9. Conformal mapping and convergence of Krylov iterations

    SciTech Connect

    Driscoll, T.A.; Trefethen, L.N.

    1994-12-31

    Connections between conformal mapping and matrix iterations have been known for many years. The idea underlying these connections is as follows. Suppose the spectrum of a matrix or operator A is contained in a Jordan region E in the complex plane with 0 not an element of E. Let φ(z) denote a conformal map of the exterior of E onto the exterior of the unit disk, with φ(∞) = ∞. Then 1/|φ(0)| is an upper bound for the optimal asymptotic convergence factor of any Krylov subspace iteration. This idea can be made precise in various ways, depending on the matrix iterations, on whether A is finite or infinite dimensional, and on what bounds are assumed on the non-normality of A. This paper explores these connections for a variety of matrix examples, making use of a new MATLAB Schwarz-Christoffel Mapping Toolbox developed by the first author. Unlike the earlier Fortran Schwarz-Christoffel package SCPACK, the new toolbox computes exterior as well as interior Schwarz-Christoffel maps, making it easy to experiment with spectra that are not necessarily symmetric about an axis.
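
    In symbols, the bound described above reads as follows (a standard statement for essentially normal A, with φ and E as defined in the abstract; the constant C absorbs non-normality effects):

        \[ \|r_k\| \lesssim C\,\rho^k, \qquad \rho = \frac{1}{|\phi(0)|}, \]

    so the residuals of an optimal Krylov subspace iteration decay asymptotically at least at the geometric rate ρ.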

  10. Modelling the ITER glow discharge plasma

    NASA Astrophysics Data System (ADS)

    Kogut, D.; Douai, D.; Hagelaar, G.; Pitts, R. A.

    2015-08-01

    The ITER glow discharge cleaning (GDC) system (Maruyama et al., 2012) is aimed at preparing in-vessel component surfaces prior to machine start-up. In order to assess glow discharge uniformity and wall coverage, and thus the conditioning efficiency of the system, a new 2D multi-fluid model has been developed (Hagelaar, 2012). In this work the model is compared with published experimental data on GDC wall ion fluxes in JET and RFX (Douai et al., 2013; Canton et al., 2013). Simulations of H2-GDC in ITER for the case of 1 or 2 anodes indicate a good level of homogeneity of the plasma parameters in the negative glow and of the wall ion flux in the common pressure domain for GDC, 0.1-0.5 Pa. Although the model geometry does not allow simulation of all seven ITER anodes operating simultaneously, the results can be extrapolated to the full system with an average ion current density of 0.21 A/m², which is comparable to JET (0.10 A/m²).

  11. ITER Creation Safety File Expertise Results

    NASA Astrophysics Data System (ADS)

    Perrault, D.

    2013-06-01

    In March 2010, the ITER operator delivered the facility safety file to the French "Autorité de Sûreté Nucléaire" (ASN) as part of its request for the creation decree, legally necessary before building works can begin on the site. The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support of the ASN, recently completed its expert assessment of the safety measures proposed for ITER, on the basis of this file and of additional technical documents from the operator. This paper presents the IRSN's main conclusions. In particular, they focus on the radioactive materials involved, the safety and radiation protection demonstration (suitability of risk management measures…), foreseeable accidents, the design of buildings and safety-important components and, finally, the wastes and effluents to be produced. This assessment was just the first legally required step in the ongoing safety monitoring of the ITER project, which will include other complete regulatory re-evaluations.

  12. The dynamics of iterated transportation simulations

    SciTech Connect

    Nagel, K.; Rickert, M.; Simon, P.M.

    1998-12-01

    Transportation-related decisions of people often depend on what everybody else is doing. For example, decisions about mode choice, route choice, activity scheduling, etc., can depend on congestion, caused by the aggregated behavior of others. From a conceptual viewpoint, this consistency problem causes a deadlock, since nobody can start planning because they do not know what everybody else is doing. It is the process of iterations that is examined in this paper as a method for solving the problem. In this paper, the authors concentrate on the aspect of the iterative process that is probably the most important one from a practical viewpoint, and that is the "uniqueness" or "robustness" of the results. Also, they define robustness more in terms of common sense than in terms of a mathematical formalism. For this, they do not only want a single iterative process to converge, but they want the result to be independent of any particular implementation. The authors run many computational experiments, sometimes with variations of the same code, sometimes with totally different code, in order to see if any of the results are robust against these changes.

  13. Iterative image-domain decomposition for dual-energy CT

    SciTech Connect

    Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the
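
    A stripped-down sketch of this kind of iterative image-domain decomposition (Python): a least-square data term weighted by a fixed 2x2 matrix W plus a quadratic smoothness term, minimized by gradient descent. The paper's estimated variance-covariance weighting, edge down-weighting and conjugate gradient solver are omitted; the mixing matrix A, step size and λ below are illustrative assumptions.

        import numpy as np

        def iterative_decomposition(y, A, W, lam=0.1, step=0.05, n_iter=300):
            """Iterative image-domain decomposition: weighted least-square data
            term plus quadratic smoothness, solved by gradient descent.
            y: (H, W, 2) dual-energy images; A: 2x2 mixing matrix."""
            x = y @ np.linalg.inv(A).T                  # direct inversion as start
            AtWA, AtW = A.T @ W @ A, A.T @ W
            for _ in range(n_iter):
                g = 2.0 * (x @ AtWA.T - y @ AtW.T)      # data-fidelity gradient
                lap = (-4 * x                           # discrete Laplacian
                       + np.roll(x, 1, 0) + np.roll(x, -1, 0)
                       + np.roll(x, 1, 1) + np.roll(x, -1, 1))
                x -= step * (g - 2.0 * lam * lap)       # smoothness gradient: -2*lam*lap
            return x

        # synthetic two-material phantom mixed through A, with added noise
        rng = np.random.default_rng(1)
        x_true = np.zeros((64, 64, 2))
        x_true[16:48, 16:48, 0] = 1.0
        x_true[24:40, 24:40, 1] = 1.0
        A = np.array([[0.7, 0.3], [0.4, 0.6]])
        y = x_true @ A.T + 0.05 * rng.standard_normal((64, 64, 2))
        x = iterative_decomposition(y, A, W=np.eye(2))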

  14. Adaptive independent component analysis to analyze electrocardiograms

    NASA Astrophysics Data System (ADS)

    Yim, Seong-Bin; Szu, Harold H.

    2001-03-01

    In this work, we apply an adaptive version of independent component analysis (ICA) to the nonlinear measurement of electro-cardio-graphic (ECG) signals for potential detection of abnormal conditions in the heart. In principle, unsupervised ICA neural networks can demix the components of measured ECG signals. However, the nonlinear pre-amplification and post-measurement processing make the linear ICA model no longer valid. To address this, a proposed adaptive rectification pre-processing step is used to linearize the preamplifier of the ECG, and then linear ICA is applied in an iterative manner until the outputs reach stable kurtosis. We call this new approach adaptive ICA. Each component may correspond to an individual heart function, either normal or abnormal. Adaptive ICA neural networks have the potential to make abnormal components more apparent, even when they are masked by normal components in the original measured signals. This is particularly important for diagnosis well in advance of the actual onset of heart attack, in which abnormalities in the original measured ECG signals may be difficult to detect. This is the first known work that applies adaptive ICA to ECG signals beyond noise extraction, to the detection of abnormal heart function.

  15. Fireplace adapters

    SciTech Connect

    Hunt, R.L.

    1983-12-27

    An adapter is disclosed for use with a fireplace. The stove pipe of a stove standing in a room to be heated may be connected to the flue of the chimney so that products of combustion from the stove may be safely exhausted through the flue and outwardly of the chimney. The adapter may be easily installed within the fireplace by removing the damper plate and fitting the adapter to the damper frame. Each of a pair of bolts has a portion which hooks over a portion of the damper frame and a threaded end depending from the hook portion and extending through a hole in the adapter. Nuts are threaded on the bolts and are adapted to force the adapter into a tight fit with the adapter frame.

  16. Climate Change Assessment and Adaptation Planning for the Southeast US

    NASA Astrophysics Data System (ADS)

    Georgakakos, A. P.; Yao, H.; Zhang, F.

    2012-12-01

    A climate change assessment is carried out for the Apalachicola-Chattahoochee-Flint River Basin in the southeast US following an integrated water resources assessment and planning framework. The assessment process begins with the development/selection of consistent climate, demographic, socio-economic, and land use/cover scenarios. Historical scenarios and responses are analyzed first to establish baseline conditions. Future climate scenarios are based on GCMs available through the IPCC. Statistical and/or dynamic downscaling of GCM outputs is applied to generate high resolution (12x12 km) atmospheric forcing, such as rainfall, temperature, and ET demand, over the ACF River Basin watersheds. Physically based watershed, aquifer, and estuary models (lumped and distributed) are used to quantify the hydrologic and water quality river basin response to alternative climate and land use/cover scenarios. Demand assessments are carried out for each water sector, for example, water supply for urban, agricultural, and industrial users; hydro-thermal facilities; navigation reaches; and environmental/ecological flow and lake level requirements, aiming to establish aspirational water use targets, performance metrics, and management/adaptation options. Response models for the interconnected river-reservoir-aquifer-estuary system are employed next to assess actual water use levels and other sector outputs under a specific set of hydrologic inputs, demand targets, and management/adaptation options. Adaptive optimization methods are used to generate system-wide management policies conditional on inflow forecasts. The generated information is used to inform stakeholder planning and decision processes aiming to develop consensus on adaptation measures, management strategies, and performance monitoring indicators. The assessment and planning process is driven by stakeholder input and is inherently iterative and sequential.

  17. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    NASA Astrophysics Data System (ADS)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors both in terms of quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
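
    Of the reconstructors named above, the Kaczmarz step is the simplest to sketch; a minimal Python version for a generic consistent system A x = b (the toy random matrix stands in for the tomography operator):

        import numpy as np

        def kaczmarz(A, b, n_sweeps=50, relax=1.0):
            """Kaczmarz iteration for A x = b: cyclically project the current
            iterate onto the hyperplane defined by each row."""
            x = np.zeros(A.shape[1])
            row_norms = (A * A).sum(axis=1)
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    if row_norms[i] > 0:
                        x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x

        # consistent overdetermined toy system
        rng = np.random.default_rng(0)
        A = rng.standard_normal((20, 5))
        x_true = rng.standard_normal(5)
        print(np.linalg.norm(kaczmarz(A, A @ x_true) - x_true))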

  18. An Automatic Optical and SAR Image Registration Method Using Iterative Multi-Level and Refinement Model

    NASA Astrophysics Data System (ADS)

    Xu, C.; Sui, H. G.; Li, D. R.; Sun, K. M.; Liu, J. Y.

    2016-06-01

    Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is to propose an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level strategy of coarse-to-fine registration is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and after that sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and a Voronoi diagram is introduced into Spectral Point Matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.

  19. A Predictive Analysis Approach to Adaptive Testing.

    ERIC Educational Resources Information Center

    Kirisci, Levent; Hsu, Tse-Chi

    The predictive analysis approach to adaptive testing originated in the idea of statistical predictive analysis suggested by J. Aitchison and I.R. Dunsmore (1975). The adaptive testing model proposed is based on parameter-free predictive distribution. Aitchison and Dunsmore define statistical prediction analysis as the use of data obtained from an…

  20. Cosmetic Plastic Surgery Statistics

    MedlinePlus

    2014 Cosmetic Plastic Surgery Statistics: Cosmetic Procedure Trends. 2014 Plastic Surgery Statistics Report. Please credit the American Society of Plastic Surgeons when citing statistical data or using ...

  1. Evaluation of ITER MSE Viewing Optics

    SciTech Connect

    Allen, S; Lerner, S; Morris, K; Jayakumar, J; Holcomb, C; Makowski, M; Latkowski, J; Chipman, R

    2007-03-26

    The Motional Stark Effect (MSE) diagnostic on ITER determines the local plasma current density by measuring the polarization angle of light resulting from the interaction of a high energy neutral heating beam and the tokamak plasma. This light signal has to be transmitted from the edge and core of the plasma to a polarization analyzer located in the port plug. The optical system should either preserve the polarization information, or it should be possible to reliably calibrate any changes induced by the optics. This LLNL Work for Others project for the US ITER Project Office (USIPO) is focused on the design of the viewing optics for both the edge and core MSE systems. Several design constraints were considered, including: image quality, lack of polarization aberrations, ease of construction and cost of mirrors, neutron shielding, and geometric layout in the equatorial port plugs. The edge MSE optics are located in ITER equatorial port 3 and view Heating Beam 5, and the core system is located in equatorial port 1 viewing heating beam 4. The current work is an extension of previous preliminary design work completed by the ITER central team (ITER resources were not available to complete a detailed optimization of this system, and then the MSE was assigned to the US). The optimization of the optical systems at this level was done with the ZEMAX optical ray tracing code. The final LLNL designs decreased the "blur" in the optical system by nearly an order of magnitude, and the polarization blur was reduced by a factor of 3. The mirror sizes were reduced with an estimated cost savings of a factor of 3. The throughput of the system was greater than or equal to the previous ITER design. It was found that optical ray tracing was necessary to accurately measure the throughput. Metal mirrors, while they can introduce polarization aberrations, were used close to the plasma because of the anticipated high heat, particle, and neutron loads. These mirrors formed an intermediate

  2. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    SciTech Connect

    Samei, Ehsan; Richard, Samuel

    2015-01-15

    Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) against that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of the d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small feature and the large feature tasks, respectively. Compared to FBP and ASIR, MBIR
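
    A minimal sketch of how a detectability index can be assembled from a task-based MTF and NPS (Python; prewhitening-model form, with toy Gaussian MTF, task function and flat NPS as placeholder inputs rather than the study's measurements):

        import numpy as np

        def detectability_index(mtf, nps, task, df):
            """Prewhitening-model detectability:
            d'^2 = sum |W_task|^2 * MTF^2 / NPS * df^2, on a common
            2-D spatial-frequency grid with spacing df."""
            integrand = (np.abs(task) ** 2) * mtf ** 2 / nps
            return np.sqrt(np.sum(integrand) * df * df)

        # toy inputs: 2-D frequency grid for 0.5 mm pixels
        f = np.fft.fftfreq(128, d=0.5)            # cycles/mm
        fx, fy = np.meshgrid(f, f)
        fr = np.hypot(fx, fy)
        mtf = np.exp(-(fr / 0.6) ** 2)
        task = np.exp(-(fr / 0.2) ** 2)           # low-frequency, large-feature task
        nps = np.full_like(fr, 1e-3)
        print(detectability_index(mtf, nps, task, df=f[1] - f[0]))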

  3. Adaptive SPECT

    PubMed Central

    Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.

    2008-01-01

    Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485

  4. A blind robust watermarking scheme with non-cascade iterative encrypted kinoform.

    PubMed

    Deng, Ke; Yang, Guanglin; Xie, Haiyan

    2011-05-23

    A blind robust watermarking scheme is proposed. A watermark is first transformed into a non-cascade iterative encrypted kinoform with a non-cascade phase retrieval algorithm and a random fractional Fourier transform (RFrFT). An iterative algorithm and a Human Visual System (HVS) model are both used to adaptively embed the kinoform watermark into the corresponding 2-level DWT coefficients of the cover image. The kinoform requires much less embedded data than a regular computer-generated hologram (CGH). The kinoform can be extracted only with the right phase key and the right fractional order, and reconstructed to recover the original watermark without the original cover image. Experiments have shown the scheme's high security, good imperceptibility, and robustness against attacks such as noise, compression, filtering, and cropping.

  5. Iterative functionalism and climate management regimes: From intergovernmental panel on climate change to intergovernmental negotiating committee

    SciTech Connect

    Feldman, D.L. (Tennessee Univ., Knoxville, TN. Energy, Environment and Resources Center)

    1992-01-01

    This paper contends that an iterative "functionalist" regime -- comprised of international organizations that monitor the global climate and perform scientific and policy research on prevention, mitigation, and adaptation strategies for response to possible global warming -- has developed over the past decade. A common global effort by scientists, diplomats, and others to negotiate a framework convention that would reduce emissions of carbon dioxide and other "greenhouse gases" has been brought about by this regime. Individuals that participate in this regime are engaged in several cooperative activities including: (1) international research on the causes and consequences of global change; (2) global environmental monitoring and standard-setting for analyses of climate data; and (3) negotiating a framework convention that places limits on greenhouse gas emissions by countries. The implications of this iterative approach for successful implementation of a treaty to forestall global climate change are discussed.

  6. Iterative functionalism and climate management regimes: From intergovernmental panel on climate change to intergovernmental negotiating committee

    SciTech Connect

    Feldman, D.L.

    1992-06-01

    This paper contends that an iterative "functionalist" regime -- comprised of international organizations that monitor the global climate and perform scientific and policy research on prevention, mitigation, and adaptation strategies for response to possible global warming -- has developed over the past decade. A common global effort by scientists, diplomats, and others to negotiate a framework convention that would reduce emissions of carbon dioxide and other "greenhouse gases" has been brought about by this regime. Individuals that participate in this regime are engaged in several cooperative activities including: (1) international research on the causes and consequences of global change; (2) global environmental monitoring and standard-setting for analyses of climate data; and (3) negotiating a framework convention that places limits on greenhouse gas emissions by countries. The implications of this iterative approach for successful implementation of a treaty to forestall global climate change are discussed.

  7. Iterative procedure for in-situ EUV optical testing with an incoherent source

    SciTech Connect

    Miyawaka, Ryan; Naulleau, Patrick; Zakhor, Avideh

    2009-12-01

    We propose an iterative method for in-situ optical testing under partially coherent illumination that relies on the rapid computation of aerial images. In this method a known pattern is imaged with the test optic at several planes through focus. A model is created that iterates through possible aberration maps until the through-focus series of aerial images matches the experimental result. The computation time of calculating the through-focus series is significantly reduced by a-SOCS, an adapted form of the Sum Of Coherent Systems (SOCS) decomposition. In this method, the Hopkins formulation is described by an operator S which maps the space of pupil aberrations to the space of aerial images. This operator is well approximated by a truncated sum of its spectral components.

  8. Meteorological Data Assimilation by Adaptive Bayesian Optimization.

    NASA Astrophysics Data System (ADS)

    Purser, Robert James

    1992-01-01

    The principal aim of this research is the elucidation of the Bayesian statistical principles that underlie the theory of objective meteorological analysis. In particular, emphasis is given to aspects of data assimilation that can benefit from an iterative numerical strategy. Two such aspects that are given special consideration are statistical validation of the covariance profiles and nonlinear initialization. A new economic algorithm is presented, based on the imposition of a sparse matrix structure for all covariances and precisions held during the computations. It is shown that very large datasets may be accommodated using this structure and a good linear approximation to the analysis equations established without the need to unnaturally fragment the problem. Since the integrity of the system of analysis equations is preserved, it is a relatively straight-forward matter to extend the basic analysis algorithm to one that incorporates a check on the plausibility of the statistical model assumed for background errors--the so-called "validation" problem. Two methods of validation are described within the sparse matrix framework: the first is essentially a direct extension of the Bayesian principles to embrace, not only the regular analysis variables, but also the parameters that determine the precise form of the covariance functions; the second technique is the non-Bayesian method of generalized cross validation adapted for use within the sparse matrix framework. The later part of this study is concerned with the establishment of a consistent dynamical balance within a forecast model--the initialization problem. The formal principles of the modern theory of initialization are reviewed and a critical examination is made of the concept of the "slow manifold". It is demonstrated, in accordance with more complete nonlinear models, that even within a simple three-mode linearized system, the notion that a universal slow manifold exists is untenable. It is therefore argued

  9. Adaptive Computing.

    ERIC Educational Resources Information Center

    Harrell, William

    1999-01-01

    Provides information on various adaptive technology resources available to people with disabilities. (Contains 19 references, an annotated list of 129 websites, and 12 additional print resources.) (JOW)

  10. Contour adaptation.

    PubMed

    Anstis, Stuart

    2013-01-01

    It is known that adaptation to a disk that flickers between black and white at 3-8 Hz on a gray surround renders invisible a congruent gray test disk viewed afterwards. This is contrast adaptation. We now report that adapting simply to the flickering circular outline of the disk can have the same effect. We call this "contour adaptation." This adaptation does not transfer interocularly, and apparently applies only to luminance, not color. One can adapt selectively to only some of the contours in a display, making only these contours temporarily invisible. For instance, a plaid comprises a vertical grating superimposed on a horizontal grating. If one first adapts to appropriate flickering vertical lines, the vertical component of the plaid disappears and it looks like a horizontal grating. Also, we simulated a Cornsweet (1970) edge, and we selectively adapted out the subjective and objective contours of a Kanizsa (1976) subjective square. By temporarily removing edges, contour adaptation offers a new technique to study the role of visual edges, and it demonstrates how brightness information is concentrated in edges and propagates from them as it fills in surfaces.

  11. Quality metric in matched Laplacian of Gaussian response domain for blind adaptive optics image deconvolution

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Yang, Yikang; Xu, Rong; Liu, Changhai; Li, Jisheng

    2016-04-01

    Adaptive optics (AO) in conjunction with subsequent postprocessing techniques have markedly improved the resolution of turbulence-degraded images in ground-based astronomical observations or artificial space object detection and identification. However, important tasks involved in AO image postprocessing, such as frame selection, stopping iterative deconvolution, and algorithm comparison, commonly need manual intervention and cannot be performed automatically due to a lack of widely agreed-upon image quality metrics. In this work, based on the Laplacian of Gaussian (LoG) local contrast feature detection operator, we propose a LoG domain matching operation to obtain effective and universal image quality statistics. Further, we extract two no-reference quality assessment indices in the matched LoG domain that can be used for a variety of postprocessing tasks. Three typical space object images with distinct structural features are tested to verify the consistency of the proposed metric with perceptual image quality through subjective evaluation.

  12. Intelligent control and adaptive systems; Proceedings of the Meeting, Philadelphia, PA, Nov. 7, 8, 1989

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Editor)

    1990-01-01

    Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.

  13. Stability of resistive wall modes with plasma rotation and thick wall in ITER scenario

    NASA Astrophysics Data System (ADS)

    Zheng, L. J.; Kotschenreuther, M.; Chu, M.; Chance, M.; Turnbull, A.

    2004-11-01

    The rotation effect on resistive wall modes (RWMs) is examined for realistically shaped, high-beta tokamak equilibria, including reactor-relevant cases with low Mach number M and realistic thick walls. For low M, stabilization of RWMs arises from unusually thin inertial layers. The investigation employs the newly developed adaptive eigenvalue code (AEGIS: Adaptive EiGenfunction Independent Solution), which describes both low and high n modes and is in good agreement with GATO in benchmark studies. AEGIS is unique in using adaptive methods to resolve such inertial layers with low Mach number rotation. This feature is even more desirable for transport barrier cases. Additionally, ITER and reactors have thick conducting walls (~0.5-1 m) which are not well modeled as a thin shell. Such thick walls are considered here, including semi-analytical approximations to account for the toroidally segmented nature of real walls.

  14. Research on JET in view of ITER

    NASA Astrophysics Data System (ADS)

    Pamela, Jerome; Ongena, Jef; Watkins, Michael

    2004-11-01

    Research on JET is focused on further development of the two ITER reference plasma scenarios. The ELMy H-mode has been extended to lower ρ* at high β and q_95 = 3, with simultaneously H_98 = 0.9 and f_GW = 0.9 at I_p = 3.5 MA. The dependence of confinement on β and ρ* has been found to be more favorable than given by the IPB98(y,2) scaling. Highlights in the development of Advanced Regimes with Internal Transport Barriers (ITB) and strong reversed shear (q_0 = 2-3, q_min = 1.5-2.5) are: (i) operation at a core density close to the Greenwald limit; (ii) full current drive in 3T/1.8MA ITB plasmas extended to 20 seconds with a JET record injected energy of E ≈ 330 MJ; (iii) ITB plasmas with T_e ≈ T_i ≈ 7 keV at low toroidal rotation; and (iv) wide-radius ITBs (r/a = 0.6). Furthermore, emphasis in JET is placed on (i) mitigating the impact of ELMs, (ii) understanding the phenomena leading to tritium retention and (iii) preparing burning plasma physics. Recent developments on JET in view of ITER are: (i) real-time control in both ELMy H-mode and ITB plasmas and (ii) an upgrade of JET with: (a) increased NBI power, (b) a new ELM-resilient ITER-like ICRH antenna (7 MW) to be tested in 2006, and (c) 16 new and upgraded diagnostics.

  15. Experimental studies of ITER demonstration discharges

    NASA Astrophysics Data System (ADS)

    Sips, A. C. C.; Casper, T. A.; Doyle, E. J.; Giruzzi, G.; Gribov, Y.; Hobirk, J.; Hogeweij, G. M. D.; Horton, L. D.; Hubbard, A. E.; Hutchinson, I.; Ide, S.; Isayama, A.; Imbeaux, F.; Jackson, G. L.; Kamada, Y.; Kessel, C.; Kochl, F.; Lomas, P.; Litaudon, X.; Luce, T. C.; Marmar, E.; Mattei, M.; Nunes, I.; Oyama, N.; Parail, V.; Portone, A.; Saibene, G.; Sartori, R.; Stober, J. K.; Suzuki, T.; Wolfe, S. M.; C-Mod Team; ASDEX Upgrade Team; DIII-D Team; JET EFDA Contributors

    2009-08-01

    Key parts of the ITER scenarios are determined by the capability of the proposed poloidal field (PF) coil set. They include the plasma breakdown at low loop voltage, the current rise phase, the performance during the flat top (FT) phase and a ramp down of the plasma. The ITER discharge evolution has been verified in dedicated experiments. New data are obtained from C-Mod, ASDEX Upgrade, DIII-D, JT-60U and JET. Results show that breakdown for E_axis < 0.23-0.33 V m^-1 is possible unassisted (ohmic) for large devices like JET and attainable in devices with a capability of using ECRH assist. For the current ramp up, good control of the plasma inductance is obtained using a full bore plasma shape with early X-point formation. This allows optimization of the flux usage from the PF set. Additional heating keeps l_i(3) < 0.85 during the ramp up to q_95 = 3. A rise phase with an H-mode transition is capable of achieving l_i(3) < 0.7 at the start of the FT. Operation of the H-mode reference scenario at q_95 ~ 3 and the hybrid scenario at q_95 = 4-4.5 during the FT phase is documented, providing data for the l_i(3) evolution after the H-mode transition and the l_i(3) evolution after a back-transition to L-mode. During the ITER ramp down it is important to remain diverted and to reduce the elongation. The inductance could be kept ≤ 1.2 during the first half of the current decay, using a slow I_p ramp down, but still consuming flux from the transformer. Alternatively, the discharges can be kept in H-mode during most of the ramp down, requiring significant amounts of additional heating.

  16. Corneal topography matching by iterative registration.

    PubMed

    Wang, Junjie; Elsheikh, Ahmed; Davey, Pinakin G; Wang, Weizhuo; Bao, Fangjun; Mottershead, John E

    2014-11-01

    Videokeratography is used for the measurement of corneal topography in overlapping portions (or maps) which must later be joined together to form the overall topography of the cornea. The separate portions are measured from different viewpoints and therefore must be brought together by registration of measurement points in the regions of overlap. The central map is generally the most accurate, but all maps are measured with uncertainty that increases towards the periphery. It becomes the reference (or static) map, and the peripheral (or dynamic) maps must then be transformed by rotation and translation so that the overlapping portions are matched. The process known as registration, of determining the necessary transformation, is a well-understood procedure in image analysis and has been applied in several areas of science and engineering. In this article, direct search optimisation using the Nelder-Mead algorithm and several variants of the iterative closest/corresponding point routine are explained and applied to simulated and real clinical data. The measurement points on the static and dynamic maps are generally different so that it becomes necessary to interpolate, which is done using a truncated series of Zernike polynomials. The point-to-plane iterative closest/corresponding point variant has the advantage of releasing certain optimisation constraints that lead to persistent registration and alignment errors when other approaches are used. The point-to-plane iterative closest/corresponding point routine is found to be robust to measurement noise, insensitive to starting values of the transformation parameters and produces high-quality results when using real clinical data.
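
    A compact sketch of the point-to-plane iterative closest point step described above (Python, small-angle linearization; correspondences come from a k-d tree, surface normals of the static map are assumed given, and the Zernike interpolation used in the article is omitted):

        import numpy as np
        from scipy.spatial import cKDTree

        def rotation_from_omega(w):
            """Rodrigues formula for a rotation vector w."""
            th = np.linalg.norm(w)
            if th < 1e-12:
                return np.eye(3)
            k = w / th
            K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

        def icp_point_to_plane(src, dst, dst_normals, n_iter=30):
            """Align the dynamic map src to the static map dst."""
            R, t = np.eye(3), np.zeros(3)
            tree = cKDTree(dst)
            for _ in range(n_iter):
                p = src @ R.T + t
                _, idx = tree.query(p)                  # closest-point correspondences
                q, n = dst[idx], dst_normals[idx]
                # linearized residual: (p - q).n + (p x n).w + n.t
                A = np.hstack([np.cross(p, n), n])
                b = -np.einsum('ij,ij->i', p - q, n)
                u, *_ = np.linalg.lstsq(A, b, rcond=None)
                dR = rotation_from_omega(u[:3])
                R, t = dR @ R, dR @ t + u[3:]
            return R, t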

  17. Iterative repair for scheduling and rescheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Deale, Michael

    1991-01-01

    An iterative repair search method is described called constraint-based simulated annealing. Simulated annealing is a hill-climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results are also shown of applying the technique to the NASA Space Shuttle ground processing problem. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
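
    A minimal sketch of iterative repair with simulated annealing in Python (the toy slot-assignment problem, cost function and repair neighborhood are illustrative stand-ins for the shuttle ground-processing constraints):

        import math, random

        def iterative_repair(state, cost, neighbors, t0=1.0, cooling=0.995, n_iter=5000):
            """Repeatedly apply a repair move, accepting cost increases
            with probability exp(-dc/T) as the temperature T cools."""
            cur, cur_cost, temp = state, cost(state), t0
            best, best_cost = cur, cur_cost
            for _ in range(n_iter):
                cand = random.choice(neighbors(cur))
                cand_cost = cost(cand)
                if cand_cost <= cur_cost or random.random() < math.exp((cur_cost - cand_cost) / temp):
                    cur, cur_cost = cand, cand_cost
                    if cur_cost < best_cost:
                        best, best_cost = cur, cur_cost
                temp *= cooling
            return best

        # toy task: assign 5 unit tasks to slots 0..4 with no collisions
        def cost(s):
            return sum(s.count(v) - 1 for v in set(s))
        def neighbors(s):
            return [s[:i] + [v] + s[i+1:] for i in range(len(s)) for v in range(5) if v != s[i]]
        print(iterative_repair([0, 0, 0, 0, 0], cost, neighbors))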

  18. Iterative phase retrieval strategy of transient events.

    PubMed

    Ng, Tuck Wah; Chua, Adrian Sau Ling

    2012-01-20

    Important nuances of a process or processes in action can be obtained from the phase retrieval of diffraction patterns for analysis of transient events. A significant limitation associated with the iterative approach is that predictive input functions are needed and can result in situations of nonconvergence. In dealing with a transient event recorded as a series of Fourier magnitude patterns, such a hit-and-miss characteristic, on the surface, appears computationally daunting. We report and demonstrate a strategy here that effectively minimizes this by using a prior retrieved frame as the predictive function for the current retrieval process. PMID:22270666
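
    A minimal sketch of the strategy (Python): a standard error-reduction loop imposes the measured Fourier magnitudes and an object-domain support, and each frame of the transient series is seeded with the previous frame's retrieval. The support constraint and non-negativity below are illustrative assumptions, not details from the paper.

        import numpy as np

        def error_reduction(mag, support, x0, n_iter=200):
            """Error-reduction phase retrieval: alternate Fourier-magnitude
            and object-domain constraints."""
            x = x0.copy()
            for _ in range(n_iter):
                X = np.fft.fft2(x)
                X = mag * np.exp(1j * np.angle(X))   # keep measured magnitudes
                x = np.fft.ifft2(X).real
                x *= support                          # object-domain support
                np.clip(x, 0, None, out=x)            # non-negativity
            return x

        def retrieve_sequence(mags, support):
            """Seed each frame with the previous frame's retrieval."""
            frames, x = [], support * np.random.rand(*support.shape)
            for m in mags:
                x = error_reduction(m, support, x)
                frames.append(x.copy())
            return frames

        # toy transient: three identical magnitude frames of a supported object
        support = np.zeros((64, 64)); support[16:48, 16:48] = 1.0
        truth = support * np.random.rand(64, 64)
        frames = retrieve_sequence([np.abs(np.fft.fft2(truth))] * 3, support)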

  19. Design of the ITER ICRF Antenna

    NASA Astrophysics Data System (ADS)

    Hancock, D.; Nightingale, M.; Bamber, R.; Dalton, N.; Durodie, F.; Firdaouss, M.; Lister, J.; Porton, M.; Shannon, M.; Wilson, D.; Winkler, K.; Wooldridge, E.

    2011-12-01

    The CYCLE consortium has been designing the ITER ICRF antenna since March 2010, supported by an F4E grant. Following a brief introduction to the consortium, this paper: describes the present status and layout of the design; highlights the key mechanical engineering features; shows the expected impact of cooling and radiation issues on the design and outlines the need for future R&D to support the design process. A key design requirement is the need for the mechanical design and analysis to be consistent with all requirements following from the RF physics and antenna layout optimisation. As such, this paper complements that of Durodie et al [1].

  20. Climate adaptation

    NASA Astrophysics Data System (ADS)

    Kinzig, Ann P.

    2015-03-01

    This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.

  1. Adaptive Strategies for Materials Design using Uncertainties.

    PubMed

    Balachandran, Prasanna V; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-21

    We compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young's (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don't. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.
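
    The loop described above can be sketched in a few lines (Python with scikit-learn; the Gaussian-process regressor, expected-improvement selector and the y_oracle stand-in are illustrative choices, not the paper's exact regressor/selector pairings):

        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor

        def adaptive_design(X_pool, y_oracle, n_start=10, n_iter=20, seed=0):
            """Fit a regressor with uncertainties, pick the next candidate by
            expected improvement, 'measure' it, and repeat."""
            rng = np.random.default_rng(seed)
            idx = list(rng.choice(len(X_pool), n_start, replace=False))
            for _ in range(n_iter):
                gp = GaussianProcessRegressor(normalize_y=True).fit(X_pool[idx], y_oracle[idx])
                rest = [i for i in range(len(X_pool)) if i not in idx]
                mu, sd = gp.predict(X_pool[rest], return_std=True)
                best = y_oracle[idx].max()
                z = (mu - best) / np.maximum(sd, 1e-12)
                ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
                idx.append(rest[int(np.argmax(ei))])
            return idx   # order in which candidates were "measured"

        # toy pool: maximize a quadratic over a 1-D candidate set
        X_pool = np.linspace(0, 1, 200).reshape(-1, 1)
        y_oracle = -(X_pool[:, 0] - 0.37) ** 2
        print(X_pool[adaptive_design(X_pool, y_oracle)[-1]])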

  2. Preconditioned iterative methods for inhomogeneous acoustic scattering applications

    NASA Astrophysics Data System (ADS)

    Sifuentes, Josef

    This thesis develops and analyzes efficient iterative methods for solving discretizations of the Lippmann--Schwinger integral equation for inhomogeneous acoustic scattering. Analysis and numerical illustrations of the spectral properties of the scattering problem demonstrate that a significant portion of the spectrum is approximated well on coarse grids. To exploit this, I develop a novel restarted GMRES method with adaptive deflation preconditioning based on spectral approximations on multiple grids. Much of the literature in this field is based on exact deflation, which is not feasible for most practical computations. This thesis provides an analytical framework for general approximate deflation methods and suggests a way to rigorously study a host of inexactly-applied preconditioners. Approximate deflation algorithms are implemented for scattering through thin inhomogeneities in photonic band gap problems. I also develop a short term recurrence for solving the one dimensional version of the problem that exploits the observation that the integral operator is a low rank perturbation of a self-adjoint operator. This method is based on strategies for solving Schur complement problems, and provides an alternative to a recent short term recurrence algorithm for matrices with such structure that we show to be numerically unstable for this application. The restarted GMRES method with adaptive deflation preconditioning over multiple grids, as well as the short term recurrence method for operators with low rank skew-adjoint parts, are very effective for reducing both the computational time and computer memory required to solve acoustic scattering problems. Furthermore, the methods are sufficiently general to be applicable to a wide class of problems.

  3. Enhancing multiple-point geostatistical modeling: 2. Iterative simulation and multiple distance function

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Sahimi, Muhammad

    2016-03-01

    This series addresses a fundamental issue in multiple-point statistical (MPS) simulation for generation of realizations of large-scale porous media. Past methods suffer from the fact that they generate discontinuities and patchiness in the realizations that, in turn, affect their flow and transport properties. Part I of this series addressed certain aspects of this fundamental issue, and proposed two ways of improving of one such MPS method, namely, the cross correlation-based simulation (CCSIM) method that was proposed by the authors. In the present paper, a new algorithm is proposed to further improve the quality of the realizations. The method utilizes the realizations generated by the algorithm introduced in Part I, iteratively removes any possible remaining discontinuities in them, and addresses the problem with honoring hard (quantitative) data, using an error map. The map represents the differences between the patterns in the training image (TI) and the current iteration of a realization. The resulting iterative CCSIM—the iCCSIM algorithm—utilizes a random path and the error map to identify the locations in the current realization in the iteration process that need further "repairing;" that is, those locations at which discontinuities may still exist. The computational time of the new iterative algorithm is considerably lower than one in which every cell of the simulation grid is visited in order to repair the discontinuities. Furthermore, several efficient distance functions are introduced by which one extracts effectively key information from the TIs. To increase the quality of the realizations and extracting the maximum amount of information from the TIs, the distance functions can be used simultaneously. The performance of the iCCSIM algorithm is studied using very complex 2-D and 3-D examples, including those that are process-based. Comparison is made between the quality and accuracy of the results with those generated by the original CCSIM

  4. The ITER Radial Neutron Camera Detection System

    SciTech Connect

    Marocco, D.; Belli, F.; Esposito, B.; Petrizzi, L.; Riva, M.; Bonheure, G.; Kaschuck, Y.

    2008-03-12

    A multichannel neutron detection system (Radial Neutron Camera, RNC) will be installed on the ITER equatorial port plug 1 for total neutron source strength, neutron emissivity/ion temperature profiles and n_t/n_d ratio measurements [1]. The system is composed of two fan-shaped collimating structures: an ex-vessel structure, looking at the plasma core, containing three sets of 12 collimators (each set lying on a different toroidal plane), and an in-vessel structure, containing 9 collimators, for plasma edge coverage. The RNC detecting system will work in a harsh environment (neutron flux up to 10^8-10^9 n/cm^2 s, magnetic field >0.5 T for in-vessel detectors), should provide both counting and spectrometric information and should be flexible enough to cover the high neutron flux dynamic range expected during the different ITER operation phases. ENEA has been involved in several activities related to RNC design and optimization [2,3]. In the present paper the up-to-date design and the neutron emissivity reconstruction capabilities of the RNC will be described. Different options for detectors suitable for spectrometry and counting (e.g. scintillators and diamonds), focusing on the implications in terms of overall RNC performance, will be discussed. The increase of the RNC capabilities offered by the use of new digital data acquisition systems will also be addressed.

  5. Diverse Power Iteration Embeddings and Its Applications

    SciTech Connect

    Huang H.; Yoo S.; Yu, D.; Qin, H.

    2014-12-14

    Spectral embedding is one of the most effective dimension reduction algorithms in data mining. However, its computation complexity has to be mitigated in order to apply it to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings, which are more efficient but far less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retains the efficiency of power iteration methods but also produces a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g. clustering, anomaly detection and feature selection) and evaluating their performance improvements. The experimental results show that the proposed DPIE is more effective than popular spectral approximation methods and obtains quality similar to classic spectral embeddings derived from eigen-decompositions. Moreover, it is extremely fast on big data applications. For example, in terms of clustering results, DPIE achieves as much as 95% of the quality of classic spectral clustering on complex datasets while being 4000+ times faster in a limited-memory environment.
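
    A rough sketch of the idea in Python: run power iteration on a row-normalized affinity matrix from several random starts, deflating against earlier vectors so the embeddings stay diverse (a simplification in the spirit of DPIE, not the authors' exact algorithm):

        import numpy as np

        def power_iteration_embeddings(W, n_vectors=3, n_steps=50, seed=0):
            """Produce several diverse power-iteration embedding vectors
            from a nonnegative affinity matrix W."""
            rng = np.random.default_rng(seed)
            P = W / W.sum(axis=1, keepdims=True)   # row-stochastic affinity
            vecs = []
            for _ in range(n_vectors):
                v = rng.standard_normal(W.shape[0])
                for _ in range(n_steps):
                    v = P @ v
                    for u in vecs:                 # deflate earlier embeddings
                        v -= (v @ u) * u
                    v /= np.linalg.norm(v)
                vecs.append(v)
            return np.column_stack(vecs)

        # toy affinity: two Gaussian blobs
        rng = np.random.default_rng(1)
        pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
        d2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)
        emb = power_iteration_embeddings(np.exp(-d2), n_vectors=2)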

  6. Iterative Gaussianization: from ICA to random rotations.

    PubMed

    Laparra, Valero; Camps-Valls, Gustavo; Malo, Jesús

    2011-04-01

    Most signal processing problems involve the challenging task of multidimensional probability density function (PDF) estimation. In this paper, we propose a solution to this problem by using a family of rotation-based iterative Gaussianization (RBIG) transforms. The general framework consists of the sequential application of a univariate marginal Gaussianization transform followed by an orthonormal transform. The proposed procedure looks for differentiable transforms to a known PDF so that the unknown PDF can be estimated at any point of the original domain. In particular, we aim at a zero-mean unit-covariance Gaussian for convenience. RBIG is formally similar to classical iterative projection pursuit algorithms. However, we show that, unlike in PP methods, the particular class of rotations used has no special qualitative relevance in this context, since looking for interestingness is not a critical issue for PDF estimation. The key difference is that our approach focuses on the univariate part (marginal Gaussianization) of the problem rather than on the multivariate part (rotation). This difference implies that one may select the most convenient rotation suited to each practical application. The differentiability, invertibility, and convergence of RBIG are theoretically and experimentally analyzed. Relation to other methods, such as radial Gaussianization, one-class support vector domain description, and deep neural networks is also pointed out. The practical performance of RBIG is successfully illustrated in a number of multidimensional problems such as image synthesis, classification, denoising, and multi-information estimation. PMID:21349790
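
    A minimal RBIG-style sketch in Python (rank-based marginal Gaussianization alternated with a random orthonormal rotation; the convergence checks and invertibility bookkeeping of the paper are omitted):

        import numpy as np
        from scipy.stats import norm

        def marginal_gaussianization(X):
            """Map each dimension to N(0,1) through its empirical CDF (ranks)."""
            n = X.shape[0]
            U = (np.argsort(np.argsort(X, axis=0), axis=0) + 0.5) / n
            return norm.ppf(U)

        def rbig(X, n_layers=30, seed=0):
            """Rotation-based iterative Gaussianization: alternate marginal
            Gaussianization with a random orthonormal rotation."""
            rng = np.random.default_rng(seed)
            Z = X.copy()
            for _ in range(n_layers):
                Z = marginal_gaussianization(Z)
                Q, _ = np.linalg.qr(rng.standard_normal((X.shape[1], X.shape[1])))
                Z = Z @ Q
            return Z

        # push a nonlinear 2-D sample toward a Gaussian
        rng = np.random.default_rng(2)
        X = rng.standard_normal((2000, 2))
        X[:, 1] = X[:, 0] ** 2 + 0.1 * X[:, 1]
        Z = rbig(X)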

  7. Laser cleaning of ITER's diagnostic mirrors

    NASA Astrophysics Data System (ADS)

    Skinner, C. H.; Gentile, C. A.; Doerner, R.

    2012-10-01

    Practical methods to clean ITER's diagnostic mirrors and restore reflectivity will be critical to ITER's plasma operations. We report on laser cleaning of single-crystal molybdenum mirrors coated with either carbon or beryllium films 150 - 420 nm thick. A 1.06 μm Nd laser system provided 220 ns pulses at 8 kHz with typical energy densities of 1-2 J/cm^2. The laser beam was fiber-optically coupled to a scanner suitable for tokamak applications. The efficacy of mirror cleaning was assessed with a new technique that combines microscopic imaging and reflectivity measurements [1]. The method is suitable for hazardous materials such as beryllium, as the mirrors remain sealed in a vacuum chamber. Excellent restoration of reflectivity for the carbon-coated Mo mirrors was observed after laser scanning under vacuum conditions. For the beryllium-coated mirrors, restoration of reflectivity has so far been incomplete, and modeling indicates that a shorter-duration laser pulse is needed. No damage to the molybdenum mirror substrates was observed. [1] C.H. Skinner et al., Rev. Sci. Instrum., at press.

  8. Nuclear Analysis of an ITER Blanket Module

    NASA Astrophysics Data System (ADS)

    Chiovaro, P.; Di Maio, P. A.; Parrinello, V.

    2013-08-01

    The ITER blanket system is the reactor's plasma-facing component; it is mainly devoted to providing the thermal and nuclear shielding of the vacuum vessel and external ITER components, and is also intended to act as a plasma limiter. It consists of 440 individual modules located in the inboard, upper and outboard regions of the reactor. In this paper attention has been focused on a single outboard blanket module located in the equatorial zone, whose nuclear response under irradiation has been investigated following a numerical approach based on the Monte Carlo method and adopting the MCNP5 code. The main features of this blanket module's nuclear behaviour have been determined, paying particular attention to the energy and spatial distributions of the neutron flux and of the deposited nuclear power, together with the spatial distribution of its volumetric density. Moreover, the neutronic damage to the structural material has been investigated through the evaluation of displacement-per-atom rates and helium and hydrogen production rates. Finally, an activation analysis has been performed with the FISPACT inventory code using the evaluated neutron spectrum as input, to assess the module's specific activity and contact dose rate after irradiation under a specific operating scenario.

  9. Thomson scattering diagnostic systems in ITER

    NASA Astrophysics Data System (ADS)

    Bassan, M.; Andrew, P.; Kurskiev, G.; Mukhin, E.; Hatae, T.; Vayakis, G.; Yatsuka, E.; Walsh, M.

    2016-01-01

    Thomson scattering (TS) is a proven diagnostic technique that will be implemented in ITER in three independent systems. The Edge TS will measure electron temperature Te and electron density ne profiles at high resolution in the region r/a > 0.8 (with a the minor radius). The Core TS will cover the region r/a < 0.85 and shall be able to measure electron temperatures up to 40 keV. The Divertor TS will observe a segment of the divertor plasma more than 700 mm long and is designed to detect Te as low as 0.3 eV. The Edge and Core systems are the primary contributors to Te and ne profiles. Both are installed in equatorial port 10, very close together, with a toroidal distance between the two laser beams of less than 600 mm at the first wall (~6° toroidal separation), a characteristic that should allow the two profiles to be reliably matched in the overlap region 0.8 < r/a < 0.85. The ITER environment imposes specific loads (e.g. gamma and neutron radiation, temperatures, disruption-induced stresses) and also access and reliability constraints that require new designs for many of the sub-systems. The challenges and the proposed solutions for all three TS systems are presented.

  10. Iterative reconstruction of volumetric particle distribution

    NASA Astrophysics Data System (ADS)

    Wieneke, Bernhard

    2013-02-01

    For tracking the motion of illuminated particles in space and time, several volumetric flow measurement techniques are available, such as 3D particle tracking velocimetry (3D-PTV), which records images from typically three to four viewing directions. For higher seeding densities and the same experimental setup, tomographic PIV (Tomo-PIV) reconstructs voxel intensities using an iterative tomographic reconstruction algorithm (e.g. the multiplicative algebraic reconstruction technique, MART), followed by cross-correlation of sub-volumes to compute instantaneous 3D flow fields on a regular grid. A novel hybrid algorithm is proposed here that, like MART, iteratively reconstructs 3D particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume. But, like 3D-PTV, particles are represented by 3D positions instead of the voxel-based intensity blobs used in MART. Detailed knowledge of the optical transfer function and the particle image shape is mandatory, and both may differ across positions in the volume and between cameras. Using synthetic data, it is shown that this method is capable of reconstructing densely seeded flows up to about 0.05 ppp with accuracy similar to Tomo-PIV. Finally, the method is validated with experimental data.

  11. Iterative image reconstruction in spectral CT

    NASA Astrophysics Data System (ADS)

    Hernandez, Daniel; Michel, Eric; Kim, Hye S.; Kim, Jae G.; Han, Byung H.; Cho, Min H.; Lee, Soo Y.

    2012-03-01

    The scan time of spectral CT is much longer than that of conventional CT due to the limited number of x-ray photons detectable by photon-counting detectors. However, the spectral pixel information in spectral CT carries much richer information on the physiological and pathological status of the tissues than the CT number in conventional CT, which makes spectral CT one of the promising future imaging modalities. One simple way to reduce the scan time in spectral-CT imaging is to reduce the number of views in the acquisition of projection data. But this may result in poorer SNR and strong streak artifacts, which can severely compromise image quality. In this work, spectral-CT projection data were obtained from a lab-built spectral CT consisting of a single CdTe photon-counting detector, a micro-focus x-ray tube and scan mechanics. For the image reconstruction, we used two iterative image reconstruction methods, the simultaneous iterative reconstruction technique (SIRT) and total variation minimization based on the conjugate gradient method (CG-TV), along with filtered back-projection (FBP), to compare image quality. From the imaging of iodine-containing phantoms, we observed that SIRT and CG-TV are superior to the FBP method in terms of SNR and streak artifacts.
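
    For reference, the SIRT update used in such comparisons can be written for a generic linear system A x = b, with A the projection matrix and b the measured projections. This is a minimal sketch with illustrative normalizations and a toy system matrix, not the authors' implementation.

        import numpy as np

        def sirt(A, b, n_iter=100):
            # Inverse row/column sums of A act as simple preconditioners.
            R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
            C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x += C * (A.T @ (R * (b - A @ x)))  # x <- x + C A^T R (b - A x)
            return x

        # Toy usage with a random nonnegative "projection" matrix
        rng = np.random.default_rng(0)
        A = rng.random((60, 40))
        x_true = rng.random(40)
        x_hat = sirt(A, A @ x_true)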

  12. Critical Assessment of Pressure Gauges for ITER

    SciTech Connect

    Tabares, Francisco L.; Tafalla, David; Garcia-Cortes, Isabel

    2008-03-12

    The density and flux of molecular species in ITER, largely dominated by the molecular forms of the main plasma components and the He ash, are valuable parameters of relevance not only for operational purposes but also for validating existing neutral-particle models with direct implications for divertor performance. Accurate and spatially resolved monitoring of these parameters requires the proper selection of pressure gauges able to cope with the very unique and aggressive environment to be expected in a fusion reactor. To date, there is no standard gauge fulfilling all the requirements, which encompass high neutron and gamma fluxes together with strong magnetic fields, temperature excursions and a dusty environment. The present work reviews the challenges faced in measuring neutral pressure in ITER, together with existing technologies and the developments some of them require for application to the task. Particular attention is paid to the R and D needs of existing concepts with potential use in future designs.

  13. The role of heating and current drive in ITER

    SciTech Connect

    Nevins, W.M.; Haney, S.

    1993-10-18

    This report discusses and summarizes the role of heating and non-inductive current drive in ITER as follows: (1) ITER must have heating power sufficient for ignition. (2) The heating system must be capable of current drive. (3) Steady-state operation is an ``ultimate goal.`` It is recognized that additional heating and current drive power (beyond what is initially installed on ITER) may be required. (4) The ``ultimate goal of steady-state operation`` means steady state with Q{sub CD} {ge} 5. Unlike the ``Terms of Reference`` for the ITER CDA, the ``ITER Technical Objectives and Approaches`` for the EDA sets no goal for the neutron wall load during steady-state operation. (5) In addition to bulk current drive, the ITER heating and current drive system should be used for current profile control and for burn control.

  14. A unified noise analysis for iterative image estimation

    SciTech Connect

    Qi, Jinyi

    2003-07-03

    Iterative image estimation methods have been widely used in emission tomography. An accurate estimate of the uncertainty of the reconstructed images is essential for quantitative applications. While theoretical approaches have been developed to analyze the noise propagation from iteration to iteration, the current results are limited to a few iterative algorithms that have an explicit multiplicative update equation. This paper presents a theoretical noise analysis that is applicable to a wide range of preconditioned gradient-type algorithms. One advantage is that the proposed method does not require an explicit expression for the preconditioner, and hence it is applicable to algorithms that involve line searches. By deriving a fixed-point expression from the iteration-based results, we show that the iteration-based noise analysis is consistent with the fixed-point-based analysis. Examples in emission tomography and transmission tomography are shown.

  15. Predict! Teaching Statistics Using Informational Statistical Inference

    ERIC Educational Resources Information Center

    Makar, Katie

    2013-01-01

    Statistics is one of the most widely used topics for everyday life in the school mathematics curriculum. Unfortunately, the statistics taught in schools focuses on calculations and procedures before students have a chance to see it as a useful and powerful tool. Researchers have found that a dominant view of statistics is as an assortment of tools…

  16. Hessian Schatten-norm regularization for CBCT image reconstruction using fast iterative shrinkage-thresholding algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xinxin; Wang, Jiang; Tan, Shan

    2015-03-01

    Statistical iterative reconstruction in cone-beam computed tomography (CBCT) uses prior knowledge to form different kinds of regularization terms. Total variation (TV) regularization has shown state-of-the-art performance in suppressing noise and preserving edges; however, it produces the well-known staircase effect. In this paper, a method involving second-order differential operators was employed to avoid the staircase effect. The ability to avoid it lies in the fact that higher-order derivatives do not over-sharpen regions of smooth intensity transition. A fast iterative shrinkage-thresholding algorithm (FISTA) was used for the corresponding optimization problem. The proposed Hessian Schatten norm-based regularization keeps many of the favorable properties of TV, such as translation and scale invariance, while avoiding the staircase effect that appears in TV-based reconstructions. The experiments demonstrated the advantage of the proposed algorithm over the TV method, especially in suppressing the staircase effect.
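
    The fast iterative shrinkage-thresholding algorithm (FISTA) mentioned above follows a simple accelerated proximal-gradient pattern. The sketch below shows the generic iteration for an objective f(x) + g(x), given a gradient for f and a proximal operator for g; the L1 prox in the toy example is an illustrative stand-in for the paper's Hessian Schatten-norm machinery.

        import numpy as np

        def fista(grad_f, prox_g, x0, step, n_iter=200):
            x, y, t = x0.copy(), x0.copy(), 1.0
            for _ in range(n_iter):
                x_new = prox_g(y - step * grad_f(y), step)
                t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
                x, t = x_new, t_new
            return x

        # Toy usage: sparse recovery with an L1 prox (soft-thresholding)
        rng = np.random.default_rng(0)
        A = rng.normal(size=(50, 100))
        b = A @ (3.0 * np.eye(100)[0])  # sparse ground truth
        grad = lambda x: A.T @ (A @ x - b)
        prox = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - 0.1 * s, 0.0)
        x_hat = fista(grad, prox, np.zeros(100), 1.0 / np.linalg.norm(A, 2) ** 2)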

  17. Linearly-Constrained Adaptive Signal Processing Methods

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.

    1988-01-01

    In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), . . . , xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term `mean-square difference' is a quadratic measure such as statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which will guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest and in which time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single-channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
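
    As a reference point for the gradient-approximation methods discussed above, a minimal LMS sketch is given below: the weight vector is nudged along the instantaneous error gradient at each sample. The signal model and step size are illustrative assumptions, not tied to the paper.

        import numpy as np

        def lms(X, d, mu=0.01):
            # X: (n_samples, L) observation vectors; d: desired signal.
            w = np.zeros(X.shape[1])
            for x_n, d_n in zip(X, d):
                e = d_n - w @ x_n  # instantaneous estimation error
                w += mu * e * x_n  # stochastic gradient step
            return w

        # Toy usage: identify a 3-tap filter from noisy observations
        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 3))
        d = X @ np.array([0.5, -1.0, 0.25]) + 0.01 * rng.normal(size=5000)
        w_hat = lms(X, d)  # converges near the true weights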

  18. Hydropower, Adaptive Management, and Biodiversity

    PubMed

    WIERINGA; MORTON

    1996-11-01

    Adaptive management is a policy framework within which an iterative process of decision making is followed based on the observed responses to and effectiveness of previous decisions. The use of adaptive management allows science-based research and monitoring of natural resource and ecological community responses, in conjunction with societal values and goals, to guide decisions concerning man's activities. The adaptive management process has been proposed for application to hydropower operations at Glen Canyon Dam on the Colorado River, a situation that requires complex balancing of natural resources requirements and competing human uses. This example is representative of the general increase in public interest in the operation of hydropower facilities and possible effects on downstream natural resources and of the growing conflicts between uses and users of river-based resources. This paper describes the adaptive management process, using the Glen Canyon Dam example, and discusses ways to make the process work effectively in managing downstream natural resources and biodiversity. KEY WORDS: Adaptive management; Biodiversity; Hydropower; Glen Canyon Dam; Ecology

  19. A Novel Iterative Scheme and Its Application to Differential Equations

    PubMed Central

    Khan, Yasir; Naeem, F.; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM-II). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier and repeated calculations in each iteration, respectively. Several examples are given to verify the reliability and efficiency of the method. PMID:24757427

  20. ITER Cryoplant Status and Economics of the LHe plants

    NASA Astrophysics Data System (ADS)

    Monneret, E.; Chalifour, M.; Bonneton, M.; Fauve, E.; Voigt, T.; Badgujar, S.; Chang, H.-S.; Vincent, G.

    The ITER cryoplant is composed of helium and nitrogen refrigerators and generators combined with 80 K helium loop plants and external purification systems. Storage and recovery of the helium inventory are provided in warm and cold (80 K and 4.5 K) helium tanks. The conceptual design of the ITER cryoplant has been completed, the technical requirements defined for industrial procurement, and contracts signed with industry. Each contract covers the design, manufacturing, installation and commissioning. The design is under finalization and manufacturing has started; first deliveries are scheduled by the end of 2015. The various cryoplant systems are designed to recognized codes and international standards to meet the availability, reliability and time between maintenance imposed by the long-term uninterrupted operation of the ITER tokamak. In addition, ITER has to consider the constraints of a nuclear installation. The ITER Organization (IO) is responsible for the liquid helium (LHe) plants contract, signed with industry at the end of 2012. It covers three LHe plants, working in parallel and able to provide a total average cooling capacity of 75 kW at 4.5 K. Based on the concept designs developed with industry and on the procurement phase, ITER has accumulated data to broaden the scaling laws for costing such systems. After describing the status of the cryoplant part of the cryogenic system, we present the economics of the ITER LHe plants based on the key design requirements, choices and challenges of this ITER Organization procurement.

  1. Optimal parameters for linear second-degree stationary iterative methods

    SciTech Connect

    Manteuffel, T. A.

    1980-11-01

    It is shown that the optimal parameters for linear second-degree stationary iterative methods applied to nonsymmetric linear systems can be found by solving the same minimax problem used to find optimal parameters for the Tchebychev iteration. In fact, the Tchebychev iteration is asymptotically equivalent to a linear second-degree stationary method. The method of finding optimal parameters for the Tchebychev iteration given by Manteuffel (Numer. Math., 28, 307-27 (1977)) can be used to find optimal parameters for the stationary method as well. 1 figure.
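
    A generic linear second-degree stationary iteration for A x = b has the form x_{n+1} = x_n + alpha (x_n - x_{n-1}) + beta r_n with fixed parameters alpha and beta; the minimax problem referenced above is what determines their optimal values. The sketch below uses illustrative, not optimal, parameters.

        import numpy as np

        def second_degree_stationary(A, b, alpha, beta, n_iter=200):
            x_prev = np.zeros_like(b)
            x = np.zeros_like(b)
            for _ in range(n_iter):
                r = b - A @ x  # current residual
                x, x_prev = x + alpha * (x - x_prev) + beta * r, x
            return x

        # Toy usage on a small SPD system (alpha = 0 recovers the
        # first-degree Richardson iteration)
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        x = second_degree_stationary(A, b, alpha=0.3, beta=0.2)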

  2. A synopsis of collective alpha effects and implications for ITER

    SciTech Connect

    Sigmar, D.J.

    1990-10-01

    This paper discusses the following: Alpha Interaction with Toroidal Alfven Eigenmodes; Alpha Interaction with Ballooning Modes; Alpha Interaction with Fishbone Oscillations; and Implications for ITER.

  3. Bounded-Angle Iterative Decoding of LDPC Codes

    NASA Technical Reports Server (NTRS)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).

  4. On an iterative ensemble smoother and its application to a reservoir facies estimation problem

    NASA Astrophysics Data System (ADS)

    Luo, Xiaodong; Chen, Yan; Valestrand, Randi; Stordal, Andreas; Lorentzen, Rolf; Nævdal, Geir

    2014-05-01

    For data assimilation problems there are different ways of utilizing the available observations. While certain data assimilation algorithms, for instance the ensemble Kalman filter (EnKF; see, for example, Aanonsen et al., 2009; Evensen, 2006), assimilate the observations sequentially in time, other algorithms may instead collect the observations at different time instants and assimilate them simultaneously. In general such algorithms can be classified as smoothers. In this respect, the ensemble smoother (ES; see, for example, Evensen and van Leeuwen, 2000) can be considered a smoother counterpart of the EnKF. The EnKF has been widely used for reservoir data assimilation (history matching) problems since its introduction to the community of petroleum engineering (Nævdal et al., 2002). The applications of the ES to reservoir data assimilation problems have also been investigated recently (see, for example, Skjervheim and Evensen, 2011). Compared to the EnKF, the ES has certain technical advantages, including avoiding the restarts associated with each update step in the EnKF and having fewer variables to update, which may result in a significant reduction in simulation time while providing assimilation results similar to those obtained by the EnKF (Skjervheim and Evensen, 2011). To further improve the performance of the ES, some iterative ensemble smoothers have been suggested in the literature, in which the iterations are carried out in the form of certain iterative optimization algorithms, e.g., the Gauss-Newton (Chen and Oliver, 2012) or the Levenberg-Marquardt method (Chen and Oliver, 2013; Emerick and Reynolds, 2012), or in the context of adaptive Gaussian mixtures (AGM; see Stordal and Lorentzen, 2013). In Emerick and Reynolds (2012) the iteration formula is derived based on the idea that, for linear observations, the final results of the iterative ES should equal the estimate of the EnKF. In Chen and Oliver (2013), the
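
    For orientation, a single (non-iterative) ES analysis step can be sketched as below: ensemble cross-covariances play the role of the Kalman gain, and all observations are assimilated in one simultaneous update. The shapes, the perturbed-observation scheme and the toy forward model are illustrative assumptions, not any of the cited implementations.

        import numpy as np

        def es_update(M, D, d_obs, R, rng):
            # M: (n_params, n_ens) prior ensemble; D: (n_obs, n_ens)
            # predicted data; d_obs: observations; R: obs error covariance.
            n_ens = M.shape[1]
            Mp = M - M.mean(axis=1, keepdims=True)  # parameter anomalies
            Dp = D - D.mean(axis=1, keepdims=True)  # predicted-data anomalies
            C_md = Mp @ Dp.T / (n_ens - 1)          # cross-covariance
            C_dd = Dp @ Dp.T / (n_ens - 1)          # data covariance
            d_pert = d_obs[:, None] + rng.multivariate_normal(
                np.zeros(len(d_obs)), R, size=n_ens).T  # perturbed observations
            return M + C_md @ np.linalg.solve(C_dd + R, d_pert - D)

        # Toy usage: linear forward model G, one simultaneous update
        rng = np.random.default_rng(0)
        G = rng.normal(size=(5, 3))    # observation operator
        M = rng.normal(size=(3, 100))  # prior ensemble
        R = 0.01 * np.eye(5)
        d_obs = G @ np.ones(3) + rng.multivariate_normal(np.zeros(5), R)
        M_post = es_update(M, G @ M, d_obs, R, rng)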

  5. Iteratively reweighted unidirectional variational model for stripe non-uniformity correction

    NASA Astrophysics Data System (ADS)

    Huang, Yongzhong; He, Cong; Fang, Houzhang; Wang, Xiaoping

    2016-03-01

    In this paper, we propose an adaptive unidirectional variational nonuniformity correction algorithm for fixed-pattern noise removal. The proposed algorithm is based on a unidirectional variational sparse model that exploits the unidirectional characteristics of stripe nonuniformity noise. The iteratively reweighted least squares (IRLS) technique is introduced to optimize the proposed correction model, which makes the algorithm easy to implement with an existing conjugate gradient method without introducing additional variables and parameters. Moreover, we derive a formula to automatically update the regularization parameter from the images. Comparative experimental results on real infrared images indicate that the proposed method can remove stripe nonuniformity noise effectively while preserving more useful image details.
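
    The IRLS idea named above can be illustrated on a generic L1-type fitting problem: each pass solves a weighted least-squares problem whose weights are the inverse residual magnitudes from the previous pass. This is a minimal sketch under that generic setting, not the paper's unidirectional variational model.

        import numpy as np

        def irls_l1(A, b, n_iter=30, eps=1e-6):
            # Start from the ordinary least-squares solution.
            x = np.linalg.lstsq(A, b, rcond=None)[0]
            for _ in range(n_iter):
                w = 1.0 / np.maximum(np.abs(b - A @ x), eps)  # reweighting
                Aw = A * w[:, None]
                x = np.linalg.solve(A.T @ Aw, Aw.T @ b)  # weighted normal eqs
            return x

        # Toy usage: robust fit under heavy-tailed noise
        rng = np.random.default_rng(0)
        A = rng.normal(size=(100, 5))
        b = A @ np.ones(5) + 0.1 * rng.standard_t(df=1, size=100)
        x_hat = irls_l1(A, b)  # close to the all-ones ground truth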

  6. Adaptive management of natural resources-framework and issues

    USGS Publications Warehouse

    Williams, B.K.

    2011-01-01

    Adaptive management, an approach for simultaneously managing and learning about natural resources, has been around for several decades. Interest in adaptive decision making has grown steadily over that time, and by now many in natural resources conservation claim that adaptive management is the approach they use in meeting their resource management responsibilities. Yet there remains considerable ambiguity about what adaptive management actually is and how it is to be implemented by practitioners. The objective of this paper is to present a framework and conditions for adaptive decision making, and to discuss some important challenges in its application. Adaptive management is described as a two-phase process comprising deliberative and iterative phases, which are implemented sequentially over the timeframe of an application. Key elements, processes, and issues in adaptive decision making are highlighted in terms of this framework. Special emphasis is given to the question of geographic scale, the difficulties presented by non-stationarity, and organizational challenges in implementing adaptive management.

  7. Using Action Research to Develop a Course in Statistical Inference for Workplace-Based Adults

    ERIC Educational Resources Information Center

    Forbes, Sharleen

    2014-01-01

    Many adults who need an understanding of statistical concepts have limited mathematical skills. They need a teaching approach that includes as little mathematical context as possible. Iterative participatory qualitative research (action research) was used to develop a statistical literacy course for adult learners informed by teaching in…

  8. Track Filtering via Iterative Correction of TDI Topology

    PubMed Central

    Aydogan, Dogu Baran; Shi, Yonggang

    2015-01-01

    We propose a new technique to clean outlier tracks from fiber bundles reconstructed by tractography. Previous techniques were mainly based on computing pair-wise distances and clustering methods to identify unwanted tracks, which relied heavily upon user input for parameter tuning. In this work, we propose the use of topological information in track density images (TDI) to achieve a more robust filtering of tracks. Our iterative algorithm has two main steps. Given a fiber bundle, we first convert it to a TDI, then extract and score its critical points. After that, tracks that contribute to high-scoring loops are identified and removed using the Reeb graph of the level set surface of the TDI. Our approach is geometrically intuitive and relies on only a single parameter that enables the user to decide on the length of insignificant loops. In our experiments, we use our method to reconstruct the optic radiation in the human brain using multi-shell HARDI data from the Human Connectome Project (HCP). We compare our results against spectral filtering and show that our approach can achieve cleaner reconstructions. We also apply our method to 215 HCP subjects to test for asymmetry of the optic radiation and obtain statistically significant results that are consistent with post-mortem studies. PMID:26798847

  9. Toothbrush Adaptations.

    ERIC Educational Resources Information Center

    Exceptional Parent, 1987

    1987-01-01

    Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)

  10. Some results concerning linear iterative (systolic) arrays

    SciTech Connect

    Ibarra, O.H.; Palis, M.A.; Kim, S.M.

    1985-05-01

    The authors have shown some new interesting results concerning the properties, power, and limitations of various types of linear iterative (systolic) arrays. The method they employed consisted of finding sequential machine characterizations of these array models, and then using the characterizations to prove the results. Because of the absence of any concurrency and synchronization problems, the authors obtained simple proofs to results which when proved directly on the arrays would seem very difficult. The characterizations, therefore, provide a novel and promising method which can be used to analyze other systolic systems. In the future they hope to extend this methodology to the study of two-dimensional and multidimensional systolic arrays, and other systolic systems with different interconnection networks.

  11. Structural analysis of ITER magnet feeders

    SciTech Connect

    Ilyin, Yuri; Gung, Chen-Yu; Bauer, Pierre; Chen, Yonghua; Jong, Cornelis; Devred, Arnaud; Mitchell, Neil; Lorriere, Philippe; Farek, Jaromir; Nannini, Matthieu

    2012-06-15

    This paper summarizes the results of the static structural analyses conducted in support of the ITER magnet feeder design with the aim of validating certain components against the structural design criteria. While almost every feeder has unique features, they all share many common constructional elements and the same functional specifications. The analysis approach used to assess the load conditions and stresses that have driven the design is equivalent for all feeders, except for particularities that needed to be modeled in each case. The mechanical analysis of the feeders follows a sub-modeling approach: the results of the global mechanical model of a feeder assembly are used as input for the detailed models of the feeder's sub-assemblies or single components. Examples of this approach, including the load conditions, stress assessment criteria and solutions for the most critical components, are discussed. It is concluded that the feeder system is safe in the reference operation scenarios. (authors)

  12. Iterative methods for Toeplitz-like matrices

    SciTech Connect

    Huckle, T.

    1994-12-31

    In this paper the author gives a survey of iterative methods for solving linear equations with Toeplitz matrices, block Toeplitz matrices, Toeplitz-plus-Hankel matrices, and matrices with low displacement rank. He treats the following subjects: (1) optimal (ω)-circulant preconditioners as a generalization of circulant preconditioners; (2) optimal implementation of circulant-like preconditioners in the complex and real case; (3) preconditioning of near-singular matrices, and what kind of preconditioners can be used in this case; (4) circulant preconditioning for more general classes of Toeplitz matrices, and what can be said about matrices with coefficients that are not l{sub 1}-sequences; (5) preconditioners for Toeplitz least-squares problems, for block Toeplitz matrices, and for Toeplitz-plus-Hankel matrices.
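
    As a concrete instance of the preconditioner class surveyed above, the sketch below builds T. Chan's circulant approximation to a Toeplitz matrix and applies its inverse in O(n log n) via the FFT (circulant matrices are diagonalized by the Fourier basis). The particular circulant choice and the example matrices are illustrative, not taken from the survey.

        import numpy as np

        def chan_circulant(t_col, t_row):
            # First column of T. Chan's circulant approximation to the
            # Toeplitz matrix with first column t_col and first row t_row.
            n = len(t_col)
            c = np.empty(n)
            c[0] = t_col[0]
            for k in range(1, n):
                c[k] = ((n - k) * t_col[k] + k * t_row[n - k]) / n
            return c

        def circulant_solve(c, v):
            # Applying the preconditioner inverse costs O(n log n):
            # eigenvalues of the circulant are the FFT of its first column.
            return np.real(np.fft.ifft(np.fft.fft(v) / np.fft.fft(c)))

        # Toy usage: apply the preconditioner inverse to a vector
        t_col = np.array([4.0, 1.0, 0.5, 0.25])  # Toeplitz first column
        t_row = np.array([4.0, 2.0, 1.0, 0.5])   # Toeplitz first row
        z = circulant_solve(chan_circulant(t_col, t_row), np.ones(4))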

  13. Exact iterative reconstruction for the interior problem

    PubMed Central

    Zeng, Gengsheng L; Gullberg, Grant T

    2010-01-01

    There is a trend in single photon emission computed tomography (SPECT) toward small, dedicated imaging systems. For example, many companies are developing small dedicated cardiac SPECT systems with different designs. These dedicated systems have a smaller field of view (FOV) than a full-size clinical system, so data truncation has become the norm rather than the exception in these systems. It is therefore important to develop region-of-interest (ROI) reconstruction algorithms that use truncated data, and this paper is a stepping stone in that direction. This paper shows that common generic iterative image reconstruction algorithms are able to reconstruct the ROI exactly under the conditions that the convex ROI is fully sampled and the image value in a sub-region within the ROI is known. If the ROI includes a sub-region that is outside the patient body, then these conditions can easily be satisfied. PMID:19741279

  14. ITER CENTRAL SOLENOID COIL INSULATION QUALIFICATION

    SciTech Connect

    Martovetsky, N N; Mann, T L; Miller, J R; Freudenberg, K D; Reed, R P; Walsh, R P; McColskey, J D; Evans, D

    2009-06-11

    An insulation system for the ITER Central Solenoid must have sufficiently high electrical and structural strength. Design efforts to bring stresses in the turn and layer insulation within allowables failed: it turned out to be impossible to eliminate high local tensile stresses in the winding pack. When high local stresses cannot be designed out, the qualification procedure requires verification of acceptable structural and electrical strength by testing. We built two 4 x 4 arrays of the conductor jacket with two options for the CS insulation and subjected the arrays to 1.2 million compressive cycles at 60 MPa and at 76 K. Such conditions simulated the stresses in the CS insulation. We performed voltage withstand tests and, after the end of cycling, measured the breakdown voltages in the arrays. After that we dissected the arrays and studied micro-cracks in the insulation. We report details of the specimen preparation, test procedures and test results.

  15. Robust tooth surface reconstruction by iterative deformation.

    PubMed

    Jiang, Xiaotong; Dai, Ning; Cheng, Xiaosheng; Wang, Jun; Peng, Qingjin; Liu, Hao; Cheng, Cheng

    2016-01-01

    Digital design technologies have been applied extensively in dental medicine, especially in the field of dental restoration. The all-ceramic crown is an important restoration type in dental CAD systems. This paper presents a robust tooth surface reconstruction algorithm for all-ceramic crown design. The algorithm involves three necessary steps: standard tooth initial positioning and division; salient feature point extraction using Morse theory; and standard tooth deformation using iterative Laplacian Surface Editing and mesh stitching. The algorithm retains the morphological features of the tooth surface well. It is robust and suitable for almost all types of teeth, including incisors, canines, premolars, and molars. Moreover, it allows dental technicians to use their own preferred library teeth for reconstruction. The algorithm has been successfully integrated into our dental CAD system, and more than 1000 clinical cases have been tested to demonstrate its robustness and effectiveness.

  16. Iterative restoration of SPECT projection images

    SciTech Connect

    Glick, S.J.; Xia, W.

    1997-04-01

    Photon attenuation and the limited, nonstationary spatial resolution of the detector can reduce both qualitative and quantitative image quality in single photon emission computed tomography (SPECT). In this paper, a reconstruction approach is described which can compensate for both of these degradations. The approach involves processing the projection data with Bellini's method for attenuation compensation, followed by an iterative deconvolution technique which uses the frequency-distance principle (FDP) to model the distance-dependent camera blur. Modeling the camera blur with the FDP allows an efficient implementation using fast Fourier transform (FFT) methods. After processing of the projection data, reconstruction is performed using filtered backprojection. Simulation studies using two different brain phantoms show that this approach gives reconstructions with a favorable bias-versus-noise tradeoff, produces no visually undesirable noise artifacts, and requires a low computational load.

  17. Iterated Gate Teleportation and Blind Quantum Computation.

    PubMed

    Pérez-Delgado, Carlos A; Fitzsimons, Joseph F

    2015-06-01

    Blind quantum computation allows a user to delegate a computation to an untrusted server while keeping the computation hidden. A number of recent works have sought to establish bounds on the communication requirements necessary to implement blind computation, and a bound based on the no-programming theorem of Nielsen and Chuang has emerged as a natural limiting factor. Here we show that this constraint only holds in limited scenarios, and show how to overcome it using a novel method of iterated gate teleportations. This technique enables drastic reductions in the communication required for distributed quantum protocols, extending beyond the blind computation setting. Applied to blind quantum computation, this technique offers significant efficiency improvements, and in some scenarios offers an exponential reduction in communication requirements. PMID:26196609

  18. Iterative Precise Conductivity Measurement with IDEs.

    PubMed

    Hubálek, Jaromír

    2015-05-22

    The paper presents a new approach to precise electrolytic conductivity measurement with planar thin- and thick-film electrodes. The novel measuring method was developed for comb-like electrodes called interdigitated electrodes (IDEs). Correction characteristics over a wide range of specific conductivities were determined from an interface impedance characterization of the thick-film IDEs. The local maximum of the capacitive part of the interface impedance is used for corrections to obtain linear responses. The measuring frequency was determined over a wide range of measured conductivities. An iterative measurement mode is suggested to measure the conductivity precisely at the right frequency in order to achieve a highly accurate response. The method yields precise conductivity measurements in concentration ranges from 10(-6) to 1 M without electrode cell replacement.

  19. ITER L-Mode Confinement Database

    SciTech Connect

    S.M. Kaye and the ITER Confinement Database Working Group

    1997-10-01

    This paper describes the content of an L-mode database that has been compiled with data from Alcator C-Mod, ASDEX, DIII, DIII-D, FTU, JET, JFT-2M, JT-60, PBX-M, PDX, T-10, TEXTOR, TFTR, and Tore-Supra. The database consists of a total of 2938 entries, 1881 of which are in the L-phase while 922 are ohmically heated (OH) only. Each entry contains up to 95 descriptive parameters, including global and kinetic information, machine conditioning, and configuration. The paper presents a description of the database and the variables contained therein, and it also presents global and thermal scalings along with predictions for ITER. The L-mode thermal confinement time scaling was determined from a subset of 1312 entries for which the thermal confinement time was provided.

  20. ITER L-mode confinement database

    SciTech Connect

    Kaye, S.M.

    1997-10-06

    This paper describes the content of an L-mode database that has been compiled with data from Alcator C-Mod, ASDEX, DIII, DIII-D, FTU, JET, JFT-2M, JT-60, PBX-M, PDX, T-10, TEXTOR, TFTR, and Tore-Supra. The database consists of a total of 2938 entries, 1881 of which are in the L-phase while 922 are ohmically heated only (OH). Each entry contains up to 95 descriptive parameters, including global and kinetic information, machine conditioning, and configuration. The paper presents a description of the database and the variables contained therein, and it also presents global and thermal scalings along with predictions for ITER.

  1. Orbit of an image under iterated system

    NASA Astrophysics Data System (ADS)

    Singh, S. L.; Mishra, S. N.; Jain, Sarika

    2011-03-01

    An orbital picture depicts the path of an object under a semi-group of transformations. The concept, initially given by Barnsley [3], is of utmost importance in image compression, biological modeling and other areas of fractal geometry. In this paper, we introduce superior iterations to study the role of linear and nonlinear transformations on the orbit of an object. Various characteristics of the computed figures are discussed to indicate the usefulness of the study in mathematical analysis. Modified algorithms are given to compute the orbital picture and the V-variable orbital picture. An algorithm to calculate the distance between images adds further interest to the study. A brief discussion of the proof that the sequence of images is Cauchy is also given.
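
    Superior iterations are commonly formulated as a Mann-type scheme in which each step averages the current point with its image under the transformation. The sketch below computes such an orbit for an affine contraction of the plane; the map T, the parameter beta and the starting point are illustrative assumptions for demonstration, not the authors' examples.

        import numpy as np

        def superior_orbit(T, x0, beta=0.5, n_iter=50):
            # Orbit of the Mann-type iteration
            # x_{n+1} = (1 - beta) * x_n + beta * T(x_n).
            orbit = [x0]
            x = x0
            for _ in range(n_iter):
                x = (1.0 - beta) * x + beta * T(x)
                orbit.append(x)
            return np.array(orbit)

        # Toy usage: an affine contraction of the plane
        A = np.array([[0.5, -0.3], [0.3, 0.5]])
        T = lambda x: A @ x + np.array([1.0, 0.0])
        orbit = superior_orbit(T, np.array([2.0, 2.0]))  # spirals to the fixed point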

  2. Iterated Gate Teleportation and Blind Quantum Computation.

    PubMed

    Pérez-Delgado, Carlos A; Fitzsimons, Joseph F

    2015-06-01

    Blind quantum computation allows a user to delegate a computation to an untrusted server while keeping the computation hidden. A number of recent works have sought to establish bounds on the communication requirements necessary to implement blind computation, and a bound based on the no-programming theorem of Nielsen and Chuang has emerged as a natural limiting factor. Here we show that this constraint only holds in limited scenarios, and show how to overcome it using a novel method of iterated gate teleportations. This technique enables drastic reductions in the communication required for distributed quantum protocols, extending beyond the blind computation setting. Applied to blind quantum computation, this technique offers significant efficiency improvements, and in some scenarios offers an exponential reduction in communication requirements.

  3. Iterated upwind schemes for gas dynamics

    SciTech Connect

    Smolarkiewicz, Piotr K.; Szmelter, Joanna

    2009-01-10

    A class of high-resolution schemes established in integration of anelastic equations is extended to fully compressible flows, and documented for unsteady (and steady) problems through a span of Mach numbers from zero to supersonic. The schemes stem from iterated upwind technology of the multidimensional positive definite advection transport algorithm (MPDATA). The derived algorithms employ standard and modified forms of the equations of gas dynamics for conservation of mass, momentum and either total or internal energy as well as potential temperature. Numerical examples from elementary wave propagation, through computational aerodynamics benchmarks, to atmospheric small- and large-amplitude acoustics with intricate wave-flow interactions verify the approach for both structured and unstructured meshes, and demonstrate its flexibility and robustness.

  4. Holographic imaging through a scattering medium by diffuser-assisted statistical averaging

    NASA Astrophysics Data System (ADS)

    Purcell, Michael J.; Kumar, Manish; Rand, Stephen C.

    2016-03-01

    The ability to image through a scattering or diffusive medium such as tissue or hazy atmosphere is a goal which has garnered extensive attention from the scientific community. Existing imaging methods in this field make use of phase conjugation, time of flight, iterative wave-front shaping or statistical averaging approaches, which tend to be either time consuming or complicated to implement. We introduce a novel and practical way of statistical averaging which makes use of a rotating ground glass diffuser to nullify the adverse effects caused by speckle introduced by a first static diffuser / aberrator. This is a Fourier transform-based, holographic approach which demonstrates the ability to recover detailed images and shows promise for further remarkable improvement. The present experiments were performed with 2D flat images, but this method could be easily adapted for recovery of 3D extended object information. The simplicity of the approach makes it fast, reliable, and potentially scalable as a portable technology. Since imaging through a diffuser has direct applications in biomedicine and defense technologies this method may augment advanced imaging capabilities in many fields.

  5. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    SciTech Connect

    Zheng Guoyan

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: Iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark

  6. Error bounds from extra precise iterative refinement

    SciTech Connect

    Demmel, James; Hida, Yozo; Kahan, William; Li, Xiaoye S.; Mukherjee, Soni; Riedy, E. Jason

    2005-02-07

    We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to access the higher-precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1 / (max{10, √n} ε{sub w}), the computed normwise (resp. componentwise) error bound is at most 2 max{10, √n} · ε{sub w}, and indeed bounds the true error. Here, n is the matrix dimension and ε{sub w} is the single-precision roundoff error. For worse-conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
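
    The structure of iterative refinement with an extra-precision residual can be sketched as follows: the solves run in working (here single) precision while the residual is accumulated in higher (here double) precision. This toy mirrors only the structure of the algorithm; the paper's LAPACK setting uses extended precision for the residual and would reuse a single LU factorization rather than re-solving from scratch.

        import numpy as np

        def refine(A, b, n_iter=5):
            # Working-precision (float32) solve...
            A32 = A.astype(np.float32)
            x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
            for _ in range(n_iter):
                r = b - A @ x  # ...but the residual in double precision
                d = np.linalg.solve(A32, r.astype(np.float32))
                x += d.astype(np.float64)  # apply the correction
            return x

        # Toy usage: accuracy improves over the plain float32 solve
        rng = np.random.default_rng(0)
        A = rng.normal(size=(200, 200))
        x_true = rng.normal(size=200)
        x_hat = refine(A, A @ x_true)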

  7. An iterative subaperture position correction algorithm

    NASA Astrophysics Data System (ADS)

    Lo, Weng-Hou; Lin, Po-Chih; Chen, Yi-Chun

    2015-08-01

    Subaperture stitching interferometry is a technique suitable for testing high-numerical-aperture optics, large-diameter spherical lenses and aspheric optics. In the stitching process, each subaperture has to be placed at its correct position in a global coordinate system, and the positioning precision affects the accuracy of the stitching result. However, mechanical limitations in the alignment process, as well as vibrations during the measurement, induce inevitable subaperture position uncertainties. In our previous study, a rotational scanning subaperture stitching interferometer was constructed. This paper provides an iterative algorithm to correct the subaperture positions without altering the interferometer configuration. Each subaperture is first placed at its geometric position, estimated according to the F-number of the reference lens, the measurement zenithal angle and the number of pixels along the width of the subaperture. Using the concept of differentiation, a shift compensator along the radial direction of the global coordinate system is added to the stitching algorithm. The algorithm includes two kinds of compensators: one for the geometric null, with four compensators of piston, two directional tilts and defocus, and the other for the position correction, with the shift compensator. These compensators are computed iteratively to minimize the phase differences in the overlapped regions of subapertures in a least-squares sense. Simulation results demonstrate that the proposed method achieves a position accuracy of 0.001 pixels for both single-ring and multiple-ring configurations. Experimental verifications with single-ring and multiple-ring data also show the effectiveness of the algorithm.

  8. Statistical Symbolic Execution with Informed Sampling

    NASA Technical Reports Server (NTRS)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

    Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose informed sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.

  9. The Role of Bridging Organizations in Enhancing Ecosystem Services and Facilitating Adaptive Management of Social-Ecological Systems

    EPA Science Inventory

    Adaptive management is an approach for monitoring the response of ecological systems to different policies and practices and attempts to reduce the inherent uncertainty in ecological systems via system monitoring and iterative decision making and experimentation (Holling 1978). M...

  10. Iterative build OMIT maps: Map improvement by iterative model-building and refinement without model bias

    SciTech Connect

    Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf Wilhelm; Afonine, P.V.; Moriarty, N.W.; Zwart, P.H.; Hung, L.-W.; Read, R.J.; Adams, P.D.

    2008-02-12

    A procedure for carrying out iterative model-building, density modification and refinement is presented in which the density in an OMIT region is essentially unbiased by an atomic model. Density from a set of overlapping OMIT regions can be combined to create a composite 'Iterative-Build' OMIT map that is everywhere unbiased by an atomic model but also everywhere benefiting from the model-based information present elsewhere in the unit cell. The procedure may have applications in the validation of specific features in atomic models as well as in overall model validation. The procedure is demonstrated with a molecular replacement structure and with an experimentally-phased structure, and a variation on the method is demonstrated by removing model bias from a structure from the Protein Data Bank.

  11. A Model and Simple Iterative Algorithm for Redundancy Analysis.

    ERIC Educational Resources Information Center

    Fornell, Claes; And Others

    1988-01-01

    This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)

  12. Lessons Drawn from ITER and Other Fusion International Collaborations

    NASA Astrophysics Data System (ADS)

    Dean, Stephen O.

    1998-06-01

    The international character of fusion research and development is described, with special emphasis on the ITER (International Thermonuclear Experimental Reactor) joint venture. The history of the ITER collaboration is traced. Lessons drawn that may prove useful for future ventures are presented.

  13. An Iterative Method for Solving Variable Coefficient ODEs

    ERIC Educational Resources Information Center

    Deeba, Elias; Yoon, Jeong-Mi; Zafiris, Vasilis

    2003-01-01

    In this classroom note, the authors present a method to solve variable-coefficient ordinary differential equations of the form p(x)y''(x) + q(x)y'(x) + r(x)y(x) = 0. They propose an iterative method as an alternative way to solve the above equation. This iterative method is accessible to an undergraduate student studying…

  14. Not so Complex: Iteration in the Complex Plane

    ERIC Educational Resources Information Center

    O'Dell, Robin S.

    2014-01-01

    The simple process of iteration can produce complex and beautiful figures. In this article, Robin O'Dell presents a set of tasks requiring students to use the geometric interpretation of complex number multiplication to construct linear iteration rules. When the outputs are plotted in the complex plane, the graphs trace pleasing designs…
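
    A linear iteration rule of the kind described, z_{n+1} = c·z_n, traces a spiral in the complex plane whose contraction and turning angle are set by |c| and arg(c). The constant and starting point below are illustrative.

        import numpy as np

        c = 0.95 * np.exp(1j * np.pi / 12)  # |c| < 1: inward spiral, 15° turns
        z = 1.0 + 0.0j
        orbit = []
        for _ in range(100):
            orbit.append(z)
            z = c * z  # one linear iteration step
        points = np.array(orbit)  # plot points.real vs points.imag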

  15. Wall conditioning for ITER: Current experimental and modeling activities

    NASA Astrophysics Data System (ADS)

    Douai, D.; Kogut, D.; Wauters, T.; Brezinsek, S.; Hagelaar, G. J. M.; Hong, S. H.; Lomas, P. J.; Lyssoivan, A.; Nunes, I.; Pitts, R. A.; Rohde, V.; de Vries, P. C.

    2015-08-01

    Wall conditioning will be required in ITER to control fuel and impurity recycling, as well as tritium (T) inventory. An analysis of the conditioning cycle on JET, with its ITER-Like Wall, is presented, evidencing a reduced need for wall cleaning in ITER compared to JET-CFC. Using a novel 2D multi-fluid model, the current density during Glow Discharge Conditioning (GDC) on the in-vessel plasma-facing components (PFC) of ITER is predicted to approach the simple expectation of total anode current divided by wall surface area. Baking of the divertor to 350 °C should desorb the majority of the co-deposited T. ITER foresees the use of low-temperature plasma-based techniques compatible with the permanent toroidal magnetic field, such as Ion (ICWC) or Electron Cyclotron Wall Conditioning (ECWC), for tritium removal between ITER plasma pulses. Extrapolation of JET ICWC results to ITER indicates removal comparable to the estimated T retention in nominal ITER D:T shots, whereas GDC may be unattractive for that purpose.

  16. Magnet design technical report---ITER definition phase

    SciTech Connect

    Henning, C.

    1989-04-28

    This report contains papers on the following topics: conceptual design; radiation damage of ITER magnet systems; insulation system of the magnets; critical current density and strain sensitivity; toroidal field coil structural analysis; stress analysis for the ITER central solenoid; and volt-second capabilities and PF magnet configurations.

  17. Validation of 1-D transport and sawtooth models for ITER

    SciTech Connect

    Connor, J.W.; Turner, M.F.; Attenberger, S.E.; Houlberg, W.A.

    1996-12-31

    In this paper the authors describe progress on validating a number of local transport models by comparing their predictions with relevant experimental data from a range of tokamaks in the ITER profile database. This database, the testing procedure and results are discussed. In addition a model for sawtooth oscillations is used to investigate their effect in an ITER plasma with alpha-particles.

  18. The Effect of Iteration on the Design Performance of Primary School Children

    ERIC Educational Resources Information Center

    Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.

    2015-01-01

    Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…

  19. Parallel adaptive mesh refinement for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1996-12-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.
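
    The abstract's key numerical ingredient, conjugate gradients with a preconditioner, has a compact general form. The sketch below is not the authors' multigrid-preconditioned solver; it uses a simple Jacobi (diagonal) preconditioner on a 1D Poisson matrix, only to show where the preconditioner enters the iteration.

      # Preconditioned conjugate gradient (PCG) skeleton; M_inv applies the
      # preconditioner. Here Jacobi stands in for the paper's multigrid.
      import numpy as np

      def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv(r)                     # preconditioner application z = M^{-1} r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv(r)
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # 1D Poisson matrix as a toy stand-in for a discretized operator
      n = 100
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      b = np.ones(n)
      x = pcg(A, b, lambda r: r / np.diag(A))
      print(np.linalg.norm(A @ x - b))     # residual should be near the tolerance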

  20. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described
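
    Of the techniques listed, parallel tempering is the most compact to sketch. The toy below samples a bimodal Boltzmann pdf whose modes a single cold walker would almost never cross; the energy function, temperature ladder and step sizes are illustrative assumptions.

      # Parallel tempering on U(x) = 4*(x^2 - 1)^2: replicas at several
      # temperatures take Metropolis steps and occasionally swap, letting the
      # cold replica sample both well-separated modes.
      import numpy as np

      rng = np.random.default_rng(0)
      U = lambda x: 4.0 * (x**2 - 1.0)**2
      betas = np.array([8.0, 4.0, 2.0, 1.0])    # inverse temperatures, cold -> hot
      x = np.zeros(len(betas))                   # one walker per temperature
      samples = []

      for step in range(20000):
          # local Metropolis move for each replica at its own temperature
          prop = x + rng.normal(0.0, 0.5, size=x.shape)
          accept = rng.random(len(betas)) < np.exp(-betas * (U(prop) - U(x)))
          x = np.where(accept, prop, x)
          # occasionally attempt to swap a random neighboring pair of replicas
          if step % 10 == 0:
              i = rng.integers(0, len(betas) - 1)
              log_ratio = (betas[i] - betas[i + 1]) * (U(x[i]) - U(x[i + 1]))
              if np.log(rng.random()) < log_ratio:
                  x[i], x[i + 1] = x[i + 1], x[i]
          samples.append(x[0])                   # coldest replica targets the pdf

      samples = np.array(samples[2000:])         # discard burn-in
      print("fraction in right well:", np.mean(samples > 0))  # roughly 0.5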

  1. A lower hybrid current drive system for ITER

    NASA Astrophysics Data System (ADS)

    Hoang, G. T.; Bécoulet, A.; Jacquinot, J.; Artaud, J. F.; Bae, Y. S.; Beaumont, B.; Belo, J. H.; Berger-By, G.; Bizarro, João P. S.; Bonoli, P.; Cho, M. H.; Decker, J.; Delpech, L.; Ekedahl, A.; Garcia, J.; Giruzzi, G.; Goniche, M.; Gormezano, C.; Guilhem, D.; Hillairet, J.; Imbeaux, F.; Kazarian, F.; Kessel, C.; Kim, S. H.; Kwak, J. G.; Jeong, J. H.; Lister, J. B.; Litaudon, X.; Magne, R.; Milora, S.; Mirizzi, F.; Namkung, W.; Noterdaeme, J. M.; Park, S. I.; Parker, R.; Peysson, Y.; Rasmussen, D.; Sharma, P. K.; Schneider, M.; Synakowski, E.; Tanga, A.; Tuccillo, A.; Wan, Y. X.

    2009-07-01

    A 20 MW/5 GHz lower hybrid current drive (LHCD) system was initially due to be commissioned and used for the second mission of ITER, i.e. the Q = 5 steady state target. Though not part of the currently planned procurement phase, it is now under consideration for an earlier delivery. In this paper, both physics and technology conceptual designs are reviewed. Furthermore, an appropriate work plan is also developed. This work plan for design, R&D, procurement and installation of a 20 MW LHCD system on ITER follows the ITER Scientific and Technical Advisory Committee (STAC) T13-05 task instructions. It gives more details on the various scientific and technical implications of the system, without presuming on any work or procurement sharing amongst the possible ITER partners. (The LHCD system of ITER is not part of the initial cost sharing.) This document does not commit the Institutions or Domestic Agencies of the various authors in that respect.

  2. Preliminary consideration of CFETR ITER-like case diagnostic system

    NASA Astrophysics Data System (ADS)

    Li, G. S.; Yang, Y.; Wang, Y. M.; Ming, T. F.; Han, X.; Liu, S. C.; Wang, E. H.; Liu, Y. K.; Yang, W. J.; Li, G. Q.; Hu, Q. S.; Gao, X.

    2016-11-01

    Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims at bridging the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary consideration of the ITER-like case is presented. Based on the ITER diagnostic system, three versions of increased complexity and coverage of the ITER-like case diagnostic system have been developed, with different goals and functions. Version A aims only at machine protection and basic control. Versions B and C are both intended for machine protection as well as basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.

  3. Final Report on ITER Task Agreement 81-10

    SciTech Connect

    Brad J. Merrill

    2009-01-01

    An International Thermonuclear Experimental Reactor (ITER) Implementing Task Agreement (ITA) on Magnet Safety was established between the ITER International Organization (IO) and the Idaho National Laboratory (INL) Fusion Safety Program (FSP) during calendar year 2004. The objectives of this ITA were to add new capabilities to the MAGARC code and to use this updated version of MAGARC to analyze unmitigated superconductor quench events for both poloidal field (PF) and toroidal field (TF) coils of the ITER design. This report documents the completion of the work scope for this ITA. Based on the results obtained for this ITA, an unmitigated quench event in a larger ITER PF coil does not appear to be as severe an accident as in an ITER TF coil.

  4. Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.

    PubMed

    Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B

    2016-01-01

    We investigated the effects of low-dose multi detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard-dose) and a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were biomechanically determined and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very small detail that might get lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.

  7. An efficient reconstruction method for bioluminescence tomography based on two-step iterative shrinkage approach

    NASA Astrophysics Data System (ADS)

    Guo, Wei; Jia, Kebin; Tian, Jie; Han, Dong; Liu, Xueyan; Wu, Ping; Feng, Jinchao; Yang, Xin

    2012-03-01

    Among many molecular imaging modalities, bioluminescence tomography (BLT) is an important optical molecular imaging modality. Due to its unique advantages in specificity, sensitivity, cost-effectiveness and low background noise, BLT is widely studied for live small animal imaging. Since only the photon distribution over the surface is measurable and photon propagation within biological tissue is highly diffusive, BLT is often an ill-posed problem and may bear multiple solutions and aberrant reconstructions in the presence of measurement noise and optical parameter mismatches. For many practical BLT applications, such as early detection of tumors, the volumes of the light sources are very small compared with the whole body. Therefore, L1-norm sparsity regularization has been used to take advantage of this sparsity prior and alleviate the ill-posedness of the problem. The iterative shrinkage (IST) algorithm is an important research achievement in the field of compressed sensing and is widely applied in sparse signal reconstruction. However, the convergence rate of the IST algorithm depends heavily on the linear operator; when the problem is ill-posed, it becomes very slow. In this paper, we present a sparsity-regularized reconstruction method for BLT based on the two-step iterated shrinkage approach. By employing a two-step strategy of iterative reweighted shrinkage (IRS) to improve IST, the proposed method shows a faster convergence rate and better adaptability for BLT. Simulation experiments with a mouse atlas were conducted to evaluate the performance of the proposed method. Compared with IST, the proposed method obtains a stable and comparable reconstruction solution with fewer iterations.
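
    To make the shrinkage idea concrete, here is a small sparse-recovery toy. It is not the authors' IRS-based algorithm: as a safely convergent stand-in it uses the FISTA-style two-step update, which likewise combines the two previous iterates with a shrinkage step; all problem sizes and parameters are invented.

      # Iterative shrinkage with a two-step (momentum) update for
      # min ||y - Ax||^2 + lam*||x||_1, on synthetic sparse data.
      import numpy as np

      def soft(v, t):
          # soft-thresholding: the proximal operator of t*||.||_1
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      rng = np.random.default_rng(1)
      m, n, k = 60, 200, 5
      A = rng.normal(size=(m, n)) / np.sqrt(m)
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.normal(size=k)
      y = A @ x_true + 0.01 * rng.normal(size=m)

      lam = 0.05
      step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = largest eigenvalue of A^T A

      x_prev = np.zeros(n)
      x = np.zeros(n)
      t = 1.0
      for _ in range(500):
          t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
          z = x + ((t - 1.0) / t_next) * (x - x_prev)   # combine last two iterates
          x_prev, x = x, soft(z + step * (A.T @ (y - A @ z)), step * lam)
          t = t_next

      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))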

  8. Hydropower, adaptive management, and biodiversity

    SciTech Connect

    Wieringa, M.J.; Morton, A.G.

    1996-11-01

    Adaptive management is a policy framework within which an iterative process of decision making is allowed based on the observed responses to and effectiveness of previous decisions. The use of adaptive management allows science-based research and monitoring of natural resource and ecological community responses, in conjunction with societal values and goals, to guide decisions concerning man's activities. The adaptive management process has been proposed for application to hydropower operations at Glen Canyon Dam on the Colorado River, a situation that requires complex balancing of natural resources requirements and competing human uses. This example is representative of the general increase in public interest in the operation of hydropower facilities and possible effects on downstream natural resources and of the growing conflicts between uses and users of river-based resources. This paper describes the adaptive management process, using the Glen Canyon Dam example, and discusses ways to make the process work effectively in managing downstream natural resources and biodiversity. 10 refs., 2 figs.

  9. Statistical Reference Datasets

    National Institute of Standards and Technology Data Gateway

    Statistical Reference Datasets (Web, free access)   The Statistical Reference Datasets project is also supported by the Standard Reference Data Program. The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software.

  10. Adaptive Development

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for the following reasons: current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such, they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g., distributed vectored, hybrid and electric drive propulsion concepts).

  11. Explorations in statistics: statistical facets of reproducibility.

    PubMed

    Curran-Everett, Douglas

    2016-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eleventh installment of Explorations in Statistics explores statistical facets of reproducibility. If we obtain an experimental result that is scientifically meaningful and statistically unusual, we would like to know that our result reflects a general biological phenomenon that another researcher could reproduce if (s)he repeated our experiment. But more often than not, we may learn this researcher cannot replicate our result. The National Institutes of Health and the Federation of American Societies for Experimental Biology have created training modules and outlined strategies to help improve the reproducibility of research. These particular approaches are necessary, but they are not sufficient. The principles of hypothesis testing and estimation are inherent to the notion of reproducibility in science. If we want to improve the reproducibility of our research, then we need to rethink how we apply fundamental concepts of statistics to our science.

  12. Application of Iterative Time-Reversal for Electromagnetic Wave Focusing in a Wave Chaotic System

    NASA Astrophysics Data System (ADS)

    Taddese, Biniyam; Antonsen, Thomas; Ott, Edward; Anlage, Steven

    2011-03-01

    Time-reversal mirrors exploit the time-reversal invariance of the wave equation to achieve spatial and temporal focusing, and they have been shown to be very effective sensors of perturbations to wave chaotic systems. The sensing technique is based on a classical analogue of the Loschmidt echo. However, dissipation results in an imperfect focusing, hence we created a sensing technique employing exponential amplification to overcome this limitation [1,2]. We now apply the technique of iterative time-reversal, which had been demonstrated in a dissipative acoustic system, to an electromagnetic time-reversal mirror, and experimentally demonstrate improved temporal focusing. We also use a numerical model of a network of transmission lines to demonstrate improved focusing by the iterative technique for various degrees and statistical distributions of loss in the system. The application of the iterative technique to improve the performance and practicality of our sensor is explored. This work is supported by an ONR MURI Grant No. N000140710734, AFOSR Grant No. FA95501010106, and the Maryland CNAM.
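
    The basic (single-pass) time-reversal focusing effect is easy to reproduce numerically: re-emitting the time-reversed impulse response of a reverberant channel returns its autocorrelation, which peaks sharply at the focus time. The sketch below shows only this baseline effect, not the iterative refinement or loss models studied above; the channel is an invented lossy multipath response.

      # Single-step time-reversal focusing in a toy 1D multipath channel.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 512
      decay = np.exp(-np.arange(n) / 80.0)       # reverberation with dissipation
      h = rng.normal(size=n) * decay             # random multipath impulse response

      emission = h[::-1] / np.linalg.norm(h)     # time-reversed, normalized signal
      received = np.convolve(h, emission)        # back at the source: h * h(-t)

      peak = np.max(np.abs(received))            # autocorrelation peak = focus
      sidelobe = np.median(np.abs(received))
      print("focus contrast (peak / median sidelobe):", peak / sidelobe)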

  13. Evaluating iterative reconstruction performance in computed tomography

    SciTech Connect

    Chen, Baiyu Solomon, Justin; Ramirez Giraldo, Juan Carlos; Samei, Ehsan

    2014-12-15

    Purpose: Iterative reconstruction (IR) offers notable advantages in computed tomography (CT). However, its performance characterization is complicated by its potentially nonlinear behavior, impacting performance in terms of specific tasks. This study aimed to evaluate the performance of IR with both task-specific and task-generic strategies. Methods: The performance of IR in CT was mathematically assessed with an observer model that predicted the detection accuracy in terms of the detectability index (d′). d′ was calculated based on the properties of the image noise and resolution, the observer, and the detection task. The characterizations of image noise and resolution were extended to accommodate the nonlinearity of IR. A library of tasks was mathematically modeled at a range of sizes (radius 1–4 mm), contrast levels (10–100 HU), and edge profiles (sharp and soft). Unique d′ values were calculated for each task with respect to five radiation exposure levels (volume CT dose index, CTDI{sub vol}: 3.4–64.8 mGy) and four reconstruction algorithms (filtered backprojection reconstruction, FBP; iterative reconstruction in imaging space, IRIS; and sinogram affirmed iterative reconstruction with strengths of 3 and 5, SAFIRE3 and SAFIRE5; all provided by Siemens Healthcare, Forchheim, Germany). The d′ values were translated into the areas under the receiver operating characteristic curve (AUC) to represent human observer performance. For each task and reconstruction algorithm, a threshold dose was derived as the minimum dose required to achieve a threshold AUC of 0.9. A task-specific dose reduction potential of IR was calculated as the difference between the threshold doses for IR and FBP. A task-generic comparison was further made between IR and FBP in terms of the percent of all tasks yielding an AUC higher than the threshold. Results: IR required less dose than FBP to achieve the threshold AUC. In general, SAFIRE5 showed the most significant dose reduction
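
    The observer-model recipe in the abstract reduces, for a non-prewhitening observer, to d′² = (∫∫ W²TTF² df)² / (∫∫ W²TTF²·NPS df), with AUC obtained from d′ under an equal-variance Gaussian model. The sketch below evaluates this for a soft (Gaussian-profile) task; every curve and number is an invented stand-in, not the paper's measured TTF/NPS data.

      # Non-prewhitening detectability index d' for a soft Gaussian task,
      # using assumed radially symmetric TTF and NPS models.
      import numpy as np
      from math import erf, sqrt

      df = 0.01
      f = np.arange(df, 2.0, df)                 # radial frequency (cycles/mm)

      sigma, contrast = 1.5, 10.0                # task: 1.5 mm Gaussian, 10 HU
      W = contrast * 2 * np.pi * sigma**2 * np.exp(-2 * np.pi**2 * sigma**2 * f**2)
      TTF = np.exp(-(f / 0.7) ** 2)              # assumed resolution model
      NPS = 1500.0 * f * np.exp(-f / 0.4)        # assumed ramp-like noise spectrum

      # radially symmetric 2D integrals: integral g d2f = 2*pi * integral g(f) f df
      num = (2 * np.pi * np.sum(W**2 * TTF**2 * f) * df) ** 2
      den = 2 * np.pi * np.sum(W**2 * TTF**2 * NPS * f) * df
      d_prime = sqrt(num / den)

      auc = 0.5 * (1.0 + erf(d_prime / 2.0))     # AUC = Phi(d'/sqrt(2))
      print(f"d' = {d_prime:.2f}, AUC = {auc:.3f}")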

  14. Multiagent reinforcement learning in the Iterated Prisoner's Dilemma.

    PubMed

    Sandholm, T W; Crites, R H

    1996-01-01

    Reinforcement learning (RL) is based on the idea that the tendency to produce an action should be strengthened (reinforced) if it produces favorable results, and weakened if it produces unfavorable results. Q-learning is a recent RL algorithm that does not need a model of its environment and can be used on-line. Therefore, it is well suited for use in repeated games against an unknown opponent. Most RL research has been confined to single-agent settings or to multiagent settings where the agents have totally positively correlated payoffs (team problems) or totally negatively correlated payoffs (zero-sum games). This paper is an empirical study of reinforcement learning in the Iterated Prisoner's Dilemma (IPD), where the agents' payoffs are neither totally positively nor totally negatively correlated. RL is considerably more difficult in such a domain. This paper investigates the ability of a variety of Q-learning agents to play the IPD game against an unknown opponent. In some experiments, the opponent is the fixed strategy Tit-For-Tat, while in others it is another Q-learner. All the Q-learners learned to play optimally against Tit-For-Tat. Playing against another learner was more difficult because the adaptation of the other learner created a non-stationary environment, and because the other learner was not endowed with any a priori knowledge about the IPD game such as a policy designed to encourage cooperation. The learners that were studied varied along three dimensions: the length of history they received as context, the type of memory they employed (lookup tables based on restricted history windows or recurrent neural networks that can theoretically store features from arbitrarily deep in the past), and the exploration schedule they followed. Although all the learners faced difficulties when playing against other learners, agents with longer history windows, lookup table memories, and longer exploration schedules fared best in the IPD games.
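
    A minimal version of the Tit-For-Tat experiment fits in a few lines of Q-learning with a one-move history as state. The payoff values and learning parameters below are standard textbook choices, not necessarily the paper's exact setup; with a high discount factor the learned greedy policy should favor cooperation, matching the reported result.

      # Q-learning against Tit-For-Tat in the Iterated Prisoner's Dilemma.
      # State = (my last move, opponent's last move); TFT repeats my last move.
      import random

      PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
      ACTIONS = ["C", "D"]
      alpha, gamma, eps = 0.1, 0.95, 0.1

      random.seed(0)
      Q = {}                                     # Q[(state, action)] -> value
      state = ("C", "C")

      for _ in range(50000):
          if random.random() < eps:              # epsilon-greedy exploration
              a = random.choice(ACTIONS)
          else:
              a = max(ACTIONS, key=lambda u: Q.get((state, u), 0.0))
          opp = state[0]                         # Tit-For-Tat echoes my last move
          reward = PAYOFF[(a, opp)]
          next_state = (a, opp)
          best_next = max(Q.get((next_state, u), 0.0) for u in ACTIONS)
          old = Q.get((state, a), 0.0)
          Q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
          state = next_state

      policy = {s: max(ACTIONS, key=lambda u: Q.get((s, u), 0.0))
                for s in [(p, o) for p in ACTIONS for o in ACTIONS]}
      print(policy)                              # cooperation should dominate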

  15. Undergraduate experiments on statistical optics

    NASA Astrophysics Data System (ADS)

    Scholz, Ruediger; Friege, Gunnar; Weber, Kim-Alessandro

    2016-09-01

    Since the pioneering experiments of Forrester et al (1955 Phys. Rev. 99 1691) and Hanbury Brown and Twiss (1956 Nature 177 27; Nature 178 1046), along with the introduction of the laser in the 1960s, the systematic analysis of random fluctuations of optical fields has developed into an indispensable part of physical optics for gaining insight into features of the fields. In 1985 Joseph W Goodman prefaced his textbook on statistical optics with a strong commitment to the ‘tools of probability and statistics’ (Goodman 2000 Statistical Optics (New York: John Wiley & Sons Inc.)) in the education of advanced optics. Since then a wide range of novel undergraduate optical counting experiments and corresponding pedagogical approaches have been introduced to underpin the rapid growth of interest in coherence and photon statistics. We propose low-cost experimental steps that are a fair way off ‘real’ quantum optics, but that give deep insight into random optical fluctuation phenomena: (1) the introduction of statistical methods into undergraduate university optical lab work, and (2) the connection between the photoelectric signal and the characteristics of the light source. We describe three experiments and theoretical approaches which may be used to pave the way for a well balanced growth of knowledge, providing students with an opportunity to enhance their abilities to adapt the ‘tools of probability and statistics’.
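
    The second proposed step, connecting the photoelectric signal to the light source, can be previewed in simulation: coherent light yields Poisson photocounts, while single-mode pseudothermal light yields Bose-Einstein counts with excess variance. The mean count and sample size below are arbitrary illustrative choices.

      # Photocount statistics: Poisson (coherent) vs Bose-Einstein (thermal).
      import numpy as np

      rng = np.random.default_rng(7)
      mean_n, shots = 4.0, 200000

      coherent = rng.poisson(mean_n, shots)        # constant intensity
      intensity = rng.exponential(mean_n, shots)   # thermal intensity fluctuations
      thermal = rng.poisson(intensity)             # doubly stochastic -> Bose-Einstein

      for name, counts in [("coherent", coherent), ("thermal", thermal)]:
          m, v = counts.mean(), counts.var()
          print(f"{name}: mean = {m:.2f}, var = {v:.2f}, Fano = {v / m:.2f}")
      # theory: Fano factor 1 for Poisson; 1 + <n> = 5 for single-mode thermal light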

  16. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  17. Parallelizable 3D statistical reconstruction for C-arm tomosynthesis system

    NASA Astrophysics Data System (ADS)

    Wang, Beilei; Barner, Kenneth; Lee, Denny

    2005-04-01

    Clinical diagnosis and security detection tasks increasingly require 3D information, which is difficult or impossible to obtain from 2D (two dimensional) radiographs. As a 3D (three dimensional) radiographic and non-destructive imaging technique, digital tomosynthesis is especially suited to cases where 3D information is required while complete projection data are not available. Nowadays, FBP (filtered back projection) is extensively used in industry for its speed and simplicity. However, it is hard for FBP to deal with situations where only a limited number of projections from constrained directions are available, or where the SNR (signal-to-noise ratio) of the projections is low. In order to deal with noise and take into account a priori information about the object, a statistical image reconstruction method is described based on the acquisition model of X-ray projections. We formulate an ML (maximum likelihood) function for this model and develop an ordered-subsets iterative algorithm to estimate the unknown attenuation of the object. Simulations show that satisfactory results can be obtained after 1 to 2 iterations, after which there is no significant improvement in image quality. An adaptive Wiener filter is also applied to the reconstructed image to remove its noise. Some approximations to speed up the reconstruction computation are also considered. Applying this method to computer-generated projections of a revised Shepp phantom and to true projections from diagnostic radiographs of a patient's hand and from mammography images yields reconstructions of impressive quality. Parallel programming is also implemented and tested: the quality of the reconstructed object is conserved, while the computation time is reduced by a factor of almost the number of threads used.
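
    The ordered-subsets idea is easiest to see with the classical multiplicative ML (EM-type) update, cycling over subsets of the projections. The paper's transmission model for attenuation differs in detail, so the toy below (random system matrix, Poisson data) is only a structural sketch of subset cycling, not the authors' algorithm.

      # Ordered-subsets ML sketch: per subset s, apply the multiplicative
      # update x <- x * A_s^T(y_s / (A_s x)) / (A_s^T 1).
      import numpy as np

      rng = np.random.default_rng(2)
      n_proj, n_vox, n_subsets = 120, 64, 4
      A = rng.random((n_proj, n_vox)) * (rng.random((n_proj, n_vox)) < 0.3)
      x_true = 10.0 * rng.random(n_vox)
      y = rng.poisson(A @ x_true)                    # noisy projection data

      x = np.ones(n_vox)                             # positive initial estimate
      subsets = np.array_split(rng.permutation(n_proj), n_subsets)
      for _ in range(5):                             # a few iterations suffice
          for s in subsets:
              As, ys = A[s], y[s]
              ratio = ys / np.maximum(As @ x, 1e-12) # measured / predicted counts
              sens = As.T @ np.ones(len(s))          # subset sensitivity image
              x = np.where(sens > 0, x * (As.T @ ratio) / np.maximum(sens, 1e-12), x)

      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))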

  18. Electron cyclotron emission diagnostic for ITER

    SciTech Connect

    Rowan, W.; Austin, M.; Phillips, P.; Beno, J.; Ouroua, A.; Ellis, R.; Feder, R.; Patel, A.

    2010-10-15

    Electron temperature measurements and electron thermal transport inferences will be critical to the nonactive and deuterium phases of ITER operation and will take on added importance during the alpha heating phase. The diagnostic must meet stringent criteria on spatial coverage and spatial resolution during full field operation. During the early phases of operation, it must operate equally well at half field. The key to the diagnostic is the front end design. It consists of a quasioptical antenna and a pair of calibration sources. The radial resolution of the diagnostic is less than 0.06 m. The spatial coverage extends at least from the core to the separatrix with first harmonic O-mode being used for the core and second harmonic X-mode being used for the pedestal. The instrumentation used for the core measurement at full field can be used for detection at half field by changing the detected polarization. Intermediate fields are accessible. The electron cyclotron emission systems require in situ calibration, which is provided by a novel hot calibration source. The critical component for the hot calibration source, the emissive surface, has been successfully tested. A prototype hot calibration source has been designed, making use of extensive thermal and mechanical modeling.

  20. Iterated conformal dynamics and Laplacian growth.

    PubMed

    Barra, Felipe; Davidovitch, Benny; Procaccia, Itamar

    2002-04-01

    The method of iterated conformal maps for the study of diffusion limited aggregates (DLA) is generalized to the study of Laplacian growth patterns and related processes. We emphasize the fundamental difference between these processes: DLA is grown serially with constant size particles, while Laplacian patterns are grown by advancing each boundary point in parallel, proportional to the gradient of the Laplacian field. We introduce a two-parameter family of growth patterns that interpolates between DLA and a discrete version of Laplacian growth. The putative finite-time ultraviolet singularities are regularized here by a minimal tip size, applied equivalently to all the models in this family. With this we stress that the difference between DLA and Laplacian growth lies not in the manner of ultraviolet regularization, but rather in their deeply different growth rules. The fractal dimensions of the asymptotic patterns depend continuously on the two parameters of the family, giving rise to a "phase diagram" in which DLA and discretized Laplacian growth are at the extreme ends. In particular, we show that the fractal dimension of Laplacian growth patterns is higher than the fractal dimension of DLA, with the possibility of dimension 2 for the former not excluded. PMID:12005963