Science.gov

Sample records for adaptive statistical iterative

  1. Statistical iterative reconstruction using adaptive fractional order regularization

    PubMed Central

    Zhang, Yi; Wang, Yan; Zhang, Weihua; Lin, Feng; Pu, Yifei; Zhou, Jiliu

    2016-01-01

    In order to reduce the radiation dose of X-ray computed tomography (CT), low-dose CT has drawn much attention in both clinical and industrial fields. A fractional order model based on a statistical iterative reconstruction framework was proposed in this study. To further enhance the performance of the proposed model, an adaptive order selection strategy, determining the fractional order pixel by pixel, was given. Experiments, including numerical and clinical cases, illustrated better results than several existing methods, especially in structure and texture preservation. PMID:27231604
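
    A sketch of the regularizer family involved may help: the Grünwald-Letnikov (GL) expansion is a standard discretization of a fractional-order derivative, and a pixel-wise order map can modulate it. The order-selection rule below (driven by local gradient magnitude) is purely illustrative; the abstract does not specify the authors' strategy.

    ```python
    import numpy as np

    def gl_coeffs(alpha, K):
        """Grunwald-Letnikov coefficients for a fractional derivative of order alpha."""
        c = np.zeros(K)
        c[0] = 1.0
        for k in range(1, K):
            c[k] = c[k - 1] * (k - 1 - alpha) / k
        return c

    def fractional_difference(img, alpha, K=5):
        """1-D GL fractional difference along rows -- a building block for a
        fractional-order roughness penalty."""
        c = gl_coeffs(alpha, K)
        out = np.zeros_like(img, dtype=float)
        for k in range(K):
            out += c[k] * np.roll(img, k, axis=1)
        return out

    def adaptive_order_map(img, lo=1.0, hi=1.8):
        """Hypothetical pixel-wise order selection: lower order in smooth regions,
        higher near edges (illustrative rule, not the paper's)."""
        gx, gy = np.gradient(img.astype(float))
        g = np.hypot(gx, gy)
        return lo + (hi - lo) * g / (g.max() + 1e-12)
    ```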

  2. Quantitative evaluation of ASiR image quality: an adaptive statistical iterative reconstruction technique

    NASA Astrophysics Data System (ADS)

    Van de Casteele, Elke; Parizel, Paul; Sijbers, Jan

    2012-03-01

    Adaptive statistical iterative reconstruction (ASiR) is a new reconstruction algorithm used in the field of medical X-ray imaging. This new reconstruction method combines the idealized system representation used in the standard filtered back projection (FBP) algorithm with the strength of iterative reconstruction, by including a noise model in the reconstruction scheme. The algorithm models how noise propagates through the reconstruction steps, feeds this model back into the loop, and iteratively reduces noise in the reconstructed image without affecting spatial resolution. In this paper the effect of ASiR on the contrast-to-noise ratio is studied using the low contrast module of the Catphan phantom. The experiments were done on a GE LightSpeed VCT system at different voltages and currents. The results show reduced noise and increased contrast for the ASiR reconstructions compared to the standard FBP method. For the same contrast-to-noise ratio, the images from ASiR can be obtained using 60% less current, leading to a dose reduction of the same amount.
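
    ASiR itself is proprietary, but its "percentage" setting is commonly described as a linear blend between the FBP image and the fully iterative, noise-modeled result. A minimal sketch of that interpretation follows, together with an illustrative link between noise and tube current (noise scales roughly as 1/sqrt(mAs)); the blend weights and numbers are assumptions, not GE's implementation.

    ```python
    import numpy as np

    def asir_style_blend(fbp_img, iterative_img, percent):
        """Conceptual stand-in for the ASiR percentage setting: a linear blend
        of the FBP image and the noise-modeled iterative result."""
        w = percent / 100.0
        return (1.0 - w) * fbp_img + w * iterative_img

    # If the blend cuts noise by a factor f at fixed dose, the same CNR is
    # reached at roughly f**2 of the original tube current (noise ~ 1/sqrt(mAs)).
    noise_factor = 0.63                    # illustrative value, not from the paper
    relative_current = noise_factor ** 2   # ~0.4, i.e. about 60% less current
    ```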

  3. Ultralow dose computed tomography attenuation correction for pediatric PET CT using adaptive statistical iterative reconstruction

    SciTech Connect

    Brady, Samuel L.; Shulkin, Barry L.

    2015-02-15

    Purpose: To develop ultralow dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultralow doses (10–35 mAs). CT quantitation: noise, low-contrast resolution, and CT numbers for 11 tissue substitutes were analyzed in-phantom. CT quantitation was analyzed to a reduction of 90% volume computed tomography dose index (0.39/3.64; mGy) from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative dose reduction and noise control. Results: CT numbers were constant to within 10% from the nondose reduced CTAC image for 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution for PET images reconstructed with CTAC protocols was found down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3–0.9/6.2). Noise magnitude in dose-reduced patient images increased but was not statistically different from predose-reduced patient images. Conclusions: Using ASiR allowed for aggressive reduction in CT dose with no change in PET reconstructed images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.

  4. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography

    PubMed Central

    Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess

    2016-01-01

    Background Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution and increased low-contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality without increasing radiation exposure. Purpose To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Material and Methods Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. Results VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. Conclusion ASIR improved the subjective image quality parameter of sharpness and, objectively, reduced noise and increased CNR.

  5. Impact of adaptive statistical iterative reconstruction on radiation dose in evaluation of trauma patients

    PubMed Central

    Maxfield, Mark W.; Schuster, Kevin M.; McGillicuddy, Edward A.; Young, Calvin J.; Ghita, Monica; Bokhari, S.A. Jamal; Oliva, Isabel B.; Brink, James A.; Davis, Kimberly A.

    2013-01-01

    BACKGROUND A recent study showed that computed tomographic (CT) scans contributed 93% of radiation exposure of 177 patients admitted to our Level I trauma center. Adaptive statistical iterative reconstruction (ASIR) is an algorithm that reduces the noise level in reconstructed images and therefore allows the use of less ionizing radiation during CT scans without significantly affecting image quality. ASIR was instituted on all CT scans performed on trauma patients in June 2009. Our objective was to determine if implementation of ASIR reduced radiation dose without compromising patient outcomes. METHODS We identified 300 patients activating the trauma system before and after the implementation of ASIR imaging. After applying inclusion criteria, 245 charts were reviewed. Baseline demographics, presenting characteristics, number of delayed diagnoses, and missed injuries were recorded. The postexamination volume CT dose index (CTDIvol) and dose-length product (DLP) reported by the scanner for CT scans of the chest, abdomen, and pelvis and CT scans of the brain and cervical spine were recorded. Subjective image quality was compared between the two groups. RESULTS For CT scans of the chest, abdomen, and pelvis, the mean CTDIvol (17.1 mGy vs. 14.2 mGy; p < 0.001) and DLP (1,165 mGy·cm vs. 1,004 mGy·cm; p < 0.001) were lower for studies performed with ASIR. For CT scans of the brain and cervical spine, the mean CTDIvol (61.7 mGy vs. 49.6 mGy; p < 0.001) and DLP (1,327 mGy·cm vs. 1,067 mGy·cm; p < 0.001) were lower for studies performed with ASIR. There was no subjective difference in image quality between ASIR and non-ASIR scans. All CT scans were deemed of good or excellent image quality. There were no delayed diagnoses or missed injuries related to CT scanning identified in either group. CONCLUSION Implementation of ASIR imaging for CT scans performed on trauma patients led to a nearly 20% reduction in ionizing radiation without compromising outcomes or image quality.
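
    As a hedged sketch of the kind of comparison reported (mean CTDIvol before vs. after ASIR with a two-sample test), the snippet below uses synthetic samples generated around the paper's reported means; the spreads and group sizes are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Synthetic CTDIvol samples (mGy) for chest/abdomen/pelvis scans --
    # means match the paper's reported values, everything else is illustrative.
    pre_asir = rng.normal(17.1, 2.5, size=120)
    post_asir = rng.normal(14.2, 2.3, size=125)

    t, p = stats.ttest_ind(pre_asir, post_asir, equal_var=False)  # Welch's t-test
    reduction = 100 * (pre_asir.mean() - post_asir.mean()) / pre_asir.mean()
    print(f"mean CTDIvol reduction = {reduction:.1f}%, p = {p:.2e}")
    ```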

  6. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    SciTech Connect

    Brady, S. L.; Yee, B. S.; Kaufman, R. A.

    2012-09-15

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects of image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%–100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using 1, 5, and 10 yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more
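
    The noise power spectrum mentioned here has a standard ensemble estimate from uniform-region ROIs. A minimal version is sketched below (mean detrending only; practical implementations usually subtract a low-order polynomial fit instead).

    ```python
    import numpy as np

    def nps_2d(rois, px, py):
        """Ensemble 2-D noise power spectrum estimate.
        rois: (n, Ny, Nx) stack of uniform-phantom ROIs; px, py: pixel size (mm).
        Returns the NPS in HU^2 * mm^2 (zero frequency at the corner)."""
        n, Ny, Nx = rois.shape
        acc = np.zeros((Ny, Nx))
        for roi in rois:
            noise = roi - roi.mean()            # simple detrend
            acc += np.abs(np.fft.fft2(noise)) ** 2
        return (px * py) / (Nx * Ny) * acc / n
    ```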

  7. Image quality of CT angiography with model-based iterative reconstruction in young children with congenital heart disease: comparison with filtered back projection and adaptive statistical iterative reconstruction.

    PubMed

    Son, Sung Sil; Choo, Ki Seok; Jeon, Ung Bae; Jeon, Gye Rok; Nam, Kyung Jin; Kim, Tae Un; Yeom, Jeong A; Hwang, Jae Yeon; Jeong, Dong Wook; Lim, Soo Jin

    2015-06-01

    To retrospectively evaluate the image quality of CT angiography (CTA) reconstructed by model-based iterative reconstruction (MBIR) and to compare this with images obtained by filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR) in newborns and infants with congenital heart disease (CHD). Thirty-seven children (age 4.8 ± 3.7 months; weight 4.79 ± 0.47 kg) with suspected CHD underwent CTA on a 64-detector MDCT without ECG gating (80 kVp, 40 mA using tube current modulation). Total dose length product was recorded in all patients. Images were reconstructed using FBP, ASIR, and MBIR. Objective image quality (density, noise) was measured in the great vessels and heart chambers. The contrast-to-noise ratio (CNR) was calculated by measuring the density and noise of myocardial walls. Two radiologists evaluated images for subjective noise, diagnostic confidence, and sharpness at the level prior to the first branch of the main pulmonary artery. Images were compared with respect to reconstruction method, and reconstruction times were measured. Images from all patients were diagnostic, and the effective dose was 0.22 mSv. The objective image noise of MBIR was significantly lower than those of FBP and ASIR in the great vessels and heart chambers (P < 0.05); however, with respect to attenuations in the four chambers, ascending aorta, descending aorta, and pulmonary trunk, no statistically significant difference was observed among the three methods (P > 0.05). Mean CNR values were 8.73 for FBP, 14.54 for ASIR, and 22.95 for MBIR. In addition, the subjective image noise of MBIR was significantly lower than those of the others (P < 0.01). Furthermore, while FBP had the highest score for image sharpness, ASIR had the highest score for diagnostic confidence (P < 0.05), and mean reconstruction times were 5.1 ± 2.3 s for FBP and ASIR and 15.1 ± 2.4 min for MBIR. While CTA with MBIR in newborns and infants with CHD can reduce image noise and

  8. Can use of adaptive statistical iterative reconstruction reduce radiation dose in unenhanced head CT? An analysis of qualitative and quantitative image quality

    PubMed Central

    Heggen, Kristin Livelten; Pedersen, Hans Kristian; Andersen, Hilde Kjernlie; Martinsen, Anne Catrine T

    2016-01-01

    Background Iterative reconstruction can reduce image noise and thereby facilitate dose reduction. Purpose To evaluate qualitative and quantitative image quality for full dose and dose reduced head computed tomography (CT) protocols reconstructed using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR). Material and Methods Fourteen patients undergoing follow-up head CT were included. All patients underwent a full dose (FD) exam and a subsequent 15% dose reduced (DR) exam, reconstructed using FBP and 30% ASIR. Qualitative image quality was assessed using visual grading characteristics. Quantitative image quality was assessed using ROI measurements in cerebrospinal fluid (CSF), white matter, and peripheral and central gray matter. Additionally, quantitative image quality was measured in the Catphan and the vendor’s water phantom. Results There was no significant difference in qualitative image quality between FD FBP and DR ASIR. Comparing same-scan FBP versus ASIR, a noise reduction of 28.6% in CSF and between −3.7 and 3.5% in brain parenchyma was observed. Comparing FD FBP versus DR ASIR, a noise reduction of 25.7% in CSF, and −7.5 and 6.3% in brain parenchyma was observed. Image contrast increased in ASIR reconstructions. Contrast-to-noise ratio was improved in DR ASIR compared to FD FBP. In phantoms, noise reduction was in the range of 3 to 28%, varying with image content. Conclusion There was no significant difference in qualitative image quality between full dose FBP and dose reduced ASIR. CNR improved in DR ASIR compared to FD FBP mostly due to increased contrast, not reduced noise. Therefore, we recommend using caution if reducing dose and applying ASIR to maintain image quality. PMID:27583169

  9. SU-E-I-86: Ultra-Low Dose Computed Tomography Attenuation Correction for Pediatric PET CT Using Adaptive Statistical Iterative Reconstruction (ASiR™)

    SciTech Connect

    Brady, S; Shulkin, B

    2015-06-15

    Purpose: To develop ultra-low dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultra-low doses (10–35 mAs). CT quantitation: noise, low-contrast resolution, and CT numbers for eleven tissue substitutes were analyzed in-phantom. CT quantitation was analyzed to a reduction of 90% CTDIvol (0.39/3.64; mGy) radiation dose from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation organ dose, as derived from patient exam size specific dose estimate (SSDE), was converted to effective dose using the standard ICRP report 103 method. Effective dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative patient population dose reduction and noise control. Results: CT numbers were constant to within 10% from the non-dose-reduced CTAC image down to 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution was found for PET images reconstructed with ASiR-based CTAC protocols down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62%–86% (3.2/8.3−0.9/6.2; mSv). Noise magnitude in dose-reduced patient images increased but was not statistically different from pre-dose-reduced patient images. Conclusion: Using ASiR allowed for aggressive reduction in CTAC dose with no change in PET reconstructed images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.

  10. Statistical Physics for Adaptive Distributed Control

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2005-01-01

    A viewgraph presentation on statistical physics for distributed adaptive control is shown. The topics include: 1) The Golden Rule; 2) Advantages; 3) Roadmap; 4) What is Distributed Control? 5) Review of Information Theory; 6) Iterative Distributed Control; 7) Minimizing L(q) Via Gradient Descent; and 8) Adaptive Distributed Control.

  11. Matched filter based iterative adaptive approach

    NASA Astrophysics Data System (ADS)

    Nepal, Ramesh; Zhang, Yan Rockee; Li, Zhengzheng; Blake, William

    2016-05-01

    Matched Filter sidelobes from diversified LPI waveform design and sensor resolution are two important considerations in radars and active sensors in general. Matched Filter sidelobes can potentially mask weaker targets, and low sensor resolution not only causes a high margin of error but also limits sensing in target-rich environments/sectors. Improving these factors depends in part on the transmitted waveform and, consequently, on pulse compression techniques. An adaptive pulse compression algorithm is hence desired that can mitigate the aforementioned limitations. A new Matched Filter based Iterative Adaptive Approach, MF-IAA, has been developed as an extension to the traditional Iterative Adaptive Approach (IAA). MF-IAA takes as its input the Matched Filter output. The motivation here is to facilitate implementation of the Iterative Adaptive Approach without disrupting the processing chain of the traditional Matched Filter. Similar to IAA, MF-IAA is a user-parameter-free, iterative, weighted-least-squares-based spectral identification algorithm. This work focuses on the implementation of MF-IAA. The feasibility of MF-IAA is studied using a realistic airborne radar simulator as well as actual measured airborne radar data. The performance of MF-IAA is measured with different test waveforms and different signal-to-noise ratio (SNR) levels. In addition, range-Doppler super-resolution using MF-IAA is investigated. Sidelobe reduction as well as super-resolution enhancement is validated. The robustness of MF-IAA with respect to different LPI waveforms and SNR levels is also demonstrated.
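
    For reference, the core of the standard IAA iteration (a weighted-least-squares amplitude update under an evolving covariance model) can be sketched as follows; per the abstract, MF-IAA feeds the matched-filter output into this loop rather than raw data. The dictionary matrix A and the small diagonal loading are assumptions of the sketch.

    ```python
    import numpy as np

    def iaa(y, A, n_iter=10):
        """Standard Iterative Adaptive Approach (IAA) sketch.
        y: (N,) snapshot; A: (N, K) dictionary/steering matrix.
        Returns power estimates for the K candidate bins."""
        N, K = A.shape
        # Matched-filter (periodogram-like) initialization.
        p = np.abs(A.conj().T @ y) ** 2 / (np.sum(np.abs(A) ** 2, axis=0) ** 2)
        for _ in range(n_iter):
            R = (A * p) @ A.conj().T + 1e-9 * np.eye(N)   # model covariance
            Ri_y = np.linalg.solve(R, y)
            Ri_A = np.linalg.solve(R, A)
            denom = np.einsum('nk,nk->k', A.conj(), Ri_A).real
            alpha = (A.conj().T @ Ri_y) / denom           # WLS amplitude per bin
            p = np.abs(alpha) ** 2
        return p
    ```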

  12. Low kilovoltage peak (kVp) with an adaptive statistical iterative reconstruction algorithm in computed tomography urography: evaluation of image quality and radiation dose

    PubMed Central

    Zhou, Zhiguo; Chen, Haixi; Wei, Wei; Zhou, Shanghui; Xu, Jingbo; Wang, Xifu; Wang, Qingguo; Zhang, Guixiang; Zhang, Zhuoli; Zheng, Linfeng

    2016-01-01

    Purpose: The purpose of this study was to evaluate the image quality and radiation dose in computed tomography urography (CTU) images acquired with a low kilovoltage peak (kVp) in combination with an adaptive statistical iterative reconstruction (ASiR) algorithm. Methods: A total of 45 subjects (18 women, 27 men) who underwent CTU with kV assist software for automatic selection of the optimal kVp were included and divided into two groups (A and B) based on the kVp and image reconstruction algorithm: group A consisted of patients who underwent CTU with an 80 or 100 kVp and whose images were reconstructed with the 50% ASiR algorithm (n=32); group B consisted of patients who underwent CTU with a 120 kVp and whose images were reconstructed with the filtered back projection (FBP) algorithm (n=13). The images were separately reconstructed with volume rendering (VR) and maximum intensity projection (MIP). Finally, the image quality was evaluated using an image score, CT attenuation, image noise, the contrast-to-noise ratio (CNR) of the renal pelvis-to-abdominal visceral fat and the signal-to-noise ratio (SNR) of the renal pelvis. The radiation dose was assessed using the volume CT dose index (CTDIvol), dose-length product (DLP) and effective dose (ED). Results: For groups A and B, the subjective image scores for the VR reconstruction images were 3.9±0.4 and 3.8±0.4, respectively, while those for the MIP reconstruction images were 3.8±0.4 and 3.6±0.6, respectively. No significant difference was found (p>0.05) between the two groups’ image scores for either the VR or MIP reconstruction images. Additionally, the inter-reviewer image scores did not significantly differ (p>0.05). The mean attenuation of the bilateral renal pelvis in group A was significantly higher than that in group B (271.4±57.6 vs. 221.8±35.3 HU, p<0.05), whereas the image noise in group A was significantly lower than that in group B (7.9±2.1 vs. 10.5±2.3 HU, p<0.05). The CNR and SNR in group A were

  13. Adaptable Iterative and Recursive Kalman Filter Schemes

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The iterated Kalman filter (IKF) and the recursive update filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N is the number of recursions, a tuning parameter. This paper introduces an adaptable RUF algorithm to calculate N on the go; a similar technique can be used for the IKF as well.

  14. Nuclear Forensic Inferences Using Iterative Multidimensional Statistics

    SciTech Connect

    Robel, M; Kristo, M J; Heller, M A

    2009-06-09

    Nuclear forensics involves the analysis of interdicted nuclear material for specific material characteristics (referred to as 'signatures') that imply specific geographical locations, production processes, culprit intentions, etc. Predictive signatures rely on expert knowledge of physics, chemistry, and engineering to develop inferences from these material characteristics. Comparative signatures, on the other hand, rely on comparison of the material characteristics of the interdicted sample (the 'questioned sample' in FBI parlance) with those of a set of known samples. In the ideal case, the set of known samples would be a comprehensive nuclear forensics database, a database which does not currently exist. In fact, our ability to analyze interdicted samples and produce an extensive list of precise materials characteristics far exceeds our ability to interpret the results. Therefore, as we seek to develop the extensive databases necessary for nuclear forensics, we must also develop the methods needed to produce the necessary inferences from comparison of our analytical results with these large, multidimensional sets of data. In the work reported here, we used a large, multidimensional dataset of results from quality control analyses of uranium ore concentrate (UOC, sometimes called 'yellowcake'). We have found that traditional multidimensional techniques, such as principal components analysis (PCA), are especially useful for understanding such datasets and drawing relevant conclusions. In particular, we have developed an iterative partial least squares-discriminant analysis (PLS-DA) procedure that has proven especially adept at identifying the production location of unknown UOC samples. By removing classes which fell far outside the initial decision boundary, and then rebuilding the PLS-DA model, we have consistently produced better and more definitive attributions than with a single pass classification approach. Performance of the iterative PLS-DA method

  15. Statistical Physics of Adaptation

    NASA Astrophysics Data System (ADS)

    Perunov, Nikolay; Marsland, Robert A.; England, Jeremy L.

    2016-04-01

    Whether by virtue of being prepared in a slowly relaxing, high-free energy initial condition, or because they are constantly dissipating energy absorbed from a strong external drive, many systems subject to thermal fluctuations are not expected to behave in the way they would at thermal equilibrium. Rather, the probability of finding such a system in a given microscopic arrangement may deviate strongly from the Boltzmann distribution, raising the question of whether thermodynamics still has anything to tell us about which arrangements are the most likely to be observed. In this work, we build on past results governing nonequilibrium thermodynamics and define a generalized Helmholtz free energy that exactly delineates the various factors that quantitatively contribute to the relative probabilities of different outcomes in far-from-equilibrium stochastic dynamics. By applying this expression to the analysis of two examples—namely, a particle hopping in an oscillating energy landscape and a population composed of two types of exponentially growing self-replicators—we illustrate a simple relationship between outcome-likelihood and dissipative history. In closing, we discuss the possible relevance of such a thermodynamic principle for our understanding of self-organization in complex systems, paying particular attention to a possible analogy to the way evolutionary adaptations emerge in living things.

  16. FASART: An iterative reconstruction algorithm with inter-iteration adaptive NAD filter.

    PubMed

    Zhou, Ziying; Li, Yugang; Zhang, Fa; Wan, Xiaohua

    2015-01-01

    Electron tomography (ET) is an essential imaging technique for studying structures of large biological specimens. These structures are reconstructed from a set of projections obtained at different sample orientations by tilting the specimen. However, most existing reconstruction methods are not appropriate when the data are extremely noisy and incomplete. A new iterative method is proposed: adaptive simultaneous algebraic reconstruction with an inter-iteration adaptive non-linear anisotropic diffusion (NAD) filter (FASART). We also adopted an adaptive parameter and discussed the step size for the filter in this reconstruction method. Experimental results show that FASART can suppress the noise generated in the process of iterative reconstruction while better preserving the details of structure edges.
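
    The FASART structure, a SART pass interleaved with an inter-iteration diffusion filter, can be caricatured as below. Classic Perona-Malik diffusion stands in for the paper's NAD filter, and the adaptive parameter/step selection discussed by the authors is omitted.

    ```python
    import numpy as np

    def perona_malik(u, n_steps=5, kappa=0.1, dt=0.15):
        """Perona-Malik diffusion (a stand-in for the NAD filter)."""
        g = lambda d: np.exp(-(d / kappa) ** 2)
        for _ in range(n_steps):
            dn = np.roll(u, -1, 0) - u
            ds = np.roll(u, 1, 0) - u
            de = np.roll(u, -1, 1) - u
            dw = np.roll(u, 1, 1) - u
            u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    def fasart_sketch(A, b, shape, n_iter=10, lam=0.2):
        """SART updates interleaved with diffusion filtering (FASART-style).
        A: (M, N) system matrix; b: (M,) projections; shape: image shape."""
        x = np.zeros(A.shape[1])
        row_sum = np.maximum(A.sum(axis=1), 1e-12)   # SART row normalization
        col_sum = np.maximum(A.sum(axis=0), 1e-12)   # SART column normalization
        for _ in range(n_iter):
            x = x + lam * (A.T @ ((b - A @ x) / row_sum)) / col_sum
            x = perona_malik(x.reshape(shape)).ravel()  # inter-iteration filter
        return x.reshape(shape)
    ```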

  17. Normalized iterative denoising ghost imaging based on the adaptive threshold

    NASA Astrophysics Data System (ADS)

    Li, Gaoliang; Yang, Zhaohua; Zhao, Yan; Yan, Ruitao; Liu, Xia; Liu, Baolei

    2017-02-01

    An approach for improving ghost imaging (GI) quality is proposed. In this paper, an iteration model based on normalized GI is built through theoretical analysis. An adaptive threshold value is selected in the iteration model. The initial value of the iteration model is estimated as a step to remove the correlated noise. The simulation and experimental results reveal that the proposed strategy reconstructs a better image than traditional and normalized GI, without adding complexity. The proposed normalized iterative denoising GI with adaptive threshold (NIDGI-AT) scheme does not require prior information regarding the object, and can also choose the threshold adaptively. More importantly, the signal-to-noise ratio (SNR) of the reconstructed image is greatly improved. Therefore, this methodology represents another step towards practical real-world applications.

  18. Adaptive Strategies in the Iterated Exchange Problem

    NASA Astrophysics Data System (ADS)

    Baraov, Arthur

    2011-03-01

    We argue for clear separation of the exchange problem from the exchange paradox to avoid confusion about the subject matter of these two distinct problems. The exchange problem in its current format belongs to the domain of optimal decision making—it doesn't make any sense as a game of competition. But it takes just a tiny modification in the statement of the problem to breathe new life into it and make it a practicable and meaningful game of competition. In this paper, we offer an explanation for paradoxical priors and discuss adaptive strategies for both the house and the player in the restated exchange problem.

  19. A successive overrelaxation iterative technique for an adaptive equalizer

    NASA Technical Reports Server (NTRS)

    Kosovych, O. S.

    1973-01-01

    An adaptive strategy for the equalization of pulse-amplitude-modulated signals in the presence of intersymbol interference and additive noise is reported. The successive overrelaxation iterative technique is used as the algorithm for the iterative adjustment of the equalizer coefficients during a training period for the minimization of the mean square error. With 2-cyclic and nonnegative Jacobi matrices, substantial improvement is demonstrated in the rate of convergence over the commonly used gradient techniques. The Jacobi theorems are also extended to nonpositive Jacobi matrices. Numerical examples strongly indicate that the improvements obtained for the special cases are possible for general channel characteristics. The technique is analytically demonstrated to decrease the mean square error at each iteration for a large range of parameter values for light or moderate intersymbol interference, and for small intervals for general channels. Analytically, convergence of the relaxation algorithm was proven in a noisy environment, and the coefficient variance was demonstrated to be bounded.
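
    For context, SOR applied to the MMSE normal equations Rw = p (R the channel autocorrelation matrix, p the cross-correlation vector) takes the following form; convergence requires 0 < omega < 2. The training-period adaptation analyzed in the paper is not shown.

    ```python
    import numpy as np

    def sor_solve(R, p, omega=1.2, n_iter=50):
        """Successive overrelaxation for the equalizer normal equations R w = p."""
        n = len(p)
        w = np.zeros(n)
        for _ in range(n_iter):
            for i in range(n):
                sigma = R[i, :i] @ w[:i] + R[i, i + 1:] @ w[i + 1:]
                w[i] = (1 - omega) * w[i] + omega * (p[i] - sigma) / R[i, i]
        return w
    ```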

  20. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972
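
    The NCRE idea, tuning the denoiser until the residual is statistically consistent with the assumed noise, can be caricatured as follows. scipy's gaussian_filter stands in for BM3D so the sketch stays self-contained, and the proportional update rule is illustrative rather than the paper's actual criterion.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ncre_style_denoise(noisy, sigma_noise, n_iter=20, strength=1.0):
        """Tune denoiser strength so the residual std matches the known noise
        level (NCRE-flavored loop; gaussian_filter is a stand-in for BM3D)."""
        denoised = noisy
        for _ in range(n_iter):
            denoised = gaussian_filter(noisy, sigma=strength)
            residual = noisy - denoised
            ratio = residual.std() / sigma_noise
            if abs(ratio - 1.0) < 0.02:     # residual looks like pure noise
                break
            # Under-smoothing (ratio < 1) -> increase strength, and vice versa.
            strength = float(np.clip(strength / max(ratio, 1e-3), 0.05, 20.0))
        return denoised, strength
    ```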

  1. Adaptively Tuned Iterative Low Dose CT Image Denoising.

    PubMed

    Hashemi, SayedMasoud; Paul, Narinder S; Beheshti, Soosan; Cobbold, Richard S C

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction.

  2. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and the IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.

  3. Iterative Re-Weighted Instance Transfer for Domain Adaptation

    NASA Astrophysics Data System (ADS)

    Paul, A.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    Domain adaptation techniques in transfer learning try to reduce the amount of training data required for classification by adapting a classifier trained on samples from a source domain to a new data set (target domain) where the features may have different distributions. In this paper, we propose a new technique for domain adaptation based on logistic regression. Starting with a classifier trained on training data from the source domain, we iteratively include target domain samples for which class labels have been obtained from the current state of the classifier, while at the same time removing source domain samples. In each iteration the classifier is re-trained, so that the decision boundaries are slowly transferred to the distribution of the target features. To make the transfer procedure more robust we introduce weights as a function of distance from the decision boundary and a new way of regularisation. Our methodology is evaluated using a benchmark data set consisting of aerial images and digital surface models. The experimental results show that in the majority of cases our domain adaptation approach can lead to an improvement of the classification accuracy without additional training data, but also indicate remaining problems if the difference in the feature distributions becomes too large.
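
    A simplified binary-class sketch of the loop described: train on source data, pseudo-label the most confident target samples, swap them in for source samples, and re-train with distance-based weights. The batch size, weighting function, and removal order are illustrative; the paper's regularisation scheme is not reproduced.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def iterative_instance_transfer(Xs, ys, Xt, n_rounds=5, batch=50):
        """Gradually replace source samples with confidently pseudo-labeled
        target samples (simplified sketch of iterative instance transfer)."""
        X, y = Xs.copy(), ys.copy()
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        for _ in range(n_rounds):
            margin = np.abs(clf.decision_function(Xt))
            take = np.argsort(margin)[-batch:]       # most confident target samples
            X = np.vstack([X[batch:], Xt[take]])     # drop the oldest `batch` rows
            y = np.concatenate([y[batch:], clf.predict(Xt[take])])
            d = np.abs(clf.decision_function(X))
            w = 0.1 + d / (d.max() + 1e-12)          # weight by boundary distance
            clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
        return clf
    ```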

  4. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of ability to recover low, middle, and high spatial frequencies, but has disadvantage of having a limited dynamic range to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial frequency aberrations. The present phase-diverse iterative transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phaseretrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein

  5. Adaptive restoration of river terrace vegetation through iterative experiments

    USGS Publications Warehouse

    Dela Cruz, Michelle P.; Beauchamp, Vanessa B.; Shafroth, Patrick B.; Decker, Cheryl E.; O’Neil, Aviva

    2014-01-01

    Restoration projects can involve a high degree of uncertainty and risk, which can ultimately result in failure. An adaptive restoration approach can reduce uncertainty through controlled, replicated experiments designed to test specific hypotheses and alternative management approaches. Key components of adaptive restoration include willingness of project managers to accept the risk inherent in experimentation, interest of researchers, availability of funding for experimentation and monitoring, and ability to restore sites as iterative experiments where results from early efforts can inform the design of later phases. This paper highlights an ongoing adaptive restoration project at Zion National Park (ZNP), aimed at reducing the cover of exotic annual Bromus on riparian terraces, and revegetating these areas with native plant species. Rather than using a trial-and-error approach, ZNP staff partnered with academic, government, and private-sector collaborators to conduct small-scale experiments to explicitly address uncertainties concerning biomass removal of annual bromes, herbicide application rates and timing, and effective seeding methods for native species. Adaptive restoration has succeeded at ZNP because managers accept the risk inherent in experimentation and ZNP personnel are committed to continue these projects over a several-year period. Techniques that result in exotic annual Bromus removal and restoration of native plant species at ZNP can be used as a starting point for adaptive restoration projects elsewhere in the region.

  6. Adaptive Iterated Extended Kalman Filter and Its Application to Autonomous Integrated Navigation for Indoor Robot

    PubMed Central

    Chen, Xiyuan; Li, Qinghua

    2014-01-01

    As the core of the integrated navigation system, the data fusion algorithm must be designed carefully. In order to improve the accuracy of data fusion, this work proposed an adaptive iterated extended Kalman filter (AIEKF), which uses a noise statistics estimator in the iterated extended Kalman filter (IEKF); AIEKF is then used to deal with the nonlinear problem in the inertial navigation systems (INS)/wireless sensors networks (WSNs)-integrated navigation system. A practical test was done to evaluate the performance of the proposed method. The results show that the proposed method is effective in reducing the mean root-mean-square error (RMSE) of position by about 92.53%, 67.93%, 55.97%, and 30.09% compared with the INS only, WSN, EKF, and IEKF. PMID:24693225
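
    A minimal sketch of the two AIEKF ingredients, assuming a Sage-Husa-style estimator for the measurement noise: an iterated EKF update that relinearizes about the current iterate, followed by an innovation-based refresh of R. The forgetting factor d and the estimator's exact form are assumptions; published variants differ in detail.

    ```python
    import numpy as np

    def aiekf_update(x, P, z, h, H_jac, R, d=0.05, n_iter=3):
        """Iterated EKF measurement update + adaptive measurement noise.
        h: measurement function; H_jac: its Jacobian at a state."""
        x_new = x.copy()
        for _ in range(n_iter):                  # IEKF: relinearize each pass
            H = H_jac(x_new)
            innov = z - h(x_new) - H @ (x - x_new)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x_new = x + K @ innov
        # Sage-Husa-flavored update of R from the post-fit residual
        # (one common variant; others use slightly different correction terms).
        resid = z - h(x_new)
        R = (1 - d) * R + d * (np.outer(resid, resid) + H @ P @ H.T)
        P = (np.eye(len(x)) - K @ H) @ P
        return x_new, P, R
    ```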

  7. Adaptive iterated extended Kalman filter and its application to autonomous integrated navigation for indoor robot.

    PubMed

    Xu, Yuan; Chen, Xiyuan; Li, Qinghua

    2014-01-01

    As the core of the integrated navigation system, the data fusion algorithm must be designed carefully. In order to improve the accuracy of data fusion, this work proposed an adaptive iterated extended Kalman filter (AIEKF), which uses a noise statistics estimator in the iterated extended Kalman filter (IEKF); AIEKF is then used to deal with the nonlinear problem in the inertial navigation systems (INS)/wireless sensors networks (WSNs)-integrated navigation system. A practical test was done to evaluate the performance of the proposed method. The results show that the proposed method is effective in reducing the mean root-mean-square error (RMSE) of position by about 92.53%, 67.93%, 55.97%, and 30.09% compared with the INS only, WSN, EKF, and IEKF.

  8. Investigation of statistical iterative reconstruction for dedicated breast CT

    SciTech Connect

    Makeev, Andrey; Glick, Stephen J.

    2013-08-15

    Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: hyperbolic potential and anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model attenuation properties of an uncompressed breast in the pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low noise images with high contrast microcalcifications preserved. In terms of numerical observer detectability index, the PML method with optimal parameters yielded substantially improved
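
    The hyperbolic potential named here has a standard closed form, quadratic for small arguments and roughly linear for large ones, with delta acting as the edge-preservation threshold the authors tune:

    ```python
    import numpy as np

    def hyperbolic_potential(t, delta):
        """psi(t) = delta^2 (sqrt(1 + (t/delta)^2) - 1)."""
        return delta ** 2 * (np.sqrt(1.0 + (t / delta) ** 2) - 1.0)

    def hyperbolic_potential_deriv(t, delta):
        """psi'(t) = t / sqrt(1 + (t/delta)^2), used in the PML gradient."""
        return t / np.sqrt(1.0 + (t / delta) ** 2)

    def penalty_gradient(img, beta, delta):
        """Gradient of a 4-neighbor roughness penalty (sketch)."""
        g = np.zeros_like(img, dtype=float)
        for axis, shift in [(0, 1), (0, -1), (1, 1), (1, -1)]:
            diff = img - np.roll(img, shift, axis=axis)
            g += hyperbolic_potential_deriv(diff, delta)
        return beta * g
    ```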

  9. Investigation of statistical iterative reconstruction for dedicated breast CT

    PubMed Central

    Makeev, Andrey; Glick, Stephen J.

    2013-01-01

    Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: hyperbolic potential and anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model attenuation properties of an uncompressed breast in the pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low noise images with high contrast microcalcifications preserved. In terms of numerical observer detectability index, the PML method with optimal parameters yielded substantially improved

  10. Spatially adaptive regularized iterative high-resolution image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Lim, Won Bae; Park, Min K.; Kang, Moon Gi

    2000-12-01

    High resolution images are often required in applications such as remote sensing, frame freeze in video, military and medical imaging. Digital image sensor arrays, which are used for image acquisition in many imaging systems, are not dense enough to prevent aliasing, so the acquired images will be degraded by aliasing effects. To prevent aliasing without loss of resolution, a dense detector array is required. But it may be very costly or unavailable, thus, many imaging systems are designed to allow some level of aliasing during image acquisition. The purpose of our work is to reconstruct an unaliased high resolution image from the acquired aliased image sequence. In this paper, we propose a spatially adaptive regularized iterative high resolution image reconstruction algorithm for blurred, noisy and down-sampled image sequences. The proposed approach is based on a Constrained Least Squares (CLS) high resolution reconstruction algorithm, with spatially adaptive regularization operators and parameters. These regularization terms are shown to improve the reconstructed image quality by forcing smoothness, while preserving edges in the reconstructed high resolution image. Accurate sub-pixel motion registration is the key of the success of the high resolution image reconstruction algorithm. However, sub-pixel motion registration may have some level of registration error. Therefore, a reconstruction algorithm which is robust against the registration error is required. The registration algorithm uses a gradient based sub-pixel motion estimator which provides shift information for each of the recorded frames. The proposed algorithm is based on a technique of high resolution image reconstruction, and it solves spatially adaptive regularized constrained least square minimization functionals. In this paper, we show that the reconstruction algorithm gives dramatic improvements in the resolution of the reconstructed image and is effective in handling the aliased information. The
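
    A matrix-free caricature of the spatially adaptive regularized CLS iteration for a single frame (the actual method registers and fuses several frames). The blur/decimation operators, the approximate adjoint, and the edge-driven weight map are all illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def cls_superres(y, scale, n_iter=100, beta=0.5, blur=1.0):
        """Spatially adaptive regularized CLS sketch: x <- x + beta*(H^T(y - Hx)
        - alpha * C^T C x), with alpha lowered near edges to preserve them."""
        H = lambda x: gaussian_filter(x, blur)[::scale, ::scale]  # blur + decimate
        # Pixel replication approximates the decimation adjoint (sketch only).
        HT = lambda r: gaussian_filter(np.kron(r, np.ones((scale, scale))), blur)
        C = lambda x: x - gaussian_filter(x, 1.0)   # high-pass, ~self-adjoint
        x = np.kron(y, np.ones((scale, scale)))     # initial guess: replicate
        for _ in range(n_iter):
            gx, gy = np.gradient(x)
            edges = np.hypot(gx, gy)
            alpha = 0.1 / (1.0 + edges / (edges.mean() + 1e-12))  # adaptive weight
            x = x + beta * (HT(y - H(x)) - alpha * C(C(x)))
        return x
    ```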

  11. Statistical Inference for Data Adaptive Target Parameters.

    PubMed

    Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J

    2016-05-01

    Consider one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target we partition the sample into V equal-size sub-samples, and use this partitioning to define V splits into an estimation sample (one of the V sub-samples) and a corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V sample-specific target parameters. We present an estimator (and corresponding central limit theorem) of this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference into problems that are being increasingly addressed by clever, yet ad hoc pattern finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules, are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
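
    A toy version of the sample-split construction: each parameter-generating sample picks the target (here, the feature most correlated with the outcome, an illustrative choice of algorithm), the complementary estimation subsample estimates it, and the V estimates are averaged. The naive standard error shown ignores the paper's full influence-function machinery.

    ```python
    import numpy as np
    from sklearn.model_selection import KFold

    def data_adaptive_target(X, y, V=5, seed=0):
        """Sample-split data-adaptive target parameter (sketch).
        Generating sample: choose the feature most correlated with y.
        Estimation sample: estimate that feature's correlation with y."""
        kf = KFold(n_splits=V, shuffle=True, random_state=seed)
        estimates = []
        for gen_idx, est_idx in kf.split(X):   # gen = complement, est = subsample
            corr = [abs(np.corrcoef(X[gen_idx, j], y[gen_idx])[0, 1])
                    for j in range(X.shape[1])]
            j_star = int(np.argmax(corr))      # target chosen on generating sample
            estimates.append(np.corrcoef(X[est_idx, j_star], y[est_idx])[0, 1])
        estimates = np.asarray(estimates)
        return estimates.mean(), estimates.std(ddof=1) / np.sqrt(V)
    ```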

  12. Performance Enhancement for a GPS Vector-Tracking Loop Utilizing an Adaptive Iterated Extended Kalman Filter

    PubMed Central

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-01-01

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system that has the model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and based on this, a modified iterated extended Kalman filter (IEKF) named adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing AIEKF is implemented to evaluate the performance of the proposed method. Through road tests, it is shown that the proposed method has an obvious accuracy advantage over the IEKF and Adaptive Extended Kalman filter (AEKF) in position determination. The results show that the proposed method is effective to reduce the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Compared with the EKF, the position RMSE values of AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively. PMID:25502124

  13. Performance enhancement for a GPS vector-tracking loop utilizing an adaptive iterated extended Kalman filter.

    PubMed

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-12-09

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system that has the model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and based on this, a modified iterated extended Kalman filter (IEKF) named adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing AIEKF is implemented to evaluate the performance of the proposed method. Through road tests, it is shown that the proposed method has an obvious accuracy advantage over the IEKF and Adaptive Extended Kalman filter (AEKF) in position determination. The results show that the proposed method is effective to reduce the root-mean-square error (RMSE) of position (including longitude, latitude and altitude). Compared with the EKF, the position RMSE values of AIEKF are reduced by about 45.1%, 40.9% and 54.6% in the east, north and up directions, respectively. Compared with the IEKF, the position RMSE values of AIEKF are reduced by about 25.7%, 19.3% and 35.7% in the east, north and up directions, respectively. Compared with the AEKF, the position RMSE values of AIEKF are reduced by about 21.6%, 15.5% and 30.7% in the east, north and up directions, respectively.

  14. ITER

    NASA Astrophysics Data System (ADS)

    Iotti, Robert

    2015-04-01

    ITER is an international experimental facility being built by seven Parties to demonstrate the long term potential of fusion energy. The ITER Joint Implementation Agreement (JIA) defines the structure and governance model of such cooperation. There are a number of necessary conditions for such international projects to be successful: a complete design, strong systems engineering working with an agreed set of requirements, an experienced organization with systems and plans in place to manage the project, a cost estimate backed by industry, and someone in charge. Unfortunately for ITER many of these conditions were not present. The paper discusses the priorities in the JIA which led to setting up the project with a Central Integrating Organization (IO) in Cadarache, France as the ITER HQ, and seven Domestic Agencies (DAs) located in the countries of the Parties, responsible for delivering 90%+ of the project hardware as Contributions-in-Kind and also financial contributions to the IO, as "Contributions-in-Cash." Theoretically the Director General (DG) is responsible for everything. In practice the DG does not have the power to control the work of the DAs, and there is not an effective management structure enabling the IO and the DAs to arbitrate disputes, so the project is not really managed, but is a loose collaboration of competing interests. Any DA can effectively block a decision reached by the DG. Inefficiencies in completing design while setting up a competent organization from scratch contributed to the delays and cost increases during the initial few years. So did the fact that the original estimate was not developed from industry input. Unforeseen inflation and market demand on certain commodities/materials further exacerbated the cost increases. Since then, improvements are debatable. Does this mean that the governance model of ITER is a wrong model for international scientific cooperation? I do not believe so. Had the necessary conditions for success

  15. Policy iteration optimal tracking control for chaotic systems by using an adaptive dynamic programming approach

    NASA Astrophysics Data System (ADS)

    Wei, Qing-Lai; Liu, De-Rong; Xu, Yan-Cai

    2015-03-01

    A policy iteration algorithm of adaptive dynamic programming (ADP) is developed to solve the optimal tracking control for a class of discrete-time chaotic systems. By system transformations, the optimal tracking problem is transformed into an optimal regulation one. The policy iteration algorithm for discrete-time chaotic systems is first described. Then, the convergence and admissibility properties of the developed policy iteration algorithm are presented, which show that the transformed chaotic system can be stabilized under an arbitrary iterative control law and the iterative performance index function simultaneously converges to the optimum. By implementing the policy iteration algorithm via neural networks, the developed optimal tracking control scheme for chaotic systems is verified by a simulation. Project supported by the National Natural Science Foundation of China (Grant Nos. 61034002, 61233001, 61273140, 61304086, and 61374105) and the Beijing Natural Science Foundation, China (Grant No. 4132078).

  16. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite-horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. It is proven that, depending on the initial function, the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic, and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms, and new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and to compute the iterative control law, facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
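
    A tabular sketch makes the initialization claim concrete: starting the Bellman recursion from two different nonnegative initial value functions, the iterates approach the same optimum, from below in one case and from above in the other. The small random MDP is again an invented stand-in for the paper's neural-network setting.

```python
import numpy as np

# Value iteration from two different initial value functions; both converge
# to the same fixed point of the Bellman operator.
rng = np.random.default_rng(2)
nS, nA, gamma = 6, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
cost = rng.uniform(0, 1, size=(nS, nA))

for V0 in (np.zeros(nS), np.full(nS, 10.0)):
    V = V0.copy()
    for i in range(1000):
        V_next = (cost + gamma * (P @ V)).min(axis=1)   # Bellman recursion
        if np.max(np.abs(V_next - V)) < 1e-10:
            break
        V = V_next
    print(f"V0 = {V0[0]:4.1f}: {i + 1} sweeps, V = {np.round(V, 3)}")
```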

  17. Statistical iterative reconstruction using fast optimization transfer algorithm with successively increasing factor in Digital Breast Tomosynthesis

    NASA Astrophysics Data System (ADS)

    Xu, Shiyu; Zhang, Zhenxi; Chen, Ying

    2014-03-01

    Statistical iterative reconstruction is particularly promising because it provides the flexibility of accurate physical noise modeling and geometric system description in transmission tomography systems. However, solving the objective function is computationally intensive compared to analytical reconstruction methods, because multiple iterations are needed for convergence and each iteration involves forward- and back-projections with a complex geometric system model. Optimization transfer (OT) is a general framework that converts a high-dimensional optimization into parallel 1-D updates. OT-based algorithms provide monotonic convergence and a parallel computing framework, but a slower convergence rate, especially near the global optimum. Based on an indirect estimate of the spectrum of the OT convergence-rate matrix, we propose a successively-increasing-factor-scaled optimization transfer algorithm that seeks an optimal step size for a faster rate. Compared to a representative OT-based method such as the separable parabolic surrogate with precomputed curvature (PC-SPS), our algorithm provides comparable image quality (IQ) with fewer iterations, while each iteration retains a computational cost similar to PC-SPS. An initial experiment with a simulated Digital Breast Tomosynthesis (DBT) system shows that the proposed algorithm saves 40% of the total computing time. In general, the successively increasing factor-scaled OT shows great potential as an iterative method with parallel computation and monotonic, global convergence at a fast rate.
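
    As a rough illustration of the idea, the sketch below applies a separable-surrogate (De Pierro-type) parallel update to a nonnegative least-squares problem and scales the step by a successively increasing factor. The growth schedule (5% per iteration, capped below 2) is an invented stand-in for the paper's spectrum-based rule, and the toy system matrix is not a DBT geometry.

```python
import numpy as np

# Optimization-transfer-style parallel update for min ||Ax - b||^2, x >= 0,
# with precomputed separable curvatures and an increasing step factor.
rng = np.random.default_rng(3)
A = rng.uniform(0, 1, size=(200, 50))       # nonnegative "system matrix"
x_true = rng.uniform(0, 1, size=50)
b = A @ x_true + rng.normal(scale=0.01, size=200)

d = A.T @ (A @ np.ones(A.shape[1]))         # precomputed surrogate curvatures
x = np.zeros(A.shape[1])
alpha = 1.0
for _ in range(200):
    grad = A.T @ (A @ x - b)
    x = np.clip(x - alpha * grad / d, 0.0, None)   # parallel 1-D updates
    alpha = min(alpha * 1.05, 1.9)                 # successively increasing factor
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```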

  18. Policy iteration adaptive dynamic programming algorithm for discrete-time nonlinear systems.

    PubMed

    Liu, Derong; Wei, Qinglai

    2014-03-01

    This paper is concerned with a new discrete-time policy iteration adaptive dynamic programming (ADP) method for solving the infinite horizon optimal control problem of nonlinear systems. The idea is to use an iterative ADP technique to obtain the iterative control law, which optimizes the iterative performance index function. The main contribution of this paper is to analyze the convergence and stability properties of policy iteration method for discrete-time nonlinear systems for the first time. It shows that the iterative performance index function is nonincreasingly convergent to the optimal solution of the Hamilton-Jacobi-Bellman equation. It is also proven that any of the iterative control laws can stabilize the nonlinear systems. Neural networks are used to approximate the performance index function and compute the optimal control law, respectively, for facilitating the implementation of the iterative ADP algorithm, where the convergence of the weight matrices is analyzed. Finally, the numerical results and analysis are presented to illustrate the performance of the developed method.

  19. Full dose reduction potential of statistical iterative reconstruction for head CT protocols in a predominantly pediatric population

    PubMed Central

    Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert A.

    2016-01-01

    Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 examinations acquired before and after dose reduction. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to average CTDIvol reductions of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving the CNR of low-contrast soft tissue targets, and improving the spatial resolution of high-contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425
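
    The contrast-to-noise ratio used among these metrics is straightforward to compute from two regions of interest; the sketch below uses a synthetic low-contrast insert in place of real phantom data, with made-up ROI coordinates.

```python
import numpy as np

# CNR between a target ROI and a background ROI (synthetic data).
def cnr(img, roi_target, roi_background):
    t, bkg = img[roi_target], img[roi_background]
    return abs(t.mean() - bkg.mean()) / bkg.std()

rng = np.random.default_rng(4)
img = rng.normal(40.0, 5.0, size=(256, 256))   # background "tissue", in HU
img[100:120, 100:120] += 8.0                   # low-contrast insert
print("CNR:", cnr(img, (slice(100, 120), slice(100, 120)),
                  (slice(150, 200), slice(150, 200))))
```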

  20. Some challenges with statistical inference in adaptive designs.

    PubMed

    Hung, H M James; Wang, Sue-Jane; Yang, Peiling

    2014-01-01

    Adaptive designs have generated a great deal of attention in clinical trial communities. The literature contains many statistical methods to deal with the added statistical uncertainties concerning the adaptations. Increasingly encountered in regulatory applications are adaptive statistical information designs, which allow modification of the sample size or related statistical information, and adaptive selection designs, which allow selection of doses or patient populations during the course of a clinical trial. For adaptive statistical information designs, a few statistical testing methods are mathematically equivalent, as a number of articles have stipulated, but arguably there are large differences in their practical ramifications. We pinpoint some undesirable features of these methods in this work. For adaptive selection designs, selection based on biomarker data for testing the correlated clinical endpoints may increase statistical uncertainty in terms of type I error probability, and most importantly the increased statistical uncertainty may be impossible to assess.

  1. Adaptive iteration method for star centroid extraction under highly dynamic conditions

    NASA Astrophysics Data System (ADS)

    Gao, Yushan; Qin, Shiqiao; Wang, Xingshu

    2016-10-01

    Star centroiding accuracy decreases significantly when a star sensor works under highly dynamic conditions or when star images are corrupted by severe noise, reducing the output attitude precision. Herein, an adaptive iteration method is proposed to solve this problem. First, initial star centroids are predicted by the traditional method; then, based on the initial reported star centroids and the angular velocities of the star sensor, adaptive centroiding windows are generated to cover the star area, and an iterative method optimizing the location of the centroiding window is used to obtain the final star-spot extraction results. Simulation results show that, compared with the traditional star image restoration method and the iteratively weighted center-of-gravity method, the proposed AWI algorithm maintains higher extraction accuracy as rotation velocity or noise level increases.
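
    The window-iterated center-of-gravity idea can be sketched as follows. The Gaussian star spot, window half-width, and tolerance are invented test values, and the star is assumed to stay inside the frame; the paper's method additionally sizes the window from the sensor's angular velocity.

```python
import numpy as np

# Re-center a centroiding window on its own center-of-gravity estimate
# until the estimate settles.
def iterative_centroid(img, x0, y0, half=5, n_iter=10, tol=1e-3):
    cx, cy = float(x0), float(y0)
    for _ in range(n_iter):
        x_lo, y_lo = int(round(cx)) - half, int(round(cy)) - half
        win = img[y_lo:y_lo + 2 * half + 1, x_lo:x_lo + 2 * half + 1]
        ys, xs = np.mgrid[y_lo:y_lo + win.shape[0], x_lo:x_lo + win.shape[1]]
        w = win.sum()
        cx_new, cy_new = (xs * win).sum() / w, (ys * win).sum() / w
        if np.hypot(cx_new - cx, cy_new - cy) < tol:
            return cx_new, cy_new
        cx, cy = cx_new, cy_new
    return cx, cy

yy, xx = np.mgrid[0:64, 0:64]
star = np.exp(-((xx - 30.3) ** 2 + (yy - 33.7) ** 2) / (2 * 2.0 ** 2)) + 1e-3
print(iterative_centroid(star, x0=28, y0=36))   # converges near (30.3, 33.7)
```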

  2. Non-iterative adaptive optical microscopy using wavefront sensing

    NASA Astrophysics Data System (ADS)

    Tao, X.; Azucena, O.; Kubby, J.

    2016-03-01

    This paper will review the development of wide-field and confocal microscopes with wavefront sensing and adaptive optics for correcting refractive aberrations and compensating scattering when imaging through thick tissues (Drosophila embryos and mouse brain tissue). To make wavefront measurements in biological specimens we have modified the laser guide-star techniques used in astronomy for measuring wavefront aberrations that occur as starlight passes through Earth's turbulent atmosphere. Here sodium atoms in Earth's mesosphere, at an altitude of 95 km, are excited to fluoresce at resonance by a high-power sodium laser. The fluorescent light creates a guide-star reference beacon at the top of the atmosphere that can be used for measuring wavefront aberrations that occur as the light passes through the atmosphere. We have developed a related approach for making wavefront measurements in biological specimens using cellular structures labeled with fluorescent proteins as laser guide-stars. An example is a fluorescently labeled centrosome in a fruit fly embryo or neurons and dendrites in mouse brains. Using adaptive optical microscopy we show that the Strehl ratio, the ratio of the peak intensity of an aberrated point source relative to the diffraction-limited image, can be improved by an order of magnitude when imaging deeply into live dynamic specimens, enabling near diffraction-limited deep tissue imaging.

  3. Adaptive iterative learning control for a class of non-linearly parameterised systems with input saturations

    NASA Astrophysics Data System (ADS)

    Zhang, Ruikun; Hou, Zhongsheng; Ji, Honghai; Yin, Chenkun

    2016-04-01

    In this paper, an adaptive iterative learning control scheme is proposed for a class of non-linearly parameterised systems with unknown time-varying parameters and input saturations. By incorporating a saturation function, a new iterative learning control mechanism is presented that includes a feedback term and a parameter-updating term. Through the parameter separation technique, the non-linear parameters are separated from the non-linear function, and a saturated difference updating law is then designed in the iteration domain, combining the unknown parametric term of the locally Lipschitz-continuous function and the unknown time-varying gain into one unknown time-varying function. The convergence analysis is based on a time-weighted Lyapunov-Krasovskii-like composite energy function consisting of time-weighted input, state and parameter-estimation information. The proposed learning control mechanism warrants L2[0, T] convergence of the tracking error sequence along the iteration axis. Simulation results are provided to illustrate the effectiveness of the adaptive iterative learning control scheme.
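
    A stripped-down, saturated P-type ILC loop on a toy first-order plant shows the iteration-domain error contraction that the paper establishes in a far more general setting; the plant, learning gain, and saturation level here are invented, and the actual scheme also carries a feedback term and a saturated parameter-update law.

```python
import numpy as np

def plant(u, dt=0.01):
    """Toy nonlinear first-order plant, simulated over one trial."""
    x, y = 0.0, np.zeros_like(u)
    for k in range(len(u)):
        x += dt * (-2.0 * x + np.sin(x) + u[k])
        y[k] = x
    return y

dt = 0.01
t = np.arange(0.0, 2.0, dt)
y_ref = np.sin(np.pi * t)                     # trajectory to track on [0, T]
u = np.zeros_like(t)
u_max, L = 10.0, 8.0                          # saturation level, learning gain
for it in range(30):
    e = y_ref - plant(u)
    u = np.clip(u + L * e, -u_max, u_max)     # saturated ILC update
    if (it + 1) % 10 == 0:
        print(f"trial {it + 1:2d}: max |e| = {np.max(np.abs(e)):.4f}")
```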

  4. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extracting this parallelism is the use of special-purpose architectures tuned to a given problem class; the architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions; all local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network; the use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  5. Multi-modal iterative adaptive processing (MIAP) performance in the discrimination mode for landmine detection

    NASA Astrophysics Data System (ADS)

    Yu, Yongli; Collins, Leslie M.

    2005-06-01

    Due to the nature of landmine detection, a high detection probability (Pd) is required to avoid casualties and injuries; however, high Pd is often obtained at the price of extremely high false alarm rates. It is widely accepted that no single sensor technology can achieve the required detection rate while keeping acceptably low false alarm rates for all types of mines, in all types of soil, and with all types of false targets. Remarkable advances in sensor technology for landmine detection have made multi-sensor fusion an attractive alternative to single-sensor detection techniques, and multi-sensor fusion mine detection systems using complementary sensor technologies have been proposed. Previously we proposed a multi-sensor fusion algorithm called multi-modal iterative adaptive processing (MIAP), which incorporates information from multiple sensors in an adaptive Bayesian decision framework; the identification capabilities of the multiple sensors are used to modify the statistical models employed by the mine detector. Simulation results demonstrate the improvement in performance obtained using the MIAP algorithm. In this paper, we assume a hand-held mine detection system utilizing both an electromagnetic induction (EMI) sensor and a ground-penetrating radar (GPR). Hand-held mine detection sensors are designed to have two modes of operation: search mode, which generates an initial detection at a suspected location, and discrimination mode, which confirms whether a mine is present. The MIAP algorithm is applied in the discrimination mode. The performance of the detector is evaluated on a data set collected by the government, and compared with traditional fusion results.

  6. Bias in iterative reconstruction of low-statistics PET data: benefits of a resolution model

    NASA Astrophysics Data System (ADS)

    Walker, M. D.; Asselin, M.-C.; Julyan, P. J.; Feldmann, M.; Talbot, P. S.; Jones, T.; Matthews, J. C.

    2011-02-01

    Iterative image reconstruction methods such as ordered-subset expectation maximization (OSEM) are widely used in PET. Reconstructions via OSEM are however reported to be biased for low-count data. We investigated this and considered the impact for dynamic PET. Patient listmode data were acquired in [11C]DASB and [15O]H2O scans on the HRRT brain PET scanner. These data were subsampled to create many independent, low-count replicates. The data were reconstructed and the images from low-count data were compared to the high-count originals (from the same reconstruction method). This comparison enabled low-statistics bias to be calculated for the given reconstruction, as a function of the noise-equivalent counts (NEC). Two iterative reconstruction methods were tested, one with and one without an image-based resolution model (RM). Significant bias was observed when reconstructing data of low statistical quality, for both subsampled human and simulated data. For human data, this bias was substantially reduced by including a RM. For [11C]DASB the low-statistics bias in the caudate head at 1.7 M NEC (approx. 30 s) was -5.5% and -13% with and without RM, respectively. We predicted biases in the binding potential of -4% and -10%. For quantification of cerebral blood flow for the whole-brain grey- or white-matter, using [15O]H2O and the PET autoradiographic method, a low-statistics bias of <2.5% and <4% was predicted for reconstruction with and without the RM. The use of a resolution model reduces low-statistics bias and can hence be beneficial for quantitative dynamic PET.

  7. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    PubMed

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology for the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference among subsystems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, according to this model, a digital decentralized adaptive tracker is developed based on optimal analog control and the prediction-based digital redesign technique for the sampled-data large-scale coupled system. To enhance the tracking performance of the digital tracker at the specified sampling instants, iterative learning control (ILC) is applied to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupling property but also possesses good tracking performance at both transient and steady state. In addition, evolutionary programming is applied to search for a good learning gain to speed up the learning process of the ILC.

  8. Adaptation to direction statistics modulates perceptual discrimination.

    PubMed

    Price, Nicholas S C; Prescott, Danielle L

    2012-06-22

    Perception depends on the relative activity of populations of sensory neurons with a range of tunings and response gains. Each neuron's tuning and gain are malleable and can be modified by sustained exposure to an adapting stimulus. Here, we used a combination of human psychophysical testing and models of neuronal population decoding to assess how rapid adaptation to moving stimuli might change neuronal tuning and thereby modulate direction perception. Using a novel motion stimulus in which the direction changed every 10 ms, we demonstrated that 1,500 ms of adaptation to a distribution of directions was capable of modifying human psychophysical direction discrimination performance. Consistent with previous reports, we found perceptual repulsion following adaptation to a single direction. Notably, compared with a uniform adaptation condition in which all motion directions were equiprobable, discrimination was impaired after adaptation to a stimulus comprising only directions ± 30-60° from the discrimination boundary and enhanced after adaptation to the complementary range of directions. Thus, stimulus distributions can be selectively chosen to either impair or improve discrimination performance through adaptation. A neuronal population decoding model incorporating adaptation-induced repulsive shifts in direction tuning curves can account for most aspects of our psychophysical data; however, changes in neuronal gain are sufficient to account for all aspects of our psychophysical data.

  9. Distributed adaptive fuzzy iterative learning control of coordination problems for higher order multi-agent systems

    NASA Astrophysics Data System (ADS)

    Li, Jinsha; Li, Junmin

    2016-07-01

    In this paper, an adaptive fuzzy iterative learning control scheme is proposed for coordination problems of Mth-order (M ≥ 2) distributed multi-agent systems. Every follower agent has a higher-order integrator with unknown nonlinear dynamics and input disturbance. The dynamics of the leader are a higher-order nonlinear system available only to a portion of the follower agents. With distributed initial state learning, the unified distributed protocols, combining time-domain and iteration-domain adaptive laws, guarantee that the follower agents track the leader uniformly on [0, T]. The proposed algorithm is then extended to achieve formation control. A numerical example and a multiple-robot system are provided to demonstrate the performance of the proposed approach.

  10. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis

    PubMed Central

    Xu, Shiyu; Lu, Jianping; Zhou, Otto; Chen, Ying

    2015-01-01

    Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications. PMID:26328987

  11. Comparison of image quality from filtered back projection, statistical iterative reconstruction, and model-based iterative reconstruction algorithms in abdominal computed tomography.

    PubMed

    Kuo, Yu; Lin, Yi-Yang; Lee, Rheun-Chuan; Lin, Chung-Jung; Chiou, Yi-You; Guo, Wan-Yuo

    2016-08-01

    The purpose of this study was to compare the image noise-reducing abilities of iterative model reconstruction (IMR) with those of traditional filtered back projection (FBP) and statistical iterative reconstruction (IR) in abdominal computed tomography (CT) images. This institutional review board-approved retrospective study enrolled 103 patients; informed consent was waived. Urinary bladder (n = 83) and renal cysts (n = 44) were used as targets for evaluating imaging quality. Raw data were retrospectively reconstructed using FBP, statistical IR, and IMR. Objective image noise and signal-to-noise ratio (SNR) were calculated and analyzed using one-way analysis of variance. Subjective image quality was evaluated and analyzed using Wilcoxon signed-rank test with Bonferroni correction. Objective analysis revealed a reduction in image noise for statistical IR compared with that for FBP, with no significant differences in SNR. In the urinary bladder group, IMR achieved up to 53.7% noise reduction, demonstrating a superior performance to that of statistical IR. IMR also yielded a significantly superior SNR to that of statistical IR. Similar results were obtained in the cyst group. Subjective analysis revealed reduced image noise for IMR, without inferior margin delineation or diagnostic confidence. IMR reduced noise and increased SNR to greater degrees than did FBP and statistical IR. Applying the IMR technique to abdominal CT imaging has potential for reducing the radiation dose without sacrificing imaging quality.

  12. Statistical Models of Adaptive Immune Populations

    NASA Astrophysics Data System (ADS)

    Sethna, Zachary; Callan, Curtis; Walczak, Aleksandra; Mora, Thierry

    The availability of large (10^4-10^6 sequences) datasets of B or T cell populations from a single individual allows reliable fitting of complex statistical models for naïve generation, somatic selection, and hypermutation. It is crucial to utilize a probabilistic/informational approach when modeling these populations. The inferred probability distributions allow for population characterization, calculation of probability distributions of various hidden variables (e.g. number of insertions), as well as statistical properties of the distribution itself (e.g. entropy). In particular, the differences between the T cell populations of embryonic and mature mice will be examined as a case study. Comparing these populations, as well as proposed mixed populations, provides a concrete exercise in model creation, comparison, choice, and validation.

  13. An IPMC driven micropump with adaptive on-line iterative feedback tuning

    NASA Astrophysics Data System (ADS)

    Aw, Kean C.; Yu, Wei; McDaid, Andrew J.; Xie, Sheng Q.

    2011-11-01

    This paper presents the design, fabrication and experimental characterization of a valveless micropump actuated by an ionic-polymer-metal-composite (IPMC) soft actuator. The performance of the IPMC varies over time, therefore on-line iterative feedback tuning (IFT) is used to adaptively tune the PID controller to control the bending deflection of the IPMC to ensure a constant pumping rate. The pump rate is higher at lower frequencies for a given applied voltage to the IPMC. A maximum flow rate of 130 μl/min is achieved at 0.1 Hz.

  14. Adaptive implicit-explicit and parallel element-by-element iteration schemes

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.

    1989-01-01

    Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.

  15. Adaptive statistical pattern classifiers for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Gonzalez, R. C.; Pace, M. O.; Raulston, H. S.

    1975-01-01

    A technique for the adaptive estimation of nonstationary statistics necessary for Bayesian classification is developed. The basic approach to the adaptive estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest and (2) a projection of the parameters in time or position. A divergence criterion is developed to monitor algorithm performance. Comparative results of adaptive and nonadaptive classifier tests are presented for simulated four dimensional spectral scan data.

  16. Adaptive mesh refinement and multilevel iteration for multiphase, multicomponent flow in porous media

    SciTech Connect

    Hornung, R.D.

    1996-12-31

    An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.

  17. Robust GM/WM segmentation of the spinal cord with iterative non-local statistical fusion.

    PubMed

    Asman, Andrew J; Smith, Seth A; Reich, Daniel S; Landman, Bennett A

    2013-01-01

    New magnetic resonance imaging (MRI) sequences are enabling clinical study of the in vivo spinal cord's internal structure. Yet, low contrast-to-noise ratio, artifacts, and imaging distortions have limited the applicability of tissue segmentation techniques pioneered elsewhere in the central nervous system. Recently, methods have been presented for cord/non-cord segmentation on MRI and the feasibility of gray matter/white matter tissue segmentation has been evaluated. To date, no automated algorithms have been presented. Herein, we present a non-local multi-atlas framework that robustly identifies the spinal cord and segments its internal structure with submillimetric accuracy. The proposed algorithm couples non-local fusion with a large number of slice-based atlases (as opposed to typical volumetric ones). To improve performance, the fusion process is interwoven with registration so that segmentation information guides registration and vice versa. We demonstrate statistically significant improvement over state-of-the-art benchmarks in a study of 67 patients. The primary contributions of this work are (1) innovation in non-volumetric atlas information, (2) advancement of label fusion theory to include iterative registration/segmentation, and (3) the first fully automated segmentation algorithm for spinal cord internal structure on MRI.

  1. A statistical iterative reconstruction framework for dual energy computed tomography without knowing tube spectrum

    NASA Astrophysics Data System (ADS)

    Chang, Shaojie; Mou, Xuanqin

    2016-09-01

    Dual energy computed tomography (DECT) has significant impact on material characterization, bone mineral density inspection, nondestructive evaluation, and so on. Although great progress has been made recently on reconstruction algorithms for DECT, two main problems remain: 1) for a polyenergetic X-ray source, the tube spectrum needed in reconstruction is not always available; 2) the reconstructed DECT image is very sensitive to noise, which demands a special noise suppression strategy in reconstruction algorithm design. In this paper, we propose a novel method for DECT reconstruction that estimates the tube spectrum from projection data and suppresses image noise by introducing l1-norm based regularization into statistical reconstruction for polychromatic DECT. The contribution of this work is twofold. 1) A three-parameter model is devised to represent the spectrum of a polyenergetic X-ray source, and the parameters can be estimated from projection data by solving an optimization problem. 2) With the estimated tube spectrum, we propose a computational framework of l1-norm regularization based statistical iterative reconstruction for polychromatic DECT. Simulation experiments with two phantoms were conducted to evaluate the proposed method. The results demonstrate the accuracy and robustness of the spectrum model, in that comparable reconstruction image quality can be achieved with the estimated and the ideal spectrum, and validate the attractive performance of the proposed method in terms of reconstructed image accuracy. The root mean square errors (RMSE) between the reconstructed images and the ground truth are 7.648 × 10^-4 and 2.687 × 10^-4 for the two phantoms, respectively.
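
    The spectrum-estimation step can be caricatured as a small nonlinear least-squares fit: parameterize the spectrum with three numbers, predict polychromatic projections through known thicknesses, and fit the parameters to the measured projections. The spectral form, the attenuation curve, and the calibration thicknesses below are all invented for illustration and are much simpler than the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

E = np.linspace(20, 120, 101)                  # keV grid
mu = 0.2 * (30.0 / E) ** 3 + 0.17              # crude water-like mu(E), 1/cm

def spectrum(theta):
    a, b, c = theta                            # three shape parameters
    s = np.clip((E - a) * np.exp(-E / b) + c, 0.0, None)
    return s / (s.sum() + 1e-12)

def poly_proj(theta, t):
    # -ln of polychromatic transmission through thickness t of "water".
    return -np.log(np.exp(-np.outer(t, mu)) @ spectrum(theta))

t_cal = np.linspace(0.5, 20.0, 15)             # calibration thicknesses, cm
p_meas = poly_proj((18.0, 35.0, 0.02), t_cal)  # "measured" projections ...
p_meas += np.random.default_rng(5).normal(0, 1e-3, t_cal.size)  # ... plus noise

res = minimize(lambda th: np.sum((poly_proj(th, t_cal) - p_meas) ** 2),
               x0=(10.0, 30.0, 0.05), method="Nelder-Mead")
print("estimated spectrum parameters:", np.round(res.x, 3))
```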

  2. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
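
    The difference between the two orderings is easiest to see on a plain linear system: Jacobi updates every component from the previous sweep, while Gauss-Seidel uses freshly updated components within the sweep. The diagonally dominant test matrix below is illustrative only; in the paper the system being swept is the scoring step for the (penalized) likelihood.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

def jacobi(A, b, n=50):
    x, D = np.zeros_like(b), np.diag(A)
    for _ in range(n):
        x = (b - (A @ x - D * x)) / D       # all components updated in parallel
    return x

def gauss_seidel(A, b, n=50):
    x = np.zeros_like(b)
    for _ in range(n):
        for i in range(len(b)):             # sweep reuses fresh values
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

print(jacobi(A, b), gauss_seidel(A, b), np.linalg.solve(A, b))
```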

  3. [Novel method of noise power spectrum measurement for computed tomography images with adaptive iterative reconstruction method].

    PubMed

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru

    2012-01-01

    Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to reduction of the radiation dose. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR produces an NPS drop at all spatial frequencies (similar to the NPS change produced by a dose increase), the conventional method cannot evaluate the noise property correctly, because it does not account for the volumetric nature of CT image data. The purpose of our study was to develop a new NPS measurement method that can be adapted to IRs. Our method uses thick multi-planar reconstruction (MPR) images, which are generally produced by averaging the CT volume data along the direction perpendicular to the MPR plane (e.g., the z-direction for an axial MPR plane). By using this averaging as a cutter for the 3D NPS, we can extract an adequate 2D NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR 3D, Toshiba) to investigate its validity. A water phantom with a 24-cm diameter was scanned at 120 kV and 200 mAs with a 320-row CT scanner (Aquilion ONE, Toshiba). The results showed that an adequate MPR thickness for the eNPS was more than 25.0 mm. Our new NPS measurement method using thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
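
    The thick-MPR extraction can be sketched directly: average the volume over a slab along z, then estimate the 2D NPS of the resulting thick image over repeated noise realizations. White noise stands in for real scan data below, and the pixel size and slab thickness are placeholders; the final line checks the standard normalization (the NPS integrates to the slab variance).

```python
import numpy as np

rng = np.random.default_rng(6)
n, nz, px = 64, 25, 0.5                       # ROI size, slices per slab, mm
n_real, sigma = 100, 10.0                     # realizations, slice noise in HU
nps = np.zeros((n, n))
for _ in range(n_real):
    vol = rng.normal(0.0, sigma, size=(nz, n, n))   # noise "volume"
    slab = vol.mean(axis=0)                         # thick MPR image
    slab -= slab.mean()
    nps += np.abs(np.fft.fftshift(np.fft.fft2(slab))) ** 2
nps *= px * px / (n * n * n_real)             # 2D NPS normalization
print("NPS integral:", nps.sum() / (n * px) ** 2,
      "~ slab variance:", sigma ** 2 / nz)
```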

  4. Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method

    SciTech Connect

    Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.

    2014-07-15

    Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937

  5. Iterative Adaptive Dynamic Programming for Solving Unknown Nonlinear Zero-Sum Game Based on Online Data.

    PubMed

    Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun

    2017-03-01

    H∞ control is a powerful method to solve the disturbance attenuation problems that occur in some control systems. The design of such controllers relies on solving the zero-sum game (ZSG). But in practical applications, the exact dynamics is mostly unknown. Identification of dynamics also produces errors that are detrimental to the control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG with only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. Control and disturbance policies and value are approximated by neural networks (NNs) under the critic-actor-disturber structure. The NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be used repeatedly, which is highly efficient. Simulation results demonstrate its feasibility to solve the unknown nonlinear ZSG. When compared with other algorithms, it saves a significant amount of online measurement time.

  6. Normalized full gradient of full tensor gravity gradient based on adaptive iterative Tikhonov regularization downward continuation

    NASA Astrophysics Data System (ADS)

    Zhou, Wenna

    2015-07-01

    The normalized full gradient (NFG) method depends on the downward continuation of gravity data. In this paper, I derive an improved NFG method for full tensor gravity gradient (FTG) data by using the x-, y- and z-directional analytic signals of the FTG data. I introduce an adaptive iterative Tikhonov regularization downward continuation method into the calculation to improve the stability of the NFG method. The new approach is tested on various model data with and without noise, and satisfactory results are obtained, demonstrating that the new NFG method for FTG data can improve lateral resolution and describe gravity bodies in more detail. In addition, the method is applied to real FTG field data acquired over the Vinton Salt Dome, Louisiana, USA. All results demonstrate that the new method can accurately detect the depth of the geologic sources while simultaneously providing enhanced information about them.
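
    A one-dimensional wavenumber-domain sketch shows the stabilized inversion: upward continuation by height h multiplies each Fourier coefficient by exp(-|k|h), and the iterated Tikhonov recursion inverts this operator without amplifying the noise. The profile, height, regularization weight, and noise level are illustrative, and the fixed lambda here stands in for the paper's adaptive per-iteration choice.

```python
import numpy as np

rng = np.random.default_rng(7)
n, dx, hgt, lam = 256, 1.0, 5.0, 1e-3
xg = np.arange(n) * dx
field = np.exp(-((xg - 80) / 10) ** 2) + 0.6 * np.exp(-((xg - 170) / 15) ** 2)

k = np.abs(2 * np.pi * np.fft.fftfreq(n, dx))
W = np.exp(-k * hgt)                          # upward-continuation operator
d = np.fft.fft(field) * W                     # field "observed" at height hgt
d += np.fft.fft(rng.normal(0.0, 0.002, n))    # measurement noise

X = np.zeros(n, dtype=complex)
for _ in range(20):                           # iterated Tikhonov refinement
    X = X + W * (d - W * X) / (W ** 2 + lam)
down = np.fft.ifft(X).real                    # downward-continued profile
print("max |error|:", np.max(np.abs(down - field)))
```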

  7. Adding Statistical Machine Translation Adaptation to Computer-Assisted Translation

    DTIC Science & Technology

    2013-09-01

  8. J-adaptive estimation with estimated noise statistics

    NASA Technical Reports Server (NTRS)

    Jazwinski, A. H.; Hipkins, C.

    1973-01-01

    The J-adaptive sequential estimator is extended to include simultaneous estimation of the noise statistics in a model for system dynamics. This extension completely automates the estimator, eliminating the requirement of an analyst in the loop. Simulations in satellite orbit determination demonstrate the efficacy of the sequential estimation algorithm.

  9. Adapting iterative algorithms for solving large sparse linear systems for efficient use on the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Kincaid, D. R.; Young, D. M.

    1984-01-01

    Adapting and designing mathematical software to achieve optimum performance on the CYBER 205 is discussed. Comments and observations are made in light of recent work done on modifying the ITPACK software package and on writing new software for vector supercomputers. The goal was to develop very efficient vector algorithms and software for solving large sparse linear systems using iterative methods.

  10. Adaptive spatial filtering for off-axis digital holographic microscopy based on region recognition approach with iterative thresholding

    NASA Astrophysics Data System (ADS)

    He, Xuefei; Nguyen, Chuong Vinh; Pratap, Mrinalini; Zheng, Yujie; Wang, Yi; Nisbet, David R.; Rug, Melanie; Maier, Alexander G.; Lee, Woei Ming

    2016-12-01

    Here we propose a region-recognition approach with iterative thresholding that adaptively extracts the appropriate region, or shape, of the spatial-frequency domain. To validate the method, we tested it with different samples and imaging conditions (different objectives). We demonstrate that it is useful for rapid imaging of cellular dynamics in microfluidic devices and cell cultures.

  11. Efficient pulse compression for LPI waveforms based on a nonparametric iterative adaptive approach

    NASA Astrophysics Data System (ADS)

    Li, Zhengzheng; Nepal, Ramesh; Zhang, Yan; Blake, WIlliam

    2015-05-01

    To achieve a low probability of intercept (LPI), radar waveforms are usually long and randomly generated. Because of this randomized nature, the matched-filter responses (autocorrelations) of such waveforms can have high sidelobes, which mask weaker targets near a strong target and limit the radar's ability to distinguish closely spaced targets. To improve resolution and reduce sidelobe contamination, a waveform-independent pulse compression filter is desired; furthermore, the filter needs to adapt to the received signal to achieve optimized performance. Many existing pulse compression techniques require intensive computation, making real-time implementation infeasible. This paper introduces a new adaptive pulse compression technique for LPI waveforms based on a nonparametric iterative adaptive approach (IAA). Owing to its nonparametric nature, no parameter tuning is required for different waveforms. IAA can achieve super-resolution and sidelobe suppression in both the range and Doppler domains, and it can be extended to operate directly on the matched filter (MF) output (called MF-IAA), which further reduces the computational load. The practical impact of LPI waveform operation on IAA and MF-IAA has not been carefully studied in previous work. Here, typical LPI waveforms such as random phase codes, as well as non-LPI waveforms, are tested with both single-pulse and multi-pulse IAA processing. A realistic airborne radar simulator and actual measured radar data are used for validation. The results show that, despite noticeable differences across test waveforms, the IAA algorithms and their improvements can effectively achieve range-Doppler super-resolution on realistic data.
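
    A compact IAA sketch for range-profile estimation with a random-phase pulse is given below; the waveform length, scene, and iteration count are invented, and the paper's versions add Doppler processing and the matched-filter-domain variant (MF-IAA).

```python
import numpy as np

rng = np.random.default_rng(8)
n_code, n_rng = 64, 40
code = np.exp(2j * np.pi * rng.random(n_code))     # random-phase LPI-style pulse

# Steering matrix: column r holds the code delayed to range bin r.
A = np.zeros((n_code + n_rng - 1, n_rng), dtype=complex)
for r in range(n_rng):
    A[r:r + n_code, r] = code

x_true = np.zeros(n_rng, dtype=complex)
x_true[[10, 13]] = [1.0, 0.05]                     # strong target masks weak one
y = A @ x_true + 0.01 * (rng.standard_normal(A.shape[0])
                         + 1j * rng.standard_normal(A.shape[0]))

x = A.conj().T @ y / (np.abs(A) ** 2).sum(axis=0)  # matched-filter initialization
for _ in range(10):                                # IAA refinement
    R = (A * np.abs(x) ** 2) @ A.conj().T + 1e-6 * np.eye(A.shape[0])
    Ri_y, Ri_A = np.linalg.solve(R, y), np.linalg.solve(R, A)
    x = (A.conj() * Ri_y[:, None]).sum(axis=0) / (A.conj() * Ri_A).sum(axis=0)
print("magnitudes at bins 10 and 13:", np.round(np.abs(x[[10, 13]]), 3))
```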

  12. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance

    SciTech Connect

    Li, Ke; Tang, Jie; Chen, Guang-Hong

    2014-04-15

    Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Based on the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to differ from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; lower dose led to a “redder” NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ dose^(−β) with exponent β ≈ 0.25, which violates the classical σ ∝ dose^(−0.5) power law of FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial uniformity.
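
    The quoted power law can be checked with a log-log fit of noise standard deviation against tube current; the numbers below are synthetic stand-ins for the ensemble measurements, constructed to reproduce beta = 0.5 and beta ≈ 0.25 behavior.

```python
import numpy as np

mAs = np.array([5, 12.5, 25, 50, 100, 200, 300], dtype=float)
sigma_fbp = 300.0 / np.sqrt(mAs)           # classical FBP-like behavior
sigma_mbir = 80.0 / mAs ** 0.25            # MBIR-like behavior

for name, s in (("FBP", sigma_fbp), ("MBIR", sigma_mbir)):
    beta = -np.polyfit(np.log(mAs), np.log(s), 1)[0]   # slope of log-log fit
    print(f"{name}: fitted beta = {beta:.2f}")
```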

  13. Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation

    NASA Technical Reports Server (NTRS)

    He, Yuning; Lee, Herbert K. H.; Davies, Misty D.

    2012-01-01

    Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in the loop and responsible for off-nominal flight regimes. A different possible concept of operations is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller changes during the flight in a nonlinear way, and because the pilot and the control system can co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior and to obtain an uncertainty estimate for the difference between the behaviors of the model and the system under test.

  14. Iterative development and the scope for plasticity: contrasts among trait categories in an adaptive radiation

    PubMed Central

    Foster, S A; Wund, M A; Graham, M A; Earley, R L; Gardiner, R; Kearns, T; Baker, J A

    2015-01-01

    Phenotypic plasticity can influence evolutionary change in a lineage, ranging from facilitation of population persistence in a novel environment to directing the patterns of evolutionary change. As the specific nature of plasticity can impact evolutionary consequences, it is essential to consider how plasticity is manifested if we are to understand the contribution of plasticity to phenotypic evolution. Most morphological traits are developmentally plastic, irreversible, and generally considered to be costly, at least when the resultant phenotype is mis-matched to the environment. At the other extreme, behavioral phenotypes are typically activational (modifiable on very short time scales), and not immediately costly as they are produced by constitutive neural networks. Although patterns of morphological and behavioral plasticity are often compared, patterns of plasticity of life history phenotypes are rarely considered. Here we review patterns of plasticity in these trait categories within and among populations, comprising the adaptive radiation of the threespine stickleback fish Gasterosteus aculeatus. We immediately found it necessary to consider the possibility of iterated development, the concept that behavioral and life history trajectories can be repeatedly reset on activational (usually behavior) or developmental (usually life history) time frames, offering fine tuning of the response to environmental context. Morphology in stickleback is primarily reset only in that developmental trajectories can be altered as environments change over the course of development. As anticipated, the boundaries between the trait categories are not clear and are likely to be linked by shared, underlying physiological and genetic systems. PMID:26243135

  15. Adaptive strategy for the statistical analysis of connectomes.

    PubMed

    Meskaldji, Djalel Eddine; Ottet, Marie-Christine; Cammoun, Leila; Hagmann, Patric; Meuli, Reto; Eliez, Stephan; Thiran, Jean Philippe; Morgenthaler, Stephan

    2011-01-01

    We study an adaptive statistical approach to analyze brain networks represented by brain connection matrices of interregional connectivity (connectomes). Our approach operates at a middle level between a global analysis and a single-connection analysis by considering subnetworks of the global brain network. These subnetworks represent either the inter-connectivity between two brain anatomical regions or the intra-connectivity within the same anatomical region. An appropriate summary statistic that characterizes a meaningful feature of the subnetwork is evaluated, and based on this summary statistic, a statistical test is performed to derive the corresponding p-value. Reformulating the problem in this way reduces the number of statistical tests in an orderly fashion based on our understanding of the problem. Considering the global testing problem, the p-values are corrected to control the rate of false discoveries. Finally, the procedure is followed by a local investigation within the significant subnetworks. We contrast this strategy with one based on individual measures in terms of power, and show that it has great potential, particularly in cases where the subnetworks are well defined and the summary statistics are properly chosen. As an application example, we compare structural brain connection matrices of two groups of subjects with a 22q11.2 deletion syndrome, distinguished by their IQ scores.

  17. Competition and time-dependent behavior in spatial iterated prisoner’s dilemma incorporating adaptive zero-determinant strategies

    NASA Astrophysics Data System (ADS)

    Li, Yong; Xu, Chen; Liu, Jie; Hui, Pak Ming

    2016-10-01

    We propose and study the competitiveness of a class of adaptive zero-determinant strategies (ZDSs) in a population with spatial structure against four classic strategies in iterated prisoner’s dilemma. Besides strategy updating via a probabilistic mechanism by imitating the strategy of a better performing opponent, players using the ZDSs can also adapt their strategies to take advantage of their local competing environment with another probability. The adapted ZDSs could be extortionate-like to avoid being continually cheated by defectors or to take advantage of unconditional cooperators. The adapted ZDSs could also be a compliance strategy so as to cooperate with the conditionally cooperative players. This flexibility makes adaptive ZDSs more competitive than nonadaptive ZDSs. Results show that adaptive ZDSs can either dominate over other strategies or at least coexist with them when the ZDSs are allowed to adapt more readily than to imitate other strategies. The effectiveness of the adaptive ZDSs relies on how fast they can adapt to the competing environment before they are replaced by other strategies. The adaptive ZDSs generally work well as they could adapt gradually and make use of other strategies for suppressing their enemies. When adaptation happens more readily than imitation for the ZDSs, they outperform other strategies over a wide range of cost-to-benefit ratios.
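
    To make the zero-determinant construction concrete, here is a minimal sketch following the standard Press-Dyson parameterization, with conventional payoffs R, S, T, P = 3, 0, 5, 1; the simulation loop and the initial mutual-cooperation state are illustrative assumptions, not details taken from this paper:

    ```python
    import numpy as np

    # Conventional iterated prisoner's dilemma payoffs.
    R, S, T, P = 3, 0, 5, 1

    def extortionate_zd(chi, phi):
        """Memory-one strategy (pCC, pCD, pDC, pDD) built with the
        Press-Dyson construction so that it enforces the linear payoff
        relation s_X - P = chi * (s_Y - P) against any opponent."""
        p = np.array([
            1 + phi * ((R - P) - chi * (R - P)),
            1 + phi * ((S - P) - chi * (T - P)),
            phi * ((T - P) - chi * (S - P)),
            0.0,
        ])
        assert np.all((p >= 0) & (p <= 1)), "phi too large for this chi"
        return p

    def simulate(p, q, steps=200_000, seed=0):
        """Long-run average payoffs when X plays p against Y playing q.
        States are indexed (CC, CD, DC, DD) from X's point of view; the
        game is assumed to start from mutual cooperation."""
        rng = np.random.default_rng(seed)
        payoff_x = np.array([R, S, T, P], float)
        payoff_y = np.array([R, T, S, P], float)
        swap = {0: 0, 1: 2, 2: 1, 3: 3}   # the same state seen from Y's side
        state, sx, sy = 0, 0.0, 0.0
        for _ in range(steps):
            x = rng.random() < p[state]
            y = rng.random() < q[swap[state]]
            state = (0 if x else 2) + (0 if y else 1)
            sx += payoff_x[state]
            sy += payoff_y[state]
        return sx / steps, sy / steps
    ```

    For instance, extortionate_zd(3, 1/26) returns the familiar extortion vector (11/13, 1/2, 7/26, 0), and simulating it against any fixed memory-one opponent q should give average payoffs obeying s_X - 1 ≈ 3 (s_Y - 1); a compliant ZDS corresponds to a different choice of the enforced relation.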

  18. Discrete-Time Nonzero-Sum Games for Multiplayer Using Policy-Iteration-Based Adaptive Dynamic Programming Algorithms.

    PubMed

    Zhang, Huaguang; Jiang, He; Luo, Chaomin; Xiao, Geyang

    2016-10-03

    In this paper, we investigate the nonzero-sum games for a class of discrete-time (DT) nonlinear systems by using a novel policy iteration (PI) adaptive dynamic programming (ADP) method. The main idea of our proposed PI scheme is to utilize the iterative ADP algorithm to obtain the iterative control policies, which not only ensure the system to achieve stability but also minimize the performance index function for each player. This paper integrates game theory, optimal control theory, and reinforcement learning technique to formulate and handle the DT nonzero-sum games for multiplayer. First, we design three actor-critic algorithms, an offline one and two online ones, for the PI scheme. Subsequently, neural networks are employed to implement these algorithms and the corresponding stability analysis is also provided via the Lyapunov theory. Finally, a numerical simulation example is presented to demonstrate the effectiveness of our proposed approach.
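
    The full actor-critic ADP scheme with neural networks is beyond a short sketch, but the evaluate-improve loop it builds on can be illustrated on a finite, single-player Markov decision process (a deliberate simplification of the multiplayer nonzero-sum setting; the array shapes are assumptions of this sketch):

    ```python
    import numpy as np

    def policy_iteration(Pr, Rw, gamma=0.9):
        """Tabular policy iteration: alternate exact policy evaluation
        and greedy policy improvement until the policy is stable.

        Pr: transition probabilities, shape (A, S, S)
        Rw: rewards, shape (A, S)
        """
        n_actions, n_states, _ = Pr.shape
        policy = np.zeros(n_states, dtype=int)
        while True:
            # Policy evaluation: solve (I - gamma * P_pi) V = R_pi.
            P_pi = Pr[policy, np.arange(n_states)]
            R_pi = Rw[policy, np.arange(n_states)]
            V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
            # Policy improvement: act greedily with respect to V.
            Q = Rw + gamma * Pr @ V            # shape (A, S)
            new_policy = Q.argmax(axis=0)
            if np.array_equal(new_policy, policy):
                return policy, V
            policy = new_policy
    ```

    In the paper's multiplayer setting, the evaluation step is replaced by critic networks approximating each player's value function and the improvement step by actor updates, in offline and online variants.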

  19. Speckle statistics in adaptive optics images at visible wavelengths

    NASA Astrophysics Data System (ADS)

    Stangalini, Marco; Pedichini, Fernando; Ambrosino, Filippo; Centrone, Mauro; Del Moro, Dario

    2016-07-01

    Residual speckles in adaptive optics (AO) images represent a well-known limitation to achieving the contrast needed for the detection of faint stellar companions. Speckles in AO imagery can be the result of either residual atmospheric aberrations, not corrected by the AO, or slowly evolving aberrations induced by the optical system. In this work we take advantage of new high temporal cadence (1 ms) data acquired by the SHARK forerunner experiment at the Large Binocular Telescope (LBT) to characterize the AO residual speckles at visible wavelengths. By means of an automatic identification of speckles, we study the main statistical properties of AO residuals. In addition, we study the memory of the process, and thus the clearance time of the atmospheric aberrations, by using information theory. This information is useful for increasing the realism of numerical simulations aimed at assessing instrumental performance, and for the application of post-processing techniques to AO imagery.

  20. Iterative Algorithms for Integral Equations of the First Kind With Applications to Statistics

    DTIC Science & Technology

    1992-10-01

    A block decomposition is considered in which J11 is a nonsingular matrix of Jordan blocks and N22 is a nilpotent matrix of Jordan blocks of index t > 1. The report covers iterative algorithms, statistical methodology, and an application to an inverse problem. In the first part, singular matrix equations that result from discretizing ill-posed integral equations of the first

  1. Polychromatic Iterative Statistical Material Image Reconstruction for Photon-Counting Computed Tomography

    PubMed Central

    Weidinger, Thomas; Buzug, Thorsten M.; Flohr, Thomas; Kappler, Steffen; Stierstorfer, Karl

    2016-01-01

    This work proposes a dedicated statistical algorithm to perform a direct reconstruction of material-decomposed images from data acquired with photon-counting detectors (PCDs) in computed tomography. It is based on local approximations (surrogates) of the negative logarithmic Poisson probability function. Exploiting the convexity of this function allows for parallel updates of all image pixels. Parallel updates can compensate for the rather slow convergence that is intrinsic to statistical algorithms. We investigate the accuracy of the algorithm for ideal photon-counting detectors. Complementarily, we apply the algorithm to simulation data of a realistic PCD with its spectral resolution limited by K-escape, charge sharing, and pulse-pileup. For data from both an ideal and realistic PCD, the proposed algorithm is able to correct beam-hardening artifacts and quantitatively determine the material fractions of the chosen basis materials. Via regularization we were able to achieve image noise for the realistic PCD that is up to 90% lower than in material images from a linear, image-based material decomposition using FBP images. Additionally, we find a dependence of the algorithm's convergence speed on the threshold selection within the PCD. PMID:27195003

  2. Microwave medical imaging based on sparsity and an iterative method with adaptive thresholding.

    PubMed

    Azghani, Masoumeh; Kosmas, Panagiotis; Marvasti, Farokh

    2015-02-01

    We propose a new image recovery method to improve the resolution in microwave imaging applications. Scattered field data obtained from a simplified breast model with closely located targets is used to formulate an electromagnetic inverse scattering problem, which is then solved using the Distorted Born Iterative Method (DBIM). At each iteration of the DBIM method, an underdetermined set of linear equations is solved using our proposed sparse recovery algorithm, IMATCS. Our results demonstrate the ability of the proposed method to recover small targets in cases where traditional DBIM approaches fail. Furthermore, in order to regularize the sparse recovery algorithm, we propose a novel L(2) -based approach and prove its convergence. The simulation results indicate that the L(2)-regularized method improves the robustness of the algorithm against the ill-posed conditions of the EM inverse scattering problem. Finally, we demonstrate that the regularized IMATCS-DBIM approach leads to fast, accurate and stable reconstructions of highly dense breast compositions.
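
    IMATCS is the authors' algorithm; the generic idea it extends, an iterative method with adaptive thresholding (IMAT), can be sketched as a Landweber (gradient) step toward the data followed by hard thresholding with an exponentially decaying threshold. The step size and threshold schedule below are illustrative assumptions:

    ```python
    import numpy as np

    def imat(A, y, n_iter=100, mu=0.5, t0=1.0, alpha=0.05):
        """Iterative method with adaptive thresholding for the
        underdetermined system y = A x with sparse x.

        mu should satisfy mu < 2 / ||A||_2^2 for the gradient step
        to be stable; the threshold decays as t0 * exp(-alpha * k),
        so ever-smaller components survive as iterations proceed.
        """
        x = np.zeros(A.shape[1])
        for k in range(n_iter):
            x = x + mu * A.T @ (y - A @ x)          # gradient step
            thr = t0 * np.exp(-alpha * k)           # adaptive threshold
            x = np.where(np.abs(x) >= thr, x, 0.0)  # hard thresholding
        return x
    ```

    In the paper this kind of sparse solver is run inside each DBIM iteration, with an additional L2-based regularization to cope with the ill-posedness of the inverse scattering problem.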

  3. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
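
    For reference, the modified Rician density discussed above is commonly written in the AO-speckle literature (a standard form, not quoted from this paper) as

        p_{\mathrm{MR}}(I) = \frac{1}{I_s} \exp\left( -\frac{I + I_c}{I_s} \right) I_0\left( \frac{2\sqrt{I\,I_c}}{I_s} \right),

    where I_c is the intensity of the deterministic (static) part of the field, I_s is the mean intensity of the random speckle halo, and I_0 is the zeroth-order modified Bessel function of the first kind. The paper's observation is that this form describes off-axis speckle well but fails on axis, where a gamma-based distribution is used instead.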

  4. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    NASA Astrophysics Data System (ADS)

    Rolland, Joran; Simonnet, Eric

    2015-02-01

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection-mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We investigate first the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
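
    A minimal sketch of the algorithm for a one-dimensional double-well diffusion follows; the potential, noise level, absorbing thresholds, and the use of the coordinate itself as the reaction coordinate are all illustrative assumptions. Each iteration kills the trajectory with the lowest maximum of the reaction coordinate and branches a survivor from that level, multiplying the probability estimate by (N-1)/N:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def trajectory(x0, dt=0.01, eps=0.05, n_max=5000):
        """Euler-Maruyama path of dX = -V'(X) dt + sqrt(2*eps) dW in the
        double well V(x) = (x^2 - 1)^2 / 4, run until it is absorbed
        near either metastable state (x = -1 or x = +1)."""
        xs = [x0]
        for _ in range(n_max):
            x = xs[-1] - xs[-1] * (xs[-1] ** 2 - 1) * dt \
                + np.sqrt(2 * eps * dt) * rng.normal()
            xs.append(x)
            if x < -0.9 or x > 0.9:
                break
        return np.array(xs)

    def ams(n_particles=50, max_kills=500, x0=-0.9):
        """Adaptive multilevel splitting with reaction coordinate
        xi(x) = x. The running product (1 - 1/N)^K estimates the
        probability of reaching +1 before returning to -1."""
        paths = [trajectory(x0) for _ in range(n_particles)]
        estimate = 1.0
        for _ in range(max_kills):
            levels = np.array([path.max() for path in paths])
            worst = int(levels.argmin())
            if levels[worst] > 0.9:            # every particle is reactive
                break
            donors = [i for i in range(n_particles) if i != worst]
            donor = paths[rng.choice(donors)]
            # Branch a survivor from the killed trajectory's level.
            idx = int(np.argmax(donor >= levels[worst]))
            paths[worst] = np.concatenate([donor[:idx + 1],
                                           trajectory(donor[idx])])
            estimate *= (n_particles - 1) / n_particles
        return estimate
    ```

    The paper's point is that the behaviour of exactly this kind of estimator (its variance and the duration of the surviving reactive trajectories) degrades sharply when xi is a poor stand-in for the committor.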

  5. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We investigate first the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.

  6. Iterative adaptive radiations of fossil canids show no evidence for diversity-dependent trait evolution

    NASA Astrophysics Data System (ADS)

    Slater, Graham J.

    2015-04-01

    A long-standing hypothesis in adaptive radiation theory is that ecological opportunity constrains rates of phenotypic evolution, generating a burst of morphological disparity early in clade history. Empirical support for the early burst model is rare in comparative data, however. One possible reason for this lack of support is that most phylogenetic tests have focused on extant clades, neglecting information from fossil taxa. Here, I test for the expected signature of adaptive radiation using the outstanding 40-My fossil record of North American canids. Models implying time- and diversity-dependent rates of morphological evolution are strongly rejected for two ecologically important traits, body size and grinding area of the molar teeth. Instead, Ornstein-Uhlenbeck processes implying repeated, and sometimes rapid, attraction to distinct dietary adaptive peaks receive substantial support. Diversity-dependent rates of morphological evolution seem uncommon in clades, such as canids, that exhibit a pattern of replicated adaptive radiation. Instead, these clades might best be thought of as deterministic radiations in constrained Simpsonian subzones of a major adaptive zone. Support for adaptive peak models may be diagnostic of subzonal radiations. It remains to be seen whether early burst or ecological opportunity models can explain broader adaptive radiations, such as the evolution of higher taxa.

  7. Iterative adaptive radiations of fossil canids show no evidence for diversity-dependent trait evolution.

    PubMed

    Slater, Graham J

    2015-04-21

    A long-standing hypothesis in adaptive radiation theory is that ecological opportunity constrains rates of phenotypic evolution, generating a burst of morphological disparity early in clade history. Empirical support for the early burst model is rare in comparative data, however. One possible reason for this lack of support is that most phylogenetic tests have focused on extant clades, neglecting information from fossil taxa. Here, I test for the expected signature of adaptive radiation using the outstanding 40-My fossil record of North American canids. Models implying time- and diversity-dependent rates of morphological evolution are strongly rejected for two ecologically important traits, body size and grinding area of the molar teeth. Instead, Ornstein-Uhlenbeck processes implying repeated, and sometimes rapid, attraction to distinct dietary adaptive peaks receive substantial support. Diversity-dependent rates of morphological evolution seem uncommon in clades, such as canids, that exhibit a pattern of replicated adaptive radiation. Instead, these clades might best be thought of as deterministic radiations in constrained Simpsonian subzones of a major adaptive zone. Support for adaptive peak models may be diagnostic of subzonal radiations. It remains to be seen whether early burst or ecological opportunity models can explain broader adaptive radiations, such as the evolution of higher taxa.

  8. The statistics of genetic diversity in rapidly adapting populations.

    NASA Astrophysics Data System (ADS)

    Desai, Michael

    2013-03-01

    Evolutionary adaptation is driven by the accumulation of beneficial mutations, but the sequence-level dynamics of this process are poorly understood. The traditional view is that adaptation is dominated by rare beneficial ``driver'' mutations that occur sporadically and then rapidly increase in frequency until they fix (a ``selective sweep''). Yet in microbial populations, multiple beneficial mutations are often present simultaneously. Selection cannot act on each mutation independently, but only on linked combinations. This means that the fate of any mutation depends on a complex interplay between its own fitness effect, the genomic background in which it arises, and the rest of the sequence variation in the population. The balance between these factors determines which mutations fix, the patterns of sequence diversity within populations, and the degree to which evolution in replicate populations will follow parallel (or divergent) trajectories at the sequence level. Earlier work has uncovered signatures of these effects, but the dynamics of genomic sequence evolution in adapting microbial populations have not yet been directly observed. In this talk, I will describe how full-genome whole-population sequencing can be used to provide a detailed view of these dynamics at high temporal resolution over 1000 generations in 40 adapting Saccharomyces cerevisiae populations. This data shows how patterns of sequence evolution are driven by a balance between chance interference and hitchhiking effects, which increase stochastic variation in evolutionary outcomes, and the deterministic action of selection on individual mutations, which favors parallel solutions in replicate populations.

  9. Diversity of immune strategies explained by adaptation to pathogen statistics

    PubMed Central

    Mayer, Andreas; Mora, Thierry; Rivoire, Olivier; Walczak, Aleksandra M.

    2016-01-01

    Biological organisms have evolved a wide range of immune mechanisms to defend themselves against pathogens. Beyond molecular details, these mechanisms differ in how protection is acquired, processed, and passed on to subsequent generations—differences that may be essential to long-term survival. Here, we introduce a mathematical framework to compare the long-term adaptation of populations as a function of the pathogen dynamics that they experience and of the immune strategy that they adopt. We find that the two key determinants of an optimal immune strategy are the frequency and the characteristic timescale of the pathogens. Depending on these two parameters, our framework identifies distinct modes of immunity, including adaptive, innate, bet-hedging, and CRISPR-like immunities, which recapitulate the diversity of natural immune systems. PMID:27432970

  10. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
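
    BARISTA's majorizing matrices are specific to SENSE reconstruction, but the two generic ingredients named above, momentum acceleration and adaptive momentum restarting on top of iterative soft thresholding, can be sketched on a toy l1-regularized least-squares problem; the function-value restart rule and the problem setup are illustrative assumptions:

    ```python
    import numpy as np

    def soft(x, t):
        """Soft-thresholding: the proximal operator of t * ||x||_1."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fista_restart(A, y, lam=0.1, n_iter=500):
        """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by FISTA with
        function-value adaptive restart of the momentum."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad
        x = z = np.zeros(A.shape[1])
        t, f_prev = 1.0, np.inf
        for _ in range(n_iter):
            x_new = soft(z - A.T @ (A @ z - y) / L, lam / L)
            f = 0.5 * np.sum((A @ x_new - y) ** 2) \
                + lam * np.abs(x_new).sum()
            if f > f_prev:                 # adaptive restart: kill momentum
                t, z = 1.0, x
                continue
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            z = x_new + (t - 1) / t_new * (x_new - x)
            x, t, f_prev = x_new, t_new, f
        return x
    ```

    The single Lipschitz constant L here is exactly the loose bound the abstract criticizes; replacing the scalar 1/L with a majorizing matrix adapted to the coil sensitivities is what gives BARISTA its speed advantage.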

  11. Statistical model based iterative reconstruction (MBIR) in clinical CT systems. Part II. Experimental assessment of spatial resolution performance

    SciTech Connect

    Li, Ke; Chen, Guang-Hong; Garrett, John; Ge, Yongshuai

    2014-07-15

    Purpose: Statistical model based iterative reconstruction (MBIR) methods have been introduced to clinical CT systems and are being used in some clinical diagnostic applications. The purpose of this paper is to experimentally assess the unique spatial resolution characteristics of this nonlinear reconstruction method and identify its potential impact on the detectabilities and the associated radiation dose levels for specific imaging tasks. Methods: The thoracic section of a pediatric phantom was repeatedly scanned 50 or 100 times using a 64-slice clinical CT scanner at four different dose levels (CTDIvol = 4, 8, 12, 16 mGy). Both filtered backprojection (FBP) and MBIR (Veo®, GE Healthcare, Waukesha, WI) were used for image reconstruction and results were compared with one another. Eight test objects in the phantom with contrast levels ranging from 13 to 1710 HU were used to assess spatial resolution. The axial spatial resolution was quantified with the point spread function (PSF), while the z resolution was quantified with the slice sensitivity profile. Both were measured locally on the test objects and in the image domain. The dependence of spatial resolution on contrast and dose levels was studied. The study also features a systematic investigation of the potential trade-off between spatial resolution and locally defined noise and their joint impact on the overall image quality, which was quantified by the image domain-based channelized Hotelling observer (CHO) detectability index d′. Results: (1) The axial spatial resolution of MBIR depends on both radiation dose level and image contrast level, whereas it is supposedly independent of these two factors in FBP. The axial spatial resolution of MBIR always improved with an increasing radiation dose level and/or contrast level. (2) The axial spatial resolution of MBIR became equivalent to that of FBP at some transitional contrast level, above which MBIR demonstrated spatial resolution superior to that of FBP.

  12. Specificity and timescales of cortical adaptation as inferences about natural movie statistics

    PubMed Central

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-01-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation. PMID:27699416
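
    Schematically, the temporal divisive normalization described above can be written (a generic form, not the paper's exact model) as

        r_t = \frac{L_t}{\sigma^2 + \sum_{k \ge 1} w_k L_{t-k}},

    where L_t is the present linear filter response, L_{t-k} are past responses, \sigma sets the semi-saturation level, and the weights w_k are engaged only to the degree that past and present inputs are inferred to be statistically dependent. Brief adaptation then corresponds to the denominator being inflated by recent, statistically dependent inputs.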

  13. Statistical context shapes stimulus-specific adaptation in human auditory cortex

    PubMed Central

    Henry, Molly J.; Fromboluti, Elisa Kim; McAuley, J. Devin

    2015-01-01

    Stimulus-specific adaptation is the phenomenon whereby neural response magnitude decreases with repeated stimulation. Inconsistencies between recent nonhuman animal recordings and computational modeling suggest dynamic influences on stimulus-specific adaptation. The present human electroencephalography (EEG) study investigates the potential role of statistical context in dynamically modulating stimulus-specific adaptation by examining the auditory cortex-generated N1 and P2 components. As in previous studies of stimulus-specific adaptation, listeners were presented with oddball sequences in which the presentation of a repeated tone was infrequently interrupted by rare spectral changes taking on three different magnitudes. Critically, the statistical context varied with respect to the probability of small versus large spectral changes within oddball sequences (half of the time a small change was most probable; in the other half a large change was most probable). We observed larger N1 and P2 amplitudes (i.e., release from adaptation) for all spectral changes in the small-change compared with the large-change statistical context. The increase in response magnitude also held for responses to tones presented with high probability, indicating that statistical adaptation can overrule stimulus probability per se in its influence on neural responses. Computational modeling showed that the degree of coadaptation in auditory cortex changed depending on the statistical context, which in turn affected stimulus-specific adaptation. Thus the present data demonstrate that stimulus-specific adaptation in human auditory cortex critically depends on statistical context. Finally, the present results challenge the implicit assumption of stationarity of neural response magnitudes that governs the practice of isolating established deviant-detection responses such as the mismatch negativity. PMID:25652920

  14. Observer performance for adaptive, image-based denoising and filtered back projection compared to scanner-based iterative reconstruction for lower dose CT enterography

    PubMed Central

    Fletcher, Joel G.; Hara, Amy K.; Fidler, Jeff L.; Silva, Alvin C.; Barlow, John M.; Carter, Rickey E.; Bartley, Adam; Shiung, Maria; Holmes, David R.; Weber, Nicolas K.; Bruining, David H.; Yu, Lifeng; McCollough, Cynthia H.

    2015-01-01

    Purpose The purpose of this study was to compare observer performance for detection of intestinal inflammation for low-dose CT enterography (LD-CTE) using scanner-based iterative reconstruction (IR) vs. vendor-independent, adaptive image-based noise reduction (ANLM) or filtered back projection (FBP). Methods Sixty-two LD-CTE exams were performed. LD-CTE images were reconstructed using IR, ANLM, and FBP. Three readers, blinded to image type, marked intestinal inflammation directly on patient images using a specialized workstation over three sessions, interpreting one image type/patient/session. Reference standard was created by a gastroenterologist and radiologist, who reviewed all available data including dismissal Gastroenterology records, and who marked all inflamed bowel segments on the same workstation. Reader and reference localizations were then compared. Non-inferiority was tested using Jackknife free-response ROC (JAFROC) figures of merit (FOM) for ANLM and FBP compared to IR. Patient-level analyses for the presence or absence of inflammation were also conducted. Results There were 46 inflamed bowel segments in 24/62 patients (CTDIvol interquartile range 6.9–10.1 mGy). JAFROC FOM for ANLM and FBP were 0.84 (95% CI 0.75–0.92) and 0.84 (95% CI 0.75–0.92), and were statistically non-inferior to IR (FOM 0.84; 95% CI 0.76–0.93). Patient-level pooled confidence intervals for sensitivity widely overlapped, as did specificities. Image quality was rated as better with IR and AMLM compared to FBP (p < 0.0001), with no difference in reading times (p = 0.89). Conclusions Vendor-independent adaptive image-based noise reduction and FBP provided observer performance that was non-inferior to scanner-based IR methods. Adaptive image-based noise reduction maintained or improved upon image quality ratings compared to FBP when performing CTE at lower dose levels. PMID:25725794

  15. Adaptive Iterative Dose Reduction Using Three Dimensional Processing (AIDR3D) Improves Chest CT Image Quality and Reduces Radiation Exposure

    PubMed Central

    Yamashiro, Tsuneo; Miyara, Tetsuhiro; Honda, Osamu; Kamiya, Hisashi; Murata, Kiyoshi; Ohno, Yoshiharu; Tomiyama, Noriyuki; Moriya, Hiroshi; Koyama, Mitsuhiro; Noma, Satoshi; Kamiya, Ayano; Tanaka, Yuko; Murayama, Sadayuki

    2014-01-01

    Objective To assess the advantages of Adaptive Iterative Dose Reduction using Three Dimensional Processing (AIDR3D) for image quality improvement and dose reduction for chest computed tomography (CT). Methods Institutional Review Boards approved this study and informed consent was obtained. Eighty-eight subjects underwent chest CT at five institutions using identical scanners and protocols. During a single visit, each subject was scanned using different tube currents: 240, 120, and 60 mA. Scan data were converted to images using AIDR3D and a conventional reconstruction mode (without AIDR3D). Using a 5-point scale from 1 (non-diagnostic) to 5 (excellent), three blinded observers independently evaluated image quality for three lung zones, four patterns of lung disease (nodule/mass, emphysema, bronchiolitis, and diffuse lung disease), and three mediastinal measurements (small structure visibility, streak artifacts, and shoulder artifacts). Differences in these scores were assessed by Scheffe's test. Results At each tube current, scans using AIDR3D had higher scores than those without AIDR3D, which were significant for lung zones (p<0.0001) and all mediastinal measurements (p<0.01). For lung diseases, significant improvements with AIDR3D were frequently observed at 120 and 60 mA. Scans with AIDR3D at 120 mA had significantly higher scores than those without AIDR3D at 240 mA for lung zones and mediastinal streak artifacts (p<0.0001), and slightly higher or equal scores for all other measurements. Scans with AIDR3D at 60 mA were also judged superior or equivalent to those without AIDR3D at 120 mA. Conclusion For chest CT, AIDR3D provides better image quality and can reduce radiation exposure by 50%. PMID:25153797

  16. Adaptive Perfectionism, Maladaptive Perfectionism and Statistics Anxiety in Graduate Psychology Students

    ERIC Educational Resources Information Center

    Comerchero, Victoria; Fortugno, Dominick

    2013-01-01

    The current study examined if correlations between statistics anxiety and dimensions of perfectionism (adaptive and maladaptive) were present amongst a sample of psychology graduate students (N = 96). Results demonstrated that scores on the APS-R Discrepancy scale, corresponding to maladaptive perfectionism, correlated with higher levels of…

  17. Research and Teaching: Statistics across the Curriculum Using an Iterative, Interactive Approach in an Inquiry-Based Lab Sequence

    ERIC Educational Resources Information Center

    Remsburg, Alysa J.; Harris, Michelle A.; Batzli, Janet M.

    2014-01-01

    How can science instructors prepare students for the statistics needed in authentic inquiry labs? We designed and assessed four instructional modules with the goals of increasing student confidence, appreciation, and performance in both experimental design and data analysis. Using extensions from a just-in-time teaching approach, we introduced…

  18. Drifter-based Predictions of the Spread of Surface Contamination Using Iterative Statistics: A Local Example with Global Applications

    NASA Astrophysics Data System (ADS)

    Fertitta, D. A.; Macdonald, A. M.; Rypina, I.

    2015-12-01

    In the aftermath of the 2011 Fukushima nuclear power plant accident, it became critical to determine how radionuclides, both from atmospheric deposition and direct ocean discharge, were spreading in the ocean. One successful method used drifter observations from the Global Drifter Program (GDP) to predict the timing of the spread of surface contamination. U.S. coasts are home to a number of nuclear power plants as well as other industries capable of leaking contamination into the surface ocean. Here, the spread of surface contamination from a hypothetical accident at the existing Pilgrim nuclear power plant on the coast of Massachusetts is used as an example to show how the historical drifter dataset can be used as a prediction tool. Our investigation uses a combined dataset of drifter tracks from the GDP and the NOAA Northeast Fisheries Science Center. Two scenarios are examined to estimate the spread of surface contamination: a local direct leakage scenario and a broader atmospheric deposition scenario that could result from an explosion. The local leakage scenario is used to study the spread of contamination within and beyond Cape Cod Bay, and the atmospheric deposition scenario is used to study the large-scale spread of contamination throughout the North Atlantic Basin. A multiple-iteration method of estimating probability makes best use of the available drifter data. This technique, which allows for direct observationally-based predictions, can be applied anywhere that drifter data are available to calculate estimates of the likelihood and general timing of the spread of surface contamination in the ocean.

  19. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
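
    The paper couples frame selection, PSF estimation, and regularization; the core Poisson maximum-likelihood update it iterates is of Richardson-Lucy type. Below is a minimal multi-frame sketch with known, unit-sum PSFs and FFT-based convolution; the omission of regularization and PSF estimation is a simplification of this sketch:

    ```python
    import numpy as np
    from numpy.fft import fft2, ifft2

    def multiframe_rl(frames, psfs, n_iter=50, eps=1e-12):
        """Multi-frame Richardson-Lucy deconvolution: the multiplicative
        Poisson maximum-likelihood update, averaged over frames. PSFs
        are assumed normalized to unit sum (so the usual denominator,
        the correlation of ones with the PSF, equals 1)."""
        x = np.full(frames[0].shape, float(np.mean(frames[0])))
        otfs = [fft2(np.fft.ifftshift(p)) for p in psfs]
        for _ in range(n_iter):
            update = np.zeros_like(x)
            for y, H in zip(frames, otfs):
                model = np.maximum(np.real(ifft2(fft2(x) * H)), eps)
                # Correlate the data/model ratio with the flipped PSF.
                update += np.real(ifft2(fft2(y / model) * np.conj(H)))
            x *= update / len(frames)
        return x
    ```

    The frame-selection step in the paper would simply restrict `frames` to the sharpest images (by variance) before running the joint update.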

  20. Adaptation to changes in higher-order stimulus statistics in the salamander retina.

    PubMed

    Tkačik, Gašper; Ghosh, Anandamohan; Schneidman, Elad; Segev, Ronen

    2014-01-01

    Adaptation in the retina is thought to optimize the encoding of natural light signals into sequences of spikes sent to the brain. While adaptive changes in retinal processing to the variations of the mean luminance level and second-order stimulus statistics have been documented before, no such measurements have been performed when higher-order moments of the light distribution change. We therefore measured the ganglion cell responses in the tiger salamander retina to controlled changes in the second (contrast), third (skew) and fourth (kurtosis) moments of the light intensity distribution of spatially uniform temporally independent stimuli. The skew and kurtosis of the stimuli were chosen to cover the range observed in natural scenes. We quantified adaptation in ganglion cells by studying linear-nonlinear models that capture well the retinal encoding properties across all stimuli. We found that the encoding properties of retinal ganglion cells change only marginally when higher-order statistics change, compared to the changes observed in response to the variation in contrast. By analyzing optimal coding in LN-type models, we showed that neurons can maintain a high information rate without large dynamic adaptation to changes in skew or kurtosis. This is because, for uncorrelated stimuli, spatio-temporal summation within the receptive field averages away non-gaussian aspects of the light intensity distribution.

  1. Speckle reduction in ultrasound medical images using adaptive filter based on second order statistics.

    PubMed

    Thakur, A; Anand, R S

    2007-01-01

    This article discusses an adaptive filtering technique for reducing speckle in ultrasound medical images using second-order statistics of the speckle pattern. Several region-based adaptive filter techniques have been developed for speckle noise suppression, but there are no specific criteria for selecting the region-growing size in the post-processing stage of the filter: the size appropriate for one local region may not be appropriate for other regions, and selecting the correct region size involves a trade-off between speckle reduction and edge preservation. Generally, a large region size is used to smooth speckle and a small size to preserve the edges in an image. In this paper, a smoothing procedure combines the first-order statistics of speckle for the homogeneity test with second-order statistics for the selection of filters and the desired region growth. A grey-level co-occurrence matrix (GLCM) is calculated for every region during region contraction and region growing to obtain the second-order statistics, and these GLCM features determine the appropriate filter for region smoothing. The performance of this approach is compared with the aggressive region-growing filter (ARGF) using edge preservation and speckle reduction tests. The processed image results show that the proposed method effectively reduces speckle noise and preserves edge details.
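
    The second-order statistics in question are GLCM texture features. A minimal sketch using scikit-image's graycomatrix API follows; the window tiling, homogeneity threshold, and the mean/median filter pair are illustrative stand-ins for the paper's region contraction/growing and filter-selection logic:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(patch, levels=32):
        """Second-order statistics of a patch: grey-level co-occurrence
        matrix and derived texture descriptors."""
        scale = max(float(patch.max()), 1.0)
        q = (patch.astype(float) / scale * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        return {p: graycoprops(glcm, p).mean()
                for p in ("contrast", "homogeneity", "energy")}

    def smooth_by_texture(image, win=9, hom_thr=0.8):
        """Illustrative region-wise filtering: windows whose GLCM marks
        them as homogeneous speckle are smoothed aggressively (local
        mean); windows with structure get a gentle median filter so
        that edges are preserved."""
        out = image.astype(float).copy()
        for r in range(0, image.shape[0] - win + 1, win):
            for c in range(0, image.shape[1] - win + 1, win):
                patch = image[r:r + win, c:c + win]
                if glcm_features(patch)["homogeneity"] > hom_thr:
                    out[r:r + win, c:c + win] = patch.mean()
                else:
                    out[r:r + win, c:c + win] = median_filter(patch, size=3)
        return out
    ```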

  2. Iterative adaption of the bidimensional wall of the French T2 wind tunnel around a C5 axisymmetrical model: Infinite variation of the Mach number at zero incidence and a test at increased incidence

    NASA Technical Reports Server (NTRS)

    Archambaud, J. P.; Dor, J. B.; Payry, M. J.; Lamarche, L.

    1986-01-01

    The top and bottom two-dimensional walls of the T2 wind tunnel are adapted through an iterative process, and the adaptation calculation takes the three-dimensional nature of the flow into account. This method makes it possible to start from any wall shape. The tests were performed with a C5 axisymmetric model at ambient temperature, and comparisons are made with the results of a true three-dimensional adaptation.

  3. WE-G-18A-04: 3D Dictionary Learning Based Statistical Iterative Reconstruction for Low-Dose Cone Beam CT Imaging

    SciTech Connect

    Bai, T; Yan, H; Shi, F; Jia, X; Jiang, Steve B.; Lou, Y; Xu, Q; Mou, X

    2014-06-15

    clinical application. A high z-resolution is preferred to stabilize statistical iterative reconstruction. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), the Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011), and the China Scholarship Council.

  4. Adaptive sampling rate control for networked systems based on statistical characteristics of packet disordering.

    PubMed

    Li, Jin-Na; Er, Meng-Joo; Tan, Yen-Kheng; Yu, Hai-Bin; Zeng, Peng

    2015-09-01

    This paper investigates an adaptive sampling rate control scheme for networked control systems (NCSs) subject to packet disordering. The main objectives of the proposed scheme are (a) to avoid heavy packet disordering existing in communication networks and (b) to stabilize NCSs with packet disordering, transmission delay and packet loss. First, a novel sampling rate control algorithm based on statistical characteristics of disordering entropy is proposed; secondly, an augmented closed-loop NCS that consists of a plant, a sampler and a state-feedback controller is transformed into an uncertain and stochastic system, which facilitates the controller design. Then, a sufficient condition for stochastic stability in terms of Linear Matrix Inequalities (LMIs) is given. Moreover, an adaptive tracking controller is designed such that the sampling period tracks a desired sampling period, which represents a significant contribution. Finally, experimental results are given to illustrate the effectiveness and advantages of the proposed scheme.

  5. Statistical inference for response adaptive randomization procedures with adjusted optimal allocation proportions.

    PubMed

    Zhu, Hongjian

    2016-12-12

    Seamless phase II/III clinical trials have attracted increasing attention recently. They mainly use Bayesian response adaptive randomization (RAR) designs. There has been little research into seamless clinical trials using frequentist RAR designs because of the difficulty of performing valid statistical inference following this procedure. Well-designed frequentist RAR designs can target theoretically optimal allocation proportions, and they have explicit asymptotic results. In this paper, we study the asymptotic properties of frequentist RAR designs with adjusted target allocation proportions, and investigate statistical inference for this procedure. The properties of the proposed design provide an important theoretical foundation for advanced seamless clinical trials. Our numerical studies demonstrate that the design is ethical and efficient.

  6. Weighted log-rank statistic to compare shared-path adaptive treatment strategies.

    PubMed

    Kidwell, Kelley M; Wahed, Abdus S

    2013-04-01

    Adaptive treatment strategies (ATSs) more closely mimic the reality of a physician's prescription process where the physician prescribes a medication to his/her patient, and based on that patient's response to the medication, modifies the treatment. Two-stage randomization designs, more generally, sequential multiple assignment randomization trial designs, are useful to assess ATSs where the interest is in comparing the entire sequence of treatments, including the patient's intermediate response. In this paper, we introduce the notion of shared-path and separate-path ATSs and propose a weighted log-rank statistic to compare overall survival distributions of multiple two-stage ATSs, some of which may be shared-path. Large sample properties of the statistic are derived and the type I error rate and power of the test are compared with the standard log-rank test through simulation.
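
    For reference, a weighted log-rank statistic over the distinct event times t_j takes the standard form

        Z = \frac{\sum_j w_j \left( O_j - E_j \right)}{\sqrt{\sum_j w_j^2 V_j}},

    where O_j and E_j are the observed and null-expected event counts in the group of interest at t_j, V_j is the hypergeometric variance of O_j, and w_j \ge 0 are weights; Z is referred to a standard normal distribution. The strategy-specific weights proposed in the paper, which account for shared treatment paths and the second-stage randomization, are more involved than this generic form.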

  7. Auto-adaptive statistical procedure for tracking structural health monitoring data

    NASA Astrophysics Data System (ADS)

    Smith, R. Lowell; Jannarone, Robert J.

    2004-07-01

    Whatever specific methods come to be preferred in the field of structural health/integrity monitoring, the associated raw data will eventually have to provide inputs for appropriate damage accumulation models and decision-making protocols. The status of the hardware under investigation will eventually be inferred from the evolution in time of the characteristics of this kind of functional figure of merit. Irrespective of the specific character of raw and processed data, it is desirable to develop simple, practical procedures to support damage accumulation modeling, status discrimination, and operational decision making in real time. This paper addresses these concerns and presents an auto-adaptive procedure developed to process data output from an array of many dozens of correlated sensors, a full complement of information channels associated with typical structural health monitoring applications. The algorithm learns, in statistical terms, the normal behavior patterns of the system and, against that backdrop, is configured to recognize and flag departures from expected behavior. This is accomplished using standard statistical methods, with certain proprietary enhancements employed to address issues of ill conditioning that may arise. Examples drawn from the fields of nondestructive testing, infrastructure management, and underwater acoustics illustrate how the procedure performs in practice. The demonstrations presented include the evaluation of historical electric power utilization data for a major facility, and a quantitative assessment of the performance benefits of net-centric, auto-adaptive computational procedures as a function of scale.

  8. Domain adaptation of statistical machine translation with domain-focused web crawling.

    PubMed

    Pecina, Pavel; Toral, Antonio; Papavassiliou, Vassilis; Prokopidis, Prokopis; Tamchyna, Aleš; Way, Andy; van Genabith, Josef

    In this paper, we tackle the problem of domain adaptation of statistical machine translation (SMT) by exploiting domain-specific data acquired by domain-focused crawling of text from the World Wide Web. We design and empirically evaluate a procedure for automatic acquisition of monolingual and parallel text and their exploitation for system training, tuning, and testing in a phrase-based SMT framework. We present a strategy for using such resources depending on their availability and quantity supported by results of a large-scale evaluation carried out for the domains of environment and labour legislation, two language pairs (English-French and English-Greek) and in both directions: into and from English. In general, machine translation systems trained and tuned on a general domain perform poorly on specific domains and we show that such systems can be adapted successfully by retuning model parameters using small amounts of parallel in-domain data, and may be further improved by using additional monolingual and parallel training data for adaptation of language and translation models. The average observed improvement in BLEU achieved is substantial at 15.30 points absolute.

  9. Adaptive vibration suppression system: an iterative control law for a piezoelectric actuator shunted by a negative capacitor.

    PubMed

    Kodejska, Milos; Mokry, Pavel; Linhart, Vaclav; Vaclavik, Jan; Sluka, Tomas

    2012-12-01

    An adaptive system for the suppression of vibration transmission using a single piezoelectric actuator shunted by a negative capacitance circuit is presented. It is known that by using a negative-capacitance shunt, the spring constant of a piezoelectric actuator can be controlled to extreme values of zero or infinity. Because the value of spring constant controls a force transmitted through an elastic element, it is possible to achieve a reduction of transmissibility of vibrations through the use of a piezoelectric actuator by reducing its effective spring constant. Narrow frequency range and broad frequency range vibration isolation systems are analyzed, modeled, and experimentally investigated. The problem of high sensitivity of the vibration control system to varying operational conditions is resolved by applying an adaptive control to the circuit parameters of the negative capacitor. A control law that is based on the estimation of the value of the effective spring constant of a shunted piezoelectric actuator is presented. An adaptive system which achieves a self-adjustment of the negative capacitor parameters is presented. It is shown that such an arrangement allows the design of a simple electronic system which offers a great vibration isolation efficiency under variable vibration conditions.

  10. Performances of the fractal iterative method with an internal model control law on the ESO end-to-end ELT adaptive optics simulator

    NASA Astrophysics Data System (ADS)

    Béchet, C.; Le Louarn, M.; Tallon, M.; Thiébaut, É.

    2008-07-01

    Adaptive Optics systems under study for the Extremely Large Telescopes have given rise to a new generation of algorithms for both wavefront reconstruction and the control law. In the first place, the large number of controlled actuators imposes the use of computationally efficient methods. Secondly, the performance criterion is no longer based solely on nulling residual measurements: priors on the turbulence must be inserted. In order to satisfy these two requirements, we suggested associating the Fractal Iterative Method for the estimation step with an Internal Model Control. This combination has now been tested on an end-to-end adaptive optics numerical simulator at ESO, named Octopus. Results are presented here and the performance of our method is compared to the classical Matrix-Vector Multiplication combined with a pure integrator. In the light of a theoretical analysis of our control algorithm, we investigate the influence of several error contributions on our simulations. The reconstruction error varies with the signal-to-noise ratio but is limited by the use of priors. The ratio between the system loop delay and the wavefront coherence time also affects the reachable Strehl ratio. Whereas no instabilities are observed, correction quality is clearly affected at low flux, when subaperture extinctions are frequent. Last but not least, the simulations have demonstrated the robustness of the method with respect to sensor modeling errors and actuator misalignments.

  11. An Adaptive Association Test for Multiple Phenotypes with GWAS Summary Statistics.

    PubMed

    Kim, Junghi; Bai, Yun; Pan, Wei

    2015-12-01

    We study the problem of testing for single marker-multiple phenotype associations based on genome-wide association study (GWAS) summary statistics without access to individual-level genotype and phenotype data. For most published GWASs, because obtaining summary data is substantially easier than accessing individual-level phenotype and genotype data, while often multiple correlated traits have been collected, the problem studied here has become increasingly important. We propose a powerful adaptive test and compare its performance with some existing tests. We illustrate its applications to analyses of a meta-analyzed GWAS dataset with three blood lipid traits and another with sex-stratified anthropometric traits, and further demonstrate its potential power gain over some existing methods through realistic simulation studies. We start from the situation with only one set of (possibly meta-analyzed) genome-wide summary statistics, then extend the method to meta-analysis of multiple sets of genome-wide summary statistics, each from one GWAS. We expect the proposed test to be useful in practice as more powerful than or complementary to existing methods.

  12. Multiple solution of systems of linear algebraic equations by an iterative method with the adaptive recalculation of the preconditioner

    NASA Astrophysics Data System (ADS)

    Akhunov, R. R.; Gazizov, T. R.; Kuksenko, S. P.

    2016-08-01

    The mean time needed to solve a series of systems of linear algebraic equations (SLAEs) as a function of the number of SLAEs is investigated. It is proved that this function has an extremum point. An algorithm is developed for adaptively determining when the preconditioner matrix should be recalculated while a series of SLAEs is solved. A numerical experiment is carried out in which a series of SLAEs is solved repeatedly with the proposed algorithm to compute 100 capacitance matrices for two different structures: a microstrip with varying thickness and a modal filter with a varying gap between the conductors. The resulting speedups turned out to be close to the optimal ones.
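
    A minimal sketch of the adaptive recalculation idea follows, using SciPy's incomplete-LU preconditioner with GMRES on a slowly varying sequence of sparse systems. The trigger used here, the iteration count exceeding a multiple of the count observed right after the last factorization, is an illustrative heuristic, not the paper's exact criterion:

    ```python
    import scipy.sparse.linalg as spla

    def solve_series(matrices, rhs_list, degrade_factor=3.0):
        """Solve a series of slowly varying sparse SLAEs A_k x = b_k
        with ILU-preconditioned GMRES, recomputing the factorization
        only when convergence degrades."""
        solutions, M, base_iters = [], None, None
        for A, b in zip(matrices, rhs_list):
            A = A.tocsc()
            if M is None:
                ilu = spla.spilu(A)                        # (re)factorize
                M = spla.LinearOperator(A.shape, ilu.solve)
                base_iters = None
            n_iters = 0
            def count(_):                    # one call per GMRES iteration
                nonlocal n_iters
                n_iters += 1
            x, info = spla.gmres(A, b, M=M, callback=count,
                                 callback_type="pr_norm")
            solutions.append(x)
            if base_iters is None:
                base_iters = max(n_iters, 1)
            elif n_iters > degrade_factor * base_iters:
                M = None         # stale preconditioner: rebuild next solve
        return solutions
    ```

    Refactorizing too often wastes the factorization cost, while never refactorizing lets the iteration counts grow; the extremum point proved in the paper is exactly this trade-off.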

  13. Statistics

    Cancer.gov

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  14. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.

    PubMed

    Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G

    2013-05-20

    A Bayesian statistical model and an estimation methodology based on forward-projection adaptive Markov chain Monte Carlo are developed in order to calibrate a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a very common sexually transmitted infection with more than 100 types currently known, and the two types studied in this paper, types 6 and 11, cause about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to jointly estimate the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward-projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters is a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.
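
    The paper's sampler targets ODE parameters with a forward-projection strategy; the underlying adaptive ingredient can be illustrated by a Haario-style adaptive Metropolis sketch for a generic log-posterior. The initial covariance, adaptation start, and jitter term are illustrative assumptions:

    ```python
    import numpy as np

    def adaptive_metropolis(log_post, x0, n_samples=20000, adapt_start=500):
        """Random-walk Metropolis whose Gaussian proposal covariance is
        adapted to the empirical covariance of the chain so far, with
        the usual 2.38^2/d scaling and a small jitter for stability."""
        rng = np.random.default_rng(0)
        d = len(x0)
        chain = np.empty((n_samples, d))
        x, lp = np.asarray(x0, float), log_post(x0)
        cov = np.eye(d) * 0.1
        for i in range(n_samples):
            if i > adapt_start:
                cov = (np.cov(chain[:i].T) * 2.38 ** 2 / d
                       + 1e-8 * np.eye(d))
            prop = rng.multivariate_normal(x, cov)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                x, lp = prop, lp_prop
            chain[i] = x
        return chain
    ```

    In the paper's setting, each call to log_post would require forward-projecting the ODE system, which is why efficient, adaptive proposals matter.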

  15. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, which evaluates the similarity between a reference (noise) signal and the original signal and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined, and a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to assess the sensitivity of symptom parameters (SPs) for condition diagnosis, so that SPs with high sensitivity can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  16. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to extract weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, which evaluates the similarity between a reference (noise) signal and the original signal and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined, and a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to assess the sensitivity of symptom parameters (SPs) for condition diagnosis, so that SPs with high sensitivity can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.

  17. Vibration-based structural health monitoring using adaptive statistical method under varying environmental condition

    NASA Astrophysics Data System (ADS)

    Jin, Seung-Seop; Jung, Hyung-Jo

    2014-03-01

    It is well known that the dynamic properties of a structure, such as its natural frequencies, depend not only on damage but also on environmental conditions (e.g., temperature). The variation in the dynamic characteristics of a structure due to environmental conditions may mask damage. Without taking changes in environmental conditions into account, false-positive or false-negative damage diagnoses may occur, making structural health monitoring unreliable. To address this problem, many researchers have constructed regression models that relate structural responses to environmental factors. The key to the success of this approach is formulating input and output variables of the regression model that account for the environmental variations. However, it is quite challenging to determine in advance the proper environmental variables and measurement locations that fully represent the relationship between the structural responses and the environmental variations. One alternative (i.e., novelty detection) is to remove the variations caused by environmental factors from the structural responses by using multivariate statistical analysis (e.g., principal component analysis (PCA), factor analysis, etc.). The success of this method depends strongly on the accuracy of the description of the normal condition. Generally, there is no prior information on the normal condition during data acquisition, so the normal condition is determined subjectively, with human intervention. The proposed method is a novel adaptive multivariate statistical analysis for structural damage detection under environmental change. One advantage of this method is the ability of generative learning to capture the intrinsic characteristics of the normal condition. The proposed method is tested on numerically simulated data for a range of measurement noise levels under environmental variation. A comparative
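
    The novelty-detection alternative described above, removing environmentally induced variation by projecting onto the dominant principal components and monitoring the residual, can be illustrated with a small sketch. Training on baseline natural-frequency features and flagging outliers via the residual (Q) statistic is a generic version of the approach, not the paper's adaptive algorithm.

```python
import numpy as np

def fit_pca_monitor(X_train, n_keep=1, q=0.99):
    """Fit PCA on baseline features; return a residual-based novelty scorer."""
    mu = X_train.mean(axis=0)
    Xc = X_train - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_keep]                            # environmental subspace
    resid = Xc - (Xc @ P.T) @ P
    limit = np.quantile(np.sum(resid**2, axis=1), q)   # empirical control limit
    def score(x):
        r = (x - mu) - ((x - mu) @ P.T) @ P
        q_stat = float(np.sum(r**2))
        return q_stat, q_stat > limit          # (Q statistic, novelty flag)
    return score

rng = np.random.default_rng(0)
temp = rng.uniform(-10.0, 30.0, 300)           # unmeasured temperature
freqs = np.column_stack([5.0 - 0.010 * temp,   # three natural frequencies
                         12.0 - 0.020 * temp,
                         21.0 - 0.015 * temp]) + rng.normal(0, 0.005, (300, 3))
score = fit_pca_monitor(freqs)
print(score(np.array([4.952, 11.898, 20.926])))  # cold day: not novel
print(score(np.array([4.800, 11.898, 20.926])))  # first-mode drop: novel
```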

  18. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels

    PubMed Central

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J.

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale, neurons race against one another, with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems, as demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378

  20. FLAGS: A Flexible and Adaptive Association Test for Gene Sets Using Summary Statistics

    PubMed Central

    Huang, Jianfei; Wang, Kai; Wei, Peng; Liu, Xiangtao; Liu, Xiaoming; Tan, Kai; Boerwinkle, Eric; Potash, James B.; Han, Shizhong

    2016-01-01

    Genome-wide association studies (GWAS) have been widely used for identifying common variants associated with complex diseases. Despite remarkable success in uncovering many risk variants and providing novel insights into disease biology, genetic variants identified to date fail to explain the vast majority of the heritability for most complex diseases. One explanation is that there are still a large number of common variants that remain to be discovered, but their effect sizes are generally too small to be detected individually. Accordingly, gene set analysis of GWAS, which examines a group of functionally related genes, has been proposed as a complementary approach to single-marker analysis. Here, we propose a flexible and adaptive test for gene sets (FLAGS), using summary statistics. Extensive simulations showed that this method has an appropriate type I error rate and outperforms existing methods with increased power. As a proof of principle, through real data analyses of Crohn’s disease GWAS data and bipolar disorder GWAS meta-analysis results, we demonstrated the superior performance of FLAGS over several state-of-the-art association tests for gene sets. Our method allows for the more powerful application of gene set analysis to complex diseases, which will have broad use given that GWAS summary results are increasingly publicly available. PMID:26773050
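
    Gene set tests on summary statistics typically combine per-gene association scores within a set and calibrate the combined score empirically. The sketch below is a generic simulation of that workflow (a sum-of-squared-z set statistic calibrated against random gene sets of equal size), offered only to make the idea concrete; it is not the FLAGS statistic, and it ignores linkage disequilibrium between markers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes = 5000
z = rng.normal(size=n_genes)           # placeholder per-gene GWAS z-scores
z[:20] += 1.0                          # genes 0..19 carry a weak signal
gene_set = np.arange(20)               # the functionally related set to test

def set_stat(zvals, idx):
    return np.sum(zvals[idx] ** 2)     # simple quadratic set statistic

obs = set_stat(z, gene_set)
# Calibrate empirically against random gene sets of the same size.
null = np.array([
    set_stat(z, rng.choice(n_genes, gene_set.size, replace=False))
    for _ in range(10_000)
])
p_value = (1 + np.sum(null >= obs)) / (1 + null.size)
print(f"set statistic {obs:.1f}, empirical p {p_value:.4f}")
```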

  1. Adaptation of the human visual system to the statistics of letters and line configurations.

    PubMed

    Chang, Claire H C; Pallier, Christophe; Wu, Denise H; Nakamura, Kimihiro; Jobert, Antoinette; Kuo, W-J; Dehaene, Stanislas

    2015-10-15

    By adulthood, literate humans have been exposed to millions of visual scenes and pages of text. Does the human visual system become attuned to the statistics of its inputs? Using functional magnetic resonance imaging, we examined whether the brain responses to line configurations are proportional to their natural-scene frequency. To further distinguish prior cortical competence from adaptation induced by learning to read, we manipulated whether the selected configurations formed letters and whether they were presented on the horizontal meridian, the familiar location where words usually appear, or on the vertical meridian. While no natural-scene frequency effect was observed, we observed letter-status and letter frequency effects on bilateral occipital activation, mainly for horizontal stimuli. The findings suggest a reorganization of the visual pathway resulting from reading acquisition under genetic and connectional constraints. Even early retinotopic areas showed a stronger response to letters than to rotated versions of the same shapes, suggesting an early visual tuning to large visual features such as letters.

  2. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
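
    The case-weighting idea summarized above, downweighting observations by their Mahalanobis distance and re-estimating the mean and covariance until convergence, has a compact generic form. The sketch below uses Huber-type weights; the weight function, tuning constant and normalization are illustrative choices, not those of the cited paper.

```python
import numpy as np

def robust_mean_cov(X, c=2.5, n_iter=50, tol=1e-8):
    """IRLS estimate of mean and covariance with Huber-type case weights."""
    mu, S = X.mean(axis=0), np.cov(X.T)
    for _ in range(n_iter):
        diff = X - mu
        d = np.sqrt(np.einsum("ij,jk,ik->i", diff, np.linalg.inv(S), diff))
        w = np.where(d <= c, 1.0, c / d)            # downweight distant cases
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        Xc = X - mu_new
        S = (w[:, None] * Xc).T @ Xc / w.sum()      # illustrative normalization
        if np.linalg.norm(mu_new - mu) < tol:
            return mu_new, S
        mu = mu_new
    return mu, S

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:10] += 8.0                                       # gross outliers
mu, S = robust_mean_cov(X)
print(np.round(mu, 2))                              # near zero despite outliers
```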

  3. Statistical model based iterative reconstruction in clinical CT systems. Part III. Task-based kV/mAs optimization for radiation dose reduction

    PubMed Central

    Li, Ke; Gomez-Cardona, Daniel; Hsieh, Jiang; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong

    2015-01-01

    Purpose: For a given imaging task and patient size, the optimal selection of x-ray tube potential (kV) and tube current-rotation time product (mAs) is pivotal in achieving the maximal radiation dose reduction while maintaining the needed diagnostic performance. Although contrast-to-noise ratio (CNR)-based strategies can be used to optimize kV/mAs for computed tomography (CT) imaging systems employing the linear filtered backprojection (FBP) reconstruction method, a more general framework needs to be developed for systems using the nonlinear statistical model-based iterative reconstruction (MBIR) method. The purpose of this paper is to present such a unified framework for the optimization of kV/mAs selection for both FBP- and MBIR-based CT systems. Methods: The optimal selection of kV and mAs was formulated as a constrained optimization problem: minimize the objective function Dose(kV, mAs) under the constraint that the achievable detectability index d′(kV, mAs) is not lower than the value prescribed for a given imaging task. Since it is difficult to analytically model the dependence of d′ on kV and mAs for the highly nonlinear MBIR method, this constrained optimization problem is solved with comprehensive measurements of Dose(kV, mAs) and d′(kV, mAs) at a variety of kV–mAs combinations, after which the overlay of the dose contours and d′ contours is used to graphically determine the optimal kV–mAs combination that achieves the lowest dose while maintaining the needed detectability for the given imaging task. As an example, d′ for a 17 mm hypoattenuating liver lesion detection task was experimentally measured with an anthropomorphic abdominal phantom at four tube potentials (80, 100, 120, and 140 kV) and fifteen mA levels (25 and 50–700 with a sampling interval of 50 mA) at a fixed rotation time of 0.5 s, which corresponded to a dose (CTDIvol) range of [0.6, 70] mGy. Using the proposed method, the optimal kV and mA that minimized dose for the
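
    Once Dose(kV, mAs) and d′(kV, mAs) have been measured on a grid, the constrained optimization reduces to picking, among all settings meeting the detectability prescription, the one with minimal dose. A minimal sketch, with made-up dose and d′ surfaces standing in for the measured contours:

```python
import numpy as np

kv = np.array([80, 100, 120, 140])
ma = np.arange(50, 701, 50)
KV, MA = np.meshgrid(kv, ma, indexing="ij")

# Placeholder surfaces; in practice both come from phantom measurements.
dose = (KV / 100.0) ** 2.6 * MA / 100.0          # CTDIvol-like surface, mGy
dprime = 2.0 * np.sqrt(MA / 100.0) * (120.0 / KV) ** 0.3

d_required = 3.0
feasible = dprime >= d_required                  # detectability constraint
dose_feasible = np.where(feasible, dose, np.inf)
i, j = np.unravel_index(np.argmin(dose_feasible), dose.shape)
print(f"optimal setting: {KV[i, j]} kV, {MA[i, j]} mA, "
      f"dose {dose[i, j]:.1f} mGy, d' {dprime[i, j]:.2f}")
```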

  4. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU algorithms the filter coefficients are only partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
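
    For reference, the affine projection algorithm underlying these variants updates the weight vector using the last K input regressors at once. The sketch below is the standard fixed-step APA in a system identification setup; the variable step-size, partial-update and regressor-selection mechanisms of the paper are omitted.

```python
import numpy as np

def apa(x, d, L=16, K=4, mu=0.5, eps=1e-6):
    """Standard affine projection adaptive filter (fixed step size)."""
    w = np.zeros(L)
    e_hist = np.zeros(len(x))
    for n in range(L + K, len(x)):
        # Stack the last K length-L regressors as rows of X (K x L).
        X = np.array([x[n - k - L + 1:n - k + 1][::-1] for k in range(K)])
        dvec = d[n - K + 1:n + 1][::-1]
        e = dvec - X @ w
        w += mu * X.T @ np.linalg.solve(X @ X.T + eps * np.eye(K), e)
        e_hist[n] = e[0]
    return w, e_hist

rng = np.random.default_rng(0)
h = rng.normal(size=16)                      # unknown channel to identify
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.normal(size=len(x))
w, e = apa(x, d)
print("misalignment:", np.linalg.norm(w - h) / np.linalg.norm(h))
```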

  5. Adaptive dose finding based on t-statistic for dose-response trials.

    PubMed

    Ivanova, Anastasia; Bolognese, James A; Perevozskaya, Inna

    2008-05-10

    The goals of phase II dose-response studies are to prove that the treatment is effective and to choose the dose for further development. Randomized designs with equal allocation to either a high dose and placebo or to each of several doses and placebo are typically used. However, in trials where response is observed relatively quickly, adaptive designs might offer an advantage over equal allocation. We propose an adaptive design for dose-response trials that concentrates the allocation of subjects in one or more areas of interest, for example, near a minimum clinically important effect level, or near some maximal effect level, and also allows for the possibility to stop the trial early if needed. The proposed adaptive design yields higher power to detect a dose-response relationship, higher power in comparison with placebo, and selects the correct dose more frequently compared with a corresponding randomized design with equal allocation to doses.

  6. Cross-cultural adaptation of research instruments: language, setting, time and statistical considerations

    PubMed Central

    2010-01-01

    Background: Research questionnaires are not always translated appropriately before they are used in new temporal, cultural or linguistic settings. The results based on such instruments may therefore not accurately reflect what they are supposed to measure. This paper aims to illustrate the process and required steps involved in the cross-cultural adaptation of a research instrument, using the adaptation of an attitudinal instrument as an example. Methods: A questionnaire was needed for the implementation of a study in Norway in 2007. As no appropriate instrument was available in Norwegian, an Australian-English instrument was cross-culturally adapted. Results: The adaptation process included investigation of conceptual and item equivalence. Two forward and two back-translations were synthesized and compared by an expert committee. Thereafter the instrument was pretested and adjusted accordingly. The final questionnaire was administered to opioid maintenance treatment staff (n=140) and harm reduction staff (n=180). The overall response rate was 84%. The original instrument failed confirmatory analysis; instead, a new two-factor scale was identified and found valid in the new setting. Conclusions: The failure of the original scale highlights the importance of adapting instruments to current research settings. It also emphasizes the importance of ensuring that concepts within an instrument are equivalent between the original and target language, time and context. If the described stages in the cross-cultural adaptation process had been omitted, the findings would have been misleading, even if presented with apparent precision. Thus, it is important to consider possible barriers when making direct comparisons between different nations, cultures and times. PMID:20144247

  7. Statistical evaluation of the performance of an optimized adaptive optics arm for retinal imaging flood system

    NASA Astrophysics Data System (ADS)

    Magaña Chávez, J. L.; Medina-Márquez, J.; Valdivieso-González, L. G.; Balderas-Mata, S. E.

    2016-09-01

    In the last decade, adaptive optics has been used to compensate the aberrations of the eye in order to acquire high-resolution retinal images. The use of high-speed deformable mirrors (DMs) to accomplish this compensation in real time is of great importance. However, DMs are sometimes overused, compensating aberrations that are inherent in the optical system itself. In this work, the performance of an adaptive optics arm together with its imaging system is evaluated statistically, in order to characterize in advance the aberrations inherent in the optical train so that they can be compensated prior to the use of a DM.

  8. Investigation on improved infrared image detail enhancement algorithm based on adaptive histogram statistical stretching and gradient filtering

    NASA Astrophysics Data System (ADS)

    Zeng, Bangze; Zhu, Youpan; Li, Zemin; Hu, Dechao; Luo, Lin; Zhao, Deli; Huang, Juan

    2014-11-01

    Because infrared images have low contrast, strong noise, and poor visual quality, targets are very difficult to observe and identify. This paper presents an improved infrared image detail enhancement algorithm based on adaptive histogram statistical stretching and gradient filtering (AHSS-GF). Based on the fact that the human eye is very sensitive to edges and lines, the details and textures are extracted using gradient filtering. A new histogram is obtained by summing the original histogram over a fixed window, and histogram statistical stretching is then performed with the minimum value as the cut-off point. After appropriate weights are assigned to the details and the background, the detail-enhanced result is obtained. The results indicate that image contrast can be improved and that details and textures can be enhanced effectively.
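
    A rough rendering of that pipeline, a gradient-based detail layer plus a windowed-sum histogram stretch and a weighted recombination, is sketched below; the window size, cut-off rule and weights are arbitrary stand-ins for the adaptive choices in the paper.

```python
import numpy as np
from scipy import ndimage

def enhance_ir(img, win=9, w_detail=1.5, w_base=1.0):
    """Detail enhancement: histogram statistical stretch + gradient details."""
    img = img.astype(float)
    # Detail/texture layer from gradient filtering (eye favors edges/lines).
    gx, gy = np.gradient(ndimage.gaussian_filter(img, 1.0))
    detail = np.hypot(gx, gy)
    # Windowed-sum ("smoothed") histogram of the raw image.
    hist, edges = np.histogram(img, bins=256)
    hist = np.convolve(hist, np.ones(win), mode="same")
    # Use the extreme populated smoothed-histogram bins as cut-off points.
    occupied = np.nonzero(hist > hist.max() * 0.01)[0]
    lo, hi = edges[occupied[0]], edges[occupied[-1] + 1]
    base = np.clip((img - lo) / (hi - lo), 0, 1)        # stretched background
    out = w_base * base + w_detail * detail / (detail.max() + 1e-9)
    return np.clip(out / out.max(), 0, 1)

ir = np.random.default_rng(0).normal(0.5, 0.05, (128, 128))
ir[40:60, 40:60] += 0.1                                 # faint warm target
print(enhance_ir(ir).shape)
```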

  9. Statistical model for free-space optical coherent communications using adaptive optics

    NASA Astrophysics Data System (ADS)

    Anzuola, Esdras; Gladysz, Szymon

    2016-10-01

    In this paper we present a new model for describing turbulence-induced fading that represents the phase in the aperture plane as a collection of random "cells". This model serves as input to calculate the probability density function of the fading intensity. The model has two parameters: the phase variance and the number of wavefront cells. We derive expressions for the signal-to-noise ratio in the presence of atmospheric turbulence and adaptive optics compensation. We estimate symbol error probabilities for M-ary phase shift keying and evaluate the performance of coherent receivers as a function of the normalized aperture and the number of actuators on the deformable mirror, or the number of compensated modes. We perform numerical simulations of the fading intensity for different uncompensated and compensated scenarios and compare the results with the proposed model.
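
    The "random cells" picture lends itself to a direct Monte Carlo check: model the aperture field as an average of unit phasors with independent residual phases and histogram the resulting intensity. The sketch below does this for assumed parameter values; it reproduces the qualitative fading behavior, not the paper's closed-form density.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 20         # number of wavefront cells (assumed model parameter)
sigma2 = 0.8         # residual phase variance per cell in rad^2 (assumed)
trials = 200_000

phases = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n_cells))
field = np.exp(1j * phases).mean(axis=1)     # coherent average over the cells
intensity = np.abs(field) ** 2               # normalized received intensity

print(f"mean intensity {intensity.mean():.3f}, "
      f"scintillation index {intensity.var() / intensity.mean()**2:.3f}")
pdf, edges = np.histogram(intensity, bins=50, range=(0.0, 1.0), density=True)
```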

  10. Natural Selection, Adaptive Topographies and the Problem of Statistical Inference: The Moraba scurra Controversy Under the Microscope.

    PubMed

    Grodwohl, Jean-Baptiste

    2016-08-01

    This paper gives a detailed narrative of a controversial piece of empirical research in postwar population genetics, the analysis of the cytological polymorphisms of an Australian grasshopper, Moraba scurra. This research intertwined key technical developments in three research areas during the 1950s and 1960s: it involved Dobzhansky's empirical research program on cytological polymorphisms, the mathematical theory of natural selection in two-locus systems, and the building of reliable estimates of natural selection in the wild. In the mid-1950s the cytologist Michael White discovered an interesting case of epistasis in populations of Moraba scurra. These observations received wide diffusion when theoretical population geneticist Richard Lewontin represented White's data on adaptive topographies. These topographies connected the information on the genetic structure of these grasshopper populations with the formal framework of theoretical population genetics. As such, they appeared at the time as the most successful application of two-locus models of natural selection to an empirical study system. However, this connection generated paradoxical results: in the landscapes, all grasshopper populations were located on a ridge (an unstable equilibrium) while they were expected to reach a peak. This puzzling result fueled years of research and triggered a controversy attracting contributors from Australia, the United States and the United Kingdom. While the original problem seemed, at first, purely empirical, the subsequent controversy affected the main mathematical tools used in the study of two-gene systems under natural selection. Adaptive topographies and their underlying mathematical structure, Wright's mean fitness equations, were submitted to close scrutiny. Suspicion eventually shifted to the statistical machinery used in data analysis, reflecting the crucial role of statistical inference in applied population genetics. In the 1950s and 1960s, population geneticists were

  11. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method, which smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical.
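
    Leveling by robust trend estimation can be illustrated compactly: fit a smooth trend with iteratively reweighted least squares so that features and outliers are downweighted, then subtract it. The plane fit below is a deliberately simplified stand-in for the local regression and Adaptive Weights Smoothing used in the paper.

```python
import numpy as np

def robust_level(img, n_iter=10, c=2.5):
    """Subtract a tilt plane fit by IRLS so features don't bias the trend."""
    ny, nx = img.shape
    Y, X = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([np.ones(img.size), X.ravel(), Y.ravel()])
    z = img.ravel().astype(float)
    w = np.ones_like(z)
    coef = np.zeros(3)
    for _ in range(n_iter):
        sw = np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(A * sw, z * sw.ravel(), rcond=None)
        r = z - A @ coef
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale
        w = np.where(np.abs(r) <= c * s, 1.0, c * s / np.abs(r))  # Huber weights
    return (z - A @ coef).reshape(img.shape)

rng = np.random.default_rng(0)
Y, X = np.mgrid[0:128, 0:128]
img = 0.02 * X - 0.01 * Y + rng.normal(0, 0.1, (128, 128))
img[60:70, 60:70] += 5.0        # bright feature that should not tilt the fit
flat = robust_level(img)
print(f"residual std away from the feature: {flat[:40, :40].std():.3f}")
```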

  12. Dual adaptive statistical approach for quantitative noise reduction in photon-counting medical imaging: application to nuclear medicine images.

    PubMed

    Hannequin, Pascal Paul

    2015-06-07

    Noise reduction in photon-counting images remains challenging, especially at low count levels. We have developed an original procedure which associates two complementary filters using a Wiener-derived approach, combining two statistically adaptive filters into a dual-weighted (DW) filter. The first one, a statistically weighted adaptive (SWA) filter, replaces the central pixel of a sliding window with a statistically weighted sum of its neighbors. The second one, a statistical and heuristic noise extraction (extended) (SHINE-Ext) filter, performs a discrete cosine transformation (DCT) using sliding blocks; each block is reconstructed using its significant components, which are selected using tests derived from multiple linear regression (MLR). The two filters are weighted according to Wiener theory. This approach has been validated using a numerical phantom and a real planar Jaszczak phantom, and illustrated using planar bone scintigraphy and myocardial single-photon emission computed tomography (SPECT) data. The performance of the filters has been tested using the mean normalized absolute error (MNAE) between the filtered images and the reference noiseless or high-count images. Results show that the proposed filters quantitatively decrease the MNAE in the images and thus increase the signal-to-noise ratio (SNR), which allows one to work with lower-count images. The SHINE-Ext filter is well suited to large images and low-variance areas; DW filtering is efficient for small images and high-variance areas. The relative proportion of eliminated noise generally decreases as the count level increases. In practice, SHINE filtering alone is recommended when the pixel spacing is less than one-quarter of the effective resolution of the system and/or of the size of the objects of interest. It can also be used when the practical interest of high frequencies is low. In any case, DW filtering will be preferable. The proposed filters have been applied to nuclear medicine images.
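
    The SHINE-style component, sliding a block over the image, transforming it with a DCT, keeping only coefficients that pass a significance test and reconstructing, can be sketched generically. The variance-based coefficient test below is an illustrative substitute for the regression-derived tests of the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def shine_like(img, block=8, k=3.0):
    """Blockwise DCT denoising keeping only 'significant' coefficients."""
    out = np.zeros(img.shape, dtype=float)
    cnt = np.zeros(img.shape, dtype=float)
    for i in range(0, img.shape[0] - block + 1, block // 2):
        for j in range(0, img.shape[1] - block + 1, block // 2):
            b = img[i:i + block, j:j + block].astype(float)
            c = dctn(b, norm="ortho")
            # Poisson-like noise: with an orthonormal DCT the per-coefficient
            # noise scale is roughly sqrt(mean count) of the block.
            sigma = np.sqrt(max(b.mean(), 1.0))
            mask = np.abs(c) > k * sigma
            mask[0, 0] = True                   # always keep the DC term
            out[i:i + block, j:j + block] += idctn(c * mask, norm="ortho")
            cnt[i:i + block, j:j + block] += 1.0
    return out / np.maximum(cnt, 1.0)

rng = np.random.default_rng(0)
lam = np.full((64, 64), 20.0)
lam[20:40, 20:40] = 60.0                        # "hot" region
noisy = rng.poisson(lam).astype(float)
den = shine_like(noisy)
print(f"error std before {np.std(noisy - lam):.2f}, after {np.std(den - lam):.2f}")
```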

  14. US ITER Moving Forward

    ScienceCinema

    US ITER / ORNL

    2016-07-12

    US ITER Project Manager Ned Sauthoff, joined by Wayne Reiersen, Team Leader Magnet Systems, and Jan Berry, Team Leader Tokamak Cooling System, discuss the U.S.'s role in the ITER international collaboration.

  15. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  16. Iter and Ornl

    NASA Astrophysics Data System (ADS)

    Uckan, N. A.; Milora, S. L.

    2004-11-01

    ITER (meaning "the way"), a tokamak burning plasma experiment, is the next step device toward making fusion energy a reality. The programmatic objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. ITER began in 1985 as a collaboration between the Russian Federation (former Soviet Union), the USA, the European Union, and Japan. ITER conceptual and engineering design activities led to a detailed design in 2001. The USA opted out of the project between 1999 and 2003, but rejoined in 2004 for site selection and construction negotiations. China and Korea joined the project in 2003. Negotiations are continuing and a decision on the site for ITER construction [France versus Japan] is pending. The ITER international undertaking is of unprecedented scale, and the six ITER parties represent 40% of the world population. By 2018, ITER will produce a fusion power of 500 million watts for time periods of up to an hour, with only one-tenth of that power needed to sustain it. Steady state operation is also possible at lower power levels with a higher fraction of recirculated power. The ITER parties have invested about $1 billion in research and development (R&D) and related fusion experiments to establish ITER's feasibility. ORNL has been a key player in the ITER project and has contributed to its physics and engineering design and related R&D since its inception. Recently, the U.S. DOE selected the PPPL/ORNL partnership to lead the U.S. project office for ITER.

  17. Iterative consolidation of unorganized point clouds.

    PubMed

    Liu, Shengjun; Chan, Kwan-Chung; Wang, Charlie C L

    2012-01-01

    Unorganized point clouds obtained from 3D shape acquisition devices usually present noise, outliers, and nonuniformities. The proposed framework consolidates unorganized points through an iterative procedure of interlaced downsampling and upsampling. Selection operations remove outliers while preserving geometric details. The framework improves the uniformity of points by moving the downsampled particles and refining point samples. Surface extrapolation fills missed regions. Moreover, an adaptive sampling strategy speeds up the iterations. Experimental results demonstrate the framework's effectiveness.

  18. Statistical Physics of Adaptation

    DTIC Science & Technology

    2016-08-23

    decay). We therefore restrict our attention to the case where δ is nonzero but exponentially small compared with the growth rate: ln(g_X/[δ N_X]) ≫ 1. Our... [14], so exponential population growth turns out to be one very reliable mechanism of driving external entropy production. In this more general physical... population composed of two types of exponentially growing self-replicators—we illustrate a simple relationship between outcome-likelihood and

  19. Imaging task-based optimal kV and mA selection for CT radiation dose reduction: from filtered backprojection (FBP) to statistical model based iterative reconstruction (MBIR)

    NASA Astrophysics Data System (ADS)

    Li, Ke; Gomez-Cardona, Daniel; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong

    2015-03-01

    Optimal selections of tube potential (kV) and tube current (mA) are essential in maximizing the diagnostic potential of a given CT technology while minimizing radiation dose. The use of a lower tube potential may improve image contrast, but may also require a significantly higher tube current to compensate for the rapid decrease of tube output at lower tube potentials. Therefore, the selection of kV and mA should take such constraints, as well as the specific diagnostic imaging task, into consideration. For conventional quasi-linear CT systems employing the linear filtered back-projection (FBP) image reconstruction algorithm, the optimization of kV-mA combinations is relatively straightforward, as neither spatial resolution nor noise texture has significant dependence on kV and mA settings. In these cases, zero-frequency analysis such as the contrast-to-noise ratio (CNR) or the dose-normalized CNR (CNRD) can be used for optimal kV-mA selection. The recently introduced statistical model-based iterative reconstruction (MBIR) method, however, has introduced new challenges to optimal kV and mA selection, as both spatial resolution and noise texture become closely correlated with kV and mA. In this work, a task-based approach grounded in modern signal detection theory and the corresponding frequency-dependent analysis is proposed to perform the kV and mA optimization for both FBP and MBIR. Exhaustive measurements of the task-based detectability index through the technically accessible kV-mA parameter space yield iso-detectability contours, which are overlaid on iso-dose contours; from this overlay, the kV-mA pair that minimizes dose while still achieving the desired detectability level can be identified.

  20. Particle System Based Adaptive Sampling on Spherical Parameter Space to Improve the MDL Method for Construction of Statistical Shape Models

    PubMed Central

    Zhou, Xiangrong; Hirano, Yasushi; Tachibana, Rie; Hara, Takeshi; Kido, Shoji; Fujita, Hiroshi

    2013-01-01

    Minimum description length (MDL) based group-wise registration is a state-of-the-art method for determining the corresponding points of 3D shapes in the construction of statistical shape models (SSMs). However, it suffers from the problem that the determined corresponding points do not spread uniformly over the original shapes, since corresponding points are obtained by uniformly sampling the aligned shape in the parameterized space of the unit sphere. We proposed a particle-system based method to obtain adaptive sampling positions on the unit sphere to resolve this problem. Here, a set of particles was placed on the unit sphere to construct a particle system whose energy was related to the distortions of the parameterized meshes. By minimizing this energy, each particle was moved over the unit sphere. When the system became steady, the particles were treated as vertices to build a spherical mesh, which was then relaxed to slightly adjust the vertices and obtain optimal sampling positions. We used 47 cases of (left and right) lungs and 50 cases of livers, (left and right) kidneys, and spleens for evaluation. Experiments showed that the proposed method was able to resolve the problem of the original MDL method, and that it performed better in the generalization and specificity tests. PMID:23861721
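
    The particle-system step, spreading points over the unit sphere by minimizing a repulsive energy and reprojecting after each move, is easy to prototype. The sketch below uses a plain Coulomb-like repulsion with gradient descent; the paper's energy additionally couples particles to the parameterization distortion, which is omitted here.

```python
import numpy as np

def spread_on_sphere(n_pts, n_iter=200, step=0.05, seed=0):
    """Spread particles over the unit sphere by minimizing Coulomb energy."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=(n_pts, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(n_iter):
        diff = p[:, None, :] - p[None, :, :]             # pairwise differences
        d = np.linalg.norm(diff, axis=2) + np.eye(n_pts)  # avoid self-division
        force = (diff / d[:, :, None] ** 3).sum(axis=1)   # 1/r^2 repulsion
        p += step * force
        p /= np.linalg.norm(p, axis=1, keepdims=True)     # back onto sphere
    return p

pts = spread_on_sphere(100)
# Nearest-neighbor distances become nearly uniform after relaxation.
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2) + 2.0 * np.eye(100)
nn = d.min(axis=1)
print(f"nn distance mean {nn.mean():.3f}, std {nn.std():.3f}")
```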

  1. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored in the Common Data File (CDF) format and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were then queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  2. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR)

    SciTech Connect

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-11-15

    Purpose: Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Methods: Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition), and analyzed for accuracy and precision. Results: Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. Conclusions: The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of

  3. Comparison of Iterative and Non-Iterative Strain-Gage Balance Load Calculation Methods

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2010-01-01

    The accuracy of iterative and non-iterative strain-gage balance load calculation methods was compared using data from the calibration of a force balance. Two iterative and one non-iterative method were investigated. In addition, transformations were applied to balance loads in order to process the calibration data in both direct read and force balance format. NASA's regression model optimization tool BALFIT was used to generate optimized regression models of the calibration data for each of the three load calculation methods. This approach made sure that the selected regression models met strict statistical quality requirements. The comparison of the standard deviation of the load residuals showed that the first iterative method may be applied to data in both the direct read and force balance format. The second iterative method, on the other hand, implicitly assumes that the primary gage sensitivities of all balance gages exist. Therefore, the second iterative method only works if the given balance data is processed in force balance format. The calibration data set was also processed using the non-iterative method. Standard deviations of the load residuals for the three load calculation methods were compared. Overall, the standard deviations show very good agreement. The load prediction accuracies of the three methods appear to be compatible as long as regression models used to analyze the calibration data meet strict statistical quality requirements. Recent improvements of the regression model optimization tool BALFIT are also discussed in the paper.

  4. Statistical adaptation of ALADIN RCM outputs over the French alpine massifs - application to future climate and snow cover

    NASA Astrophysics Data System (ADS)

    Rousselot, M.; Durand, Y.; Giraud, G.; Mérindol, L.; Dombrowski-Etchevers, I.; Déqué, M.

    2012-01-01

    In this study, snowpack scenarios are modelled across the French Alps using dynamically downscaled variables from the ALADIN Regional Climate Model (RCM) for the control period (1961-1990) and three emission scenarios (SRES B1, A1B and A2) for the mid- and late 21st century (2021-2050 and 2071-2100). These variables are statistically adapted to the different elevations, aspects and slopes of the alpine massifs. For this purpose, we use a simple analogue criterion with ERA40 series as well as an existing detailed climatology of the French Alps (Durand et al., 2009a) that provides complete meteorological fields from the SAFRAN analysis model. The resulting scenarios of precipitation, temperature, wind, cloudiness, longwave and shortwave radiation, and humidity are used to run the physical snow model CROCUS and simulate snowpack evolution over the massifs studied. The seasonal and regional characteristics of the simulated climate and snow cover changes are explored, as is the influence of the scenarios on these changes. Preliminary results suggest that the Snow Water Equivalent (SWE) of the snowpack will decrease dramatically in the next century, especially in the Southern and Extreme Southern parts of the Alps. This decrease seems to result primarily from a general warming throughout the year, and possibly a deficit of precipitation in the autumn. The magnitude of the snow cover decline follows a marked altitudinal gradient, with the highest altitudes being less exposed to climate change. Scenario A2, with its high concentrations of greenhouse gases, results in a SWE reduction roughly twice as large as in the low-emission scenario B1 by the end of the century. This study needs to be completed using simulations from other RCMs, since a multi-model approach is essential for uncertainty analysis.

  5. Statistical adaptation of ALADIN RCM outputs over the French Alps - application to future climate and snow cover

    NASA Astrophysics Data System (ADS)

    Rousselot, M.; Durand, Y.; Giraud, G.; Mérindol, L.; Dombrowski-Etchevers, I.; Déqué, M.; Castebrunet, H.

    2012-07-01

    In this study, snowpack scenarios are modelled across the French Alps using dynamically downscaled variables from the ALADIN Regional Climate Model (RCM) for the control period (1961-1990) and three emission scenarios (SRES B1, A1B and A2) for the mid- and late 21st century (2021-2050 and 2071-2100). These variables are statistically adapted to the different elevations, aspects and slopes of the Alpine massifs. For this purpose, we use a simple analogue criterion with ERA40 series as well as an existing detailed climatology of the French Alps (Durand et al., 2009a) that provides complete meteorological fields from the SAFRAN analysis model. The resulting scenarios of precipitation, temperature, wind, cloudiness, longwave and shortwave radiation, and humidity are used to run the physical snow model CROCUS and simulate snowpack evolution over the massifs studied. The seasonal and regional characteristics of the simulated climate and snow cover changes are explored, as is the influence of the scenarios on these changes. Preliminary results suggest that the snow water equivalent (SWE) of the snowpack will decrease dramatically in the next century, especially in the Southern and Extreme Southern parts of the Alps. This decrease seems to result primarily from a general warming throughout the year, and possibly a deficit of precipitation in the autumn. The magnitude of the snow cover decline follows a marked altitudinal gradient, with the highest altitudes being less exposed to climate change. Scenario A2, with its high concentrations of greenhouse gases, results in a SWE reduction roughly twice as large as in the low-emission scenario B1 by the end of the century. This study needs to be completed using simulations from other RCMs, since a multi-model approach is essential for uncertainty analysis.

  6. Iteration, Not Induction

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2009-01-01

    The main purpose of this note is to present and justify proof via iteration as an intuitive, creative and empowering method that is often available and preferable as an alternative to proofs via either mathematical induction or the well-ordering principle. The method of iteration depends only on the fact that any strictly decreasing sequence of…

  7. The ITER design

    NASA Astrophysics Data System (ADS)

    Aymar, R.; Barabaschi, P.; Shimomura, Y.

    2002-05-01

    In 1998, after six years of joint work originally foreseen under the ITER engineering design activities (EDA) agreement, a design for ITER had been developed fulfilling all objectives and the cost target adopted by the ITER parties in 1992 at the start of the EDA. While accepting this design, the ITER parties recognized the possibility that they might be unable, for financial reasons, to proceed to the construction of the then foreseen device. The focus of effort in the ITER EDA since 1998 has been the development of a new design to meet revised technical objectives and a cost reduction target of about 50% of the previously accepted cost estimate. The rationale for the choice of parameters of the design has been based largely on system analysis drawing on the design solutions already developed and using the latest physics results and outputs from technology R&D projects. In so doing the joint central team and home teams converge towards a new design which will allow the exploration of a range of burning plasma conditions. The new ITER design, whilst having reduced technical objectives from its predecessor, will nonetheless meet the programmatic objective of providing an integrated demonstration of the scientific and technological feasibility of fusion energy. Background, design features, performance, safety features, and R&D and future perspectives of the ITER design are discussed.

  8. Reducing the latency of the Fractal Iterative Method to half an iteration

    NASA Astrophysics Data System (ADS)

    Béchet, Clémentine; Tallon, Michel

    2013-12-01

    The fractal iterative method for atmospheric tomography (FRiM-3D) has been introduced to solve wavefront reconstruction at the dimensions of an ELT with a low computational cost. Previous studies reported that only 3 iterations of the algorithm are required to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one is completed; iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach that avoids iterations in the computation of the commands with FRiM-3D, thus allowing a low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of the "warm-start" strategy in adaptive optics. To our knowledge, this particular way of using the "warm-start" has not been reported before. Furthermore, by removing the requirement of iterating to compute the commands, the computational cost of the reconstruction with FRiM-3D can be simplified and reduced to at most half the computational cost of a classical iteration. Thanks to simulations of both single-conjugate and multi-conjugate AO for the E-ELT, with FRiM-3D on the ESO Octopus simulator, we demonstrate the benefit of this approach. We finally enhance the robustness of this new implementation with respect to increasing measurement noise, wind speed and even modeling errors.

  9. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.

  10. ITER Cryoplant Infrastructures

    NASA Astrophysics Data System (ADS)

    Fauve, E.; Monneret, E.; Voigt, T.; Vincent, G.; Forgeas, A.; Simon, M.

    2017-02-01

    The ITER Tokamak requires on average 75 kW of refrigeration power at 4.5 K and 600 kW of refrigeration power at 80 K to maintain the nominal operating conditions of the ITER thermal shields, superconducting magnets and cryopumps. This refrigeration is produced by the ITER Cryoplant, a complex cluster of refrigeration systems including, in particular, three identical Liquid Helium Plants and two identical Liquid Nitrogen Plants. Beyond the equipment directly forming part of the Cryoplant, substantial infrastructure is required, accounting for a large part of the Cryoplant's layout, budget and engineering effort. It is the ITER Organization's responsibility to ensure that all infrastructure is adequately sized and designed to interface with the Cryoplant. This proceeding presents the overall architecture of the Cryoplant and provides orders of magnitude for the Cryoplant buildings and utilities: electricity, cooling water, heating, ventilation and air conditioning (HVAC).

  11. Diagnostics for ITER

    SciTech Connect

    Donne, A. J. H.; Hellermann, M. G. von; Barnsley, R.

    2008-10-22

    After an introduction into the specific challenges in the field of diagnostics for ITER (specifically high level of nuclear radiation, long pulses, high fluxes of particles to plasma facing components, need for reliability and robustness), an overview will be given of the spectroscopic diagnostics foreseen for ITER. The paper will describe both active neutral-beam based diagnostics as well as passive spectroscopic diagnostics operating in the visible, ultra-violet and x-ray spectral regions.

  12. Robust iterative methods

    SciTech Connect

    Saad, Y.

    1994-12-31

    In spite of the tremendous progress achieved in recent years in the general area of iterative solution techniques, there are still a few obstacles to the acceptance of iterative methods in a number of applications. These applications give rise to very indefinite or highly ill-conditioned non-Hermitian matrices. Trying to solve these systems with the simple-minded standard preconditioned Krylov subspace methods can be a frustrating experience. With the mathematical and physical models becoming more sophisticated, the typical linear systems which we encounter today are far more difficult to solve than those of just a few years ago, and this trend is likely to become more pronounced. This workshop will discuss (1) these applications and the types of problems that they give rise to; and (2) recent progress in solving these problems with iterative methods. The workshop will end with a hopefully stimulating panel discussion with the speakers.

  13. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, and produce modified schedules quickly. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. These experiments were performed within the domain of Space Shuttle ground processing.

  14. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, produce modified schedules quickly, and exhibit 'anytime' behavior. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. We also show the anytime characteristics of the system. These experiments were performed within the domain of Space Shuttle ground processing.

  15. Iterated multidimensional wave conversion

    SciTech Connect

    Brizard, A. J.; Tracy, E. R.; Johnston, D.; Kaufman, A. N.; Richardson, A. S.; Zobin, N.

    2011-12-23

    Mode conversion can occur repeatedly in a two-dimensional cavity (e.g., the poloidal cross section of an axisymmetric tokamak). We report on two novel concepts that allow for a complete and global visualization of the ray evolution under iterated conversions. First, iterated conversion is discussed in terms of ray-induced maps from the two-dimensional conversion surface to itself (which can be visualized in terms of three-dimensional rooms). Second, the two-dimensional conversion surface is shown to possess a symplectic structure derived from Dirac constraints associated with the two dispersion surfaces of the interacting waves.

  16. Block-based adaptive lifting schemes for multiband image compression

    NASA Astrophysics Data System (ADS)

    Masmoudi, Hela; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe

    2004-02-01

    In this paper, we are interested in designing lifting schemes adapted to the statistics of the wavelet coefficients of multiband images for compression applications. More precisely, nonseparable vector lifting schemes are used in order to capture simultaneously the spatial and the spectral redundancies. The underlying operators are then computed in order to minimize the entropy of the resulting multiresolution representation. To this end, we have developed a new iterative block-based classification algorithm. Simulation tests carried out on remotely sensed multispectral images indicate that a substantial gain in terms of bit-rate is achieved by the proposed adaptive coding method w.r.t. the non-adaptive one.
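
    For readers unfamiliar with lifting, the sketch below shows a single scalar predict/update lifting step with perfect reconstruction; the paper replaces such fixed operators with block-adaptive, nonseparable vector operators across spectral bands (this toy is not the authors' scheme):

        # One 1-D lifting step: split into even/odd samples, predict the odd
        # samples from the even ones, then update the even samples.
        import numpy as np

        def lifting_forward(x):
            even, odd = x[0::2].astype(float), x[1::2].astype(float)
            detail = odd - even            # predict odd samples from even ones
            approx = even + 0.5 * detail   # update to preserve the running mean
            return approx, detail

        def lifting_inverse(approx, detail):
            even = approx - 0.5 * detail
            odd = detail + even
            x = np.empty(even.size + odd.size)
            x[0::2], x[1::2] = even, odd
            return x

        x = np.array([2, 4, 6, 8, 7, 5, 3, 1])
        a, d = lifting_forward(x)
        assert np.allclose(lifting_inverse(a, d), x)   # perfect reconstruction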

  17. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user-level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  18. ITER Fusion Energy

    ScienceCinema

    Dr. Norbert Holtkamp

    2016-07-12

    ITER (in Latin “the way”) is designed to demonstrate the scientific and technological feasibility of fusion energy. Fusion is the process by which two light atomic nuclei combine to form a heavier one and thus release energy. In the fusion process two isotopes of hydrogen – deuterium and tritium – fuse together to form a helium atom and a neutron. Thus fusion could provide large-scale energy production without greenhouse effects; essentially limitless fuel would be available all over the world. The principal goals of ITER are to generate 500 megawatts of fusion power for periods of 300 to 500 seconds with a fusion power multiplication factor, Q, of at least 10 (500 MW of fusion power from 50 MW of input power). The ITER Organization was officially established in Cadarache, France, on 24 October 2007. The seven members engaged in the project – China, the European Union, India, Japan, Korea, Russia and the United States – represent more than half the world’s population. The costs for ITER are shared by the seven members. The cost for the construction will be approximately 5.5 billion Euros, and a similar amount is foreseen for the twenty-year phase of operation and the subsequent decommissioning.

  19. Adaptive Management of Ecosystems

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management. As such, management may be treated as experiment, with replication, or management may be conducted in an iterative manner. Although the concept has resonated with many...

  20. Iterative Vessel Segmentation of Fundus Images.

    PubMed

    Roychowdhury, Sohini; Koozekanani, Dara D; Parhi, Keshab K

    2015-07-01

    This paper presents a novel unsupervised iterative blood vessel segmentation algorithm using fundus images. First, a vessel-enhanced image is generated by top-hat reconstruction of the negative green plane image. An initial estimate of the segmented vasculature is extracted by globally thresholding the vessel-enhanced image. Next, new vessel pixels are identified iteratively by adaptive thresholding of the residual image generated by masking out the existing segmented vessel estimate from the vessel-enhanced image. The new vessel pixels are then region-grown into the existing vessel estimate, thereby resulting in an iterative enhancement of the segmented vessel structure. As the iterations progress, the number of false edge pixels identified as new vessel pixels increases compared to the number of actual vessel pixels. A key contribution of this paper is a novel stopping criterion that terminates the iterative process, leading to higher vessel segmentation accuracy. This iterative algorithm is robust to the rate of new vessel pixel addition since it achieves 93.2-95.35% vessel segmentation accuracy with 0.9577-0.9638 area under ROC curve (AUC) on abnormal retinal images from the STARE dataset. The proposed algorithm is computationally efficient and consistent in vessel segmentation performance for retinal images with variations due to pathology, uneven illumination, pigmentation, and fields of view, since it achieves a vessel segmentation accuracy of about 95% in an average time of 2.45, 3.95, and 8 s on images from the three public datasets DRIVE, STARE, and CHASE_DB1, respectively. Additionally, the proposed algorithm has more than 90% segmentation accuracy for segmenting peripapillary blood vessels in the images from the DRIVE and CHASE_DB1 datasets.
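
    The overall loop can be sketched with generic morphology operations; the version below uses placeholder thresholds and a simplified stopping rule, not the paper's tuned pipeline:

        # Schematic of the iterative vessel-segmentation loop; `green` is the
        # 2-D green-plane image as a float array. Parameters are placeholders.
        import numpy as np
        from scipy import ndimage

        def iterative_vessel_segmentation(green, n_iter=5):
            enhanced = ndimage.white_tophat(-green, size=15)  # vessel-enhanced image
            vessels = enhanced > enhanced.mean() + 2 * enhanced.std()  # initial estimate
            for _ in range(n_iter):
                residual = np.where(vessels, 0.0, enhanced)   # mask out current estimate
                positive = residual[residual > 0]
                if positive.size == 0:
                    break
                new_pix = residual > positive.mean()          # adaptive threshold
                # region-grow: keep new pixels touching the existing vasculature
                grown = ndimage.binary_dilation(vessels, iterations=2) & new_pix
                if not grown.any():                           # simplified stopping rule
                    break
                vessels |= grown
            return vessels

        demo = np.zeros((64, 64)); demo[30:34, :] = -1.0      # dark "vessel" stripe
        print(iterative_vessel_segmentation(demo).sum())      # pixels flagged as vessel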

  1. Accelerated iterative beam angle selection in IMRT

    SciTech Connect

    Bangert, Mark; Unkelbach, Jan

    2016-03-15

    Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could
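
    The surrogate idea can be sketched generically: score each candidate ensemble by a few projected-gradient FMO steps instead of a converged solve. All names below (fmo_objective, fmo_gradient) are placeholders, not the authors' code:

        # Greedy beam-angle selection with a cheap surrogate score.
        import numpy as np

        def surrogate_score(beams, fmo_objective, fmo_gradient, n_fluence,
                            n_steps=5, lr=1e-2):
            x = np.zeros(n_fluence)              # fluence variables for this ensemble
            for _ in range(n_steps):             # only a few FMO iterations
                x = np.maximum(x - lr * fmo_gradient(beams, x), 0.0)  # projected step
            return fmo_objective(beams, x)

        def greedy_bas(candidates, n_beams, fmo_objective, fmo_gradient, n_fluence):
            ensemble = []
            for _ in range(n_beams):             # add beams one by one
                best = min((c for c in candidates if c not in ensemble),
                           key=lambda c: surrogate_score(ensemble + [c], fmo_objective,
                                                         fmo_gradient, n_fluence))
                ensemble.append(best)
            return ensemble

        # Smoke test with a meaningless quadratic stand-in for the FMO problem:
        obj = lambda beams, x: float(np.sum((x - len(beams)) ** 2))
        grad = lambda beams, x: 2.0 * (x - len(beams))
        print(greedy_bas(list(range(0, 360, 40)), 3, obj, grad, n_fluence=4))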

  2. The Iterate Manual

    DTIC Science & Technology

    1990-10-01

    [Extraction-garbled excerpt of the manual's Common Lisp examples; the recoverable fragments define macros such as sum-of-squares and iterate clauses that allocate temporary variables with gensym.]

  3. Neutron activation for ITER

    SciTech Connect

    Barnes, C.W.; Loughlin, M.J.; Nishitani, Takeo

    1996-04-29

    There are three primary goals for the Neutron Activation system for ITER: maintain a robust relative measure of fusion power with stability and high dynamic range (7 orders of magnitude); allow an absolute calibration of fusion power (energy); and provide a flexible and reliable system for materials testing. The nature of the activation technique is such that stability and high dynamic range can be intrinsic properties of the system. It has also been the technique that demonstrated (on JET and TFTR) the highest accuracy neutron measurements in DT operation. Since the gamma-ray detectors are not located on the tokamak and are therefore amenable to accurate characterization, and if material foils are placed very close to the ITER plasma with minimum scattering or attenuation, high overall accuracy in the fusion energy production (7-10%) should be achievable on ITER. In the paper, a conceptual design is presented. A system is shown to be capable of meeting these three goals, although detailed design issues remain to be solved.

  4. Neutron cameras for ITER

    SciTech Connect

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-12-31

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from {sup 16}N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with {sup 16}N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  5. Energy Monitoring and Targeting as diagnosis; Applying work analysis to adapt a statistical change detection strategy using representation aiding

    NASA Astrophysics Data System (ADS)

    Hilliard, Antony

    Energy Monitoring and Targeting is a well-established business process that develops information about utility energy consumption in a business or institution. While M&T has persisted as a worthwhile energy conservation support activity, it has not been widely adopted. This dissertation explains M&T challenges in terms of diagnosing and controlling energy consumption, informed by a naturalistic field study of M&T work. A Cognitive Work Analysis of M&T identifies structures that diagnosis can search, information flows un-supported in canonical support tools, and opportunities to extend the most popular tool for M&T: Cumulative Sum of Residuals (CUSUM) charts. A design application outlines how CUSUM charts were augmented with a more contemporary statistical change detection strategy, Recursive Parameter Estimates, modified to better suit the M&T task using Representation Aiding principles. The design was experimentally evaluated in a controlled M&T synthetic task, and was shown to significantly improve diagnosis performance.
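
    The CUSUM-of-residuals computation itself is compact: fit a baseline model of consumption versus a driver (e.g., degree-days) over a reference period, then accumulate the residuals of later periods so sustained drifts stand out. The numbers below are invented:

        # Minimal CUSUM chart for Monitoring and Targeting.
        import numpy as np

        drivers = np.array([10., 12, 9, 14, 11, 13, 10, 15, 12, 11])  # e.g. degree-days
        energy  = np.array([105., 122, 98, 140, 116, 131, 108, 170, 141, 128])

        slope, intercept = np.polyfit(drivers[:6], energy[:6], 1)  # baseline: periods 1-6
        residuals = energy - (slope * drivers + intercept)
        cusum = np.cumsum(residuals)
        print(np.round(cusum, 1))   # a sustained upward drift flags excess consumption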

  6. Searching with iterated maps

    PubMed Central

    Elser, V.; Rankenburg, I.; Thibault, P.

    2007-01-01

    In many problems that require extensive searching, the solution can be described as satisfying two competing constraints, where satisfying each independently does not pose a challenge. As an alternative to tree-based and stochastic searching, for these problems we propose using an iterated map built from the projections to the two constraint sets. Algorithms of this kind have been the method of choice in a large variety of signal-processing applications; we show here that the scope of these algorithms is surprisingly broad, with applications as diverse as protein folding and Sudoku. PMID:17202267
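
    The iterated map built from two constraint projections can be illustrated with a difference-map iteration on a toy feasibility problem (a point that must lie on both a circle and a line); this is a generic illustration in the spirit of the paper, not one of its applications:

        # beta = 1 difference map: x <- x + P_A(2*P_B(x) - x) - P_B(x).
        import numpy as np

        def proj_circle(x, r=1.0):                 # project onto |x| = r
            return r * x / np.linalg.norm(x)

        def proj_line(x, y0=0.5):                  # project onto x[1] = y0
            return np.array([x[0], y0])

        x = np.array([2.0, 2.0])
        for _ in range(100):
            pb = proj_line(x)
            x = x + proj_circle(2 * pb - x) - pb
        solution = proj_line(x)                    # the fixed point encodes the answer
        print(solution, np.linalg.norm(solution))  # on the line, with |x| ~ 1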

  7. Iterative Magnetometer Calibration

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph

    2006-01-01

    This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.
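
    One ingredient of such a pipeline, the attitude-independent part, reduces in its simplest form to a linear fit of the magnetometer bias from field-magnitude residuals. The sketch below estimates bias only (no scale factors or alignments) and is a simplified stand-in for the utilities the paper combines:

        # |m_k - b|^2 should match the reference magnitude |r_k|^2, which is
        # linear in the unknowns (b, c) with c = |b|^2.
        import numpy as np

        def fit_bias(m, r_mag):
            """m: (N,3) measured field vectors; r_mag: (N,) reference magnitudes."""
            A = np.hstack([2.0 * m, -np.ones((m.shape[0], 1))])
            y = (m**2).sum(axis=1) - r_mag**2
            sol, *_ = np.linalg.lstsq(A, y, rcond=None)
            return sol[:3]                         # estimated bias vector

        # Synthetic check: constant field magnitude, known bias recovered.
        rng = np.random.default_rng(1)
        true_b = np.array([3.0, -1.0, 2.0])
        field = rng.normal(size=(200, 3))
        field = 50.0 * field / np.linalg.norm(field, axis=1, keepdims=True)
        m = field + true_b + 0.01 * rng.normal(size=field.shape)
        print(fit_bias(m, np.full(200, 50.0)))     # ~ [3, -1, 2]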

  8. Linear iterative solvers for implicit ODE methods

    NASA Technical Reports Server (NTRS)

    Saylor, Paul E.; Skeel, Robert D.

    1990-01-01

    The numerical solution of stiff initial value problems, which leads to the problem of solving large systems of mildly nonlinear equations, is considered. For many problems derived from engineering and science, a solution is possible only with methods derived from iterative linear equation solvers. A common approach to solving the nonlinear equations is to employ an approximate solution obtained from an explicit method. The error is examined to determine how it is distributed among the stiff and non-stiff components, which bears on the choice of an iterative method. The conclusion is that the error is (roughly) uniformly distributed, a fact that suggests the Chebyshev method (and the accompanying Manteuffel adaptive parameter algorithm). This method is described, with comments on Richardson's method and its advantages for large problems. Richardson's method and the Chebyshev method with the Manteuffel algorithm are applied to the solution of the nonlinear equations by Newton's method.
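
    The classical Chebyshev iteration needs only bounds on the spectrum (which the Manteuffel algorithm estimates adaptively in practice); a minimal version for a symmetric positive definite system with known bounds:

        # Chebyshev iteration for A x = b given eigenvalue bounds [lmin, lmax].
        import numpy as np

        def chebyshev(A, b, lmin, lmax, n_iter=50):
            d, c = (lmax + lmin) / 2.0, (lmax - lmin) / 2.0
            x = np.zeros_like(b)
            r = b - A @ x
            for i in range(n_iter):
                if i == 0:
                    p, alpha = r.copy(), 1.0 / d
                else:
                    beta = 0.5 * (c * alpha) ** 2 if i == 1 else (c * alpha / 2.0) ** 2
                    alpha = 1.0 / (d - beta / alpha)
                    p = r + beta * p
                x += alpha * p
                r = b - A @ x
            return x

        A = np.diag([1.0, 2.0, 3.0, 4.0])      # toy SPD system with known spectrum
        b = np.ones(4)
        print(chebyshev(A, b, 1.0, 4.0))       # ~ [1, 0.5, 0.333, 0.25]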

  9. Synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON): A statistical model based iterative image reconstruction method to eliminate limited-view artifacts and to mitigate the temporal-average artifacts in time-resolved CT

    PubMed Central

    Chen, Guang-Hong; Li, Yinsheng

    2015-01-01

    Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial–temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial–temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial–temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm. Reconstruction errors and temporal fidelity
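
    The nuclear-norm regularization at the heart of this approach is commonly handled with a singular-value-thresholding proximal step; a minimal sketch, where the columns of the matrix are vectorized time frames and the threshold is a placeholder (the full algorithm also enforces consistency with the measured projections):

        # Proximal operator of the nuclear norm: shrink singular values.
        import numpy as np

        def svt(M, tau):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        rng = np.random.default_rng(0)
        frames = rng.normal(size=(256, 8))     # 8 time frames, 256 pixels each
        low_rank = svt(frames, tau=15.0)
        print(np.linalg.svd(low_rank, compute_uv=False).round(1))  # shrunk spectrum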

  10. The solution of radiative transfer problems in molecular bands without the LTE assumption by accelerated lambda iteration methods

    NASA Technical Reports Server (NTRS)

    Kutepov, A. A.; Kunze, D.; Hummer, D. G.; Rybicki, G. B.

    1991-01-01

    An iterative method based on the use of approximate transfer operators, which was designed initially to solve multilevel NLTE line formation problems in stellar atmospheres, is adapted and applied to the solution of the NLTE molecular band radiative transfer in planetary atmospheres. The matrices to be constructed and inverted are much smaller than those used in the traditional Curtis matrix technique, which makes possible the treatment of more realistic problems using relatively small computers. This technique converges much more rapidly than straightforward iteration between the transfer equation and the equations of statistical equilibrium. A test application of this new technique to the solution of NLTE radiative transfer problems for optically thick and thin bands (the 4.3 micron CO2 band in the Venusian atmosphere and the 4.7 and 2.3 micron CO bands in the earth's atmosphere) is described.

  11. Runaway electrons and ITER

    NASA Astrophysics Data System (ADS)

    Boozer, Allen

    2016-10-01

    ITER planning for avoiding runaway damage depends on magnetic surface breakup in fast relaxations. These arise in thermal quenches and in the spreading of impurities from massive gas injection or shattered pellets. Surface breakup would prevent a runaway to relativistic energies were it not for non-intercepting flux tubes, which contain magnetic field lines that do not intercept the walls. Such tubes persist near the magnetic axis and in the cores of islands but must dissipate before any confining surfaces re-form. Otherwise, a highly dangerous situation arises. Electrons that were trapped and accelerated in these flux tubes can fill a large volume of stochastic field lines and serve as a seed for the transfer of the full plasma current to runaways. If the outer confining surfaces are punctured, as by a drift into the wall, then the full runaway inventory will be lost in a short pulse along a narrow flux tube. Although not part of ITER planning, currents induced in the walls by the fast magnetic relaxation could be used to passively prevent outer surfaces re-forming. If magnetic surface breakup can be avoided during impurity injection, the plasma current could be terminated in tens of milliseconds by plasma cooling with no danger of runaway. Support by DoE Office of Fusion Energy Science Grant De-FG02-03ER54696.

  12. Microtearing instability in ITER*

    NASA Astrophysics Data System (ADS)

    Wong, King-Lap; Mikkelsen, David; Budny, Robert; Breslau, Joshua

    2010-11-01

    Microtearing modes are found to be unstable in some regions of a simulated ITER H-mode plasma [1] with the GS2 code [2]. Modes with kρs>1 are in the interior (r/a ~ 0.65-0.85) while longer wavelength modes are in the pedestal region. This instability may keep the pedestal within the peeling-ballooning stability boundary [3]. Microtearing modes can produce stochastic magnetic field similar to RMP coils; they may have similar effects on ELMs by increasing the pedestal width. The possibility of using this technique for ELM mitigation in ITER is explored. We propose to use a deuterium gas jet to control the microtearing instability and the Chirikov parameter at the edge. Preliminary evaluation of its effectiveness will be presented and the limitations of the GS2 code will be discussed based on our understanding from NSTX [4]. *This work is supported by USDoE contract DE-AC02-09CH11466. [1] R. V. Budny, Nucl. Fusion (2009). [2] W. Dorland et al., Phys. Rev. Lett. (2000). [3] P. B. Snyder et al., Nucl. Fusion (2009). [4] K. L. Wong et al., Phys. Rev. Lett. (2007).

  13. An iterative approach of protein function prediction

    PubMed Central

    2011-01-01

    Background Current approaches of predicting protein functions from a protein-protein interaction (PPI) dataset are based on an assumption that the available functions of the proteins (a.k.a. annotated proteins) will determine the functions of the proteins whose functions are unknown yet at the moment (a.k.a. un-annotated proteins). Therefore, the protein function prediction is a mono-directed and one-off procedure, i.e. from annotated proteins to un-annotated proteins. However, the interactions between proteins are mutual rather than static and mono-directed, although functions of some proteins are unknown for some reasons at present. That means when we use the similarity-based approach to predict functions of un-annotated proteins, the un-annotated proteins, once their functions are predicted, will affect the similarities between proteins, which in turn will affect the prediction results. In other words, the function prediction is a dynamic and mutual procedure. This dynamic feature of protein interactions, however, was not considered in the existing prediction algorithms. Results In this paper, we propose a new prediction approach that predicts protein functions iteratively. This iterative approach incorporates the dynamic and mutual features of PPI interactions, as well as the local and global semantic influence of protein functions, into the prediction. To guarantee predicting functions iteratively, we propose a new protein similarity from protein functions. We adapt new evaluation metrics to evaluate the prediction quality of our algorithm and other similar algorithms. Experiments on real PPI datasets were conducted to evaluate the effectiveness of the proposed approach in predicting unknown protein functions. Conclusions The iterative approach is more likely to reflect the real biological nature between proteins when predicting functions. A proper definition of protein similarity from protein functions is the key to predicting functions iteratively. The
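
    A bare-bones version of the iterative idea is label propagation over the interaction graph: annotated proteins seed function scores, unannotated proteins update from their neighbors, and newly scored proteins influence their neighbors in turn. This generic sketch uses raw interaction weights, whereas the paper defines a function-based protein similarity:

        # Iterative function-score propagation over a PPI-style graph.
        import numpy as np

        def iterate_functions(W, seeds, n_iter=50, tol=1e-6):
            """W: (N,N) symmetric interaction weights; seeds: (N,F) 0/1 annotations."""
            D_inv = 1.0 / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
            scores = seeds.astype(float)
            known = seeds.any(axis=1)
            for _ in range(n_iter):
                new = D_inv * (W @ scores)     # average the neighbors' current scores
                new[known] = seeds[known]      # clamp annotated proteins
                if np.abs(new - scores).max() < tol:
                    break
                scores = new
            return scores

        W = np.array([[0, 1, 1, 0], [1, 0, 0, 1],
                      [1, 0, 0, 1], [0, 1, 1, 0]], float)
        seeds = np.array([[1, 0], [0, 1], [0, 0], [0, 0]])  # last two unannotated
        print(iterate_functions(W, seeds).round(2))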

  14. Adaptive management

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  15. F-8C adaptive control law refinement and software development

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Stein, G.

    1981-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
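
    The parallel-channel idea can be sketched for a scalar system: run one Kalman filter per candidate parameter value and score each channel by the likelihood of its innovations. This is a generic multiple-model sketch with invented numbers, not the F-8C implementation:

        # Bank of Kalman filters at fixed parameter values; the channel with
        # the highest innovation log-likelihood identifies the parameter.
        import numpy as np

        def filter_bank_loglik(z, candidates, q=0.01, r=0.1):
            loglik = np.zeros(len(candidates))
            for i, a in enumerate(candidates):
                x, P = 0.0, 1.0
                for zk in z:
                    x, P = a * x, a * a * P + q      # predict with this channel's a
                    S = P + r                        # innovation variance (H = 1)
                    nu = zk - x                      # innovation
                    loglik[i] += -0.5 * (np.log(2 * np.pi * S) + nu * nu / S)
                    K = P / S                        # Kalman gain
                    x, P = x + K * nu, (1 - K) * P   # measurement update
            return loglik

        rng = np.random.default_rng(2)
        true_a, x, z = 0.8, 0.0, []
        for _ in range(200):                         # simulate x' = a*x + noise
            x = true_a * x + rng.normal(scale=0.1)
            z.append(x + rng.normal(scale=np.sqrt(0.1)))
        cands = [0.2, 0.5, 0.8, 0.95]
        best = int(np.argmax(filter_bank_loglik(np.array(z), cands)))
        print(cands[best])                           # ~ 0.8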

  16. Statistical Engineering in Air Traffic Management Research

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.

    2015-01-01

    NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.

  17. Iterated crowdsourcing dilemma game

    PubMed Central

    Oishi, Koji; Cebrian, Manuel; Abeliuk, Andres; Masuda, Naoki

    2014-01-01

    The Internet has enabled the emergence of collective problem solving, also known as crowdsourcing, as a viable option for solving complex tasks. However, the openness of crowdsourcing presents a challenge because solutions obtained by it can be sabotaged, stolen, and manipulated at a low cost for the attacker. We extend a previously proposed crowdsourcing dilemma game to an iterated game to address this question. We enumerate pure evolutionarily stable strategies within the class of so-called reactive strategies, i.e., those depending on the last action of the opponent. Among the 4096 possible reactive strategies, we find 16 strategies each of which is stable in some parameter regions. Repeated encounters of the players can improve social welfare when the damage inflicted by an attack and the cost of attack are both small. Under the current framework, repeated interactions do not really ameliorate the crowdsourcing dilemma in a majority of the parameter space. PMID:24526244

  18. Iterated crowdsourcing dilemma game

    NASA Astrophysics Data System (ADS)

    Oishi, Koji; Cebrian, Manuel; Abeliuk, Andres; Masuda, Naoki

    2014-02-01

    The Internet has enabled the emergence of collective problem solving, also known as crowdsourcing, as a viable option for solving complex tasks. However, the openness of crowdsourcing presents a challenge because solutions obtained by it can be sabotaged, stolen, and manipulated at a low cost for the attacker. We extend a previously proposed crowdsourcing dilemma game to an iterated game to address this question. We enumerate pure evolutionarily stable strategies within the class of so-called reactive strategies, i.e., those depending on the last action of the opponent. Among the 4096 possible reactive strategies, we find 16 strategies each of which is stable in some parameter regions. Repeated encounters of the players can improve social welfare when the damage inflicted by an attack and the cost of attack are both small. Under the current framework, repeated interactions do not really ameliorate the crowdsourcing dilemma in a majority of the parameter space.

  19. Iteration of ultrasound aberration correction methods

    NASA Astrophysics Data System (ADS)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. Aberration was generated with both a weak and a strong human-body-wall model, each emulating the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
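
    The correlation-based estimator reduces, in its simplest discrete form, to picking the lag that maximizes each element's cross-correlation with a reference signal. A toy version with invented signals (the real problem involves random scatterers and amplitude estimation as well):

        # Integer time-delay estimation relative to the first array element.
        import numpy as np

        def estimate_delays(signals):
            """signals: (n_elements, n_samples); integer delays vs. element 0."""
            reference = signals[0]
            n = reference.size
            delays = []
            for s in signals:
                xc = np.correlate(s, reference, mode="full")
                delays.append(int(np.argmax(xc)) - (n - 1))
            return np.array(delays)

        rng = np.random.default_rng(3)
        t = np.linspace(-3, 3, 64)
        pulse = np.exp(-t**2) * np.sin(8 * t)
        true = [0, 2, -3, 1]
        signals = np.array([np.roll(pulse, d) for d in true])
        signals += 0.05 * rng.normal(size=signals.shape)
        print(estimate_delays(signals))   # ~ [0, 2, -3, 1]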

  20. Two-step iterative reconstruction of region-of-interest with truncated projection in computed tomography

    NASA Astrophysics Data System (ADS)

    Yamakawa, Keisuke; Kojima, Shinichi

    2014-03-01

    Iteratively reconstructing data only inside the region of interest (ROI) is widely used to acquire CT images in less computation time while maintaining high spatial resolution. A method that subtracts projected data outside the ROI from full-coverage measured data has been proposed. A serious problem with this method is that the accuracy of the measured data confined inside the ROI decreases according to the truncation error outside the ROI. We propose a two-step iterative method that reconstructs the image over the full coverage, in addition to the conventional iterative reconstruction inside the ROI, in order to reduce the truncation error in the full-coverage image. Statistical information (e.g., quantum-noise distributions) acquired by detected X-ray photons is generally used in iterative methods as a photon weight to efficiently reduce image noise. Our proposed method applies one of two kinds of weights (photon or constant weights), chosen adaptively by taking into consideration the influence of truncation error. The effectiveness of the proposed method compared with that of the conventional method was evaluated in terms of simulated CT values by using elliptical phantoms and an abdomen phantom. The standard deviation of error and the average absolute error of the proposed method on the profile curve were reduced from 3.4 to 0.4 [HU] and from 2.8 to 0.8 [HU], respectively, compared with the conventional method. As a result, applying a suitable weight on the basis of a target object made it possible to effectively reduce the errors in CT images.

  1. A holistic strategy for adaptive land management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Adaptive management is widely applied to natural resources management. Adaptive management can be generally defined as an iterative decision-making process that incorporates formulation of management objectives, actions designed to address these objectives, monitoring of results, and repeated adapta...

  2. Iterative marker excision system.

    PubMed

    Myronovskyi, Maksym; Rosenkränzer, Birgit; Luzhetskyy, Andriy

    2014-05-01

    The deletions of large genomic DNA fragments and consecutive gene knockouts are prerequisites for the generation of organisms with improved properties. One of the key issues in this context is the removal of antibiotic resistance markers from engineered organisms without leaving an active recombinase recognition site. Here, we report the establishment of an iterative marker excision system (IMES) that solves this problem. Based on the phiC31 integrase and its mutant att sites, IMES can be used for highly effective deletion of DNA fragments between inversely oriented B-CC and P-GG sites. The B-CC and P-GG sites are derived from attB and attP by substitution of the central core TT dinucleotide with CC and GG, respectively. An unnatural RR site that resides in the chromosome following deletion is the joining product of the right shoulders of B-CC and P-GG. We show that the RR sites do not recombine with each other, nor does the RR site recombine with B-CC. The recombination efficiencies between RR and P-GG or RR and LL are only 0.1% and 1%, respectively. Thus, IMES can be used for multistep genomic engineering without risking unwanted DNA recombination. The fabrication of multi-purpose antibiotic cassettes and examples of the utilisation of IMES are described.

  3. Mode conversion in ITER

    NASA Astrophysics Data System (ADS)

    Jaeger, E. F.; Berry, L. A.; Myra, J. R.

    2006-10-01

    Fast magnetosonic waves in the ion cyclotron range of frequencies (ICRF) can convert to much shorter wavelength modes such as ion Bernstein waves (IBW) and ion cyclotron waves (ICW) [1]. These modes are potentially useful for plasma control through the generation of localized currents and sheared flows. As part of the SciDAC Center for Simulation of Wave-Plasma Interactions project, the AORSA global-wave solver [2] has been ported to the new, dual-core Cray XT-3 (Jaguar) at ORNL where it demonstrates excellent scaling with the number of processors. Preliminary calculations using 4096 processors have allowed the first full-wave simulations of mode conversion in ITER. Mode conversion from the fast wave to the ICW is observed in mixtures of deuterium, tritium and helium-3 at 53 MHz. The resulting flow velocity and electric field shear will be calculated. [1] F.W. Perkins, Nucl. Fusion 17, 1197 (1977). [2] E.F. Jaeger, L.A. Berry, J.R. Myra, et al., Phys. Rev. Lett. 90, 195001-1 (2003).

  4. The first fusion reactor: ITER

    NASA Astrophysics Data System (ADS)

    Campbell, D. J.

    2016-11-01

    Established by the signature of the ITER Agreement in November 2006 and currently under construction at St Paul-lez-Durance in southern France, the ITER project [1,2] involves the European Union (including Switzerland), China, India, Japan, the Russian Federation, South Korea and the United States. ITER (`the way' in Latin) is a critical step in the development of fusion energy. Its role is to provide an integrated demonstration of the physics and technology required for a fusion power plant based on magnetic confinement.

  5. The ITER project construction status

    NASA Astrophysics Data System (ADS)

    Motojima, O.

    2015-10-01

    The pace of the ITER project in St Paul-lez-Durance, France is accelerating rapidly into its peak construction phase. With the completion of the B2 slab in August 2014, which will support about 400 000 metric tons of the tokamak complex structures and components, the construction is advancing on a daily basis. Magnet, vacuum vessel, cryostat, thermal shield, first wall and divertor structures are under construction or in prototype phase in the ITER member states of China, Europe, India, Japan, Korea, Russia, and the United States. Each of these member states has its own domestic agency (DA) to manage their procurements of components for ITER. Plant systems engineering is being transformed to fully integrate the tokamak and its auxiliary systems in preparation for the assembly and operations phase. CODAC, diagnostics, and the three main heating and current drive systems are also progressing, including the construction of the neutral beam test facility building in Padua, Italy. The conceptual design of the Chinese test blanket module system for ITER has been completed and those of the EU are well under way. Significant progress has been made addressing several outstanding physics issues including disruption load characterization, prediction, avoidance, and mitigation, first wall and divertor shaping, edge pedestal and SOL plasma stability, fuelling and plasma behaviour during confinement transients and W impurity transport. Further development of the ITER Research Plan has included a definition of the required plant configuration for first plasma and subsequent phases of ITER operation, as well as the major plasma commissioning activities and the needs of the R&D program that accompanies ITER construction by the ITER parties.

  6. ITER Central Solenoid Module Fabrication

    SciTech Connect

    Smith, John

    2016-09-23

    The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO) and General Atomics (GA). GA’s responsibility includes: completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors, a diesel generator for backup power, along with, cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first

  7. ITER safety challenges and opportunities

    SciTech Connect

    Piet, S.J.

    1991-01-01

    Results of the Conceptual Design Activity (CDA) for the International Thermonuclear Experimental Reactor (ITER) suggest challenges and opportunities. ITER is capable of meeting anticipated regulatory dose limits, but proof is difficult because of large radioactive inventories needing stringent radioactivity confinement. We need much research and development (R&D) and design analysis to establish that ITER meets regulatory requirements. We have a further opportunity to do more to prove more of fusion's potential safety and environmental advantages and maximize the amount of ITER technology on the path toward fusion power plants. To fulfill these tasks, we need to overcome three programmatic challenges and three technical challenges. The first programmatic challenge is to fund a comprehensive safety and environmental ITER R&D plan. Second is to strengthen safety and environment work and personnel in the international team. Third is to establish an external consultant group to advise the ITER Joint Team on designing ITER to meet safety requirements for siting by any of the Parties. The first of the three key technical challenges is plasma engineering -- burn control, plasma shutdown, disruptions, tritium burn fraction, and steady state operation. The second is the divertor, including tritium inventory, activation hazards, chemical reactions, and coolant disturbances. The third technical challenge is optimization of design requirements considering safety risk, technical risk, and cost. Some design requirements are now too strict; some are too lax. Fuel cycle design requirements are presently too strict, mandating inappropriate T separation from H and D. Heat sink requirements are presently too lax; they should be strengthened to ensure that maximum loss-of-coolant accident temperatures drop.

  8. Fusion Power measurement at ITER

    SciTech Connect

    Bertalot, L.; Barnsley, R.; Krasilnikov, V.; Stott, P.; Suarez, A.; Vayakis, G.; Walsh, M.

    2015-07-01

    Nuclear fusion research aims to provide energy for the future in a sustainable way, and the ITER project scope is to demonstrate the feasibility of nuclear fusion energy. ITER is a nuclear experimental reactor based on a large scale fusion plasma (tokamak type) device generating Deuterium - Tritium (DT) fusion reactions with emission of 14 MeV neutrons producing up to 700 MW fusion power. The measurement of fusion power, i.e. total neutron emissivity, will play an important role for achieving ITER goals, in particular the fusion gain factor Q related to the reactor performance. Particular attention is also given to the development of the neutron calibration strategy, whose main scope is to achieve the required accuracy of 10% for the measurement of fusion power. Neutron Flux Monitors located in diagnostic ports and inside the vacuum vessel will measure ITER total neutron emissivity, expected to range from 10^14 n/s in Deuterium - Deuterium (DD) plasmas up to almost 10^21 n/s in DT plasmas. The neutron detection systems, as well as all other ITER diagnostics, have to withstand high nuclear radiation and electromagnetic fields as well as ultrahigh vacuum and thermal loads. (authors)

  9. Relaxation Criteria for Iterated Traffic Simulations

    NASA Astrophysics Data System (ADS)

    Kelly, Terence; Nagel, Kai

    Iterative transportation microsimulations adjust traveler route plans by iterating between a microsimulation and a route planner. At each iteration, the route planner adjusts individuals' route choices based on the preceding microsimulations. Empirically, this process yields good results, but it is usually unclear when to stop the iterative process when modeling real-world traffic. This paper investigates several criteria to judge relaxation of the iterative process, emphasizing criteria related to traveler decision-making.

  10. ITER plant layout and site services

    NASA Astrophysics Data System (ADS)

    Chuyanov, V. A.

    2000-03-01

    The ITER site has not yet been determined. Nevertheless, to develop a construction plan and a cost estimate, it is necessary to have a detailed layout of the buildings, structures and outdoor equipment integrated with the balance of plant service systems prototypical of large fusion power plants. These services include electrical power for magnet feeds and plasma heating systems, cryogenic and conventional cooling systems, compressed air, gas supplies, demineralized water, steam and drainage. Nuclear grade facilities are provided to handle tritium fuel and activated waste, as well as to prevent radiation exposure of workers and the public. To prevent interference between services of different types and for efficient arrangement of buildings, structures and equipment within the site area, a plan was developed which segregated different classes of services to four quadrants surrounding the tokamak building, placed at the approximate geographical centre of the site. The locations of the buildings on the generic site were selected to meet all design requirements at minimum total project cost. A similar approach was used to determine the locations of services above, at and below grade. The generic site plan can be adapted to the site selected for ITER without significant changes to the buildings or equipment. Some rearrangements may be required by site topography, resulting primarily in changes to the length of services that link the buildings and equipment.

  11. Construction Safety Forecast for ITER

    SciTech Connect

    cadwallader, lee charles

    2006-11-01

    The International Thermonuclear Experimental Reactor (ITER) project is poised to begin its construction activity. This paper gives an estimate of construction safety as if the experiment was being built in the United States. This estimate of construction injuries and potential fatalities serves as a useful forecast of what can be expected for construction of such a major facility in any country. These data should be considered by the ITER International Team as it plans for safety during the construction phase. Based on average U.S. construction rates, ITER may expect a lost workday case rate of < 4.0 and a fatality count of 0.5 to 0.9 persons per year.

  12. Error Field Correction in ITER

    SciTech Connect

    Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.

    2008-05-22

    A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.

  13. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
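
    The WBMS step itself is simple to sketch: draw variable subsets with per-variable inclusion weights, score each sub-model, and set the new weights to the inclusion frequency within the best sub-models so that the variable space shrinks. Plain least squares on a holdout split stands in for the paper's calibration model:

        # Weighted binary matrix sampling (schematic).
        import numpy as np

        def wbms_select(X, y, n_models=500, n_steps=10, top_frac=0.1, seed=0):
            rng = np.random.default_rng(seed)
            n, p = X.shape
            tr, te = slice(0, int(0.7 * n)), slice(int(0.7 * n), n)
            w = np.full(p, 0.5)                         # initial inclusion probabilities
            for _ in range(n_steps):
                masks = rng.random((n_models, p)) < w   # weighted binary matrix
                errors = np.full(n_models, np.inf)
                for i, m in enumerate(masks):
                    if not m.any():
                        continue
                    coef, *_ = np.linalg.lstsq(X[tr, m], y[tr], rcond=None)
                    errors[i] = np.mean((X[te, m] @ coef - y[te]) ** 2)
                best = masks[np.argsort(errors)[: int(top_frac * n_models)]]
                w = best.mean(axis=0)                   # frequency among best sub-models
            return w

        rng = np.random.default_rng(4)
        X = rng.normal(size=(100, 10))
        y = X[:, 2] - 2 * X[:, 7] + 0.1 * rng.normal(size=100)
        print(wbms_select(X, y).round(2))               # high weights for variables 2, 7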

  14. On pre-image iterations for speech enhancement.

    PubMed

    Leitner, Christina; Pernkopf, Franz

    2015-01-01

    In this paper, we apply kernel PCA for speech enhancement and derive pre-image iterations for speech enhancement. Both methods make use of a Gaussian kernel. The kernel variance serves as tuning parameter that has to be adapted according to the SNR and the desired degree of de-noising. We develop a method to derive a suitable value for the kernel variance from a noise estimate to adapt pre-image iterations to arbitrary SNRs. In experiments, we compare the performance of kernel PCA and pre-image iterations in terms of objective speech quality measures and automatic speech recognition. The speech data is corrupted by white and colored noise at 0, 5, 10, and 15 dB SNR. As a benchmark, we provide results of the generalized subspace method, of spectral subtraction, and of the minimum mean-square error log-spectral amplitude estimator. In terms of the scores of the PEASS (Perceptual Evaluation Methods for Audio Source Separation) toolbox, the proposed methods achieve a similar performance as the reference methods. The speech recognition experiments show that the utterances processed by pre-image iterations achieve a consistently better word recognition accuracy than the unprocessed noisy utterances and than the utterances processed by the generalized subspace method.
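
    For a Gaussian kernel, the pre-image step is typically the classical fixed-point iteration: a kernel-weighted mean of the training samples. The expansion coefficients below are placeholders; in kernel PCA they come from projecting the noisy point onto the leading principal components:

        # Fixed-point pre-image iteration for a Gaussian kernel.
        import numpy as np

        def preimage(X, gamma, sigma2, z0, n_iter=100, tol=1e-8):
            z = z0.copy()
            for _ in range(n_iter):
                k = gamma * np.exp(-((X - z) ** 2).sum(axis=1) / (2 * sigma2))
                z_new = (k[:, None] * X).sum(axis=0) / k.sum()  # weighted mean
                if np.linalg.norm(z_new - z) < tol:
                    return z_new
                z = z_new
            return z

        rng = np.random.default_rng(5)
        X = rng.normal(size=(50, 2))
        gamma = np.full(50, 1.0 / 50)      # placeholder expansion coefficients
        print(preimage(X, gamma, sigma2=1.0, z0=X[0]))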

  15. Iterative Restoration Of Tomosynthetic Slices

    NASA Astrophysics Data System (ADS)

    Ruttimann, U. E.; Groenhuis, R. A.; Webber, R. L.

    1984-08-01

    Tomosynthetic reconstructions suffer from the disadvantage that blurred images of object detail lying outside the plane of interest are superimposed over the desired image of structures in the tomosynthetic plane. It is proposed to selectively reduce these undesired superimpositions by a constrained iterative restoration method. Sufficient conditions are derived ensuring the convergence of the iterations to the exact solution in the absence of noise and constraints. Although in practice the restoration process must be left incomplete because of noise and quantization artifacts, the experimental results demonstrate that for reasons of stability these convergence conditions must be satisfied.
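
    A generic constrained iteration of the Van Cittert type conveys the idea (a stand-in for the paper's scheme; the convergence conditions the paper derives constrain the blur operator's spectrum):

        # f <- P[f + (g - H f)], with P a positivity constraint.
        import numpy as np

        def restore(g, psf, n_iter=50):
            blur = lambda f: np.convolve(f, psf, mode="same")
            f = g.copy()
            for _ in range(n_iter):
                f = np.maximum(f + (g - blur(f)), 0.0)   # residual step + constraint
            return f

        psf = np.array([0.25, 0.5, 0.25])
        truth = np.zeros(32); truth[10] = 1.0; truth[20] = 0.5
        g = np.convolve(truth, psf, mode="same")
        print(np.round(restore(g, psf), 2)[8:23])        # peaks sharpen toward truth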

  16. The real mission of ITER

    SciTech Connect

    Wurden, G A

    2009-01-01

    For future machines, the plasma stored energy is going up by factors of 20-40x, and plasma currents by 2-3x, while the surface to volume ratio is at the same time decreasing. Therefore the disruption forces, even for constant B (which scale like I×B), and associated possible localized heating on machine components, are more severe. Notably, Tore Supra has demonstrated removal of more than 1 GJ of input energy over a nearly 400 second period. However, the instantaneous stored energy in the Tore Supra system (which is most directly related to the potential for disruption damage) is quite small compared to other large tokamaks. The goal of ITER is routinely described as studying DT burning plasmas with a Q ~ 10. In reality, ITER has a much more important first order mission. In fact, if it fails at this mission, the consequences are that ITER will never get to the eventual stated purpose of studying a burning plasma. The real mission of ITER is to study (and demonstrate successfully) plasma control with ~10-17 MA toroidal currents and ~100-400 MJ plasma stored energy levels in long-pulse scenarios. Before DT operation is ever given a go-ahead in ITER, the reality is that ITER must demonstrate routine and reliable control of high energy hydrogen (and deuterium) plasmas. The difficulty is that ITER must simultaneously deal with several technical problems: (1) heat removal at the plasma/wall interface, (2) protection of the wall components from off-normal events, and (3) generation of dust/redeposition of first wall materials. All previous tokamaks have encountered hundreds of major disruptions in the course of their operation. The consequences of a few MA of runaway electrons (at 20-50 MeV) being generated in ITER, and then being lost to the walls, are simply catastrophic. They will not be deposited globally, but will drift out (up, down, whatever, depending on control system), and impact internal structures, unless 'ameliorated'. Basically, this

  17. Optimal application of Morrison's iterative noise removal for deconvolution

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1986-01-01

    Morrison's iterative method of noise removal can be applied for both noise removal alone and noise removal prior to deconvolution. This method is applied to noise of various noise levels added to determine the optimum use of the method. The phase shift method of migration and modeling is evaluated and the results are compared to Stolt's approach. A method is introduced by which the optimum iterative number for deconvolution can be found. Statistical computer simulation is used to describe the optimum use of two convergent iterative techniques for seismic data. The Always-Convergent deconvolution technique was applied to data recorded during the quantitative analysis of materials through NonDestructive Evaluation (NDE) in which ultrasonic signals were used to detect flaws in substances such as composites.

  18. Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report

    SciTech Connect

    Saad, Yousef

    2014-01-16

    The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithms development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation on how to practically update a preconditioner obtained from an ILU-type factorization, when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from Multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver - called pARMS [new version is version 3]. As part of this we have tested the code in complex settings - including the solution of Maxwell and Helmholtz equations and for a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the

  19. Iterative method for interferogram processing

    NASA Astrophysics Data System (ADS)

    Kotlyar, Victor V.; Seraphimovich, P. G.; Zalyalov, Oleg K.

    1994-12-01

    We have developed and numerically evaluated an iterative algorithm for interferogram processing that combines the Fourier-transform method, the Gerchberg-Papoulis algorithm, and Wiener-filter-based regularization. For signal-to-noise ratios of at least 1, it has been possible to reconstruct the phase of an object field with better than 5% accuracy.
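
    The Gerchberg-Papoulis ingredient alternates between enforcing the measured samples in the signal domain and the known band limit in the Fourier domain. A toy 1-D version with invented data (the full method combines this with Fourier-transform fringe analysis and Wiener regularization):

        # Band-limited extrapolation from incomplete samples.
        import numpy as np

        n = 128
        t = np.arange(n)
        truth = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
        known = np.zeros(n, bool); known[::3] = True   # only every 3rd sample measured
        band = np.zeros(n, bool)                       # band limit: |freq index| <= 8
        band[:9] = band[-8:] = True

        x = np.zeros(n)
        for _ in range(200):
            x[known] = truth[known]                    # enforce measured samples
            X = np.fft.fft(x); X[~band] = 0            # enforce the band limit
            x = np.fft.ifft(X).real
        print(np.abs(x - truth).max())                 # small reconstruction error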

  20. Energetic ions in ITER plasmas

    SciTech Connect

    Pinches, S. D.; Chapman, I. T.; Sharapov, S. E.; Lauber, Ph. W.; Oliver, H. J. C.; Shinohara, K.; Tani, K.

    2015-02-15

    This paper discusses the behaviour and consequences of the expected populations of energetic ions in ITER plasmas. It begins with a careful analytic and numerical consideration of the stability of Alfvén Eigenmodes in the ITER 15 MA baseline scenario. The stability threshold is determined by balancing the energetic ion drive against the dominant damping mechanisms and it is found that only in the outer half of the plasma (r/a>0.5) can the fast ions overcome the thermal ion Landau damping. This is in spite of the reduced numbers of alpha-particles and beam ions in this region but means that any Alfvén Eigenmode-induced redistribution is not expected to influence the fusion burn process. The influence of energetic ions upon the main global MHD phenomena expected in ITER's primary operating scenarios, including sawteeth, neoclassical tearing modes and Resistive Wall Modes, is also reviewed. Fast ion losses due to the non-axisymmetric fields arising from the finite number of toroidal field coils, the inclusion of ferromagnetic inserts, the presence of test blanket modules containing ferromagnetic material, and the fields created by the Edge Localised Mode (ELM) control coils in ITER are discussed. The greatest losses and associated heat loads onto the plasma facing components arise due to the use of the ELM control coils and come from neutral beam ions that are ionised in the plasma edge.

  1. Networking Theories by Iterative Unpacking

    ERIC Educational Resources Information Center

    Koichu, Boris

    2014-01-01

    An iterative unpacking strategy consists of sequencing empirically-based theoretical developments so that at each step of theorizing one theory serves as an overarching conceptual framework, in which another theory, either existing or emerging, is embedded in order to elaborate on the chosen element(s) of the overarching theory. The strategy is…

  2. Learning to improve iterative repair scheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene

    1992-01-01

    This paper presents a general learning method for dynamically selecting between repair heuristics in an iterative repair scheduling system. The system employs a version of explanation-based learning called Plausible Explanation-Based Learning (PEBL) that uses multiple examples to confirm conjectured explanations. The basic approach is to conjecture contradictions between a heuristic and statistics that measure the quality of the heuristic. When these contradictions are confirmed, a different heuristic is selected. To motivate the utility of this approach we present an empirical evaluation of the performance of a scheduling system with respect to two different repair strategies. We show that the scheduler that learns to choose between the heuristics outperforms the same scheduler with either of the two heuristics alone.
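
    The following toy sketch conveys the flavor of statistics-driven heuristic switching described above; it is not the PEBL system, and the window size, quality floor, and heuristic interface are invented for illustration.

    ```python
    # Toy sketch: switch repair heuristics when running repair statistics
    # degrade (a crude stand-in for PEBL's confirmed contradictions).
    def repair_schedule(schedule, heuristics, steps=1000, window=50, floor=0.4):
        current, history = 0, []
        for _ in range(steps):
            improved = heuristics[current](schedule)  # True if the repair helped
            history.append(improved)
            recent = history[-window:]
            if len(recent) == window and sum(recent) / window < floor:
                current = (current + 1) % len(heuristics)  # pick other heuristic
                history.clear()
        return schedule
    ```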

  3. Progress of IRSN R&D on ITER Safety Assessment

    NASA Astrophysics Data System (ADS)

    Van Dorsselaere, J. P.; Perrault, D.; Barrachin, M.; Bentaib, A.; Gensdarmes, F.; Haeck, W.; Pouvreau, S.; Salat, E.; Seropian, C.; Vendel, J.

    2012-08-01

    The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support of the French "Autorité de Sûreté Nucléaire", is analysing the safety of the ITER fusion installation on the basis of the ITER operator's safety file. IRSN set up a multi-year R&D program in 2007 to support this safety assessment process. Priority has been given to four technical issues, and the main outcomes of the work done in 2010 and 2011 are summarized in this paper: for simulation of accident scenarios in the vacuum vessel, adaptation of the ASTEC system code; for the risk of explosion of gas-dust mixtures in the vacuum vessel, adaptation of the TONUS-CFD code for gas distribution, development of the DUST code for dust transport, and preparation of IRSN experiments on gas inerting, dust mobilization, and hydrogen-dust mixture explosions; for evaluation of the efficiency of the detritiation systems, thermo-chemical calculations of tritium speciation during transport in the gas phase and preparation of future experiments to evaluate the most influential factors on detritiation; for material neutron activation, adaptation of the VESTA Monte Carlo depletion code. The first results of these tasks were used in 2011 for the analysis of the ITER safety file. In the near future, this global R&D programme may be reoriented to account for the feedback from the latter analysis or for new knowledge.

  4. Application of Adaptive Design Methodology in Development of a Long-Acting Glucagon-Like Peptide-1 Analog (Dulaglutide): Statistical Design and Simulations

    PubMed Central

    Skrivanek, Zachary; Berry, Scott; Berry, Don; Chien, Jenny; Geiger, Mary Jane; Anderson, James H.; Gaydos, Brenda

    2012-01-01

    Background Dulaglutide (dula, LY2189265), a long-acting glucagon-like peptide-1 analog, is being developed to treat type 2 diabetes mellitus. Methods To foster the development of dula, we designed a two-stage adaptive, dose-finding, inferentially seamless phase 2/3 study. The Bayesian theoretical framework is used to adaptively randomize patients in stage 1 to 7 dula doses and, at the decision point, to either stop for futility or to select up to 2 dula doses for stage 2. After dose selection, patients continue to be randomized to the selected dula doses or comparator arms. Data from patients assigned the selected doses will be pooled across both stages and analyzed with an analysis of covariance model, using baseline hemoglobin A1c and country as covariates. The operating characteristics of the trial were assessed by extensive simulation studies. Results Simulations demonstrated that the adaptive design would identify the correct doses 88% of the time, compared to as low as 6% for a fixed-dose design (the latter value based on frequentist decision rules analogous to the Bayesian decision rules for adaptive design). Conclusions This article discusses the decision rules used to select the dula dose(s); the mathematical details of the adaptive algorithm—including a description of the clinical utility index used to mathematically quantify the desirability of a dose based on safety and efficacy measurements; and a description of the simulation process and results that quantify the operating characteristics of the design. PMID:23294775

  5. Iterative simulated quenching for designing irregular-spot-array generators.

    PubMed

    Gillet, J N; Sheng, Y

    2000-07-10

    We propose a novel, to our knowledge, algorithm of iterative simulated quenching with temperature rescaling for designing diffractive optical elements, based on an analogy between simulated annealing and statistical thermodynamics. The temperature is iteratively rescaled at the end of each quenching process according to ensemble statistics, to bring the system back from a frozen imperfect state with a local minimum of energy to a dynamic state in a Boltzmann heat bath in thermal equilibrium at the rescaled temperature. The new algorithm achieves a much lower cost function and reconstruction error and a higher diffraction efficiency than conventional simulated annealing with a fast exponential cooling schedule, and it is easy to program. The algorithm is used to design binary-phase generators of large irregular spot arrays. The diffractive phase elements have trapezoidal apertures of varying heights, which fit ideal arbitrary-shaped apertures better than trapezoidal apertures of fixed heights do.
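
    A toy version of the quench-and-rescale loop is sketched below; the cost function, neighbor move, cooling rate, and the simple halving rescale rule are invented for illustration, whereas the paper derives the rescaled temperature from ensemble statistics.

    ```python
    # Toy iterative quenching with temperature rescaling: each cycle cools
    # fast (exponential schedule), then restarts from a rescaled temperature.
    import math
    import random

    def quench_cycles(x0, cost, neighbor, cycles=5, steps=2000, t0=1.0, alpha=0.995):
        x, t_start = x0, t0
        for _ in range(cycles):
            t = t_start
            for _ in range(steps):
                y = neighbor(x)
                d = cost(y) - cost(x)
                if d < 0 or random.random() < math.exp(-d / max(t, 1e-12)):
                    x = y                    # Metropolis acceptance
                t *= alpha                   # fast exponential cooling
            t_start *= 0.5                   # crude stand-in for the rescale rule
        return x
    ```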

  6. Bioinspired iterative synthesis of polyketides

    PubMed Central

    Zheng, Kuan; Xie, Changmin; Hong, Ran

    2015-01-01

    A diverse array of biopolymers and secondary metabolites (particularly polyketide natural products) is manufactured in nature through the enzymatic iterative assembly of simple building blocks. Inspired by this strategy, molecules with inherent modularity can be efficiently synthesized by the repeated succession of similar reaction sequences. This privileged strategy has been widely adopted in synthetic supramolecular chemistry, and its value has also been recognized in natural product synthesis. A brief overview of this approach is given, with particular emphasis on the total synthesis of polyol-embedded polyketides, a class of vastly diverse and biologically significant natural products. This viewpoint also illustrates the limits of known individual modules in terms of diastereoselectivity and enantioselectivity. More efficient and practical iterative strategies are anticipated in future development. PMID:26052510

  7. I-mode for ITER?

    NASA Astrophysics Data System (ADS)

    Whyte, D. G.; Marmar, E.; Hubbard, A.; Hughes, J.; Dominguez, A.; Greenwald, M.

    2011-10-01

    I-mode is a recently explored confinement regime that features a temperature pedestal and H-mode energy confinement, yet with L-mode particle confinement and neither a density pedestal nor large ELMs. Experiments on Alcator C-Mod and ASDEX-Upgrade show this leads to a stationary collisionless pedestal that inherently does not require ELMs for core impurity and particle control, possibly making I-mode an attractive operating regime for ITER, where ELM heat pulses are expected to surpass material limits. We speculate as to how I-mode could be obtained, maintained and exploited for the ITER burning plasma physics mission. Issues examined include I-mode topology and power threshold requirements, pedestal formation, density control, avoiding H-mode, and the response of I-mode to alpha self-heating. Key uncertainties requiring further investigation are identified. Supported by the US DOE Cooperative Agreement DE-FC02-99ER54512.

  8. Spectroscopic problems in ITER diagnostics

    NASA Astrophysics Data System (ADS)

    Lisitsa, V. S.; Bureyeva, L. A.; Kukushkin, A. B.; Kadomtsev, M. B.; Krupin, V. A.; Levashova, M. G.; Medvedev, A. A.; Mukhin, E. E.; Shurygin, V. A.; Tugarinov, S. N.; Vukolov, K. Yu

    2012-12-01

    Problems of spectroscopic diagnostics of ITER plasmas are considered. Three types of diagnostics are presented: (1) Balmer-line spectroscopy in the edge and divertor plasmas; (2) Thomson scattering; (3) charge exchange recombination spectroscopy (CXRS). The Zeeman-Stark structure of line shapes is discussed. The overlapping of H-D-T isotope spectral line shapes is presented for the SOL and divertor conditions. Polarization measurements of H-alpha spectral lines for an H-D mixture on the T-10 tokamak are shown in order to separate the Zeeman splitting in more detail. The problem of plasma background radiation emission for Thomson scattering in ITER is discussed in detail. The line shape of the P-7 hydrogen spectral line, whose wavelength is close to that of the laser, is presented together with the continuum radiation. CXRS is discussed in detail, and data on Dα, HeII and CVI measurements in CXRS experiments on the T-10 tokamak are presented.

  9. Design of ITER Relief Lines

    NASA Astrophysics Data System (ADS)

    Shah, N.; Choukekar, K.; Jadon, M.; Sarkar, B.; Joshi, B.; Kanzaria, H.; Gehani, V.; Vyas, H.; Pandya, U.; Panjwani, R.; Badgujar, S.; Monneret, E.

    2017-02-01

    The ITER cryogenic system is one of the most complex cryogenic systems in the world. It includes roughly 5 km of cryogenic transfer lines (cryolines) with a large number of layout singularities in terms of bends at odd angles and branches. The relief lines are particularly important cryolines, as they collect the helium from the outlets of all process safety valves of the cryogenic clients and transfer it back to the cryoplant. The total length of the ITER relief lines is around 1.6 km, with process pipe sizes varying from DN 50 to DN 200. While some parts of the relief lines carry warm helium for the recovery system, most of the relief line is vacuum-jacketed cryoline carrying cold helium from the clients. The final detailed design of the relief lines has been completed. The paper describes the major input data and constraints for the design of the relief lines, the design steps, the flexibility and structural analysis approach, and the major design outcomes.

  10. ITER Plasma Control System Development

    NASA Astrophysics Data System (ADS)

    Snipes, Joseph; ITER PCS Design Team

    2015-11-01

    The development of the ITER Plasma Control System (PCS) continues with the preliminary design phase for 1st plasma and early plasma operation in H/He up to Ip = 15 MA in L-mode. The design is being developed through a contract between the ITER Organization and a consortium of plasma control experts from EU and US fusion laboratories, which is expected to be completed in time for a design review at the end of 2016. This design phase concentrates on breakdown including early ECH power and magnetic control of the poloidal field null, plasma current, shape, and position. Basic kinetic control of the heating (ECH, ICH, NBI) and fueling systems is also included. Disruption prediction, mitigation, and maintaining stable operation are also included because of the high magnetic and kinetic stored energy present already for early plasma operation. Support functions for error field topology and equilibrium reconstruction are also required. All of the control functions also must be integrated into an architecture that will be capable of the required complexity of all ITER scenarios. A database is also being developed to collect and manage PCS functional requirements from operational scenarios that were defined in the Conceptual Design with links to proposed event handling strategies and control algorithms for initial basic control functions. A brief status of the PCS development will be presented together with a proposed schedule for design phases up to DT operation.

  11. Adaptive reshaping of objects in (multiparameter) Hilbert space for enhanced detection and classification: an application of receiver operating curve statistics to laser-based mass spectroscopy.

    PubMed

    Romanov, Dmitri A; Healy, Dennis M; Brady, John J; Levis, Robert J

    2008-05-01

    We propose a new approach to the classical detection problem of discrimination of a true signal of interest from an interferent signal, which may be applied to the area of chemical sensing. We show that the detection performance, as quantified by the receiver operating curve (ROC), can be substantially improved when the signal is represented by a multicomponent data set that is actively manipulated by means of a shaped laser probe pulse. In this case, the signal sought (agent) and the interfering signal (interferent) are visualized by vectors in a multidimensional detection space. Separation of these vectors can be achieved by adaptive modification of a probing laser pulse to actively manipulate the Hamiltonian of the agent and interferent. We demonstrate one implementation of the concept of adaptive rotation of signal vectors to chemical agent detection by means of strong-field time-of-flight mass spectrometry.

  12. Cryogenic instrumentation for ITER magnets

    NASA Astrophysics Data System (ADS)

    Poncet, J.-M.; Manzagol, J.; Attard, A.; André, J.; Bizel-Bizellot, L.; Bonnay, P.; Ercolani, E.; Luchier, N.; Girard, A.; Clayton, N.; Devred, A.; Huygen, S.; Journeaux, J.-Y.

    2017-02-01

    Accurate measurement of the helium flow rate and of the temperature of the ITER magnets is of fundamental importance to ensure that the magnets operate under well-controlled and reliable conditions, and to allow suitable helium flow distribution in the magnets through the helium piping. The temperature and flow rate measurements shall therefore be reliable and accurate. In this paper, we present the thermometric chains as well as the venturi flow meters installed in the ITER magnets and their helium piping. The presented thermometric block design is based on the design developed by CERN for the LHC, which has been further optimized via thermal simulations carried out by CEA. The electronic part of the thermometric chain was entirely developed by the CEA and is presented in detail: it is based on lock-in measurement and small-signal amplification, and it also provides a web interface and software interface to an industrial PLC. This measuring device provides a reliable, accurate, electromagnetically immune, and fast (up to 100 Hz bandwidth) system for resistive temperature sensors from a few ohms to 100 kΩ. The flowmeters (venturi type) which make up part of the helium mass flow measurement chain have been completely designed, and manufacturing is ongoing. The behaviour of the helium gas has been studied in detail with the ANSYS CFX software in order to obtain the same differential pressure for all types of flowmeters. Measurement uncertainties have been estimated and the influence of input parameters has been studied. Mechanical calculations have been performed to guarantee the mechanical strength of the venturis, as required for pressure equipment operating in a nuclear environment. To complete the helium mass flow measurement chain, different technologies of absolute and differential pressure sensors have been tested in an applied magnetic field to identify equipment compatible with the ITER environment.

  13. Iterates of maps with symmetry

    NASA Technical Reports Server (NTRS)

    Chossat, Pascal; Golubitsky, Martin

    1988-01-01

    Fixed-point bifurcation, period doubling, and Hopf bifurcation (HB) for iterates of equivariant mappings are investigated analytically, with a focus on HB in the presence of symmetry. An algebraic formulation for the hypotheses of the theorem of Ruelle (1973) is derived, and the case of standing waves in a system of ordinary differential equations with O(2) symmetry is considered in detail. In this case, it is shown that HB can lead directly to motion on an invariant 3-torus, with an unexpected third frequency due to drift of standing waves along the torus.

  14. Color Image Denoising via Discriminatively Learned Iterative Shrinkage.

    PubMed

    Sun, Jian; Sun, Jian; Xu, Zongben

    2015-11-01

    In this paper, we propose a novel model, a discriminatively learned iterative shrinkage (DLIS) model, for color image denoising. The DLIS is a generalization of wavelet shrinkage by iteratively performing shrinkage over patch groups and whole image aggregation. We discriminatively learn the shrinkage functions and basis from the training pairs of noisy/noise-free images, which can adaptively handle different noise characteristics in luminance/chrominance channels, and the unknown structured noise in real-captured color images. Furthermore, to remove the splotchy real color noises, we design a Laplacian pyramid-based denoising framework to progressively recover the clean image from the coarsest scale to the finest scale by the DLIS model learned from the real color noises. Experiments show that our proposed approach can achieve the state-of-the-art denoising results on both synthetic denoising benchmark and real-captured color images.

  15. Guaranteeing Convergence of Iterative Skewed Voting Algorithms for Image Segmentation

    PubMed Central

    Balcan, Doru C.; Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena

    2012-01-01

    In this paper we provide rigorous proof for the convergence of an iterative voting-based image segmentation algorithm called Active Masks. Active Masks (AM) was proposed to solve the challenging task of delineating punctate patterns of cells from fluorescence microscope images. Each iteration of AM consists of a linear convolution composed with a nonlinear thresholding; what makes this process special in our case is the presence of additive terms whose role is to “skew” the voting when prior information is available. In real-world implementation, the AM algorithm always converges to a fixed point. We study the behavior of AM rigorously and present a proof of this convergence. The key idea is to formulate AM as a generalized (parallel) majority cellular automaton, adapting proof techniques from discrete dynamical systems. PMID:22984338
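
    As a rough illustration of the convolution-plus-threshold iteration analyzed above, consider the minimal sketch below; the uniform voting window and the additive skew term are simplified stand-ins for the full Active Masks machinery.

    ```python
    # Sketch of one Active-Masks-style iteration: a linear convolution (local
    # vote) followed by a nonlinear threshold "skewed" by prior information.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def am_iteration(mask, skew, window=5):
        votes = uniform_filter(mask.astype(float), size=window)  # local average
        return (votes + skew) > 0.5   # the skew term biases the majority vote
    ```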

  16. Multimodal and Adaptive Learning Management: An Iterative Design

    ERIC Educational Resources Information Center

    Squires, David R.; Orey, Michael A.

    2015-01-01

    The purpose of this study is to measure the outcome of a comprehensive learning management system implemented at a Spinal Cord Injury (SCI) hospital in the Southeast United States. Specifically this SCI hospital has been experiencing an evident volume of patients returning seeking more information about the nature of their injuries. Recognizing…

  17. Iterated Stretching of Viscoelastic Jets

    NASA Technical Reports Server (NTRS)

    Chang, Hsueh-Chia; Demekhin, Evgeny A.; Kalaidin, Evgeny

    1999-01-01

    We examine, with asymptotic analysis and numerical simulation, the iterated stretching dynamics of FENE and Oldroyd-B jets of initial radius $r_0$, shear viscosity $\nu$, Weissenberg number $We$, retardation number $S$, and capillary number $Ca$. The usual Rayleigh instability stretches the local uniaxial extensional flow region near a minimum in jet radius into a primary filament of radius $[Ca(1-S)/We]^{1/2} r_0$ between two beads. The strain rate within the filament remains constant while its radius (elastic stress) decreases (increases) exponentially in time with a long elastic relaxation time $3We\,(r_0^2/\nu)$. Instabilities convected from the bead relieve the tension at the necks during this slow elastic drainage and trigger a filament recoil. Secondary filaments then form at the necks from the resulting stretching. This iterated stretching is predicted to occur successively, generating high-generation filaments of radius $r_n$ with $r_n/r_0 = \sqrt{2}\,[r_{n-1}/r_0]^{3/2}$, until finite-extensibility effects set in.
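
    Taking the recursion at face value, a few generations of the filament-radius cascade can be tabulated directly; the starting ratio below is an arbitrary illustrative value, not a number from the paper.

    ```python
    # Iterate r_n/r_0 = sqrt(2) * (r_{n-1}/r_0)**(3/2) for a few generations.
    import math

    r = 0.1  # assumed primary-filament ratio r_1/r_0 (illustrative)
    for n in range(2, 7):
        r = math.sqrt(2.0) * r ** 1.5
        print(f"generation {n}: r_n/r_0 = {r:.3e}")
    ```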

  18. ETR/ITER systems code

    SciTech Connect

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  19. Status of US ITER Diagnostics

    NASA Astrophysics Data System (ADS)

    Stratton, B.; Delgado-Aparicio, L.; Hill, K.; Johnson, D.; Pablant, N.; Barnsley, R.; Bertschinger, G.; de Bock, M. F. M.; Reichle, R.; Udintsev, V. S.; Watts, C.; Austin, M.; Phillips, P.; Beiersdorfer, P.; Biewer, T. M.; Hanson, G.; Klepper, C. C.; Carlstrom, T.; van Zeeland, M. A.; Brower, D.; Doyle, E.; Peebles, A.; Ellis, R.; Levinton, F.; Yuh, H.

    2013-10-01

    The US is providing 7 diagnostics to ITER: the Upper Visible/IR cameras, the Low Field Side Reflectometer, the Motional Stark Effect diagnostic, the Electron Cyclotron Emission diagnostic, the Toroidal Interferometer/Polarimeter, the Core Imaging X-Ray Spectrometer, and the Diagnostic Residual Gas Analyzer. The front-end components of these systems must operate with high reliability in conditions of long pulse operation, high neutron and gamma fluxes, very high neutron fluence, significant neutron heating (up to 7 MW/m³), large radiant and charge exchange heat flux (0.35 MW/m²), and high electromagnetic loads. Opportunities for repair and maintenance of these components will be limited. These conditions lead to significant challenges for the design of the diagnostics. Space constraints, provision of adequate radiation shielding, and development of repair and maintenance strategies are challenges for diagnostic integration into the port plugs that also affect diagnostic design. The current status of design of the US ITER diagnostics is presented and R&D needs are identified. Supported by DOE contracts DE-AC02-09CH11466 (PPPL) and DE-AC05-00OR22725 (UT-Battelle, LLC).

  20. Challenges for Cryogenics at ITER

    NASA Astrophysics Data System (ADS)

    Serio, L.

    2010-04-01

    Nuclear fusion of light nuclei is a promising option to provide clean, safe and cost competitive energy in the future. The ITER experimental reactor being designed by seven partners representing more than half of the world population will be assembled at Cadarache, South of France in the next decade. It is a thermonuclear fusion Tokamak that requires high magnetic fields to confine and stabilize the plasma. Cryogenic technology is extensively employed to achieve low-temperature conditions for the magnet and vacuum pumping systems. Efficient and reliable continuous operation shall be achieved despite unprecedented dynamic heat loads due to magnetic field variations and neutron production from the fusion reaction. Constraints and requirements of the largest superconducting Tokamak machine have been analyzed. Safety and technical risks have been initially assessed and proposals to mitigate the consequences analyzed. Industrial standards and components are being investigated to anticipate the requirements of reliable and efficient large scale energy production. After describing the basic features of ITER and its cryogenic system, we shall present the key design requirements, improvements, optimizations and challenges.

  1. ITER Port Interspace Pressure Calculations

    SciTech Connect

    Carbajo, Juan J; Van Hove, Walter A

    2016-01-01

    The ITER Vacuum Vessel (VV) is equipped with 54 access ports. Each of these ports has an opening in the bioshield that communicates with a dedicated port cell. During Tokamak operation, the bioshield opening must be closed with a concrete plug to shield the radiation coming from the plasma. This port plug separates the port cell into a Port Interspace (between VV closure lid and Port Plug) on the inner side and the Port Cell on the outer side. This paper presents calculations of pressures and temperatures in the ITER (Ref. 1) Port Interspace after a double-ended guillotine break (DEGB) of a pipe of the Tokamak Cooling Water System (TCWS) with high temperature water. It is assumed that this DEGB occurs during the worst possible conditions, which are during water baking operation, with water at a temperature of 523 K (250 C) and at a pressure of 4.4 MPa. These conditions are more severe than during normal Tokamak operation, with the water at 398 K (125 C) and 2 MPa. Two computer codes are employed in these calculations: RELAP5-3D Version 4.2.1 (Ref. 2) to calculate the blowdown releases from the pipe break, and MELCOR, Version 1.8.6 (Ref. 3) to calculate the pressures and temperatures in the Port Interspace. A sensitivity study has been performed to optimize some flow areas.

  2. Communication-optimal iterative methods

    NASA Astrophysics Data System (ADS)

    Demmel, J.; Hoemmen, M.; Mohiyuddin, M.; Yelick, K.

    2009-07-01

    Data movement, both within the memory system of a single processor node and between multiple nodes in a system, limits the performance of many Krylov subspace methods that solve sparse linear systems and eigenvalue problems. Here, s iterations of algorithms such as CG, GMRES, Lanczos, and Arnoldi perform s sparse matrix-vector multiplications and Ω(s) vector reductions, resulting in a growth of Ω(s) in both single-node and network communication. By reorganizing the sparse matrix kernel to compute a set of matrix-vector products at once and reorganizing the rest of the algorithm accordingly, we can perform s iterations by sending O(log P) messages instead of Ω(s·log P) messages on a parallel machine, and reading the on-node components of the matrix A from DRAM to cache just once on a single node instead of s times. This reduces communication to the minimum possible. We discuss both algorithms and an implementation of GMRES on a single node of an 8-core Intel Clovertown. Our implementations achieve significant speedups over the conventional algorithms.
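
    To make the reorganization concrete, here is a naive sketch of the matrix powers kernel [x, Ax, ..., A^s x] that these methods compute in one pass; the blocking and ghost-zone machinery of a real communication-avoiding kernel is omitted.

    ```python
    # Naive matrix-powers kernel: build the s+1 Krylov basis vectors in one
    # sweep. A real communication-avoiding kernel reads A once, fusing these
    # products to minimise DRAM traffic and messages.
    import numpy as np

    def matrix_powers(A, x, s):
        V = np.empty((s + 1, x.size))
        V[0] = x
        for k in range(1, s + 1):
            V[k] = A @ V[k - 1]   # sparse mat-vec; basis for s iterations
        return V
    ```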

  3. A randomised trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome (PACE): statistical analysis plan

    PubMed Central

    2013-01-01

    Background The publication of protocols by medical journals is increasingly becoming an accepted means for promoting good quality research and maximising transparency. Recently, Finfer and Bellomo have suggested the publication of statistical analysis plans (SAPs). The aim of this paper is to make public and to report in detail the planned analyses that were approved by the Trial Steering Committee in May 2010 for the principal papers of the PACE (Pacing, graded Activity, and Cognitive behaviour therapy: a randomised Evaluation) trial, a treatment trial for chronic fatigue syndrome. It illustrates planned analyses of a complex intervention trial that allows for the impact of clustering by care providers, where multiple care-providers are present for each patient in some but not all arms of the trial. Results The trial design, objectives and data collection are reported. Considerations relating to blinding, samples, adherence to the protocol, stratification, centre and other clustering effects, missing data, multiplicity and compliance are described. Descriptive, interim and final analyses of the primary and secondary outcomes are then outlined. Conclusions This SAP maximises transparency, providing a record of all planned analyses, and it may be a resource for those who are developing SAPs, acting as an illustrative example for teaching and methodological research. It is not the sum of the statistical analysis sections of the principal papers, being completed well before individual papers were drafted. Trial registration ISRCTN54285094 assigned 22 May 2003; First participant was randomised on 18 March 2005. PMID:24225069

  4. Preconditioned iterations to calculate extreme eigenvalues

    SciTech Connect

    Brand, C.W.; Petrova, S.

    1994-12-31

    Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
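
    For contrast with the preconditioned variants discussed, a plain power iteration for the dominant eigenpair looks as follows; the convergence test and tolerance are illustrative choices.

    ```python
    # Plain power iteration: converges at a rate set by the gap between the
    # dominant eigenvalue and the rest of the spectrum, which is exactly
    # what preconditioned (Davidson-like) iterations try to improve.
    import numpy as np

    def power_iteration(A, tol=1e-10, maxit=1000):
        v = np.random.default_rng(0).standard_normal(A.shape[0])
        v /= np.linalg.norm(v)
        lam = 0.0
        for _ in range(maxit):
            w = A @ v
            lam_new = v @ w              # Rayleigh-quotient estimate
            v = w / np.linalg.norm(w)
            if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
                break
            lam = lam_new
        return lam_new, v
    ```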

  5. A noise power spectrum study of a new model-based iterative reconstruction system: Veo 3.0.

    PubMed

    Li, Guang; Liu, Xinming; Dodge, Cristina T; Jensen, Corey T; Rong, X John

    2016-09-08

    The purpose of this study was to evaluate performance of the third generation of model-based iterative reconstruction (MBIR) system, Veo 3.0, based on noise power spectrum (NPS) analysis with various clinical presets over a wide range of clinically applicable dose levels. A CatPhan 600 surrounded by an oval, fat-equivalent ring to mimic patient size/shape was scanned 10 times at each of six dose levels on a GE HD 750 scanner. NPS analysis was performed on images reconstructed with various Veo 3.0 preset combinations for comparison with images reconstructed using Veo 2.0, filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASiR). The new Target Thickness setting resulted in higher noise in thicker axial images. The new Texture Enhancement function achieved a more isotropic noise behavior with fewer image artifacts. Veo 3.0 provides additional reconstruction options designed to allow the user a choice of balance between spatial resolution and image noise, relative to Veo 2.0. Veo 3.0 provides more user-selectable options and, in general, improved isotropic noise behavior in comparison to Veo 2.0. The overall noise reduction performance of both versions of MBIR was improved in comparison to FBP and ASiR, especially at low-dose levels.
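
    A common way to estimate a 2D NPS from repeated scans of a uniform phantom region is sketched below; the normalisation convention and pixel size are assumptions, and published studies differ in detrending details.

    ```python
    # Hedged sketch of a 2D noise power spectrum estimate: average the
    # squared FFT magnitude of mean-subtracted uniform-region ROIs.
    import numpy as np

    def nps_2d(rois, pixel_mm=0.5):
        acc = np.zeros_like(rois[0], dtype=float)
        for roi in rois:
            noise = roi - roi.mean()              # crude detrending
            acc += np.abs(np.fft.fft2(noise)) ** 2
        ny, nx = rois[0].shape
        return acc / len(rois) * (pixel_mm ** 2) / (nx * ny)
    ```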

  6. Liver recognition based on statistical shape model in CT images

    NASA Astrophysics Data System (ADS)

    Xiang, Dehui; Jiang, Xueqing; Shi, Fei; Zhu, Weifang; Chen, Xinjian

    2016-03-01

    In this paper, an automatic method is proposed to recognize the liver in clinical 3D CT images. The proposed method makes effective use of a statistical shape model of the liver. Our approach consists of three main parts: (1) model training, in which shape variability is detected using principal component analysis from manual annotations; (2) model localization, in which a fast Euclidean distance transformation based method localizes the liver in CT images; (3) liver recognition, in which the initial mesh is locally and iteratively adapted to the liver boundary, constrained by the trained shape model. We validate our algorithm on a dataset consisting of 20 3D CT images obtained from different patients. The average ARVD was 8.99%, the average ASSD was 2.69 mm, the average RMSD was 4.92 mm, the average MSD was 28.841 mm, and the average MSD was 13.31%.

  7. Iterative LQG Controller Design Through Closed-Loop Identification

    NASA Technical Reports Server (NTRS)

    Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.

    1996-01-01

    This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed in the previous cycle. Then the identified open-loop model is used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.

  8. IPADE: Iterative prototype adjustment for nearest neighbor classification.

    PubMed

    Triguero, Isaac; Garcia, Salvador; Herrera, Francisco

    2010-12-01

    Nearest prototype methods are a successful trend in many pattern classification tasks. However, they present several shortcomings, such as time response, noise sensitivity, and storage requirements. Data reduction techniques are suitable for alleviating these drawbacks. Prototype generation is an appropriate process for data reduction, which allows the fitting of a dataset for nearest neighbor (NN) classification. This brief presents a methodology to learn iteratively the positioning of prototypes using real-parameter optimization procedures. Concretely, we propose an iterative prototype adjustment technique based on differential evolution. The results obtained are contrasted with nonparametric statistical tests and show that our proposal consistently outperforms previously proposed methods, thus becoming a suitable tool for enhancing the performance of the NN classifier.

  9. Model-based Iterative Reconstruction: Effect on Patient Radiation Dose and Image Quality in Pediatric Body CT

    PubMed Central

    Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.

    2014-01-01

    Purpose To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and those images were reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results In the 25 patients who underwent the reduced-dose protocol, the mean decrease in CTDIvol was 46% (range, 19%–65%) and the mean decrease in SSDE was 44% (range, 19%–64%). Reduced-dose MBIR images had less noise (P < .004). Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues (P < .002), lungs (P < .001), and bones (P < .001). By using the same reduced-dose acquisition, lesion detectability was better (38% [32 of 84 rated lesions]) or the same (62% [52 of 84 rated lesions]) with MBIR as compared with 100% ASIR. Conclusion CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality. © RSNA, 2013

  10. Flight data processing with the F-8 adaptive algorithm

    NASA Technical Reports Server (NTRS)

    Hartmann, G.; Stein, G.; Petersen, K.

    1977-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer and surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.

  11. Iterative phase retrieval without support.

    PubMed

    Wu, J S; Weierstall, U; Spence, J C H; Koch, C T

    2004-12-01

    An iterative phase retrieval method for nonperiodic objects has been developed from the charge-flipping algorithm proposed in crystallography. A combination of the hybrid input-output (HIO) algorithm and the flipping algorithm has greatly improved performance. In this combined algorithm the flipping algorithm serves to find the support (object boundary) dynamically, and the HIO part improves convergence and moves the algorithm out of local minima. It starts with a single intensity measurement in the Fourier domain and does not require a priori knowledge of the support in the image domain. This method is suitable for general image recovery from oversampled diffuse elastic x-ray and electron-diffraction intensities. The relationship between this algorithm and the output-output algorithm is elucidated.
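
    The core HIO update that the combined algorithm builds on can be sketched as follows; for simplicity this version assumes a known support and a positivity constraint, whereas the paper's contribution is precisely to discover the support dynamically via charge flipping.

    ```python
    # One HIO step: enforce the measured Fourier magnitudes, then relax
    # pixels that violate the image-domain (support/positivity) constraint.
    import numpy as np

    def hio_step(g, measured_mag, support, beta=0.9):
        G = np.fft.fft2(g)
        G = measured_mag * np.exp(1j * np.angle(G))   # keep measured magnitudes
        g_prime = np.real(np.fft.ifft2(G))
        return np.where(support & (g_prime > 0),      # constraint satisfied?
                        g_prime,                      # yes: accept
                        g - beta * g_prime)           # no: feedback relaxation
    ```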

  12. Iterative phase retrieval without support

    NASA Astrophysics Data System (ADS)

    Wu, J. S.; Weierstall, U.; Spence, J. C. H.; Koch, C. T.

    2004-12-01

    An iterative phase retrieval method for nonperiodic objects has been developed from the charge-flipping algorithm proposed in crystallography. A combination of the hybrid input-output (HIO) algorithm and the flipping algorithm has greatly improved performance. In this combined algorithm the flipping algorithm serves to find the support (object boundary) dynamically, and the HIO part improves convergence and moves the algorithm out of local minima. It starts with a single intensity measurement in the Fourier domain and does not require a priori knowledge of the support in the image domain. This method is suitable for general image recovery from oversampled diffuse elastic x-ray and electron-diffraction intensities. The relationship between this algorithm and the output-output algorithm is elucidated.

  13. Planning as an Iterative Process

    NASA Technical Reports Server (NTRS)

    Smith, David E.

    2012-01-01

    Activity planning for missions such as the Mars Exploration Rover mission presents many technical challenges, including oversubscription, consideration of time, concurrency, resources, preferences, and uncertainty. These challenges have all been addressed by the research community to varying degrees, but significant technical hurdles still remain. In addition, the integration of these capabilities into a single planning engine remains largely unaddressed. However, I argue that there is a deeper set of issues that needs to be considered, namely the integration of planning into an iterative process that begins before the goals, objectives, and preferences are fully defined. This introduces a number of technical challenges for planning, including the ability to more naturally specify and utilize constraints on the planning process, the ability to generate multiple qualitatively different plans, and the ability to provide deep explanation of plans.

  14. Benchmarking ICRF simulations for ITER

    SciTech Connect

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase, with the bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  15. Ultralow dose dentomaxillofacial CT imaging and iterative reconstruction techniques: variability of Hounsfield units and contrast-to-noise ratio

    PubMed Central

    Bischel, Alexander; Stratis, Andreas; Kakar, Apoorv; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben

    2016-01-01

    Objective: The aim of this study was to evaluate whether application of ultralow dose protocols and iterative reconstruction technology (IRT) influence quantitative Hounsfield units (HUs) and contrast-to-noise ratio (CNR) in dentomaxillofacial CT imaging. Methods: A phantom with inserts of five types of materials was scanned using protocols for (a) a clinical reference for navigated surgery (CT dose index volume 36.58 mGy), (b) low-dose sinus imaging (18.28 mGy) and (c) four ultralow dose protocols (4.14, 2.63, 0.99 and 0.53 mGy). All images were reconstructed using: (i) filtered back projection (FBP); (ii) IRT: adaptive statistical iterative reconstruction-50 (ASIR-50), ASIR-100 and model-based iterative reconstruction (MBIR); and (iii) standard (std) and bone kernels. Mean HU, CNR and average HU error after recalibration were determined. Each combination of protocols was compared using Friedman analysis of variance, followed by Dunn's multiple comparison test. Results: Pearson's sample correlation coefficients were all >0.99. Ultralow dose protocols using FBP showed errors of up to 273 HU. Std kernels had less HU variability than bone kernels. MBIR reduced the error value for the lowest dose protocol to 138 HU and retained the highest relative CNR. ASIR could not demonstrate significant advantages over FBP. Conclusions: Considering a potential dose reduction to as low as 1.5% of a std protocol, ultralow dose protocols and IRT should be further tested for clinical dentomaxillofacial CT imaging. Advances in knowledge: HU as a surrogate for bone density may vary significantly in CT ultralow dose imaging. However, use of std kernels and MBIR technology reduces HU error values and may retain the highest CNR. PMID:26859336

  16. Sequence analysis by iterated maps, a review.

    PubMed

    Almeida, Jonas S

    2014-05-01

    Among alignment-free methods, Iterated Maps (IMs) are on a particular extreme: they are also scale free (order free). The use of IMs for sequence analysis is also distinct from other alignment-free methodologies in being rooted in statistical mechanics instead of computational linguistics. Both of these roots go back over two decades to the use of fractal geometry in the characterization of phase-space representations. The time series analysis origin of the field is betrayed by the title of the manuscript that started this alignment-free subdomain in 1990, 'Chaos Game Representation'. The clash between the analysis of sequences as continuous series and the better established use of Markovian approaches to discrete series was almost immediate, with a defining critique published in the same journal 2 years later. The rest of that decade would go by before the scale-free nature of the IM space was uncovered. The ensuing decade saw this scalability generalized for non-genomic alphabets, as well as an interest in its use for graphic representation of biological sequences. Finally, in the past couple of years, in step with the emergence of BigData and MapReduce as a new computational paradigm, there is a surprising third act in the IM story. Multiple reports have described gains in computational efficiency of multiple orders of magnitude over more conventional sequence analysis methodologies. The stage now appears to be set for a recasting of IMs with a central role in processing nextgen sequencing results.
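
    The Chaos Game Representation at the root of this field is compact enough to state in a few lines; the corner assignment below follows the common A/C/G/T square convention.

    ```python
    # Classic Chaos Game Representation of a DNA sequence: each base pulls
    # the current point halfway toward its assigned square corner; the
    # visited points form the iterated-map image described above.
    import numpy as np

    CORNERS = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

    def cgr(seq):
        pts = np.empty((len(seq), 2))
        x, y = 0.5, 0.5
        for i, base in enumerate(seq.upper()):
            cx, cy = CORNERS[base]
            x, y = (x + cx) / 2.0, (y + cy) / 2.0   # midpoint toward corner
            pts[i] = (x, y)
        return pts

    print(cgr("ACGTACGGTC")[-3:])
    ```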

  17. Iterative reconstruction methods in X-ray CT.

    PubMed

    Beister, Marcel; Kolditz, Daniel; Kalender, Willi A

    2012-04-01

    Iterative reconstruction (IR) methods have recently re-emerged in transmission x-ray computed tomography (CT). They were successfully used in the early years of CT, but given up when the amount of measured data increased because of the higher computational demands of IR compared to analytical methods. The availability of large computational capacities in normal workstations and the ongoing efforts towards lower doses in CT have changed the situation; IR has become a hot topic for all major vendors of clinical CT systems in the past 5 years. This review strives to provide information on IR methods and aims at interested physicists and physicians already active in the field of CT. We give an overview of the terminology used and an introduction to the most important algorithmic concepts, including references for further reading. As a practical example, details on a model-based iterative reconstruction algorithm implemented on a modern graphics processing unit (GPU) are presented, followed by application examples for several dedicated CT scanners in order to demonstrate the performance and potential of iterative reconstruction methods. Finally, some general thoughts regarding the advantages and disadvantages of IR methods as well as open points for research in this field are discussed.

  18. New concurrent iterative methods with monotonic convergence

    SciTech Connect

    Yao, Qingchuan

    1996-12-31

    This paper proposes new concurrent iterative methods that use no derivatives for finding all zeros of polynomials simultaneously. The new methods converge monotonically for both simple and multiple real zeros of polynomials and are quadratically convergent. Corresponding accelerated concurrent iterative methods are obtained as well. The new methods are good candidates for application in solving symmetric eigenproblems.

  19. An accelerated subspace iteration for eigenvector derivatives

    NASA Technical Reports Server (NTRS)

    Ting, Tienko

    1991-01-01

    An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.

  20. Rater Variables Associated with ITER Ratings

    ERIC Educational Resources Information Center

    Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin

    2013-01-01

    Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of…

  1. Iterative methods for weighted least-squares

    SciTech Connect

    Bobrovnikova, E.Y.; Vavasis, S.A.

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.

  2. Adaptive management: Chapter 1

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.; Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  3. Descriptive statistics.

    PubMed

    Shi, Runhua; McLarty, Jerry W

    2009-10-01

    In this article, we introduced basic concepts of statistics, types of distributions, and descriptive statistics. A few examples were also provided. The basic concepts presented herein are only a fraction of the concepts related to descriptive statistics. Also, there are many commonly used distributions not presented herein, such as Poisson distributions for rare events, exponential distributions, F distributions, and logistic distributions. More information can be found in many statistics books and publications.

  4. A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography.

    PubMed

    Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen

    2017-02-01

    The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edges of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used.
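
    For orientation, one common form of the SIRT update is sketched below; the row/column normalisation shown is one standard choice, and packages such as TOMOJ and TOMO3D differ in their exact weighting, which is precisely why the iteration count matters.

    ```python
    # One common SIRT formulation: correct the volume by the back-projected
    # residual, normalised by row and column sums of the projection matrix.
    import numpy as np

    def sirt(A, b, n_iter, x=None):
        # A: dense (n_rays, n_voxels) projection matrix; b: measured data.
        x = np.zeros(A.shape[1]) if x is None else x
        row = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # 1 / ray lengths
        col = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # 1 / voxel weights
        for _ in range(n_iter):
            x = x + col * (A.T @ (row * (b - A @ x)))
            x = np.clip(x, 0.0, None)                  # optional positivity
        return x
    ```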

  5. Inexact Picard iterative scheme for steady-state nonlinear diffusion in random heterogeneous media.

    PubMed

    Mohan, P Surya; Nair, Prasanth B; Keane, Andy J

    2009-04-01

    In this paper, we present a numerical scheme for the analysis of steady-state nonlinear diffusion in random heterogeneous media. The key idea is to iteratively solve the nonlinear stochastic governing equations via an inexact Picard iteration scheme, wherein the nonlinear constitutive law is linearized using the current guess of the solution. The linearized stochastic governing equations are then spatially discretized and approximately solved using stochastic reduced basis projection schemes. The approximation to the solution process thus obtained is used as the guess for the next iteration. This iterative procedure is repeated until an appropriate convergence criterion is met. Detailed numerical studies are presented for diffusion in a square domain for varying degrees of nonlinearity. The numerical results are compared against benchmark Monte Carlo simulations, and it is shown that the proposed approach provides good approximations for the response statistics at modest computational effort.
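
    A deterministic skeleton of the inexact Picard loop reads as follows; the assemble callback, inner solver, and tolerances are illustrative, and the paper's stochastic reduced basis projection is not reproduced here.

    ```python
    # Skeleton of an inexact Picard iteration: linearise the constitutive
    # law at the current iterate, solve the linearised system only
    # approximately, and repeat until the iterates stagnate.
    import numpy as np
    import scipy.sparse.linalg as spla

    def picard(assemble, b, x0, tol=1e-8, maxit=50, inner_tol=1e-3):
        x = x0
        for k in range(maxit):
            A = assemble(x)  # linearised operator K(x) at the current guess
            # inexact inner solve ('rtol' in recent SciPy; 'tol' in older)
            x_new, _ = spla.cg(A, b, x0=x, rtol=inner_tol)
            if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x_new):
                return x_new, k
            x = x_new
        return x, maxit
    ```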

  6. Statistical Software.

    ERIC Educational Resources Information Center

    Callamaras, Peter

    1983-01-01

    This buyer's guide to seven major types of statistics software packages for microcomputers reviews Edu-Ware Statistics 3.0; Financial Planning; Speed Stat; Statistics with DAISY; Human Systems Dynamics package of Stats Plus, ANOVA II, and REGRESS II; Maxistat; and Moore-Barnes' MBC Test Construction and MBC Correlation. (MBR)

  7. Statistical Diversions

    ERIC Educational Resources Information Center

    Petocz, Peter; Sowey, Eric

    2008-01-01

    As a branch of knowledge, Statistics is ubiquitous and its applications can be found in (almost) every field of human endeavour. In this article, the authors track down the possible source of the link between the "Siren song" and applications of Statistics. Answers to their previous five questions and five new questions on Statistics are presented.

  8. Performance evaluation of iterative reconstruction algorithms for achieving CT radiation dose reduction - a phantom study.

    PubMed

    Dodge, Cristina T; Tamm, Eric P; Cody, Dianna D; Liu, Xinming; Jensen, Corey T; Wei, Wei; Kundra, Vikas; Rong, X John

    2016-03-08

    The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR), over a range of typical to low-dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat-equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back-projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low-contrast detectability were evaluated from noise and contrast-to-noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five-fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high-contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR

  9. On the interplay between inner and outer iterations for a class of iterative methods

    SciTech Connect

    Giladi, E.

    1994-12-31

    Iterative algorithms for solving linear systems of equations often involve the solution of a subproblem at each step. This subproblem is usually another linear system of equations. For example, a preconditioned iteration involves the solution of a preconditioner at each step. In this paper, the author considers algorithms for which the subproblem is also solved iteratively. The subproblem is then said to be solved by "inner iterations", while the term "outer iteration" refers to a step of the basic algorithm. The cost of performing an outer iteration is dominated by the solution of the subproblem, and can be measured by the number of inner iterations. A good measure of the total amount of work needed to solve the original problem to some accuracy c is then the total number of inner iterations. To lower the amount of work, one can consider solving the subproblems "inexactly", i.e. not to full accuracy. Although this diminishes the cost of solving each subproblem, it usually slows down the convergence of the outer iteration. It is therefore interesting to study the effect of solving each subproblem inexactly on the total amount of work. Specifically, the author considers strategies in which the accuracy to which the inner problem is solved changes from one outer iteration to the other. The author seeks the 'optimal strategy', that is, the one that yields the lowest possible cost. Here, the author develops a methodology to find the optimal strategy, from the set of slowly varying strategies, for some iterative algorithms. This methodology is applied to the Chebychev iteration, and it is shown that for Chebychev iteration a strategy in which the inner tolerance remains constant is optimal. The author also estimates this optimal constant. Generalizations to other iterative procedures are then discussed.
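
    The structure being analyzed can be written down in a few lines; in this sketch the outer step, inner solver, and the constant inner tolerance (the strategy found optimal above for Chebychev iteration) are all placeholders supplied by the caller.

    ```python
    # Inner-outer iteration with a prescribed inner-tolerance strategy.
    # Total work is measured, as in the analysis above, by the cumulative
    # number of inner iterations across all outer steps.
    def inner_outer(outer_step, inner_solve, x, n_outer=20, inner_tol=1e-2):
        total_inner = 0
        for _ in range(n_outer):
            y, n_inner = inner_solve(x, tol=inner_tol)  # subproblem, inexact
            x = outer_step(x, y)
            total_inner += n_inner                      # cost: inner its.
        return x, total_inner
    ```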

  10. Fixed Point Transformations Based Iterative Control of a Polymerization Reaction

    NASA Astrophysics Data System (ADS)

    Tar, József K.; Rudas, Imre J.

    As a paradigm of strongly coupled non-linear multi-variable dynamic systems, the mathematical model of the free-radical polymerization of methyl methacrylate with azobis(isobutyronitrile) as an initiator and toluene as a solvent, taking place in a jacketed Continuous Stirred Tank Reactor (CSTR), is considered. In the adaptive control of this system only a single input variable is used as the control signal (the process input, i.e. the dimensionless volumetric flow rate of the initiator), and a single output variable is observed (the process output, i.e. the number-average molecular weight of the polymer). Simulation examples illustrate that, on the basis of a very rough and primitive model consisting of two scalar variables, various fixed-point transformation based convergent iterations result in a novel, sophisticated adaptive control.

  11. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  12. Adaptive Image Denoising by Mixture Adaptation.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
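
    To make the adaptation step above concrete, here is a heavily simplified Python sketch: one EM pass that pulls the means of a generic Gaussian mixture prior toward the patch statistics of a noisy target image. The paper's actual algorithm is derived from a Bayesian hyper-prior and also adapts covariances and uses pre-filtering; this sketch only illustrates the adapt-a-generic-prior idea, with shared isotropic covariances assumed for brevity.

        import numpy as np

        def em_adapt_means(means, sigma2, weights, patches, n_passes=1):
            """Adapt GMM means toward the patch distribution of a target image.

            means:   (K, d) generic-prior component means
            sigma2:  scalar isotropic component variance (assumed shared)
            weights: (K,) mixture weights
            patches: (N, d) patches extracted from the noisy image
            """
            means = means.copy()
            for _ in range(n_passes):
                # E-step: responsibilities under the current mixture
                d2 = ((patches[:, None, :] - means[None, :, :]) ** 2).sum(-1)
                log_r = np.log(weights)[None, :] - d2 / (2 * sigma2)
                log_r -= log_r.max(axis=1, keepdims=True)
                r = np.exp(log_r)
                r /= r.sum(axis=1, keepdims=True)
                # M-step: re-estimate means from the target image's patches
                nk = r.sum(axis=0) + 1e-12
                means = (r.T @ patches) / nk[:, None]
            return means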

  13. Iterants, Fermions and Majorana Operators

    NASA Astrophysics Data System (ADS)

    Kauffman, Louis H.

    Beginning with an elementary, oscillatory discrete dynamical system associated with the square root of minus one, we study both the foundations of mathematics and physics. Position and momentum do not commute in our discrete physics. Their commutator is related to the diffusion constant for a Brownian process and to the Heisenberg commutator in quantum mechanics. We take John Wheeler's idea of It from Bit as an essential clue and we rework the structure of that bit to a logical particle that is its own anti-particle, a logical Majorana particle. This is our key example of the amphibian nature of mathematics and the external world. We show how the dynamical system for the square root of minus one is essentially the dynamics of a distinction whose self-reference leads to both the fusion algebra and the operator algebra for the Majorana fermion. In the course of this, we develop an iterant algebra that supports all of matrix algebra and we end the essay with a discussion of the Dirac equation based on these principles.
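
    The "square root of minus one as an oscillation" idea can be checked directly: in the iterant formalism, i arises from the period-two sequence [+1, -1] combined with a time-shift operator. Below is a minimal numpy verification using one standard 2x2 matrix representation; this is only a finger exercise, not Kauffman's full iterant algebra.

        import numpy as np

        e = np.diag([1, -1])          # the alternating iterant [+1, -1]
        eta = np.array([[0, 1],
                        [1, 0]])      # the time-shift (swap) operator

        i = e @ eta                   # iterant representation of sqrt(-1)
        assert np.array_equal(i @ i, -np.eye(2, dtype=int))
        print(i)   # [[ 0  1]
                   #  [-1  0]]  -- the familiar rotation by 90 degrees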

  14. Progress on ITER Diagnostic Integration

    NASA Astrophysics Data System (ADS)

    Johnson, David; Feder, Russ; Klabacha, Jonathan; Loesser, Doug; Messineo, Mike; Stratton, Brentley; Wood, Rick; Zhai, Yuhu; Andrew, Phillip; Barnsley, Robin; Bertschinger, Guenter; Debock, Maarten; Reichle, Roger; Udintsev, Victor; Vayakis, George; Watts, Christopher; Walsh, Michael

    2013-10-01

    On ITER, front-end components must operate reliably in a hostile environment. Many will be housed in massive port plugs, which also shield the machine from radiation. Multiple diagnostics reside in a single plug, presenting new challenges for developers. Front-end components must tolerate thermally-induced stresses, disruption-induced mechanical loads, stray ECH radiation, displacement damage, and degradation due to plasma-induced coatings. The impact of failures is amplified due to the difficulty in performing robotic maintenance on these large structures. Motivated by needs to minimize disruption loads on the plugs, standardize the handling of shield modules, and decouple the parallel efforts of the many parties, the packaging strategy for diagnostics has recently focused on the use of 3 vertical shield modules inserted from the plasma side into each equatorial plug structure. At the front of each is a detachable first wall element with customized apertures. Progress on US equatorial and upper plugs will be used as examples, including the layout of components in the interspace and port cell regions. Supported by PPPL under contract DE-AC02-09CH11466 and UT-Battelle, LLC under contract DE-AC05-00OR22725 with the U.S. DOE.

  15. A simple and flexible graphical approach for adaptive group-sequential clinical trials.

    PubMed

    Sugitani, Toshifumi; Bretz, Frank; Maurer, Willi

    2016-01-01

    In this article, we introduce a graphical approach to testing multiple hypotheses in group-sequential clinical trials allowing for midterm design modifications. It is intended for structured study objectives in adaptive clinical trials and extends the graphical group-sequential designs from Maurer and Bretz (Statistics in Biopharmaceutical Research 2013; 5: 311-320) to adaptive trial designs. The resulting test strategies can be visualized graphically and performed iteratively. We illustrate the methodology with two examples from our clinical trial practice. First, we consider a three-armed gold-standard trial with the option to reallocate patients to either the test drug or the active control group, while stopping the recruitment of patients to placebo, after having demonstrated superiority of the test drug over placebo at an interim analysis. Second, we consider a confirmatory two-stage adaptive design with treatment selection at interim.
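
    The non-adaptive core of the graphical approach (Bretz et al.) is easy to state in code: hypotheses are nodes with weights w summing to at most 1, edges carry fractions g of the significance level freed by a rejection, and rejected nodes propagate their weight along the graph. The sketch below shows that weight-propagation step only, without the group-sequential and adaptive layers this paper adds.

        import numpy as np

        def graphical_test(p, w, G, alpha=0.025):
            """Weighted Bonferroni graphical procedure.

            p: (m,) p-values; w: (m,) initial weights (sum <= 1)
            G: (m, m) transition matrix, zero diagonal, rows sum <= 1
            Returns the boolean rejection vector.
            """
            m = len(p)
            active = np.ones(m, dtype=bool)
            rejected = np.zeros(m, dtype=bool)
            while True:
                idx = [j for j in range(m) if active[j] and p[j] <= w[j] * alpha]
                if not idx:
                    return rejected
                j = idx[0]
                rejected[j], active[j] = True, False
                w_new, G_new = w.copy(), G.copy()
                for i in range(m):
                    if not active[i]:
                        continue
                    w_new[i] = w[i] + w[j] * G[j, i]      # reallocate freed weight
                    for k in range(m):
                        if active[k] and k != i:
                            denom = 1 - G[i, j] * G[j, i]
                            G_new[i, k] = ((G[i, k] + G[i, j] * G[j, k]) / denom
                                           if denom > 0 else 0.0)
                w_new[j] = 0.0
                w, G = w_new, G_new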

  16. Neural Network Aided Adaptive Extended Kalman Filtering Approach for DGPS Positioning

    NASA Astrophysics Data System (ADS)

    Jwo, Dah-Jing; Huang, Hung-Chih

    2004-09-01

    The extended Kalman filter, when employed in the GPS receiver as the navigation state estimator, provides optimal solutions if the noise statistics for the measurement and system are completely known. In practice, the noise varies with time, which results in performance degradation. The covariance matching method is a conventional adaptive approach for estimation of noise covariance matrices. The technique attempts to make the actual filter residuals consistent with their theoretical covariance. However, this innovation-based adaptive estimation shows very noisy results if the window size is small. To resolve the problem, a multilayered neural network is trained to identify the measurement noise covariance matrix, in which the back-propagation algorithm is employed to iteratively adjust the link weights using the steepest descent technique. Numerical simulations show that based on the proposed approach the adaptation performance is substantially enhanced and the positioning accuracy is substantially improved.
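
    The covariance-matching step whose small-window noisiness motivates this work can be sketched as follows: the measurement-noise covariance is estimated by making a sliding-window sample covariance of the innovations consistent with its theoretical value. All symbols (H, P_minus, the window) are generic Kalman-filter quantities, not the paper's exact implementation.

        import numpy as np

        def estimate_R(innovations, H, P_minus):
            """Innovation-based estimate of the measurement noise covariance R.

            innovations: (N, m) array of recent innovation vectors (the window)
            H:           measurement matrix
            P_minus:     a priori state covariance (taken constant over the window)
            """
            V = np.asarray(innovations)
            C_v = (V.T @ V) / len(V)           # sample innovation covariance
            R_hat = C_v - H @ P_minus @ H.T    # covariance matching
            return R_hat

        # Small windows make C_v noisy, which is exactly the weakness the
        # neural-network-aided approach in this record is designed to overcome.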

  17. New stopping criteria for iterative root finding

    PubMed Central

    Nikolajsen, Jorgen L.

    2014-01-01

    A set of simple stopping criteria is presented, which improve the efficiency of iterative root finding by terminating the iterations immediately when no further improvement of the roots is possible. The criteria use only the function evaluations already needed by the root finding procedure to which they are applied. The improved efficiency is achieved by formulating the stopping criteria in terms of fractional significant digits. Test results show that the new stopping criteria reduce the iteration work load by about one-third compared with the most efficient stopping criteria currently available. This is achieved without compromising the accuracy of the extracted roots. PMID:26064544
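
    The fractional-significant-digits idea can be illustrated as follows; this is an interpretation for illustration only, not the paper's exact criteria. Track how many significant digits successive iterates share, and stop as soon as that count stops growing.

        import numpy as np

        def shared_digits(x_new, x_old):
            """Fractional number of significant digits shared by two iterates."""
            if x_new == x_old:
                return np.inf
            return -np.log10(abs(x_new - x_old) / max(abs(x_new), 1e-300))

        def iterate_with_digit_stop(step, x0, max_iter=100):
            """Run x <- step(x), stopping when accuracy no longer improves."""
            x, best = x0, -np.inf
            for _ in range(max_iter):
                x_new = step(x)
                d = shared_digits(x_new, x)
                if d <= best:          # no further improvement possible
                    return x_new
                best, x = d, x_new
            return x

        # Example: Newton step for f(x) = x**2 - 2
        # root = iterate_with_digit_stop(lambda x: x - (x*x - 2) / (2*x), 1.0)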

  18. Iterative restoration algorithms for nonlinear constraint computing

    NASA Astrophysics Data System (ADS)

    Szu, Harold

    A general iterative-restoration principle is introduced to facilitate the implementation of nonlinear optical processors. The von Neumann convergence theorem is generalized to include nonorthogonal subspaces which can be reduced to a special orthogonal projection operator by applying an orthogonality condition. This principle is shown to permit derivation of the Jacobi algorithm, the recursive principle, the van Cittert (1931) deconvolution method, the iteration schemes of Gerchberg (1974) and Papoulis (1975), and iteration schemes using two Fourier conjugate domains (e.g., Fienup, 1981). Applications to restoring the image of a double star and division by hard and soft zeros are discussed, and sample results are presented graphically.
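
    Among the schemes unified here, the van Cittert iteration is the simplest to write down: repeatedly add back the residual between the blurred estimate and the observed data. A short numpy sketch for 1-D deconvolution, with a hypothetical kernel and relaxation factor:

        import numpy as np

        def van_cittert(g, h, n_iter=50, beta=1.0):
            """Iterative deconvolution: f_{k+1} = f_k + beta * (g - h * f_k)."""
            f = g.copy()
            for _ in range(n_iter):
                residual = g - np.convolve(f, h, mode="same")
                f = f + beta * residual
            return f

        # g: observed (blurred) signal, h: known point-spread function.
        # Convergence requires the transfer function of h to stay within (0, 2/beta);
        # in practice the iteration is often truncated early to limit noise growth.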

  19. Overhead Image Statistics

    SciTech Connect

    Vijayaraj, Veeraraghavan; Cheriyadat, Anil M; Bhaduri, Budhendra L; Vatsavai, Raju; Bright, Eddie A

    2008-01-01

    Statistical properties of high-resolution overhead images representing different land use categories are analyzed using various local and global statistical image properties based on the shape of the power spectrum, image gradient distributions, edge co-occurrence, and inter-scale wavelet coefficient distributions. The analysis was performed on a database of high-resolution (1 meter) overhead images representing a multitude of different downtown, suburban, commercial, agricultural and wooded exemplars. Various statistical properties relating to these image categories and their relationships are discussed. The categorical variations in power spectrum contour shapes, the unique gradient distribution characteristics of wooded categories, the similarity in edge co-occurrence statistics for overhead and natural images, and the unique edge co-occurrence statistics of downtown categories are presented in this work. Though previous work on natural image statistics has shown some of the unique characteristics for different categories, the relationships for overhead images are not well understood. The statistical properties of natural images were used in previous studies to develop prior image models, to predict and index objects in a scene and to improve computer vision models. The results from our research findings can be used to augment and adapt computer vision algorithms that rely on prior image statistics to process overhead images, calibrate the performance of overhead image analysis algorithms, and derive features for better discrimination of overhead image categories.

  20. ITER CS Intermodule Support Structure

    SciTech Connect

    Myatt, R.; Freudenberg, Kevin D

    2011-01-01

    With five independently driven, bi-polarity power supplies, the modules of the ITER central solenoid (CS) can be energized in aligned or opposing field directions. This sets up the possibility for repelling modules, which indeed occurs, particularly between CS2L and CS3L around the End of Burn (EOB) time point. Light interface compression between these two modules at EOB and wide variations in these coil currents throughout the pulse produce a tendency for relative motion or slip. Ideally, the slip is purely radial as the modules breathe without any accumulative translational motion. In reality, however, asymmetries such as nonuniformity in intermodule friction, lateral loads from a plasma Vertical Disruption Event (VDE), magnetic forces from manufacturing and assembly tolerances, and earthquakes can all contribute to a combination of radial and lateral module motion. This paper presents 2D and 3D, nonlinear, ANSYS models which simulate these various asymmetries and determine the lateral forces which must be carried by the intermodule structure. Summing all of these asymmetric force contributions leads to a design-basis lateral load which is used in the design of various support concepts: the CS-CDR centering rings and a variation, the 2001 FDR baseline radial keys, and interlocking castles structures. Radial key-type intermodule structure interface slip and stresses are tracked through multiple 15 MA scenario current pulses to demonstrate stable motion following the first few cycles. Detractions and benefits of each candidate intermodule structure are discussed, leading to the simplest and most robust configuration which meets the design requirements: match-drilled radial holes and pin-shaped keys.

  1. Embedding adaptive arithmetic coder in chaos-based cryptography

    NASA Astrophysics Data System (ADS)

    Li, Heng-Jian; Zhang, Jia-Shu

    2010-05-01

    In this study, an adaptive arithmetic coder is embedded in the Baptista-type chaotic cryptosystem for implementing secure data compression. To build the multiple lookup tables of secure data compression, the phase space of the chaos map with a uniform distribution in the search mode is divided non-uniformly according to the dynamic probability estimation of plaintext symbols. As a result, more probable symbols are selected according to the local statistical characters of the plaintext, and the required number of iterations is small since the more probable symbols have a higher chance of being visited by the chaotic search trajectory. By exploiting the non-uniformity in the probabilities under which the number of iterations to be coded takes on its possible values, the compression capability is achieved by adaptive arithmetic coding. Therefore, the system offers both compression and security. Compared with original arithmetic coding, simulation results on Calgary Corpus files show that the proposed scheme suffers from a reduction in compression performance of less than 12% and is not susceptible to previously carried out attacks on arithmetic coding algorithms.

  2. Sequence analysis by iterated maps, a review

    PubMed Central

    2014-01-01

    Among alignment-free methods, Iterated Maps (IMs) are at a particular extreme: they are also scale free (order free). The use of IMs for sequence analysis is also distinct from other alignment-free methodologies in being rooted in statistical mechanics instead of computational linguistics. Both of these roots go back over two decades to the use of fractal geometry in the characterization of phase-space representations. The time series analysis origin of the field is betrayed by the title of the manuscript that started this alignment-free subdomain in 1990, ‘Chaos Game Representation’. The clash between the analysis of sequences as continuous series and the better established use of Markovian approaches to discrete series was almost immediate, with a defining critique published in the same journal 2 years later. The rest of that decade would go by before the scale-free nature of the IM space was uncovered. The ensuing decade saw this scalability generalized for non-genomic alphabets, as well as an interest in its use for graphic representation of biological sequences. Finally, in the past couple of years, in step with the emergence of BigData and MapReduce as a new computational paradigm, there is a surprising third act in the IM story. Multiple reports have described gains in computational efficiency of multiple orders of magnitude over more conventional sequence analysis methodologies. The stage now appears to be set for a recasting of IMs with a central role in processing next-generation sequencing results. PMID:24162172
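
    The 1990 Chaos Game Representation that started this subfield is only a few lines of code: each nucleotide is assigned a corner of the unit square, and the current point moves halfway toward the corner of the next symbol. A minimal sketch:

        def cgr(sequence):
            """Chaos Game Representation of a DNA sequence as (x, y) points."""
            corners = {"A": (0.0, 0.0), "C": (0.0, 1.0),
                       "G": (1.0, 1.0), "T": (1.0, 0.0)}
            x, y = 0.5, 0.5
            points = []
            for base in sequence:
                cx, cy = corners[base]
                x, y = (x + cx) / 2.0, (y + cy) / 2.0   # iterated map step
                points.append((x, y))
            return points

        # The fractal structure of these point clouds is what links the method
        # to statistical mechanics rather than computational linguistics.
        print(cgr("ACGTACGT")[:3])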

  3. A component analysis based on serial results analyzing performance of parallel iterative programs

    SciTech Connect

    Richman, S.C.

    1994-12-31

    This research is concerned with the parallel performance of iterative methods for solving large, sparse, nonsymmetric linear systems. Most of the iterative methods are first presented with their time costs and convergence rates examined intensively on sequential machines, and then adapted to parallel machines. The analysis of parallel iterative performance is more complicated than that of serial performance, since the former can be affected by many new factors, such as data communication schemes, number of processors used, and ordering and mapping techniques. Although the author is able to summarize results from data obtained by experiments on certain cases, two questions remain: (1) How to explain the results obtained? (2) How to extend the results from the certain cases to general cases? To answer these two questions quantitatively, the author introduces a tool called component analysis based on serial results. This component analysis is introduced because the iterative methods consist mainly of several basic functions such as linked triads, inner products, and triangular solves, which have different intrinsic parallelisms and are suitable for different parallel techniques. The parallel performance of each iterative method is first expressed as a weighted sum of the parallel performance of the basic functions that are the components of the method. Then, one separately examines the performance of the basic functions and the weighting distributions of the iterative methods, from which two independent sets of information are obtained when solving a given problem. In this component approach, all the weightings require only serial costs, not parallel costs, and each iterative method for solving a given problem is represented by its unique weighting distribution. The information given by the basic functions is independent of the iterative method, while that given by the weightings is independent of parallel technique, parallel machine and number of processors.

  4. The Physics Basis of ITER Confinement

    SciTech Connect

    Wagner, F.

    2009-02-19

    ITER will be the first fusion reactor, and the 50-year-old dream of fusion scientists will become reality. The quality of magnetic confinement will decide the success of ITER, directly in the form of the confinement time and indirectly because it determines the plasma parameters and the fluxes which cross the separatrix and have to be handled externally by technical means. This lecture portrays some of the basic principles which govern plasma confinement, uses dimensionless scaling to set the limits for the predictions for ITER, an approach which also shows the limitations of the predictions, and briefly describes the major characteristics and physics behind the H-mode--the preferred confinement regime of ITER.

  5. Archimedes' Pi--An Introduction to Iteration.

    ERIC Educational Resources Information Center

    Lotspeich, Richard

    1988-01-01

    One method (attributed to Archimedes) of approximating pi offers a simple yet interesting introduction to one of the basic ideas of numerical analysis, an iteration sequence. The method is described and elaborated. (PK)
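
    The iteration sequence the article refers to can be written compactly: start from an inscribed hexagon in a unit circle and repeatedly double the number of sides. The sketch below uses an algebraically equivalent, numerically stable form of the side-length recurrence s' = sqrt(2 - sqrt(4 - s*s)).

        import math

        def archimedes_pi(doublings=20):
            """Inscribed-polygon approximation of pi, starting from a hexagon."""
            n, s = 6, 1.0                    # hexagon side in a unit circle
            for _ in range(doublings):
                # stable rewriting of s' = sqrt(2 - sqrt(4 - s*s))
                s = s / math.sqrt(2.0 + math.sqrt(4.0 - s * s))
                n *= 2
                yield n, n * s / 2.0         # perimeter / diameter

        for sides, approx in archimedes_pi(10):
            print(f"{sides:6d}-gon: {approx:.12f}")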

  6. Anderson Acceleration for Fixed-Point Iterations

    SciTech Connect

    Walker, Homer F.

    2015-08-31

    The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
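
    For context, Anderson acceleration replaces the plain update x <- g(x) with an extrapolation over the last m residuals. Below is a compact numpy sketch of the standard formulation, offered as background rather than code from the grant itself.

        import numpy as np

        def anderson(g, x0, m=5, tol=1e-10, max_iter=200):
            """Anderson-accelerated fixed-point iteration for x = g(x)."""
            x = np.asarray(x0, dtype=float)
            X, F = [x], [g(x)]                       # iterate and g-value history
            for _ in range(max_iter):
                R = np.column_stack([f - xi for f, xi in zip(F, X)])  # residuals
                if R.shape[1] == 1:
                    x_new = F[-1]                    # plain Picard step
                else:
                    dR = np.diff(R, axis=1)
                    gamma, *_ = np.linalg.lstsq(dR, R[:, -1], rcond=None)
                    dG = np.diff(np.column_stack(F), axis=1)
                    x_new = F[-1] - dG @ gamma       # extrapolated update
                if np.linalg.norm(x_new - x) < tol:
                    return x_new
                x = x_new
                X.append(x); F.append(g(x))
                if len(X) > m + 1:
                    X.pop(0); F.pop(0)
            return x

        # Example: x = cos(x); Anderson needs far fewer g-evaluations than
        # the plain iteration.
        # print(anderson(np.cos, np.array([1.0])))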

  7. ITER Magnet Feeder: Design, Manufacturing and Integration

    NASA Astrophysics Data System (ADS)

    CHEN, Yonghua; ILIN, Y.; M., SU; C., NICHOLAS; BAUER, P.; JAROMIR, F.; LU, Kun; CHENG, Yong; SONG, Yuntao; LIU, Chen; HUANG, Xiongyi; ZHOU, Tingzhi; SHEN, Guang; WANG, Zhongwei; FENG, Hansheng; SHEN, Junsong

    2015-03-01

    The International Thermonuclear Experimental Reactor (ITER) feeder procurement is now well underway. The feeder design has been improved by the feeder teams at the ITER Organization (IO) and the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP) in the last 2 years along with analyses and qualification activities. The feeder design is being progressively finalized. In addition, the preparation of qualification and manufacturing are well scheduled at ASIPP. This paper mainly presents the design, the overview of manufacturing and the status of integration on the ITER magnet feeders. supported by the National Special Support for R&D on Science and Technology for ITER (Ministry of Public Security of the People's Republic of China-MPS) (No. 2008GB102000)

  8. On the safety of ITER accelerators.

    PubMed

    Li, Ge

    2013-01-01

    Three 1 MV/40 A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. The accelerators will generate -1 MV, 1 h long-pulse ion beams to be neutralised for plasma heating. Because vacuum sparking occurs frequently in the accelerators, snubbers are used to limit the fault arc current and improve ITER safety. However, recent analyses of its reference design have raised concerns. A general nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER.

  9. Statistical Diversions

    ERIC Educational Resources Information Center

    Petocz, Peter; Sowey, Eric

    2008-01-01

    In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…

  10. Accelerated Schwarz iterations for Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Nagid, Nabila; Belhadj, Hassan; Amattouch, Mohamed Ridouan

    2017-01-01

    In this paper, the restricted additive Schwarz (RAS) method is applied to solve the Helmholtz equation. To accelerate the RAS iterations, we propose to apply the vector ɛ-algorithm. Some convergence analysis of the proposed method is presented, and the method is applied successfully to the Helmholtz problem. The obtained results show the efficiency of the proposed approach. Moreover, the algorithm yields much faster convergence than the classical Schwarz iterations.
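
    The vector ɛ-algorithm used here generalizes Wynn's scalar ɛ-algorithm (for vectors, the reciprocal is replaced by the Samelson inverse v/||v||^2). The scalar version below shows the mechanism on a slowly converging series; it is a textbook sketch, not the paper's implementation.

        import numpy as np

        def wynn_epsilon(S):
            """Wynn's epsilon algorithm on a sequence of partial sums S.

            Returns a top even-column entry, an accelerated limit estimate.
            """
            n = len(S)
            eps = [np.zeros(n + 1), np.array(S, dtype=float)]  # eps_{-1}, eps_0
            for _ in range(1, n):
                prev2, prev1 = eps[-2], eps[-1]
                diff = prev1[1:] - prev1[:-1]
                eps.append(prev2[1:len(prev1)] + 1.0 / diff)
            k_even = 2 * ((n - 1) // 2)        # largest even column available
            return eps[k_even + 1][0]

        # Example: partial sums of the alternating series for ln(2).
        S = np.cumsum([(-1) ** i / (i + 1) for i in range(10)])
        print(wynn_epsilon(S), np.log(2))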

  11. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  12. Novel aspects of plasma control in ITER

    NASA Astrophysics Data System (ADS)

    Humphreys, D.; Ambrosino, G.; de Vries, P.; Felici, F.; Kim, S. H.; Jackson, G.; Kallenbach, A.; Kolemen, E.; Lister, J.; Moreau, D.; Pironti, A.; Raupp, G.; Sauter, O.; Schuster, E.; Snipes, J.; Treutterer, W.; Walker, M.; Welander, A.; Winter, A.; Zabeo, L.

    2015-02-01

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  13. Iterative methods for design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Yoon, B. G.

    1989-01-01

    A numerical method is presented for design sensitivity analysis, using an iterative-method reanalysis of the structure generated by a small perturbation in the design variable; a forward-difference scheme is then employed to obtain the approximate sensitivity. Algorithms are developed for displacement and stress sensitivity, as well as for eigenvalue and eigenvector sensitivity, and the iterative schemes are modified so that the coefficient matrices are constant and therefore decomposed only once.
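
    In its simplest form, the scheme amounts to a forward difference over two analyses, where the perturbed analysis is warm-started from the unperturbed solution so that the iterative reanalysis converges in a few steps. A minimal sketch; the solver interface and names are placeholders, not the paper's code.

        def fd_sensitivity(solve, u0, p, h=1e-6):
            """Forward-difference design sensitivity du/dp.

            solve(p, u_start): iterative analysis returning the response for
            design variable p, warm-started from u_start.
            """
            u_pert = solve(p + h, u_start=u0)   # cheap reanalysis near u0
            return (u_pert - u0) / h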

  14. Novel aspects of plasma control in ITER

    DOE PAGES

    Humphreys, David; Ambrosino, G.; de Vries, Peter; ...

    2015-02-12

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Finally, issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  15. Wavefront Control for Space Telescope Applications Using Adaptive Optics

    DTIC Science & Technology

    2007-12-01

    Naval Postgraduate School master's thesis by Matthew R. Allen (December 2007; advisor: Brij Agrawal) on wavefront control for space telescope applications using adaptive optics, including direct iterative zonal feedback control. (Only title-page and table-of-contents fragments of this record survive.)

  16. Statistics Clinic

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  17. Quick Statistics

    MedlinePlus

    ... population, or about 25 million Americans, has experienced tinnitus lasting at least five minutes in the past ... by NIDCD Epidemiology and Statistics Program staff: (1) tinnitus prevalence was obtained from the 2008 National Health ...

  18. CORSICA modelling of ITER hybrid operation scenarios

    NASA Astrophysics Data System (ADS)

    Kim, S. H.; Bulmer, R. H.; Campbell, D. J.; Casper, T. A.; LoDestro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Snipes, J. A.

    2016-12-01

    The hybrid operating mode observed in several tokamaks is characterized by further enhancement over the high plasma confinement (H-mode) associated with reduced magneto-hydro-dynamic (MHD) instabilities linked to a stationary flat safety factor (q) profile in the core region. The proposed ITER hybrid operation is currently aiming at operating for a long burn duration (>1000 s) with a moderate fusion power multiplication factor, Q, of at least 5. This paper presents candidate ITER hybrid operation scenarios developed using a free-boundary transport modelling code, CORSICA, taking all relevant physics and engineering constraints into account. The ITER hybrid operation scenarios have been developed by tailoring the 15 MA baseline ITER inductive H-mode scenario. Accessible operation conditions for ITER hybrid operation and achievable range of plasma parameters have been investigated considering uncertainties on the plasma confinement and transport. ITER operation capability for avoiding the poloidal field coil current, field and force limits has been examined by applying different current ramp rates, flat-top plasma currents and densities, and pre-magnetization of the poloidal field coils. Various combinations of heating and current drive (H&CD) schemes have been applied to study several physics issues, such as the plasma current density profile tailoring, enhancement of the plasma energy confinement and fusion power generation. A parameterized edge pedestal model based on EPED1 added to the CORSICA code has been applied to hybrid operation scenarios. Finally, fully self-consistent free-boundary transport simulations have been performed to provide information on the poloidal field coil voltage demands and to study the controllability with the ITER controllers. Extended from Proc. 24th Int. Conf. on Fusion Energy (San Diego, 2012) IT/P1-13.

  19. Energetic particle physics issues for ITER

    SciTech Connect

    Cheng, C.Z.; Budny, R.; Fu, G.Y.

    1996-12-31

    This paper summarizes our present understanding of the following energetic/alpha particle physics issues for the 21 MA, 20 TF coil ITER Interim Design configuration and operational scenarios: (a) toroidal field ripple effects on alpha particle confinement, (b) energetic particle interaction with low frequency MHD modes, (c) energetic particle excitation of toroidal Alfven eigenmodes, and (d) energetic particle transport due to MHD modes. TF ripple effects on alpha loss in ITER under a number of different operating conditions are found to be small with a maximum loss of 1%. With careful plasma control in ITER reversed-shear operation, TF ripple induced alpha loss can be reduced to below the nominal ITER design limit of 5%. Fishbone modes are expected to be unstable for β_α > 1%, and sawtooth stabilization is lost if the ideal kink growth rate exceeds 10% of the deeply trapped alpha precessional drift frequency evaluated at the q = 1 surface. However, it is expected that the fishbone modes will lead only to a local flattening of the alpha profile due to small banana size. MHD modes observed during slow decrease of stored energy after fast partial electron temperature collapse in JT-60U reversed-shear experiments may be resonant type instabilities; they may have implications on the energetic particle confinement in ITER reversed-shear operation. From the results of various TAE stability code calculations, ITER equilibria appear to lie close to TAE linear stability thresholds. However, the prognosis depends strongly on the q profile and on the profiles of alpha and other high energy particle species. If TAE modes are unstable in ITER, the stochastic diffusion is the main loss mechanism, which scales with (δB_r/B)², because of the relatively small alpha particle banana orbit size. For isolated TAE modes the particle loss is very small, and TAE modes saturate via the resonant wave-particle trapping process at very small amplitude.

  20. Parallel computing for simultaneous iterative tomographic imaging by graphics processing units

    NASA Astrophysics Data System (ADS)

    Bello-Maldonado, Pedro D.; López, Ricardo; Rogers, Colleen; Jin, Yuanwei; Lu, Enyue

    2016-05-01

    In this paper, we address the problem of accelerating inversion algorithms for nonlinear acoustic tomographic imaging by parallel computing on graphics processing units (GPUs). Nonlinear inversion algorithms for tomographic imaging often rely on iterative algorithms for solving an inverse problem and are thus computationally intensive. We study the simultaneous iterative reconstruction technique (SIRT) for the multiple-input-multiple-output (MIMO) tomography algorithm, which enables parallel computation over the grid points as well as parallel execution of multiple source excitations. Using GPUs and the Compute Unified Device Architecture (CUDA) programming model, an overall speedup of 26.33x was achieved when combining both approaches, compared with sequential algorithms. Furthermore, we propose an adaptive iterative relaxation factor and the use of non-uniform weights to improve the overall convergence of the algorithm. Using these techniques, fast computations can be performed in parallel without loss of image quality during the reconstruction process.
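
    The SIRT core parallelized in this work has a simple closed form per iteration, which also makes clear why it maps well to GPUs: every grid point is updated independently. A dense numpy sketch with the relaxation factor lam exposed, so an adaptive-relaxation rule can be plugged in; this is the generic SIRT update, not the paper's CUDA code.

        import numpy as np

        def sirt(A, b, n_iter=100, lam=1.0):
            """Simultaneous iterative reconstruction technique (SIRT).

            A: (rays, pixels) nonnegative system matrix, b: measured projections.
            Update: x <- x + lam * C A^T R (b - A x), with R and C the inverse
            row and column sums of A.
            """
            R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # per-ray normalization
            C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # per-pixel normalization
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + lam * C * (A.T @ (R * (b - A @ x)))
                # an adaptive scheme would shrink lam as the residual stalls
            return x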

  1. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles.

    PubMed

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-08-13

    In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping problem (SLAM), which is very crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. With the scalability advantage being kept, the consistency and accuracy of SEIF is improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF, as well.

  2. Iterative Frequency Domain Decision Feedback Equalization and Decoding for Underwater Acoustic Communications

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Ge, Jian-Hua

    2012-12-01

    Single-carrier (SC) transmission with frequency-domain equalization (FDE) is today recognized as an attractive alternative to orthogonal frequency-division multiplexing (OFDM) for communication applications affected by the inter-symbol interference (ISI) caused by multi-path propagation, especially in shallow water channels. In this paper, we investigate an iterative receiver based on a minimum mean square error (MMSE) decision feedback equalizer (DFE) with symbol-rate and fractional-rate sampling in the frequency domain (FD) and a serially concatenated trellis coded modulation (SCTCM) decoder. Based on sound speed profiles (SSP) measured in the lake and the finite-element ray tracing (Bellhop) method, the shallow water channel is constructed to evaluate the performance of the proposed iterative receiver. Performance results show that the proposed iterative receiver can significantly improve performance and obtain better data transmission than FD linear and adaptive decision feedback equalizers, especially when fractional-rate sampling is adopted.
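
    The frequency-domain MMSE equalizer at the heart of such receivers is a one-tap-per-bin operation. The sketch below shows only the linear MMSE feed-forward part; the paper's receiver adds decision feedback, fractional-rate sampling, and iterative decoding on top of this.

        import numpy as np

        def mmse_fde(y_block, h_impulse, snr_linear):
            """One-tap-per-bin MMSE frequency-domain equalization of one SC block.

            y_block:    received block (after cyclic-prefix removal)
            h_impulse:  channel impulse response estimate
            snr_linear: symbol SNR on a linear scale
            """
            N = len(y_block)
            Y = np.fft.fft(y_block)
            H = np.fft.fft(h_impulse, N)
            W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr_linear)  # MMSE taps
            return np.fft.ifft(W * Y)      # equalized time-domain symbols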

  3. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles

    PubMed Central

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-01-01

    In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping problem (SLAM), which is very crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. With the scalability advantage being kept, the consistency and accuracy of SEIF is improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF, as well. PMID:26287194

  4. An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction

    PubMed Central

    Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua

    2015-01-01

    Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criteria can yield significant gains over conventional analytical reconstruction from the noisy measurement. However, due to the nonlinear expression of the objective function, most existing algorithms related to the SIR unavoidably suffer from heavy computation load and slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the above-mentioned issues of general SIR algorithms, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, which is termed “ALM-ANAD”. The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted by using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm can achieve noticeable gains over the classical nonlinear conjugate gradient algorithm and state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index metrics. PMID:26495975
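
    For orientation, the PWLS criterion that such algorithms minimize, and the plain gradient step against which more sophisticated schemes like ALM-ANAD are measured, look as follows. This is a generic quadratic-penalty sketch; the paper targets edge-preserving and sparsity penalties, for which this simple scheme converges slowly.

        import numpy as np

        def pwls_gradient_descent(A, W, y, D, beta, step, n_iter=200):
            """Minimize 0.5*(Ax - y)^T W (Ax - y) + 0.5*beta*||D x||^2.

            A: system matrix, W: diagonal statistical weights (as a vector),
            D: finite-difference (roughness) operator, beta: penalty strength.
            """
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (W * (A @ x - y)) + beta * (D.T @ (D @ x))
                x = x - step * grad
            return x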

  5. Pipeline Processing with an Iterative, Context-Based Detection Model

    DTIC Science & Technology

    2016-01-22

    Report fragment: detection statistics for four matched field processors operating on data from the ASAR array, comparing empirical matched field processing with adaptive processing. (Only scattered figure captions and contents fragments of this record survive, e.g., amplitude envelopes of raw and post-processor-gated data from station SPA0.)

  6. A non-iterative implementation of Tango's score confidence interval for a paired difference of proportions.

    PubMed

    Yang, Zhao; Sun, Xuezheng; Hardin, James W

    2013-04-15

    For matched-pair binary data, a variety of approaches have been proposed for the construction of a confidence interval (CI) for the difference of marginal probabilities between two procedures. The score-based approximate CI has been shown to outperform other asymptotic CIs. Tango's method provides a score CI by inverting a score test statistic using an iterative procedure. In this paper, we propose an efficient non-iterative method with closed-form expression to calculate Tango's CIs. Examples illustrate the practical application of the new approach.

  7. [The ideographic iteration mark in Senkinho].

    PubMed

    Matsuoka, Takanori; Yamashita, Koichi; Murasaki, Toru

    2006-06-01

    In the 7th century, Senkinho was written by Sonshibaku in Tang-dynasty China. The version of this book that was altered in 1066, in Northern Song-dynasty China, is the one generally known today. However, four series of books remained intact, as they were not modified. The names of each book were the Senkinho Kentoushi-syouraibon, the Shincho-sonshinjin Senkinho, the Stein book, and the Kozlov book. The Senkinho Kentoushi-syouraibon and the Shincho-sonshinjin Senkinho are in Japan, while the Stein and Kozlov books are in the United Kingdom and Russia, respectively. We researched the ideographic iteration marks in these books. In the Senkinho Kentoushi-syouraibon, several ideographic iteration marks were used, but in the Shincho-sonshinjin Senkinho and the Kozlov book, only one ideographic iteration mark was used. Furthermore, there were two types of ideographic iteration marks in the Chinese character text of the Senkinho Kentoushi-syouraibon. We estimate that the ideographic iteration marks in the Katakana character text were transcribed between the middle Kamakura era and the early Muromachi era of Japan.

  8. Adaptive management of watersheds and related resources

    USGS Publications Warehouse

    Williams, Byron K.

    2009-01-01

    The concept of learning about natural resources through the practice of management has been around for several decades and by now is associated with the term adaptive management. The objectives of this paper are to offer a framework for adaptive management that includes an operational definition, a description of conditions in which it can be usefully applied, and a systematic approach to its application. Adaptive decisionmaking is described as iterative, learning-based management in two phases, each with its own mechanisms for feedback and adaptation. The linkages between traditional experimental science and adaptive management are discussed.

  9. Statistical Mechanics of Combinatorial Auctions

    NASA Astrophysics Data System (ADS)

    Galla, Tobias; Leone, Michele; Marsili, Matteo; Sellitto, Mauro; Weigt, Martin; Zecchina, Riccardo

    2006-09-01

    Combinatorial auctions are formulated as frustrated lattice gases on sparse random graphs, allowing the determination of the optimal revenue by methods of statistical physics. Transitions between computationally easy and hard regimes are found and interpreted in terms of the geometric structure of the space of solutions. We introduce an iterative algorithm to solve intermediate and large instances, and discuss competing states of optimal revenue and maximal number of satisfied bidders. The algorithm can be generalized to the hard phase and to more sophisticated auction protocols.

  10. Iterative Reconstruction of Coded Source Neutron Radiographs

    SciTech Connect

    Santos-Villalobos, Hector J; Bingham, Philip R; Gregor, Jens

    2013-01-01

    Use of a coded source facilitates high-resolution neutron imaging through magnifications but requires that the radiographic data be deconvolved. A comparison of direct deconvolution with two different iterative algorithms has been performed. One iterative algorithm is based on a maximum likelihood estimation (MLE)-like framework and the second is based on a geometric model of the neutron beam within a least squares formulation of the inverse imaging problem. Simulated data for both uniform and Gaussian shaped source distributions were used for testing to understand the impact of non-uniformities present in neutron beam distributions on the reconstructed images. Results indicate that the model based reconstruction method matches the resolution of, and improves the contrast over, convolution methods in the presence of non-uniform sources. Additionally, the model based iterative algorithm provides direct calculation of quantitative transmission values, while the convolution based methods must be normalized based on known values.

  11. The Dynamics of Some Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations is analyzed using the theory of dynamical systems. With the aid of parallel Connection Machines (CM-2 and CM-5), the associated bifurcation diagrams as a function of the time step, and the complex behavior of the associated 'numerical basins of attraction' of these iterative implicit schemes are revealed and compared. Studies showed that all of the four implicit LMMs exhibit a drastic distortion and segmentation but less shrinkage of the basin of attraction of the true solution than standard explicit methods. The numerical basins of attraction of a noniterative implicit procedure mimic more closely the basins of attraction of the differential equations than the iterative implicit procedures for the four implicit LMMs.

  12. The ITER in-vessel system

    SciTech Connect

    Lousteau, D.C.

    1994-09-01

    The overall programmatic objective, as defined in the ITER Engineering Design Activities (EDA) Agreement, is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER EDA Phase, due to last until July 1998, will encompass the design of the device and its auxiliary systems and facilities, including the preparation of engineering drawings. The EDA also incorporates validating research and development (R&D) work, including the development and testing of key components. The purpose of this paper is to review the status of the design, as it has been developed so far, emphasizing the design and integration of those components contained within the vacuum vessel of the ITER device. The components included in the in-vessel systems are divertor and first wall; blanket and shield; plasma heating, fueling, and vacuum pumping equipment; and remote handling equipment.

  13. Re-starting an Arnoldi iteration

    SciTech Connect

    Lehoucq, R.B.

    1996-12-31

    The Arnoldi iteration is an efficient procedure for approximating a subset of the eigensystem of a large sparse n x n matrix A. The iteration produces a partial orthogonal reduction of A into an upper Hessenberg matrix H{sub m} of order m. The eigenvalues of this small matrix H{sub m} are used to approximate a subset of the eigenvalues of the large matrix A. The eigenvalues of H{sub m} improve as estimates to those of A as m increases. Unfortunately, so does the cost and storage of the reduction. The idea of re-starting the Arnoldi iteration is motivated by the prohibitive cost associated with building a large factorization.
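
    As a reminder of what is being re-started, here is a bare Arnoldi iteration in numpy: m steps build an orthonormal basis Q and a small Hessenberg matrix H whose eigenvalues (Ritz values) approximate those of A. Restarting means discarding Q and beginning again from an improved start vector instead of letting m, and the associated cost and storage, grow.

        import numpy as np

        def arnoldi(A, v0, m):
            """m-step Arnoldi reduction: A Q_m = Q_{m+1} H_{m+1,m}."""
            n = len(v0)
            Q = np.zeros((n, m + 1))
            H = np.zeros((m + 1, m))
            Q[:, 0] = v0 / np.linalg.norm(v0)
            for j in range(m):
                w = A @ Q[:, j]
                for i in range(j + 1):                # modified Gram-Schmidt
                    H[i, j] = Q[:, i] @ w
                    w = w - H[i, j] * Q[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                if H[j + 1, j] < 1e-12:               # invariant subspace found
                    return Q[:, :j + 1], H[:j + 1, :j + 1]
                Q[:, j + 1] = w / H[j + 1, j]
            return Q, H

        # Ritz values from the leading m x m block approximate eigenvalues of A:
        # theta = np.linalg.eigvals(H[:m, :m])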

  14. Selection of plasma facing materials for ITER

    SciTech Connect

    Ulrickson, M.; Barabash, V.; Chiocchio, S.

    1996-10-01

    ITER will be the first tokamak having long pulse operation using deuterium-tritium fuel. The problem of designing heat removal structures for steady state in a neutron environment is a major technical goal for the ITER Engineering Design Activity (EDA). The steady state heat flux specified for divertor components is 5 MW/m{sup 2} for normal operation with transients to 15 MW/m{sup 2} for up to 10 s. The selection of materials for plasma facing components is one of the major research activities. Three materials are being considered for the divertor: carbon fiber composites, beryllium, and tungsten. This paper discusses the relative advantages and disadvantages of these materials. The final selection of plasma facing materials for the ITER divertor will not be made until the end of the EDA.

  15. Global Asymptotic Behavior of Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that aside from exhibiting spurious asymptotes, all of the four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard nonLMM explicit methods. The simple iteration procedure exhibits behavior which is similar to standard nonLMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data using the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.

  16. Iterated learning and the evolution of language.

    PubMed

    Kirby, Simon; Griffiths, Tom; Smith, Kenny

    2014-10-01

    Iterated learning describes the process whereby an individual learns their behaviour by exposure to another individual's behaviour, who themselves learnt it in the same way. It can be seen as a key mechanism of cultural evolution. We review various methods for understanding how behaviour is shaped by the iterated learning process: computational agent-based simulations; mathematical modelling; and laboratory experiments in humans and non-human animals. We show how this framework has been used to explain the origins of structure in language, and argue that cultural evolution must be considered alongside biological evolution in explanations of language origins.

  17. Iterative method for generating correlated binary sequences

    NASA Astrophysics Data System (ADS)

    Usatenko, O. V.; Melnik, S. S.; Apostolov, S. S.; Makarov, N. M.; Krokhin, A. A.

    2014-11-01

    We propose an efficient iterative method for generating random correlated binary sequences with a prescribed correlation function. The method is based on consecutive linear modulations of an initially uncorrelated sequence into a correlated one. Each step of modulation increases the correlations until the desired level has been reached. The robustness and efficiency of the proposed algorithm are tested by generating sequences with inverse power-law correlations. The substantial increase in the strength of correlation in the iterative method with respect to single-step filtering generation is shown for all studied correlation functions. Our results can be used for design of disordered superlattices, waveguides, and surfaces with selective transport properties.

  18. Modified Iterative Extended Hueckel. 1: Theory

    NASA Technical Reports Server (NTRS)

    Aronowitz, S.

    1980-01-01

    Iterative Extended Hueckel is modified by inclusion of explicit effective internuclear and electronic interactions. The one-electron energies are shown to obey a variational principle because of the form of the effective electronic interactions. The modifications permit mimicking of aspects of valence bond theory, with the additional feature that the energies associated with valence bond type structures are explicitly calculated. In turn, a hybrid molecular orbital-valence bond scheme is introduced which incorporates variant total molecular electronic density distributions, similar to the way that Iterative Extended Hueckel incorporates atoms.

  19. Patch-based iterative conditional geostatistical simulation using graph cuts

    NASA Astrophysics Data System (ADS)

    Li, Xue; Mariethoz, Gregoire; Lu, DeTang; Linde, Niklas

    2016-08-01

    Training image-based geostatistical methods are increasingly popular in groundwater hydrology even if existing algorithms present limitations that often make real-world applications difficult. These limitations include a computational cost that can be prohibitive for high-resolution 3-D applications, the presence of visual artifacts in the model realizations, and a low variability between model realizations due to the limited pool of patterns available in a finite-size training image. In this paper, we address these issues by proposing an iterative patch-based algorithm which adapts a graph cuts methodology that is widely used in computer graphics. Our adapted graph cuts method optimally cuts patches of pixel values borrowed from the training image and assembles them successively, each time accounting for the information of previously stitched patches. The initial simulation result might display artifacts, which are identified as regions of high cost. These artifacts are reduced by iteratively placing new patches in high-cost regions. In contrast to most patch-based algorithms, the proposed scheme can also efficiently address point conditioning. An advantage of the method is that the cut process results in the creation of new patterns that are not present in the training image, thereby increasing pattern variability. To quantify this effect, a new measure of variability, the merging index, is developed; it quantifies the pattern variability in the realizations with respect to the training image. A series of sensitivity analyses demonstrates the stability of the proposed graph cuts approach, which produces satisfying simulations for a wide range of parameter values. Applications to 2-D and 3-D cases are compared to state-of-the-art multiple-point methods. The results show that the proposed approach obtains significant speedups and increases variability between realizations. Connectivity functions applied to 2-D models and transport simulations in 3-D models are used to assess the quality of the simulated structures.

  20. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    SciTech Connect

    Li Yupeng Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with estimation and simulation examples.
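
    In its simplest bivariate form, IPF alternately rescales a joint probability table so its row and column sums match the imposed marginals. A minimal sketch, assuming a small 3-facies table rather than the paper's sparse multivariate setting:

      import numpy as np

      def ipf(joint, row_marginal, col_marginal, iters=100, tol=1e-10):
          """Iterative proportional fitting: rescale an initial joint table
          until its row/column sums match the imposed marginals."""
          p = joint.copy()
          for _ in range(iters):
              p *= (row_marginal / p.sum(axis=1))[:, None]   # fit row sums
              p *= (col_marginal / p.sum(axis=0))[None, :]   # fit column sums
              if np.allclose(p.sum(axis=1), row_marginal, atol=tol):
                  break
          return p

      # Three facies: start from independence, impose target marginals.
      init = np.outer([0.5, 0.3, 0.2], [0.4, 0.4, 0.2])
      fitted = ipf(init, np.array([0.6, 0.25, 0.15]), np.array([0.3, 0.5, 0.2]))
      print(fitted.sum(axis=1), fitted.sum(axis=0))   # both match the targets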

  1. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace

    PubMed Central

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B. Montgomery

    2016-01-01

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool to produce free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution. PMID:27453632
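
    DIIS can be demonstrated on any fixed-point iteration x <- g(x) of the kind the WHAM equations reduce to. The sketch below is a generic DIIS accelerator under a toy map, not the paper's WHAM/MBAR implementation; names and the choice g = cos are assumptions.

      import numpy as np

      def diis_fixed_point(g, x0, m=5, iters=50, tol=1e-12):
          """Accelerate x <- g(x) by direct inversion in the iterative
          subspace: combine the last m iterates with coefficients that
          minimize the norm of the combined residual (sum of coeffs = 1)."""
          xs, rs = [], []
          x = x0
          for _ in range(iters):
              gx = g(x)
              r = gx - x
              if np.linalg.norm(r) < tol:
                  break
              xs.append(gx); rs.append(r)
              if len(xs) > m:
                  xs.pop(0); rs.pop(0)
              k = len(rs)
              B = np.empty((k + 1, k + 1))           # bordered Gram matrix
              B[:k, :k] = np.array([[ri @ rj for rj in rs] for ri in rs])
              B[k, :], B[:, k], B[k, k] = -1.0, -1.0, 0.0
              rhs = np.zeros(k + 1); rhs[k] = -1.0   # enforces sum(c) = 1
              c = np.linalg.solve(B, rhs)[:k]
              x = sum(ci * xi for ci, xi in zip(c, xs))
          return x

      # Toy example: solve x = cos(x) elementwise.
      print(diis_fixed_point(np.cos, np.zeros(3)))   # ~0.739085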

  2. Statistics Revelations

    ERIC Educational Resources Information Center

    Chicot, Katie; Holmes, Hilary

    2012-01-01

    The use, and misuse, of statistics is commonplace, yet in the printed format data representations can be either oversimplified, supposedly for impact, or so complex as to lead to boredom, supposedly for completeness and accuracy. In this article the link to the video clip shows how dynamic visual representations can enliven and enhance the…

  3. Statistical Inference

    NASA Astrophysics Data System (ADS)

    Khan, Shahjahan

    Often scientific information on various data generating processes is presented in the form of numerical and categorical data. Except for some very rare occasions, such data generally represent a small part of the population, or selected outcomes of any data generating process. Although valuable and useful information is lurking in the array of scientific data, it is generally unavailable to the users. Appropriate statistical methods are essential to reveal the hidden "jewels" in the mess of the raw data. Exploratory data analysis methods are used to uncover such valuable characteristics of the observed data. Statistical inference provides techniques to make valid conclusions about the unknown characteristics or parameters of the population from which scientifically drawn sample data are selected. Usually, statistical inference includes estimation of population parameters as well as performing tests of hypotheses on the parameters. However, prediction of future responses and determining the prediction distributions are also part of statistical inference. Both Classical (frequentist) and Bayesian approaches are used in statistical inference. The commonly used Classical approach is based on the sample data alone. In contrast, the increasingly popular Bayesian approach uses a prior distribution on the parameters along with the sample data to make inferences. Non-parametric and robust methods are also used in situations where commonly used model assumptions are unsupported. In this chapter, we cover the philosophical and methodological aspects of both the Classical and Bayesian approaches. Moreover, some aspects of predictive inference are also included. In the absence of any evidence to support assumptions regarding the distribution of the underlying population, or if the variable is measured only on an ordinal scale, non-parametric methods are used. Robust methods are employed to avoid any significant changes in the results due to deviations from the model

  5. Series Supply of Cryogenic Venturi Flowmeters for the ITER Project

    NASA Astrophysics Data System (ADS)

    André, J.; Poncet, J. M.; Ercolani, E.; Clayton, N.; Journeaux, J. Y.

    2015-12-01

    In the framework of the ITER project, the CEA-SBT has been contracted to supply 277 venturi tube flowmeters to measure the distribution of helium in the superconducting magnets of the ITER tokamak. Six sizes of venturi tube have been designed so as to span a measurable helium flowrate range from 0.1 g/s to 400 g/s. They operate, in nominal conditions, either at 4 K or at 300 K, and in a nuclear and magnetic environment. Due to the cryogenic conditions and the large number of venturi tubes to be supplied, an individual calibration of each venturi tube would be too expensive and time consuming. Studies have been performed to produce a design which offers high repeatability in manufacture, reduces the geometrical uncertainties and improves the final helium flowrate measurement accuracy. On the instrumentation side, technologies for differential and absolute pressure transducers able to operate in applied magnetic fields need to be identified and validated. The complete helium mass flow measurement chain will be qualified in four test benches: a helium loop at room temperature to ensure the qualification of a statistically relevant number of venturi tubes operating at 300 K; a supercritical helium loop for the qualification of venturi tubes operating at cryogenic temperature (a modification to the HELIOS test bench); a dedicated vacuum vessel to check the helium leak tightness of all the venturi tubes; and a magnetic test bench to qualify different technologies of pressure transducer in applied magnetic fields up to 100 mT.
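
    The venturi principle underlying these flowmeters is the standard differential-pressure relation (ISO 5167 form, incompressible). A minimal sketch with illustrative numbers, not the calibrated ITER coefficients:

      import math

      def venturi_mass_flow(dp_pa, rho, d_throat, d_pipe, cd=0.984):
          """Classical venturi relation: mass flow (kg/s) from differential
          pressure, fluid density and throat/pipe diameters. cd and the
          geometry here are illustrative, not calibrated values."""
          beta = d_throat / d_pipe
          area = math.pi * d_throat**2 / 4.0
          return cd / math.sqrt(1.0 - beta**4) * area * math.sqrt(2.0 * dp_pa * rho)

      # Supercritical helium near 4.5 K: density ~130 kg/m^3 (illustrative).
      print(venturi_mass_flow(dp_pa=2000.0, rho=130.0, d_throat=0.004, d_pipe=0.010))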

  6. An Iterative Uncertainty Assessment Technique for Environmental Modeling

    SciTech Connect

    Engel, David W.; Liebetrau, Albert M.; Jarman, Kenneth D.; Ferryman, Thomas A.; Scheibe, Timothy D.; Didier, Brett T.

    2004-06-28

    The reliability of and confidence in predictions from model simulations are crucial--these predictions can significantly affect risk assessment decisions. For example, the fate of contaminants at the U.S. Department of Energy's Hanford Site has critical impacts on long-term waste management strategies. In the uncertainty estimation efforts for the Hanford Site-Wide Groundwater Modeling program, computational issues severely constrain both the number of uncertain parameters that can be considered and the degree of realism that can be included in the models. Substantial improvements in the overall efficiency of uncertainty analysis are needed to fully explore and quantify significant sources of uncertainty. We have combined state-of-the-art statistical and mathematical techniques in a unique iterative, limited sampling approach to efficiently quantify both local and global prediction uncertainties resulting from model input uncertainties. The approach is designed for application to widely diverse problems across multiple scientific domains. Results are presented for both an analytical model where the response surface is "known" and a simplified contaminant fate transport and groundwater flow model. The results show that our iterative method for approximating a response surface (for subsequent calculation of uncertainty estimates) of specified precision requires less computing time than traditional approaches based upon noniterative sampling methods.
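
    The flavor of an iterative, limited-sampling response-surface build can be conveyed with a generic sketch: fit a cheap surrogate, then add samples where it disagrees most with the model. This is an illustration under simplifying assumptions (1-D, polynomial surrogate, inexpensive truth function), not the authors' method:

      import numpy as np

      def iterative_response_surface(f, lo, hi, degree=3, n0=5, rounds=10, tol=1e-3):
          """Fit a polynomial surrogate, then add one sample per round where
          it disagrees most with the model. Here f is cheap, so the residual
          is checked exactly; in the intended setting a precision estimate
          would drive the refinement instead."""
          x = np.linspace(lo, hi, n0)
          y = f(x)
          for _ in range(rounds):
              coeffs = np.polyfit(x, y, degree)
              grid = np.linspace(lo, hi, 201)
              resid = np.abs(f(grid) - np.polyval(coeffs, grid))
              if resid.max() < tol:
                  break
              x = np.append(x, grid[np.argmax(resid)])  # sample the worst point
              y = np.append(y, f(x[-1]))
          return coeffs, x

      coeffs, samples = iterative_response_surface(np.tanh, -2.0, 2.0)
      print(len(samples), "model evaluations")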

  7. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale invariant divergences without assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas while iterative methods clearly show their efficacy in these examples.
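
    The Richardson-Lucy special case mentioned above is compact enough to sketch directly: a multiplicative, flux-preserving, non-negative iteration suited to photon-counting data. The helper below is a generic textbook version, not the scale-invariant-divergence algorithm of Lanteri et al.:

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(data, psf, iters=50):
          """Multiplicative Richardson-Lucy iteration: preserves flux and
          non-negativity, appropriate under photon-counting statistics."""
          est = np.full_like(data, data.mean())           # flat positive start
          psf_flip = psf[::-1, ::-1]
          for _ in range(iters):
              blurred = fftconvolve(est, psf, mode="same")
              ratio = data / np.maximum(blurred, 1e-12)   # guard against zeros
              est *= fftconvolve(ratio, psf_flip, mode="same")
          return est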

  8. The Impact of Iterative Reconstruction on Computed Tomography Radiation Dosimetry: Evaluation in a Routine Clinical Setting

    PubMed Central

    Moorin, Rachael E.; Gibson, David A. J.; Forsyth, Rene K.; Fox, Richard

    2015-01-01

    Purpose To evaluate the effect of the introduction of iterative reconstruction, as a mandated software upgrade, on radiation dosimetry in routine clinical practice over a range of computed tomography examinations. Methods Random samples of scanning data were extracted from a centralised Picture Archiving Communication System pertaining to 10 commonly performed computed tomography examination types undertaken at two hospitals in Western Australia, before and after the introduction of iterative reconstruction. Changes in the mean dose length product and effective dose were evaluated along with estimations of associated changes to annual cancer incidence. Results We observed statistically significant reductions in the effective radiation dose for head computed tomography (22–27%), consistent with those reported in the literature. Statistically significant reductions were also observed for non-contrast chest (37–47%), chest pulmonary embolism study (28%), chest/abdominal/pelvic study (16%) and thoracic spine (39%) computed tomography. Statistically significant reductions in radiation dose were not identified in angiographic computed tomography. Dose reductions translated to substantial lowering of the lifetime attributable risk, especially for younger females, and estimated numbers of incident cancers. Conclusion Reduction of CT dose is a priority. Iterative reconstruction algorithms have the potential to significantly assist with dose reduction across a range of protocols. However, this reduction in dose is achieved via reductions in image noise. Fully realising the potential dose reduction of iterative reconstruction requires the adjustment of image factors and forgoing the noise reduction potential of the iterative algorithm. Our study has demonstrated a reduction in radiation dose for some scanning protocols, but not to the extent experimental studies had previously shown or in all protocols expected, raising questions about the extent to which iterative reconstruction achieves dose

  9. MO-FG-204-04: How Iterative Reconstruction Algorithms Affect the NPS of CT Images

    SciTech Connect

    Li, G; Liu, X; Dodge, C; Jensen, C; Rong, J

    2015-06-15

    Purpose: To evaluate how the third generation model based iterative reconstruction (MBIR) compares with filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASiR), and the second generation MBIR, based on noise power spectrum (NPS) analysis over a wide range of clinically applicable dose levels. Methods: The Catphan 600 CTP515 module, surrounded by an oval, fat-equivalent ring to mimic patient size/shape, was scanned on a GE HD750 CT scanner at 1, 2, 3, 6, 12 and 19 mGy CTDIvol levels with typical patient scan parameters: 120 kVp, 0.8 s, 40 mm beam width, large SFOV, 0.984 pitch and reconstructed thickness 2.5 mm (VEO3.0: Abd/Pelvis with Texture and NR05). At each CTDIvol level, 10 repeated scans were acquired to achieve sufficient data sampling. The images were reconstructed using the Standard kernel with FBP; 20%, 40% and 70% ASiR; and two versions of MBIR (VEO2.0 and 3.0). To evaluate the effect of ROI spatial location on the NPS results, 4 ROI groups were categorized based on their distances from the center of the phantom. Results: VEO3.0 performed inferiorly compared to VEO2.0 over all dose levels. On the other hand, at low dose levels (less than 3 mGy) it clearly outperformed ASiR and FBP in NPS values; thus the lower the dose level, the better the relative performance of MBIR. However, the shapes of the NPS show substantial differences in the horizontal and vertical sampling dimensions. These differences may determine the characteristics of the noise/texture features in images, and hence play an important role in image interpretation. Conclusion: The third generation MBIR did not improve over the second generation MBIR in terms of NPS analysis. The overall performance of both versions of MBIR improved relative to the other reconstruction algorithms when dose was reduced. The shapes of the NPS curves provide additional value for future characterization of the image noise/texture features.
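
    A standard ensemble NPS estimator, of the kind used in such analyses, averages squared DFT magnitudes of mean-subtracted noise ROIs from repeated scans. A minimal sketch with synthetic data; the scaling convention and names are assumptions, not the study's exact processing:

      import numpy as np

      def nps_2d(rois, pixel_mm):
          """Ensemble noise power spectrum: subtract each ROI's mean, average
          the squared DFT magnitudes over the ensemble, scale by pixel area
          over ROI size (a common NPS convention)."""
          rois = np.asarray(rois, dtype=float)
          n, ny, nx = rois.shape
          centered = rois - rois.mean(axis=(1, 2), keepdims=True)
          dfts = np.fft.fft2(centered)                 # over the last two axes
          return (pixel_mm**2 / (nx * ny)) * np.mean(np.abs(dfts) ** 2, axis=0)

      # e.g. 10 repeated scans -> 10 noise ROIs of 128x128 pixels (synthetic here).
      rois = np.random.default_rng(1).normal(0.0, 10.0, (10, 128, 128))
      nps = nps_2d(rois, pixel_mm=0.5)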

  10. Adaptive ILC algorithms of nonlinear continuous systems with non-parametric uncertainties for non-repetitive trajectory tracking

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Dong; Lv, Mang-Mang; Ho, John K. L.

    2016-07-01

    In this article, two adaptive iterative learning control (ILC) algorithms are presented for nonlinear continuous systems with non-parametric uncertainties. Unlike general ILC techniques, the proposed adaptive ILC algorithms allow both the initial error at each iteration and the reference trajectory to be iteration-varying in the ILC process, and can achieve non-repetitive trajectory tracking beyond a small initial time interval. Compared to neural network or fuzzy system-based adaptive ILC schemes and classical ILC methods, in which the number of iterative variables is generally larger than or equal to the number of control inputs, the first adaptive ILC algorithm proposed in this paper uses just two iterative variables, while the second uses only a single iterative variable provided that some bound information on the system dynamics is known. As a result, the memory space in real-time ILC implementations is greatly reduced.
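
    For readers new to ILC, the classical P-type update that such adaptive schemes refine is one line per trial: u_{k+1}(t) = u_k(t) + L e_k(t). A minimal discrete-time sketch; the toy plant and gain are assumptions, and this is not the paper's adaptive algorithm:

      import numpy as np

      def p_type_ilc(plant, u0, y_ref, gain, trials=30):
          """Classical P-type ILC: after each repetition of the task, correct
          the stored input with the recorded tracking error."""
          u = u0.copy()
          for _ in range(trials):
              e = y_ref - plant(u)
              u = u + gain * e
          return u, e

      # Toy repetitive plant: first-order discrete system y[t+1] = 0.9 y[t] + u[t].
      def plant(u):
          y = np.zeros(len(u) + 1)
          for t in range(len(u)):
              y[t + 1] = 0.9 * y[t] + u[t]
          return y[1:]

      T = 100
      y_ref = np.sin(np.linspace(0, 2 * np.pi, T))
      u, e = p_type_ilc(plant, np.zeros(T), y_ref, gain=0.5)
      print(np.abs(e).max())   # tracking error shrinks over the trials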

  11. An adaptive singular spectrum analysis approach to murmur detection from heart sounds.

    PubMed

    Sanei, Saeid; Ghodsi, Mansoureh; Hassani, Hossein

    2011-04-01

    Murmur is the result of various heart abnormalities. A new robust approach for separation of murmur from heart sound has been suggested in this article. Singular spectrum analysis (SSA) has been adapted to the changes in the statistical properties of the data and effectively used for detection of murmur from single-channel heart sound (HS) signals. Incorporating cleverly selected a priori information within the SSA reconstruction process results in an accurate separation of normal HS from the murmur segment. Another contribution of this work is the automatic selection of the correct subspace of the desired signal component. In addition, the subspace size can be identified iteratively. A number of HS signals with murmur have been processed using the proposed adaptive SSA (ASSA) technique and the results have been quantified both objectively and subjectively.
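
    Basic SSA, before any adaptive selection, is embed-decompose-group-reconstruct. A minimal sketch in which the grouping is supplied by hand (the paper's contribution is precisely to choose the subspace automatically and iteratively):

      import numpy as np

      def ssa_components(x, window, groups):
          """Basic SSA: embed the signal in a trajectory (Hankel) matrix,
          take its SVD, and reconstruct each group of singular triples by
          anti-diagonal averaging."""
          n = len(x)
          k = n - window + 1
          traj = np.column_stack([x[i:i + window] for i in range(k)])
          u, s, vt = np.linalg.svd(traj, full_matrices=False)
          out = []
          for idx in groups:
              comp = (u[:, idx] * s[idx]) @ vt[idx, :]
              rec = np.array([np.mean(comp[::-1].diagonal(d - window + 1))
                              for d in range(n)])   # anti-diagonal averaging
              out.append(rec)
          return out

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 400)
      x = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(400)
      tone, residual = ssa_components(x, window=40,
                                      groups=[np.arange(2), np.arange(2, 40)])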

  12. ITER Cryoplant Final Design and Construction

    NASA Astrophysics Data System (ADS)

    Monneret, E.; Benkheira, L.; Fauve, E.; Henry, D.; Voigt, T.; Badgujar, S.; Chang, H.-S.; Vincent, G.; Forgeas, A.; Navion-Maillot, N.

    2017-02-01

    The ITER Tokamak superconducting magnets, thermal shields and cryopumps will require a tremendous amount of cooling power. With an average need of 75 kW at 4.5 K and of 600 kW at 80 K, ITER requires a world-class cryogenic complex. ITER therefore relies on a Cryoplant, a cluster of systems dedicated to the management of all fluids required for Tokamak operation. From storage and purification to liquefaction and refrigeration, the Cryoplant will supply to the distribution system all fluids to be circulated in the Tokamak. It includes Liquid Helium Plants and Liquid Nitrogen Plants, which generate all of the refrigeration power, an 80 K helium loop capable of circulating large quantities of helium through the thermal shields, and all the auxiliaries required for gas storage, purification, and onsite nitrogen production. From the conceptual phase, the design of the Cryoplant has evolved and is now nearing completion. This paper presents the final design of the Cryoplant and the organization for the construction phase. The latest status of the ITER Cryogenic System is also introduced.

  13. Iteration and Anxiety in Mathematical Literature

    ERIC Educational Resources Information Center

    Capezzi, Rita; Kinsey, L. Christine

    2016-01-01

    We describe our experiences in team-teaching an honors seminar on mathematics and literature. We focus particularly on two of the texts we read: Georges Perec's "How to Ask Your Boss for a Raise" and Alain Robbe-Grillet's "Jealousy," both of which make use of iterative structures.

  14. ITER faces further five-year delay

    NASA Astrophysics Data System (ADS)

    Clery, Daniel

    2016-06-01

    The €14bn ITER fusion reactor currently under construction in Cadarache, France, will require an additional cash injection of €4.6bn if it is to start up in 2025 - a target date that is already five years later than previously scheduled.

  15. Microtearing Instability In The ITER Pedestal

    SciTech Connect

    Wong, K. L.; Mikkelsen, D. R.; Rewoldt, G. M.; Budny, R.

    2010-12-01

    Unstable microtearing modes are discovered by the GS2 gyrokinetic simulation code in the pedestal region of a simulated ITER H-mode plasma with approximately 400 MW of DT fusion power. Existing nonlinear theory indicates that these instabilities should produce stochastic magnetic fields and broaden the pedestal. The resulting electron thermal conductivity is estimated and the implications of these findings are discussed.

  16. Constructing Easily Iterated Functions with Interesting Properties

    ERIC Educational Resources Information Center

    Sprows, David J.

    2009-01-01

    A number of schools have recently introduced new courses dealing with various aspects of iteration theory or at least have found ways of including topics such as chaos and fractals in existing courses. In this note, we will consider a family of functions whose members are especially well suited to illustrate many of the concepts involved in these…
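
    One standard family used for exactly this purpose (a hypothetical stand-in here, since the note does not name its family) is the logistic map, whose iterates move from a stable fixed point through periodic cycles to chaos as the parameter grows:

      # The logistic map f(x) = r*x*(1-x): a classic, easily iterated family.
      def iterate(f, x, n):
          for _ in range(n):
              x = f(x)
              yield x

      for r in (2.8, 3.2, 3.9):                      # stable, periodic, chaotic
          orbit = list(iterate(lambda x: r * x * (1 - x), 0.2, 100))
          print(r, [round(v, 3) for v in orbit[-4:]])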

  17. On the safety of ITER accelerators

    PubMed Central

    Li, Ge

    2013-01-01

    Three 1 MV/40 A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. The accelerators will generate −1 MV, 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, snubbers are used to limit the fault arc current and improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber, unifying the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER. PMID:24008267

  18. Solving Differential Equations Using Modified Picard Iteration

    ERIC Educational Resources Information Center

    Robin, W. A.

    2010-01-01

    Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…
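
    The unmodified Picard scheme that such approaches build on is easy to run symbolically: iterate y_{k+1}(t) = y_0 + ∫_0^t f(s, y_k(s)) ds. A minimal sketch with sympy, using y' = y as the stock example (the note's modified procedures are not reproduced):

      import sympy as sp

      t, s = sp.symbols("t s")

      def picard(f, y0, n):
          """Picard iteration for y' = f(t, y), y(0) = y0."""
          y = sp.sympify(y0)
          for _ in range(n):
              y = y0 + sp.integrate(f(s, y.subs(t, s)), (s, 0, t))
          return sp.expand(y)

      # y' = y, y(0) = 1: the iterates are the Taylor partial sums of exp(t).
      print(picard(lambda s_, y_: y_, 1, 5))   # 1 + t + t**2/2 + ... + t**5/120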

  19. Iteration of Complex Functions and Newton's Method

    ERIC Educational Resources Information Center

    Dwyer, Jerry; Barnard, Roger; Cook, David; Corte, Jennifer

    2009-01-01

    This paper discusses some common iterations of complex functions. The presentation is such that similar processes can easily be implemented and understood by undergraduate students. The aim is to illustrate some of the beauty of complex dynamics in an informal setting, while providing a couple of results that are not otherwise readily available in…
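
    A typical iteration of this kind is Newton's method z <- z - p(z)/p'(z) applied to p(z) = z^3 - 1 over a grid of complex starting points, which partitions the plane into the basins of the three cube roots of unity; the grid and polynomial below are illustrative choices, not necessarily those in the paper:

      import numpy as np

      xs = np.linspace(-1.5, 1.5, 8)                 # grid avoids the origin
      z = xs[None, :] + 1j * xs[:, None]
      for _ in range(40):
          z = z - (z**3 - 1) / (3 * z**2)            # Newton step for z^3 - 1
      print(np.round(z, 3))                          # each entry is a cube root of 1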

  20. Iterated rippled noise discrimination at long durations.

    PubMed

    Yost, William A

    2009-09-01

    Iterated rippled noise (IRN) was used to study discrimination of IRN stimuli with a lower number of iterations from IRN stimuli with a higher number of iterations as a function of stimulus duration (100-2000 ms). Such IRN stimuli differ in the strength of the repetition pitch. In some cases, the gain used to generate IRN stimuli was adjusted so that both IRN stimuli in the discrimination task had the same height of the first peak in the autocorrelation function or autocorrelogram. In previous work involving short-duration IRN stimuli (<500 ms), listeners were not able to discriminate between IRN stimuli that had different numbers of iterations but the same height of the first peak in the autocorrelation function. In the current study, IRN discrimination performance improved with increases in duration, even in cases when the height of the first peak in the autocorrelation was the same for the two IRN stimuli. Thus, future studies involving discrimination of IRN stimuli may need to use longer durations (1 s or greater) than those that have been used in the past.
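
    IRN stimuli of this sort are generated by a delay-and-add loop; a minimal add-same sketch follows. Parameter values are illustrative, and the gain adjustment used in the study to equate first autocorrelation peaks is not reproduced:

      import numpy as np

      def irn(n_samples, delay, gain, iterations, fs=44100, rng=None):
          """Iterated rippled noise (add-same variant): repeatedly add a
          delayed, scaled copy of the running waveform to itself. The delay
          sets the repetition pitch (1/delay Hz); more iterations strengthen it."""
          rng = np.random.default_rng() if rng is None else rng
          d = int(round(delay * fs))
          y = rng.standard_normal(n_samples)
          for _ in range(iterations):
              shifted = np.zeros_like(y)
              shifted[d:] = y[:-d]
              y = y + gain * shifted
          return y

      stim = irn(44100, delay=0.004, gain=1.0, iterations=8)   # ~250 Hz pitch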

  1. Iterative solution of the Helmholtz equation

    SciTech Connect

    Larsson, E.; Otto, K.

    1996-12-31

    We have shown that the numerical solution of the two-dimensional Helmholtz equation can be obtained in a very efficient way by using a preconditioned iterative method. We discretize the equation with second-order accurate finite difference operators and take special care to obtain non-reflecting boundary conditions. We solve the large, sparse system of equations that arises with the preconditioned restarted GMRES iteration. The preconditioner is of "fast Poisson type", and is derived as a direct solver for a modified PDE problem. The arithmetic complexity for the preconditioner is O(n log₂ n), where n is the number of grid points. As a test problem we use the propagation of sound waves in water in a duct with curved bottom. Numerical experiments show that the preconditioned iterative method is very efficient for this type of problem. The convergence rate does not decrease dramatically when the frequency increases. Compared to banded Gaussian elimination, which is a standard solution method for this type of problem, the iterative method shows significant gain in both storage requirement and arithmetic complexity. Furthermore, the relative gain increases when the frequency increases.
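
    The structure of such a solver is Krylov iteration on the Helmholtz operator, preconditioned by a fast-solvable Poisson-like operator. A minimal scipy sketch under simplifying assumptions: a sparse LU factorization stands in for the paper's O(n log₂ n) fast solver, and the boundary conditions are plain Dirichlet rather than non-reflecting:

      import numpy as np
      from scipy.sparse import identity, kron, diags
      from scipy.sparse.linalg import splu, LinearOperator, gmres

      n, h, k2 = 64, 1.0 / 65, 50.0
      T = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) / h**2
      I = identity(n)
      lap = kron(I, T) + kron(T, I)                 # 2-D Laplacian
      A = (lap - k2 * identity(n * n)).tocsc()      # Helmholtz operator
      M_lu = splu((lap + identity(n * n)).tocsc())  # "fast Poisson type" stand-in
      M = LinearOperator(A.shape, M_lu.solve)
      b = np.ones(n * n)
      u, info = gmres(A, b, M=M, restart=30)
      print(info)                                    # 0 on convergence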

  2. [Descriptive statistics].

    PubMed

    Rendón-Macías, Mario Enrique; Villasís-Keever, Miguel Ángel; Miranda-Novales, María Guadalupe

    2016-01-01

    Descriptive statistics is the branch of statistics that gives recommendations on how to summarize research data clearly and simply in tables, figures, charts, or graphs. Before performing a descriptive analysis it is paramount to define its goal or goals, and to identify the measurement scales of the different variables recorded in the study. Tables or charts aim to provide timely information on the results of an investigation. The graphs show trends and can be histograms, pie charts, "box and whiskers" plots, line graphs, or scatter plots. Images serve as examples to reinforce concepts or facts. The choice of a chart, graph, or image must be based on the study objectives. Usually it is not recommended to use more than seven in an article, also depending on its length.

  3. Order Statistics and Nonparametric Statistics.

    DTIC Science & Technology

    2014-09-26

    Topics investigated include the following: probability that a fuze will fire; moving order statistics; distribution theory and properties of the...problem posed by an Army scientist: a fuze will fire when at least n-1 (or n-2) of n detonators function within time span t. What is the probability of

  4. Statistical Optics

    NASA Astrophysics Data System (ADS)

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I RIchard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

  5. Enhancement of event related potentials by iterative restoration algorithms

    NASA Astrophysics Data System (ADS)

    Pomalaza-Raez, Carlos A.; McGillem, Clare D.

    1986-12-01

    An iterative procedure for the restoration of event related potentials (ERP) is proposed and implemented. The method makes use of assumed or measured statistical information about latency variations in the individual ERP components. The signal model used for the restoration algorithm consists of a time-varying linear distortion and a positivity/negativity constraint. Additional preprocessing in the form of low-pass filtering is needed in order to mitigate the effects of additive noise. Numerical results obtained with real data clearly show the presence of enhanced and regenerated components in the restored ERPs. The procedure is easy to implement, which makes it convenient compared to other proposed techniques for the restoration of ERP signals.
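
    A generic constrained iterative restoration of this family (a van Cittert-type correction followed by projection onto the constraint set) can be sketched briefly. This is an illustration under a time-invariant distortion, not the authors' time-varying model:

      import numpy as np
      from scipy.signal import fftconvolve

      def constrained_restoration(data, h, n_iter=100, relax=1.0):
          """Iteratively add back the residual of the forward model, then
          project onto the constraint (here: positivity)."""
          est = np.clip(data, 0, None)
          for _ in range(n_iter):
              resid = data - fftconvolve(est, h, mode="same")
              est = np.clip(est + relax * resid, 0, None)   # enforce positivity
          return est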

  6. Adaptive Algebraic Multigrid Methods

    SciTech Connect

    Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J

    2004-04-09

    Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.
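
    For flavor, classical (non-adaptive) AMG is available off the shelf in pyamg; the adaptive method of the paper differs precisely in learning the near-null space rather than assuming it. A minimal usage sketch:

      import numpy as np
      import pyamg

      A = pyamg.gallery.poisson((200, 200), format="csr")  # 2-D Poisson matrix
      ml = pyamg.ruge_stuben_solver(A)                     # classical AMG hierarchy
      b = np.random.default_rng(0).standard_normal(A.shape[0])
      x = ml.solve(b, tol=1e-8)
      print(np.linalg.norm(b - A @ x) / np.linalg.norm(b)) # small relative residual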

  7. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Various length filters are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise are added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
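
    The inverse-filter construction described above is a few lines of FFT arithmetic. A sketch follows; the small floor on the response spectrum is an added assumption to keep the toy stable where the spectrum nears zero, and Morrison's smoothing step itself is not reproduced:

      import numpy as np

      def inverse_filter_deconvolve(data, response, eps=1e-3):
          """Deconvolve by dividing by the DFT of the response, i.e. convolve
          with the inverse filter IDFT(1/DFT(response))."""
          n = len(data)
          H = np.fft.rfft(response, n)
          H = np.where(np.abs(H) < eps, eps, H)     # guard near-zero components
          return np.fft.irfft(np.fft.rfft(data) / H, n)

      x = np.zeros(256); x[100] = 1.0; x[140] = 0.5                 # two peaks
      g = np.exp(-0.5 * ((np.arange(256) - 12) / 3.0) ** 2)
      g /= g.sum()                                                  # Gaussian response
      data = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(g), 256)     # circular blur
      rec = inverse_filter_deconvolve(data, g)                      # recovers the peaks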

  8. Testing Short Samples of ITER Conductors and Projection of Their Performance in ITER Magnets

    SciTech Connect

    Martovetsky, N N

    2007-08-20

    Qualification of the ITER conductor is absolutely necessary, but testing large scale conductors is expensive and time consuming. Testing straight 3-4 m long samples in the bore of a split solenoid is relatively economical in comparison with fabricating a coil to be tested in the bore of a background-field solenoid. However, testing short samples may give ambiguous results due to different constraints on current redistribution in the cable or other end effects which are not present in the large magnet. This paper discusses processes taking place in the ITER conductor, conditions under which conductor performance could be distorted, and possible signal processing to deduce the behavior of ITER conductors in ITER magnets from the test data.

  9. The Impact of Iterative Reconstruction in Low-Dose Computed Tomography on the Evaluation of Diffuse Interstitial Lung Disease

    PubMed Central

    Lim, Hyun-ju; Shin, Kyung Eun; Hwang, Hye Sun; Lee, Kyung Soo

    2016-01-01

    Objective To evaluate the impact of iterative reconstruction (IR) on the assessment of diffuse interstitial lung disease (DILD) using CT. Materials and Methods An American College of Radiology (ACR) phantom (module 4 to assess spatial resolution) was scanned with 10–100 effective mAs at 120 kVp. The images were reconstructed using filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), with blending ratios of 0%, 30%, 70% and 100%, and model-based iterative reconstruction (MBIR), and their spatial resolution was objectively assessed by the line pair structure method. The patient study was based on retrospective interpretation of prospectively acquired data, and it was approved by the institutional review board. Chest CT scans of 23 patients (mean age 64 years) were performed at 120 kVp using 1) standard dose protocol applying 142–275 mA with dose modulation (high-resolution computed tomography [HRCT]) and 2) low-dose protocol applying 20 mA (low dose CT, LDCT). HRCT images were reconstructed with FBP, and LDCT images were reconstructed using FBP, ASIR, and MBIR. Matching images were randomized and independently reviewed by chest radiologists. Subjective assessment of disease presence and radiological diagnosis was made on a 10-point scale. In addition, semi-quantitative results were compared for the extent of abnormalities estimated to the nearest 5% of parenchymal involvement. Results In the phantom study, ASIR was comparable to FBP in terms of spatial resolution. However, for MBIR, the spatial resolution was greatly decreased under 10 mA. In the patient study, the detection of the presence of disease was not significantly different. The values for area under the curve for detection of DILD by HRCT, FBP, ASIR, and MBIR were as follows: 0.978, 0.979, 0.972, and 0.963. LDCT images reconstructed with FBP, ASIR, and MBIR tended to underestimate reticular or honeycombing opacities (-2.8%, -4.1%, and -5.3%, respectively) and

  10. PATTERN RECOGNITION AND CLASSIFICATION USING ADAPTIVE LINEAR NEURON DEVICES

    DTIC Science & Technology

    Topics covered include: (1) adaption by an adaptive linear neuron (Adaline), as applied to the pattern recognition and classification problem; (2) four possible iterative adaption schemes which may be used to train an Adaline; (3) use of multiple Adalines (Madaline) and two logic layers to increase system capability; and (4) use of Adaline in the practical fields of speech recognition, weather forecasting and adaptive control systems, and the possible use of Madaline in the character recognition field.
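
    The LMS (Widrow-Hoff) rule is the canonical iterative adaption scheme for an Adaline: nudge the weights along the error of the linear output. A minimal sketch with synthetic two-class data (not taken from the report):

      import numpy as np

      def train_adaline(X, d, eta=0.01, epochs=50):
          """LMS adaption of an Adaline: w <- w + eta * (d - w.x) * x,
          applied to the linear (pre-threshold) output."""
          X = np.hstack([X, np.ones((len(X), 1))])    # bias input
          w = np.zeros(X.shape[1])
          for _ in range(epochs):
              for x, target in zip(X, d):
                  w += eta * (target - w @ x) * x     # LMS update
          return w

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 2))
      d = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)  # separable classes
      w = train_adaline(X, d)
      pred = np.sign(np.hstack([X, np.ones((200, 1))]) @ w)
      print((pred == d).mean())                       # classification accuracy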

  11. Overview on Experiments On ITER-like Antenna On JET And ICRF Antenna Design For ITER

    SciTech Connect

    Nightingale, M. P. S.; Blackman, T.; Edwards, D.; Fanthome, J.; Graham, M.; Hamlyn-Harris, C.; Hancock, D.; Jacquet, P.; Mayoral, M.-L.; Monakhov, I.; Nicholls, K.; Stork, D.; Whitehurst, A.; Wilson, D.; Wooldridge, E.

    2009-11-26

    Following an overview of the ITER Ion Cyclotron Resonance Frequency (ICRF) system, the JET ITER-like antenna (ILA) will be described. The ILA was designed to test the following ITER issues: (a) reliable operation at power densities of order 8 MW/m² at voltages up to 45 kV using a close-packed array of straps; (b) powering through ELMs using an internal (in-vacuum) conjugate-T junction; (c) protection from arcing in a conjugate-T configuration, using both existing and novel systems; and (d) resilience to disruption forces. ITER-relevant results have been achieved: operation at high coupled power density; control of the antenna matching elements in the presence of high inter-strap coupling; use of four conjugate-T systems (as would be used in ITER, should a conjugate-T approach be used); operation with RF voltages on the antenna structures up to 42 kV; achievement of ELM tolerance with a conjugate-T configuration by operating at 3 Ω real impedance at the conjugate-T point; and validation of arc detection systems on conjugate-T configurations in ELMy H-mode plasmas. The impact of these results on the predicted performance and design of the ITER antenna will be reviewed. In particular, the implications of the RF coupling measured on JET will be discussed.

  12. Model Based Iterative Reconstruction for Bright Field Electron Tomography (Postprint)

    DTIC Science & Technology

    2013-02-01

    …artifacts in the reconstruction when typical algorithms such as Filtered Back Projection (FBP) and the Simultaneous Iterative Reconstruction Technique (SIRT) are applied to the data. Model based iterative reconstruction (MBIR) provides a powerful framework for tomographic…

  13. The Iterative Structure Analysis of Montgomery Modular Multiplication

    NASA Astrophysics Data System (ADS)

    Jinbo, Wang

    2007-09-01

    Montgomery modular multiplication (MMM) plays a crucial role in the implementation of modular exponentiations in public-key cryptography. In this paper, we discuss the iterative structure of MMM and extend its iterative bound condition, so that it can be applied to complicated modular exponentiations. Based on the iterative condition of MMM, we can directly use non-modular additions, subtractions and even simple multiplications instead of their modular forms, which makes the modular exponentiation operation very efficient and, more importantly, broadens the iterative applicability of MMM.
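
    The kernel of MMM is the REDC reduction, which computes t·r⁻¹ mod n using only masks, shifts and multiplications. A minimal Python sketch of the word-free textbook form (not the hardware-oriented iterative structure analyzed in the paper; requires Python 3.8+ for pow(n, -1, r)):

      def montgomery_params(n, r_bits):
          """Precompute n' with n*n' = -1 (mod r), r = 2**r_bits, for odd n."""
          r = 1 << r_bits
          return (-pow(n, -1, r)) % r

      def redc(t, n, r_bits, n_prime):
          """Montgomery reduction: t * r^{-1} mod n with no division by n."""
          r_mask = (1 << r_bits) - 1
          m = ((t & r_mask) * n_prime) & r_mask
          u = (t + m * n) >> r_bits
          return u - n if u >= n else u

      n, r_bits = 101, 8                      # odd modulus, r = 256 > n
      n_prime = montgomery_params(n, r_bits)
      a, b = 57, 88
      a_bar = (a << r_bits) % n               # Montgomery form: a*r mod n
      b_bar = (b << r_bits) % n
      prod_bar = redc(a_bar * b_bar, n, r_bits, n_prime)      # (a*b)*r mod n
      assert redc(prod_bar, n, r_bits, n_prime) == (a * b) % n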

  14. Iterative performance of various formulations of the SPN equations

    NASA Astrophysics Data System (ADS)

    Zhang, Yunhuang; Ragusa, Jean C.; Morel, Jim E.

    2013-11-01

    In this paper, the Standard, Composite, and Canonical forms of the Simplified PN (SPN) equations are reviewed and their corresponding iterative properties are compared. The Gauss-Seidel (FLIP), Explicit, and preconditioned Source Iteration iterative schemes have been analyzed for both isotropic and highly anisotropic (Fokker-Planck) scattering. The iterative performance of the various SPN forms is assessed using Fourier analysis, corroborated with numerical experiments.

  15. Accelerated Path-following Iterative Shrinkage Thresholding Algorithm with Application to Semiparametric Graph Estimation

    PubMed Central

    Zhao, Tuo; Liu, Han

    2016-01-01

    We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, i.e., APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results which do not exist in the existing literature. Thorough numerical results are provided to back up our theory. PMID:28133430
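
    The shared engine of PISTA and APISTA is the iterative shrinkage thresholding step: a gradient move followed by soft thresholding. A minimal plain-ISTA sketch for the convex lasso case; the paper's path-following and nonconvex machinery are not reproduced:

      import numpy as np

      def ista(A, b, lam, step=None, iters=500):
          """Plain ISTA for 0.5*||Ax-b||^2 + lam*||x||_1: gradient step,
          then elementwise soft thresholding."""
          if step is None:
              step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              z = x - step * (A.T @ (A @ x - b))       # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
          return x

      rng = np.random.default_rng(0)
      A = rng.normal(size=(50, 100))
      x_true = np.zeros(100); x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]
      b = A @ x_true + 0.01 * rng.normal(size=50)
      print(np.nonzero(np.round(ista(A, b, lam=0.1), 2))[0])   # recovers the support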

  16. Application Of Iterative Reconstruction Techniques To Conventional Circular Tomography

    NASA Astrophysics Data System (ADS)

    Ghosh Roy, D. N.; Kruger, R. A.; Yih, B. C.; Del Rio, S. P.; Power, R. L.

    1985-06-01

    Two "point-by-point" iteration procedures, namely, Iterative Least Square Technique (ILST) and Simultaneous Iterative Reconstructive Technique (SIRT) were applied to classical circular tomographic reconstruction. The technique of tomosynthetic DSA was used in forming the tomographic images. Reconstructions of a dog's renal and neck anatomy are presented.

  17. Selected Bibliography on Optimizing Techniques in Statistics

    DTIC Science & Technology

    1981-08-01

    1958). Iterative solutions of likelihood equations, Biometrika 14, 128-130. Unland, A. W. and Smith, W. N. (1959). The use of Lagrange multipliers...373. Kubicek, M., Marek, M. and Eckert, E. (1971). Quasilinearized regression, Technometrics 13 (3), 601-608. Smith, F. B. and ham, D. F. (1971). Pm-parameter, J. Amer. Statist. Ass. 67, 641-646. Theobald, C. M. (1975). An inequality with application to multivariate analysis, Biometrika 62, 461-466

  18. Iterative Brinkman penalization for remeshed vortex methods

    NASA Astrophysics Data System (ADS)

    Hejlesen, Mads Mølholm; Koumoutsakos, Petros; Leonard, Anthony; Walther, Jens Honoré

    2015-01-01

    We introduce an iterative Brinkman penalization method for the enforcement of the no-slip boundary condition in remeshed vortex methods. In the proposed method, the Brinkman penalization is applied iteratively only in the neighborhood of the body. This allows for using significantly larger time steps than is customary in Brinkman penalization, thus reducing its computational cost while maintaining the capability of the method to handle complex geometries. We demonstrate the accuracy of our method by considering challenging benchmark problems such as flow past an impulsively started cylinder and flow normal to an impulsively started and accelerated flat plate. We find that the present method significantly enhances the accuracy of the Brinkman penalization technique for simulations of highly unsteady flows past complex geometries.

  19. New iterative solvers for the NAG Libraries

    SciTech Connect

    Salvini, S.; Shaw, G.

    1996-12-31

    The purpose of this paper is to introduce the work which has been carried out at NAG Ltd to update the iterative solvers for sparse systems of linear equations, both symmetric and unsymmetric, in the NAG Fortran 77 Library. Our current plans to extend this work and include it in our other numerical libraries in our range are also briefly mentioned. We have added to the Library the new Chapter F11, entirely dedicated to sparse linear algebra. At Mark 17, the F11 Chapter includes sparse iterative solvers, preconditioners, utilities and black-box routines for sparse symmetric (both positive-definite and indefinite) linear systems. Mark 18 will add solvers, preconditioners, utilities and black-boxes for sparse unsymmetric systems: the development of these has already been completed.

  20. ITER Shape Controller and Transport Simulations

    SciTech Connect

    Casper, T A; Meyer, W H; Pearlstein, L D; Portone, A

    2007-05-31

    We currently use the CORSICA integrated modeling code for scenario studies for both the DIII-D and ITER experiments. In these simulations, free- or fixed-boundary equilibria are simultaneously converged with thermal evolution determined from transport models providing temperature and current density profiles. Using a combination of fixed boundary evolution followed by free-boundary calculation to determine the separatrix and coil currents. In the free-boundary calculation, we use the state-space controller representation with transport simulations to provide feedback modeling of shape, vertical stability and profile control. In addition to a tightly coupled calculation with simulator and controller imbedded inside CORSICA, we also use a remote procedure call interface to couple the CORSICA non-linear plasma simulations to the controller environments developed within the Mathworks Matlab/Simulink environment. We present transport simulations using full shape and vertical stability control with evolution of the temperature profiles to provide simulations of the ITER controller and plasma response.

  1. Statistical Neurodynamics.

    NASA Astrophysics Data System (ADS)

    Paine, Gregory Harold

    1982-03-01

    The primary objective of the thesis is to explore the dynamical properties of small nerve networks by means of the methods of statistical mechanics. To this end, a general formalism is developed and applied to elementary groupings of model neurons which are driven by either constant (steady state) or nonconstant (nonsteady state) forces. Neuronal models described by a system of coupled, nonlinear, first-order, ordinary differential equations are considered. A linearized form of the neuronal equations is studied in detail. A Lagrange function corresponding to the linear neural network is constructed which, through a Legendre transformation, provides a constant of motion. By invoking the Maximum-Entropy Principle with the single integral of motion as a constraint, a probability distribution function for the network in a steady state can be obtained. The formalism is implemented for some simple networks driven by a constant force; accordingly, the analysis focuses on a study of fluctuations about the steady state. In particular, a network composed of N noninteracting neurons, termed Free Thinkers, is considered in detail, with a view to interpretation and numerical estimation of the Lagrange multiplier corresponding to the constant of motion. As an archetypical example of a net of interacting neurons, the classical neural oscillator, consisting of two mutually inhibitory neurons, is investigated. It is further shown that in the case of a network driven by a nonconstant force, the Maximum-Entropy Principle can be applied to determine a probability distribution functional describing the network in a nonsteady state. The above examples are reconsidered with nonconstant driving forces which produce small deviations from the steady state. Numerical studies are performed on simplified models of two physical systems: the starfish central nervous system and the mammalian olfactory bulb. Discussions are given as to how statistical neurodynamics can be used to gain a better

  2. Iterative Reconstruction of Coded Source Neutron Radiographs

    SciTech Connect

    Santos-Villalobos, Hector J; Bingham, Philip R; Gregor, Jens

    2012-01-01

    Use of a coded source facilitates high-resolution neutron imaging but requires that the radiographic data be deconvolved. In this paper, we compare direct deconvolution with two different iterative algorithms, namely, one based on direct deconvolution embedded in an MLE-like framework and one based on a geometric model of the neutron beam and a least squares formulation of the inverse imaging problem.

  3. Iterative solution of high order compact systems

    SciTech Connect

    Spotz, W.F.; Carey, G.F.

    1996-12-31

    We have recently developed a class of finite difference methods which provide higher accuracy and greater stability than standard central or upwind difference methods, but still reside on a compact patch of grid cells. In the present study we investigate the performance of several gradient-type iterative methods for solving the associated sparse systems. Both serial and parallel performance studies have been made. Representative examples are taken from elliptic PDEs for diffusion, convection-diffusion, and viscous flow applications.

  4. Disruptions, loads, and dynamic response of ITER

    SciTech Connect

    Nelson, B.; Riemer, B.; Sayer, R.; Strickler, D.; Barabaschi, P.; Ioki, K.; Johnson, G.; Shimizu, K.; Williamson, D.

    1995-12-31

    Plasma disruptions and the resulting electromagnetic loads are critical to the design of the vacuum vessel and in-vessel components of the International Thermonuclear Experimental Reactor (ITER). This paper describes the status of plasma disruption simulations and related analysis, including the dynamic response of the vacuum vessel and in-vessel components, stresses and deflections in the vacuum vessel, and reaction loads in the support structures.

  5. Iterates of a Berezin-type transform

    NASA Astrophysics Data System (ADS)

    Liu, Congwen

    2007-05-01

    Let B be the open unit ball and dV the Lebesgue measure normalized so that the measure of B equals 1. For suitable f, the Berezin-type transform of f is defined by an integral against a Berezin-type kernel; we prove that its iterates converge to the Poisson extension of the boundary values of f as k → ∞. This can be viewed as a higher dimensional generalization of a previous result obtained independently by Englis and Zhu.

  6. Iterative solution of the supereigenvalue model

    NASA Astrophysics Data System (ADS)

    Plefka, Jan C.

    1995-02-01

    An integral form of the discrete superloop equations for the supereigenvalue model of Alvarez-Gaumé, Itoyama, Mañes and Zadra is given. By a change of variables from coupling constants to moments we find a compact form of the planar solution for general potentials. In this framework an iterative scheme for the calculation of higher genera contributions to the free energy and the multi-loop correlators is developed. We present explicit results for genus one.

  7. Fourier analysis of the SOR iteration

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Trefethen, L. N.

    1986-01-01

    The SOR iteration for solving linear systems of equations depends upon an overrelaxation factor omega. It is shown that for the standard model problem of Poisson's equation on a rectangle, the optimal omega and corresponding convergence rate can be rigorously obtained by Fourier analysis. The trick is to tilt the space-time grid so that the SOR stencil becomes symmetrical. The tilted grid also gives insight into the relation between convergence rates of several variants.
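
    The analysis can be checked numerically: for the model problem, SOR with ω = 2/(1 + sin(πh)) converges dramatically faster than Gauss-Seidel (ω = 1). A minimal sketch of the sweep; the grid size and right-hand side are illustrative:

      import numpy as np

      # SOR for the 2-D Poisson model problem on an m x m interior grid.
      m = 50
      h = 1.0 / (m + 1)
      omega = 2.0 / (1.0 + np.sin(np.pi * h))   # the rigorously optimal omega
      u = np.zeros((m + 2, m + 2))              # boundary values fixed at zero
      f = np.ones((m + 2, m + 2))
      for sweep in range(500):
          for i in range(1, m + 1):
              for j in range(1, m + 1):
                  gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                               + h * h * f[i, j])      # Gauss-Seidel value
                  u[i, j] += omega * (gs - u[i, j])    # overrelaxed update
      print(u[m // 2, m // 2])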

  8. ICRF Review: From ERASMUS To ITER

    SciTech Connect

    Weynants, R. R.

    2009-11-26

    This is a personal account of how I saw ICRF evolve since 1974, with a presentation that is ordered according to the topics: heating, antenna coupling, impurity generation/mitigation and system technology. The nature of the main issues is each time reviewed, recent findings are incorporated, and it is shown how the ICRF community has been able to react to sometimes rapidly changing demands and is indeed resolutely preparing ITER.

  9. The dynamics of iterated transportation simulations

    SciTech Connect

    Nagel, K.; Rickert, M.; Simon, P.M.

    1998-12-01

    Transportation-related decisions of people often depend on what everybody else is doing. For example, decisions about mode choice, route choice, activity scheduling, etc., can depend on congestion, caused by the aggregated behavior of others. From a conceptual viewpoint, this consistency problem causes a deadlock, since nobody can start planning because they do not know what everybody else is doing. It is the process of iterations that is examined in this paper as a method for solving the problem. In this paper, the authors concentrate on the aspect of the iterative process that is probably the most important one from a practical viewpoint, and that is the "uniqueness" or "robustness" of the results. Also, they define robustness more in terms of common sense than in terms of a mathematical formalism. For this, they do not only want a single iterative process to converge, but they want the result to be independent of any particular implementation. The authors run many computational experiments, sometimes with variations of the same code, sometimes with totally different code, in order to see if any of the results are robust against these changes.

  10. Conformal mapping and convergence of Krylov iterations

    SciTech Connect

    Driscoll, T.A.; Trefethen, L.N.

    1994-12-31

    Connections between conformal mapping and matrix iterations have been known for many years. The idea underlying these connections is as follows. Suppose the spectrum of a matrix or operator A is contained in a Jordan region E in the complex plane with 0 not an element of E. Let φ(z) denote a conformal map of the exterior of E onto the exterior of the unit disk, with φ(∞) = ∞. Then 1/|φ(0)| is an upper bound for the optimal asymptotic convergence factor of any Krylov subspace iteration. This idea can be made precise in various ways, depending on the matrix iterations, on whether A is finite or infinite dimensional, and on what bounds are assumed on the non-normality of A. This paper explores these connections for a variety of matrix examples, making use of a new MATLAB Schwarz-Christoffel Mapping Toolbox developed by the first author. Unlike the earlier Fortran Schwarz-Christoffel package SCPACK, the new toolbox computes exterior as well as interior Schwarz-Christoffel maps, making it easy to experiment with spectra that are not necessarily symmetric about an axis.

  11. Iterative pass optimization of sequence data

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure, provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed.

  13. Iterative solution of the semiconductor device equations

    SciTech Connect

    Bova, S.W.; Carey, G.F.

    1996-12-31

    Most semiconductor device models can be described by a nonlinear Poisson equation for the electrostatic potential coupled to a system of convection-reaction-diffusion equations for the transport of charge and energy. These equations are typically solved in a decoupled fashion, with, e.g., Newton's method used to obtain the resulting sequences of linear systems. The Poisson problem leads to a symmetric, positive definite system which we solve iteratively using conjugate gradients. The transport equations lead to nonsymmetric, indefinite systems, thereby complicating the selection of an appropriate iterative method. Moreover, their solutions exhibit steep layers and are subject to numerical oscillations and instabilities if standard Galerkin-type discretization strategies are used. In the present study, we use an upwind finite element technique for the transport equations. We also evaluate the performance of different iterative methods for the transport equations and investigate various preconditioners for a few generalized gradient methods. Numerical examples are given for a representative two-dimensional depletion MOSFET.
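
    For the Poisson half of such a decoupled scheme, a conjugate gradient solve looks like the sketch below (Python; the 1D finite-difference grid and the stand-in charge density are illustrative assumptions, not the paper's discretization).

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import cg

      n = 200
      h = 1.0 / (n + 1)
      # Symmetric positive definite discrete Laplacian, the structure CG relies on.
      A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) / h**2
      rho = np.ones(n)              # stand-in for the (scaled) charge term
      phi, info = cg(A, rho)
      print(info)                   # 0 indicates CG converged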

  14. Recent ADI iteration analysis and results

    SciTech Connect

    Wachspress, E.L.

    1994-12-31

    Some recent ADI iteration analysis and results are discussed. Discovery that the Lyapunov and Sylvester matrix equations are model ADI problems stimulated much research on ADI iteration with complex spectra. The ADI rational Chebyshev analysis parallels the classical linear Chebyshev theory. Two distinct approaches have been applied to these problems. First, parameters which were optimal for real spectra were shown to be nearly optimal for certain families of complex spectra. In the linear case these were spectra bounded by ellipses in the complex plane. In the ADI rational case these were spectra bounded by "elliptic-function regions". The logarithms of the latter appear like ellipses, and the logarithms of the optimal ADI parameters for these regions are similar to the optimal parameters for linear Chebyshev approximation over superimposed ellipses. W.B. Jordan's bilinear transformation of real variables to reduce the two-variable problem to one variable was generalized into the complex plane. This was needed for ADI iterative solution of the Sylvester equation.
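
    The basic double sweep being analyzed can be sketched as follows for the Sylvester equation AX − XB = C (Python; the shift parameters are assumed given, and choosing them well for complex spectra is precisely the subject of the analysis above).

      import numpy as np

      def adi_sylvester(A, B, C, shifts):
          """One ADI pass for AX - XB = C with a given list of real shifts p."""
          n, m = C.shape
          X = np.zeros((n, m))
          In, Im = np.eye(n), np.eye(m)
          for p in shifts:
              # Half step: (A - p I) X = X_old (B - p I) + C
              X = np.linalg.solve(A - p * In, X @ (B - p * Im) + C)
              # Full step: X_new (B + p I) = (A + p I) X - C, solved via transposes.
              X = np.linalg.solve((B + p * Im).T, ((A + p * In) @ X - C).T).T
          return X

    After one double step the error is multiplied by (A + pI)(A - pI)^(-1) on the left and (B - pI)(B + pI)^(-1) on the right, which is why the shift parameters must be matched to the two spectra.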

  15. Iterative Decoding of Concatenated Codes: A Tutorial

    NASA Astrophysics Data System (ADS)

    Regalia, Phillip A.

    2005-12-01

    The turbo decoding algorithm of a decade ago constituted a milestone in error-correction coding for digital communications, and has inspired extensions to generalized receiver topologies, including turbo equalization, turbo synchronization, and turbo CDMA, among others. Despite an accrued understanding of iterative decoding over the years, the "turbo principle" remains elusive to master analytically, thereby inciting interest from researchers outside the communications domain. In this spirit, we develop a tutorial presentation of iterative decoding for parallel and serial concatenated codes, in terms hopefully accessible to a broader audience. We motivate iterative decoding as a computationally tractable attempt to approach maximum-likelihood decoding, and characterize fixed points in terms of a "consensus" property between constituent decoders. We review how the decoding algorithm for both parallel and serial concatenated codes coincides with an alternating projection algorithm, which allows one to identify conditions under which the algorithm indeed converges to a maximum-likelihood solution, in terms of particular likelihood functions factoring into the product of their marginals. The presentation emphasizes a common framework applicable to both parallel and serial concatenated codes.

  16. ITER Creation Safety File Expertise Results

    NASA Astrophysics Data System (ADS)

    Perrault, D.

    2013-06-01

    In March 2010, the ITER operator delivered the facility safety file to the French "Autorité de Sûreté Nucléaire" (ASN) as part of its request for the creation decree, legally necessary before building works can begin on the site. The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support of the ASN, recently completed its expertise of the safety measures proposed for ITER, on the basis of this file and of additional technical documents from the operator. This paper presents the IRSN's main conclusions. In particular, they focus on the radioactive materials involved, the safety and radiation protection demonstration (suitability of risk management measures…), foreseeable accidents, the design of buildings and safety-important components and, finally, the wastes and effluents to be produced. This assessment was just the first legally-required step in the on-going safety monitoring of the ITER project, which will include other complete regulatory re-evaluations.

  17. Bayesian classification of polarimetric SAR images using adaptive a priori probabilities

    NASA Technical Reports Server (NTRS)

    Van Zyl, J. J.; Burnette, C. F.

    1992-01-01

    The problem of classifying earth terrain by observed polarimetric scattering properties is tackled with an iterative Bayesian scheme that uses a priori probabilities adaptively. The first classification is based on the use of fixed and not necessarily equal a priori probabilities, and successive iterations change the a priori probabilities adaptively. The approach is applied to an SAR image in which a single water body covers 10 percent of the image area. The classification accuracies for ocean, urban, vegetated, and total area increase, and the percentage of reclassified pixels decreases greatly as the iteration number increases. The iterative scheme is found to improve the a posteriori classification accuracy of maximum likelihood classifiers by iteratively using the local homogeneity in polarimetric SAR images. A few iterations can improve the classification accuracy significantly without sacrificing key high-frequency detail or edges in the image.
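
    A schematic of that adaptive-prior loop is given below (Python), assuming per-pixel class likelihoods have already been computed from the polarimetric data; the array shapes and the 3x3 neighborhood used to re-estimate the priors are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def iterative_bayes(likelihood, n_iter=4):
          """likelihood: (H, W, K) array of p(observation | class)."""
          H, W, K = likelihood.shape
          prior = np.full((H, W, K), 1.0 / K)   # first pass: fixed, equal priors
          for _ in range(n_iter):
              post = prior * likelihood
              post /= post.sum(axis=2, keepdims=True)
              # Adapt priors from local class frequencies of the current labeling.
              labels = post.argmax(axis=2)
              onehot = np.eye(K)[labels]                        # (H, W, K)
              prior = uniform_filter(onehot, size=(3, 3, 1)) + 1e-6
              prior /= prior.sum(axis=2, keepdims=True)
          return post.argmax(axis=2)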

  18. Design studies for ITER x-ray diagnostics

    SciTech Connect

    Hill, K.W.; Bitter, M.; von Goeler, S.; Hsuan, H.

    1995-01-01

    Concepts for adapting conventional tokamak x-ray diagnostics to the harsh radiation environment of ITER include use of grazing-incidence (GI) x-ray mirrors or man-made Bragg multilayer (ML) elements to remove the x-ray beam from the neutron beam, or use of bundles of glass-capillary x-ray "light pipes" embedded in radiation shields to reduce the neutron/gamma-ray fluxes onto the detectors while maintaining usable x-ray throughput. The x-ray optical element with the broadest bandwidth and highest throughput, the GI mirror, can provide adequate lateral deflection (10 cm for a deflected-path length of 8 m) at x-ray energies up to 12, 22, or 30 keV for one, two, or three deflections, respectively. This element can be used with the broad band, high intensity x-ray imaging system (XIS), the pulse-height analysis (PHA) survey spectrometer, or the high resolution Johann x-ray crystal spectrometer (XCS), which is used for ion-temperature measurement. The ML mirrors can isolate the detector from the neutron beam with a single deflection for energies up to 50 keV, but have much narrower bandwidth and lower x-ray power throughput than do the GI mirrors; they are unsuitable for use with the XIS or PHA, but they could be used with the XCS; in particular, these deflectors could be used between ITER and the biological shield to avoid direct plasma neutron streaming through the biological shield. Graded-d ML mirrors have good reflectivity from 20 to 70 keV, but still at grazing angles (<3 mrad). The efficiency at 70 keV for double reflection (10 percent), as required for adequate separation of the x-ray and neutron beams, is high enough for PHA requirements, but not for the XIS. Further optimization may be possible.

  19. Iterative reconstruction for bioluminescence tomography with total variation regularization

    NASA Astrophysics Data System (ADS)

    Jin, Wenma; He, Yonghong

    2012-12-01

    Bioluminescence tomography (BLT) is an instrumental molecular imaging modality designed for the 3D location and quantification of bioluminescent source distributions in vivo. In our context, the diffusion approximation (DA) to the radiative transfer equation (RTE) is utilized to model the forward process of light propagation. Mathematically, solution uniqueness does not hold for DA-based BLT, which is an inverse source problem of partial differential equations and hence is highly ill-posed. In the current work, we concentrate on a general regularization framework for BLT with Bregman distance as data fidelity and total variation (TV) as regularization. Two specializations of the Bregman distance, the least squares (LS) distance and the Kullback-Leibler (KL) divergence, which correspond to the Gaussian and Poisson environments respectively, are demonstrated, and the resulting regularization problems are denoted as LS+TV and KL+TV. Based on the constrained Landweber (CL) scheme and the expectation maximization (EM) algorithm for BLT, iterative algorithms for the LS+TV and KL+TV problems in the context of BLT are developed, which are denoted as CL-TV and EM-TV respectively. They are both essentially gradient-based algorithms alternately performing the standard CL or EM iteration step and the TV correction step, which requires the solution of a weighted ROF model. Chambolle's duality-based approach is adapted and extended to solving the weighted ROF subproblem. Numerical experiments for a 3D heterogeneous mouse phantom are carried out and preliminary results are reported to verify and evaluate the proposed algorithms. It is found that for piecewise-constant sources both CL-TV and EM-TV outperform the conventional CL and EM algorithms for BLT.

  20. Adaptive Management for Urban Watersheds: The Slavic Village Pilot Project

    EPA Science Inventory

    Adaptive management is an environmental management strategy that uses an iterative process of decision-making to reduce the uncertainty in environmental management via system monitoring. A central tenet of adaptive management is that management involves a learning process that ca...

  1. A unified treatment of some perturbed fixed point iterative methods with an infinite pool of operators

    NASA Astrophysics Data System (ADS)

    Nikazad, Touraj; Abbasi, Mokhtar

    2017-04-01

    In this paper, we introduce a subclass of strictly quasi-nonexpansive operators which consists of well-known operators such as paracontracting operators (e.g., strictly nonexpansive operators, metric projections, Newton and gradient operators), subgradient projections, a useful part of cutter operators, strictly relaxed cutter operators and locally strongly Fejér operators. The members of this subclass, which can be discontinuous, may be employed by fixed point iteration methods; in particular, iterative methods used in convex feasibility problems. The closedness of this subclass, with respect to composition and convex combination of operators, makes it useful and remarkable. Another advantage of members of this subclass is the possibility to adapt them to handle convex constraints. We give a convergence result, under mild conditions, for a perturbation-resilient iterative method which is based on an infinite pool of operators in this subclass. The perturbation-resilient iterative methods are relevant and important for their possible use in the framework of the recently developed superiorization methodology for constrained minimization problems. To assess the convergence result, the class of operators, and the assumed conditions, we illustrate some extensions of existing research and some new results.

  2. Helicopter trim analysis by shooting and finite element methods with optimally damped Newton iterations

    NASA Technical Reports Server (NTRS)

    Achar, N. S.; Gaonkar, G. H.

    1993-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used, and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
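
    The damped Newton update itself can be sketched generically (Python): solve the Newton system, then pick the damping factor that most reduces the residual norm over a small trial grid. The toy test system and the grid of trial dampings are illustrative assumptions; the paper selects the damping parameter optimally for the trim equations.

      import numpy as np

      def damped_newton(F, J, x, tol=1e-10, max_iter=50):
          for _ in range(max_iter):
              r = F(x)
              if np.linalg.norm(r) < tol:
                  break
              step = np.linalg.solve(J(x), -r)
              # Damping chosen to minimize the residual norm along the Newton step.
              lam = min(np.geomspace(1.0, 1e-3, 12),
                        key=lambda t: np.linalg.norm(F(x + t * step)))
              x = x + lam * step
          return x

      # Usage on a small nonlinear system with an analytic Jacobian.
      F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])
      J = lambda x: np.array([[2 * x[0], 1.0], [1.0, -2 * x[1]]])
      print(damped_newton(F, J, np.array([2.0, 2.0])))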

  3. Iterative image-domain decomposition for dual-energy CT

    SciTech Connect

    Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the

  4. The role of ITER in the US MFE Program Strategy

    SciTech Connect

    Glass, A.J.

    1992-07-01

    I want to discuss the role of ITER in the US MFE Program Strategy. I should stress that any opinions I present are purely my own. I'm not speaking ex cathedra, I'm not speaking for the ITER Home Team, and I'm not speaking for the Lawrence Livermore National Laboratory. I'm giving my own personal opinions. In discussing the role of ITER, we have to recognize that ITER plays several roles, and I want to identify how ITER influences MFE program strategy through each of its roles.

  6. Speeding up Newton-type iterations for stiff problems

    NASA Astrophysics Data System (ADS)

    Gonzalez-Pinto, S.; Rojas-Bello, R.

    2005-09-01

    Iterative schemes based on the Cooper and Butcher iteration [5] are considered, in order to implement highly implicit Runge-Kutta methods on stiff problems. By introducing two appropriate parameters in the scheme, a new iteration, making use of the last two iterates, is proposed. Specific schemes of this type for the Gauss, Radau IA-IIA and Lobatto IIIA-B-C processes are developed. It is also shown that in many situations the new iteration presents a faster convergence than the original.

  7. Evaluation of ITER MSE Viewing Optics

    SciTech Connect

    Allen, S; Lerner, S; Morris, K; Jayakumar, J; Holcomb, C; Makowski, M; Latkowski, J; Chipman, R

    2007-03-26

    The Motional Stark Effect (MSE) diagnostic on ITER determines the local plasma current density by measuring the polarization angle of light resulting from the interaction of a high energy neutral heating beam and the tokamak plasma. This light signal has to be transmitted from the edge and core of the plasma to a polarization analyzer located in the port plug. The optical system should either preserve the polarization information, or it should be possible to reliably calibrate any changes induced by the optics. This LLNL Work for Others project for the US ITER Project Office (USIPO) is focused on the design of the viewing optics for both the edge and core MSE systems. Several design constraints were considered, including: image quality, lack of polarization aberrations, ease of construction and cost of mirrors, neutron shielding, and geometric layout in the equatorial port plugs. The edge MSE optics are located in ITER equatorial port 3 and view Heating Beam 5, and the core system is located in equatorial port 1 viewing heating beam 4. The current work is an extension of previous preliminary design work completed by the ITER central team (ITER resources were not available to complete a detailed optimization of this system, and then the MSE was assigned to the US). The optimization of the optical systems at this level was done with the ZEMAX optical ray tracing code. The final LLNL designs decreased the "blur" in the optical system by nearly an order of magnitude, and the polarization blur was reduced by a factor of 3. The mirror sizes were reduced with an estimated cost savings of a factor of 3. The throughput of the system was greater than or equal to the previous ITER design. It was found that optical ray tracing was necessary to accurately measure the throughput. Metal mirrors, while they can introduce polarization aberrations, were used close to the plasma because of the anticipated high heat, particle, and neutron loads. These mirrors formed an intermediate

  8. On the particular integrals of the Prandtl-Busemann iteration equations for the flow of a compressible fluid

    NASA Technical Reports Server (NTRS)

    Kaplan, Carl

    1951-01-01

    The particular integrals of the second-order and third-order Prandtl-Busemann iteration equations for the flow of a compressible fluid are obtained by means of the method in which the complex conjugate variables are utilized as the independent variables of the analysis. The assumption is made that the Prandtl-Glauert solution of the linearized or first-order iteration equation for the two-dimensional flow of a compressible fluid is known. The forms of the particular integrals, derived for subsonic flow, are readily adapted to supersonic flows with only a change in sign of one of the parameters of the problem.

  9. Adaptive image steganography using contourlet transform

    NASA Astrophysics Data System (ADS)

    Fakhredanesh, Mohammad; Rahmati, Mohammad; Safabakhsh, Reza

    2013-10-01

    This work presents adaptive image steganography methods which locate suitable regions for embedding by contourlet transform, while embedded message bits are carried in discrete cosine transform coefficients. The first proposed method utilizes contourlet transform coefficients to select contour regions of the image. In the embedding procedure, some of the contourlet transform coefficients may change, which may cause errors at the message extraction phase. We propose a novel iterative procedure to resolve such problems. In addition, we propose an improved version of the first method that uses an advanced embedding operation to boost its security. Experimental results show that the proposed base method is an imperceptible image steganography method with zero retrieval error rate. Comparisons with other steganography methods which utilize contourlet transform show that our proposed method is able to retrieve all messages perfectly, whereas the others fail. Moreover, the proposed method outperforms the ContSteg method in terms of PSNR and of resistance to the higher-order statistics steganalysis method. Experimental evaluations of our methods with the well-known DCT-based steganography algorithms have demonstrated that our improved method has superior performance in terms of PSNR and SSIM, and is more secure against the steganalysis attack.

  10. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    NASA Astrophysics Data System (ADS)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real-time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors both in terms of quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
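
    Of the three reconstructors, the Kaczmarz step is the simplest to state: sweep over the rows of the system A x = b, projecting the iterate onto each row's hyperplane. The sketch below (Python) uses a random matrix as a stand-in for the actual wavefront-to-layer geometry.

      import numpy as np

      def kaczmarz(A, b, n_sweeps=25, relax=1.0):
          x = np.zeros(A.shape[1])
          row_norms = (A * A).sum(axis=1)
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):   # project onto row i's hyperplane
                  x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((300, 100))
      x_true = rng.standard_normal(100)
      x_hat = kaczmarz(A, A @ x_true)
      print(np.linalg.norm(x_hat - x_true))   # small: the system is consistent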

  11. Stupid statistics!

    PubMed

    Tellinghuisen, Joel

    2008-01-01

    The method of least squares is probably the most powerful data analysis tool available to scientists. Toward a fuller appreciation of that power, this work begins with an elementary review of statistics fundamentals, and then progressively increases in sophistication as the coverage is extended to the theory and practice of linear and nonlinear least squares. The results are illustrated in application to data analysis problems important in the life sciences. The review of fundamentals includes the role of sampling and its connection to probability distributions, the Central Limit Theorem, and the importance of finite variance. Linear least squares are presented using matrix notation, and the significance of the key probability distributions (Gaussian, chi-square, and t) is illustrated with Monte Carlo calculations. The meaning of correlation is discussed, including its role in the propagation of error. When the data themselves are correlated, special methods are needed for the fitting, as they are also when fitting with constraints. Nonlinear fitting gives rise to nonnormal parameter distributions, but the 10% Rule of Thumb suggests that such problems will be insignificant when the parameter is sufficiently well determined. Illustrations include calibration with linear and nonlinear response functions, the dangers inherent in fitting inverted data (e.g., Lineweaver-Burk equation), an analysis of the reliability of the van't Hoff analysis, the problem of correlated data in the Guggenheim method, and the optimization of isothermal titration calorimetry procedures using the variance-covariance matrix for experiment design. The work concludes with illustrations on assessing and presenting results.
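
    The matrix form of weighted linear least squares that the review builds on fits in a few lines (Python); the straight-line model and synthetic data below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      t = np.linspace(0, 10, 40)
      sigma = 0.3 * np.ones_like(t)                # known per-point uncertainties
      y = 2.0 + 0.5 * t + rng.normal(0.0, sigma)   # data from y = 2 + 0.5 t + noise

      X = np.column_stack([np.ones_like(t), t])    # design matrix for a line
      W = np.diag(1.0 / sigma**2)                  # weights = inverse variances
      cov = np.linalg.inv(X.T @ W @ X)             # variance-covariance matrix
      beta = cov @ X.T @ W @ y                     # weighted LS estimates
      print(beta, np.sqrt(np.diag(cov)))           # parameters and standard errors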

  12. Simultaneous iterative reconstruction technique for diffuse optical tomography imaging: iteration criterion and image recognition

    NASA Astrophysics Data System (ADS)

    Yu, Zong-Han; Wu, Chun-Ming; Lin, Yo-Wei; Chuang, Ming-Lung; Tsai, Jui-che; Sun, Chia-Wei

    2008-02-01

    Diffuse optical tomography (DOT) is an emerging technique for biomedical imaging. The imaging quality of DOT strongly depends on the reconstruction algorithm. In this paper, four inhomogeneities with various shapes of absorption distributions are simulated by a continuous-wave DOT system. The DOT images are obtained based on the simultaneous iterative reconstruction technique (SIRT) method. To solve the trade-off problem between the time consumption of the reconstruction process and the accuracy of the reconstructed image, the iteration process needs an optimization criterion in the algorithm. In this paper, the comparison between the root mean square error (RMSE) and the convergence rate (CR) in the SIRT algorithm is demonstrated. From the simulation results, the CR reveals information about the global minimum in the iteration process. Based on the CR calculation, the SIRT can offer more efficient image reconstruction in a DOT system.

  13. Acceleration of iterative image reconstruction for x-ray imaging for security applications

    NASA Astrophysics Data System (ADS)

    Degirmenci, Soysal; Politte, David G.; Bosch, Carl; Tricha, Nawfel; O'Sullivan, Joseph A.

    2015-03-01

    Three-dimensional image reconstruction for scanning baggage in security applications is becoming increasingly important. Compared to medical x-ray imaging, security imaging systems must be designed for a greater variety of objects. There is a lot of variation in attenuation, and nearly every bag scanned has metal present, potentially yielding significant artifacts. Statistical iterative reconstruction algorithms are known to reduce metal artifacts and yield quantitatively more accurate estimates of attenuation than linear methods. For iterative image reconstruction algorithms to be deployed at security checkpoints, the images must be quantitatively accurate and the convergence speed must be increased dramatically. There are many approaches for increasing convergence; two approaches are described in detail in this paper. The first approach includes a scheduled change in the number of ordered subsets over iterations and a reformulation of convergent ordered subsets originally proposed by Ahn, Fessler et al. [1]. The second approach is based on varying the multiplication factor in front of the additive step in the alternating minimization (AM) algorithm, resulting in more aggressive updates over iterations. Each approach is implemented on real data from a SureScan x 1000 Explosive Detection System and compared to straightforward implementations of the alternating minimization algorithm of O'Sullivan and Benac [2] with a Huber-type edge-preserving penalty, originally proposed by Lange [3].

  14. Corneal topography matching by iterative registration.

    PubMed

    Wang, Junjie; Elsheikh, Ahmed; Davey, Pinakin G; Wang, Weizhuo; Bao, Fangjun; Mottershead, John E

    2014-11-01

    Videokeratography is used for the measurement of corneal topography in overlapping portions (or maps) which must later be joined together to form the overall topography of the cornea. The separate portions are measured from different viewpoints and therefore must be brought together by registration of measurement points in the regions of overlap. The central map is generally the most accurate, but all maps are measured with uncertainty that increases towards the periphery. It becomes the reference (or static) map, and the peripheral (or dynamic) maps must then be transformed by rotation and translation so that the overlapping portions are matched. The process known as registration, of determining the necessary transformation, is a well-understood procedure in image analysis and has been applied in several areas of science and engineering. In this article, direct search optimisation using the Nelder-Mead algorithm and several variants of the iterative closest/corresponding point routine are explained and applied to simulated and real clinical data. The measurement points on the static and dynamic maps are generally different so that it becomes necessary to interpolate, which is done using a truncated series of Zernike polynomials. The point-to-plane iterative closest/corresponding point variant has the advantage of releasing certain optimisation constraints that lead to persistent registration and alignment errors when other approaches are used. The point-to-plane iterative closest/corresponding point routine is found to be robust to measurement noise, insensitive to starting values of the transformation parameters and produces high-quality results when using real clinical data.
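
    The core of each registration iteration is the closed-form rigid fit to the current correspondences. The sketch below (Python) shows the simpler point-to-point variant via an SVD of the cross-covariance matrix; the brute-force correspondence search and the omission of the Zernike interpolation of the clinical maps are simplifying assumptions.

      import numpy as np

      def best_rigid_transform(P, Q):
          """Rotation R and translation t minimizing sum ||R p_i + t - q_i||^2."""
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
          D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
          R = Vt.T @ D @ U.T
          return R, cq - R @ cp

      def icp(P, Q, n_iter=20):
          """Iteratively align dynamic points P (N, 3) to static points Q (M, 3)."""
          for _ in range(n_iter):
              # Nearest-neighbor correspondences (brute force for clarity).
              idx = ((P[:, None, :] - Q[None, :, :])**2).sum(-1).argmin(axis=1)
              R, t = best_rigid_transform(P, Q[idx])
              P = P @ R.T + t
          return P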

  15. Illustrating the practice of statistics

    SciTech Connect

    Hamada, Christina A; Hamada, Michael S

    2009-01-01

    The practice of statistics involves analyzing data and planning data collection schemes to answer scientific questions. Issues often arise with the data that must be dealt with and can lead to new procedures. In analyzing data, these issues can sometimes be addressed through the statistical models that are developed. Simulation can also be helpful in evaluating a new procedure. Moreover, simulation coupled with optimization can be used to plan a data collection scheme. The practice of statistics as just described is much more than just using a statistical package. In analyzing the data, it involves understanding the scientific problem and incorporating the scientist's knowledge. In modeling the data, it involves understanding how the data were collected and accounting for limitations of the data where possible. Moreover, the modeling is likely to be iterative, considering a series of models and evaluating the fit of these models. Designing a data collection scheme involves understanding the scientist's goal and staying within his/her budget in terms of time and the available resources. Consequently, a practicing statistician is faced with such tasks and requires skills and tools to do them quickly. We have written this article for students to provide a glimpse of the practice of statistics. To illustrate the practice of statistics, we consider a problem motivated by some precipitation data that our relative, Masaru Hamada, collected some years ago. We describe his rain gauge observational study in Section 2. We describe modeling and an initial analysis of the precipitation data in Section 3. In Section 4, we consider alternative analyses that address potential issues with the precipitation data. In Section 5, we consider the impact of incorporating additional information. We design a data collection scheme to illustrate the use of simulation and optimization in Section 6. We conclude this article in Section 7 with a discussion.

  16. Iterative repair for scheduling and rescheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Deale, Michael

    1991-01-01

    An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill-climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results are also shown of applying the technique to the NASA Space Shuttle ground processing problem. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
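
    A bare-bones version of such an iterative repair loop (Python) starts from a complete but conflicting schedule and repairs violations, occasionally accepting uphill moves so the search can escape local minima; the task/slot conflict model below is a toy stand-in for the constraint framework.

      import math
      import random

      def anneal_repair(n_tasks, n_slots, conflicts, T=2.0, cooling=0.995,
                        steps=20000):
          """conflicts: pairs (i, j) of tasks that may not share a slot."""
          sched = [random.randrange(n_slots) for _ in range(n_tasks)]
          cost = lambda s: sum(s[i] == s[j] for i, j in conflicts)
          c = cost(sched)
          for _ in range(steps):
              if c == 0:
                  break                                 # all constraints satisfied
              i = random.randrange(n_tasks)
              old = sched[i]
              sched[i] = random.randrange(n_slots)      # candidate repair move
              dc = cost(sched) - c
              if dc <= 0 or random.random() < math.exp(-dc / T):
                  c += dc                               # accept (possibly uphill) move
              else:
                  sched[i] = old                        # reject and restore
              T *= cooling
          return sched, c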

  17. Unifying iteration rule for fractal objects

    NASA Astrophysics Data System (ADS)

    Kittel, A.; Parisi, J.; Peinke, J.; Baier, G.; Klein, M.; Rössler, O. E.

    1997-03-01

    We introduce an iteration rule for real numbers capable of generating attractors with dragon-, snowflake-, sponge-, or Swiss-flag-like cross sections. The idea behind it is the mapping of a torus into two (or more) shrunken and twisted tori located inside the previous one. Three distinct parameters define the symmetry, the dimension, and the connectedness or disconnectedness of the fractal object. For some selected triples of parameter values, a couple of well-known fractal geometries (e.g. the Cantor set, the Sierpinski gasket, or the Swiss flag) can be obtained as special cases.

  18. Design of the ITER ICRF Antenna

    SciTech Connect

    Hancock, D.; Nightingale, M.; Bamber, R.; Dalton, N.; Lister, J.; Porton, M.; Shannon, M.; Wilson, D.; Wooldridge, E.; Winkler, K.

    2011-12-23

    The CYCLE consortium has been designing the ITER ICRF antenna since March 2010, supported by an F4E grant. Following a brief introduction to the consortium, this paper: describes the present status and layout of the design; highlights the key mechanical engineering features; shows the expected impact of cooling and radiation issues on the design; and outlines the need for future R&D to support the design process. A key design requirement is the need for the mechanical design and analysis to be consistent with all requirements following from the RF physics and antenna layout optimisation. As such, this paper complements that of Durodie et al.

  19. Iterative procedure for in-situ EUV optical testing with an incoherent source

    SciTech Connect

    Miyawaka, Ryan; Naulleau, Patrick; Zakhor, Avideh

    2009-12-01

    We propose an iterative method for in-situ optical testing under partially coherent illumination that relies on the rapid computation of aerial images. In this method a known pattern is imaged with the test optic at several planes through focus. A model is created that iterates through possible aberration maps until the through-focus series of aerial images matches the experimental result. The computation time of calculating the through-focus series is significantly reduced by a-SOCS, an adapted form of the Sum Of Coherent Systems (SOCS) decomposition. In this method, the Hopkins formulation is described by an operator S which maps the space of pupil aberrations to the space of aerial images. This operator is well approximated by a truncated sum of its spectral components.

  20. A fast iterative recursive least squares algorithm for Wiener model identification of highly nonlinear systems.

    PubMed

    Kazemi, Mahdi; Arefi, Mohammad Mehdi

    2017-03-01

    In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the vector of parameters of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% for a CSTR process in which about 400 data points are used.
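
    The exponentially weighted RLS core that such schemes build on is compact (Python); the regressor construction, forgetting factor, and toy FIR system below are illustrative assumptions rather than the paper's full Wiener-model algorithm.

      import numpy as np

      def rls_update(theta, P, phi, y, lam=0.99):
          """One RLS step: parameters theta, covariance P, regressor phi, output y."""
          k = P @ phi / (lam + phi @ P @ phi)      # gain vector
          theta = theta + k * (y - phi @ theta)    # correct by the prediction error
          P = (P - np.outer(k, phi @ P)) / lam     # covariance update with forgetting
          return theta, P

      # Usage: track y[k] = 1.5 u[k] - 0.7 u[k-1] from streaming data.
      rng = np.random.default_rng(2)
      u = rng.standard_normal(500)
      theta, P = np.zeros(2), 1e3 * np.eye(2)
      for k in range(1, 500):
          phi = np.array([u[k], u[k - 1]])
          y = 1.5 * u[k] - 0.7 * u[k - 1] + 0.01 * rng.standard_normal()
          theta, P = rls_update(theta, P, phi, y)
      print(theta)   # close to [1.5, -0.7]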

  1. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    SciTech Connect

    Samei, Ehsan; Richard, Samuel

    2015-01-15

    Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) against that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, the resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small feature and the large feature tasks, respectively. Compared to FBP and ASIR, MBIR
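
    For reference, a commonly used way of combining these quantities is the non-prewhitening form of the detectability index (whether this is the exact observer model used in the study is an assumption here):

      \[
      d'^2 \;=\; \frac{\left[\iint \lvert W_{\mathrm{task}}(u,v)\rvert^{2}\,
      \mathrm{MTF}^{2}(u,v)\, du\, dv\right]^{2}}
      {\iint \lvert W_{\mathrm{task}}(u,v)\rvert^{2}\,\mathrm{MTF}^{2}(u,v)\,
      \mathrm{NPS}(u,v)\, du\, dv}
      \]

    where the task function W_task encodes the feature being detected, so the 1.5 mm and 25 mm tasks weight different frequency bands of the measured MTF and NPS.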

  3. Adaptive schemes for incomplete quantum process tomography

    SciTech Connect

    Teo, Yong Siah; Englert, Berthold-Georg; Rehacek, Jaroslav; Hradil, Zdenek

    2011-12-15

    We propose an iterative algorithm for incomplete quantum process tomography with the help of quantum state estimation. The algorithm, which is based on the combined principles of maximum likelihood and maximum entropy, yields a unique estimator for an unknown quantum process when one has less than a complete set of linearly independent measurement data to specify the quantum process uniquely. We apply this iterative algorithm adaptively in various situations and so optimize the amount of resources required to estimate a quantum process with incomplete data.

  4. Visual Adaptation

    PubMed Central

    Webster, Michael A.

    2015-01-01

    Sensory systems continuously mold themselves to the widely varying contexts in which they must operate. Studies of these adaptations have played a long and central role in vision science. In part this is because the specific adaptations remain a powerful tool for dissecting vision, by exposing the mechanisms that are adapting. That is, “if it adapts, it's there.” Many insights about vision have come from using adaptation in this way, as a method. A second important trend has been the realization that the processes of adaptation are themselves essential to how vision works, and thus are likely to operate at all levels. That is, “if it's there, it adapts.” This has focused interest on the mechanisms of adaptation as the target rather than the probe. Together both approaches have led to an emerging insight of adaptation as a fundamental and ubiquitous coding strategy impacting all aspects of how we see. PMID:26858985

  5. Final Report on ITER Task Agreement 81-08

    SciTech Connect

    Richard L. Moore

    2008-03-01

    As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.

  6. Statistical computation of tolerance limits

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1993-01-01

    Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances for the one-sided and two-sided cases for the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k's associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
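
    The one-sided factor can be cross-checked against the standard noncentral-t identity k = t'_{n-1,δ}(γ)/√n with noncentrality δ = z_p √n, for content p and confidence γ. The sketch below (Python with SciPy, offered as an independent check rather than as the paper's own codes) reproduces the familiar tabulated values.

      from math import sqrt
      from scipy.stats import nct, norm

      def k_one_sided(n, p=0.90, gamma=0.95):
          """One-sided normal tolerance factor covering proportion p
          with confidence gamma, from a sample of size n."""
          delta = norm.ppf(p) * sqrt(n)
          return nct.ppf(gamma, df=n - 1, nc=delta) / sqrt(n)

      print(k_one_sided(10))   # about 2.355, matching standard tables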

  7. Stability of resistive wall modes with plasma rotation and thick wall in ITER scenario

    NASA Astrophysics Data System (ADS)

    Zheng, L. J.; Kotschenreuther, M.; Chu, M.; Chance, M.; Turnbull, A.

    2004-11-01

    The rotation effect on resistive wall modes (RWMs) is examined for realistically shaped, high-beta tokamak equilibria, including reactor-relevant cases with low Mach number M and realistic thick walls. For low M, stabilization of RWMs arises from unusually thin inertial layers. The investigation employs the newly developed adaptive eigenvalue code (AEGIS: Adaptive EiGenfunction Independent Solution), which describes both low and high n modes and is in good agreement with GATO in the benchmark studies. AEGIS is unique in using adaptive methods to resolve such inertial layers with low Mach number rotation. This feature is even more desirable for transport barrier cases. Additionally, ITER and reactors have thick conducting walls (~0.5-1 m) which are not well modeled as a thin shell. Such thick walls are considered here, including semi-analytical approximations to account for the toroidally segmented nature of real walls.

  8. Iterative image reconstruction in spectral CT

    NASA Astrophysics Data System (ADS)

    Hernandez, Daniel; Michel, Eric; Kim, Hye S.; Kim, Jae G.; Han, Byung H.; Cho, Min H.; Lee, Soo Y.

    2012-03-01

    Scan time of spectral-CTs is much longer than conventional CTs due to the limited number of x-ray photons detectable by photon-counting detectors. However, the spectral pixel information in spectral-CT has much richer information on the physiological and pathological status of the tissues than the CT-number in conventional CT, which makes the spectral-CT one of the promising future imaging modalities. One simple way to reduce the scan time in spectral-CT imaging is to reduce the number of views in the acquisition of projection data. But this may result in poorer SNR and strong streak artifacts which can severely compromise the image quality. In this work, spectral-CT projection data were obtained from a lab-built spectral-CT consisting of a single CdTe photon counting detector, a micro-focus x-ray tube and scan mechanics. For the image reconstruction, we used two iterative image reconstruction methods, the simultaneous iterative reconstruction technique (SIRT) and the total variation minimization based on the conjugate gradient method (CG-TV), along with the filtered back-projection (FBP), to compare the image quality. From the imaging of the iodine containing phantoms, we have observed that SIRT and CG-TV are superior to the FBP method in terms of SNR and streak artifacts.

  9. ITER Central Solenoid support structure analysis

    SciTech Connect

    Freudenberg, Kevin D; Myatt, R.

    2011-01-01

    The ITER Central Solenoid (CS) is comprised of six independent coils held together by a pre-compression support structure. This structure must provide enough preload to maintain sufficient coil-to-coil contact and interface load throughout the current pulse. End of burn (EOB) represents one of the most extreme time-points during the reference scenario, when the currents in the CS3 coils oppose those of CS1 & CS2. The CS structure is performance limited by the room temperature static yield requirements needed to support the roughly 180 MN preload to resist coil separation during operation. This preload is applied by inner and external tie plates along the length of the coil stack by mechanical fastening methods utilizing Superbolt technology. The preloading structure satisfies the magnet structural design criteria of ITER and will be verified during mockup studies. The solenoid is supported from the bottom of the toroidal field (TF) coil casing in both the vertical and radial directions. The upper support of the CS coil structure maintains radial registration with the TF coil in the event of vertical displacement event (VDE) loads and earthquakes. All of these structure systems are analyzed via a global finite element analysis (FEA). The model includes a complete sector of the TF coil and the CS coil/structure in one self-consistent analysis. The corresponding results and design descriptions are described in this report.

  10. Iterative deconvolution methods for ghost imaging

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Situ, Guohai

    2016-10-01

    Ghost imaging (GI) is an important technique in single-pixel imaging. It has been demonstrated that GI has applications in various areas such as imaging through harsh environments and optical encryption. Correlation is widely used to reconstruct the object image in GI. But it only offers a signal-to-noise ratio (SNR) of the reconstructed image that grows linearly with the number of measurements. Here, we develop iterative deconvolution methods for GI. With the known image transmission matrix in GI, the first one uses an iterative algorithm to decrease the error between the reconstructed image and the ground-truth image. Ideally, the error converges to a minimum for speckle patterns when the number of measurements is larger than the number of resolution cells. The second technique, Gerchberg-Saxton (GS) like GI, takes advantage of the integral property of the Fourier transform, and treats the captured data as constraints for image reconstruction. According to this property, we can regard the data recorded by the bucket detector as the Fourier transform of the object image evaluated at the origin. Each of the speckle patterns randomly selects certain spectral components of the object and shifts them to the origin in the Fourier space. One can use these constraints to reconstruct the image with the GS algorithm. This deconvolution method is suitable for any single-pixel imaging model. Compared to conventional GI, both techniques offer a nonlinear growth of the SNR value with respect to the number of measurements.
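
    The correlation baseline that both iterative techniques improve on is a one-line estimate: the covariance between the bucket values and the speckle patterns. The sketch below (Python) uses random patterns and a 1D object as illustrative stand-ins.

      import numpy as np

      rng = np.random.default_rng(3)
      n_px, n_meas = 1024, 4000
      obj = np.zeros(n_px)
      obj[200:260] = 1.0                        # simple ground-truth object
      patterns = rng.random((n_meas, n_px))     # speckle patterns I_m
      bucket = patterns @ obj                   # single-pixel (bucket) values

      # Correlation reconstruction: <I b> - <I><b>, pixel by pixel.
      g = (bucket[:, None] * patterns).mean(0) - bucket.mean() * patterns.mean(0)
      print(np.corrcoef(g, obj)[0, 1])          # improves only slowly with n_meas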

  11. Iterative Mechanism Solutions with Scenario and ADAMS

    NASA Technical Reports Server (NTRS)

    Rhoades, Daren

    2006-01-01

    This slide presentation reviews the use of iterative solutions using Scenario for Motion (UG NX 2 Motion) to assist in designing the Mars Science Laboratory (MSL). The MSL has unique design requirements, and in order to meet them the system must support design for static stability, simulation of mechanism kinematics, simulation of dynamic behaviour, and reconfiguration and iteration as designs evolve. The legacy process used on the Mars Exploration Rovers worked, but it was cumbersome: it relied on multiple tools, offered limited configuration control, and involved manual processes, manual communication, and multiple steps. The aim is to develop a process that reduces turnaround time, makes more design iterations possible, improves the quality and quantity of data, and enhances configuration control. Current uses of NX Scenario for Motion include articulation studies, simulations of traverse motions, and subsystem simulations. The Rover landing model requires accurate results and flexible elements, such as beams, so the full ADAMS solver has been used. To achieve this, a direct translation from Scenario to ADAMS, with additional data in ASCII format, has been employed when required. The process designed to move from Scenario to ADAMS is reviewed.

  12. Transport analysis of tungsten impurity in ITER

    NASA Astrophysics Data System (ADS)

    Murakami, Y.; Amano, T.; Shimizu, K.; Shimada, M.

    2003-03-01

    The radial distribution of tungsten impurity in ITER is calculated by using the 1.5D transport code TOTAL coupled with NCLASS, which can solve the neo-classical impurity flux considering arbitrary aspect ratio and collisionality. An impurity screening effect is observed when the density profile is flat, and the line radiation power is smaller than in the case without impurity transport by a factor of 2. It is shown that 90 MW of line radiation power is possible without significant degradation of plasma performance (HH98(y,2) ≈ 1) when the fusion power is 700 MW (fusion gain Q=10). The allowable tungsten density is about 7×10^15 m^-3, which is 0.01% of the electron density, and the increase of the effective ionic charge Zeff is about 0.39. In this case, the total radiation power is more than half of the total heating power of 210 MW, and the power to the divertor region is less than 100 MW. This operation regime gives an opportunity for high fusion power operation in ITER with acceptable divertor conditions. Simulations for the case with an internal transport barrier (ITB) are also performed and it is found that impurity shielding by an ITB is possible with density profile control.

  13. Intense diagnostic neutral beam development for ITER

    SciTech Connect

    Rej, D.J.; Henins, I.; Fonck, R.J.; Kim, Y.J.

    1992-05-01

    For the next-generation, burning tokamak plasmas such as ITER, diagnostic neutral beams and beam spectroscopy will continue to be used to determine a variety of plasma parameters such as ion temperature, rotation, fluctuations, impurity content, current density profile, and confined alpha particle density and energy distribution. Present-day low-current, long-pulse beam technology will be unable to provide the required signal intensities because of higher beam attenuation and background bremsstrahlung radiation in these larger, higher-density plasmas. To address this problem, we are developing a short-pulse, intense diagnostic neutral beam. Protons or deuterons are accelerated using magnetic-insulated ion-diode technology, and neutralized in a transient gas cell. A prototype 25-kA, 100-kV, 1-μs accelerator is under construction at Los Alamos. Initial experiments will focus on ITER-related issues of beam energy distribution, current density, pulse length, divergence, propagation, impurity content, reproducibility, and maintenance.

  15. Diverse Power Iteration Embeddings and Its Applications

    SciTech Connect

    Huang H.; Yoo S.; Yu, D.; Qin, H.

    2014-12-14

    Spectral embedding is one of the most effective dimension reduction algorithms in data mining. However, its computation complexity has to be mitigated in order to apply it to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings which are more efficient, but meanwhile far less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retains the efficiency of power iteration methods but also produces a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g. clustering, anomaly detection and feature selection) and evaluating their performance improvements. The experimental results show our proposed DPIE is more effective than popular spectral approximation methods, and obtains similar quality to classic spectral embedding derived from eigen-decompositions. Moreover, it is extremely fast on big data applications. For example, in terms of clustering results, DPIE achieves as much as 95% of the quality of classic spectral clustering on complex datasets, but is 4000+ times faster in a limited-memory environment.
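
    A one-vector caricature of the underlying power iteration is shown below (Python). On a row-normalized affinity matrix the iterates converge to a constant vector, so it is the intermediate, not-yet-converged vectors that carry cluster structure, and DPIE's contribution is extracting several diverse such vectors; this simplified single-vector view is an assumption.

      import numpy as np

      def power_iteration_embedding(W, n_iter=15, seed=4):
          P = W / W.sum(axis=1, keepdims=True)   # row-stochastic affinity matrix
          v = np.random.default_rng(seed).random(W.shape[0])
          for _ in range(n_iter):                # deliberately stopped early
              v = P @ v
              v /= np.abs(v).max()               # rescale to avoid underflow
          return v                               # 1D embedding of the data points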

  16. Pedestal stability comparison and ITER pedestal prediction

    NASA Astrophysics Data System (ADS)

    Snyder, P. B.; Aiba, N.; Beurskens, M.; Groebner, R. J.; Horton, L. D.; Hubbard, A. E.; Hughes, J. W.; Huysmans, G. T. A.; Kamada, Y.; Kirk, A.; Konz, C.; Leonard, A. W.; Lönnroth, J.; Maggi, C. F.; Maingi, R.; Osborne, T. H.; Oyama, N.; Pankin, A.; Saarelma, S.; Saibene, G.; Terry, J. L.; Urano, H.; Wilson, H. R.

    2009-08-01

    The pressure at the top of the edge transport barrier (or 'pedestal height') strongly impacts fusion performance, while large edge localized modes (ELMs), driven by the free energy in the pedestal region, can constrain material lifetimes. Accurately predicting the pedestal height and ELM behavior in ITER is an essential element of prediction and optimization of fusion performance. Investigation of intermediate wavelength MHD modes (or 'peeling-ballooning' modes) has led to an improved understanding of important constraints on the pedestal height and the mechanism for ELMs. The combination of high-resolution pedestal diagnostics, including substantial recent improvements, and a suite of highly efficient stability codes, has made edge stability analysis routine on several major tokamaks, contributing both to understanding, and to experimental planning and performance optimization. Here we present extensive comparisons of observations to predicted edge stability boundaries on several tokamaks, both for the standard (Type I) ELM regime, and for small ELM and ELM-free regimes. We further discuss a new predictive model for the pedestal height and width (EPED1), developed by self-consistently combining a simple width model with peeling-ballooning stability calculations. This model is tested against experimental measurements, and used in initial predictions of the pedestal height for ITER.

  17. Suboptimal fractal coding scheme using iterative transformation

    NASA Astrophysics Data System (ADS)

    Kang, Hyun-Soo; Chung, Jae-won

    2001-05-01

    This paper presents a new fractal coding scheme that finds a suboptimal transformation by performing an iterative encoding process. The optimal transformation can be defined as the transformation generating the attractor closest to an original image. Unfortunately, it is impossible in practice to find the optimal transformation, due to the heavy computational burden. In this paper, however, by means of some new theorems related to contractive transformations and attractors, it is shown that for some specific cases the optimal or suboptimal transformations can be obtained. The proposed method obtains a suboptimal transformation by performing iterative processes as is done in decoding. Thus, it requires more computation than the conventional method, but it improves the image quality. For a simple case where the optimal transformation can actually be found, the proposed method is experimentally evaluated against both the optimal method and the conventional method. For the general case where the optimal transformation is unavailable due to heavy computational complexity, the proposed method is evaluated in comparison with the conventional method.

  18. The ITER Radial Neutron Camera Detection System

    SciTech Connect

    Marocco, D.; Belli, F.; Esposito, B.; Petrizzi, L.; Riva, M.; Bonheure, G.; Kaschuck, Y.

    2008-03-12

    A multichannel neutron detection system (Radial Neutron Camera, RNC) will be installed on the ITER equatorial port plug 1 for total neutron source strength, neutron emissivity/ion temperature profiles and n_t/n_d ratio measurements [1]. The system is composed of two fan-shaped collimating structures: an ex-vessel structure, looking at the plasma core, containing three sets of 12 collimators (each set lying on a different toroidal plane), and an in-vessel structure, containing 9 collimators, for plasma edge coverage. The RNC detection system will work in a harsh environment (neutron flux up to 10^8-10^9 n/cm^2 s, magnetic field >0.5 T for in-vessel detectors), should provide both counting and spectrometric information, and should be flexible enough to cover the high neutron flux dynamic range expected during the different ITER operation phases. ENEA has been involved in several activities related to RNC design and optimization [2,3]. In the present paper the up-to-date design and the neutron emissivity reconstruction capabilities of the RNC are described. Different options for detectors suitable for spectrometry and counting (e.g. scintillators and diamonds) are discussed, focusing on the implications for overall RNC performance. The increase in RNC capabilities offered by the use of new digital data acquisition systems is also addressed.

  19. Laser cleaning of ITER's diagnostic mirrors

    NASA Astrophysics Data System (ADS)

    Skinner, C. H.; Gentile, C. A.; Doerner, R.

    2012-10-01

    Practical methods to clean ITER's diagnostic mirrors and restore reflectivity will be critical to ITER's plasma operations. We report on laser cleaning of single crystal molybdenum mirrors coated with either carbon or beryllium films 150-420 nm thick. A 1.06 μm Nd laser system provided 220 ns pulses at 8 kHz with typical power densities of 1-2 J/cm^2. The laser beam was fiber optically coupled to a scanner suitable for tokamak applications. The efficacy of mirror cleaning was assessed with a new technique that combines microscopic imaging and reflectivity measurements [1]. The method is suitable for hazardous materials such as beryllium as the mirrors remain sealed in a vacuum chamber. Excellent restoration of reflectivity for the carbon-coated Mo mirrors was observed after laser scanning under vacuum conditions. For the beryllium-coated mirrors, restoration of reflectivity has so far been incomplete, and modeling indicates that a shorter duration laser pulse is needed. No damage to the molybdenum mirror substrates was observed. [1] C.H. Skinner et al., Rev. Sci. Instrum., in press.

  20. A holistic strategy for adaptive land management

    USGS Publications Warehouse

    Herrick, Jeffrey E.; Duniway, Michael C.; Pyke, David A.; Bestelmeyer, Brandon T.; Wills, Skye A.; Brown, Joel R.; Karl, Jason W.; Havstad, Kris M.

    2012-01-01

    Adaptive management is widely applied to natural resources management (Holling 1973; Walters and Holling 1990). Adaptive management can be generally defined as an iterative decision-making process that incorporates formulation of management objectives, actions designed to address these objectives, monitoring of results, and repeated adaptation of management until desired results are achieved (Brown and MacLeod 1996; Savory and Butterfield 1999). However, adaptive management is often criticized because very few projects ever complete more than one cycle, resulting in little adaptation and little knowledge gain (Lee 1999; Walters 2007). One significant criticism is that adaptive management is often used as a justification for undertaking actions with uncertain outcomes or as a surrogate for the development of specific, measurable indicators and monitoring programs (Lee 1999; Ruhl 2007).

  1. Mad Libs Statistics: A "Happy" Activity

    ERIC Educational Resources Information Center

    Trumpower, David

    2010-01-01

    This article describes a fun activity that can be used to help students make links between statistical analyses and their real-world implications. Although an illustrative example is provided using analysis of variance, the activity may be adapted for use with other statistical techniques.

  2. Nonlinear Burn Control and Operating Point Optimization in ITER

    NASA Astrophysics Data System (ADS)

    Boyer, Mark; Schuster, Eugenio

    2013-10-01

    Control of the fusion power through regulation of the plasma density and temperature will be essential for achieving and maintaining desired operating points in fusion reactors and burning plasma experiments like ITER. In this work, a volume averaged model for the evolution of the density of energy, deuterium and tritium fuel ions, alpha-particles, and impurity ions is used to synthesize a multi-input multi-output nonlinear feedback controller for stabilizing and modulating the burn condition. Adaptive control techniques are used to account for uncertainty in model parameters, including particle confinement times and recycling rates. The control approach makes use of the different possible methods for altering the fusion power, including adjusting the temperature through auxiliary heating, modulating the density and isotopic mix through fueling, and altering the impurity density through impurity injection. Furthermore, a model-based optimization scheme is proposed to drive the system as close as possible to desired fusion power and temperature references. Constraints are considered in the optimization scheme to ensure that, for example, density and beta limits are avoided, and that optimal operation is achieved even when actuators reach saturation. Supported by the NSF CAREER award program (ECCS-0645086).

  3. A unified noise analysis for iterative image estimation

    SciTech Connect

    Qi, Jinyi

    2003-07-03

    Iterative image estimation methods have been widely used in emission tomography. An accurate estimate of the uncertainty of the reconstructed images is essential for quantitative applications. While a theoretical approach has been developed to analyze the noise propagation from iteration to iteration, the current results are limited to only a few iterative algorithms that have an explicit multiplicative update equation. This paper presents a theoretical noise analysis that is applicable to a wide range of preconditioned gradient-type algorithms. One advantage is that the proposed method does not require an explicit expression of the preconditioner, and hence it is applicable to some algorithms that involve line searches. By deriving a fixed-point expression from the iteration-based results, we show that the iteration-based noise analysis is consistent with the fixed-point-based analysis. Examples in emission tomography and transmission tomography are shown.

  4. Convergence Results on Iteration Algorithms to Linear Systems

    PubMed Central

    Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

    2014-01-01

    In order to solve large-scale linear systems, backward and Jacobi iteration algorithms are employed. Convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that some well-known iterative algorithms can be deduced from it. The most important result is that the convergence results have been proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations have the same convergence behavior (convergence or divergence simultaneously). Finally, some numerical experiments show that the proposed algorithms are correct and have the merit of backward methods. PMID:24991640
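
    The Jacobi scheme at the heart of such convergence results is short enough to state in code. The sketch below is a generic textbook implementation, not the paper's unified framework; the iteration matrix M = -D^{-1}R is the object whose spectral radius decides convergence.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=1000):
    """Jacobi iteration x_{k+1} = D^{-1} (b - R x_k), where A = D + R.
    It converges iff the spectral radius of M = -D^{-1} R is below 1,
    the quantity the convergence results above are concerned with."""
    D = np.diag(A)                       # diagonal part as a vector
    R = A - np.diagflat(D)               # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# diagonally dominant system, so convergence is guaranteed
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi(A, b)
print(iters, np.allclose(A @ x, b, atol=1e-8))
```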

  5. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  6. Simulation and Analysis of the Hybrid Operating Mode in ITER

    SciTech Connect

    Kessel, C.E.; Budny, R.V.; Indireshkumar, K.

    2005-09-22

    The hybrid operating mode in ITER is examined with 0D systems analysis and 1.5D discharge scenario simulations using TSC and TRANSP, and its ideal MHD stability is discussed. The hybrid mode has the potential to provide very long pulses and significant neutron fluence if the physics regime can be produced in ITER. This paper reports progress in establishing the physics basis and engineering limitations for the hybrid mode in ITER.

  7. The explosive divergence in iterative maps of matrices

    NASA Astrophysics Data System (ADS)

    Navickas, Zenonas; Ragulskis, Minvydas; Vainoras, Alfonsas; Smidtaite, Rasa

    2012-11-01

    The effect of explosive divergence in generalized iterative maps of matrices is defined and described using formal algebraic techniques. It is shown that the effect of explosive divergence can be observed in an iterative map of square matrices of order 2 if and only if the matrix of initial conditions is a nilpotent matrix and the Lyapunov exponent of the corresponding scalar iterative map is greater than zero. Computational experiments with the logistic map and the circle map are used to illustrate the effect of explosive divergence occurring in iterative maps of matrices.
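
    The effect is easy to reproduce numerically. The sketch below is an illustrative setup, with parameters chosen here rather than taken from the paper: it iterates the logistic map over 2x2 matrices, where the diagonal follows the bounded scalar orbit while the nilpotent off-diagonal component is multiplied by the local derivative at each step and therefore blows up exactly when the scalar Lyapunov exponent is positive.

```python
import numpy as np

# Illustrative setup (parameters chosen here, not taken from the paper):
# iterate the logistic map X_{n+1} = a * X_n @ (I - X_n) over 2x2 matrices
# with initial condition x0*I + c*N, where N is nilpotent (N @ N == 0).
# Writing X_n = x_n*I + c_n*N gives x_{n+1} = a x_n (1 - x_n) for the
# bounded scalar orbit and c_{n+1} = a (1 - 2 x_n) c_n, so |c_n| grows
# like exp(n * lambda), with lambda the scalar Lyapunov exponent (ln 2 at a = 4).
a = 4.0
I = np.eye(2)
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])             # nilpotent: N @ N == 0
X = 0.3 * I + 1e-12 * N

for n in range(60):
    X = a * X @ (I - X)
    if n % 10 == 9:
        # diagonal stays in [0, 1]; the off-diagonal component explodes
        print(n + 1, X[0, 0], X[0, 1])
```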

  8. Bounded-Angle Iterative Decoding of LDPC Codes

    NASA Technical Reports Server (NTRS)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).

  9. ITER Cryoplant Status and Economics of the LHe plants

    NASA Astrophysics Data System (ADS)

    Monneret, E.; Chalifour, M.; Bonneton, M.; Fauve, E.; Voigt, T.; Badgujar, S.; Chang, H.-S.; Vincent, G.

    The ITER cryoplant is composed of helium and nitrogen refrigerators and generators combined with 80 K helium loop plants and external purification systems. Storage and recovery of the helium inventory is provided by warm and cold (80 K and 4.5 K) helium tanks. The conceptual design of the ITER cryoplant has been completed, the technical requirements defined for industrial procurement, and contracts signed with industry. Each contract covers the design, manufacturing, installation and commissioning. The design is being finalized and manufacturing has started. First deliveries are scheduled by the end of 2015. The various cryoplant systems are designed based on recognized codes and international standards to meet the availability, the reliability and the time between maintenance imposed by the long-term uninterrupted operation of the ITER Tokamak. In addition, ITER has to consider the constraints of a nuclear installation. The ITER Organization (IO) is responsible for the liquid helium (LHe) Plants contract signed at the end of 2012 with industry. It is composed of three LHe Plants, working in parallel and able to provide a total average cooling capacity of 75 kW at 4.5 K. Based on concept designs developed with industry and on the procurement phase, ITER has accumulated data to broaden the scaling laws for costing such systems. After describing the status of the cryoplant part of the ITER cryogenic system, we present the economics of the ITER LHe Plants based on key design requirements, choices and challenges of this ITER Organization procurement.

  10. A synopsis of collective alpha effects and implications for ITER

    SciTech Connect

    Sigmar, D.J.

    1990-10-01

    This paper discusses the following: Alpha Interaction with Toroidal Alfven Eigenmodes; Alpha Interaction with Ballooning Modes; Alpha Interaction with Fishbone Oscillations; and Implications for ITER.

  11. For the Love of Statistics: Appreciating and Learning to Apply Experimental Analysis and Statistics through Computer Programming Activities

    ERIC Educational Resources Information Center

    Mascaró, Maite; Sacristán, Ana Isabel; Rufino, Marta M.

    2016-01-01

    For the past 4 years, we have been involved in a project that aims to enhance the teaching and learning of experimental analysis and statistics, of environmental and biological sciences students, through computational programming activities (using R code). In this project, through an iterative design, we have developed sequences of R-code-based…

  12. Networked iterative learning control approach for nonlinear systems with random communication delay

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Ruan, Xiaoe

    2016-12-01

    This paper constructs a proportional-type networked iterative learning control (NILC) scheme for a class of discrete-time nonlinear systems with stochastic data communication delays that occur within one operation duration and follow a Bernoulli-type distribution. In the scheme, delayed communication data are replaced by the data successfully captured at the same sampling moment of the latest iteration. The tracking performance of the addressed NILC algorithm is analysed by statistical techniques in terms of mathematical expectation. The analysis shows that, under certain conditions, the expectation of the tracking error measured in the 1-norm converges asymptotically to zero. Numerical experiments are carried out to illustrate the validity and effectiveness of the scheme.
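
    A proportional-type ILC update with Bernoulli packet drop-outs can be sketched in a few lines. The toy plant, gain, and drop-out probability below are illustrative assumptions, not the paper's system; the point is the replacement rule, where a delayed measurement is substituted by the last successfully captured one at the same sampling instant.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_trials, gamma, p_delay = 50, 40, 0.5, 0.3    # illustrative constants

def plant(u):
    """Toy discrete-time nonlinear plant run over one trial of length T."""
    x, y = 0.0, np.zeros(T)
    for t in range(T):
        x = 0.8 * np.sin(x) + u[t]
        y[t] = x
    return y

y_ref = np.sin(np.linspace(0.0, 2.0 * np.pi, T))  # desired trajectory
u = np.zeros(T)
e_stored = np.zeros(T)     # last successfully received error signal
for k in range(n_trials):
    e = y_ref - plant(u)
    received = rng.random(T) > p_delay            # Bernoulli packet arrivals
    e_stored = np.where(received, e, e_stored)    # drop-outs reuse stored data
    u = u + gamma * e_stored                      # proportional ILC update
    if k % 10 == 9:
        print(k + 1, np.abs(e).sum())             # 1-norm tracking error shrinks
```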

  13. Climate Change Assessment and Adaptation Planning for the Southeast US

    NASA Astrophysics Data System (ADS)

    Georgakakos, A. P.; Yao, H.; Zhang, F.

    2012-12-01

    A climate change assessment is carried out for the Apalachicola-Chattahoochee-Flint River Basin in the southeast US following an integrated water resources assessment and planning framework. The assessment process begins with the development/selection of consistent climate, demographic, socio-economic, and land use/cover scenarios. Historical scenarios and responses are analyzed first to establish baseline conditions. Future climate scenarios are based on GCMs available through the IPCC. Statistical and/or dynamic downscaling of GCM outputs is applied to generate high resolution (12x12 km) atmospheric forcing, such as rainfall, temperature, and ET demand, over the ACF River Basin watersheds. Physically based watershed, aquifer, and estuary models (lumped and distributed) are used to quantify the hydrologic and water quality river basin response to alternative climate and land use/cover scenarios. Demand assessments are carried out for each water sector, for example, water supply for urban, agricultural, and industrial users; hydro-thermal facilities; navigation reaches; and environmental/ecological flow and lake level requirements, aiming to establish aspirational water use targets, performance metrics, and management/adaptation options. Response models for the interconnected river-reservoir-aquifer-estuary system are employed next to assess actual water use levels and other sector outputs under a specific set of hydrologic inputs, demand targets, and management/adaptation options. Adaptive optimization methods are used to generate system-wide management policies conditional on inflow forecasts. The generated information is used to inform stakeholder planning and decision processes aiming to develop consensus on adaptation measures, management strategies, and performance monitoring indicators. The assessment and planning process is driven by stakeholder input and is inherently iterative and sequential.

  14. The PDZ domain as a complex adaptive system.

    PubMed

    Kurakin, Alexei; Swistowski, Andrzej; Wu, Susan C; Bredesen, Dale E

    2007-09-26

    Specific protein associations define the wiring of protein interaction networks and thus control the organization and functioning of the cell as a whole. Peptide recognition by PDZ and other protein interaction domains represents one of the best-studied classes of specific protein associations. However, a mechanistic understanding of the relationship between selectivity and promiscuity commonly observed in the interactions mediated by peptide recognition modules, as well as its functional meaning, remains elusive. To address these questions in a comprehensive manner, two large populations of artificial and natural peptide ligands of six archetypal PDZ domains from the synaptic proteins PSD95 and SAP97 were generated by target-assisted iterative screening (TAIS) of combinatorial peptide libraries and by synthesis of proteomic fragments, respectively. A comparative statistical analysis of affinity-ranked artificial and natural ligands yielded a comprehensive picture of known and novel PDZ ligand specificity determinants, revealing a hitherto unappreciated combination of specificity and adaptive plasticity inherent to PDZ domain recognition. We propose a reconceptualization of the PDZ domain in terms of a complex adaptive system representing a flexible compromise between the rigid order of exquisite specificity and the chaos of unselective promiscuity, which has evolved to mediate two mutually contradictory properties required of such higher order sub-cellular organizations as synapses, cell junctions, and others: organizational structure and organizational plasticity/adaptability. The generalization of this reconceptualization in regard to other protein interaction modules and specific protein associations is consistent with the image of the cell as a complex adaptive macromolecular system as opposed to clockwork.

  15. On an iterative ensemble smoother and its application to a reservoir facies estimation problem

    NASA Astrophysics Data System (ADS)

    Luo, Xiaodong; Chen, Yan; Valestrand, Randi; Stordal, Andreas; Lorentzen, Rolf; Nævdal, Geir

    2014-05-01

    For data assimilation problems there are different ways of utilizing the available observations. While certain data assimilation algorithms, for instance the ensemble Kalman filter (EnKF; see, for example, Aanonsen et al., 2009; Evensen, 2006), assimilate the observations sequentially in time, other data assimilation algorithms may instead collect the observations at different time instants and assimilate them simultaneously. In general such algorithms can be classified as smoothers. In this respect, the ensemble smoother (ES; see, for example, Evensen and van Leeuwen, 2000) can be considered a smoother counterpart of the EnKF. The EnKF has been widely used for reservoir data assimilation (history matching) problems since its introduction to the community of petroleum engineering (Nævdal et al., 2002). The application of the ES to reservoir data assimilation problems has also been investigated recently (see, for example, Skjervheim and Evensen, 2011). Compared to the EnKF, the ES has certain technical advantages, including avoiding the restarts associated with each update step in the EnKF and having fewer variables to update, which may result in a significant reduction in simulation time while providing assimilation results similar to those obtained by the EnKF (Skjervheim and Evensen, 2011). To further improve the performance of the ES, some iterative ensemble smoothers have been suggested in the literature, in which the iterations are carried out in the form of certain iterative optimization algorithms, e.g., the Gauss-Newton (Chen and Oliver, 2012) or the Levenberg-Marquardt method (Chen and Oliver, 2013; Emerick and Reynolds, 2012), or in the context of adaptive Gaussian mixtures (AGM; see Stordal and Lorentzen, 2013). In Emerick and Reynolds (2012) the iteration formula is derived based on the idea that, for linear observations, the final results of the iterative ES should be equal to the estimate of the EnKF. In Chen and Oliver (2013), the
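
    For reference, a single (non-iterative) ES analysis step on a generic forward model looks as follows; the iterative variants cited above reapply a damped version of this update. The linear toy model is an illustrative assumption, not a reservoir case.

```python
import numpy as np

def es_update(M, g, d_obs, R, rng):
    """One ensemble smoother analysis step. M is the (n_param, n_ens)
    prior ensemble, g maps a parameter vector to predicted data, and R
    is the observation-error covariance."""
    D = np.column_stack([g(m) for m in M.T])       # predicted data ensemble
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    n_ens = M.shape[1]
    C_md = dM @ dD.T / (n_ens - 1)                 # parameter-data covariance
    C_dd = dD @ dD.T / (n_ens - 1)                 # data-data covariance
    K = C_md @ np.linalg.inv(C_dd + R)             # Kalman-type gain
    D_obs = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), R, size=n_ens).T     # perturbed observations
    return M + K @ (D_obs - D)

# toy linear forward model standing in for a reservoir simulator
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 3))
m_true = np.array([1.0, -2.0, 0.5])
d_obs = G @ m_true + 0.01 * rng.standard_normal(5)
R = 0.01 ** 2 * np.eye(5)
M = rng.standard_normal((3, 100))                  # prior ensemble
M_post = es_update(M, lambda m: G @ m, d_obs, R, rng)
print(M_post.mean(axis=1))                         # close to m_true
```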

  16. Adaptive snakes - Control of damping and material parameters

    NASA Technical Reports Server (NTRS)

    Samadani, Ramin

    1991-01-01

    The stability of active contour models, or 'snakes', is studied. It is shown that the modification of snake parameters using adaptive systems improves both the stability of the snakes and the boundaries obtained. The adaptive snakes perform better with images of varying contrast, noisy images and images with different curvatures along the boundaries. The computational cost at each iteration for the adaptive snakes is still of order N, where N is the number of points on the snakes. Comparisons of the results for non-adaptive and adaptive snakes are shown using both computer simulations and satellite images.

  17. Iterative methods for Toeplitz-like matrices

    SciTech Connect

    Huckle, T.

    1994-12-31

    In this paper the author gives a survey of iterative methods for solving linear equations with Toeplitz matrices, block Toeplitz matrices, Toeplitz-plus-Hankel matrices, and matrices with low displacement rank. He treats the following subjects: (1) optimal (w)-circulant preconditioners as a generalization of circulant preconditioners; (2) optimal implementation of circulant-like preconditioners in the complex and real case; (3) preconditioning of near-singular matrices, and what kind of preconditioners can be used in this case; (4) circulant preconditioning for more general classes of Toeplitz matrices, and what can be said about matrices with coefficients that are not l_1-sequences; (5) preconditioners for Toeplitz least squares problems, for block Toeplitz matrices, and for Toeplitz-plus-Hankel matrices.
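
    As a concrete instance of the circulant preconditioning surveyed here, the sketch below builds Strang's circulant approximation to a symmetric Toeplitz matrix, applies its inverse with FFTs, and hands it to conjugate gradients. The particular Toeplitz symbol is an illustrative choice, not one from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, cg

n = 256
t = 1.0 / (1.0 + np.arange(n)) ** 2          # first column (an l_1 sequence)
A = toeplitz(t)                               # symmetric positive definite here

# Strang preconditioner: copy the central diagonals of A into a circulant
c = t.copy()
c[n // 2 + 1:] = t[1:n // 2][::-1]
lam = np.fft.fft(c).real                      # eigenvalues of the circulant

M = LinearOperator((n, n),
                   matvec=lambda v: np.fft.ifft(np.fft.fft(v) / lam).real)
b = np.ones(n)
x, info = cg(A, b, M=M)                       # preconditioned conjugate gradients
print(info, np.linalg.norm(A @ x - b))        # info == 0 means converged
```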

  18. Status of ITER Cryodistribution and Cryoline project

    NASA Astrophysics Data System (ADS)

    Sarkar, B.; Vaghela, H.; Shah, N.; Bhattacharya, R.; Choukekar, K.; Patel, P.; Kapoor, H.; Srinivasa, M.; Chang, H. S.; Badgujar, S.; Monneret, E.

    2017-02-01

    The system of ITER Cryodistribution (CD) and Cryolines (CLs) is an integral interface between the Cryoplant systems and the superconducting (SC) magnets as well as the Cryopumps (CPs). The project has progressed from the conceptual stage to the industrial stage. The subsystems are at various stages of design as defined by the project, namely preliminary design, final design and formal reviews. Significant progress has been made in prototype studies and design validation, such as for the CL and the cold circulators. While one of the prototype CLs has already been tested, the other is in the manufacturing phase. Performance tests of two cold circulators have been completed. The design requirements are unique due to the complexity arising from load specifications, layout constraints, regulatory compliance and operating conditions, as well as several hundred interfaces. The present status of the project in terms of technical achievements, implications of the changes and technical management, as well as the risk assessment and its mitigation, including the path forward towards realization, is described.

  19. Iterated Gate Teleportation and Blind Quantum Computation.

    PubMed

    Pérez-Delgado, Carlos A; Fitzsimons, Joseph F

    2015-06-05

    Blind quantum computation allows a user to delegate a computation to an untrusted server while keeping the computation hidden. A number of recent works have sought to establish bounds on the communication requirements necessary to implement blind computation, and a bound based on the no-programming theorem of Nielsen and Chuang has emerged as a natural limiting factor. Here we show that this constraint only holds in limited scenarios, and show how to overcome it using a novel method of iterated gate teleportations. This technique enables drastic reductions in the communication required for distributed quantum protocols, extending beyond the blind computation setting. Applied to blind quantum computation, this technique offers significant efficiency improvements, and in some scenarios offers an exponential reduction in communication requirements.

  20. Orbit of an image under iterated system

    NASA Astrophysics Data System (ADS)

    Singh, S. L.; Mishra, S. N.; Jain, Sarika

    2011-03-01

    An orbital picture depicts the path of an object under a semi-group of transformations. The concept, initially given by Barnsley [3], is of utmost importance in image compression, biological modeling and other areas of fractal geometry. In this paper, we introduce superior iterations to study the role of linear and nonlinear transformations on the orbit of an object. Various characteristics of the computed figures are discussed to indicate the usefulness of the study in mathematical analysis. Modified algorithms are given to compute the orbital picture and the V-variable orbital picture. An algorithm to calculate the distance between images adds further interest to the study. A brief discussion of the proof that the sequence of images is a Cauchy sequence is also given.

  1. Iterated Gate Teleportation and Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Pérez-Delgado, Carlos A.; Fitzsimons, Joseph F.

    2015-06-01

    Blind quantum computation allows a user to delegate a computation to an untrusted server while keeping the computation hidden. A number of recent works have sought to establish bounds on the communication requirements necessary to implement blind computation, and a bound based on the no-programming theorem of Nielsen and Chuang has emerged as a natural limiting factor. Here we show that this constraint only holds in limited scenarios, and show how to overcome it using a novel method of iterated gate teleportations. This technique enables drastic reductions in the communication required for distributed quantum protocols, extending beyond the blind computation setting. Applied to blind quantum computation, this technique offers significant efficiency improvements, and in some scenarios offers an exponential reduction in communication requirements.

  2. Robot Calibration Using Iteration and Differential Kinematics

    NASA Astrophysics Data System (ADS)

    Ye, S. H.; Wang, Y.; Ren, Y. J.; Li, D. K.

    2006-10-01

    In applications such as seam laser tracking welding robots and general measuring robot stations based on stereo vision, robot calibration is the most difficult step of the whole system calibration process. Many calibration methods have been put forward, but the exact location of the base frame has to be known no matter which method is employed, and the accurate base frame location is hard to determine. In order to obtain the position of the base coordinate frame, this paper presents a novel iterative algorithm that also yields the parameter deviations at the same time. The method employs differential kinematics to solve for the link parameter deviations and approaches the real values step by step. Finally, experimental validation is provided.

  3. Robust tooth surface reconstruction by iterative deformation.

    PubMed

    Jiang, Xiaotong; Dai, Ning; Cheng, Xiaosheng; Wang, Jun; Peng, Qingjin; Liu, Hao; Cheng, Cheng

    2016-01-01

    Digital design technologies have been applied extensively in dental medicine, especially in the field of dental restoration. The all-ceramic crown is an important restoration type in dental CAD systems. This paper presents a robust tooth surface reconstruction algorithm for all-ceramic crown design. The algorithm involves three necessary steps: standard tooth initial positioning and division; salient feature point extraction using Morse theory; and standard tooth deformation using iterative Laplacian surface editing and mesh stitching. The algorithm retains the morphological features of the tooth surface well. It is robust and suitable for almost all types of teeth, including incisors, canines, premolars, and molars. Moreover, it allows dental technicians to use their own preferred library teeth for reconstruction. The algorithm has been successfully integrated into our dental CAD system; more than 1000 clinical cases have been tested to demonstrate the robustness and effectiveness of the proposed algorithm.

  4. Structural analysis of ITER magnet feeders

    SciTech Connect

    Ilyin, Yuri; Gung, Chen-Yu; Bauer, Pierre; Chen, Yonghua; Jong, Cornelis; Devred, Arnaud; Mitchell, Neil; Lorriere, Philippe; Farek, Jaromir; Nannini, Matthieu

    2012-06-15

    This paper summarizes the results of the static structural analyses conducted in support of the ITER magnet feeder design, with the aim of validating certain components against the structural design criteria. While almost every feeder has unique features, they all share many common constructional elements and the same functional specifications. The analysis approach used to assess the load conditions and stresses that have driven the design is equivalent for all feeders, except for particularities that needed to be modeled in each case. The mechanical analysis of the feeders follows a sub-modeling approach: the results of the global mechanical model of a feeder assembly are used as input for the detailed models of the feeder's sub-assemblies or single components. Examples of this approach, including the load conditions, stress assessment criteria and solutions for the most critical components, are discussed. It is concluded that the feeder system is safe in the reference operation scenarios. (authors)

  5. Chaos automata: iterated function systems with memory

    NASA Astrophysics Data System (ADS)

    Ashlock, Dan; Golden, Jim

    2003-07-01

    Transforming biological sequences into fractals in order to visualize them is a long-standing technique, in the form of the traditional four-cornered chaos game. In this paper we give a generalization of the standard chaos game visualization for DNA sequences. It incorporates iterated function systems that are called under the control of a finite state automaton, yielding a DNA-to-fractal transformation system with memory. We term these fractal visualizers chaos automata. The use of memory enables association of widely separated sequence events in the drawing of the fractal, finessing the “forgetfulness” of other fractal visualization methods. We use a genetic algorithm to train chaos automata to distinguish introns and exons in Zea mays (corn). A substantial issue treated here is the creation of a fitness function that leads to good visual separation of distinct data types.
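
    The classical four-cornered chaos game that chaos automata generalize is itself a two-line iterated function system: each DNA base pulls the current point halfway toward its assigned corner. The sketch below implements only this memoryless baseline; the automaton-controlled switching between function systems described in the paper is not shown, and the sequence and image size are illustrative.

```python
import numpy as np

# Four-corner chaos game representation (CGR) of a DNA string: each base
# pulls the current point halfway toward its assigned corner of the square.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def chaos_game(seq, size=256):
    img = np.zeros((size, size), dtype=int)
    x, y = 0.5, 0.5
    for base in seq:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0     # contraction ratio 1/2
        img[min(int(y * size), size - 1), min(int(x * size), size - 1)] += 1
    return img

rng = np.random.default_rng(42)
seq = "".join(rng.choice(list("ACGT"), size=100_000).tolist())
img = chaos_game(seq)
print(img.sum(), img.max())   # uniform random DNA fills the square evenly
```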

  6. Electron Cyclotron Emission Diagnostics on ITER

    NASA Astrophysics Data System (ADS)

    Ellis, Richard; Austin, Max; Phillips, Perry; Rowan, William; Beno, Joseph; Auroua, Abelhamid; Feder, Russell; Patel, Ashish; Hubbard, Amanda; Pandya, Hitesh

    2010-11-01

    Electron cyclotron emission (ECE) will be employed on ITER to measure the radial profile of electron temperature and non-thermal features of the electron distribution, as well as to make measurements of ELMs, magnetic islands, high-frequency instabilities, and turbulence. There are two quasioptical systems, designed with Gaussian beam analysis. One view is radial, primarily for temperature profile measurement; the other views at a small angle to radial for measuring non-thermal emission. Radiation is conducted by a long corrugated waveguide to a multichannel Michelson interferometer, which provides wide wavelength coverage but limited time response, as well as to two microwave radiometers, which cover the fundamental and second harmonic ECE and provide excellent time response. Measurements will be made in both X and O mode. In-situ calibration is provided by a novel hot calibration source. We discuss spatial resolution and the implications for physics studies.

  7. ITER L-mode confinement database

    SciTech Connect

    Kaye, S.M.

    1997-10-06

    This paper describes the content of an L-mode database that has been compiled with data from Alcator C-Mod, ASDEX, DIII, DIII-D, FTU, JET, JFT-2M, JT-60, PBX-M, PDX, T-10, TEXTOR, TFTR, and Tore-Supra. The database consists of a total of 2938 entries, 1881 of which are in the L-phase while 922 are ohmically heated only (OH). Each entry contains up to 95 descriptive parameters, including global and kinetic information, machine conditioning, and configuration. The paper presents a description of the database and the variables contained therein, and it also presents global and thermal scalings along with predictions for ITER.

  8. ITER CENTRAL SOLENOID COIL INSULATION QUALIFICATION

    SciTech Connect

    Martovetsky, N N; Mann, T L; Miller, J R; Freudenberg, K D; Reed, R P; Walsh, R P; McColskey, J D; Evans, D

    2009-06-11

    The insulation system for the ITER Central Solenoid must have sufficiently high electrical and structural strength. Design efforts to bring stresses in the turn and layer insulation within allowables failed: it turned out to be impossible to eliminate high local tensile stresses in the winding pack. When high local stresses cannot be designed out, the qualification procedure requires verification of acceptable structural and electrical strength by testing. We built two 4 x 4 arrays of the conductor jacket with two options for the CS insulation and subjected the arrays to 1.2 million compressive cycles at 60 MPa and at 76 K. These conditions simulated the stresses in the CS insulation. We performed voltage withstand tests and, after the end of cycling, we measured the breakdown voltages in the arrays. After that we dissected the arrays and studied micro-cracks in the insulation. We report details of the specimen preparation, test procedures and test results.

  9. Iterative wavelet thresholding for rapid MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Kayvanrad, Mohammad H.; McKenzie, Charles A.; Peters, Terry M.

    2011-03-01

    Following developments in the field of compressed sampling and sparse recovery, one can take advantage of the sparsity of an object, as additional a priori knowledge about it, to reconstruct the object from fewer samples than required by traditional sampling strategies. Since most magnetic resonance (MR) images are sparse in some domain, in this work we consider the problem of MR reconstruction and how this idea can be applied to accelerate MR image/map acquisition. In particular, based on the Papoulis-Gerchberg algorithm, an iterative thresholding algorithm for the reconstruction of MR images from limited k-space observations is proposed. The proposed method takes advantage of the sparsity of most MR images in the wavelet domain. Initializing with a minimum-energy reconstruction, the object of interest is reconstructed by going through a sequence of thresholding and recovery iterations. Furthermore, MR studies often involve the acquisition of multiple images in time that are highly correlated. This correlation can be used as additional knowledge about the object, besides sparsity, to further reduce the reconstruction time. The performance of the proposed algorithms is experimentally evaluated and compared to other state-of-the-art methods. In particular, we show that the quality of reconstruction is increased compared to total variation (TV) regularization and the conventional Papoulis-Gerchberg algorithm, both in the absence and in the presence of noise. Also, phantom experiments show good accuracy in the reconstruction of relaxation maps from a set of highly undersampled k-space observations.
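
    The flavor of such a thresholding-plus-data-consistency loop can be shown with plain NumPy. The sketch below thresholds directly in the image domain on a synthetically sparse phantom, standing in for the wavelet-domain thresholding the paper uses, and re-imposes the measured k-space samples each pass; the mask density and threshold are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
n = 128
img = np.zeros((n, n))
img[rng.integers(0, n, 40), rng.integers(0, n, 40)] = 1.0  # sparse phantom

mask = rng.random((n, n)) < 0.3          # keep ~30% of k-space samples
k_obs = np.fft.fft2(img) * mask

x = np.real(np.fft.ifft2(k_obs))         # minimum-energy (zero-filled) start
x0 = x.copy()
for _ in range(100):
    x = soft_threshold(x, 0.02)          # sparsity step (wavelets in the paper)
    k = np.fft.fft2(x)
    k[mask] = k_obs[mask]                # enforce the measured k-space samples
    x = np.real(np.fft.ifft2(k))

err = lambda v: np.linalg.norm(v - img) / np.linalg.norm(img)
print(err(x0), err(x))                   # iterations improve on zero filling
```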

  10. Error bounds from extra precise iterative refinement

    SciTech Connect

    Demmel, James; Hida, Yozo; Kahan, William; Li, Xiaoye S.; Mukherjee, Soni; Riedy, E. Jason

    2005-02-07

    We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1 / (max{10, √n} ε_w), the computed normwise (resp. componentwise) error bound is at most 2 max{10, √n} ε_w, and indeed bounds the true error. Here, n is the matrix dimension and ε_w is the single precision roundoff error. For worse conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
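
    The algorithmic core, factor once in working precision and correct with residuals computed in higher precision, fits in a few lines. The sketch below uses float32 as the working precision and float64 for residuals; the paper's error-bound machinery is not reproduced.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refine(A32, b32, n_steps=5):
    """Iterative refinement: factor once in float32, compute residuals in
    float64, and correct the solution with cheap triangular solves."""
    lu, piv = lu_factor(A32)                        # O(n^3), single precision
    x = lu_solve((lu, piv), b32).astype(np.float64)
    A64, b64 = A32.astype(np.float64), b32.astype(np.float64)
    for _ in range(n_steps):
        r = b64 - A64 @ x                           # extra-precise residual
        d = lu_solve((lu, piv), r.astype(np.float32))
        x = x + d.astype(np.float64)                # O(n^2) correction
    return x

rng = np.random.default_rng(3)
n = 200
A = rng.standard_normal((n, n)).astype(np.float32)
b = rng.standard_normal(n).astype(np.float32)
x = refine(A, b)
x_ref = np.linalg.solve(A.astype(np.float64), b.astype(np.float64))
print(np.max(np.abs(x - x_ref)))   # near double-precision agreement
```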

  11. Track Filtering via Iterative Correction of TDI Topology.

    PubMed

    Aydogan, Dogu Baran; Shi, Yonggang

    2015-10-01

    We propose a new technique to clean outlier tracks from fiber bundles reconstructed by tractography. Previous techniques were mainly based on computing pair-wise distances and clustering methods to identify unwanted tracks, which relied heavily upon user inputs for parameter tuning. In this work, we propose the use of topological information in track density images (TDI) to achieve a more robust filtering of tracks. There are two main steps in our iterative algorithm. Given a fiber bundle, we first convert it to a TDI, then extract and score its critical points. After that, tracks that contribute to high-scoring loops are identified and removed using the Reeb graph of the level set surface of the TDI. Our approach is geometrically intuitive and relies on only a single parameter that enables the user to decide on the length of insignificant loops. In our experiments, we use our method to reconstruct the optic radiation in the human brain using the multi-shell HARDI data from the human connectome project (HCP). We compare our results against spectral filtering and show that our approach can achieve cleaner reconstructions. We also apply our method to 215 HCP subjects to test for asymmetry of the optic radiation and obtain statistically significant results that are consistent with post-mortem studies.

  12. Iteratively reweighted unidirectional variational model for stripe non-uniformity correction

    NASA Astrophysics Data System (ADS)

    Huang, Yongzhong; He, Cong; Fang, Houzhang; Wang, Xiaoping

    2016-03-01

    In this paper, we propose an adaptive unidirectional variational nonuniformity correction algorithm for fixed-pattern noise removal. The proposed algorithm is based on a unidirectional variational sparse model that makes use of the unidirectional characteristics of stripe nonuniformity noise. The iteratively reweighted least squares (IRLS) technique is introduced to optimize the proposed correction model, which makes the proposed algorithm easy to implement with the existing conjugate gradient method, without introducing additional variables and parameters. Moreover, we derive a formula to automatically update the regularization parameter from the images. Comparative experimental results on real infrared images indicate that the proposed method can remove stripe nonuniformity noise effectively while maintaining more useful image details.
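
    The reweighting device itself is generic. The sketch below applies IRLS to a plain L1 regression problem rather than to the paper's unidirectional variational model: each pass solves a least squares problem whose weights are the reciprocals of the previous residuals, which is what lets a quadratic solver mimic an L1 objective.

```python
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-6):
    """IRLS for min_x ||Ax - b||_1: each pass solves a weighted least
    squares problem with weights 1/|residual|, the same reweighting
    device used to optimize the unidirectional variational model."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(b - A @ x), eps)   # reweighting step
        Aw = A * w[:, None]                            # row-scaled copy of A
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)        # weighted normal equations
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((200, 5))
x_true = np.array([1.0, -2.0, 0.0, 3.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(200)
b[:10] += 20.0                       # gross outliers that an L1 fit resists
print(irls_l1(A, b))                 # close to x_true despite the outliers
```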

  13. Iterative optical vector-matrix processors (survey of selected achievable operations)

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Neuman, C.

    1981-01-01

    An iterative optical vector-matrix multiplier with a microprocessor-controlled feedback loop capable of performing a wealth of diverse operations was described. A survey and description of many of its operations demonstrates the versatility and flexibility of this class of optical processor and its use in diverse applications. General operations described include: linear difference and differential equations, linear algebraic equations, matrix equations, matrix inversion, nonlinear matrix equations, deconvolution and eigenvalue and eigenvector computations. Engineering applications being addressed for these different operations and for the IOP are: adaptive phased-array radar, time-dependent system modeling, deconvolution and optimal control.
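
    The feedback loop of such a processor, one matrix-vector multiply per pass followed by an additive correction, is in software terms just Richardson iteration for Ax = b. A minimal sketch follows; the system and step size are illustrative.

```python
import numpy as np

def richardson(A, b, omega=None, n_iter=200):
    """x_{k+1} = x_k + omega * (b - A x_k): one matrix-vector product per
    pass of the feedback loop, here in plain software form."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2)   # safe step size for SPD A
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = x + omega * (b - A @ x)
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                   # small SPD example system
b = np.array([1.0, 2.0])
print(richardson(A, b), np.linalg.solve(A, b))
```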

  14. MQSA National Statistics

    MedlinePlus


  15. Iterative build OMIT maps: Map improvement by iterative model-building and refinement without model bias

    SciTech Connect

    Terwilliger, T.C.; Grosse-Kunstleve, R.W.; Afonine, P.V.; Moriarty, N.W.; Zwart, P.H.; Hung, L.-W.; Read, R.J.; Adams, P.D.

    2008-02-12

    A procedure for carrying out iterative model-building, density modification and refinement is presented in which the density in an OMIT region is essentially unbiased by an atomic model. Density from a set of overlapping OMIT regions can be combined to create a composite 'Iterative-Build' OMIT map that is everywhere unbiased by an atomic model but also everywhere benefiting from the model-based information present elsewhere in the unit cell. The procedure may have applications in the validation of specific features in atomic models as well as in overall model validation. The procedure is demonstrated with a molecular replacement structure and with an experimentally-phased structure, and a variation on the method is demonstrated by removing model bias from a structure from the Protein Data Bank.

  16. Intelligent control and adaptive systems; Proceedings of the Meeting, Philadelphia, PA, Nov. 7, 8, 1989

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Editor)

    1990-01-01

    Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.

  17. Quality metric in matched Laplacian of Gaussian response domain for blind adaptive optics image deconvolution

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Yang, Yikang; Xu, Rong; Liu, Changhai; Li, Jisheng

    2016-04-01

    Adaptive optics (AO), in conjunction with subsequent postprocessing techniques, has markedly improved the resolution of turbulence-degraded images in ground-based astronomical observations and in the detection and identification of artificial space objects. However, important tasks involved in AO image postprocessing, such as frame selection, stopping iterative deconvolution, and algorithm comparison, commonly need manual intervention and cannot be performed automatically due to a lack of widely agreed-upon image quality metrics. In this work, based on the Laplacian of Gaussian (LoG) local contrast feature detection operator, we propose a LoG domain matching operation to perceive effective and universal image quality statistics. Further, we extract two no-reference quality assessment indices in the matched LoG domain that can be used for a variety of postprocessing tasks. Three typical space object images with distinct structural features are tested to verify the consistency of the proposed metric with perceptual image quality through subjective evaluation.

  18. Nonlinearities and adaptation of color vision from sequential principal curves analysis.

    PubMed

    Laparra, Valero; Jiménez, Sandra; Camps-Valls, Gustavo; Malo, Jesús

    2012-10-01

    Mechanisms of human color vision are characterized by two phenomenological aspects: the system is nonlinear and adaptive to changing environments. Conventional attempts to derive these features from statistics use separate arguments for each aspect. The few statistical explanations that do consider both phenomena simultaneously follow parametric formulations based on empirical models. Therefore, it may be argued that the behavior does not come directly from the color statistics but from the convenient functional form adopted. In addition, many times the whole statistical analysis is based on simplified databases that disregard relevant physical effects in the input signal, as, for instance, by assuming flat Lambertian surfaces. In this work, we address the simultaneous statistical explanation of the nonlinear behavior of achromatic and chromatic mechanisms in a fixed adaptation state and the change of such behavior (i.e., adaptation) under the change of observation conditions. Both phenomena emerge directly from the samples through a single data-driven method: the sequential principal curves analysis (SPCA) with local metric. SPCA is a new manifold learning technique to derive a set of sensors adapted to the manifold using different optimality criteria. Here sequential refers to the fact that sensors (curvilinear dimensions) are designed one after the other, and not to the particular (eventually iterative) method to draw a single principal curve. Moreover, in order to reproduce the empirical adaptation reported under D65 and A illuminations, a new database of colorimetrically calibrated images of natural objects under these illuminants was gathered, thus overcoming the limitations of available databases. The results obtained by applying SPCA show that the psychophysical behavior on color discrimination thresholds, discount of the illuminant, and corresponding pairs in asymmetric color matching emerge directly from realistic data regularities, assuming no a priori

  19. An iterative approach for compound detection in an unknown pharmaceutical drug product: Application on Raman microscopy.

    PubMed

    Boiret, Mathieu; Gorretta, Nathalie; Ginot, Yves-Michel; Roger, Jean-Michel

    2016-02-20

    Raman chemical imaging provides both spectral and spatial information on a pharmaceutical drug product. Even if the main objective of chemical imaging is to obtain distribution maps of each formulation compound, identification of the pure signals in a mixture dataset remains of great interest. In this work, an iterative approach is proposed to identify the compounds in a pharmaceutical drug product, assuming that the chemical composition of the product is not known by the analyst and that a low-dose compound may be present in the studied medicine. The proposed approach uses a spectral library, spectral distances and orthogonal projections to iteratively detect the pure compounds of a tablet. Since the proposed method is not based on variance decomposition, it should be well adapted to a drug product that contains a low-dose compound, interpreted as a compound located in few pixels and with low spectral contributions. The method is tested on a tablet specifically manufactured for this study with one active pharmaceutical ingredient and five excipients. A spectral library, constituted of 24 pure pharmaceutical compounds, is used as the reference spectral database. Pure spectra of the active ingredient and the excipients, including a modification of the crystalline form and a low-dose compound, are iteratively detected. Once the pure spectra are identified, a multivariate curve resolution-alternating least squares process is performed on the data to provide distribution maps of each compound in the studied sample. The distributions of the two crystalline forms of the active ingredient and the five excipients were in accordance with the theoretical formulation.
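
    A stripped-down version of this detect-and-project loop is easy to state: score every library spectrum against the residual data, record the best match, project the data onto the orthogonal complement of the detected spectrum, and repeat until nothing in the residual resembles the library. The sketch below is such a toy version on synthetic spectra; the library size, threshold, and similarity measure are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def detect_compounds(X, library, n_max=6, tol=0.3):
    """Toy iterative pure-compound detection: pick the library entry most
    similar to the residual spectra, then deflate the data by the
    orthonormalized direction of that entry and repeat."""
    names = list(library)
    S = np.array([library[k] for k in names], dtype=float)
    R = X.astype(float).copy()
    found, basis = [], []
    for _ in range(n_max):
        sim = (R @ S.T) / (np.linalg.norm(R, axis=1)[:, None]
                           * np.linalg.norm(S, axis=1)[None, :] + 1e-12)
        for f in found:
            sim[:, names.index(f)] = -np.inf   # skip already-found entries
        px, k = np.unravel_index(np.argmax(sim), sim.shape)
        if sim[px, k] < tol:
            break
        found.append(names[k])
        u = S[k].copy()
        for v in basis:                        # Gram-Schmidt vs. earlier picks
            u -= (u @ v) * v
        u /= np.linalg.norm(u)
        basis.append(u)
        R -= np.outer(R @ u, u)                # orthogonal projection of data
    return found

# toy mixture of two major compounds plus one low-dose compound
rng = np.random.default_rng(5)
lib = {f"compound_{i}": np.abs(rng.standard_normal(64)) for i in range(8)}
X = np.array([3 * lib["compound_1"] + lib["compound_4"],
              lib["compound_1"] + 0.05 * lib["compound_6"]])
print(detect_compounds(X, lib))   # typically recovers 1, 4 and low-dose 6
```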

  20. Magnet design technical report---ITER definition phase

    SciTech Connect

    Henning, C.

    1989-04-28

    This report contains papers on the following topics: conceptual design; radiation damage of ITER magnet systems; insulation system of the magnets; critical current density and strain sensitivity; toroidal field coil structural analysis; stress analysis for the ITER central solenoid; and volt-second capabilities and PF magnet configurations.

  1. A Model and Simple Iterative Algorithm for Redundancy Analysis.

    ERIC Educational Resources Information Center

    Fornell, Claes; And Others

    1988-01-01

    This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)

  2. An Iterative Method for Solving Variable Coefficient ODEs

    ERIC Educational Resources Information Center

    Deeba, Elias; Yoon, Jeong-Mi; Zafiris, Vasilis

    2003-01-01

    In this classroom note, the authors present a method to solve variable-coefficient ordinary differential equations of the form p(x)y''(x) + q(x)y'(x) + r(x)y(x) = 0. They propose an iterative method as an alternative way to solve the above equation. This iterative method is accessible to an undergraduate student studying…
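
    The note's own scheme is not reproduced here, but a closely related classroom iteration is Picard iteration on the equivalent first-order system: each sweep re-integrates the right-hand side evaluated on the previous iterate. The sketch below applies it to y'' + x y' + y = 0 with y(0) = 1, y'(0) = 0, whose exact solution is exp(-x^2/2); the grid and sweep count are illustrative.

```python
import numpy as np

# Picard iteration for y'' + x y' + y = 0, y(0) = 1, y'(0) = 0. As a
# first-order system u = (y, y'), each sweep recomputes
# u <- u0 + integral of f(t, u) evaluated on the previous iterate.
n = 2001
x = np.linspace(0.0, 2.0, n)

def cumtrapz(f):
    """Cumulative trapezoidal integral of samples f over the grid x."""
    out = np.zeros_like(f)
    out[1:] = np.cumsum((f[1:] + f[:-1]) / 2.0 * np.diff(x))
    return out

y, yp = np.ones(n), np.zeros(n)        # initial guess: the initial conditions
for _ in range(30):
    y, yp = 1.0 + cumtrapz(yp), cumtrapz(-x * yp - y)

print(np.max(np.abs(y - np.exp(-x ** 2 / 2.0))))  # ~ trapezoid-rule accuracy
```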

  3. Language Evolution by Iterated Learning with Bayesian Agents

    ERIC Educational Resources Information Center

    Griffiths, Thomas L.; Kalish, Michael L.

    2007-01-01

    Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute…

  4. Validation of 1-D transport and sawtooth models for ITER

    SciTech Connect

    Connor, J.W.; Turner, M.F.; Attenberger, S.E.; Houlberg, W.A.

    1996-12-31

    In this paper the authors describe progress on validating a number of local transport models by comparing their predictions with relevant experimental data from a range of tokamaks in the ITER profile database. This database, the testing procedure and results are discussed. In addition a model for sawtooth oscillations is used to investigate their effect in an ITER plasma with alpha-particles.

  5. Not so Complex: Iteration in the Complex Plane

    ERIC Educational Resources Information Center

    O'Dell, Robin S.

    2014-01-01

    The simple process of iteration can produce complex and beautiful figures. In this article, Robin O'Dell presents a set of tasks requiring students to use the geometric interpretation of complex number multiplication to construct linear iteration rules. When the outputs are plotted in the complex plane, the graphs trace pleasing designs…
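
    A linear complex iteration rule of the kind the students construct is z_{n+1} = a z_n + b: the modulus of a contracts the orbit and its argument rotates it, so points spiral toward the fixed point b/(1 - a). A minimal sketch follows; the constants are illustrative.

```python
import numpy as np

# Linear iteration z_{n+1} = a*z_n + b: |a| sets the contraction and arg(a)
# the rotation per step, so orbits spiral toward the fixed point b / (1 - a).
a = 0.95 * np.exp(1j * np.pi / 6)   # shrink 5% and rotate 30 degrees per step
b = 1.0 + 0.5j
z = 0.0 + 0.0j
orbit = [z]
for _ in range(60):
    z = a * z + b
    orbit.append(z)

z_star = b / (1 - a)
print(abs(orbit[-1] - z_star))      # the orbit has spiraled into the fixed point
```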

  6. Adaptive SPECT

    PubMed Central

    Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.

    2008-01-01

    Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485

  7. Preliminary consideration of CFETR ITER-like case diagnostic system.

    PubMed

    Li, G S; Yang, Y; Wang, Y M; Ming, T F; Han, X; Liu, S C; Wang, E H; Liu, Y K; Yang, W J; Li, G Q; Hu, Q S; Gao, X

    2016-11-01

    The Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims at bridging the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary considerations of the ITER-like case are presented. Based on the ITER diagnostic system, three versions of increased complexity and coverage of the ITER-like case diagnostic system have been developed, with different goals and functions. Version A aims only at machine protection and basic control. Versions B and C are both mainly for machine protection and basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.

  8. Final Report on ITER Task Agreement 81-10

    SciTech Connect

    Brad J. Merrill

    2009-01-01

    An International Thermonuclear Experimental Reactor (ITER) Implementing Task Agreement (ITA) on Magnet Safety was established between the ITER International Organization (IO) and the Idaho National Laboratory (INL) Fusion Safety Program (FSP) during calendar year 2004. The objectives of this ITA were to add new capabilities to the MAGARC code and to use this updated version of MAGARC to analyze unmitigated superconductor quench events for both poloidal field (PF) and toroidal field (TF) coils of the ITER design. This report documents the completion of the work scope for this ITA. Based on the results obtained, an unmitigated quench event in a larger ITER PF coil does not appear to be as severe an accident as one in an ITER TF coil.

  9. Preliminary consideration of CFETR ITER-like case diagnostic system

    NASA Astrophysics Data System (ADS)

    Li, G. S.; Yang, Y.; Wang, Y. M.; Ming, T. F.; Han, X.; Liu, S. C.; Wang, E. H.; Liu, Y. K.; Yang, W. J.; Li, G. Q.; Hu, Q. S.; Gao, X.

    2016-11-01

    The Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims at bridging the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary considerations of the ITER-like case are presented. Based on the ITER diagnostic system, three versions of increased complexity and coverage of the ITER-like case diagnostic system have been developed, with different goals and functions. Version A aims only at machine protection and basic control. Versions B and C are both mainly for machine protection and basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.

  10. The Effect of Iteration on the Design Performance of Primary School Children

    ERIC Educational Resources Information Center

    Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.

    2015-01-01

    Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is, however, scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…

  11. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery

    NASA Astrophysics Data System (ADS)

    Hashemi, SayedMasoud; Song, William Y.; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G.; Ruschin, Mark

    2017-04-01

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by the detector response, the x-ray source focal spot size, azimuthal blurring, and the reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments needed to achieve both high spatial resolution and low noise variance, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with the CatPhan phantom, an anthropomorphic head phantom, and six clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV-penalized simultaneous algebraic reconstruction technique methods (represented by the adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of standard TV-based iterative reconstruction algorithms in improving contrast and reducing reconstruction artifacts. It improves the visibility of high-contrast details…

  12. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery.

    PubMed

    Hashemi, SayedMasoud; Song, William Y; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G; Ruschin, Mark

    2017-03-01

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by the detector response, the x-ray source focal spot size, azimuthal blurring, and the reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments needed to achieve both high spatial resolution and low noise variance, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with the CatPhan phantom, an anthropomorphic head phantom, and six clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV-penalized simultaneous algebraic reconstruction technique methods (represented by the adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of standard TV-based iterative reconstruction algorithms in improving contrast and reducing reconstruction artifacts. It improves the visibility of high-contrast details…
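
    SDIR itself couples a measured blur kernel with TV plus nonlocal regularization solved by split Bregman; the simplified numpy sketch below shows only the underlying idea of deblurring inside reconstruction, using plain gradient descent on a smoothed-TV objective with a circular (FFT-domain) blur model. The function name, the TV smoothing, the solver choice, and the step sizes are assumptions for illustration, not the paper's method.

        import numpy as np

        def tv_deblur(b, kernel_fft, lam=0.01, step=0.5, n_iter=200, eps=1e-3):
            """Gradient descent on 0.5*||K x - b||^2 + lam * TV_eps(x).

            K is a circular blur modeled in the Fourier domain (kernel_fft),
            standing in for a measured blur kernel; TV is smoothed with eps
            so the objective is differentiable.
            """
            x = b.copy()
            Kc = np.conj(kernel_fft)
            for _ in range(n_iter):
                # data term gradient: K^T (K x - b)
                resid = np.fft.ifft2(kernel_fft * np.fft.fft2(x)).real - b
                grad = np.fft.ifft2(Kc * np.fft.fft2(resid)).real
                # smoothed-TV gradient: -div(grad x / |grad x|)
                dx = np.roll(x, -1, 1) - x
                dy = np.roll(x, -1, 0) - x
                mag = np.sqrt(dx**2 + dy**2 + eps**2)
                px, py = dx / mag, dy / mag
                div = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))
                grad -= lam * div
                x -= step * grad
            return x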

  13. Evaluating iterative reconstruction performance in computed tomography

    SciTech Connect

    Chen, Baiyu Solomon, Justin; Ramirez Giraldo, Juan Carlos; Samei, Ehsan

    2014-12-15

    Purpose: Iterative reconstruction (IR) offers notable advantages in computed tomography (CT). However, its performance characterization is complicated by its potentially nonlinear behavior, impacting performance in terms of specific tasks. This study aimed to evaluate the performance of IR with both task-specific and task-generic strategies. Methods: The performance of IR in CT was mathematically assessed with an observer model that predicted the detection accuracy in terms of the detectability index (d′). d′ was calculated based on the properties of the image noise and resolution, the observer, and the detection task. The characterizations of image noise and resolution were extended to accommodate the nonlinearity of IR. A library of tasks was mathematically modeled at a range of sizes (radius 1–4 mm), contrast levels (10–100 HU), and edge profiles (sharp and soft). Unique d′ values were calculated for each task with respect to five radiation exposure levels (volume CT dose index, CTDI{sub vol}: 3.4–64.8 mGy) and four reconstruction algorithms (filtered backprojection reconstruction, FBP; iterative reconstruction in imaging space, IRIS; and sinogram affirmed iterative reconstruction with strengths of 3 and 5, SAFIRE3 and SAFIRE5; all provided by Siemens Healthcare, Forchheim, Germany). The d′ values were translated into the areas under the receiver operating characteristic curve (AUC) to represent human observer performance. For each task and reconstruction algorithm, a threshold dose was derived as the minimum dose required to achieve a threshold AUC of 0.9. A task-specific dose reduction potential of IR was calculated as the difference between the threshold doses for IR and FBP. A task-generic comparison was further made between IR and FBP in terms of the percent of all tasks yielding an AUC higher than the threshold. Results: IR required less dose than FBP to achieve the threshold AUC. In general, SAFIRE5 showed the most significant dose reduction
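
    As a rough sketch of how a detectability index of this kind is assembled, the function below evaluates a non-prewhitening-style d′ from a task function W(f), a system transfer function (TTF) and a noise power spectrum (NPS), with the 2-D frequency integrals collapsed to a generic 1-D sum for brevity; the paper's exact observer model and weighting may differ.

        import numpy as np

        def npw_detectability(task_fft_mag, ttf, nps, df):
            """Non-prewhitening model-observer d' on a frequency grid.

            d'^2 = [sum (W*TTF)^2 df]^2 / [sum (W*TTF)^2 * NPS df]
            """
            signal = (task_fft_mag * ttf) ** 2   # imaged signal power
            num = (signal.sum() * df) ** 2
            den = (signal * nps).sum() * df
            return np.sqrt(num / den)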

  14. Adaptive Strategies for Materials Design using Uncertainties.

    PubMed

    Balachandran, Prasanna V; Xue, Dezhen; Theiler, James; Hogden, John; Lookman, Turab

    2016-01-21

    We compare several adaptive design strategies using a data set of 223 M2AX family of compounds for which the elastic properties [bulk (B), shear (G), and Young's (E) modulus] have been computed using density functional theory. The design strategies are decomposed into an iterative loop with two main steps: machine learning is used to train a regressor that predicts elastic properties in terms of elementary orbital radii of the individual components of the materials; and a selector uses these predictions and their uncertainties to choose the next material to investigate. The ultimate goal is to obtain a material with desired elastic properties in as few iterations as possible. We examine how the choice of data set size, regressor and selector impact the design. We find that selectors that use information about the prediction uncertainty outperform those that don't. Our work is a step in illustrating how adaptive design tools can guide the search for new materials with desired properties.
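
    A minimal sketch of an uncertainty-aware selector of the kind compared in the paper: given a regressor's predictions mu and uncertainties sigma for the unmeasured candidates, expected improvement trades off exploitation and exploration. The formula is the standard one; treating it as the paper's specific selector is an assumption.

        import numpy as np
        from scipy.stats import norm

        def expected_improvement(mu, sigma, best_so_far):
            """Expected improvement for maximizing a property.

            Selectors that ignore sigma reduce to pure exploitation
            (argmax of mu).
            """
            sigma = np.maximum(sigma, 1e-12)
            z = (mu - best_so_far) / sigma
            return (mu - best_so_far) * norm.cdf(z) + sigma * norm.pdf(z)

        # pick the next candidate material to compute or measure
        mu = np.array([10.0, 12.0, 11.5])
        sigma = np.array([0.1, 0.2, 3.0])
        next_idx = np.argmax(expected_improvement(mu, sigma, best_so_far=12.1))
        print(next_idx)   # the uncertain candidate can win despite a lower mean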

  15. Statistics Poker: Reinforcing Basic Statistical Concepts

    ERIC Educational Resources Information Center

    Leech, Nancy L.

    2008-01-01

    Learning basic statistical concepts does not need to be tedious or dry; it can be fun and interesting through cooperative learning in the small-group activity of Statistics Poker. This article describes a teaching approach for reinforcing basic statistical concepts that can help students who have high anxiety and makes learning and reinforcing…

  16. Predict! Teaching Statistics Using Informational Statistical Inference

    ERIC Educational Resources Information Center

    Makar, Katie

    2013-01-01

    Statistics is one of the most widely used topics for everyday life in the school mathematics curriculum. Unfortunately, the statistics taught in schools focuses on calculations and procedures before students have a chance to see it as a useful and powerful tool. Researchers have found that a dominant view of statistics is as an assortment of tools…

  17. An efficient reconstruction method for bioluminescence tomography based on two-step iterative shrinkage approach

    NASA Astrophysics Data System (ADS)

    Guo, Wei; Jia, Kebin; Tian, Jie; Han, Dong; Liu, Xueyan; Wu, Ping; Feng, Jinchao; Yang, Xin

    2012-03-01

    Among many molecular imaging modalities, bioluminescence tomography (BLT) is an important optical molecular imaging modality. Due to its unique advantages in specificity, sensitivity, cost-effectiveness and low background noise, BLT is widely studied for live small animal imaging. Since only the photon distribution over the surface is measurable and photon propagation within biological tissue is highly diffusive, BLT is often an ill-posed problem and may bear multiple solutions and aberrant reconstructions in the presence of measurement noise and optical parameter mismatches. For many practical BLT applications, such as early detection of tumors, the volumes of the light sources are very small compared with the whole body. Therefore, L1-norm sparsity regularization has been used to take advantage of this sparsity prior and alleviate the ill-posedness of the problem. The iterative shrinkage (IST) algorithm is an important research achievement in the field of compressed sensing and is widely applied in sparse signal reconstruction. However, the convergence rate of the IST algorithm depends heavily on the linear operator; when the problem is ill-posed, it becomes very slow. In this paper, we present a sparsity-regularized reconstruction method for BLT based on the two-step iterated shrinkage approach. By employing the two-step strategy of iterative reweighted shrinkage (IRS) to improve IST, the proposed method shows a faster convergence rate and better adaptability for BLT. Simulation experiments with a mouse atlas were conducted to evaluate the performance of the proposed method. By contrast with IST, the proposed method obtains a stable and comparable reconstruction solution with fewer iterations.
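
    For concreteness, here is a hedged numpy sketch of plain IST and of a generic two-step variant in the spirit of TwIST, in which the update mixes the two previous iterates; the parameter values are illustrative and the paper's IRS reweighting is not reproduced.

        import numpy as np

        def soft(x, t):
            """Soft-thresholding, the proximal map of the L1 norm."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ist(A, b, lam, step, n_iter=500):
            """Plain IST for min 0.5*||Ax - b||^2 + lam*||x||_1."""
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = soft(x + step * A.T @ (b - A @ x), step * lam)
            return x

        def twist(A, b, lam, step, alpha=1.9, beta=1.0, n_iter=500):
            """Two-step variant: x_{t+1} combines x_t and x_{t-1}, which
            speeds convergence when A is ill-conditioned."""
            x_prev = np.zeros(A.shape[1])
            x = soft(step * A.T @ b, step * lam)
            for _ in range(n_iter):
                g = soft(x + step * A.T @ (b - A @ x), step * lam)
                x, x_prev = (1 - alpha) * x_prev + (alpha - beta) * x + beta * g, x
            return x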

  18. Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment

    PubMed Central

    Baum, Thomas; Nasirudin, Radin A.; Mei, Kai; Garcia, Eduardo G.; Burgkart, Rainer; Rummeny, Ernst J.; Kirschke, Jan S.; Noël, Peter B.

    2016-01-01

    We investigated the effects of low-dose multi detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard-dose) and a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were biomechanically determined and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very small detail that might get lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods. PMID:27447827

  19. Experimental infrared point-source detection using an iterative generalized likelihood ratio test algorithm.

    PubMed

    Nichols, J M; Waterman, J R

    2017-03-01

    This work documents the performance of a recently proposed generalized likelihood ratio test (GLRT) algorithm in detecting thermal point-source targets against a sky background. A calibrated source is placed above the horizon at various ranges and then imaged using a mid-wave infrared camera. The proposed algorithm combines a so-called "shrinkage" estimator of the background covariance matrix and an iterative maximum likelihood estimator of the point-source parameters to produce the GLRT statistic. It is clearly shown that the proposed approach results in better detection performance than either standard energy detection or previous implementations of the GLRT detector.
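
    The two ingredients named above can be sketched as follows: a shrinkage of the sample covariance toward a scaled identity (to keep it well-conditioned when training pixels are scarce) and a matched-filter-style detection statistic. The paper's GLRT additionally iterates a maximum likelihood fit of the point-source parameters, which is omitted here; function names and the fixed shrinkage weight are illustrative.

        import numpy as np

        def shrinkage_covariance(samples, gamma=0.1):
            """Shrink the sample covariance toward (tr(S)/p) * I."""
            S = np.cov(samples, rowvar=False)
            p = S.shape[0]
            return (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)

        def matched_filter_stat(x, mu_bg, cov, s):
            """Detection statistic for a known signature s against a
            background with mean mu_bg and covariance cov; large values
            suggest a point source is present."""
            Ci = np.linalg.inv(cov)
            d = x - mu_bg
            return (s @ Ci @ d) ** 2 / (s @ Ci @ s)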

  20. Progress of ITER Superconducting Magnet Procurement

    NASA Astrophysics Data System (ADS)

    Koizumi, N.

    The ITER superconducting magnet system consists of 18 Toroidal Field (TF) coils, 1 Central Solenoid (CS), 6 Poloidal Field (PF) coils and 18 Correction Coils (CC). The TF conductors will be manufactured by China (7%), the EU (20%), Korea (20%), Japan (25%), Russia (20%) and the US (8%); the TF coils by the EU (10 coils) and Japan (9 coils), in which one spare is included; all TF coil cases by Japan; all CS conductors by Japan; all CS modules (7, including a spare); the PF conductors by China (65%), the EU (21%) and Russia (14%); the PF coils by the EU (5 coils) and Russia (1 coil); all CCs by China; and all feeders by China. Since TF coil manufacture is one of the long-lead items, procurement of the TF conductors has been started, and more than 40 TF conductors have already been fabricated. Large-scale trials for TF coil manufacture have also been started, with successful results obtained in both the EU and Japan, such as the manufacture of full-scale radial plates. Trials for the PF coils and CCs have been done by Russia and China.

  1. Iterative phase retrieval algorithms. I: optimization.

    PubMed

    Guo, Changliang; Liu, Shi; Sheridan, John T

    2015-05-20

    Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems.
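
    For reference, a minimal numpy version of the classic GS loop that the paper modifies: alternate between the Fourier and object domains, enforcing the known amplitude in each while keeping the evolving phase estimate. The spatial phase perturbation and HIO hybridization described above are not included, and all names are illustrative.

        import numpy as np

        def gerchberg_saxton(source_amp, target_amp, n_iter=200, seed=0):
            """Classic GS phase retrieval between two amplitude constraints."""
            rng = np.random.default_rng(seed)
            phase = rng.uniform(0, 2 * np.pi, source_amp.shape)
            field = source_amp * np.exp(1j * phase)
            for _ in range(n_iter):
                F = np.fft.fft2(field)
                F = target_amp * np.exp(1j * np.angle(F))          # Fourier constraint
                field = np.fft.ifft2(F)
                field = source_amp * np.exp(1j * np.angle(field))  # object constraint
            return np.angle(field)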

  2. Stepwise Iterative Fourier Transform: The SIFT

    NASA Technical Reports Server (NTRS)

    Benignus, V. A.; Benignus, G.

    1975-01-01

    A program, designed specifically to study the respective effects of some common data problems on results obtained through stepwise iterative Fourier transformation of synthetic data with known waveform composition, was outlined. Included in this group were the problems of gaps in the data, different time-series lengths, periodic but nonsinusoidal waveforms, and noisy (low signal-to-noise) data. Results on sinusoidal data were also compared with results obtained on narrow band noise with similar characteristics. The findings showed that the analytic procedure under study can reliably reduce data in the nature of (1) sinusoids in noise, (2) asymmetric but periodic waves in noise, and (3) sinusoids in noise with substantial gaps in the data. The program was also able to analyze narrow-band noise well, but with increased interpretational problems. The procedure was shown to be a powerful technique for analysis of periodicities, in comparison with classical spectrum analysis techniques. However, informed use of the stepwise procedure nevertheless requires some background of knowledge concerning characteristics of the biological processes under study.

  3. RF heating needs and plans for ITER

    SciTech Connect

    Bora, Dhiraj; Beaumont, B.; Kobayashi, N.; Tanga, A.; Goulding, R.; Swain, D.; Jacquinot, J.

    2007-09-28

    RF heating systems are required to deliver more than half of the total auxiliary power needed to operate ITER successfully through its different phases of operation. To achieve this goal, systems in the ICRF, LHF and ECRF ranges will be implemented for different tasks in different phases of operation. The power levels used in the different ranges will vary depending on the needs, and the mix of powers will depend on the physics needs of the experimental programmes. Lower hybrid power of 20 MW at 5.0 GHz is not planned for the startup phase, and therefore no procurement scheme exists at the present time. 20 MW will be delivered into the plasma at 40 to 55 MHz and at 170 GHz by the Ion Cyclotron Heating (ICH) and Electron Cyclotron Heating (ECH) systems, respectively. All the heating systems will have the capability to operate in continuous mode. A dedicated 3.0 MW ECH system at 127.6 GHz will be used for plasma breakdown and startup.

  4. Using Action Research to Develop a Course in Statistical Inference for Workplace-Based Adults

    ERIC Educational Resources Information Center

    Forbes, Sharleen

    2014-01-01

    Many adults who need an understanding of statistical concepts have limited mathematical skills. They need a teaching approach that includes as little mathematical context as possible. Iterative participatory qualitative research (action research) was used to develop a statistical literacy course for adult learners informed by teaching in…

  5. Climate adaptation

    NASA Astrophysics Data System (ADS)

    Kinzig, Ann P.

    2015-03-01

    This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.

  6. Hydropower, Adaptive Management, and Biodiversity

    PubMed

    WIERINGA; MORTON

    1996-11-01

    Adaptive management is a policy framework within which an iterative process of decision making is followed based on the observed responses to and effectiveness of previous decisions. The use of adaptive management allows science-based research and monitoring of natural resource and ecological community responses, in conjunction with societal values and goals, to guide decisions concerning man's activities. The adaptive management process has been proposed for application to hydropower operations at Glen Canyon Dam on the Colorado River, a situation that requires complex balancing of natural resources requirements and competing human uses. This example is representative of the general increase in public interest in the operation of hydropower facilities and possible effects on downstream natural resources and of the growing conflicts between uses and users of river-based resources. This paper describes the adaptive management process, using the Glen Canyon Dam example, and discusses ways to make the process work effectively in managing downstream natural resources and biodiversity. KEY WORDS: Adaptive management; Biodiversity; Hydropower; Glen Canyon Dam; Ecology

  7. Adaptive management of natural resources-framework and issues

    USGS Publications Warehouse

    Williams, B.K.

    2011-01-01

    Adaptive management, an approach for simultaneously managing and learning about natural resources, has been around for several decades. Interest in adaptive decision making has grown steadily over that time, and by now many in natural resources conservation claim that adaptive management is the approach they use in meeting their resource management responsibilities. Yet there remains considerable ambiguity about what adaptive management actually is, and how it is to be implemented by practitioners. The objective of this paper is to present a framework and conditions for adaptive decision making, and discuss some important challenges in its application. Adaptive management is described as a two-phase process of deliberative and iterative phases, which are implemented sequentially over the timeframe of an application. Key elements, processes, and issues in adaptive decision making are highlighted in terms of this framework. Special emphasis is given to the question of geographic scale, the difficulties presented by non-stationarity, and organizational challenges in implementing adaptive management. © 2010.

  8. Holographic imaging through a scattering medium by diffuser-assisted statistical averaging

    NASA Astrophysics Data System (ADS)

    Purcell, Michael J.; Kumar, Manish; Rand, Stephen C.

    2016-03-01

    The ability to image through a scattering or diffusive medium such as tissue or hazy atmosphere is a goal which has garnered extensive attention from the scientific community. Existing imaging methods in this field make use of phase conjugation, time of flight, iterative wave-front shaping or statistical averaging approaches, which tend to be either time consuming or complicated to implement. We introduce a novel and practical way of statistical averaging which makes use of a rotating ground glass diffuser to nullify the adverse effects caused by speckle introduced by a first static diffuser / aberrator. This is a Fourier transform-based, holographic approach which demonstrates the ability to recover detailed images and shows promise for further remarkable improvement. The present experiments were performed with 2D flat images, but this method could be easily adapted for recovery of 3D extended object information. The simplicity of the approach makes it fast, reliable, and potentially scalable as a portable technology. Since imaging through a diffuser has direct applications in biomedicine and defense technologies this method may augment advanced imaging capabilities in many fields.

  9. Agricultural Land Classification Based on Statistical Analysis of Full Polarimetric SAR Data

    NASA Astrophysics Data System (ADS)

    Mahdian, M.; Homayouni, S.; Fazel, M. A.; Mohammadimanesh, F.

    2013-09-01

    The discrimination capability of Polarimetric Synthetic Aperture Radar (PolSAR) data makes them a unique source of information with a significant contribution in tackling problems concerning environmental applications. One of the most important applications of these data is land cover classification of the earth's surface. This data type enables more detailed classification of phenomena by using physical parameters and scattering mechanisms. In this paper, we propose a contextual unsupervised classification approach for full PolSAR data, which allows the use of multiple sources of statistical evidence. The Expectation-Maximization (EM) classification algorithm is performed to estimate land cover classes. The EM algorithm is an iterative algorithm that formalizes the problem of parameter estimation for a mixture distribution. To represent the statistical properties and integrate contextual information of the associated image data in the analysis process, we used the Markov random field (MRF) modelling technique. This model is developed by formulating the maximum a posteriori decision rule as the minimization of suitable energy functions. To select the optimum distribution that fits the data most efficiently, we used the Mellin transform, which is a natural analytical tool to study the distribution of products and quotients of independent random variables. The proposed classification method is applied to a full polarimetric L-band dataset acquired over an agricultural region in Winnipeg, Canada. We evaluate the classification performance of the proposed approach based on kappa and overall accuracies, and compare it with other well-known classic methods.
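
    The EM step of such a pipeline can be illustrated on scalar data; the paper itself works with PolSAR covariance statistics and adds MRF context, both of which this sketch deliberately leaves out.

        import numpy as np

        def em_gmm_1d(x, k, n_iter=100, seed=0):
            """EM for a 1-D Gaussian mixture: the E-step computes class
            posteriors (responsibilities), the M-step re-estimates the
            weights, means and variances."""
            rng = np.random.default_rng(seed)
            w = np.full(k, 1.0 / k)
            mu = rng.choice(x, k, replace=False)
            var = np.full(k, x.var())
            for _ in range(n_iter):
                # E-step: responsibilities r[i, j] = P(class j | x_i)
                pdf = (np.exp(-0.5 * (x[:, None] - mu)**2 / var)
                       / np.sqrt(2 * np.pi * var))
                r = w * pdf
                r /= r.sum(axis=1, keepdims=True)
                # M-step: re-estimate the mixture parameters
                n = r.sum(axis=0)
                w = n / n.sum()
                mu = (r * x[:, None]).sum(axis=0) / n
                var = (r * (x[:, None] - mu)**2).sum(axis=0) / n
            return w, mu, var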

  10. Mission of ITER and Challenges for the Young

    NASA Astrophysics Data System (ADS)

    Ikeda, Kaname

    2009-02-01

    It is recognized that the ongoing effort to provide sufficient energy for the wellbeing of the globe's population and to power the world economy is of the greatest importance. ITER is a joint international research and development project that aims to demonstrate the scientific and technical feasibility of fusion power. It represents the responsible actions of governments whose countries comprise over half the world's population, to create fusion power as a source of clean, economic, carbon dioxide-free energy. This is the most important science initiative of our time. The partners in the Project—the ITER Parties—are the European Union, Japan, the People's Republic of China, India, the Republic of Korea, the Russian Federation and the USA. ITER will be constructed in Europe, at Cadarache in the South of France. The talk will illustrate the genesis of the ITER Organization, the ongoing work at the Cadarache site and the planned schedule for construction. There will also be an explanation of the unique aspects of international collaboration that have been developed for ITER. Although the present focus of the project is construction activities, ITER is also a major scientific and technological research program, for which the best of the world's intellectual resources is needed. Challenges for the young, imperative for the fulfillment of the objective of ITER, will be identified. It is important that young students and researchers worldwide recognize the rapid development of the project, and the fundamental issues that must be overcome in ITER. The talk will also cover the exciting career and fellowship opportunities for young people at the ITER Organization.

  11. Fine-granularity and spatially-adaptive regularization for projection-based image deblurring.

    PubMed

    Li, Xin

    2011-04-01

    This paper studies two classes of regularization strategies to achieve an improved tradeoff between image recovery and noise suppression in projection-based image deblurring. The first is based on the simple fact that running r Landweber iterations imposes a fixed level of regularization, which allows us to achieve fine-granularity control of projection-based iterative deblurring by varying r. The regularization behavior is explained using the theory of the Lagrangian multiplier for variational schemes. The second class of regularization strategy is based on the observation that various regularized filters can be viewed as nonexpansive mappings in a metric space. A deeper understanding of different regularization filters can be gained by probing into their asymptotic behavior--the fixed point of nonexpansive mappings. By making an analogy to the states of matter in statistical physics, we can observe that different image structures (smooth regions, regular edges and textures) correspond to different fixed points of nonexpansive mappings as the temperature (regularization) parameter varies. This analogy motivates us to propose a deterministic-annealing-based approach toward spatial adaptation in projection-based image deblurring. Significant performance improvements over the current state-of-the-art schemes have been observed in our experiments, which substantiates the effectiveness of the proposed regularization strategies.
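
    A minimal sketch of the first strategy: r steps of Landweber iteration, where stopping early acts as regularization (small r smooths heavily, large r fits, and amplifies, more noise). The step size choice and names are illustrative.

        import numpy as np

        def landweber(A, b, r, tau=None):
            """r Landweber iterations: x <- x + tau * A^T (b - A x)."""
            if tau is None:
                tau = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size
            x = np.zeros(A.shape[1])
            for _ in range(r):
                x = x + tau * A.T @ (b - A @ x)
            return x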

  12. Stokes-Doppler coherence imaging for ITER boundary tomography

    NASA Astrophysics Data System (ADS)

    Howard, J.; Kocan, M.; Lisgo, S.; Reichle, R.

    2016-11-01

    An optical coherence imaging system is presently being designed for impurity transport studies and other applications on ITER. The wide variation in magnetic field strength and pitch angle (assumed known) across the field of view generates additional Zeeman-polarization-weighting information that can improve the reliability of tomographic reconstructions. Because background reflected light will be somewhat depolarized, analysis of only the polarized fraction may be enough to provide a level of background suppression. We present the principles behind these ideas and some simulations that demonstrate how the approach might work on ITER. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.

  13. Fourier mode analysis of source iteration in spatially periodic media

    SciTech Connect

    Zika, M.R.; Larsen, E.W.

    1998-12-31

    The standard Fourier mode analysis is an indispensable tool when designing acceleration techniques for transport iterations; however, it requires the assumption of a homogeneous infinite medium. For problems of practical interest, material heterogeneities may significantly impact iterative performance. Recent work has applied a Fourier analysis to the discretized two-dimensional transport operator with heterogeneous material properties. The results of these analyses may be difficult to interpret because the heterogeneity effects are inherently coupled to the discretization effects. Here, the authors describe a Fourier analysis of source iteration (SI) that allows the calculation of the eigenvalue spectrum for the one-dimensional continuous transport operator with spatially periodic heterogeneous media.
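
    For orientation, the classical homogeneous-medium Fourier analysis that this work generalizes gives, in 1-D slab geometry, the source iteration eigenvalue omega(lambda) = c * arctan(lambda) / lambda for Fourier frequency lambda measured in mean free paths, so the spectral radius is the scattering ratio c and flat error modes converge slowest. The snippet below simply evaluates that textbook result; the periodic-media spectrum of the paper is not reproduced.

        import numpy as np

        c = 0.95                                   # scattering ratio
        lam = np.linspace(1e-6, 20.0, 400)         # Fourier frequencies
        omega = c * np.arctan(lam) / lam           # SI iteration eigenvalues
        print(f"spectral radius ~ {omega.max():.3f} (analytic value: c = {c})")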

  14. RF-driven advanced modes of ITER operation

    SciTech Connect

    Garcia, J.; Artaud, J. F.; Basiuk, V.; Decker, J.; Giruzzi, G.; Hawkes, N.; Imbeaux, F.; Litaudon, X.; Mailloux, J.; Peysson, Y.; Schneider, M.; Brix, M.

    2009-11-26

    The impact of the Radio Frequency heating and current drive systems on the ITER advanced scenarios is analyzed by means of the CRONOS suite of codes for integrated tokamak modelling. As a first step, the code is applied to analyze a high power advanced scenario discharge of JET in order to validate both the heating and current drive modules and the overall simulation procedure. Then, ITER advanced scenarios, based on Radio Frequency systems, are studied on the basis of previous results. These simulations show that both hybrid and steady-state scenarios could be possible within the ITER specifications, using RF heating and current drive only.

  15. Noise propagation in iterative reconstruction algorithms with line searches

    SciTech Connect

    Qi, Jinyi

    2003-11-15

    In this paper we analyze the propagation of noise in iterative image reconstruction algorithms. We derive theoretical expressions for the general form of preconditioned gradient algorithms with line searches. The results are applicable to a wide range of iterative reconstruction problems, such as emission tomography, transmission tomography, and image restoration. A unique contribution of this paper compared with our previous work [1] is that the line search is explicitly modeled and we do not use the approximation that the gradient of the objective function is zero. As a result, the error in the estimate of noise at early iterations is significantly reduced.

  16. Iterative cross section sequence graph for handwritten character segmentation.

    PubMed

    Dawoud, Amer

    2007-08-01

    The iterative cross section sequence graph (ICSSG) is an algorithm for handwritten character segmentation. It expands the cross section sequence graph concept by applying it iteratively at equally spaced thresholds. The iterative thresholding reduces the effect of information loss associated with image binarization. ICSSG preserves the characters' skeletal structure by preventing the interference of pixels that causes flooding of adjacent characters' segments. Improving the structural quality of the characters' skeleton facilitates better feature extraction and classification, which improves the overall performance of optical character recognition (OCR). Experimental results showed significant improvements in OCR recognition rates compared to other well-established segmentation algorithms.

  17. Perturbation-iteration theory for analyzing microwave striplines

    NASA Technical Reports Server (NTRS)

    Kretch, B. E.

    1985-01-01

    A perturbation-iteration technique is presented for determining the propagation constant and characteristic impedance of an unshielded microstrip transmission line. The method converges to the correct solution with a few iterations at each frequency and is equivalent to a full wave analysis. The perturbation-iteration method gives a direct solution for the propagation constant without having to find the roots of a transcendental dispersion equation. The theory is presented in detail along with numerical results for the effective dielectric constant and characteristic impedance for a wide range of substrate dielectric constants, stripline dimensions, and frequencies.

  18. RF-driven advanced modes of ITER operation

    NASA Astrophysics Data System (ADS)

    Garcia, J.; Artaud, J. F.; Basiuk, V.; Brix, M.; Decker, J.; Giruzzi, G.; Hawkes, N.; Imbeaux, F.; Litaudon, X.; Mailloux, J.; Peysson, Y.; Schneider, M.

    2009-11-01

    The impact of the Radio Frequency heating and current drive systems on the ITER advanced scenarios is analyzed by means of the CRONOS suite of codes for integrated tokamak modelling. As a first step, the code is applied to analyze a high power advanced scenario discharge of JET in order to validate both the heating and current drive modules and the overall simulation procedure. Then, ITER advanced scenarios, based on Radio Frequency systems, are studied on the basis of previous results. These simulations show that both hybrid and steady-state scenarios could be possible within the ITER specifications, using RF heating and current drive only.

  19. Integrated Modelling of ITER Hybrid Scenarios with ECCD

    NASA Astrophysics Data System (ADS)

    Giruzzi, G.; Artaud, J. F.; Basiuk, V.; Garcia, J.; Imbeaux, F.; Schneider, M.

    2009-04-01

    ITER hybrid scenarios may require off-axis current drive in order to keep the safety factor above 1. In this type of applications, alignment of the current sources and self-consistency of current and temperature profiles are critical issues, which can only be addressed by integrated modelling. To this end, the CRONOS suite of codes has been applied to the simulation of these scenarios. Results of simulations of ITER hybrid scenarios assisted by ECCD, using the ITER equatorial launcher, for both co- and counter-ECCD, are presented.

  20. Iterative schemes for nonsymmetric and indefinite elliptic boundary value problems

    SciTech Connect

    Bramble, J.H.; Leyk, Z.; Pasciak, J.E.

    1993-01-01

    The purpose of this paper is twofold. The first is to describe some simple and robust iterative schemes for nonsymmetric and indefinite elliptic boundary value problems. The schemes are based in the Sobolev space H¹(Ω) and require minimal hypotheses. The second is to develop algorithms utilizing a coarse-grid approximation. This leads to iteration matrices whose eigenvalues lie in the right half of the complex plane. In fact, for symmetric indefinite problems, the iteration is reduced to a well-conditioned symmetric positive definite system which can be solved by conjugate gradient iteration. Applications of the general theory as well as numerical examples are given. 20 refs., 8 tabs.

  1. Iterative method for elliptic problems on regions partitioned into substructures

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Schatz, A.H.

    1986-04-01

    Some new preconditioners for discretizations of elliptic boundary problems are studied. With these preconditioners, the domain under consideration is broken into subdomains and preconditioners are defined which only require the solution of matrix problems on the subdomains. Analytic estimates are given which guarantee that under appropriate hypotheses, the preconditioned iterative procedure converges to the solution of the discrete equations with a rate per iteration that is independent of the number of unknowns. Numerical examples are presented which illustrate the theoretically predicted iterative convergence rates.

  2. SUMMARY REPORT-FY2006 ITER WORK ACCOMPLISHED

    SciTech Connect

    Martovetsky, N N

    2006-04-11

    Six parties (EU, Japan, Russia, US, Korea, China) will build ITER. The US proposed to deliver at least 4 out of 7 modules of the Central Solenoid. Phillip Michael (MIT) and I were tasked by DoE to assist ITER in development of the ITER CS and other magnet systems. We work to help Magnets and Structure division headed by Neil Mitchell. During this visit I worked on the selected items of the CS design and carried out other small tasks, like PF temperature margin assessment.

  3. Conference on iterative methods for large linear systems

    SciTech Connect

    Kincaid, D.R.

    1988-12-01

    This conference is dedicated to providing an overview of the state of the art in the use of iterative methods for solving sparse linear systems, with an eye to contributions of the past, present and future. The emphasis is on identifying current and future research directions in the mainstream of modern scientific computing. Recently, the use of iterative methods for solving linear systems has experienced a resurgence of activity as scientists attack extremely complicated three-dimensional problems using vector and parallel supercomputers. Many research advances in the development of iterative methods for high-speed computers over the past forty years are reviewed, with a focus on current research.

  4. Impact of irradiation effects on design solutions for ITER diagnostics

    NASA Astrophysics Data System (ADS)

    Costley, A.; deKock, L.; Walker, C.; Janeschitz, G.; Yamamoto, S.; Shikama, T.; Belyakov, V.; Farnum, E.; Hodgson, E.; Nishitani, T.; Orlinski, D.; Zinkle, S.; Kasai, S.; Stott, P.; Young, K.; Zaveriaev, V.

    2000-12-01

    An overview of the results of the irradiation tests on diagnostic components under the ITER technology R&D tasks and the solutions for the present diagnostic design are given in the light of these results. A comprehensive irradiation database of diagnostic components has been accumulated and permits conclusions to be drawn on the application of these components in ITER. Under the ITER technology R&D tasks, not only has work been shared among four home teams, but also several bilateral collaborations and round-robin experiments have been performed to enhance the R&D activities.

  5. Statistical Symbolic Execution with Informed Sampling

    NASA Technical Reports Server (NTRS)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

    Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose informed sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis, and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
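
    The Bayesian estimation step described above can be sketched with a conjugate Beta prior over the probability of reaching the target event; the informed-sampling pruning and the exact analysis of pruned paths are not modeled here, and the stand-in sampler is purely illustrative.

        import numpy as np
        from scipy import stats

        # Bayesian estimation of the probability of hitting a target event
        # from Monte Carlo samples of program paths, with a Beta(1,1) prior.
        rng = np.random.default_rng(1)
        hits, runs = 0, 2000
        true_p = 0.03                       # unknown probability of the event
        for _ in range(runs):
            hits += rng.random() < true_p   # stand-in for sampling one path
        posterior = stats.beta(1 + hits, 1 + runs - hits)
        lo, hi = posterior.interval(0.95)
        print(f"P(target) in [{lo:.4f}, {hi:.4f}] with 0.95 credibility")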

  6. Neuroendocrine Tumor: Statistics

    MedlinePlus

  7. Adrenal Gland Tumors: Statistics

    MedlinePlus

  8. The Role of Bridging Organizations in Enhancing Ecosystem Services and Facilitating Adaptive Management of Social-Ecological Systems

    EPA Science Inventory

    Adaptive management is an approach for monitoring the response of ecological systems to different policies and practices and attempts to reduce the inherent uncertainty in ecological systems via system monitoring and iterative decision making and experimentation (Holling 1978). …

  9. Acoustic tomography of the atmosphere using iterated unscented Kalman filter

    NASA Astrophysics Data System (ADS)

    Kolouri, Soheil

    Tomography approaches are of great interest because of their non-intrusive nature and their ability to generate a significantly larger amount of data in comparison to the in-situ measurement method. Acoustic tomography is an approach which reconstructs the unknown parameters that affect the propagation of acoustic rays in a field of interest by studying the temporal characteristics of the propagation. Acoustic tomography has been used in several different disciplines such as biomedical imaging, oceanographic studies and atmospheric studies. The focus of this thesis is to study acoustic tomography of the atmosphere in order to reconstruct the temperature and wind velocity fields in the atmospheric surface layer using the travel times collected from several pairs of transmitter and receiver sensors distributed in the field. Our work consists of three main parts. The first part of this thesis is dedicated to reviewing the existing methods for acoustic tomography of the atmosphere, namely statistical inversion (SI), time dependent statistical inversion (TDSI), the simultaneous iterative reconstruction technique (SIRT), and the sparse recovery framework. The properties of these methods are explained extensively and their shortcomings are also mentioned. In the second part of this thesis, a new acoustic tomography method based on the unscented Kalman filter (UKF) is introduced in order to address some of the shortcomings of the existing methods. Using the UKF, the problem is cast as a state estimation problem in which the temperature and wind velocity fields are the desired states to be reconstructed. The field is discretized into several grids in which the temperature and wind velocity fields are assumed to be constant. Different models, namely a random walk, a first-order 3-D autoregressive (AR) model, and a 1-D temporal AR model, are used to capture the state evolution in time and space. Given the time of arrival (TOA) equation for acoustic propagation as the observation equation, the…

  10. Speed adaptation as Kalman filtering.

    PubMed

    Barraza, Jose F; Grzywacz, Norberto M

    2008-10-01

    If the purpose of adaptation is to fit sensory systems to different environments, it may implement an optimization of the system. What the optimum is depends on the statistics of these environments. Therefore, the system should update its parameters as the environment changes. A Kalman-filtering strategy performs such an update optimally by combining current estimations of the environment with those from the past. We investigate whether the visual system uses such a strategy for speed adaptation. We performed a matching-speed experiment to evaluate the time course of adaptation to an abrupt velocity change. Experimental results are in agreement with Kalman-modeling predictions for speed adaptation. When subjects adapt to a low speed and it suddenly increases, the time course of adaptation presents two phases, namely, a rapid decrease of perceived speed followed by a slower phase. In contrast, when speed changes from fast to slow, adaptation presents a single phase. In the Kalman-model simulations, this asymmetry is due to the prevalence of low speeds in natural images. However, this asymmetry disappears both experimentally and in simulations when the adapting stimulus is noisy. In both transitions, adaptation now occurs in a single phase. Finally, the model also predicts the change in sensitivity to speed discrimination produced by the adaptation.
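
    A scalar Kalman filter makes the proposed strategy concrete: the gain k balances the previous estimate against new evidence, with q and r encoding the assumed environment drift and sensory noise. The parameter values and the mapping to perceived speed are illustrative assumptions, not the authors' model.

        import numpy as np

        def kalman_speed(observations, q=0.01, r=0.5, s0=1.0, p0=1.0):
            """Scalar Kalman filter tracking an environmental speed estimate."""
            s, p = s0, p0
            out = []
            for z in observations:
                p = p + q                 # predict: uncertainty grows
                k = p / (p + r)           # Kalman gain
                s = s + k * (z - s)       # correct: blend past and present
                p = (1 - k) * p
                out.append(s)
            return np.array(out)

        # abrupt change from 1 to 3 deg/s with noisy sensory measurements
        rng = np.random.default_rng(0)
        z = np.r_[1 + 0.3 * rng.standard_normal(50),
                  3 + 0.3 * rng.standard_normal(50)]
        est = kalman_speed(z)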

  11. ITER neutral beam system US conceptual design

    SciTech Connect

    Purgalis, P.

    1990-09-01

    In this document we present the US conceptual design of a neutral beam system for the International Thermonuclear Experimental Reactor (ITER). The design incorporates a barium surface conversion D{sup {minus}} source feeding a linear array of accelerator channels. The system uses a dc accelerator with electrostatic quadrupoles for strong focusing. A high voltage power supply that is integrated with the accelerator is presented as an attractive option. A gas neutralizer is used and residual ions exiting the neutralizer are deflected to water-cooled dumps. Cryopanels are located at the accelerator exit to pump excess gas from the source and the neutralizer, and in the ion dump cavity to pump re-neutralized ions and neutralizer gas. All the above components are packaged in compact identical, independent modules which can be removed for remote maintenance. The neutral beam system delivers 75 MW of D{sup 0} at 1.3 MeV, into three ports with a total of 9 modules arranged in stacks of three modules per port. To increase reliability each module is designed to deliver up to 10 MW; this allows eight modules operating at partial capacity to deliver the required power in the event one module is out of service, and provides 20% excess capacity to improve availability. Radiation protection is provided by shielding and by locating critical components in the source and accelerator 46.5 m from the torus centerline. Neutron shielding in the drift duct and neutralizer provides the added feature of limiting conductance and thus reducing gas flow to and from the torus.

  12. Electrostatic Dust Detection and Removal for ITER

    SciTech Connect

    C.H. Skinner; A. Campos; H. Kugel; J. Leisure; A.L. Roquemore; S. Wagner

    2008-09-01

    We present some recent results on two innovative applications of microelectronics technology to dust inventory measurement and dust removal in ITER. A novel device to detect the settling of dust particles on a remote surface has been developed in the laboratory. A circuit board with a grid of two interlocking conductive traces with 25 μm spacing is biased to 30 – 50 V. Carbon particles landing on the energized grid create a transient short circuit. The current flowing through the short circuit creates a voltage pulse that is recorded by standard nuclear counting electronics and the total number of counts is related to the mass of dust impinging on the grid. The particles typically vaporize in a few seconds restoring the previous voltage standoff. Experience on NSTX however, showed that in a tokamak environment it was still possible for large particles or fibers to remain on the grid causing a long term short circuit. We report on the development of a gas puff system that uses helium to clear such particles. Experiments with varying nozzle designs, backing pressures, puff durations, and exit flow orientations have given an optimal configuration that effectively removes particles from an area up to 25 cm² with a single nozzle. In a separate experiment we are developing an advanced circuit grid of three interlocking traces that can generate a miniature electrostatic traveling wave for transporting dust to a suitable exit port. We have fabricated such a 3-pole circuit board with 25 micron insulated traces that operates with voltages up to 200 V. Recent results showed motion of dust particles with the application of only 50 V bias voltage. Such a device could potentially remove dust continuously without dedicated interventions and without loss of machine availability for plasma operations.

  13. PROBABILITY AND STATISTICS.

    DTIC Science & Technology

    (*STATISTICAL ANALYSIS, REPORTS), (*PROBABILITY, REPORTS), INFORMATION THEORY, DIFFERENTIAL EQUATIONS, STATISTICAL PROCESSES, STOCHASTIC PROCESSES, MULTIVARIATE ANALYSIS, DISTRIBUTION THEORY, DECISION THEORY, MEASURE THEORY, OPTIMIZATION

  14. An adaptive locally linear embedding manifold learning approach for hyperspectral target detection

    NASA Astrophysics Data System (ADS)

    Ziemann, Amanda K.; Messinger, David W.

    2015-05-01

    Algorithms for spectral analysis commonly use parametric or linear models of the data. Research has shown, however, that hyperspectral data -- particularly in materially cluttered scenes -- are not always well-modeled by statistical or linear methods. Here, we propose an approach to hyperspectral target detection that is based on a graph theory model of the data and a manifold learning transformation. An adaptive nearest neighbor (ANN) graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation. The artificial target manifold helps to guide the separation of the target data from the background data in the new, transformed manifold coordinates. Then, target detection is performed in the manifold space using Spectral Angle Mapper. This methodology is an improvement over previous iterations of this approach due to the incorporation of ANN, the artificial target manifold, and the choice of detector in the transformed space. We implement our approach in a spatially local way: the image is delineated into square tiles, and the detection maps are normalized across the entire image. Target detection results will be shown using laboratory-measured and scene-derived target data from the SHARE 2012 collect.
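
    To show the LLE building block that the adaptive version generalizes, here is a standard computation of LLE reconstruction weights with a fixed neighborhood size k; the paper's ANN graph effectively makes k adaptive per point, and the artificial target manifold is not modeled here.

        import numpy as np

        def lle_weights(X, k=5, reg=1e-3):
            """Locally linear embedding reconstruction weights.

            For each point, solve for the weights over its k nearest
            neighbors that best reconstruct it, constrained to sum to one.
            """
            n = X.shape[0]
            W = np.zeros((n, n))
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            for i in range(n):
                nbrs = np.argsort(d2[i])[1:k + 1]       # skip the point itself
                Z = X[nbrs] - X[i]                      # center the neighbors
                C = Z @ Z.T                             # local Gram matrix
                C += np.eye(k) * reg * np.trace(C)      # regularize for stability
                w = np.linalg.solve(C, np.ones(k))
                W[i, nbrs] = w / w.sum()                # enforce sum-to-one
            return W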

  15. Quantum Estimation, meet Computational Statistics; Computational Statistics, meet Quantum Estimation

    NASA Astrophysics Data System (ADS)

    Ferrie, Chris; Granade, Chris; Combes, Joshua

    2013-03-01

    Quantum estimation, that is, post-processing data to obtain classical descriptions of quantum states and processes, is an intractable problem--scaling exponentially with the number of interacting systems. Thankfully there is an entire field, Computational Statistics, devoted to designing algorithms to estimate probabilities for seemingly intractable problems. So, why not look to the most advanced machine learning algorithms for quantum estimation tasks? We did. I'll describe how we adapted and combined machine learning methodologies to obtain an online learning algorithm designed to estimate quantum states and processes.
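
    The abstract does not name the authors' algorithm; one standard computational-statistics tool for online estimation of this kind is sequential Monte Carlo (a particle filter). The sketch below shows only that generic Bayesian update, for the toy problem of learning a single precession frequency omega from binary measurements with likelihood P(1 | omega, t) = sin^2(omega*t/2); all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_omega = 0.7
particles = rng.uniform(0.0, 2.0, size=2000)   # samples from the prior
weights = np.full_like(particles, 1.0 / particles.size)

for t in np.linspace(0.5, 10.0, 40):           # experiment times
    # Simulate one binary measurement outcome.
    p1 = np.sin(true_omega * t / 2.0) ** 2
    outcome = rng.random() < p1
    # Bayes update: reweight each particle by its likelihood.
    like = np.sin(particles * t / 2.0) ** 2
    weights *= like if outcome else (1.0 - like)
    weights /= weights.sum()
    # Resample (with small jitter) when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        particles = particles[idx] + 1e-3 * rng.standard_normal(particles.size)
        weights = np.full_like(particles, 1.0 / particles.size)

print("posterior mean omega:", np.sum(weights * particles))
```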

  16. ITER-like current ramps in JET with ILW: experiments, modelling and consequences for ITER

    NASA Astrophysics Data System (ADS)

    Hogeweij, G. M. D.; Calabrò, G.; Sips, A. C. C.; Maggi, C. F.; De Tommasi, G. M.; Joffrin, E.; Loarte, A.; Maviglia, F.; Mlynar, J.; Rimini, F. G.; Pütterich, Th.; EFDA Contributors, JET

    2015-01-01

    Since the ITER-like wall in JET (JET-ILW) came into operation, dedicated ITER-like plasma current (Ip) ramp-up (RU) and ramp-down (RD) experiments have been performed and matched to similar discharges with the carbon wall (JET-C). The experiments show that accessing H-mode early in the Ip RU phase and maintaining H-mode as long as possible in the Ip RD are instrumental in achieving low internal plasma inductance (li) and minimizing flux consumption. In JET-ILW, at a given current rise rate, variations in li (0.7-0.9) similar to those in JET-C are obtained. In most discharges no strong W accumulation is observed; however, in some low-density cases during the early phase of the Ip RU (n_e/n_e^GW ≈ 0.2), strong core radiation due to W influx led to hollow electron temperature (Te) profiles. In JET-ILW, Zeff is significantly lower than in JET-C. W significantly disturbs the discharge evolution when the W concentration approaches 10^-4; this threshold is confirmed by predictive transport modelling using the CRONOS code. Ip RD experiments in JET-ILW confirm the JET-C result that sustained H-mode and elongation reduction are both instrumental in controlling li.

  17. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described
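
    Of the techniques listed, parallel tempering is compact enough to sketch. The toy example below (not from the paper) samples a 1-D double-well density p(x) ∝ exp(-E(x)/T) with E(x) = (x^2 - 1)^2: one Metropolis walker runs at each temperature, and adjacent walkers occasionally exchange states via the Metropolis swap criterion, so the hot chains ferry the cold chain across the barrier between the two modes.

```python
import numpy as np

rng = np.random.default_rng(0)
temps = np.array([1.0, 2.0, 4.0, 8.0])        # temperature ladder
x = rng.standard_normal(temps.size)           # one walker per temperature

def energy(x):
    return (x ** 2 - 1.0) ** 2                # double-well potential

cold = []
for step in range(20000):
    # Local Metropolis move for every walker at its own temperature.
    prop = x + 0.5 * rng.standard_normal(temps.size)
    log_acc = np.minimum(0.0, -(energy(prop) - energy(x)) / temps)
    x = np.where(rng.random(temps.size) < np.exp(log_acc), prop, x)
    # Periodically attempt a swap between two adjacent temperatures.
    if step % 10 == 0:
        i = rng.integers(temps.size - 1)
        log_a = (1.0 / temps[i] - 1.0 / temps[i + 1]) * \
                (energy(x[i]) - energy(x[i + 1]))
        if rng.random() < np.exp(min(0.0, log_a)):
            x[i], x[i + 1] = x[i + 1], x[i]
    cold.append(x[0])                          # record the coldest chain

# Both wells visited roughly equally => the cold chain mixes ergodically,
# whereas a lone low-temperature walker would stay trapped in one well.
print("fraction of cold-chain samples with x > 0:", np.mean(np.array(cold) > 0))
```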

  18. Parallel adaptive mesh refinement for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1996-12-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.
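
    For reference, a minimal sketch of the iterative solver family named above follows. A Jacobi (diagonal) preconditioner stands in for the paper's multigrid preconditioner, which involves far more machinery than fits here, and the 1-D Poisson matrix is just an example problem; nothing below is the authors' code.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)                  # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example: 1-D Poisson problem with a Jacobi preconditioner.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))
print("residual norm:", np.linalg.norm(b - A @ x))
```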

  19. Toothbrush Adaptations.

    ERIC Educational Resources Information Center

    Exceptional Parent, 1987

    1987-01-01

    Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)

  20. Electron density measurements in the ITER fusion plasma

    NASA Astrophysics Data System (ADS)

    Watts, Christopher; Udintsev, Victor; Andrew, Philip; Vayakis, George; Van Zeeland, Michael; Brower, David; Feder, Russell; Mukhin, Eugene; Tolstyakov, Sergey

    2013-08-01

    The operation of ITER requires high-quality estimates of the plasma electron density over multiple regions in the plasma for plasma evaluation, plasma control and machine protection purposes. Although the density regimes of ITER are not very different from those of existing tokamaks (10^18-10^21 m^-3), the severe conditions of the fusion plasma environment present particular challenges to implementing these density diagnostics. In this paper we present an overview of the array of ITER electron density diagnostics designed to measure over the entire ITER domain: plasma core, pedestal, edge, scrape-off layer and divertor. We focus on the challenges faced in making these measurements and on the technical solutions adopted in the current designs.