Science.gov

Sample records for adaptive statistical iterative

  1. Statistical iterative reconstruction using adaptive fractional order regularization

    PubMed Central

    Zhang, Yi; Wang, Yan; Zhang, Weihua; Lin, Feng; Pu, Yifei; Zhou, Jiliu

    2016-01-01

    In order to reduce the radiation dose of X-ray computed tomography (CT), low-dose CT has drawn much attention in both clinical and industrial fields. A fractional order model based on a statistical iterative reconstruction framework was proposed in this study. To further enhance the performance of the proposed model, an adaptive order selection strategy, determining the fractional order pixel-by-pixel, was given. Experiments, including numerical and clinical cases, illustrated better results than several existing methods, especially in structure and texture preservation. PMID:27231604
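
    The abstract does not give the reconstruction model itself, so the sketch below only illustrates the elementary building block of fractional order regularization: a truncated Grünwald-Letnikov fractional difference on a 1D profile, in Python/NumPy. The truncation length is arbitrary and the paper's pixel-by-pixel adaptive order selection is not reproduced.

    ```python
    import numpy as np

    def gl_coefficients(alpha, n_terms):
        """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), built with
        the recursion w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
        w = np.empty(n_terms)
        w[0] = 1.0
        for k in range(1, n_terms):
            w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
        return w

    def fractional_difference(f, alpha, n_terms=16):
        """Truncated GL estimate of the alpha-order derivative (unit spacing):
        D^alpha f[i] ~ sum_k w_k * f[i - k], zeros assumed outside the signal."""
        w = gl_coefficients(alpha, n_terms)
        return np.convolve(f, w, mode="full")[: len(f)]

    row = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
    # alpha = 1 recovers the ordinary backward difference f[i] - f[i-1]
    print(fractional_difference(row, alpha=1.0, n_terms=2))
    print(np.round(fractional_difference(row, alpha=0.8), 3))
    ```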

  2. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification

    PubMed Central

    Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

    Background Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). Purpose To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). Material and Methods This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered-back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Results Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy·cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67–0.89) compared to L-ASIR or UL-ASIR (0.11–0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818–0.860) was comparable to that for L-ASIR (0.696–0.844). The specificity was lower with UL-MBIR (0.79–0.92) than with L-ASIR or UL-ASIR (0.96–0.99), and a significant difference was seen for one reader (P < 0.01). Conclusion With UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity. PMID:27110389

  3. Ultralow dose computed tomography attenuation correction for pediatric PET CT using adaptive statistical iterative reconstruction

    SciTech Connect

    Brady, Samuel L.; Shulkin, Barry L.

    2015-02-15

    Purpose: To develop ultralow dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultralow doses (10–35 mA s). CT quantitation: noise, low-contrast resolution, and CT numbers for 11 tissue substitutes were analyzed in-phantom. CT quantitation was analyzed to a reduction of 90% volume computed tomography dose index (0.39/3.64; mGy) from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative dose reduction and noise control. Results: CT numbers were constant to within 10% from the non-dose-reduced CTAC image for 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution for PET images reconstructed with CTAC protocols was found down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3–0.9/6.2). Noise magnitude in dose-reduced patient images increased but was not statistically different from pre-dose-reduced patient images. Conclusions: Using ASiR allowed for aggressive reduction in CT dose with no change in PET reconstructed images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.

  4. Potential benefit of the CT adaptive statistical iterative reconstruction method for pediatric cardiac diagnosis

    NASA Astrophysics Data System (ADS)

    Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2010-04-01

    Adaptive Statistical Iterative Reconstruction (ASIR) is a new image reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, improves image noise reduction. To quantify the image quality benefits and the dose reduction provided by the ASIR method with respect to pure FBP, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom while varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a cardiac pediatric exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced when the ASIR percentage increases up to 100%, with a higher benefit for low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS curve shape were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit to the ASIR benefit: 40% ASIR was observed to be the best trade-off between noise reduction and clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.

  5. Impact of adaptive statistical iterative reconstruction on radiation dose in evaluation of trauma patients

    PubMed Central

    Maxfield, Mark W.; Schuster, Kevin M.; McGillicuddy, Edward A.; Young, Calvin J.; Ghita, Monica; Bokhari, S.A. Jamal; Oliva, Isabel B.; Brink, James A.; Davis, Kimberly A.

    2013-01-01

    BACKGROUND A recent study showed that computed tomographic (CT) scans contributed 93% of radiation exposure of 177 patients admitted to our Level I trauma center. Adaptive statistical iterative reconstruction (ASIR) is an algorithm that reduces the noise level in reconstructed images and therefore allows the use of less ionizing radiation during CT scans without significantly affecting image quality. ASIR was instituted on all CT scans performed on trauma patients in June 2009. Our objective was to determine if implementation of ASIR reduced radiation dose without compromising patient outcomes. METHODS We identified 300 patients activating the trauma system before and after the implementation of ASIR imaging. After applying inclusion criteria, 245 charts were reviewed. Baseline demographics, presenting characteristics, number of delayed diagnoses, and missed injuries were recorded. The postexamination volume CT dose index (CTDIvol) and dose-length product (DLP) reported by the scanner for CT scans of the chest, abdomen, and pelvis and CT scans of the brain and cervical spine were recorded. Subjective image quality was compared between the two groups. RESULTS For CT scans of the chest, abdomen, and pelvis, the mean CTDIvol (17.1 mGy vs. 14.2 mGy; p < 0.001) and DLP (1,165 mGy·cm vs. 1,004 mGy·cm; p < 0.001) were lower for studies performed with ASIR. For CT scans of the brain and cervical spine, the mean CTDIvol (61.7 mGy vs. 49.6 mGy; p < 0.001) and DLP (1,327 mGy·cm vs. 1,067 mGy·cm; p < 0.001) were lower for studies performed with ASIR. There was no subjective difference in image quality between ASIR and non-ASIR scans. All CT scans were deemed of good or excellent image quality. There were no delayed diagnoses or missed injuries related to CT scanning identified in either group. CONCLUSION Implementation of ASIR imaging for CT scans performed on trauma patients led to a nearly 20% reduction in ionizing radiation without compromising outcomes or image quality.

  6. Adaptive iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Bruder, H.; Raupach, R.; Sunnegardh, J.; Sedlmair, M.; Stierstorfer, K.; Flohr, T.

    2011-03-01

    It is well known that, in CT reconstruction, Maximum A Posteriori (MAP) reconstruction based on a Poisson noise model can be well approximated by Penalized Weighted Least Square (PWLS) minimization based on a data-dependent Gaussian noise model. We study minimization of the PWLS objective function using the Gradient Descent (GD) method, and show that if an exact inverse of the forward projector exists, the PWLS GD update equation can be translated into an update equation which operates entirely in the image domain. In the case of non-linear regularization and an arbitrary noise model, this means that a non-linear image filter must exist which solves the optimization problem. In the general case of non-linear regularization and an arbitrary noise model, the analytical computation is not trivial and might lead to image filters which are computationally very expensive. We introduce a new iteration scheme in image space, based on a regularization filter with an anisotropic noise model. Basically, this approximates the statistical data weighting and regularization in PWLS reconstruction. If needed, e.g. to compensate for the non-exactness of the backprojector, the image-based regularization loop can be preceded by a raw-data-based loop without regularization and statistical data weighting. We call this combined iterative reconstruction scheme Adaptive Iterative Reconstruction (AIR). It will be shown that in terms of low-contrast visibility, sharpness-to-noise and contrast-to-noise ratio, PWLS and AIR reconstruction are similar to a high degree of accuracy. In clinical images the noise texture of AIR is also superior to the more artificial texture of PWLS.
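
    To make the PWLS formulation concrete, here is a minimal gradient-descent sketch in Python/NumPy. A dense random matrix stands in for the CT forward projector, a quadratic first-difference penalty replaces the non-linear regularization discussed above, and the image-domain filter reformulation that defines AIR is not reproduced; all sizes and parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def pwls_gd(A, y, w, beta, n_iters=200, step=1e-5):
        """Gradient descent on the PWLS objective
        0.5*(Ax - y)^T W (Ax - y) + 0.5*beta*||Dx||^2,
        with W = diag(w) modelling data-dependent Gaussian noise and
        D a first-difference matrix as a simple quadratic regulariser."""
        n = A.shape[1]
        D = np.eye(n) - np.eye(n, k=1)              # first differences
        x = np.zeros(n)
        for _ in range(n_iters):
            grad = A.T @ (w * (A @ x - y)) + beta * (D.T @ (D @ x))
            x -= step * grad
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(60, 20))                   # stand-in forward projector
    x_true = np.linspace(0.0, 1.0, 20)
    y = A @ x_true + rng.normal(scale=0.05, size=60)
    w = np.full(60, 1.0 / 0.05**2)                  # inverse-variance weights
    print(np.round(pwls_gd(A, y, w, beta=5.0)[:5], 3))
    ```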

  7. Characterization of adaptive statistical iterative reconstruction algorithm for dose reduction in CT: A pediatric oncology perspective

    SciTech Connect

    Brady, S. L.; Yee, B. S.; Kaufman, R. A.

    2012-09-15

    Purpose: This study demonstrates a means of implementing an adaptive statistical iterative reconstruction (ASiR™) technique for dose reduction in computed tomography (CT) while maintaining similar noise levels in the reconstructed image. The effects of image quality and noise texture were assessed at all implementation levels of ASiR™. Empirically derived dose reduction limits were established for ASiR™ for imaging of the trunk for a pediatric oncology population ranging from 1 yr old through adolescence/adulthood. Methods: Image quality was assessed using metrics established by the American College of Radiology (ACR) CT accreditation program. Each image quality metric was tested using the ACR CT phantom with 0%-100% ASiR™ blended with filtered back projection (FBP) reconstructed images. Additionally, the noise power spectrum (NPS) was calculated for three common reconstruction filters of the trunk. The empirically derived limitations on ASiR™ implementation for dose reduction were assessed using (1, 5, 10) yr old and adolescent/adult anthropomorphic phantoms. To assess dose reduction limits, the phantoms were scanned in increments of increased noise index (decrementing mA using automatic tube current modulation) balanced with ASiR™ reconstruction to maintain noise equivalence of the 0% ASiR™ image. Results: The ASiR™ algorithm did not produce any unfavorable effects on image quality as assessed by ACR criteria. Conversely, low-contrast resolution was found to improve due to the reduction of noise in the reconstructed images. NPS calculations demonstrated that images with lower frequency noise had lower noise variance and coarser graininess at progressively higher percentages of ASiR™ reconstruction; and in spite of the similar magnitudes of noise, the image reconstructed with 50% or more ASiR™ presented a more

  8. A STUDY OF THE IMAGE QUALITY OF COMPUTED TOMOGRAPHY ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTED BRAIN IMAGES USING SUBJECTIVE AND OBJECTIVE METHODS.

    PubMed

    Mangat, J; Morgan, J; Benson, E; Båth, M; Lewis, M; Reilly, A

    2016-06-01

    The recent reintroduction of iterative reconstruction in computed tomography has facilitated the realisation of major dose savings. The aim of this article was to investigate the possibility of achieving further savings at a site with well-established Adaptive Statistical iterative Reconstruction (ASiR™) (GE Healthcare) brain protocols. An adult patient study was conducted with observers making visual grading assessments using image quality criteria, which were compared with the frequency domain metrics, noise power spectrum and modulation transfer function. Subjective image quality equivalency was found in the 40-70% ASiR™ range, leading to the proposal of ranges for the objective metrics defining acceptable image quality. Based on the findings of both the patient-based and objective studies of the ASiR™/tube-current combinations tested, 60%/305 mA was found to fall within all but one of these ranges. Therefore, it is recommended that an ASiR™ level of 60%, with a noise index of 12.20, is a viable alternative to the currently used protocol featuring a 40% ASiR™ level and a noise index of 11.20, potentially representing a 16% dose saving. PMID:27103646

  9. COMPARISON OF ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTION (ASIR™) AND MODEL-BASED ITERATIVE RECONSTRUCTION (VEO™) FOR PAEDIATRIC ABDOMINAL CT EXAMINATIONS: AN OBSERVER PERFORMANCE STUDY OF DIAGNOSTIC IMAGE QUALITY.

    PubMed

    Hultenmo, Maria; Caisander, Håkan; Mack, Karsten; Thilander-Klang, Anne

    2016-06-01

    The diagnostic image quality of 75 paediatric abdominal computed tomography (CT) examinations reconstructed with two different iterative reconstruction (IR) algorithms, adaptive statistical IR (ASiR™) and model-based IR (Veo™), was compared. Axial and coronal images were reconstructed with 70 % ASiR with the Soft™ convolution kernel and with the Veo algorithm. The thickness of the reconstructed images was 2.5 or 5 mm depending on the scanning protocol used. Four radiologists graded the delineation of six abdominal structures and the diagnostic usefulness of the image quality. The Veo reconstruction significantly improved the visibility of most of the structures compared with ASiR in all subgroups of images. For coronal images, the Veo reconstruction resulted in significantly improved ratings of the diagnostic use of the image quality compared with the ASiR reconstruction. This was not seen for the axial images. The greatest improvement using Veo reconstruction was observed for the 2.5 mm coronal slices. PMID:26873711

  10. Image quality of CT angiography with model-based iterative reconstruction in young children with congenital heart disease: comparison with filtered back projection and adaptive statistical iterative reconstruction.

    PubMed

    Son, Sung Sil; Choo, Ki Seok; Jeon, Ung Bae; Jeon, Gye Rok; Nam, Kyung Jin; Kim, Tae Un; Yeom, Jeong A; Hwang, Jae Yeon; Jeong, Dong Wook; Lim, Soo Jin

    2015-06-01

    To retrospectively evaluate the image quality of CT angiography (CTA) reconstructed by model-based iterative reconstruction (MBIR) and to compare this with images obtained by filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR) in newborns and infants with congenital heart disease (CHD). Thirty-seven children (age 4.8 ± 3.7 months; weight 4.79 ± 0.47 kg) with suspected CHD underwent CTA on a 64-detector MDCT without ECG gating (80 kVp, 40 mA using tube current modulation). Total dose length product was recorded in all patients. Images were reconstructed using FBP, ASIR, and MBIR. Objective image qualities (density, noise) were measured in the great vessels and heart chambers. The contrast-to-noise ratio (CNR) was calculated by measuring the density and noise of myocardial walls. Two radiologists evaluated images for subjective noise, diagnostic confidence, and sharpness at the level prior to the first branch of the main pulmonary artery. Images were compared with respect to reconstruction method, and reconstruction times were measured. Images from all patients were diagnostic, and the effective dose was 0.22 mSv. The objective image noise of MBIR was significantly lower than those of FBP and ASIR in the great vessels and heart chambers (P < 0.05); however, with respect to attenuations in the four chambers, ascending aorta, descending aorta, and pulmonary trunk, no statistically significant difference was observed among the three methods (P > 0.05). Mean CNR values were 8.73 for FBP, 14.54 for ASIR, and 22.95 for MBIR. In addition, the subjective image noise of MBIR was significantly lower than those of the others (P < 0.01). Furthermore, while FBP had the highest score for image sharpness, ASIR had the highest score for diagnostic confidence (P < 0.05), and mean reconstruction times were 5.1 ± 2.3 s for FBP and ASIR and 15.1 ± 2.4 min for MBIR. While CTA with MBIR in newborns and infants with CHD can reduce image noise and
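
    The CNR computation described (density and noise measured in ROIs, with the myocardial wall as the noise reference) reduces to a few lines; a minimal sketch, where the synthetic image and ROI placement are assumptions:

    ```python
    import numpy as np

    def roi_stats(image, mask):
        """Mean HU (density) and SD (noise) inside a boolean ROI mask."""
        vals = image[mask]
        return vals.mean(), vals.std(ddof=1)

    def cnr(image, vessel_mask, wall_mask):
        """CNR as described: vessel attenuation contrasted against the
        myocardial wall, divided by the wall noise (SD)."""
        vessel_mean, _ = roi_stats(image, vessel_mask)
        wall_mean, wall_sd = roi_stats(image, wall_mask)
        return (vessel_mean - wall_mean) / wall_sd

    rng = np.random.default_rng(1)
    img = rng.normal(300.0, 15.0, size=(64, 64))   # synthetic HU map (assumption)
    vessel = np.zeros_like(img, bool); vessel[20:30, 20:30] = True
    wall = np.zeros_like(img, bool); wall[40:50, 40:50] = True
    img[vessel] += 120.0                           # enhanced lumen
    print(round(float(cnr(img, vessel, wall)), 2)) # ~ 120 / 15
    ```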

  11. Dual Energy CT (DECT) Monochromatic Imaging: Added Value of Adaptive Statistical Iterative Reconstructions (ASIR) in Portal Venography

    PubMed Central

    Winklhofer, Sebastian; Jiang, Rong; Wang, Xinlian; He, Wen

    2016-01-01

    Objective To investigate the effect of adaptive statistical iterative reconstruction (ASIR) on image quality in portal venography by dual energy CT (DECT) imaging. Materials and Methods DECT scans of 45 cirrhotic patients obtained in the portal venous phase were analyzed. Monochromatic images at 70 keV were reconstructed with the following 4 ASIR percentages: 0%, 30%, 50%, and 70%. The image noise (IN) (standard deviation, SD) of the portal vein (PV), the contrast-to-noise-ratio (CNR), and the subjective scores for the sharpness of PV boundaries and the diagnostic acceptability (DA) were obtained. The IN, CNR, and the subjective scores were compared among the four ASIR groups. Results The IN (in HU) of the PV (10.05±3.14, 9.23±3.05, 8.44±2.95 and 7.83±2.90) decreased and the CNR values of the PV (8.04±3.32, 8.95±3.63, 9.80±4.12 and 10.74±4.73) increased with increasing ASIR percentage (0%, 30%, 50%, and 70%, respectively), and were statistically different for the 4 ASIR groups (p < 0.05). The subjective scores showed that the sharpness of portal vein boundaries (3.13±0.59, 2.82±0.44, 2.73±0.54 and 2.07±0.54) decreased with higher ASIR percentages (p < 0.05). The subjective diagnostic acceptability was highest at 30% ASIR (p < 0.05). Conclusions The addition of 30% ASIR in DECT portal venography could improve the 70 keV monochromatic image quality. PMID:27315158

  12. A qualitative and quantitative analysis of radiation dose and image quality of computed tomography images using adaptive statistical iterative reconstruction.

    PubMed

    Hussain, Fahad Ahmed; Mail, Noor; Shamy, Abdulrahman M; Suliman, Alghamdi; Saoudi, Abdelhamid

    2016-01-01

    Image quality is a key issue in radiology, particularly in a clinical setting where it is important to achieve accurate diagnoses while minimizing radiation dose. Some computed tomography (CT) manufacturers have introduced algorithms that claim significant dose reduction. In this study, we assessed CT image quality produced by two reconstruction algorithms provided with GE Healthcare's Discovery 690 Elite positron emission tomography (PET) CT scanner. Image quality was measured for images obtained at various doses with both conventional filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR) algorithms. A standard CT dose index (CTDI) phantom and a pencil ionization chamber were used to measure the CT dose at 120 kVp and an exposure of 260 mAs. Image quality was assessed using two phantoms. CT images of both phantoms were acquired at a tube voltage of 120 kV with exposures ranging from 25 mAs to 400 mAs. Images were reconstructed using FBP and ASIR ranging from 10% to 100%, then analyzed for noise, low-contrast detectability, contrast-to-noise ratio (CNR), and modulation transfer function (MTF). Noise was 4.6 HU in water phantom images acquired at 260 mAs/FBP 120 kV and 130 mAs/50% ASIR 120 kV. The large objects (frequency < 7 lp/cm) retained fairly acceptable image quality at 130 mAs/50% ASIR, compared to 260 mAs/FBP. The application of ASIR for small objects (frequency > 7 lp/cm) showed poor visibility compared to FBP at 260 mAs and even worse for images acquired at less than 130 mAs. ASIR blending more than 50% at low dose tends to reduce contrast of small objects (frequency > 7 lp/cm). We concluded that dose reduction and ASIR should be applied with close attention if the objects to be detected or diagnosed are small (frequency > 7 lp/cm). Further investigations are required to correlate the small objects (frequency > 7 lp/cm) to patient anatomy and clinical diagnosis. PMID:27167261

  13. THE EFFECT OF ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTION ON THE ASSESSMENT OF DIAGNOSTIC IMAGE QUALITY AND VISUALISATION OF ANATOMICAL STRUCTURES IN PAEDIATRIC CEREBRAL CT EXAMINATIONS.

    PubMed

    Larsson, Joel; Båth, Magnus; Ledenius, Kerstin; Thilander-Klang, Anne

    2016-06-01

    The purpose of this study was to investigate the effect of adaptive statistical iterative reconstruction (ASiR) on the visualisation of anatomical structures and diagnostic image quality in paediatric cerebral computed tomography (CT) examinations. Forty paediatric patients undergoing routine cerebral CT were included in the study. The raw data from CT scans were reconstructed into stacks of 5 mm thick axial images at various levels of ASiR. Three paediatric radiologists rated six questions related to the visualisation of anatomical structures and one question on diagnostic image quality, in a blinded randomised visual grading study. The evaluated anatomical structures demonstrated enhanced visibility with increasing level of ASiR, apart from the cerebrospinal fluid space around the brain. In this study, 60 % ASiR was found to be the optimal level of ASiR for paediatric cerebral CT examinations. This shows that the commonly used 30 % ASiR may not always be the optimal level. PMID:26873712

  14. Update on the non-prewhitening model observer in computed tomography for the assessment of the adaptive statistical and model-based iterative reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.

    2014-08-01

    The state of the art to describe image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit in various acquisition conditions. The NPW model observer usually requires the use of the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already shown it could accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. It was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition. Then, mathematical transformations were performed leading to the TTF. As expected, the first results showed a dependency of the TTF on the image contrast and noise levels for both ASIR and MBIR. Moreover, FBP also proved to be dependent on the contrast and noise when using the lung kernel. Those results were then introduced in the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
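
    A sketch of the resulting figure of merit, assuming the usual NPW form with the TTF in place of the MTF. The disc task, Gaussian TTF, and ramp-shaped NPS below are assumed stand-ins, not the phantom-derived curves of the study:

    ```python
    import numpy as np
    from scipy.special import j1

    def npw_snr(f, task, ttf, nps):
        """NPW model-observer SNR with the TTF substituted for the MTF:
        SNR^2 = [sum |W|^2 TTF^2 2*pi*f df]^2 / sum |W|^2 TTF^2 NPS 2*pi*f df,
        integrated radially over a uniform frequency grid f."""
        df = f[1] - f[0]
        radial = 2.0 * np.pi * f
        num = (np.sum(task**2 * ttf**2 * radial) * df) ** 2
        den = np.sum(task**2 * ttf**2 * nps * radial) * df
        return np.sqrt(num / den)

    f = np.linspace(1e-3, 1.5, 600)            # spatial frequency, cycles/mm
    d = 5.0                                    # 5 mm low-contrast disc task
    task = (d / (2 * f)) * j1(np.pi * d * f)   # Fourier transform of the disc
    ttf = np.exp(-((f / 0.6) ** 2))            # assumed resolution (TTF) model
    nps = 50.0 * f * np.exp(-((f / 0.4) ** 2)) # assumed CT-like ramp NPS
    print(round(float(npw_snr(f, task, ttf, nps)), 2))
    ```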

  15. Adaptive Statistical Iterative Reconstruction-Applied Ultra-Low-Dose CT with Radiography-Comparable Radiation Dose: Usefulness for Lung Nodule Detection

    PubMed Central

    Yoon, Hyun Jung; Hwang, Hye Sun; Moon, Jung Won; Lee, Kyung Soo

    2015-01-01

    Objective To assess the performance of adaptive statistical iterative reconstruction (ASIR)-applied ultra-low-dose CT (ULDCT) in detecting small lung nodules. Materials and Methods Thirty patients underwent both ULDCT and standard dose CT (SCT). After determining the reference standard nodules, five observers, blinded to the reference standard reading results, independently evaluated SCT and both subsets of ASIR- and filtered back projection (FBP)-driven ULDCT images. Data assessed by observers were compared statistically. Results Converted effective doses in SCT and ULDCT were 2.81 ± 0.92 and 0.17 ± 0.02 mSv, respectively. A total of 114 lung nodules were detected on SCT as a standard reference. There was no statistically significant difference in sensitivity between ASIR-driven ULDCT and SCT for three out of the five observers (p = 0.678, 0.735, < 0.01, 0.038, and < 0.868 for observers 1, 2, 3, 4, and 5, respectively). The sensitivity of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT in three out of the five observers (p < 0.01 for three observers, and p = 0.064 and 0.146 for two observers). In jackknife alternative free-response receiver operating characteristic analysis, the mean values of figure-of-merit (FOM) for FBP, ASIR-driven ULDCT, and SCT were 0.682, 0.772, and 0.821, respectively, and there were no significant differences in FOM values between ASIR-driven ULDCT and SCT (p = 0.11), but the FOM value of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT and SCT (p = 0.01 and 0.00). Conclusion Adaptive statistical iterative reconstruction-driven ULDCT delivering a radiation dose of only 0.17 mSv offers acceptable sensitivity in nodule detection compared with SCT and has better performance than FBP-driven ULDCT. PMID:26357505

  16. Can use of adaptive statistical iterative reconstruction reduce radiation dose in unenhanced head CT? An analysis of qualitative and quantitative image quality

    PubMed Central

    Heggen, Kristin Livelten; Pedersen, Hans Kristian; Andersen, Hilde Kjernlie; Martinsen, Anne Catrine T

    2016-01-01

    Background Iterative reconstruction can reduce image noise and thereby facilitate dose reduction. Purpose To evaluate qualitative and quantitative image quality for full dose and dose-reduced head computed tomography (CT) protocols reconstructed using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR). Material and Methods Fourteen patients undergoing follow-up head CT were included. All patients underwent a full dose (FD) exam and a subsequent 15% dose-reduced (DR) exam, reconstructed using FBP and 30% ASIR. Qualitative image quality was assessed using visual grading characteristics. Quantitative image quality was assessed using ROI measurements in cerebrospinal fluid (CSF), white matter, and peripheral and central gray matter. Additionally, quantitative image quality was measured in a Catphan phantom and the vendor's water phantom. Results There was no significant difference in qualitative image quality between FD FBP and DR ASIR. Comparing FBP versus ASIR for the same scan, a noise reduction of 28.6% in CSF and between −3.7 and 3.5% in brain parenchyma was observed. Comparing FD FBP versus DR ASIR, a noise reduction of 25.7% in CSF and between −7.5 and 6.3% in brain parenchyma was observed. Image contrast increased in ASIR reconstructions. Contrast-to-noise ratio was improved in DR ASIR compared to FD FBP. In phantoms, noise reduction was in the range of 3 to 28%, varying with image content. Conclusion There was no significant difference in qualitative image quality between full dose FBP and dose reduced ASIR. CNR improved in DR ASIR compared to FD FBP mostly due to increased contrast, not reduced noise. Therefore, we recommend using caution if reducing dose and applying ASIR to maintain image quality. PMID:27583169

  17. ASSESSMENT OF CLINICAL IMAGE QUALITY IN PAEDIATRIC ABDOMINAL CT EXAMINATIONS: DEPENDENCY ON THE LEVEL OF ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTION (ASiR) AND THE TYPE OF CONVOLUTION KERNEL.

    PubMed

    Larsson, Joel; Båth, Magnus; Ledenius, Kerstin; Caisander, Håkan; Thilander-Klang, Anne

    2016-06-01

    The purpose of this study was to investigate the effect of different combinations of convolution kernel and the level of Adaptive Statistical iterative Reconstruction (ASiR™) on diagnostic image quality as well as visualisation of anatomical structures in paediatric abdominal computed tomography (CT) examinations. Thirty-five paediatric patients with abdominal pain with non-specified pathology undergoing abdominal CT were included in the study. Transaxial stacks of 5-mm-thick images were retrospectively reconstructed at various ASiR levels, in combination with three convolution kernels. Four paediatric radiologists rated the diagnostic image quality and the delineation of six anatomical structures in a blinded randomised visual grading study. Image quality at a given ASiR level was found to be dependent on the kernel, and a more edge-enhancing kernel benefitted from a higher ASiR level. An ASiR level of 70 % together with the Soft™ or Standard™ kernel was suggested to be the optimal combination for paediatric abdominal CT examinations. PMID:26922785

  18. Adaptive statistical iterative reconstruction and bismuth shielding for evaluation of dose reduction to the eye and image quality during head CT

    NASA Astrophysics Data System (ADS)

    Kim, Myeong Seong; Choi, Jiwon; Kim, Sun Young; Kweon, Dae Cheol

    2014-03-01

    There is a concern regarding the adverse effects of increasing radiation doses due to repeated computed tomography (CT) scans, especially in radiosensitive organs and portions thereof, such as the lenses of the eyes. Bismuth shielding with an adaptive statistical iterative reconstruction (ASIR) algorithm was recently introduced in our clinic as a method to reduce the absorbed radiation dose. This technique was applied to the lens of the eye during CT scans. The purpose of this study was to evaluate the reduction in the absorbed radiation dose and to determine the noise level when using bismuth shielding and the ASIR algorithm with the GE DC 750 HD 64-channel CT scanner for CT of the head of a humanoid phantom. With the use of bismuth shielding, the noise level was higher in the beam-hardening artifact areas than in the revealed artifact areas. However, with the use of ASIR, the noise level was lower than that with the use of bismuth alone; it was also lower in the artifact areas. The reduction in the radiation dose with the use of bismuth was greatest at the surface of the phantom to a limited depth. In conclusion, it is possible to reduce the radiation level and slightly decrease the bismuth-induced noise level by using a combination of ASIR as an algorithm process and bismuth as an in-plane hardware-type shielding method.

  19. SU-E-I-86: Ultra-Low Dose Computed Tomography Attenuation Correction for Pediatric PET CT Using Adaptive Statistical Iterative Reconstruction (ASiR™)

    SciTech Connect

    Brady, S; Shulkin, B

    2015-06-15

    Purpose: To develop ultra-low dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultra-low doses (10–35 mAs). CT quantitation: noise, low-contrast resolution, and CT numbers for eleven tissue substitutes were analyzed in-phantom. CT quantitation was analyzed to a reduction of 90% CTDIvol (0.39/3.64; mGy) radiation dose from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation organ dose, as derived from patient exam size-specific dose estimate (SSDE), was converted to effective dose using the standard ICRP report 103 method. Effective dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative patient population dose reduction and noise control. Results: CT numbers were constant to within 10% from the non-dose-reduced CTAC image down to 90% dose reduction. No change in SUVbw, background percent uniformity, or spatial resolution was found for PET images reconstructed with ASiR-based CTAC protocols down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62%–86% (3.2/8.3−0.9/6.2; mSv). Noise magnitude in dose-reduced patient images increased but was not statistically different from pre-dose-reduced patient images. Conclusion: Using ASiR allowed for aggressive reduction in CTAC dose with no change in PET reconstructed images while maintaining sufficient image quality for co

  20. Adaptive self-calibrating iterative GRAPPA reconstruction.

    PubMed

    Park, Suhyung; Park, Jaeseok

    2012-06-01

    Parallel magnetic resonance imaging in k-space, such as generalized auto-calibrating partially parallel acquisition, exploits spatial correlation among neighboring signals over multiple coils in calibration to estimate missing signals in reconstruction. It is often challenging to achieve accurate calibration information due to data corruption with noise and spatially varying correlation. The purpose of this work is to address these problems simultaneously by developing a new, adaptive iterative generalized auto-calibrating partially parallel acquisition with dynamic self-calibration. With increasing iterations, spatial correlation is estimated under a Kalman filter framework, dynamically updating calibration signals in a measurement model and using a fixed-point state transition in a process model, while missing signals outside the step-varying calibration region are reconstructed, leading to adaptive self-calibration and reconstruction. Noise statistics are incorporated in the Kalman filter models, yielding coil-weighted de-noising in reconstruction. Numerical and in vivo studies are performed, demonstrating that the proposed method yields highly accurate calibration and thus reduces artifacts and noise even at high acceleration. PMID:21994010
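
    Below is a generic sketch of the Kalman machinery the abstract describes, with an identity (fixed-point) state transition and sequential measurement updates of a toy complex weight vector; the actual GRAPPA neighbourhood model, step-varying calibration region, and coil weighting are not reproduced.

    ```python
    import numpy as np

    def kalman_update(x, P, H, z, R):
        """One Kalman measurement update with an identity (fixed-point) state
        transition. x: weight-vector estimate; P: state covariance;
        H: measurement matrix; z: measured signals; R: noise covariance."""
        S = H @ P @ H.conj().T + R                 # innovation covariance
        K = P @ H.conj().T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    rng = np.random.default_rng(0)
    n_w = 8                                        # toy number of weights
    w_true = rng.normal(size=n_w) + 1j * rng.normal(size=n_w)
    x, P = np.zeros(n_w, complex), np.eye(n_w) * 10.0
    for _ in range(50):                            # stream of calibration equations
        H = rng.normal(size=(1, n_w)) + 1j * rng.normal(size=(1, n_w))
        z = H @ w_true + 0.05 * rng.normal(size=1)
        x, P = kalman_update(x, P, H, z, np.eye(1) * 0.05**2)
    print(np.round(np.abs(x - w_true).max(), 3))   # estimate converges to w_true
    ```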

  1. Statistical Physics for Adaptive Distributed Control

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2005-01-01

    A viewgraph presentation on statistical physics for distributed adaptive control is shown. The topics include: 1) The Golden Rule; 2) Advantages; 3) Roadmap; 4) What is Distributed Control? 5) Review of Information Theory; 6) Iterative Distributed Control; 7) Minimizing L(q) Via Gradient Descent; and 8) Adaptive Distributed Control.

  2. Feasibility Study of Radiation Dose Reduction in Adult Female Pelvic CT Scan with Low Tube-Voltage and Adaptive Statistical Iterative Reconstruction

    PubMed Central

    Wang, Xinlian; Chen, Jianghong; Hu, Zhihai; Zhao, Liqin

    2015-01-01

    Objective To evaluate image quality of female pelvic computed tomography (CT) scans reconstructed with the adaptive statistical iterative reconstruction (ASIR) technique combined with low tube-voltage and to explore the feasibility of its clinical application. Materials and Methods Ninety-four patients were divided into two groups. The study group used 100 kVp, and images were reconstructed with 30%, 50%, 70%, and 90% ASIR. The control group used 120 kVp, and images were reconstructed with 30% ASIR. The noise index was 15 for the study group and 11 for the control group. The CT values and noise levels of different tissues were measured. The contrast to noise ratio (CNR) was calculated. A subjective evaluation was carried out by two experienced radiologists. The CT dose index volume (CTDIvol) was recorded. Results A 44.7% reduction in CTDIvol was observed in the study group (8.18 ± 3.58 mGy) compared with that in the control group (14.78 ± 6.15 mGy). No significant differences were observed in the tissue noise levels and CNR values between the 70% ASIR group and the control group (p = 0.068-1.000). The subjective scores indicated that the visibility of small structures, diagnostic confidence, and the overall image quality score in the 70% ASIR group were the best and were similar to those in the control group (1.87 vs. 1.79, 1.26 vs. 1.28, and 4.53 vs. 4.57; p = 0.122-0.585). No significant difference in diagnostic accuracy was detected between the study group and the control group (42/47 vs. 43/47, p = 1.000). Conclusion Low tube-voltage combined with automatic tube current modulation and 70% ASIR allowed the CT radiation dose to be reduced by 44.7% without losing image quality on female pelvic scans. PMID:26357499

  3. Adaptable Iterative and Recursive Kalman Filter Schemes

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The Iterated Kalman Filter (IKF) and the Recursive Update Filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N, the number of recursions, is a tuning parameter. This paper introduces an adaptable RUF algorithm to calculate N on the fly; a similar technique can be used for the IKF as well.
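
    A sketch of the recursive-update idea on a toy range measurement, under two loudly flagged assumptions: each partial step inflates the measurement covariance to n_steps * R (one common weighting, not necessarily the paper's), and N is chosen from the normalised innovation (an illustrative stand-in for the paper's adaptation rule).

    ```python
    import numpy as np

    def recursive_update(x, P, z, h, H_jac, R, n_steps):
        """Split one nonlinear measurement update into n_steps partial updates,
        re-linearising after each. Each step uses the inflated covariance
        n_steps * R so the total information added matches one full update."""
        for _ in range(n_steps):
            H = H_jac(x)
            S = H @ P @ H.T + n_steps * R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - h(x))
            P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    def choose_n(x, P, z, h, H_jac, R, n_max=10):
        """Adaptable rule (illustrative assumption): use more recursions when
        the normalised innovation is large, i.e. linearisation is least trusted."""
        H = H_jac(x)
        S = H @ P @ H.T + R
        nis = float((z - h(x)) @ np.linalg.inv(S) @ (z - h(x)))
        return int(np.clip(np.ceil(nis), 1, n_max))

    # toy example: estimate a 2D position from one range measurement
    h = lambda x: np.array([np.hypot(x[0], x[1])])
    H_jac = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])
    x, P = np.array([3.0, 1.0]), np.eye(2) * 0.25
    z, R = np.array([5.0]), np.eye(1) * 0.01
    n = choose_n(x, P, z, h, H_jac, R)
    x, P = recursive_update(x, P, z, h, H_jac, R, n)
    print(n, np.round(x, 3))
    ```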

  4. Iterative blind deconvolution of adaptive optics images

    NASA Astrophysics Data System (ADS)

    Liang, Ying; Rao, Changhui; Li, Mei; Geng, Zexun

    2006-04-01

    The adaptive optics (AO) technique has been extensively used in large ground-based optical telescopes to overcome the effect of atmospheric turbulence, but the correction is often only partial. An iterative blind deconvolution (IBD) algorithm based on the maximum-likelihood (ML) method is proposed to restore the details of object images corrected by AO. The IBD algorithm and procedure are briefly introduced and the experimental results are presented. The results show that the IBD algorithm is efficient for the restoration of some useful high-frequency content of the image.
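
    A minimal sketch of an ML (Richardson-Lucy style) iterative blind deconvolution loop of the kind the abstract refers to, in Python/SciPy. Object and PSF both live on the full image grid, the initial PSF guess and iteration counts are arbitrary, and no AO-specific constraints are modelled.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def rl_update(est, fixed, y, eps=1e-12):
        """One Richardson-Lucy multiplicative ML update of `est`, holding the
        other factor of the convolution model y ~ est (*) fixed constant."""
        conv = fftconvolve(est, fixed, mode="same")
        ratio = y / np.maximum(conv, eps)
        return est * fftconvolve(ratio, fixed[::-1, ::-1], mode="same") / fixed.sum()

    def ibd(y, n_outer=15, n_inner=5):
        """Iterative blind deconvolution: alternate RL updates of object and
        PSF, both kept on the full image grid; PSF renormalised each cycle."""
        obj = np.full_like(y, y.mean())
        psf = np.zeros_like(y)
        c = np.array(y.shape) // 2
        psf[c[0]-2:c[0]+3, c[1]-2:c[1]+3] = 1.0     # broad initial PSF guess
        psf /= psf.sum()
        for _ in range(n_outer):
            for _ in range(n_inner):
                psf = rl_update(psf, obj, y)
            psf = np.clip(psf, 0.0, None); psf /= psf.sum()
            for _ in range(n_inner):
                obj = rl_update(obj, psf, y)
        return obj, psf

    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64)); truth[20, 20] = truth[40, 45] = 100.0
    g = np.exp(-((np.indices((64, 64)) - 32) ** 2).sum(0) / 8.0)
    y = fftconvolve(truth, g / g.sum(), mode="same")
    obj, psf = ibd(np.maximum(y, 0.0))
    print(round(float(obj.max()), 1))               # energy re-concentrates
    ```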

  5. Update on the non-prewhitening model observer in computed tomography for the assessment of the adaptive statistical and model-based iterative reconstruction algorithms.

    PubMed

    Ott, Julien G; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O; Verdun, Francis R

    2014-08-01

    The state of the art to describe image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit in various acquisition conditions. The NPW model observer usually requires the use of the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already shown it could accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. It was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition. Then, mathematical transformations were performed leading to the TTF. As expected, the first results showed a dependency of the TTF on the image contrast and noise levels for both ASIR and MBIR. Moreover, FBP also proved to be dependent on the contrast and noise when using the lung kernel. Those results were then introduced in the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications. PMID:24990844

  6. Statistical Physics of Adaptation

    NASA Astrophysics Data System (ADS)

    Perunov, Nikolay; Marsland, Robert A.; England, Jeremy L.

    2016-04-01

    Whether by virtue of being prepared in a slowly relaxing, high-free energy initial condition, or because they are constantly dissipating energy absorbed from a strong external drive, many systems subject to thermal fluctuations are not expected to behave in the way they would at thermal equilibrium. Rather, the probability of finding such a system in a given microscopic arrangement may deviate strongly from the Boltzmann distribution, raising the question of whether thermodynamics still has anything to tell us about which arrangements are the most likely to be observed. In this work, we build on past results governing nonequilibrium thermodynamics and define a generalized Helmholtz free energy that exactly delineates the various factors that quantitatively contribute to the relative probabilities of different outcomes in far-from-equilibrium stochastic dynamics. By applying this expression to the analysis of two examples—namely, a particle hopping in an oscillating energy landscape and a population composed of two types of exponentially growing self-replicators—we illustrate a simple relationship between outcome-likelihood and dissipative history. In closing, we discuss the possible relevance of such a thermodynamic principle for our understanding of self-organization in complex systems, paying particular attention to a possible analogy to the way evolutionary adaptations emerge in living things.

  7. Statistical properties of an iterated arithmetic mapping

    SciTech Connect

    Feix, M.R.; Rouet, J.L.

    1994-07-01

    We study the (3x + 1)/2 problem from a probabilistic viewpoint and show a forgetting mechanism for the last k binary digits of the seed after k iterations. The problem is subsequently generalized to a trifurcation process, the (lx + m)/3 problem. Finally, the sequence of a set of seeds is empirically shown to be equivalent to a random walk of the variable log₂ x (or log₃ x) through computer simulations.
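
    A quick empirical check of the forgetting mechanism, assuming the map in question is T(x) = x/2 for even x and (3x + 1)/2 for odd x: the first k parity decisions depend only on, and are a bijection of, the seed's last k binary digits.

    ```python
    def T(x):
        # one iteration of the map: x/2 if x is even, (3x + 1)/2 if x is odd
        return (3 * x + 1) // 2 if x % 2 else x // 2

    def parity_word(seed, k):
        # the k parity decisions taken during the first k iterations
        bits, x = [], seed
        for _ in range(k):
            bits.append(x % 2)
            x = T(x)
        return tuple(bits)

    k = 8
    # the k parity decisions are a bijection of the seed modulo 2**k, so after
    # k iterations the last k binary digits of the seed have been "used up"
    print(len({parity_word(s, k) for s in range(2**k)}) == 2**k)     # True

    # seeds agreeing above the last k bits realise every parity history
    base = 12345 << k
    print(len({parity_word(base + r, k) for r in range(2**k)}))      # 256
    ```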

  8. Nuclear Forensic Inferences Using Iterative Multidimensional Statistics

    SciTech Connect

    Robel, M; Kristo, M J; Heller, M A

    2009-06-09

    Nuclear forensics involves the analysis of interdicted nuclear material for specific material characteristics (referred to as 'signatures') that imply specific geographical locations, production processes, culprit intentions, etc. Predictive signatures rely on expert knowledge of physics, chemistry, and engineering to develop inferences from these material characteristics. Comparative signatures, on the other hand, rely on comparison of the material characteristics of the interdicted sample (the 'questioned sample' in FBI parlance) with those of a set of known samples. In the ideal case, the set of known samples would be a comprehensive nuclear forensics database, a database which does not currently exist. In fact, our ability to analyze interdicted samples and produce an extensive list of precise materials characteristics far exceeds our ability to interpret the results. Therefore, as we seek to develop the extensive databases necessary for nuclear forensics, we must also develop methods to produce the needed inferences from comparison of our analytical results with these large, multidimensional sets of data. In the work reported here, we used a large, multidimensional dataset of results from quality control analyses of uranium ore concentrate (UOC, sometimes called 'yellowcake'). We have found that traditional multidimensional techniques, such as principal components analysis (PCA), are especially useful for understanding such datasets and drawing relevant conclusions. In particular, we have developed an iterative partial least squares-discriminant analysis (PLS-DA) procedure that has proven especially adept at identifying the production location of unknown UOC samples. By removing classes which fell far outside the initial decision boundary, and then rebuilding the PLS-DA model, we have consistently produced better and more definitive attributions than with a single-pass classification approach. Performance of the iterative PLS-DA method
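
    A sketch of an iterative PLS-DA loop in this spirit, using scikit-learn's PLSRegression on one-hot class indicators. The rule used here for removing implausible classes (drop the worst-scoring half each pass) and all data are assumptions; the paper's decision-boundary criterion is not reproduced.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def plsda_scores(X, onehot, x_query, n_components=2):
        """Fit PLS on one-hot class indicators (PLS-DA) and return predicted
        class-membership scores for the questioned sample."""
        pls = PLSRegression(n_components=n_components).fit(X, onehot)
        return pls.predict(x_query.reshape(1, -1)).ravel()

    def iterative_plsda(X, y, x_query, n_keep=2, n_components=2):
        """Iterative PLS-DA: classify, remove the least plausible source
        classes, rebuild the model on the remaining classes, repeat."""
        classes = list(np.unique(y))
        while len(classes) > n_keep:
            mask = np.isin(y, classes)
            Xs, ys = X[mask], y[mask]
            onehot = (ys[:, None] == np.array(classes)[None, :]).astype(float)
            scores = plsda_scores(Xs, onehot, x_query, n_components)
            order = np.argsort(scores)               # ascending plausibility
            classes = [classes[i] for i in order[max(1, len(classes) // 2):]]
        return classes

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=i, size=(30, 6)) for i in range(5)])
    y = np.repeat(np.arange(5), 30)
    query = rng.normal(loc=3, size=6)
    print(iterative_plsda(X, y, query))              # shortlist near class 3
    ```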

  9. Feasibility Study of Using Gemstone Spectral Imaging (GSI) and Adaptive Statistical Iterative Reconstruction (ASIR) for Reducing Radiation and Iodine Contrast Dose in Abdominal CT Patients with High BMI Values

    PubMed Central

    Zhu, Zheng; Zhao, Xin-ming; Zhao, Yan-feng; Wang, Xiao-yi; Zhou, Chun-wu

    2015-01-01

    Purpose To prospectively investigate the effect of using Gemstone Spectral Imaging (GSI) and adaptive statistical iterative reconstruction (ASIR) for reducing radiation and iodine contrast dose in abdominal CT patients with high BMI values. Materials and Methods 26 patients (weight > 65 kg and BMI ≥ 22) underwent abdominal CT using GSI mode with 300 mgI/kg contrast material as the study group (group A). Another 21 patients (weight ≤ 65 kg and BMI ≥ 22) were scanned with a conventional 120 kVp tube voltage for a noise index (NI) of 11 with 450 mgI/kg contrast material as the control group (group B). GSI images were reconstructed at 60 keV with 50% ASIR and the conventional 120 kVp images were reconstructed with FBP reconstruction. The CT values, standard deviation (SD), signal-noise-ratio (SNR), and contrast-noise-ratio (CNR) of 26 landmarks were quantitatively measured and image quality qualitatively assessed using statistical analysis. Results As for the quantitative analysis, the differences in CNR between groups A and B were all significant except for the mesenteric vein. The SNR in group A was higher than in group B except for the mesenteric artery and splenic artery. As for the qualitative analysis, all images had diagnostic quality and the agreement for image quality assessment between the reviewers was substantial (kappa = 0.684). CT dose index (CTDI) values for the non-enhanced, arterial phase and portal phase in group A were decreased by 49.04%, 40.51% and 40.54%, respectively, compared with group B (P = 0.000). The total dose and the injection rate for the contrast material were reduced by 14.40% and 14.95% in A compared with B. Conclusion The use of GSI and ASIR provides similar enhancement in vessels and image quality with reduced radiation dose and contrast dose, compared with the use of a conventional scan protocol. PMID:26079259

  10. Investigation of statistical iterative reconstruction for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Makeev, Andrey; Das, Mini; Glick, Stephen J.

    2012-03-01

    Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. In this study, statistical iterative reconstruction with a penalized likelihood objective function and a Huber prior is investigated for use with breast CT. This prior has two free parameters, the penalty weight and the edge-preservation threshold, that need to be evaluated to determine those values that give optimal performance. Computer simulations with breast-like phantoms were used to study these parameters using various figures of merit that relate to performance in detecting microcalcifications. Results suggested that a narrow range of Huber prior parameters give optimal performance. Furthermore, iterative reconstruction provided improved performance measures as compared to conventional filtered back-projection.
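
    The two free parameters named above map directly onto a penalized-likelihood penalty term; a minimal NumPy sketch of the Huber prior over neighbour differences (the likelihood part and the reconstruction loop are omitted):

    ```python
    import numpy as np

    def huber(t, delta):
        """Huber potential: quadratic near zero (smoothing), linear in the
        tails (edge preserving); `delta` is the edge-preservation threshold."""
        a = np.abs(t)
        return np.where(a <= delta, 0.5 * t**2, delta * a - 0.5 * delta**2)

    def huber_penalty(image, beta, delta):
        """Penalty term: beta (the penalty weight) times the sum of Huber
        potentials over horizontal and vertical neighbour differences."""
        dx = np.diff(image, axis=1)
        dy = np.diff(image, axis=0)
        return beta * (huber(dx, delta).sum() + huber(dy, delta).sum())

    img = np.zeros((8, 8)); img[:, 4:] = 1.0        # a sharp edge
    print(huber_penalty(img, beta=1.0, delta=0.1))  # linear regime: edge stays cheap
    ```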

  11. Renal Cyst Pseudoenhancement: Intraindividual Comparison Between Virtual Monochromatic Spectral Images and Conventional Polychromatic 120-kVp Images Obtained During the Same CT Examination and Comparisons Among Images Reconstructed Using Filtered Back Projection, Adaptive Statistical Iterative Reconstruction, and Model-Based Iterative Reconstruction

    PubMed Central

    Yamada, Yoshitake; Yamada, Minoru; Sugisawa, Koichi; Akita, Hirotaka; Shiomi, Eisuke; Abe, Takayuki; Okuda, Shigeo; Jinzaki, Masahiro

    2015-01-01

    Abstract The purpose of this study was to compare renal cyst pseudoenhancement between virtual monochromatic spectral (VMS) and conventional polychromatic 120-kVp images obtained during the same abdominal computed tomography (CT) examination and among images reconstructed using filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR). Our institutional review board approved this prospective study; each participant provided written informed consent. Thirty-one patients (19 men, 12 women; age range, 59–85 years; mean age, 73.2 ± 5.5 years) with renal cysts underwent unenhanced 120-kVp CT followed by sequential fast kVp-switching dual-energy (80/140 kVp) and 120-kVp abdominal enhanced CT in the nephrographic phase over a 10-cm scan length with a random acquisition order and 4.5-second intervals. Fifty-one renal cysts (maximal diameter, 18.0 ± 14.7 mm [range, 4–61 mm]) were identified. The CT attenuation values of the cysts as well as of the kidneys were measured on the unenhanced images, enhanced VMS images (at 70 keV) reconstructed using FBP and ASIR from dual-energy data, and enhanced 120-kVp images reconstructed using FBP, ASIR, and MBIR. The results were analyzed using the mixed-effects model and paired t test with Bonferroni correction. The attenuation increases (pseudoenhancement) of the renal cysts on the VMS images reconstructed using FBP/ASIR (least square mean, 5.0/6.0 Hounsfield units [HU]; 95% confidence interval, 2.6–7.4/3.6–8.4 HU) were significantly lower than those on the conventional 120-kVp images reconstructed using FBP/ASIR/MBIR (least square mean, 12.1/12.8/11.8 HU; 95% confidence interval, 9.8–14.5/10.4–15.1/9.4–14.2 HU) (all P < .001); on the other hand, the CT attenuation values of the kidneys on the VMS images were comparable to those on the 120-kVp images. Regardless of the reconstruction algorithm, 70-keV VMS images showed

  12. A successive overrelaxation iterative technique for an adaptive equalizer

    NASA Technical Reports Server (NTRS)

    Kosovych, O. S.

    1973-01-01

    An adaptive strategy for the equalization of pulse-amplitude-modulated signals in the presence of intersymbol interference and additive noise is reported. The successive overrelaxation iterative technique is used as the algorithm for the iterative adjustment of the equalizer coefficients during a training period for the minimization of the mean square error. With 2-cyclic and nonnegative Jacobi matrices, substantial improvement is demonstrated in the rate of convergence over the commonly used gradient techniques. The Jacobi theorems are also extended to nonpositive Jacobi matrices. Numerical examples strongly indicate that the improvements obtained for the special cases are possible for general channel characteristics. The technique is analytically demonstrated to decrease the mean square error at each iteration for a large range of parameter values for light or moderate intersymbol interference and for small intervals for general channels. Analytically, convergence of the relaxation algorithm was proven in a noisy environment and the coefficient variance was demonstrated to be bounded.
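
    To ground the method, a small NumPy sketch of SOR applied to the minimum-MSE equalizer normal equations Rc = p. The channel, tap count, and relaxation factor are illustrative assumptions, and the training-period adaptation is reduced to a known autocorrelation matrix.

    ```python
    import numpy as np

    def sor_solve(R, p, omega=1.5, n_iters=100):
        """Successive overrelaxation for the equalizer normal equations Rc = p;
        R is the (SPD) autocorrelation matrix of the received samples and p the
        cross-correlation with the desired symbol. Converges for 0 < omega < 2."""
        c = np.zeros_like(p)
        for _ in range(n_iters):
            for i in range(len(p)):
                sigma = R[i] @ c - R[i, i] * c[i]      # mixes new and old entries
                c[i] = (1 - omega) * c[i] + omega * (p[i] - sigma) / R[i, i]
        return c

    # toy channel with mild intersymbol interference and additive noise
    h = np.array([0.1, 1.0, 0.2])                      # channel impulse response
    n_taps, noise_var = 7, 0.01
    r = np.correlate(h, h, mode="full")                # channel autocorrelation
    lags = dict(zip(range(-(len(h) - 1), len(h)), r))
    R = np.array([[lags.get(i - j, 0.0) for j in range(n_taps)]
                  for i in range(n_taps)]) + noise_var * np.eye(n_taps)
    d = (n_taps + len(h) - 2) // 2                     # overall delay target
    p = np.array([h[d - i] if 0 <= d - i < len(h) else 0.0 for i in range(n_taps)])
    c = sor_solve(R, p)
    print(np.round(np.convolve(h, c), 2))              # ~ unit impulse at delay d
    ```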

  13. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.

  14. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972
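
    A sketch of the NCRE idea, with plain Gaussian smoothing standing in for the BM3D step and an assumed confidence band and update factors; only the residual-statistics feedback loop of the abstract is illustrated.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ncre_tune(noisy, sigma_noise, n_iters=20, strength=0.5, lo=0.68, hi=1.0):
        """Noise-confidence-region style tuning: after each denoising pass,
        compare the residual's SD with the assumed additive-noise SD and
        nudge the regularisation strength until the residual statistics fall
        inside the band [lo, hi] * sigma_noise."""
        for _ in range(n_iters):
            denoised = gaussian_filter(noisy, sigma=strength)
            resid_sd = (noisy - denoised).std()
            if resid_sd > hi * sigma_noise:      # removed more than noise: relax
                strength *= 0.8
            elif resid_sd < lo * sigma_noise:    # removed too little: strengthen
                strength *= 1.25
            else:
                break
        return denoised, strength

    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 100.0
    noisy = clean + rng.normal(scale=5.0, size=clean.shape)
    out, s = ncre_tune(noisy, sigma_noise=5.0)
    print(round(s, 3))
    ```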

  15. Iterative Re-Weighted Instance Transfer for Domain Adaptation

    NASA Astrophysics Data System (ADS)

    Paul, A.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    Domain adaptation techniques in transfer learning try to reduce the amount of training data required for classification by adapting a classifier trained on samples from a source domain to a new data set (target domain) where the features may have different distributions. In this paper, we propose a new technique for domain adaptation based on logistic regression. Starting with a classifier trained on training data from the source domain, we iteratively include target domain samples for which class labels have been obtained from the current state of the classifier, while at the same time removing source domain samples. In each iteration the classifier is re-trained, so that the decision boundaries are slowly transferred to the distribution of the target features. To make the transfer procedure more robust we introduce weights as a function of distance from the decision boundary and a new way of regularisation. Our methodology is evaluated using a benchmark data set consisting of aerial images and digital surface models. The experimental results show that in the majority of cases our domain adaptation approach can lead to an improvement of the classification accuracy without additional training data, but also indicate remaining problems if the difference in the feature distributions becomes too large.
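
    The loop below sketches this procedure with scikit-learn's logistic regression; the batch size, the confidence proxy (maximum class posterior, a stand-in for distance from the decision boundary), and the simple source-removal rule are assumptions, and the paper's instance weights and regularisation are omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iterative_instance_transfer(Xs, ys, Xt, n_iter=10, batch=50):
    """Start from a source-trained classifier, then gradually swap confident
    target samples (with labels from the current classifier) for source ones."""
    Xs, ys = Xs.copy(), ys.copy()
    X_sel = np.empty((0, Xs.shape[1]))
    y_sel = np.empty(0, dtype=int)
    clf = LogisticRegression().fit(Xs, ys)
    for _ in range(n_iter):
        conf = clf.predict_proba(Xt).max(axis=1)   # confidence per target sample
        idx = np.argsort(conf)[-batch:]            # most confident targets
        X_sel = np.vstack([X_sel, Xt[idx]])
        y_sel = np.concatenate([y_sel, clf.predict(Xt[idx])])
        Xt = np.delete(Xt, idx, axis=0)
        Xs, ys = Xs[:-batch], ys[:-batch]          # drop a batch of source samples
        clf = LogisticRegression().fit(np.vstack([Xs, X_sel]),
                                       np.concatenate([ys, y_sel]))
    return clf
```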

  16. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of recovering low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein
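
    For reference, the classical iterative-transform (Gerchberg-Saxton) loop that such algorithms build on is sketched below; the focus-diversity outer loop and the adaptive diversity function described above are not included.

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, n_iter=200):
    """Classic iterative-transform phase retrieval: alternate between the
    pupil and focal planes, imposing the measured magnitude in each plane
    while keeping the recovered phase."""
    phase = np.zeros_like(pupil_amp)
    for _ in range(n_iter):
        field = pupil_amp * np.exp(1j * phase)            # impose pupil magnitude
        focal = np.fft.fft2(field)
        focal = focal_amp * np.exp(1j * np.angle(focal))  # impose focal magnitude
        phase = np.angle(np.fft.ifft2(focal))             # keep recovered phase
    return phase
```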

  17. Adaptive restoration of river terrace vegetation through iterative experiments

    USGS Publications Warehouse

    Dela Cruz, Michelle P.; Beauchamp, Vanessa B.; Shafroth, Patrick B.; Decker, Cheryl E.; O’Neil, Aviva

    2014-01-01

    Restoration projects can involve a high degree of uncertainty and risk, which can ultimately result in failure. An adaptive restoration approach can reduce uncertainty through controlled, replicated experiments designed to test specific hypotheses and alternative management approaches. Key components of adaptive restoration include willingness of project managers to accept the risk inherent in experimentation, interest of researchers, availability of funding for experimentation and monitoring, and ability to restore sites as iterative experiments where results from early efforts can inform the design of later phases. This paper highlights an ongoing adaptive restoration project at Zion National Park (ZNP), aimed at reducing the cover of exotic annual Bromus on riparian terraces, and revegetating these areas with native plant species. Rather than using a trial-and-error approach, ZNP staff partnered with academic, government, and private-sector collaborators to conduct small-scale experiments to explicitly address uncertainties concerning biomass removal of annual bromes, herbicide application rates and timing, and effective seeding methods for native species. Adaptive restoration has succeeded at ZNP because managers accept the risk inherent in experimentation and ZNP personnel are committed to continue these projects over a several-year period. Techniques that result in exotic annual Bromus removal and restoration of native plant species at ZNP can be used as a starting point for adaptive restoration projects elsewhere in the region.

  18. SAR imaging via iterative adaptive approach and sparse Bayesian learning

    NASA Astrophysics Data System (ADS)

    Xue, Ming; Santiago, Enrique; Sedehi, Matteo; Tan, Xing; Li, Jian

    2009-05-01

    We consider sidelobe reduction and resolution enhancement in synthetic aperture radar (SAR) imaging via an iterative adaptive approach (IAA) and a sparse Bayesian learning (SBL) method. The nonparametric weighted least squares based IAA algorithm is a robust, user parameter-free adaptive approach originally proposed for array processing. We show that it can be used to form enhanced SAR images as well. SBL has been used as a sparse signal recovery algorithm for compressed sensing. It has been shown in the literature that SBL is easy to use and can recover sparse signals more accurately than the l1-based optimization approaches, which require a delicate choice of the user parameter. We consider using a modified expectation maximization (EM) based SBL algorithm, referred to as SBL-1, which is based on a three-stage hierarchical Bayesian model. SBL-1 is not only more accurate than benchmark SBL algorithms, but also converges faster. SBL-1 is used to further enhance the resolution of the SAR images formed by IAA. Both IAA and SBL-1 are shown to be effective, requiring only a limited number of iterations, and have no need for polar-to-Cartesian interpolation of the SAR collected data. This paper characterizes the achievable performance of these two approaches by processing the complex backscatter data from both a sparse case study and a backhoe vehicle in free space with different aperture sizes.
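
    A minimal 1-D IAA sketch for line-spectral estimation (SAR imaging applies the same update in 2-D); the grid size, iteration count, and diagonal loading are assumptions.

```python
import numpy as np

def iaa_spectrum(y, n_grid=256, n_iter=15):
    """Iterative adaptive approach for 1-D line-spectral estimation.
    y: length-N snapshot; returns grid frequencies and power estimates."""
    N = len(y)
    freqs = np.arange(n_grid) / n_grid
    A = np.exp(2j * np.pi * np.outer(np.arange(N), freqs))  # steering matrix
    p = np.abs(A.conj().T @ y) ** 2 / N ** 2                # periodogram init
    for _ in range(n_iter):
        R = (A * p) @ A.conj().T + 1e-9 * np.eye(N)         # R = A diag(p) A^H
        Ri_y = np.linalg.solve(R, y)
        Ri_A = np.linalg.solve(R, A)
        # s_k = (a_k^H R^-1 y) / (a_k^H R^-1 a_k) for every grid point k
        s = (A.conj().T @ Ri_y) / np.einsum('ij,ij->j', A.conj(), Ri_A)
        p = np.abs(s) ** 2                                  # update powers
    return freqs, p
```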

  19. Krylov iterative methods and synthetic acceleration for transport in binary statistical media

    SciTech Connect

    Fichtl, Erin D; Warsa, James S; Prinja, Anil K

    2008-01-01

    In particle transport applications there are numerous physical constructs in which heterogeneities are randomly distributed. The quantity of interest in these problems is the ensemble average of the flux, or the average of the flux over all possible material 'realizations.' The Levermore-Pomraning closure assumes Markovian mixing statistics and allows a closed, coupled system of equations to be written for the ensemble averages of the flux in each material. Generally, binary statistical mixtures are considered in which there are two (homogeneous) materials and corresponding coupled equations. The solution process is iterative, but convergence may be slow as either or both materials approach the diffusion and/or atomic mix limits. A three-part acceleration scheme is devised to expedite convergence, particularly in the atomic mix-diffusion limit where computation is extremely slow. The iteration is first divided into a series of 'inner' material and source iterations to attenuate the diffusion and atomic mix error modes separately. Secondly, atomic mix synthetic acceleration is applied to the inner material iteration and S2 synthetic acceleration to the inner source iterations to offset the cost of doing several inner iterations per outer iteration. Finally, a Krylov iterative solver is wrapped around each iteration, inner and outer, to further expedite convergence. A spectral analysis is conducted and iteration counts and computing cost for the new two-step scheme are compared against those for a simple one-step iteration, to which a Krylov iterative method can also be applied.

  20. Adaptive Iterated Extended Kalman Filter and Its Application to Autonomous Integrated Navigation for Indoor Robot

    PubMed Central

    Chen, Xiyuan; Li, Qinghua

    2014-01-01

    As the core of an integrated navigation system, the data fusion algorithm must be designed carefully. In order to improve the accuracy of data fusion, this work proposes an adaptive iterated extended Kalman filter (AIEKF), which adds a noise statistics estimator to the iterated extended Kalman filter (IEKF); the AIEKF is then used to handle the nonlinear problem in an inertial navigation system (INS)/wireless sensor network (WSN) integrated navigation system. Practical tests were performed to evaluate the performance of the proposed method. The results show that the proposed method reduces the mean root-mean-square error (RMSE) of position by about 92.53%, 67.93%, 55.97%, and 30.09% compared with INS only, WSN, EKF, and IEKF, respectively. PMID:24693225
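
    One plausible form of the AIEKF measurement update is sketched below: an iterated EKF relinearization loop followed by a fading-memory, residual-based update of the measurement-noise covariance; the estimator in the paper may differ in detail.

```python
import numpy as np

def aiekf_update(x, P, z, h, H_jac, R, n_iter=5, alpha=0.95):
    """Iterated EKF measurement update with a simple residual-based
    noise-statistics estimator (sketch of the AIEKF idea).
    h: measurement function, H_jac: its Jacobian, R: noise covariance."""
    x_i = x.copy()
    for _ in range(n_iter):                       # IEKF relinearization loop
        H = H_jac(x_i)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_i = x + K @ (z - h(x_i) - H @ (x - x_i))
    resid = z - h(x_i)                            # post-fit residual
    P = (np.eye(len(x)) - K @ H) @ P
    # fading-memory update of the measurement-noise covariance estimate
    R = alpha * R + (1 - alpha) * (np.outer(resid, resid) + H @ P @ H.T)
    return x_i, P, R
```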

  1. Adaptive iterated extended Kalman filter and its application to autonomous integrated navigation for indoor robot.

    PubMed

    Xu, Yuan; Chen, Xiyuan; Li, Qinghua

    2014-01-01

    As the core of an integrated navigation system, the data fusion algorithm must be designed carefully. In order to improve the accuracy of data fusion, this work proposes an adaptive iterated extended Kalman filter (AIEKF), which adds a noise statistics estimator to the iterated extended Kalman filter (IEKF); the AIEKF is then used to handle the nonlinear problem in an inertial navigation system (INS)/wireless sensor network (WSN) integrated navigation system. Practical tests were performed to evaluate the performance of the proposed method. The results show that the proposed method reduces the mean root-mean-square error (RMSE) of position by about 92.53%, 67.93%, 55.97%, and 30.09% compared with INS only, WSN, EKF, and IEKF, respectively. PMID:24693225

  2. Adaptive iterated function systems filter for images highly corrupted with fixed-value impulse noise

    NASA Astrophysics Data System (ADS)

    Shanmugavadivu, P.; Eliahim Jeevaraj, P. S.

    2014-06-01

    The Adaptive Iterated Function Systems (AIFS) filter presented in this paper has outstanding potential to attenuate fixed-value impulse noise in images. This filter has two distinct phases, namely noise detection and noise correction, which use Measure of Statistics and Iterated Function Systems (IFS), respectively. The performance of the AIFS filter is assessed by three metrics: Peak Signal-to-Noise Ratio (PSNR), Mean Structural Similarity Index (MSSIM), and Human Visual Perception (HVP). The quantitative measures PSNR and MSSIM endorse the merit of this filter in terms of degree of noise suppression and detail/edge preservation, respectively, in comparison with the high-performing filters reported in the recent literature. The qualitative measure HVP confirms the noise suppression ability of the devised filter. This computationally simple noise filter finds broad application wherever images are highly degraded by fixed-value impulse noise.
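
    The two-phase structure can be illustrated as follows, with a median over uncorrupted neighbours standing in for the IFS-based correction stage described in the paper.

```python
import numpy as np

def remove_fixed_value_impulses(img, low=0, high=255):
    """Two-phase scheme: detect pixels stuck at the fixed impulse values,
    then correct each one from its uncorrupted 3x3 neighbours."""
    noisy = (img == low) | (img == high)          # phase 1: detection
    out = img.astype(float)
    pad = np.pad(out, 1, mode='reflect')
    bad = np.pad(noisy, 1, mode='reflect')
    for r, c in zip(*np.nonzero(noisy)):          # phase 2: correction
        win = pad[r:r + 3, c:c + 3]
        ok = ~bad[r:r + 3, c:c + 3]               # uncorrupted neighbours only
        if ok.any():
            out[r, c] = np.median(win[ok])
    return out.astype(img.dtype)
```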

  3. The iterative adaptive approach in medical ultrasound imaging.

    PubMed

    Jensen, Are Charles; Austeng, Andreas

    2014-10-01

    Many medical ultrasound imaging systems are based on sweeping the image plane with a set of narrow beams. Usually, the returning echo from each of these beams is used to form one or a few azimuthal image samples. For each radial distance, we jointly model the full azimuthal scanline. The model consists of the amplitudes of a set of densely placed potential reflectors (or scatterers), cf. sparse signal representation. To fit the model, we apply the iterative adaptive approach (IAA) to data formed by a sequenced time delay and phase shift. The performance of the IAA in combination with our time-delayed and phase-shifted data is studied on both simulated data of scenes consisting of point targets and hollow cyst-like structures, and recorded ultrasound phantom data from a specially adapted commercially available scanner. The results show that the proposed IAA is more capable of resolving point targets and gives better defined and more geometrically correct cyst-like structures in speckle images compared with the conventional delay-and-sum (DAS) approach. Compared with a Capon beamformer, the IAA showed an improved rendering of cyst-like structures and a similar point-target resolvability. Unlike the Capon beamformer, the IAA has no user parameters and seems unaffected by signal cancellation. The disadvantage of the IAA is a high computational load. PMID:25265177

  4. Statistical mechanics of Hamiltonian adaptive resolution simulations.

    PubMed

    Español, P; Delgado-Buscalioni, R; Everaers, R; Potestio, R; Donadio, D; Kremer, K

    2015-02-14

    The Adaptive Resolution Scheme (AdResS) is a hybrid scheme that allows one to treat a molecular system with different levels of resolution depending on the location of the molecules. The construction of a Hamiltonian based on this idea (H-AdResS) allows one to formulate the usual tools of ensembles and statistical mechanics. We present a number of exact and approximate results that provide a statistical mechanics foundation for this simulation method. We also present simulation results that illustrate the theory. PMID:25681895

  5. Investigation of statistical iterative reconstruction for dedicated breast CT

    SciTech Connect

    Makeev, Andrey; Glick, Stephen J.

    2013-08-15

    Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: the hyperbolic potential and the anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with embedded 280 μm microcalcification spheres was used to model the attenuation properties of an uncompressed breast in the pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low noise images with high contrast microcalcifications preserved. In terms of the numerical observer detectability index, the PML method with optimal parameters yielded substantially improved
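
    For reference, the hyperbolic potential used as the edge-preserving roughness penalty is commonly written psi_delta(t) = delta^2 (sqrt(1 + (t/delta)^2) - 1); a direct transcription:

```python
import numpy as np

def hyperbolic_potential(t, delta):
    """Edge-preserving hyperbolic potential: approximately quadratic for
    |t| << delta and approximately linear for |t| >> delta."""
    return delta ** 2 * (np.sqrt(1.0 + (t / delta) ** 2) - 1.0)
```

    Here delta plays the role of the edge-preservation threshold, and the roughness penalty weight multiplies the sum of psi over neighboring-voxel differences.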

  6. Finite-approximation-error-based discrete-time iterative adaptive dynamic programming.

    PubMed

    Wei, Qinglai; Wang, Fei-Yue; Liu, Derong; Yang, Xiong

    2014-12-01

    In this paper, a new iterative adaptive dynamic programming (ADP) algorithm is developed to solve optimal control problems for infinite horizon discrete-time nonlinear systems with finite approximation errors. First, a new generalized value iteration algorithm of ADP is developed to make the iterative performance index function converge to the solution of the Hamilton-Jacobi-Bellman equation. The generalized value iteration algorithm permits an arbitrary positive semi-definite function to initialize it, which overcomes the disadvantage of traditional value iteration algorithms. When the iterative control law and iterative performance index function in each iteration cannot accurately be obtained, for the first time a new "design method of the convergence criteria" for the finite-approximation-error-based generalized value iteration algorithm is established. A suitable approximation error can be designed adaptively to make the iterative performance index function converge to a finite neighborhood of the optimal performance index function. Neural networks are used to implement the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the developed method. PMID:25265640
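
    The distinguishing feature, value iteration started from an arbitrary nonnegative function rather than from zero, is easiest to see in a finite-MDP analogue; the sketch below makes no attempt at the paper's continuous nonlinear setting or neural-network implementation.

```python
import numpy as np

def generalized_value_iteration(P, cost, V0, gamma=0.95, n_iter=200):
    """Value iteration for a finite MDP started from an arbitrary nonnegative
    V0 instead of zero. P[a] is the transition matrix under action a; cost
    has shape (n_states, n_actions)."""
    V = V0.copy()
    for _ in range(n_iter):
        # V_{i+1}(s) = min_a [ cost(s, a) + gamma * sum_{s'} P[a][s, s'] V_i(s') ]
        V = np.min([cost[:, a] + gamma * P[a] @ V for a in range(len(P))], axis=0)
    return V
```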

  7. Performance Enhancement for a GPS Vector-Tracking Loop Utilizing an Adaptive Iterated Extended Kalman Filter

    PubMed Central

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-01-01

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system with model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error, and on this basis a modified iterated extended Kalman filter (IEKF), named the adaptive iterated extended Kalman filter (AIEKF), is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Road tests show that the proposed method has a clear accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method effectively reduces the root-mean-square error (RMSE) of position (including longitude, latitude, and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9%, and 54.6% in the east, north, and up directions, respectively. Compared with the IEKF, they are reduced by about 25.7%, 19.3%, and 35.7%, and compared with the AEKF by about 21.6%, 15.5%, and 30.7%, in the east, north, and up directions, respectively. PMID:25502124

  8. Statistical iterative reconstruction using fast optimization transfer algorithm with successively increasing factor in Digital Breast Tomosynthesis

    NASA Astrophysics Data System (ADS)

    Xu, Shiyu; Zhang, Zhenxi; Chen, Ying

    2014-03-01

    Statistical iterative reconstruction is particularly promising because it provides the flexibility of accurate physical noise modeling and geometric system description in transmission tomography systems. However, solving the objective function is computationally intensive compared to analytical reconstruction methods, because multiple iterations are needed for convergence and each iteration involves forward/back-projections with a complex geometric system model. Optimization transfer (OT) is a general algorithm converting a high dimensional optimization into parallel 1-D updates. An OT-based algorithm provides monotonic convergence and a parallel computing framework, but a slower convergence rate, especially near the global optimum. Based on an indirect estimate of the spectrum of the OT convergence rate matrix, we propose a successively increasing factor-scaled optimization transfer algorithm that seeks an optimal step size for a faster rate. Compared to a representative OT-based method such as separable parabolic surrogates with pre-computed curvature (PC-SPS), our algorithm provides comparable image quality (IQ) with fewer iterations, while each iteration retains a similar computational cost to PC-SPS. An initial experiment with a simulated Digital Breast Tomosynthesis (DBT) system shows that the proposed algorithm saves a total of 40% of computing time. In general, the successively increasing factor-scaled OT exhibits tremendous potential as an iterative method with parallel computation and monotonic, global convergence at a fast rate.

  9. ITER

    NASA Astrophysics Data System (ADS)

    Iotti, Robert

    2015-04-01

    ITER is an international experimental facility being built by seven Parties to demonstrate the long term potential of fusion energy. The ITER Joint Implementation Agreement (JIA) defines the structure and governance model of such cooperation. There are a number of necessary conditions for such international projects to be successful: a complete design, strong systems engineering working with an agreed set of requirements, an experienced organization with systems and plans in place to manage the project, a cost estimate backed by industry, and someone in charge. Unfortunately for ITER many of these conditions were not present. The paper discusses the priorities in the JIA which led to setting up the project with a Central Integrating Organization (IO) in Cadarache, France as the ITER HQ, and seven Domestic Agencies (DAs) located in the countries of the Parties, responsible for delivering 90%+ of the project hardware as Contributions-in-Kind and also financial contributions to the IO, as "Contributions-in-Cash." Theoretically the Director General (DG) is responsible for everything. In practice the DG does not have the power to control the work of the DAs, and there is not an effective management structure enabling the IO and the DAs to arbitrate disputes, so the project is not really managed, but is a loose collaboration of competing interests. Any DA can effectively block a decision reached by the DG. Inefficiencies in completing design while setting up a competent organization from scratch contributed to the delays and cost increases during the initial few years. So did the fact that the original estimate was not developed from industry input. Unforeseen inflation and market demand on certain commodities/materials further exacerbated the cost increases. Since then, improvements are debatable. Does this mean that the governance model of ITER is a wrong model for international scientific cooperation? I do not believe so. Had the necessary conditions for success

  10. An iterative method for the solution of the statistical and radiative equilibrium equations in expanding atmospheres

    NASA Astrophysics Data System (ADS)

    Hillier, D. J.

    1990-05-01

    A method for the solution of the statistical equilibrium and radiative equilibrium equations in spherical atmospheres is presented. The iterative scheme uses a tridiagonal (or pentadiagonal) Newton-Raphson operator and is based on the complete linearization method of Auer and Mihalas (1969), but requires less memory and imposes no limit on the number of transitions that can be treated. The method is also related to iterative techniques that use approximate diagonal lambda operators, but it has a vastly superior convergence rate. Calculations of WN and WC model atmospheres illustrate the excellent rate of convergence.

  11. Policy iteration adaptive dynamic programming algorithm for discrete-time nonlinear systems.

    PubMed

    Liu, Derong; Wei, Qinglai

    2014-03-01

    This paper is concerned with a new discrete-time policy iteration adaptive dynamic programming (ADP) method for solving the infinite horizon optimal control problem of nonlinear systems. The idea is to use an iterative ADP technique to obtain the iterative control law, which optimizes the iterative performance index function. The main contribution of this paper is to analyze the convergence and stability properties of policy iteration method for discrete-time nonlinear systems for the first time. It shows that the iterative performance index function is nonincreasingly convergent to the optimal solution of the Hamilton-Jacobi-Bellman equation. It is also proven that any of the iterative control laws can stabilize the nonlinear systems. Neural networks are used to approximate the performance index function and compute the optimal control law, respectively, for facilitating the implementation of the iterative ADP algorithm, where the convergence of the weight matrices is analyzed. Finally, the numerical results and analysis are presented to illustrate the performance of the developed method. PMID:24807455
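
    The finite-MDP analogue of policy iteration, alternating exact policy evaluation with greedy improvement, is sketched below; the paper's neural-network approximation of the performance index function is omitted.

```python
import numpy as np

def policy_iteration(P, cost, gamma=0.95):
    """Classical policy iteration for a finite MDP. P[a] is the transition
    matrix under action a; cost has shape (n_states, n_actions)."""
    n_s, n_a = cost.shape
    pi = np.zeros(n_s, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma * P_pi) V = c_pi exactly
        P_pi = np.array([P[pi[s]][s] for s in range(n_s)])
        c_pi = cost[np.arange(n_s), pi]
        V = np.linalg.solve(np.eye(n_s) - gamma * P_pi, c_pi)
        # policy improvement: act greedily with respect to V
        Q = np.stack([cost[:, a] + gamma * P[a] @ V for a in range(n_a)], axis=1)
        new_pi = Q.argmin(axis=1)
        if np.array_equal(new_pi, pi):
            return pi, V
        pi = new_pi
```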

  12. Statistical iterative reconstruction algorithm for X-ray phase-contrast CT

    PubMed Central

    Hahn, Dieter; Thibault, Pierre; Fehringer, Andreas; Bech, Martin; Koehler, Thomas; Pfeiffer, Franz; Noël, Peter B.

    2015-01-01

    Grating-based phase-contrast computed tomography (PCCT) is a promising imaging tool on the horizon for pre-clinical and clinical applications. Until now PCCT has been plagued by strong artifacts when dense materials like bones are present. In this paper, we present a new statistical iterative reconstruction algorithm which overcomes this limitation. It makes use of the fact that an X-ray interferometer provides a conventional absorption as well as a dark-field signal in addition to the phase-contrast signal. The method is based on a statistical iterative reconstruction algorithm utilizing maximum-a-posteriori principles and integrating the statistical properties of the raw data as well as information on dense objects gained from the absorption signal. Reconstruction of a pre-clinical mouse scan illustrates that artifacts caused by bones are significantly reduced and image quality is improved when employing our approach. Especially small structures, which are usually lost because of streaks, are recovered in our results. In comparison with the current state-of-the-art algorithms our approach provides significantly improved image quality with respect to quantitative and qualitative results. In summary, we expect our new statistical iterative reconstruction method to increase the general usability of PCCT imaging for medical diagnosis beyond applications focused solely on soft tissue visualization. PMID:26067714

  13. Non-iterative adaptive optical microscopy using wavefront sensing

    NASA Astrophysics Data System (ADS)

    Tao, X.; Azucena, O.; Kubby, J.

    2016-03-01

    This paper will review the development of wide-field and confocal microscopes with wavefront sensing and adaptive optics for correcting refractive aberrations and compensating scattering when imaging through thick tissues (Drosophila embryos and mouse brain tissue). To make wavefront measurements in biological specimens we have modified the laser guide-star techniques used in astronomy for measuring wavefront aberrations that occur as star light passes through Earth's turbulent atmosphere. Here sodium atoms in Earth's mesosphere, at an altitude of 95 km, are excited to fluoresce at resonance by a high-power sodium laser. The fluorescent light creates a guide-star reference beacon at the top of the atmosphere that can be used for measuring wavefront aberrations that occur as the light passes through the atmosphere. We have developed a related approach for making wavefront measurements in biological specimens using cellular structures labeled with fluorescent proteins as laser guide-stars. An example is a fluorescently labeled centrosome in a fruit fly embryo or neurons and dendrites in mouse brains. Using adaptive optical microscopy we show that the Strehl ratio, the ratio of the peak intensity of an aberrated point source relative to the diffraction limited image, can be improved by an order of magnitude when imaging deeply into live dynamic specimens, enabling near diffraction limited deep tissue imaging.

  14. Adaptive iterative learning control for a class of non-linearly parameterised systems with input saturations

    NASA Astrophysics Data System (ADS)

    Zhang, Ruikun; Hou, Zhongsheng; Ji, Honghai; Yin, Chenkun

    2016-04-01

    In this paper, an adaptive iterative learning control scheme is proposed for a class of non-linearly parameterised systems with unknown time-varying parameters and input saturations. By incorporating a saturation function, a new iterative learning control mechanism is presented which includes a feedback term and a parameter updating term. Through the use of a parameter separation technique, the non-linear parameters are separated from the non-linear function, and a saturated difference updating law is then designed in the iteration domain by combining the unknown parametric term of the locally Lipschitz continuous function and the unknown time-varying gain into a single unknown time-varying function. The convergence analysis is based on a time-weighted Lyapunov-Krasovskii-like composite energy function which consists of time-weighted input, state, and parameter estimation information. The proposed learning control mechanism guarantees L2[0, T] convergence of the tracking error sequence along the iteration axis. Simulation results are provided to illustrate the effectiveness of the adaptive iterative learning control scheme.
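
    A stripped-down, P-type version of a saturated learning update on a toy first-order plant; the gain, saturation limit, and plant are illustrative, and the paper's parameter-updating term is omitted.

```python
import numpy as np

def ilc_with_saturation(plant, y_ref, gamma=0.5, u_max=2.0, n_trials=30):
    """P-type iterative learning control with a saturated input update:
    u_{k+1}(t) = sat(u_k(t) + gamma * e_k(t)), repeated over trials."""
    u = np.zeros(len(y_ref))
    for _ in range(n_trials):
        e = y_ref - plant(u)                       # tracking error on trial k
        u = np.clip(u + gamma * e, -u_max, u_max)  # saturated learning update
    return u

def plant(u):
    """Toy first-order plant: x(t+1) = 0.8 x(t) + u(t), y(t) = x(t+1)."""
    x, y = 0.0, np.zeros(len(u))
    for t, ut in enumerate(u):
        x = 0.8 * x + ut
        y[t] = x
    return y

y_ref = np.sin(np.linspace(0, 2 * np.pi, 50))
u = ilc_with_saturation(plant, y_ref)
print("final tracking error:", np.abs(y_ref - plant(u)).max())
```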

  15. Genomewide Multiple-Loci Mapping in Experimental Crosses by Iterative Adaptive Penalized Regression

    PubMed Central

    Sun, Wei; Ibrahim, Joseph G.; Zou, Fei

    2010-01-01

    Genomewide multiple-loci mapping can be viewed as a challenging variable selection problem where the major objective is to select genetic markers related to a trait of interest. It is challenging because the number of genetic markers is large (often much larger than the sample size) and there is often strong linkage or linkage disequilibrium between markers. In this article, we developed two methods for genomewide multiple loci mapping: the Bayesian adaptive Lasso and the iterative adaptive Lasso. Compared with eight existing methods, the proposed methods have improved variable selection performance in both simulation and real data studies. The advantages of our methods come from the assignment of adaptive weights to different genetic markers and the iterative updating of these adaptive weights. The iterative adaptive Lasso is also computationally much more efficient than the commonly used marginal regression and stepwise regression methods. Although our methods are motivated by multiple-loci mapping, they are general enough to be applied to other variable selection problems. PMID:20157003
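
    The iterative adaptive Lasso idea can be sketched with scikit-learn by rescaling columns so that a weighted penalty reduces to a standard one; the penalty level, iteration count, and weight floor below are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def iterative_adaptive_lasso(X, y, alpha=0.1, n_iter=5, eps=1e-3):
    """Refit the Lasso with per-feature weights w_j = 1/(|beta_j| + eps),
    implemented by dividing the columns of X by w (so the weighted penalty
    alpha * sum_j w_j |beta_j| becomes a plain L1 penalty)."""
    w = np.ones(X.shape[1])
    for _ in range(n_iter):
        model = Lasso(alpha=alpha).fit(X / w, y)   # column-wise rescaling
        beta = model.coef_ / w                     # back to original scale
        w = 1.0 / (np.abs(beta) + eps)             # update adaptive weights
    return beta
```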

  16. Iterative Robust Capon Beamforming with Adaptively Updated Array Steering Vector Mismatch Levels

    PubMed Central

    Sun, Liguo

    2014-01-01

    The performance of the conventional adaptive beamformer is sensitive to array steering vector (ASV) mismatch, and the output signal-to-interference-plus-noise ratio (SINR) deteriorates, especially in the presence of large direction of arrival (DOA) error. To improve the robustness of the traditional approach, we propose a new approach that iteratively searches for the ASV of the desired signal based on the robust Capon beamformer (RCB) with adaptively updated uncertainty levels, which are derived as a quadratically constrained quadratic programming (QCQP) problem based on subspace projection theory. The estimated levels in this iterative beamformer show a decreasing trend. Additionally, other array imperfections also degrade beamformer performance in practice. To cover several kinds of mismatches together, adaptive flat ellipsoid models are introduced in our method, kept as tight as possible. In the simulations, our beamformer is compared with other methods and its excellent performance is demonstrated via numerical examples. PMID:27355008

  17. Adaptive and optimal detection of elastic object scattering with single-channel monostatic iterative time reversal

    NASA Astrophysics Data System (ADS)

    Ying, Ying-Zi; Ma, Li; Guo, Sheng-Ming

    2011-05-01

    In active sonar operation, the presence of background reverberation and the low signal-to-noise ratio hinder the detection of targets. This paper investigates the application of single-channel monostatic iterative time reversal to mitigate the difficulties by exploiting the resonances of the target. Theoretical analysis indicates that the iterative process will adaptively lead echoes to converge to a narrowband signal corresponding to a scattering object's dominant resonance mode, thus optimising the return level. The experiments in detection of targets in free field and near a planar interface have been performed. The results illustrate the feasibility of the method.

  18. Iterative time independent calculation of the cumulative reaction probability within a basis adapted preconditioner

    NASA Astrophysics Data System (ADS)

    Woittequand, F.; Monnerville, M.; Briquez, S.

    2006-01-01

    A band preconditioner matrix coupled to an iterative approach based on the generalized minimal residual (GMRes) method is presented to determine the cumulative reaction probability (CRP) N(E). The CRP is calculated using the Seideman, Manthe, and Miller Lanczos-based boundary condition method [J. Chem. Phys. 96 (1992) 4412; 99 (1993) 3411]. Using this basis-adapted preconditioner, the iterative GMRes scheme is found to be more efficient than a direct method based on the LU decomposition. The efficiency of this approach is illustrated by calculating the CRP for the H + O2 → HO + O reaction, assuming zero total angular momentum.
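
    The overall pattern, a preconditioned GMRES solve, can be reproduced with SciPy (recent versions take rtol); here an incomplete-LU factorization of a random diagonally dominant test matrix stands in for the basis-adapted band preconditioner.

```python
import numpy as np
from scipy.sparse import random as sprandom, eye as speye
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Sparse, diagonally dominant test system (stand-in for the linear system
# arising in the CRP calculation).
n = 500
A = (sprandom(n, n, density=0.01, random_state=0) + 10 * speye(n)).tocsc()
b = np.ones(n)

ilu = spilu(A)                                    # incomplete-LU preconditioner
M = LinearOperator((n, n), matvec=ilu.solve)
x, info = gmres(A, b, M=M, rtol=1e-10)
print("converged" if info == 0 else f"info={info}")
```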

  19. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extracting this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though the grids constructed may have no regular global structure, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  20. Statistical Models of Adaptive Immune populations

    NASA Astrophysics Data System (ADS)

    Sethna, Zachary; Callan, Curtis; Walczak, Aleksandra; Mora, Thierry

    The availability of large (10^4-10^6 sequences) datasets of B or T cell populations from a single individual allows reliable fitting of complex statistical models for naïve generation, somatic selection, and hypermutation. It is crucial to utilize a probabilistic/informational approach when modeling these populations. The inferred probability distributions allow for population characterization, calculation of probability distributions of various hidden variables (e.g. number of insertions), as well as statistical properties of the distribution itself (e.g. entropy). In particular, the differences between the T cell populations of embryonic and mature mice will be examined as a case study. Comparing these populations, as well as proposed mixed populations, provides a concrete exercise in model creation, comparison, choice, and validation.

  1. GOSIM: A multi-scale iterative multiple-point statistics algorithm with global optimization

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Hou, Weisheng; Cui, Chanjie; Cui, Jie

    2016-04-01

    Most current multiple-point statistics (MPS) algorithms are based on a sequential simulation procedure, during which grid values are updated according to the local data events. Because the realization is updated only once during the sequential process, errors that occur while updating data events cannot be corrected, and error accumulation during simulation decreases realization quality. Aimed at improving simulation quality, this study presents an MPS algorithm based on global optimization, called GOSIM. An objective function is defined to represent the dissimilarity between a realization and the training image (TI), which GOSIM minimizes by a multi-scale EM-like iterative method containing an E-step and an M-step in each iteration. The E-step searches for TI patterns that are most similar to the realization and match the conditioning data; a modified PatchMatch algorithm is used to accelerate this search. The M-step updates the realization based on the most similar patterns found in the E-step and matches the global statistics of the TI. For categorical data, k-means clustering is used to transform the obtained continuous realization into a categorical one. Qualitative and quantitative comparisons of GOSIM, MS-CCSIM, and SNESIM suggest that GOSIM has better pattern reproduction ability for both unconditional and conditional simulations. A sensitivity analysis illustrates that pattern size significantly impacts time costs and simulation quality. In conditional simulations, the weights of conditioning data should be as small as possible to maintain good simulation quality. The study shows that large iteration numbers at coarser scales increase simulation quality, while small iteration numbers at finer scales significantly save simulation time.

  2. Practical improvements of multi-grid iteration for adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Miyashita, Hisashi; Yamada, Yoshiyuki

    2005-03-01

    Adaptive mesh refinement (AMR) is a powerful tool for efficiently solving multi-scaled problems. However, the basic AMR method has a well-known limitation: it cannot be applied to non-local problems. Although multi-grid iteration (MGI) can be regarded as a good remedy for non-local problems such as the Poisson equation, we observed fundamental difficulties in applying the MGI technique in AMR to realistic problems with complicated mesh layouts, because it either does not converge or requires too many iterations when it does. To cope with this problem, when updating the next approximation in the MGI process, we compute total corrections that are accurate relative to the current residual by introducing a new iteration for the total correction. This procedure greatly accelerates MGI convergence, especially for complicated mesh layouts.

  3. Parameter estimation with an iterative version of the adaptive Gaussian mixture filter

    NASA Astrophysics Data System (ADS)

    Stordal, A.; Lorentzen, R.

    2012-04-01

    The adaptive Gaussian mixture filter (AGM) was introduced in Stordal et al. (ECMOR 2010) as a robust filter technique for large scale applications and an alternative to the well known ensemble Kalman filter (EnKF). It consists of two analysis steps: one linear update and one weighting/resampling step. The bias of the AGM is determined by two parameters: an adaptive weight parameter (forcing the weights to be more uniform to avoid filter collapse) and a pre-determined bandwidth parameter that decides the size of the linear update. It has been shown that if the adaptive parameter approaches one and the bandwidth parameter decreases with increasing sample size, the filter can achieve asymptotic optimality. For large scale applications with a limited sample size the filter solution may be far from optimal, as the adaptive parameter gets close to zero depending on how well the samples from the prior distribution match the data. The bandwidth parameter must often be selected significantly different from zero in order to make linear updates large enough to match the data, at the expense of bias in the estimates. In the iterative AGM we take advantage of the fact that the history matching problem is usually estimation of parameters and initial conditions. If the prior distribution of initial conditions and parameters is close to the posterior distribution, it is possible to match the historical data with a small bandwidth parameter and an adaptive weight parameter close to one. Hence the bias of the filter solution is small. In order to obtain this scenario we iteratively run the AGM throughout the data history with a very small bandwidth to create a new prior distribution from the updated samples after each iteration. After a few iterations, nearly all samples from the previous iteration match the data and the above scenario is achieved. A simple toy problem shows that it is possible to reconstruct the true posterior distribution using the iterative version of

  4. Distributed adaptive fuzzy iterative learning control of coordination problems for higher order multi-agent systems

    NASA Astrophysics Data System (ADS)

    Li, Jinsha; Li, Junmin

    2016-07-01

    In this paper, an adaptive fuzzy iterative learning control scheme is proposed for coordination problems of Mth-order (M ≥ 2) distributed multi-agent systems. Every follower agent has a higher-order integrator with unknown nonlinear dynamics and input disturbance. The dynamics of the leader are a higher-order nonlinear system available only to a portion of the follower agents. With distributed initial state learning, the unified distributed protocols, combining time-domain and iteration-domain adaptive laws, guarantee that the follower agents track the leader uniformly on [0, T]. The proposed algorithm is then extended to achieve formation control. A numerical example and a multiple robotic system are provided to demonstrate the performance of the proposed approach.

  5. Adaptive approximation of higher order posterior statistics

    SciTech Connect

    Lee, Wonjung

    2014-02-01

    Filtering is an approach for incorporating observed data into time-evolving systems. Instead of a family of Dirac delta masses, as is widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization of the conditioned probability distribution to solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables onto a fixed polynomial basis spanning the probability space can be a competitive representation in the presence of relatively frequent observations, because the Wiener chaos approach not only leads to accurate and efficient prediction for short-time uncertainty quantification, but also allows one to apply several data assimilation methods that can yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system, based on numerical simulations in which the uncertainty quantification method and the data assimilation method are adaptively selected according to whether the dynamics is driven by Brownian motion and the near-Gaussianity of the measure to be updated, respectively.

  6. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among the wavefront control algorithms used in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes by pre-measuring the relational matrix between deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time performance and stability. However, as the number of sub-apertures in the wavefront sensor and of deformable mirror actuators increases, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, giving great advantages in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2)-O(n^3) for the direct gradient wavefront control algorithm, but about O(n)-O(n^{3/2}) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).

  7. Bias in iterative reconstruction of low-statistics PET data: benefits of a resolution model

    NASA Astrophysics Data System (ADS)

    Walker, M. D.; Asselin, M.-C.; Julyan, P. J.; Feldmann, M.; Talbot, P. S.; Jones, T.; Matthews, J. C.

    2011-02-01

    Iterative image reconstruction methods such as ordered-subset expectation maximization (OSEM) are widely used in PET. Reconstructions via OSEM are however reported to be biased for low-count data. We investigated this and considered the impact for dynamic PET. Patient listmode data were acquired in [11C]DASB and [15O]H2O scans on the HRRT brain PET scanner. These data were subsampled to create many independent, low-count replicates. The data were reconstructed and the images from low-count data were compared to the high-count originals (from the same reconstruction method). This comparison enabled low-statistics bias to be calculated for the given reconstruction, as a function of the noise-equivalent counts (NEC). Two iterative reconstruction methods were tested, one with and one without an image-based resolution model (RM). Significant bias was observed when reconstructing data of low statistical quality, for both subsampled human and simulated data. For human data, this bias was substantially reduced by including a RM. For [11C]DASB the low-statistics bias in the caudate head at 1.7 M NEC (approx. 30 s) was -5.5% and -13% with and without RM, respectively. We predicted biases in the binding potential of -4% and -10%. For quantification of cerebral blood flow for the whole-brain grey- or white-matter, using [15O]H2O and the PET autoradiographic method, a low-statistics bias of <2.5% and <4% was predicted for reconstruction with and without the RM. The use of a resolution model reduces low-statistics bias and can hence be beneficial for quantitative dynamic PET.
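
    For context, the MLEM update underlying OSEM (equivalently, OSEM with a single subset) is shown below; the system matrix A and sinogram y are placeholders, and no resolution model is included.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Plain MLEM: multiplicative update
    x <- x * [A^T (y / (A x))] / (A^T 1)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)   # forward projection, guard div-by-zero
        x *= (A.T @ (y / proj)) / sens
    return x
```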

  8. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Nett, Brian E.; Chen, Guang-Hong

    2009-10-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  9. Applying statistical process control to the adaptive rate control problem

    NASA Astrophysics Data System (ADS)

    Manohar, Nelson R.; Willebeek-LeMair, Marc H.; Prakash, Atul

    1997-12-01

    Due to the heterogeneity and shared resource nature of today's computer network environments, the end-to-end delivery of multimedia requires adaptive mechanisms to be effective. We present a framework for the adaptive streaming of heterogeneous media. We introduce the application of online statistical process control (SPC) to the problem of dynamic rate control. In SPC, the goal is to establish (and preserve) a state of statistical quality control (i.e., controlled variability around a target mean) over a process. We consider the end-to-end streaming of multimedia content over the internet as the process to be controlled. First, at each client, we measure process performance and apply statistical quality control (SQC) with respect to application-level requirements. Then, we guide an adaptive rate control (ARC) problem at the server based on the statistical significance of trends and departures on these measurements. We show this scheme facilitates handling of heterogeneous media. Last, because SPC is designed to monitor long-term process performance, we show that our online SPC scheme could be used to adapt to various degrees of long-term (network) variability (i.e., statistically significant process shifts as opposed to short-term random fluctuations). We develop several examples and analyze its statistical behavior and guarantees.
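
    A minimal Shewhart-chart rate controller in this spirit: it reacts only to samples outside the control limits, treating in-control variation as random fluctuation; the limits and the back-off and probing factors are illustrative.

```python
class SPCRateController:
    """Shewhart-style control chart driving rate adaptation: adapt only on
    statistically significant departures, not on short-term fluctuation."""
    def __init__(self, target_delay, sigma, k=3.0):
        self.ucl = target_delay + k * sigma   # upper control limit
        self.lcl = target_delay - k * sigma   # lower control limit

    def adjust(self, rate, delay_sample):
        if delay_sample > self.ucl:           # out of control: congestion
            return rate * 0.75                # back off
        if delay_sample < self.lcl:           # persistent headroom
            return rate * 1.05                # probe upward
        return rate                           # in control: hold the rate
```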

  10. Enhancement and bias removal of optical coherence tomography images: An iterative approach with adaptive bilateral filtering.

    PubMed

    Sudeep, P V; Issac Niwas, S; Palanisamy, P; Rajan, Jeny; Xiaojun, Yu; Wang, Xianghong; Luo, Yuemei; Liu, Linbo

    2016-04-01

    Optical coherence tomography (OCT) has continually evolved and expanded as one of the most valuable routine tests in ophthalmology. However, noise (speckle) in the acquired images degrades the quality of OCT images and makes them difficult to analyze. In this paper, an iterative approach based on bilateral filtering is proposed for speckle reduction in multiframe OCT data. A Gamma noise model is assumed for the observed OCT image. First, an adaptive version of the conventional bilateral filter is applied to enhance the multiframe OCT data, and the noise-induced bias is then removed from each of the filtered frames. These unbiased filtered frames are refined using an iterative approach. Finally, the refined frames are averaged to produce the denoised OCT image. Experimental results on phantom images and real OCT retinal images demonstrate the effectiveness of the proposed filter. PMID:26907572
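
    A simplified version of this pipeline using scikit-image's bilateral filter; the iteration count and filter parameters are assumptions, frames are assumed scaled to [0, 1], and the bias-removal step is omitted.

```python
import numpy as np
from skimage.restoration import denoise_bilateral

def iterative_bilateral_average(frames, n_iter=3, sigma_color=0.1, sigma_spatial=3):
    """Bilateral-filter each frame repeatedly, then average across frames."""
    frames = [f.astype(float) for f in frames]    # assume values in [0, 1]
    for _ in range(n_iter):
        frames = [denoise_bilateral(f, sigma_color=sigma_color,
                                    sigma_spatial=sigma_spatial)
                  for f in frames]
    return np.mean(frames, axis=0)                # multiframe averaging
```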

  11. Statistical iterative reconstruction for multi-contrast x-ray micro-tomography

    NASA Astrophysics Data System (ADS)

    Allner, S.; Velroyen, A.; Fehringer, A.; Pfeiffer, F.; Noël, P. B.

    2015-03-01

    Scanning times have always been an important issue in x-ray micro-tomography. To reach high-quality reconstructions the exposure times for each projection can be very long due to small detector pixel sizes and limited flux of x-ray sources. In addition, the required number of projections is a factor which limits a reduction of exposure beyond a certain level. This applies particularly to grating-based phase-contrast computed tomography (PCCT), as several images per projection have to be acquired in order to obtain absorption, phase and dark-field information. In this work we qualitatively compare statistical iterative reconstruction (SIR) and filtered back-projection (FBP) reconstruction from undersampled projection data based on a formalin-fixated mouse sample measured in a grating-based phase-contrast small-animal scanner. The results from our assessment illustrate that SIR offers not only significantly higher image quality, but also enables high-resolution imaging from severely undersampled data in comparison to the FBP algorithm. Therefore, the application of advanced iterative reconstruction methods in micro-tomography entails major advantages over state-of-the-art FBP reconstruction while offering the opportunity to shorten scan durations via a reduction of exposure time per projection and number of angular views.

  12. Low-dose CT statistical iterative reconstruction via modified MRF regularization.

    PubMed

    Shangguan, Hong; Zhang, Quan; Liu, Yi; Cui, Xueying; Bai, Yunjiao; Gui, Zhiguo

    2016-01-01

    It is desirable to reduce the excessive radiation exposure to patients in repeated medical CT applications. One of the most effective ways is to reduce the X-ray tube current (mAs) or tube voltage (kVp). However, it is difficult to achieve accurate reconstruction from the resulting noisy measurements. Compared with the conventional filtered back-projection (FBP) algorithm, which leads to excessive noise in the reconstructed images, approaches using statistical iterative reconstruction (SIR) with low mAs yield better image quality. To eliminate undesired artifacts and improve reconstruction quality, we propose in this work an improved SIR algorithm for low-dose CT reconstruction, constrained by a modified Markov random field (MRF) regularization. Specifically, the edge-preserving total generalized variation (TGV), which is a generalization of total variation (TV) and can measure image characteristics up to a certain degree of differentiation, is introduced to modify the MRF regularization. In addition, a modified alternating iterative algorithm is utilized to optimize the cost function. Experimental results demonstrate that the proposed method not only reconstructs images with high accuracy and resolution, but also ensures a higher peak signal-to-noise ratio (PSNR) than existing methods. PMID:26542474

  13. J-adaptive estimation with estimated noise statistics

    NASA Technical Reports Server (NTRS)

    Jazwinski, A. H.; Hipkins, C.

    1973-01-01

    The J-adaptive sequential estimator is extended to include simultaneous estimation of the noise statistics in a model for system dynamics. This extension completely automates the estimator, eliminating the requirement of an analyst in the loop. Simulations in satellite orbit determination demonstrate the efficacy of the sequential estimation algorithm.

  14. Comparison of image quality from filtered back projection, statistical iterative reconstruction, and model-based iterative reconstruction algorithms in abdominal computed tomography

    PubMed Central

    Kuo, Yu; Lin, Yi-Yang; Lee, Rheun-Chuan; Lin, Chung-Jung; Chiou, Yi-You; Guo, Wan-Yuo

    2016-01-01

    The purpose of this study was to compare the image noise-reducing abilities of iterative model reconstruction (IMR) with those of traditional filtered back projection (FBP) and statistical iterative reconstruction (IR) in abdominal computed tomography (CT) images. This institutional review board-approved retrospective study enrolled 103 patients; informed consent was waived. Urinary bladder (n = 83) and renal cysts (n = 44) were used as targets for evaluating imaging quality. Raw data were retrospectively reconstructed using FBP, statistical IR, and IMR. Objective image noise and signal-to-noise ratio (SNR) were calculated and analyzed using one-way analysis of variance. Subjective image quality was evaluated and analyzed using Wilcoxon signed-rank test with Bonferroni correction. Objective analysis revealed a reduction in image noise for statistical IR compared with that for FBP, with no significant differences in SNR. In the urinary bladder group, IMR achieved up to 53.7% noise reduction, demonstrating a superior performance to that of statistical IR. IMR also yielded a significantly superior SNR to that of statistical IR. Similar results were obtained in the cyst group. Subjective analysis revealed reduced image noise for IMR, without inferior margin delineation or diagnostic confidence. IMR reduced noise and increased SNR to greater degrees than did FBP and statistical IR. Applying the IMR technique to abdominal CT imaging has potential for reducing the radiation dose without sacrificing imaging quality. PMID:27495078
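
    The objective metrics in studies like this reduce to a few lines of code; the ROI convention and the numbers below are made up for illustration, not taken from the paper:

        import numpy as np
        from scipy import stats

        def roi_metrics(img, roi):
            """Objective image noise = SD of values in the ROI; SNR = mean/SD."""
            vals = img[roi]                  # roi: boolean mask over the image
            noise = vals.std(ddof=1)
            return noise, vals.mean() / noise

        # per-patient ROI noise by algorithm (made-up numbers for illustration)
        noise_fbp = np.array([18.2, 20.1, 17.5, 19.3])
        noise_ir  = np.array([13.4, 14.8, 12.9, 14.1])
        noise_imr = np.array([8.1, 9.0, 7.7, 8.6])
        F, p = stats.f_oneway(noise_fbp, noise_ir, noise_imr)
        print(f"one-way ANOVA: F = {F:.1f}, p = {p:.2g}")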

  15. Adaptive switching detection algorithm for iterative-MIMO systems to enable power savings

    NASA Astrophysics Data System (ADS)

    Tadza, N.; Laurenson, D.; Thompson, J. S.

    2014-11-01

    This paper attempts to tackle one of the challenges faced in soft input soft output Multiple Input Multiple Output (MIMO) detection systems, which is to achieve optimal error rate performance with minimal power consumption. This is realized by proposing a new algorithm design that comprises multiple thresholds within the detector that, in real time, specify the receiver behavior according to the current channel in both slow and fast fading conditions, giving it adaptivity. This adaptivity enables energy savings within the system since the receiver chooses whether to accept or to reject the transmission, according to the success rate of detecting thresholds. The thresholds are calculated using the mutual information of the instantaneous channel conditions between the transmitting and receiving antennas of iterative-MIMO systems. In addition, the power saving technique, Dynamic Voltage and Frequency Scaling, helps to reduce the circuit power demands of the adaptive algorithm. This adaptivity has the potential to save up to 30% of the total energy when it is implemented on Xilinx® Virtex-5 simulation hardware. Results indicate the benefits of having this "intelligence" in the adaptive algorithm due to the promising performance-complexity tradeoff parameters in both software and hardware codesign simulation.
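
    The threshold quantity itself is standard. A sketch of the instantaneous mutual information computation and the accept/reject decision; the threshold value and channel model are illustrative assumptions:

        import numpy as np

        def mimo_mutual_information(H, snr):
            """Instantaneous mutual information (bits/s/Hz) of a MIMO channel
            with equal power per transmit antenna: log2 det(I + snr/Nt H H^H)."""
            nr, nt = H.shape
            G = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
            return float(np.real(np.linalg.slogdet(G)[1]) / np.log(2))

        rng = np.random.default_rng(0)
        H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
        threshold = 8.0                      # illustrative switching threshold
        c = mimo_mutual_information(H, snr=10.0)
        print("detect" if c >= threshold else "reject (save power)", f"MI = {c:.1f}")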

  16. Adaptive mesh refinement and multilevel iteration for multiphase, multicomponent flow in porous media

    SciTech Connect

    Hornung, R.D.

    1996-12-31

    An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.
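
    The refinement decision at the heart of any AMR scheme is a local error indicator. A minimal sketch; the gradient criterion below is a generic stand-in, not the paper's indicator:

        import numpy as np

        def flag_for_refinement(u, dx, tol):
            """Flag cells whose solution gradient magnitude exceeds tol;
            a finer patch would then be laid over the flagged region."""
            return np.abs(np.gradient(u, dx)) > tol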

  17. Detection of fiducial points in ECG waves using iteration based adaptive thresholds.

    PubMed

    Wonjune Kang; Kyunguen Byun; Hong-Goo Kang

    2015-08-01

    This paper presents an algorithm for the detection of fiducial points in electrocardiogram (ECG) waves using iteration based adaptive thresholds. By setting the search range of the processing frame to the interval between two consecutive R peaks, the peaks of T and P waves are used as reference salient points (RSPs) to detect the fiducial points. The RSPs are selected from candidates whose slope variation factors are larger than iteratively defined adaptive thresholds. Considering the fact that the number of RSPs varies depending on whether the ECG wave is normal or not, the proposed algorithm proceeds with a different methodology for determining fiducial points based on the number of detected RSPs. Testing was performed using twelve records from the MIT-BIH Arrhythmia Database that were manually marked for comparison with the estimated locations of the fiducial points. The means of absolute distances between the true locations and the points estimated by the algorithm are 12.2 ms and 7.9 ms for the starting points of P and Q waves, and 9.3 ms and 13.9 ms for the ending points of S and T waves. Since the computational complexity of the proposed algorithm is very low, it is feasible for use in mobile devices. PMID:26736854
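
    A sketch of the iterative threshold relaxation; the slope-variation factor and all constants below are stand-ins for the paper's definitions:

        import numpy as np

        def salient_points(seg, init_frac=0.9, decay=0.8, min_points=2, max_iter=20):
            """Select candidate salient points whose slope-variation factor
            exceeds an adaptive threshold, relaxing it until enough are found."""
            svf = np.abs(np.gradient(np.gradient(seg)))   # crude slope variation
            thr = init_frac * svf.max()
            idx = np.array([], dtype=int)
            for _ in range(max_iter):
                idx = np.where(svf > thr)[0]
                if len(idx) >= min_points:
                    break
                thr *= decay                              # iterative relaxation
            return idx, thr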

  18. Pixelwise-adaptive blind optical flow assuming nonstationary statistics.

    PubMed

    Foroosh, Hassan

    2005-02-01

    In this paper, we address some of the major issues in optical flow within a new framework assuming nonstationary statistics for the motion field and for the errors. Problems addressed include the preservation of discontinuities, model/data errors, outliers, confidence measures, and performance evaluation. In solving these problems, we assume that the statistics of the motion field and the errors are not only spatially varying, but also unknown. We, thus, derive a blind adaptive technique based on generalized cross validation for estimating an independent regularization parameter for each pixel. Our formulation is pixelwise and combines existing first- and second-order constraints with a new second-order temporal constraint. We derive a new confidence measure for an adaptive rejection of erroneous and outlying motion vectors, and compare our results to other techniques in the literature. A new performance measure is also derived for estimating the signal-to-noise ratio for real sequences when the ground truth is unknown. PMID:15700527
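
    One common form of generalized cross validation, shown here for a ridge-type problem; the paper applies the same idea pixelwise to pick an independent regularization parameter per motion vector:

        import numpy as np

        def gcv_score(A, y, lam):
            """GCV(lam) = n*||(I - S)y||^2 / tr(I - S)^2,
            with hat matrix S = A (A^T A + lam I)^{-1} A^T."""
            n = len(y)
            S = A @ np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)
            r = y - S @ y
            return n * float(r @ r) / (n - np.trace(S)) ** 2

        rng = np.random.default_rng(2)
        A = rng.normal(size=(50, 10))
        y = A @ rng.normal(size=10) + 0.3 * rng.normal(size=50)
        lams = np.logspace(-4, 2, 25)
        best = min(lams, key=lambda l: gcv_score(A, y, l))
        print("GCV-selected regularization:", best)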

  19. STATISTICS. The reusable holdout: Preserving validity in adaptive data analysis.

    PubMed

    Dwork, Cynthia; Feldman, Vitaly; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer; Roth, Aaron

    2015-08-01

    Misapplication of statistical data analysis is a common cause of spurious discoveries in scientific research. Existing approaches to ensuring the validity of inferences drawn from data assume a fixed procedure to be performed, selected before the data are examined. In common practice, however, data analysis is an intrinsically adaptive process, with new analyses generated on the basis of data exploration, as well as the results of previous analyses on the same data. We demonstrate a new approach for addressing the challenges of adaptivity based on insights from privacy-preserving data analysis. As an application, we show how to safely reuse a holdout data set many times to validate the results of adaptively chosen analyses. PMID:26250683
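
    The paper's reusable-holdout mechanism (Thresholdout) is easy to sketch in simplified form; the threshold and noise scale below are illustrative, and the published algorithm additionally manages a budget of overfitting events:

        import numpy as np

        rng = np.random.default_rng(3)

        def thresholdout(train_vals, holdout_vals, T=0.04, sigma=0.01):
            """Answer a query (mean of a statistic) from the training set
            unless it disagrees with the holdout; add noise whenever the
            holdout is consulted so it can be reused many times."""
            a, b = np.mean(train_vals), np.mean(holdout_vals)
            if abs(a - b) > T + rng.laplace(scale=sigma):
                return b + rng.laplace(scale=sigma)   # noisy holdout answer
            return a                                  # holdout stays fresh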

  20. Adaptive Strategy for the Statistical Analysis of Connectomes

    PubMed Central

    Meskaldji, Djalel Eddine; Ottet, Marie-Christine; Cammoun, Leila; Hagmann, Patric; Meuli, Reto; Eliez, Stephan; Thiran, Jean Philippe; Morgenthaler, Stephan

    2011-01-01

    We study an adaptive statistical approach to analyze brain networks represented by brain connection matrices of interregional connectivity (connectomes). Our approach is at a middle level between a global analysis and a single-connection analysis, by considering subnetworks of the global brain network. These subnetworks represent either the inter-connectivity between two brain anatomical regions or the intra-connectivity within the same brain anatomical region. An appropriate summary statistic, that characterizes a meaningful feature of the subnetwork, is evaluated. Based on this summary statistic, a statistical test is performed to derive the corresponding p-value. The reformulation of the problem in this way reduces the number of statistical tests in an orderly fashion based on our understanding of the problem. Considering the global testing problem, the p-values are corrected to control the rate of false discoveries. Finally, the procedure is followed by a local investigation within the significant subnetworks. We contrast this strategy with the one based on the individual measures in terms of power. We show that this strategy has a great potential, in particular in cases where the subnetworks are well defined and the summary statistics are properly chosen. As an application example, we compare structural brain connection matrices of two groups of subjects with a 22q11.2 deletion syndrome, distinguished by their IQ scores. PMID:21829681
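
    A sketch of the strategy's two stages: a summary statistic per subnetwork, then a false-discovery-rate correction over the subnetworks. The mean-connectivity statistic and the t test are illustrative choices, not the paper's prescriptions:

        import numpy as np
        from scipy import stats

        def subnetwork_pvalues(conn_a, conn_b, blocks):
            """Compare a per-subnetwork summary statistic (here simply the
            mean connectivity) between two groups of subjects.
            conn_*: (subjects, regions, regions) connection matrices."""
            pvals = []
            for rows, cols in blocks:          # each block is one subnetwork
                sa = conn_a[:, rows][:, :, cols].mean(axis=(1, 2))
                sb = conn_b[:, rows][:, :, cols].mean(axis=(1, 2))
                pvals.append(stats.ttest_ind(sa, sb).pvalue)
            return np.array(pvals)

        def benjamini_hochberg(p, q=0.05):
            """Indices of discoveries with false discovery rate controlled at q."""
            order = np.argsort(p)
            m = len(p)
            ok = np.where(p[order] <= q * np.arange(1, m + 1) / m)[0]
            return order[: ok.max() + 1] if ok.size else np.array([], dtype=int)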

  1. Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation

    NASA Technical Reports Server (NTRS)

    He, Yuning; Lee, Herbert K. H.; Davies, Misty D.

    2012-01-01

    Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which a.) the behavior is approximately linear and b.) humans are in-the-loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller is changing during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to have an uncertainty estimate for the difference between the behaviors of the model and system under test.

  2. Noise performance of statistical model based iterative reconstruction in clinical CT systems

    NASA Astrophysics Data System (ADS)

    Li, Ke; Tang, Jie; Chen, Guang-Hong

    2014-03-01

    The statistical model based iterative reconstruction (MBIR) method has been introduced to clinical CT systems. Due to the nonlinearity of this method, the noise characteristics of MBIR are expected to differ from those of filtered backprojection (FBP). This paper reports an experimental characterization of the noise performance of MBIR equipped on several state-of-the-art clinical CT scanners at our institution. The thoracic section of an anthropomorphic phantom was scanned 50 times to generate image ensembles for noise analysis. Noise power spectra (NPS) and noise standard deviation maps were assessed locally at different anatomical locations. It was found that MBIR led to significant reduction in noise magnitude and improvement in noise spatial uniformity when compared with FBP. Meanwhile, MBIR shifted the NPS of the reconstructed CT images towards lower frequencies along both the axial and the z frequency axes. This effect was confirmed by a relaxed slice thickness tradeoff relationship shown in our experimental data. The unique noise characteristics of MBIR imply that extra effort must be made to optimize CT scanning parameters for MBIR to maximize its potential clinical benefits.
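
    The ensemble NPS estimator that such analyses rest on is worth writing out. A minimal sketch; the ROI stacking convention and pixel spacing are assumptions:

        import numpy as np

        def nps_2d(rois, px, py):
            """Ensemble 2D noise power spectrum from repeated scans of a
            static phantom. rois: (n_scans, Ny, Nx) stack of the same ROI;
            px, py: pixel spacing in mm."""
            noise = rois - rois.mean(axis=0)          # remove the mean (signal)
            n, Ny, Nx = noise.shape
            dft2 = np.abs(np.fft.fftshift(np.fft.fft2(noise), axes=(-2, -1))) ** 2
            return (px * py) / (Nx * Ny) * dft2.sum(axis=0) / (n - 1)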

  3. Statistical-uncertainty-based adaptive filtering of lidar signals

    SciTech Connect

    Fuehrer, P. L.; Friehe, C. A.; Hristov, T. S.; Cooper, D. I.; Eichinger, W. E.

    2000-02-10

    An adaptive filter signal processing technique is developed to overcome the problem that Raman lidar water-vapor mixing ratio (the ratio of the water-vapor density to the dry-air density) measurements have a highly variable statistical uncertainty, which increases with decreasing photomultiplier-tube signal strength and masks the true desired water-vapor structure. The technique, applied to horizontal scans, assumes only statistical horizontal homogeneity. The result is a variable spatial resolution water-vapor signal with a constant variance out to a range limit set by a specified signal-to-noise ratio. The technique was applied to Raman water-vapor lidar data obtained at a coastal pier site together with in situ instruments located 320 m from the lidar. The micrometeorological humidity data were used to calibrate the ratio of the lidar gains of the H₂O and the N₂ photomultiplier tubes and set the water-vapor mixing ratio variance for the adaptive filter. For the coastal experiment the effective limit of the lidar range was found to be approximately 200 m for a maximum noise-to-signal variance ratio of 0.1 with the implemented data-reduction procedure. The technique can be adapted to off-horizontal scans with a small reduction in the constraints and is also applicable to other remote-sensing devices that exhibit the same inherent range-dependent signal-to-noise ratio problem. (c) 2000 Optical Society of America.
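
    A schematic of the variable-resolution idea: grow the averaging window at each range gate until the variance of the window mean meets the target ratio. This is a simplified stand-in for the published procedure, with all parameters illustrative:

        import numpy as np

        def constant_variance_profile(x, var, max_ratio=0.1):
            """For each range gate, average over the smallest centered window
            whose noise-to-signal variance ratio falls below max_ratio; gates
            where no window succeeds stay NaN (beyond the range limit)."""
            out = np.full(len(x), np.nan)
            for i in range(len(x)):
                for w in range(len(x)):
                    sl = slice(max(0, i - w), min(len(x), i + w + 1))
                    m = x[sl].mean()
                    # variance of the window mean ~ mean(var) / N
                    if m != 0 and var[sl].mean() / (sl.stop - sl.start) <= max_ratio * m**2:
                        out[i] = m
                        break
            return out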

  4. Strehl-constrained iterative blind deconvolution for post-adaptive-optics data

    NASA Astrophysics Data System (ADS)

    Desiderà, G.; Carbillet, M.

    2009-12-01

    Aims: We aim to improve blind deconvolution applied to post-adaptive-optics (AO) data by taking into account one of their basic characteristics, resulting from the necessarily partial AO correction: the Strehl ratio. Methods: We apply a Strehl constraint in the framework of iterative blind deconvolution (IBD) of post-AO near-infrared images simulated in a detailed end-to-end manner and considering a case that is as realistic as possible. Results: The results obtained clearly show the advantage of using such a constraint, from the point of view of both performance and stability, especially for poorly AO-corrected data. The proposed algorithm has been implemented in the freely-distributed and CAOS-based Software Package AIRY.

  5. Non-iterative adaptive time stepping with truncation error control for simulating variable-density flow

    NASA Astrophysics Data System (ADS)

    Hirthe, E. M.; Graf, T.

    2012-04-01

    Fluid density variations occur due to changes in the solute concentration, temperature and pressure of groundwater. Examples are interaction between freshwater and seawater, radioactive waste disposal, groundwater contamination, and geothermal energy production. The physical coupling between flow and transport introduces non-linearity in the governing mathematical equations, such that solving variable-density flow problems typically requires very long computational time. Computational efficiency can be attained through the use of adaptive time-stepping schemes. The aim of this work is therefore to apply a non-iterative adaptive time-stepping scheme based on local truncation error in variable-density flow problems. This new scheme is implemented into the code of the HydroGeoSphere model (Therrien et al., 2011). The new time-stepping scheme is applied to the Elder (1967) and Shikaze et al. (1998) problems of free convection in porous and fractured-porous media, respectively. Numerical simulations demonstrate that non-iterative time-stepping based on local truncation error control fully automates the time step size and efficiently limits the temporal discretization error to the user-defined tolerance. Results of the Elder problem show that the new time-stepping scheme presented here is significantly more efficient than uniform time-stepping when high accuracy is required. Results of the Shikaze problem reveal that the new scheme is considerably faster than conventional time-stepping where time step sizes are either constant or controlled by absolute head/concentration changes. Future research will focus on the application of the new time-stepping scheme to variable-density flow in complex real-world fractured-porous rock.
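
    The control law at the core of such schemes is compact. A generic sketch; Kavetski-type estimators obtain err by comparing first- and second-order predictions of the same step, and the constants below are conventional choices rather than this paper's:

        import numpy as np

        def next_dt(dt, err, tol, order=2, safety=0.8, grow=2.0, shrink=0.2):
            """Proportional step-size controller: for a truncation error
            estimate err ~ O(dt**order), choose the next step so the error
            stays near the user tolerance tol."""
            fac = safety * (tol / max(err, 1e-300)) ** (1.0 / order)
            return dt * min(grow, max(shrink, fac))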

  6. A Self-Adaptive Missile Guidance System for Statistical Inputs

    NASA Technical Reports Server (NTRS)

    Peery, H. Rodney

    1960-01-01

    A method of designing a self-adaptive missile guidance system is presented. The system inputs are assumed to be known in a statistical sense only. Newton's modified Wiener theory is utilized in the design of the system and to establish the performance criterion. The missile is assumed to be a beam rider, to have a g limiter, and to operate over a flight envelope where the open-loop gain varies by a factor of 20. It is shown that the percent of time that missile acceleration limiting occurs can be used effectively to adjust the coefficients of the Wiener filter. The result is a guidance system which adapts itself to a changing environment and gives essentially optimum filtering and minimum miss distance.

  7. Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method

    SciTech Connect

    Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.

    2014-07-15

    Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937

  8. Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method

    PubMed Central

    Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.

    2014-01-01

    Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937

  9. Adapting iterative algorithms for solving large sparse linear systems for efficient use on the CDC CYBER 205

    NASA Technical Reports Server (NTRS)

    Kincaid, D. R.; Young, D. M.

    1984-01-01

    Adapting and designing mathematical software to achieve optimum performance on the CYBER 205 is discussed. Comments and observations are made in light of recent work done on modifying the ITPACK software package and on writing new software for vector supercomputers. The goal was to develop very efficient vector algorithms and software for solving large sparse linear systems using iterative methods.

  10. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.

    PubMed

    Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher

    2013-10-01

    This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example. PMID:24808590

  11. Iterative version of the QRD for adaptive recursive least squares (RLS) filtering

    NASA Astrophysics Data System (ADS)

    Goetze, Juergen

    1994-10-01

    A modified version of the QR-decomposition (QRD) is presented. It uses approximate Givens rotations instead of exact Givens rotations, i.e., a matrix entry usually annihilated with an exact rotation by an angle σ is only reduced by an approximate rotation. The approximation of the rotations is based on the idea of CORDIC. Evaluating a CORDIC-based approximate rotation means determining the angle σ_t = arctan 2^(−t) that is closest to the exact rotation angle σ; this angle σ_t is applied instead of σ. Using approximate rotations for computing the QRD results in an iterative version of the original QRD. A recursive version of this QRD using CORDIC-based approximate rotations is applied to adaptive RLS filtering. Only a few angles of the CORDIC sequence, say r (r << b, where b is the word length), work as well as using exact rotations (r = b, original CORDIC). The misadjustment error decreases as r increases. The convergence of the QRD-RLS algorithm, however, is insensitive to the value of r. Adapting the approximation accuracy during the course of the QRD-RLS algorithm is also discussed. Simulations (channel equalization) confirm the results.
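
    The flavor of the approximation is easy to demonstrate: instead of the exact Givens angle, the closest angle of the form arctan 2^(−t) is applied. A minimal sketch; a full CORDIC implementation would use shift-and-add arithmetic rather than cos/sin, and everything here is illustrative:

        import numpy as np

        def approx_givens(a, b, r=6):
            """Rotate by the CORDIC angle sigma_t = arctan 2**(-t), t < r,
            closest to the exact Givens angle sigma = arctan(b/a); the entry
            b is thereby reduced rather than annihilated."""
            sigma = np.arctan2(b, a)
            angles = np.sign(sigma) * np.arctan(2.0 ** -np.arange(r))
            t = int(np.argmin(np.abs(angles - sigma)))
            c, s = np.cos(angles[t]), np.sin(angles[t])
            return np.array([[c, s], [-s, c]])

        G = approx_givens(3.0, 1.0)
        print(G @ np.array([3.0, 1.0]))   # second entry shrinks toward zero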

  12. Array model interpolation and subband iterative adaptive filters applied to beamforming-based acoustic echo cancellation.

    PubMed

    Bai, Mingsian R; Chi, Li-Wen; Liang, Li-Huang; Lo, Yi-Yang

    2016-02-01

    In this paper, an evolutionary exposition is given of strategies for enhancing acoustic echo cancellers (AECs). A fixed beamformer (FBF) is utilized to focus on the near-end speaker while suppressing the echo from the far end. In reality, the array steering vector can differ considerably from the ideal free-field plane-wave model. Therefore, an experimental procedure is developed to interpolate a practical array model from the measured frequency responses. Subband (SB) filtering with polyphase implementation is exploited to accelerate the cancellation process. A generalized sidelobe canceller (GSC) composed of an FBF and an adaptive blocking module is combined with AEC to maximize cancellation performance. Another enhancement is an internal iteration (IIT) procedure that enables efficient convergence of the adaptive SB filters within a sample time. Objective tests in terms of echo return loss enhancement (ERLE), perceptual evaluation of speech quality (PESQ), word recognition rate for automatic speech recognition (ASR), and subjective listening tests are conducted to validate the proposed AEC approaches. The results show that the GSC-SB-AEC-IIT approach attains the highest ERLE without speech quality degradation, even in double-talk scenarios. PMID:26936567
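
    The internal-iteration idea can be sketched on a plain fullband NLMS filter; the subband and beamforming machinery is omitted, and all parameters are illustrative assumptions rather than the paper's settings:

        import numpy as np

        def nlms_echo_canceller(x, d, taps=128, mu=0.5, inner_iters=3, eps=1e-8):
            """Normalized LMS echo canceller. inner_iters re-applies the
            update within one sample time, mimicking the internal-iteration
            idea for faster convergence. x: far-end reference, d: microphone
            signal; returns the echo-cancelled error signal."""
            w = np.zeros(taps)
            e = np.zeros(len(d))
            for n in range(taps - 1, len(d)):
                u = x[n - taps + 1:n + 1][::-1]     # most recent samples first
                for _ in range(inner_iters):
                    e[n] = d[n] - w @ u
                    w += mu * e[n] * u / (u @ u + eps)
            return e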

  13. Efficient pulse compression for LPI waveforms based on a nonparametric iterative adaptive approach

    NASA Astrophysics Data System (ADS)

    Li, Zhengzheng; Nepal, Ramesh; Zhang, Yan; Blake, William

    2015-05-01

    In order to achieve low probability-of-intercept (LPI), radar waveforms are usually long and randomly generated. Due to this randomized nature, matched filter responses (autocorrelations) of those waveforms can have high sidelobes which would mask weaker targets near a strong target, limiting the radar's ability to distinguish close-by targets. To improve resolution and reduce sidelobe contamination, a waveform-independent pulse compression filter is desired. Furthermore, the pulse compression filter needs to be able to adapt to the received signal to achieve optimized performance. As many existing pulse compression techniques require intensive computation, real-time implementation is infeasible. This paper introduces a new adaptive pulse compression technique for LPI waveforms that is based on a nonparametric iterative adaptive approach (IAA). Due to the nonparametric nature, no parameter tuning is required for different waveforms. IAA can achieve super-resolution and sidelobe suppression in both range and Doppler domains. Also, it can be extended to directly handle the matched filter (MF) output (called MF-IAA), which further reduces the computational load. The practical impact of LPI waveform operations on IAA and MF-IAA has not been carefully studied in previous work. Herein, typical LPI waveforms such as random phase coding, as well as other non-LPI waveforms, are tested with both single-pulse and multi-pulse IAA processing. A realistic airborne radar simulator as well as actual measured radar data are used for the validations. It is validated that, in spite of noticeable differences between test waveforms, the IAA algorithm and its improvements can effectively achieve range-Doppler super-resolution on realistic data.

  14. Iterative development and the scope for plasticity: contrasts among trait categories in an adaptive radiation.

    PubMed

    Foster, S A; Wund, M A; Graham, M A; Earley, R L; Gardiner, R; Kearns, T; Baker, J A

    2015-10-01

    Phenotypic plasticity can influence evolutionary change in a lineage, ranging from facilitation of population persistence in a novel environment to directing the patterns of evolutionary change. As the specific nature of plasticity can impact evolutionary consequences, it is essential to consider how plasticity is manifested if we are to understand the contribution of plasticity to phenotypic evolution. Most morphological traits are developmentally plastic, irreversible, and generally considered to be costly, at least when the resultant phenotype is mismatched to the environment. At the other extreme, behavioral phenotypes are typically activational (modifiable on very short time scales), and not immediately costly as they are produced by constitutive neural networks. Although patterns of morphological and behavioral plasticity are often compared, patterns of plasticity of life history phenotypes are rarely considered. Here we review patterns of plasticity in these trait categories within and among populations comprising the adaptive radiation of the threespine stickleback fish Gasterosteus aculeatus. We immediately found it necessary to consider the possibility of iterated development, the concept that behavioral and life history trajectories can be repeatedly reset on activational (usually behavior) or developmental (usually life history) time frames, offering fine-tuning of the response to environmental context. Morphology in stickleback is primarily reset only in that developmental trajectories can be altered as environments change over the course of development. As anticipated, the boundaries between the trait categories are not clear and are likely to be linked by shared, underlying physiological and genetic systems. PMID:26243135

  15. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance

    SciTech Connect

    Li, Ke; Tang, Jie; Chen, Guang-Hong

    2014-04-15

    Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Based on the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to be different from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; lower dose led to a “redder” NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ (dose)^−β with the exponent β ≈ 0.25, which violated the classical σ ∝ (dose)^−0.5 power law in FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial uniformity when compared with FBP.
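
    The dose dependence in finding (3) amounts to a straight-line fit in log-log coordinates. A sketch with made-up numbers chosen to mimic an exponent of about 0.25, not data from the paper:

        import numpy as np

        # illustrative (made-up) noise measurements at each exposure level
        mAs = np.array([5, 12.5, 25, 50, 100, 200, 300])
        sigma = np.array([40.1, 32.5, 27.0, 22.8, 19.1, 16.2, 14.9])

        # fit sigma = c * dose**(-beta)  <=>  log sigma = log c - beta log dose
        slope, intercept = np.polyfit(np.log(mAs), np.log(sigma), 1)
        print(f"beta = {-slope:.2f}")   # ~0.5 for FBP; the paper reports ~0.25 for MBIR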

  16. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance

    PubMed Central

    Li, Ke; Tang, Jie; Chen, Guang-Hong

    2014-01-01

    Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Based on the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to be different from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; lower dose led to a “redder” NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ (dose)^−β with the exponent β ≈ 0.25, which violated the classical σ ∝ (dose)^−0.5 power law in FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial uniformity when compared with FBP.

  17. Pilot Study on Image Quality and Radiation Dose of CT Colonography with Adaptive Iterative Dose Reduction Three-Dimensional

    PubMed Central

    Shen, Hesong; Liang, Dan; Luo, Mingyue; Duan, Chaijie; Cai, Wenli; Zhu, Shanshan; Qiu, Jianping; Li, Wenru

    2015-01-01

    Objective To investigate image quality and radiation dose of CT colonography (CTC) with adaptive iterative dose reduction three-dimensional (AIDR3D). Methods Ten segments of porcine colon phantom were collected, and 30 pedunculate polyps with diameters ranging from 1 to 15 mm were simulated on each segment. Image data were acquired with a tube voltage of 120 kVp and tube current-time products of 10 mAs, 20 mAs, 30 mAs, 40 mAs, and 50 mAs. CTC images were reconstructed using filtered back projection (FBP) and AIDR3D. Two radiologists blindly evaluated image quality. Quantitative evaluation of image quality included image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Qualitative image quality was evaluated with a five-score scale. Radiation dose was calculated based on dose-length product. Ten volunteers were examined supine at 50 mAs with FBP and prone at 20 mAs with AIDR3D, and image qualities were assessed. Paired t tests were performed for statistical analysis. Results For 20 mAs with AIDR3D and 50 mAs with FBP, image noise, SNRs and CNRs were (16.4 ± 1.6) HU vs. (16.8 ± 2.6) HU, 1.9 ± 0.2 vs. 1.9 ± 0.4, and 62.3 ± 6.8 vs. 62.0 ± 6.2, respectively; qualitative image quality scores were 4.1 and 4.3, respectively; their differences were all not statistically significant. Compared with 50 mAs with FBP, the radiation dose (1.62 mSv) of 20 mAs with AIDR3D was decreased by 60.0%. There was no statistically significant difference in image noise, SNRs, CNRs or qualitative image quality scores between prone 20 mAs with AIDR3D and supine 50 mAs with FBP in the 10 volunteers; the former reduced radiation dose by 61.1%. Conclusion Image quality of CTC using 20 mAs with AIDR3D could be comparable to standard 50 mAs with FBP; the radiation dose of the former was reduced by about 60.0% and was only 1.62 mSv. PMID:25635839

  18. Iterative graph cuts for image segmentation with a nonlinear statistical shape prior

    PubMed Central

    Chang, Joshua C.; Chou, Tom

    2013-01-01

    Shape-based regularization has proven to be a useful method for delineating objects within noisy images where one has prior knowledge of the shape of the targeted object. When a collection of possible shapes is available, the specification of a shape prior using kernel density estimation is a natural technique. Unfortunately, energy functionals arising from kernel density estimation are of a form that makes them impossible to directly minimize using efficient optimization algorithms such as graph cuts. Our main contribution is to show how one may recast the energy functional into a form that is minimizable iteratively and efficiently using graph cuts. PMID:24678141

  19. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument. PMID:21045892
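
    For reference, the off-axis model discussed here can be evaluated stably as follows; this is a sketch of the standard modified Rician density, while the on-axis, gamma-based core model is the paper's separate contribution:

        import numpy as np
        from scipy.special import i0e

        def modified_rician_pdf(I, Ic, Is):
            """Modified Rician density for off-axis AO speckle intensity,
            p(I) = (1/Is) exp(-(I+Ic)/Is) I0(2 sqrt(I*Ic)/Is), with coherent
            part Ic and speckle part Is; i0e is the exponentially scaled
            Bessel function, used here to avoid overflow."""
            z = 2.0 * np.sqrt(I * Ic) / Is
            return np.exp(-(I + Ic) / Is + z) * i0e(z) / Is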

  20. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran; Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We investigate first the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
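
    A minimal, self-contained sketch of the basic algorithm, killing one trajectory per iteration on a toy Ornstein-Uhlenbeck reaction coordinate; all parameters are illustrative, and the estimator shown is the standard (1 - 1/N)^kills form:

        import numpy as np

        rng = np.random.default_rng(4)

        def ou_path(n=200):
            """Toy reaction-coordinate time series: a discrete OU path from 0."""
            x, path = 0.0, []
            for _ in range(n):
                x += -0.1 * x + 0.1 * rng.normal()
                path.append(x)
            return np.array(path)

        def ams_tail_probability(level=1.5, n_traj=100, max_kill=500):
            """Adaptive multilevel splitting estimate of P(max of path > level)."""
            trajs = [ou_path() for _ in range(n_traj)]
            kills = 0
            while kills < max_kill:
                maxima = np.array([t.max() for t in trajs])
                worst = int(maxima.argmin())
                if maxima[worst] >= level:
                    break                    # every trajectory has crossed
                donors = np.where(maxima > maxima[worst])[0]
                if donors.size == 0:
                    break
                donor = trajs[rng.choice(donors)]
                k = int(np.argmax(donor > maxima[worst]))  # first crossing
                x, path = donor[k], list(donor[: k + 1])
                for _ in range(len(donor) - k - 1):        # rebranch and resimulate
                    x += -0.1 * x + 0.1 * rng.normal()
                    path.append(x)
                trajs[worst] = np.array(path)
                kills += 1
            return (1 - 1 / n_traj) ** kills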

  1. A new adaptive classifier using iterative filtering. [classification of remotely sensed data in visible and near infrared bands]

    NASA Technical Reports Server (NTRS)

    Actkinson, A. L.

    1974-01-01

    To cope with signature variability, an algorithm has been defined which will adaptively classify remotely sensed data in the visible and near infrared bands. The signal is divided into a space-dependent component and a target-dependent component. The target-dependent component is assumed fixed across the image for each target type. The space-dependent component is estimated iteratively by a weighted, least-squares algorithm. Included are the derivations of the sensor model and the two-dimensional estimation algorithm.

  2. Non-iterative adaptive time-stepping scheme with temporal truncation error control for simulating variable-density flow

    NASA Astrophysics Data System (ADS)

    Hirthe, Eugenia M.; Graf, Thomas

    2012-12-01

    The automatic non-iterative second-order time-stepping scheme based on the temporal truncation error proposed by Kavetski et al. [Kavetski D, Binning P, Sloan SW. Non-iterative time-stepping schemes with adaptive truncation error control for the solution of Richards equation. Water Resour Res 2002;38(10):1211, http://dx.doi.org/10.1029/2001WR000720.] is implemented into the code of the HydroGeoSphere model. This time-stepping scheme is applied for the first time to the low-Rayleigh-number thermal Elder problem of free convection in porous media [van Reeuwijk M, Mathias SA, Simmons CT, Ward JD. Insights from a pseudospectral approach to the Elder problem. Water Resour Res 2009;45:W04416, http://dx.doi.org/10.1029/2008WR007421.], and to the solutal [Shikaze SG, Sudicky EA, Schwartz FW. Density-dependent solute transport in discretely-fractured geological media: is prediction possible? J Contam Hydrol 1998;34:273-91] problem of free convection in fractured-porous media. Numerical simulations demonstrate that the proposed scheme efficiently limits the temporal truncation error to a user-defined tolerance by controlling the time-step size. The non-iterative second-order time-stepping scheme can be applied to (i) thermal and solutal variable-density flow problems, (ii) linear and non-linear density functions, and (iii) problems including porous and fractured-porous media.

  3. Adapting internal statistical models for interpreting visual cues to depth

    PubMed Central

    Seydell, Anna; Knill, David C.; Trommershäuser, Julia

    2010-01-01

    The informativeness of sensory cues depends critically on statistical regularities in the environment. However, statistical regularities vary between different object categories and environments. We asked whether and how the brain changes the prior assumptions about scene statistics used to interpret visual depth cues when stimulus statistics change. Subjects judged the slants of stereoscopically presented figures by adjusting a virtual probe perpendicular to the surface. In addition to stereoscopic disparities, the aspect ratio of the stimulus in the image provided a “figural compression” cue to slant, whose reliability depends on the distribution of aspect ratios in the world. As we manipulated this distribution from regular to random and back again, subjects’ reliance on the compression cue relative to stereoscopic cues changed accordingly. When we randomly interleaved stimuli from shape categories (ellipses and diamonds) with different statistics, subjects gave less weight to the compression cue for figures from the category with more random aspect ratios. Our results demonstrate that relative cue weights vary rapidly as a function of recently experienced stimulus statistics, and that the brain can use different statistical models for different object categories. We show that subjects’ behavior is consistent with that of a broad class of Bayesian learning models. PMID:20465321

  4. Adaptation and the color statistics of natural images.

    PubMed

    Webster, M A; Mollon, J D

    1997-12-01

    Color perception depends profoundly on adaptation processes that adjust sensitivity in response to the prevailing pattern of stimulation. We examined how color sensitivity and appearance might be influenced by adaptation to the color distributions characteristic of natural images. Color distributions were measured for natural scenes by sampling an array of locations within each scene with a spectroradiometer, or by recording each scene with a digital camera successively through 31 interference filters. The images were used to reconstruct the L, M and S cone excitation at each spatial location, and the contrasts along three post-receptoral axes [L + M, L - M or S - (L + M)]. Individual scenes varied substantially in their mean chromaticity and luminance, in the principal color-luminance axes of their distributions, and in the range of contrasts in their distributions. Chromatic contrasts were biased along a relatively narrow range of bluish to yellowish-green angles, lying roughly between the S - (L + M) axis (which was more characteristic of scenes with lush vegetation and little sky) and a unique blue-yellow axis (which was more typical of arid scenes). For many scenes L - M and S - (L + M) signals were highly correlated, with weaker correlations between luminance and chromaticity. We use a two-stage model (von Kries scaling followed by decorrelation) to show how the appearance of colors may be altered by light adaptation to the mean of the distributions and by contrast adaptation to the contrast range and principal axes of the distributions; and we show that such adjustments are qualitatively consistent with empirical measurements of asymmetric color matches obtained after adaptation to successive random samples drawn from natural distributions of chromaticities and lightnesses. Such adaptation effects define the natural range of operating states of the visual system. PMID:9425544

  5. Polychromatic Iterative Statistical Material Image Reconstruction for Photon-Counting Computed Tomography.

    PubMed

    Weidinger, Thomas; Buzug, Thorsten M; Flohr, Thomas; Kappler, Steffen; Stierstorfer, Karl

    2016-01-01

    This work proposes a dedicated statistical algorithm to perform a direct reconstruction of material-decomposed images from data acquired with photon-counting detectors (PCDs) in computed tomography. It is based on local approximations (surrogates) of the negative logarithmic Poisson probability function. Exploiting the convexity of this function allows for parallel updates of all image pixels. Parallel updates can compensate for the rather slow convergence that is intrinsic to statistical algorithms. We investigate the accuracy of the algorithm for ideal photon-counting detectors. Complementarily, we apply the algorithm to simulation data of a realistic PCD with its spectral resolution limited by K-escape, charge sharing, and pulse-pileup. For data from both an ideal and realistic PCD, the proposed algorithm is able to correct beam-hardening artifacts and quantitatively determine the material fractions of the chosen basis materials. Via regularization we were able to achieve a reduction of image noise for the realistic PCD that is up to 90% lower compared to material images from a linear, image-based material decomposition using FBP images. Additionally, we find a dependence of the algorithm's convergence speed on the threshold selection within the PCD. PMID:27195003

  6. Polychromatic Iterative Statistical Material Image Reconstruction for Photon-Counting Computed Tomography

    PubMed Central

    Weidinger, Thomas; Buzug, Thorsten M.; Flohr, Thomas; Kappler, Steffen; Stierstorfer, Karl

    2016-01-01

    This work proposes a dedicated statistical algorithm to perform a direct reconstruction of material-decomposed images from data acquired with photon-counting detectors (PCDs) in computed tomography. It is based on local approximations (surrogates) of the negative logarithmic Poisson probability function. Exploiting the convexity of this function allows for parallel updates of all image pixels. Parallel updates can compensate for the rather slow convergence that is intrinsic to statistical algorithms. We investigate the accuracy of the algorithm for ideal photon-counting detectors. Complementarily, we apply the algorithm to simulation data of a realistic PCD with its spectral resolution limited by K-escape, charge sharing, and pulse-pileup. For data from both an ideal and realistic PCD, the proposed algorithm is able to correct beam-hardening artifacts and quantitatively determine the material fractions of the chosen basis materials. Via regularization we were able to achieve a reduction of image noise for the realistic PCD that is up to 90% lower compared to material images from a linear, image-based material decomposition using FBP images. Additionally, we find a dependence of the algorithm's convergence speed on the threshold selection within the PCD. PMID:27195003
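
    The objective being minimized has a simple core. A sketch of the polychromatic Poisson model for a single ray; the surrogate construction and parallel pixel updates of the paper are omitted, and all array shapes are assumptions:

        import numpy as np

        def expected_counts(t, spectra, mu):
            """Polychromatic forward model for one detector pixel.
            t: material thicknesses (n_mat,); spectra: (n_bins, n_E)
            effective bin spectra; mu: (n_mat, n_E) attenuation coefficients."""
            return spectra @ np.exp(-(t @ mu))

        def neg_log_likelihood(t, y, spectra, mu):
            """Negative log Poisson likelihood that the algorithm descends on."""
            lam = expected_counts(t, spectra, mu)
            return float(np.sum(lam - y * np.log(np.maximum(lam, 1e-12))))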

  7. Unscented fuzzy-controlled current statistic model and adaptive filtering for tracking maneuvering targets

    NASA Astrophysics Data System (ADS)

    Hu, Hongtao; Jing, Zhongliang; Hu, Shiqiang

    2006-12-01

    A novel adaptive algorithm for tracking maneuvering targets is proposed. The algorithm is implemented with fuzzy-controlled current statistic model adaptive filtering and the unscented transformation. A fuzzy system allows the filter to tune the magnitude of maximum accelerations to adapt to different target maneuvers, while the unscented transformation can effectively handle nonlinear systems. Simulation results for a bearing-only tracking scenario show that the proposed algorithm is robust over a wide range of maneuvers and overcomes the shortcomings of the traditional current statistic model and adaptive filtering algorithm.
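
    The unscented transformation at the heart of such a filter is compact enough to sketch. This is the basic version with a single kappa parameter, not the authors' full fuzzy-controlled tracker; f is assumed to return a 1-D array:

        import numpy as np

        def unscented_transform(f, mu, P, kappa=1.0):
            """Propagate a mean and covariance through a nonlinearity f
            using sigma points (basic unscented transform, no scaling
            refinements)."""
            n = len(mu)
            L = np.linalg.cholesky((n + kappa) * P)
            pts = [mu] + [mu + L[:, i] for i in range(n)] \
                       + [mu - L[:, i] for i in range(n)]
            w = np.full(2 * n + 1, 0.5 / (n + kappa))
            w[0] = kappa / (n + kappa)
            ys = np.array([np.atleast_1d(f(p)) for p in pts])
            mean = w @ ys
            d = ys - mean
            return mean, (w[:, None] * d).T @ d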

  8. Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures

    PubMed Central

    Hayakawa, Y; Kober, C

    2014-01-01

    Objectives: When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Methods: Adjacent CT images often depict similar anatomical structures. Therefore, reconstruction of images with weak artefacts was first attempted using projection data from an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood–expectation maximization algorithm was applied. Next, the ordered subset–expectation maximization algorithm was examined. Alternatively, a small region of interest setting was designated. Finally, a general-purpose graphics processing unit was applied in both situations. Results: The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset–expectation maximization and small region of interest reduced the processing duration without apparent detriment. The general-purpose graphics processing unit realized high performance. Conclusions: A statistical reconstruction method was applied for streak artefact reduction. The alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset–expectation maximization, small region of interest and general-purpose graphics processing unit, achieved fast artefact correction. PMID:24754471
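
    The maximum likelihood–expectation maximization update used here has a standard multiplicative form. A generic sketch; A, y and the iteration count are placeholders, and the CT-specific modeling of the paper is omitted:

        import numpy as np

        def mlem(A, y, n_iter=50):
            """Basic ML-EM for Poisson data y ~ Poisson(A x):
            multiplicative update x <- x * A^T(y/Ax) / A^T 1."""
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])        # sensitivity image
            for _ in range(n_iter):
                ratio = y / np.maximum(A @ x, 1e-12)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x

    The ordered-subset variant applies the same update to subsets of the rows of A in turn, which is where the reported reduction in processing duration comes from.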

  9. Diversity of immune strategies explained by adaptation to pathogen statistics

    PubMed Central

    Mayer, Andreas; Mora, Thierry; Rivoire, Olivier; Walczak, Aleksandra M.

    2016-01-01

    Biological organisms have evolved a wide range of immune mechanisms to defend themselves against pathogens. Beyond molecular details, these mechanisms differ in how protection is acquired, processed, and passed on to subsequent generations—differences that may be essential to long-term survival. Here, we introduce a mathematical framework to compare the long-term adaptation of populations as a function of the pathogen dynamics that they experience and of the immune strategy that they adopt. We find that the two key determinants of an optimal immune strategy are the frequency and the characteristic timescale of the pathogens. Depending on these two parameters, our framework identifies distinct modes of immunity, including adaptive, innate, bet-hedging, and CRISPR-like immunities, which recapitulate the diversity of natural immune systems. PMID:27432970

  10. Diversity of immune strategies explained by adaptation to pathogen statistics.

    PubMed

    Mayer, Andreas; Mora, Thierry; Rivoire, Olivier; Walczak, Aleksandra M

    2016-08-01

    Biological organisms have evolved a wide range of immune mechanisms to defend themselves against pathogens. Beyond molecular details, these mechanisms differ in how protection is acquired, processed, and passed on to subsequent generations—differences that may be essential to long-term survival. Here, we introduce a mathematical framework to compare the long-term adaptation of populations as a function of the pathogen dynamics that they experience and of the immune strategy that they adopt. We find that the two key determinants of an optimal immune strategy are the frequency and the characteristic timescale of the pathogens. Depending on these two parameters, our framework identifies distinct modes of immunity, including adaptive, innate, bet-hedging, and CRISPR-like immunities, which recapitulate the diversity of natural immune systems. PMID:27432970

  11. Fast iterative adaptive nonuniformity correction with gradient minimization for infrared focal plane arrays

    NASA Astrophysics Data System (ADS)

    Zhao, Jufeng; Gao, Xiumin; Chen, Yueting; Feng, Huajun; Xu, Zhihai; Li, Qi

    2014-07-01

    A fast scene-based nonuniformity correction algorithm is proposed for fixed-pattern noise removal in infrared focal plane array imagery. Based on minimization of the L0 gradient of the estimated irradiance, the correction function is optimized by estimating the correction parameters via an iterative optimization strategy. When applied to different sets of real IR data, the proposed method provides enhanced results with good visual quality, striking a good balance between nonuniformity correction and detail preservation. Compared with other state-of-the-art approaches, this algorithm can estimate the irradiance rapidly and accurately, with fewer ghosting artifacts.
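
    As a point of comparison for scene-based NUC, the classic constant-statistics baseline fits in a few lines; this is a different, older technique than the paper's L0-gradient method, shown only to make the problem concrete:

        import numpy as np

        def constant_statistics_nuc(frames):
            """Constant-statistics NUC: if every pixel sees the same
            irradiance statistics over time, matching each pixel's temporal
            mean/std to the global ones removes fixed-pattern noise.
            frames: (n_frames, H, W) video cube."""
            m = frames.mean(axis=0)
            s = frames.std(axis=0) + 1e-6
            gain = s.mean() / s
            offset = m.mean() - gain * m
            return gain * frames + offset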

  12. Iterative adaptive radiations of fossil canids show no evidence for diversity-dependent trait evolution.

    PubMed

    Slater, Graham J

    2015-04-21

    A long-standing hypothesis in adaptive radiation theory is that ecological opportunity constrains rates of phenotypic evolution, generating a burst of morphological disparity early in clade history. Empirical support for the early burst model is rare in comparative data, however. One possible reason for this lack of support is that most phylogenetic tests have focused on extant clades, neglecting information from fossil taxa. Here, I test for the expected signature of adaptive radiation using the outstanding 40-My fossil record of North American canids. Models implying time- and diversity-dependent rates of morphological evolution are strongly rejected for two ecologically important traits, body size and grinding area of the molar teeth. Instead, Ornstein-Uhlenbeck processes implying repeated, and sometimes rapid, attraction to distinct dietary adaptive peaks receive substantial support. Diversity-dependent rates of morphological evolution seem uncommon in clades, such as canids, that exhibit a pattern of replicated adaptive radiation. Instead, these clades might best be thought of as deterministic radiations in constrained Simpsonian subzones of a major adaptive zone. Support for adaptive peak models may be diagnostic of subzonal radiations. It remains to be seen whether early burst or ecological opportunity models can explain broader adaptive radiations, such as the evolution of higher taxa. PMID:25901311
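
    A minimal sketch of the Ornstein-Uhlenbeck model favoured by these results, assuming hypothetical parameter values: simulate a trait attracted to an adaptive peak theta, then recover the attraction strength alpha from the exact AR(1) form of the discretized process.

      import numpy as np

      rng = np.random.default_rng(0)
      alpha, theta, sigma, dt, n = 1.5, 4.0, 0.5, 0.01, 20000   # hypothetical

      # Euler-Maruyama simulation of dX = alpha*(theta - X) dt + sigma dW
      x = np.empty(n)
      x[0] = 0.0
      for t in range(n - 1):
          x[t + 1] = (x[t] + alpha * (theta - x[t]) * dt
                      + sigma * np.sqrt(dt) * rng.standard_normal())

      # The discretized OU process is AR(1): x[t+1] = c + phi * x[t] + noise,
      # with phi = exp(-alpha * dt) and c = theta * (1 - phi).
      phi, c = np.polyfit(x[:-1], x[1:], 1)
      print("alpha_hat = %.2f, theta_hat = %.2f"
            % (-np.log(phi) / dt, c / (1 - phi)))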

  13. Iterative adaptive radiations of fossil canids show no evidence for diversity-dependent trait evolution

    NASA Astrophysics Data System (ADS)

    Slater, Graham J.

    2015-04-01

    A long-standing hypothesis in adaptive radiation theory is that ecological opportunity constrains rates of phenotypic evolution, generating a burst of morphological disparity early in clade history. Empirical support for the early burst model is rare in comparative data, however. One possible reason for this lack of support is that most phylogenetic tests have focused on extant clades, neglecting information from fossil taxa. Here, I test for the expected signature of adaptive radiation using the outstanding 40-My fossil record of North American canids. Models implying time- and diversity-dependent rates of morphological evolution are strongly rejected for two ecologically important traits, body size and grinding area of the molar teeth. Instead, Ornstein-Uhlenbeck processes implying repeated, and sometimes rapid, attraction to distinct dietary adaptive peaks receive substantial support. Diversity-dependent rates of morphological evolution seem uncommon in clades, such as canids, that exhibit a pattern of replicated adaptive radiation. Instead, these clades might best be thought of as deterministic radiations in constrained Simpsonian subzones of a major adaptive zone. Support for adaptive peak models may be diagnostic of subzonal radiations. It remains to be seen whether early burst or ecological opportunity models can explain broader adaptive radiations, such as the evolution of higher taxa.

  14. Iterative adaptive radiations of fossil canids show no evidence for diversity-dependent trait evolution

    PubMed Central

    Slater, Graham J.

    2015-01-01

    A long-standing hypothesis in adaptive radiation theory is that ecological opportunity constrains rates of phenotypic evolution, generating a burst of morphological disparity early in clade history. Empirical support for the early burst model is rare in comparative data, however. One possible reason for this lack of support is that most phylogenetic tests have focused on extant clades, neglecting information from fossil taxa. Here, I test for the expected signature of adaptive radiation using the outstanding 40-My fossil record of North American canids. Models implying time- and diversity-dependent rates of morphological evolution are strongly rejected for two ecologically important traits, body size and grinding area of the molar teeth. Instead, Ornstein–Uhlenbeck processes implying repeated, and sometimes rapid, attraction to distinct dietary adaptive peaks receive substantial support. Diversity-dependent rates of morphological evolution seem uncommon in clades, such as canids, that exhibit a pattern of replicated adaptive radiation. Instead, these clades might best be thought of as deterministic radiations in constrained Simpsonian subzones of a major adaptive zone. Support for adaptive peak models may be diagnostic of subzonal radiations. It remains to be seen whether early burst or ecological opportunity models can explain broader adaptive radiations, such as the evolution of higher taxa. PMID:25901311

  15. J-Adaptive estimation with estimated noise statistics [for orbit determination]

    NASA Technical Reports Server (NTRS)

    Jazwinski, A. H.; Hipkins, C.

    1975-01-01

    The J-Adaptive estimator described by Jazwinski and Hipkins (1972) is extended to include the simultaneous estimation of the statistics of the unmodeled system accelerations. With the aid of simulations it is demonstrated that the J-Adaptive estimator with estimated noise statistics can automatically estimate satellite orbits to an accuracy comparable with the data noise levels, when excellent, continuous tracking coverage is available. Such tracking coverage will be available from satellite-to-satellite tracking.
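
    The J-Adaptive equations themselves are not reproduced here; as a scalar stand-in for estimating noise statistics alongside the state, the sketch below picks the process-noise variance that maximizes the innovation likelihood of a simple Kalman filter (all values hypothetical).

      import numpy as np

      rng = np.random.default_rng(1)
      n, r, q_true = 2000, 0.5 ** 2, 0.2 ** 2                        # variances
      x_state = np.cumsum(np.sqrt(q_true) * rng.standard_normal(n))  # random walk
      z = x_state + np.sqrt(r) * rng.standard_normal(n)              # noisy data

      def innovation_nll(q, z, r):
          # negative log-likelihood of the innovations of a scalar Kalman
          # filter with random-walk dynamics and process-noise variance q
          x_hat, p, nll = z[0], r, 0.0
          for zk in z[1:]:
              p += q                        # predict
              s = p + r                     # innovation variance
              nu = zk - x_hat               # innovation
              nll += 0.5 * (np.log(2.0 * np.pi * s) + nu * nu / s)
              k = p / s
              x_hat += k * nu               # update
              p *= 1.0 - k
          return nll

      qs = np.logspace(-4, 0, 40)
      q_hat = qs[np.argmin([innovation_nll(q, z, r) for q in qs])]
      print("q_hat = %.3f, true q = %.3f" % (q_hat, q_true))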

  16. TH-C-18A-01: Is Automatic Tube Current Modulation Still Necessary with Statistical Iterative Reconstruction?

    SciTech Connect

    Li, K; Zhao, W; Gomez-Cardona, D; Chen, G

    2014-06-15

    Purpose: Automatic tube current modulation (TCM) has been widely used in modern multi-detector CT to reduce noise spatial nonuniformity and streaks to improve dose efficiency. With the advent of statistical iterative reconstruction (SIR), it is expected that the importance of TCM may diminish, since SIR incorporates statistical weighting factors to reduce the negative influence of photon-starved rays. The purpose of this work is to address the following questions: Does SIR offer the same benefits as TCM? If yes, are there still any clinical benefits to using TCM? Methods: An anthropomorphic CIRS chest phantom was scanned using a state-of-the-art clinical CT system equipped with an SIR engine (Veo™, GE Healthcare). The phantom was first scanned with TCM using a routine protocol and a low-dose (LD) protocol. It was then scanned without TCM using the same protocols. For each acquisition, both FBP and Veo reconstructions were performed. All scans were repeated 50 times to generate an image ensemble from which noise spatial nonuniformity (NSN) and streak artifact levels were quantified. Monte-Carlo experiments were performed to estimate skin dose. Results: For FBP, noise streaks were reduced by 4% using TCM for both routine and LD scans. NSN values were actually slightly higher with TCM (0.25) than without TCM (0.24) for both routine and LD scans. In contrast, for Veo, noise streaks became negligible (<1%) with or without TCM for both routine and LD scans, and the NSN was reduced to 0.10 (low dose) or 0.08 (routine). The overall skin dose was 2% lower at the shoulders and more uniformly distributed across the skin without TCM. Conclusion: SIR without TCM offers superior reduction in noise nonuniformity and streaks relative to FBP with TCM. For some clinical applications in which skin dose may be a concern, SIR without TCM may be a better option. K. Li, W. Zhao, D. Gomez-Cardona: Nothing to disclose; G.-H. Chen: Research funded, General Electric Company
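
    The statistical weighting referred to above can be made concrete with a toy penalized weighted least-squares reconstruction, in which photon-starved rays receive small weights and therefore contribute little to the fit (hypothetical matrices and values; clinical SIR engines such as Veo use far more elaborate models).

      import numpy as np

      def pwls(A, y, w, beta=0.1, step=0.1, n_iters=2000):
          # minimize (y - Ax)^T W (y - Ax) + beta * ||D x||^2 by gradient
          # descent; w holds per-ray weights (high counts -> high weight)
          n = A.shape[1]
          D = np.eye(n) - np.eye(n, k=1)    # first-difference roughness penalty
          x = np.zeros(n)
          for _ in range(n_iters):
              grad = -2.0 * A.T @ (w * (y - A @ x)) + 2.0 * beta * (D.T @ D @ x)
              x -= step * grad
          return x

      A = np.array([[1.0, 1.0, 0.0],
                    [0.0, 1.0, 1.0],
                    [1.0, 0.0, 1.0]])
      x_true = np.array([1.0, 2.0, 1.5])
      y = A @ x_true + np.array([0.0, 0.5, 0.0])   # second ray is photon-starved
      w = np.array([1.0, 0.05, 1.0])               # ...so it is down-weighted
      print(np.round(pwls(A, y, w), 2))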

  17. The brain uses adaptive internal models of scene statistics for sensorimotor estimation and planning.

    PubMed

    Kwon, Oh-Sang; Knill, David C

    2013-03-12

    Because of uncertainty and noise, the brain should use accurate internal models of the statistics of objects in scenes to interpret sensory signals. Moreover, the brain should adapt its internal models to the statistics within local stimulus contexts. Consider the problem of hitting a baseball. The impoverished nature of the visual information available makes it imperative that batters use knowledge of the temporal statistics and history of previous pitches to accurately estimate pitch speed. Using a laboratory analog of hitting a baseball, we tested the hypothesis that the brain uses adaptive internal models of the statistics of object speeds to plan hand movements to intercept moving objects. We fit Bayesian observer models to subjects' performance to estimate the statistical environments in which subjects' performance would be ideal and compared the estimated statistics with the true statistics of stimuli in an experiment. A first experiment showed that subjects accurately estimated and used the variance of object speeds in a stimulus set to time hitting behavior but also showed serial biases that are suboptimal for stimuli that were uncorrelated over time. A second experiment showed that the strength of the serial biases depended on the temporal correlations within a stimulus set, even when the biases were estimated from uncorrelated stimulus pairs subsampled from the larger set. Taken together, the results show that subjects adapted their internal models of the variance and covariance of object speeds within a stimulus set to plan interceptive movements but retained a bias to positive correlations. PMID:23440185
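
    Under Gaussian assumptions, the core computation of such a Bayesian observer reduces to reliability-weighted averaging of the noisy measurement and the adapted prior over speeds; a minimal sketch with hypothetical numbers:

      def posterior_speed(measured, sigma_obs, prior_mean, prior_var):
          # Gaussian likelihood x Gaussian prior: the posterior mean shrinks
          # the noisy measurement toward the learned mean of recent speeds
          w = prior_var / (prior_var + sigma_obs ** 2)   # reliability of data
          return w * measured + (1.0 - w) * prior_mean

      # poor visual information (large sigma_obs) -> strong reliance on prior
      print(posterior_speed(measured=12.0, sigma_obs=3.0,
                            prior_mean=9.0, prior_var=1.0))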

  18. Saccadic gain adaptation is predicted by the statistics of natural fluctuations in oculomotor function

    PubMed Central

    Albert, Mark V.; Catz, Nicolas; Thier, Peter; Kording, Konrad

    2012-01-01

    Due to multiple factors such as fatigue, muscle strengthening, and neural plasticity, the responsiveness of the motor apparatus to neural commands changes over time. To enable precise movements the nervous system must adapt to compensate for these changes. Recent models of motor adaptation derive from assumptions about the way the motor apparatus changes. Characterizing these changes is difficult because motor adaptation happens at the same time, masking most of the effects of ongoing changes. Here, we analyze eye movements of monkeys with lesions to the posterior cerebellar vermis that impair adaptation. Their fluctuations better reveal the underlying changes of the motor system over time. When these measured, unadapted changes are used to derive optimal motor adaptation rules the prediction precision significantly improves. Among three models that similarly fit single-day adaptation results, the model that also matches the temporal correlations of the non-adapting saccades most accurately predicts multiple day adaptation. Saccadic gain adaptation is well matched to the natural statistics of fluctuations of the oculomotor plant. PMID:23230397

  19. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484
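
    BARISTA's majorizing matrices are built from the coil sensitivity maps and are not reproduced here; the sketch below shows the generic ingredients the abstract names (iterative soft thresholding, momentum acceleration, adaptive momentum restarting) on a toy sparse least-squares problem with hypothetical data.

      import numpy as np

      def soft(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def fista_restart(A, y, lam=0.1, n_iters=300):
          L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of A^T A
          x = np.zeros(A.shape[1])
          z, t = x.copy(), 1.0
          for _ in range(n_iters):
              x_new = soft(z - A.T @ (A @ z - y) / L, lam / L)   # prox step
              t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
              if np.dot(z - x_new, x_new - x) > 0.0:
                  t_new, z = 1.0, x_new.copy()                   # restart
              else:
                  z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum
              x, t = x_new, t_new
          return x

      rng = np.random.default_rng(2)
      A = rng.standard_normal((40, 100))
      x_true = np.zeros(100)
      x_true[[3, 30, 70]] = [1.0, -2.0, 1.5]
      xh = fista_restart(A, A @ x_true)
      print(sorted(np.argsort(-np.abs(xh))[:3].tolist()))        # -> [3, 30, 70]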

  20. Adaptive Perfectionism, Maladaptive Perfectionism and Statistics Anxiety in Graduate Psychology Students

    ERIC Educational Resources Information Center

    Comerchero, Victoria; Fortugno, Dominick

    2013-01-01

    The current study examined if correlations between statistics anxiety and dimensions of perfectionism (adaptive and maladaptive) were present amongst a sample of psychology graduate students (N = 96). Results demonstrated that scores on the APS-R Discrepancy scale, corresponding to maladaptive perfectionism, correlated with higher levels of…

  1. Observer performance for adaptive, image-based denoising and filtered back projection compared to scanner-based iterative reconstruction for lower dose CT enterography

    PubMed Central

    Fletcher, Joel G.; Hara, Amy K.; Fidler, Jeff L.; Silva, Alvin C.; Barlow, John M.; Carter, Rickey E.; Bartley, Adam; Shiung, Maria; Holmes, David R.; Weber, Nicolas K.; Bruining, David H.; Yu, Lifeng; McCollough, Cynthia H.

    2015-01-01

    Purpose The purpose of this study was to compare observer performance for detection of intestinal inflammation for low-dose CT enterography (LD-CTE) using scanner-based iterative reconstruction (IR) vs. vendor-independent, adaptive image-based noise reduction (ANLM) or filtered back projection (FBP). Methods Sixty-two LD-CTE exams were performed. LD-CTE images were reconstructed using IR, ANLM, and FBP. Three readers, blinded to image type, marked intestinal inflammation directly on patient images using a specialized workstation over three sessions, interpreting one image type/patient/session. Reference standard was created by a gastroenterologist and radiologist, who reviewed all available data including dismissal Gastroenterology records, and who marked all inflamed bowel segments on the same workstation. Reader and reference localizations were then compared. Non-inferiority was tested using Jackknife free-response ROC (JAFROC) figures of merit (FOM) for ANLM and FBP compared to IR. Patient-level analyses for the presence or absence of inflammation were also conducted. Results There were 46 inflamed bowel segments in 24/62 patients (CTDIvol interquartile range 6.9–10.1 mGy). JAFROC FOM for ANLM and FBP were 0.84 (95% CI 0.75–0.92) and 0.84 (95% CI 0.75–0.92), and were statistically non-inferior to IR (FOM 0.84; 95% CI 0.76–0.93). Patient-level pooled confidence intervals for sensitivity widely overlapped, as did specificities. Image quality was rated as better with IR and AMLM compared to FBP (p < 0.0001), with no difference in reading times (p = 0.89). Conclusions Vendor-independent adaptive image-based noise reduction and FBP provided observer performance that was non-inferior to scanner-based IR methods. Adaptive image-based noise reduction maintained or improved upon image quality ratings compared to FBP when performing CTE at lower dose levels. PMID:25725794

  2. Statistical model based iterative reconstruction (MBIR) in clinical CT systems. Part II. Experimental assessment of spatial resolution performance

    PubMed Central

    Li, Ke; Garrett, John; Ge, Yongshuai; Chen, Guang-Hong

    2014-01-01

    Purpose: Statistical model based iterative reconstruction (MBIR) methods have been introduced to clinical CT systems and are being used in some clinical diagnostic applications. The purpose of this paper is to experimentally assess the unique spatial resolution characteristics of this nonlinear reconstruction method and identify its potential impact on the detectabilities and the associated radiation dose levels for specific imaging tasks. Methods: The thoracic section of a pediatric phantom was repeatedly scanned 50 or 100 times using a 64-slice clinical CT scanner at four different dose levels [CTDIvol =4, 8, 12, 16 (mGy)]. Both filtered backprojection (FBP) and MBIR (Veo®, GE Healthcare, Waukesha, WI) were used for image reconstruction and results were compared with one another. Eight test objects in the phantom with contrast levels ranging from 13 to 1710 HU were used to assess spatial resolution. The axial spatial resolution was quantified with the point spread function (PSF), while the z resolution was quantified with the slice sensitivity profile. Both were measured locally on the test objects and in the image domain. The dependence of spatial resolution on contrast and dose levels was studied. The study also features a systematic investigation of the potential trade-off between spatial resolution and locally defined noise and their joint impact on the overall image quality, which was quantified by the image domain-based channelized Hotelling observer (CHO) detectability index d′. Results: (1) The axial spatial resolution of MBIR depends on both radiation dose level and image contrast level, whereas it is supposedly independent of these two factors in FBP. The axial spatial resolution of MBIR always improved with an increasing radiation dose level and/or contrast level. (2) The axial spatial resolution of MBIR became equivalent to that of FBP at some transitional contrast level, above which MBIR demonstrated spatial resolution superior to that of FBP (and vice versa).

  3. Statistical model based iterative reconstruction (MBIR) in clinical CT systems. Part II. Experimental assessment of spatial resolution performance

    SciTech Connect

    Li, Ke; Chen, Guang-Hong; Garrett, John; Ge, Yongshuai

    2014-07-15

    Purpose: Statistical model based iterative reconstruction (MBIR) methods have been introduced to clinical CT systems and are being used in some clinical diagnostic applications. The purpose of this paper is to experimentally assess the unique spatial resolution characteristics of this nonlinear reconstruction method and identify its potential impact on the detectabilities and the associated radiation dose levels for specific imaging tasks. Methods: The thoracic section of a pediatric phantom was repeatedly scanned 50 or 100 times using a 64-slice clinical CT scanner at four different dose levels [CTDIvol =4, 8, 12, 16 (mGy)]. Both filtered backprojection (FBP) and MBIR (Veo®, GE Healthcare, Waukesha, WI) were used for image reconstruction and results were compared with one another. Eight test objects in the phantom with contrast levels ranging from 13 to 1710 HU were used to assess spatial resolution. The axial spatial resolution was quantified with the point spread function (PSF), while the z resolution was quantified with the slice sensitivity profile. Both were measured locally on the test objects and in the image domain. The dependence of spatial resolution on contrast and dose levels was studied. The study also features a systematic investigation of the potential trade-off between spatial resolution and locally defined noise and their joint impact on the overall image quality, which was quantified by the image domain-based channelized Hotelling observer (CHO) detectability index d′. Results: (1) The axial spatial resolution of MBIR depends on both radiation dose level and image contrast level, whereas it is supposedly independent of these two factors in FBP. The axial spatial resolution of MBIR always improved with an increasing radiation dose level and/or contrast level. (2) The axial spatial resolution of MBIR became equivalent to that of FBP at some transitional contrast level, above which MBIR demonstrated spatial resolution superior to that of FBP (and vice versa).

  4. Image Quality and Radiation Dose of CT Coronary Angiography with Automatic Tube Current Modulation and Strong Adaptive Iterative Dose Reduction Three-Dimensional (AIDR3D)

    PubMed Central

    Shen, Hesong; Dai, Guochao; Luo, Mingyue; Duan, Chaijie; Cai, Wenli; Liang, Dan; Wang, Xinhua; Zhu, Dongyun; Li, Wenru; Qiu, Jianping

    2015-01-01

    Purpose To investigate image quality and radiation dose of CT coronary angiography (CTCA) scanned using automatic tube current modulation (ATCM) and reconstructed by strong adaptive iterative dose reduction three-dimensional (AIDR3D). Methods Eighty-four consecutive CTCA patients were enrolled in the study. All patients were scanned using ATCM and reconstructed with strong AIDR3D, standard AIDR3D and filtered back-projection (FBP) respectively. Two radiologists who were blinded to the patients' clinical data and reconstruction methods evaluated image quality. Quantitative image quality evaluation included image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). To evaluate image quality qualitatively, the coronary artery was classified into 15 segments based on the modified guidelines of the American Heart Association. Qualitative image quality was evaluated using a 4-point scale. Radiation dose was calculated based on dose-length product. Results Compared with standard AIDR3D, strong AIDR3D had lower image noise, higher SNR and CNR; their differences were all statistically significant (P<0.05); compared with FBP, strong AIDR3D decreased image noise by 46.1%, increased SNR by 84.7%, and improved CNR by 82.2%; their differences were all statistically significant (P<0.05 or 0.001). Segments with diagnostic image quality for strong AIDR3D were 336 (100.0%), 486 (96.4%), and 394 (93.8%) in the proximal, middle, and distal parts respectively; whereas those for standard AIDR3D were 332 (98.8%), 472 (93.7%), 378 (90.0%), respectively; those for FBP were 217 (64.6%), 173 (34.3%), 114 (27.1%), respectively; total segments with diagnostic image quality in strong AIDR3D (1216, 96.5%) were higher than those of standard AIDR3D (1182, 93.8%) and FBP (504, 40.0%); the differences between strong AIDR3D and standard AIDR3D, strong AIDR3D and FBP were all statistically significant (P<0.05 or 0.001). The mean effective radiation dose was (2.55±1.21) mSv. Conclusion
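
    The objective metrics quoted above reduce to simple region-of-interest statistics; one common definition, sketched with hypothetical HU values:

      import numpy as np

      def snr_cnr(roi_vessel, roi_background):
          # SNR = mean/SD of the vessel ROI; CNR = (vessel - background)/noise
          signal, noise = roi_vessel.mean(), roi_vessel.std(ddof=1)
          contrast = signal - roi_background.mean()
          return signal / noise, contrast / noise

      rng = np.random.default_rng(3)
      vessel = rng.normal(350.0, 25.0, 500)   # enhanced lumen, HU
      fat = rng.normal(-80.0, 25.0, 500)      # perivascular background, HU
      print("SNR = %.1f, CNR = %.1f" % snr_cnr(vessel, fat))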

  5. Adaptive Iterative Dose Reduction Using Three Dimensional Processing (AIDR3D) Improves Chest CT Image Quality and Reduces Radiation Exposure

    PubMed Central

    Yamashiro, Tsuneo; Miyara, Tetsuhiro; Honda, Osamu; Kamiya, Hisashi; Murata, Kiyoshi; Ohno, Yoshiharu; Tomiyama, Noriyuki; Moriya, Hiroshi; Koyama, Mitsuhiro; Noma, Satoshi; Kamiya, Ayano; Tanaka, Yuko; Murayama, Sadayuki

    2014-01-01

    Objective To assess the advantages of Adaptive Iterative Dose Reduction using Three Dimensional Processing (AIDR3D) for image quality improvement and dose reduction for chest computed tomography (CT). Methods Institutional Review Boards approved this study and informed consent was obtained. Eighty-eight subjects underwent chest CT at five institutions using identical scanners and protocols. During a single visit, each subject was scanned using different tube currents: 240, 120, and 60 mA. Scan data were converted to images using AIDR3D and a conventional reconstruction mode (without AIDR3D). Using a 5-point scale from 1 (non-diagnostic) to 5 (excellent), three blinded observers independently evaluated image quality for three lung zones, four patterns of lung disease (nodule/mass, emphysema, bronchiolitis, and diffuse lung disease), and three mediastinal measurements (small structure visibility, streak artifacts, and shoulder artifacts). Differences in these scores were assessed by Scheffe's test. Results At each tube current, scans using AIDR3D had higher scores than those without AIDR3D, which were significant for lung zones (p<0.0001) and all mediastinal measurements (p<0.01). For lung diseases, significant improvements with AIDR3D were frequently observed at 120 and 60 mA. Scans with AIDR3D at 120 mA had significantly higher scores than those without AIDR3D at 240 mA for lung zones and mediastinal streak artifacts (p<0.0001), and slightly higher or equal scores for all other measurements. Scans with AIDR3D at 60 mA were also judged superior or equivalent to those without AIDR3D at 120 mA. Conclusion For chest CT, AIDR3D provides better image quality and can reduce radiation exposure by 50%. PMID:25153797

  6. Radiation dose reduction for coronary artery calcium scoring at 320-detector CT with adaptive iterative dose reduction 3D.

    PubMed

    Tatsugami, Fuminari; Higaki, Toru; Fukumoto, Wataru; Kaichi, Yoko; Fujioka, Chikako; Kiguchi, Masao; Yamamoto, Hideya; Kihara, Yasuki; Awai, Kazuo

    2015-06-01

    To assess the possibility of reducing the radiation dose for coronary artery calcium (CAC) scoring by using adaptive iterative dose reduction 3D (AIDR 3D) on a 320-detector CT scanner. Fifty-four patients underwent routine- and low-dose CT for CAC scoring. Low-dose CT was performed at one-third of the tube current used for routine-dose CT. Routine-dose CT was reconstructed with filtered back projection (FBP) and low-dose CT was reconstructed with AIDR 3D. We compared the calculated Agatston-, volume-, and mass scores of these images. The overall percentage difference in the Agatston-, volume-, and mass scores between routine- and low-dose CT studies was 15.9, 11.6, and 12.6%, respectively. There were no significant differences in the routine- and low-dose CT studies irrespective of the scoring algorithms applied. The CAC measurements of both imaging modalities were highly correlated with respect to the Agatston- (r = 0.996), volume- (r = 0.996), and mass score (r = 0.997; p < 0.001, all); the Bland-Altman limits of agreement scores were -37.4 to 51.4, -31.2 to 36.4 and -30.3 to 40.9%, respectively, suggesting that AIDR 3D was a good alternative for FBP. The mean effective radiation dose for routine- and low-dose CT was 2.2 and 0.7 mSv, respectively. The use of AIDR 3D made it possible to reduce the radiation dose by 67% for CAC scoring without impairing the quantification of coronary calcification. PMID:25754302
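
    The Bland-Altman limits of agreement quoted above come from the mean and standard deviation of the paired differences; a minimal sketch with hypothetical scores (the paper itself works with percentage differences):

      import numpy as np

      def bland_altman(a, b):
          # bias and 95% limits of agreement between paired measurements
          diff = a - b
          bias, sd = diff.mean(), diff.std(ddof=1)
          return bias, bias - 1.96 * sd, bias + 1.96 * sd

      routine = np.array([120.0, 305.0, 48.0, 410.0, 95.0])   # hypothetical
      low_dose = np.array([131.0, 290.0, 52.0, 398.0, 101.0])
      print("bias %.1f, LoA [%.1f, %.1f]" % bland_altman(routine, low_dose))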

  7. Modified H-statistic with adaptive Winsorized mean in two groups test

    NASA Astrophysics Data System (ADS)

    Teh, Kian Wooi; Abdullah, Suhaida; Yahaya, Sharipah Soaad Syed; Yusof, Zahayu Md

    2014-06-01

    The t-test is a commonly used test statistic when comparing two independent groups. The computation of this test is simple, yet it is powerful for normally distributed data with equal variances. However, real-life data often fail to meet these conditions. Violation of the assumptions (normality and equal variances) has a devastating effect on Type I error rate control in the t-test, and at the same time statistical power is reduced. Therefore, in this study the adaptive Winsorized mean with hinge estimator in H-statistic (AWM-H) is proposed. The H-statistic is a robust statistic that is able to handle the problem of nonnormality when comparing independent groups. This procedure originally used the Modified One-step M (MOM) estimator, which employs a trimming process. In the AWM-H procedure, the MOM estimator is replaced with the adaptive Winsorized mean (AWM) as the central tendency measure of the test. The Winsorization process is based on the hinge estimator HQ or HQ1. Overall results showed that the proposed method performed better than the original method and the classical method, especially under heavy-tailed distributions.
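
    The Winsorization step at the heart of the AWM is simple to state; the sketch below shows a fixed-k symmetric Winsorized mean (the adaptive choice of how much to Winsorize via the hinge estimators HQ/HQ1 is the paper's contribution and is not reproduced).

      import numpy as np

      def winsorized_mean(x, k):
          # replace the k smallest and k largest values with the nearest
          # remaining order statistics, then average
          s = np.sort(np.asarray(x, dtype=float))
          s[:k], s[-k:] = s[k], s[-k - 1]
          return s.mean()

      data = [2.1, 2.3, 2.4, 2.6, 2.7, 2.8, 9.9]   # one gross outlier
      print(winsorized_mean(data, k=1))             # robust to the 9.9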

  8. Adaptive Colour Contrast Coding in the Salamander Retina Efficiently Matches Natural Scene Statistics

    PubMed Central

    Vasserman, Genadiy; Schneidman, Elad; Segev, Ronen

    2013-01-01

    The visual system continually adjusts its sensitivity to the statistical properties of the environment through an adaptation process that starts in the retina. Colour perception and processing is commonly thought to occur mainly in high visual areas, and indeed most evidence for chromatic colour contrast adaptation comes from cortical studies. We show that colour contrast adaptation starts in the retina where ganglion cells adjust their responses to the spectral properties of the environment. We demonstrate that the ganglion cells match their responses to red-blue stimulus combinations according to the relative contrast of each of the input channels by rotating their functional response properties in colour space. Using measurements of the chromatic statistics of natural environments, we show that the retina balances inputs from the two (red and blue) stimulated colour channels, as would be expected from theoretical optimal behaviour. Our results suggest that colour is encoded in the retina based on the efficient processing of spectral information that matches spectral combinations in natural scenes on the colour processing level. PMID:24205373

  9. Research and Teaching: Statistics across the Curriculum Using an Iterative, Interactive Approach in an Inquiry-Based Lab Sequence

    ERIC Educational Resources Information Center

    Remsburg, Alysa J.; Harris, Michelle A.; Batzli, Janet M.

    2014-01-01

    How can science instructors prepare students for the statistics needed in authentic inquiry labs? We designed and assessed four instructional modules with the goals of increasing student confidence, appreciation, and performance in both experimental design and data analysis. Using extensions from a just-in-time teaching approach, we introduced…

  10. Small Sample Properties of an Adaptive Filter with Application to Low Volume Statistical Process Control

    SciTech Connect

    CROWDER, STEPHEN V.

    1999-09-01

    In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper we address the issue of low volume statistical process control. We investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. We develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, we study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. We show that far fewer data values are needed than is typically recommended for process control applications. We also demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.
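
    A hedged sketch of the flavour of approach described, not the authors' filter: parameters of an assumed AR(1) process start unknown and are refined recursively as each autocorrelated observation arrives, with the one-step-ahead prediction error as the monitored quantity (all values hypothetical).

      import numpy as np

      rng = np.random.default_rng(4)
      phi_true, mu, n = 0.6, 10.0, 30          # short autocorrelated series
      x = np.empty(n)
      x[0] = mu
      for t in range(n - 1):
          x[t + 1] = mu + phi_true * (x[t] - mu) + 0.5 * rng.standard_normal()

      # parameters start unknown and are updated with each observation
      mean_hat, phi_hat, sxx, sxy = x[0], 0.0, 1e-9, 0.0
      errors = []
      for t in range(1, n):
          pred = mean_hat + phi_hat * (x[t - 1] - mean_hat)
          errors.append(x[t] - pred)                 # would feed a control chart
          mean_hat += (x[t] - mean_hat) / (t + 1)    # running mean
          dx, dy = x[t - 1] - mean_hat, x[t] - mean_hat
          sxx, sxy = sxx + dx * dx, sxy + dx * dy
          phi_hat = sxy / sxx                        # running AR(1) coefficient
      print("phi_hat = %.2f, mean_hat = %.2f, last error = %.2f"
            % (phi_hat, mean_hat, errors[-1]))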

  11. Small sample properties of an adaptive filter with application to low volume statistical process control

    SciTech Connect

    Crowder, S.V.; Eshleman, L.

    1998-08-01

    In many manufacturing environments such as the nuclear weapons complex, emphasis has shifted from the regular production and delivery of large orders to infrequent small orders. However, the challenge to maintain the same high quality and reliability standards while building much smaller lot sizes remains. To meet this challenge, specific areas need more attention, including fast and on-target process start-up, low volume statistical process control, process characterization with small experiments, and estimating reliability given few actual performance tests of the product. In this paper the authors address the issue of low volume statistical process control. They investigate an adaptive filtering approach to process monitoring with a relatively short time series of autocorrelated data. The emphasis is on estimation and minimization of mean squared error rather than the traditional hypothesis testing and run length analyses associated with process control charting. The authors develop an adaptive filtering technique that assumes initial process parameters are unknown, and updates the parameters as more data become available. Using simulation techniques, they study the data requirements (the length of a time series of autocorrelated data) necessary to adequately estimate process parameters. They show that far fewer data values are needed than is typically recommended for process control applications. And they demonstrate the techniques with a case study from the nuclear weapons manufacturing complex.

  12. Drifter-based Predictions of the Spread of Surface Contamination Using Iterative Statistics: A Local Example with Global Applications

    NASA Astrophysics Data System (ADS)

    Fertitta, D. A.; Macdonald, A. M.; Rypina, I.

    2015-12-01

    In the aftermath of the 2011 Fukushima nuclear power plant accident, it became critical to determine how radionuclides, both from atmospheric deposition and direct ocean discharge, were spreading in the ocean. One successful method used drifter observations from the Global Drifter Program (GDP) to predict the timing of the spread of surface contamination. U.S. coasts are home to a number of nuclear power plants as well as other industries capable of leaking contamination into the surface ocean. Here, the spread of surface contamination from a hypothetical accident at the existing Pilgrim nuclear power plant on the coast of Massachusetts is used as an example to show how the historical drifter dataset can be used as a prediction tool. Our investigation uses a combined dataset of drifter tracks from the GDP and the NOAA Northeast Fisheries Science Center. Two scenarios are examined to estimate the spread of surface contamination: a local direct leakage scenario and a broader atmospheric deposition scenario that could result from an explosion. The local leakage scenario is used to study the spread of contamination within and beyond Cape Cod Bay, and the atmospheric deposition scenario is used to study the large-scale spread of contamination throughout the North Atlantic Basin. A multiple-iteration method of estimating probability makes best use of the available drifter data. This technique, which allows for direct observationally-based predictions, can be applied anywhere that drifter data are available to calculate estimates of the likelihood and general timing of the spread of surface contamination in the ocean.

  13. Person Fit Based on Statistical Process Control in an Adaptive Testing Environment. Research Report 98-13.

    ERIC Educational Resources Information Center

    van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.

    Person-fit research in the context of paper-and-pencil tests is reviewed, and some specific problems regarding person fit in the context of computerized adaptive testing (CAT) are discussed. Some new methods are proposed to investigate person fit in a CAT environment. These statistics are based on Statistical Process Control (SPC) theory. A…

  14. Iterative adaption of the bidimensional wall of the French T2 wind tunnel around a C5 axisymmetrical model: Infinite variation of the Mach number at zero incidence and a test at increased incidence

    NASA Technical Reports Server (NTRS)

    Archambaud, J. P.; Dor, J. B.; Payry, M. J.; Lamarche, L.

    1986-01-01

    The top and bottom two-dimensional walls of the T2 wind tunnel are adapted through an iterative process. The adaptation calculation takes into account the flow three-dimensionally. This method makes it possible to start with any shape of walls. The tests were performed with a C5 axisymmetric model at ambient temperature. Comparisons are made with the results of a true three-dimensional adaptation.

  15. Adaptive volume rendering of cardiac 3D ultrasound images: utilizing blood pool statistics

    NASA Astrophysics Data System (ADS)

    Åsen, Jon Petter; Steen, Erik; Kiss, Gabriel; Thorstensen, Anders; Rabben, Stein Inge

    2012-03-01

    In this paper we introduce and investigate an adaptive direct volume rendering (DVR) method for real-time visualization of cardiac 3D ultrasound. DVR is commonly used in cardiac ultrasound to visualize interfaces between tissue and blood. However, this is particularly challenging with ultrasound images due to variability of the signal within tissue as well as variability of noise signal within the blood pool. Standard DVR involves a global mapping of sample values to opacity by an opacity transfer function (OTF). While a global OTF may represent the interface correctly in one part of the image, it may result in tissue dropouts, or even artificial interfaces within the blood pool in other parts of the image. In order to increase correctness of the rendered image, the presented method utilizes blood pool statistics to do regional adjustments of the OTF. The regional adaptive OTF was compared with a global OTF in a dataset of apical recordings from 18 subjects. For each recording, three renderings from standard views (apical 4-chamber (A4C), inverted A4C (IA4C) and mitral valve (MV)) were generated for both methods, and each rendering was tuned to the best visual appearance by a physician echocardiographer. For each rendering we measured the mean absolute error (MAE) between the rendering depth buffer and a validated left ventricular segmentation. The difference d in MAE between the global and regional method was calculated and t-test results are reported with significant improvements for the regional adaptive method (dA4C = 1.5 +/- 0.3 mm, dIA4C = 2.5 +/- 0.4 mm, dMV = 1.7 +/- 0.2 mm, d.f. = 17, all p < 0.001). This improvement by the regional adaptive method was confirmed through qualitative visual assessment by an experienced physician echocardiographer who concluded that the regional adaptive method produced rendered images with fewer tissue dropouts and less spurious structures inside the blood pool in the vast majority of the renderings. The algorithm has been

  16. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.

    PubMed

    Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G

    2013-05-20

    A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a very multi-typed and common sexually transmitted infection with more than 100 types currently known. The two types studied in this paper, types 6 and 11, cause about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data. PMID
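
    A minimal sketch of the adaptive Markov chain Monte Carlo ingredient in the Haario style, where the proposal covariance is re-estimated from the chain history; the toy target is a correlated 2D Gaussian, and the paper's forward-projection treatment of the ODE trajectories is not reproduced.

      import numpy as np

      def adaptive_metropolis(log_post, x0, n=5000, eps=1e-6):
          # Haario-style adaptation: the Gaussian proposal covariance is
          # periodically re-estimated from the chain history
          d = len(x0)
          rng = np.random.default_rng(5)
          chain = np.empty((n, d))
          chain[0], lp, cov = x0, log_post(x0), np.eye(d)
          for t in range(1, n):
              if t > 200 and t % 100 == 0:
                  cov = np.cov(chain[:t].T) + eps * np.eye(d)
              prop = rng.multivariate_normal(chain[t - 1], (2.38 ** 2 / d) * cov)
              lp_prop = log_post(prop)
              if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept
                  chain[t], lp = prop, lp_prop
              else:
                  chain[t] = chain[t - 1]
          return chain

      # toy target: strongly correlated 2D Gaussian (hypothetical)
      Sinv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
      chain = adaptive_metropolis(lambda x: -0.5 * x @ Sinv @ x, np.zeros(2))
      print(np.cov(chain[2500:].T))     # should approach [[1, .9], [.9, 1]]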

  17. Identifying minefields and verifying clearance: adapting statistical methods for UXO target detection

    NASA Astrophysics Data System (ADS)

    Gilbert, Richard O.; O'Brien, Robert F.; Wilson, John E.; Pulsipher, Brent A.; McKinstry, Craig A.

    2003-09-01

    It may not be feasible to completely survey large tracts of land suspected of containing minefields. It is desirable to develop a characterization protocol that will confidently identify minefields within these large land tracts if they exist. Naturally, surveying areas of greatest concern and most likely locations would be necessary but will not provide the needed confidence that an unknown minefield had not eluded detection. Once minefields are detected, methods are needed to bound the area that will require detailed mine detection surveys. The US Department of Defense Strategic Environmental Research and Development Program (SERDP) is sponsoring the development of statistical survey methods and tools for detecting potential UXO targets. These methods may be directly applicable to demining efforts. Statistical methods are employed to determine the optimal geophysical survey transect spacing to have confidence of detecting target areas of a critical size, shape, and anomaly density. Other methods under development determine the proportion of a land area that must be surveyed to confidently conclude that there are no UXO present. Adaptive sampling schemes are also being developed as an approach for bounding the target areas. These methods and tools will be presented and the status of relevant research in this area will be discussed.

  18. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise. It is based on statistical hypothesis testing in the frequency domain: the similarity between a reference signal (noise signal) and the original signal is evaluated, and components of high similarity are removed. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs that are highly sensitive for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
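
    A rough sketch of the ASTF idea under strong simplifications: compare the spectrum of the measured signal with that of a noise-only reference and discard bins that look like noise. The paper's statistical hypothesis test and PSO-tuned significance level are replaced here by a crude amplitude-ratio rule (all values hypothetical).

      import numpy as np

      rng = np.random.default_rng(6)
      fs, n = 1000, 2048
      t = np.arange(n) / fs
      signal = 0.3 * np.sin(2 * np.pi * 125 * t) + rng.standard_normal(n)
      noise_ref = rng.standard_normal(n)     # separately observed noise

      S = np.fft.rfft(signal)
      N = np.abs(np.fft.rfft(noise_ref))
      keep = np.abs(S) > 2.0 * N             # crude per-bin similarity test
      cleaned = np.fft.irfft(S * keep, n=n)

      peak = np.argmax(np.abs(np.fft.rfft(cleaned))[1:]) + 1   # skip DC
      print("dominant frequency: %.1f Hz" % (peak * fs / n))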

  19. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise. It is based on statistical hypothesis testing in the frequency domain: the similarity between a reference signal (noise signal) and the original signal is evaluated, and components of high similarity are removed. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs that are highly sensitive for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  20. Statistics

    Cancer.gov

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  1. WE-G-18A-04: 3D Dictionary Learning Based Statistical Iterative Reconstruction for Low-Dose Cone Beam CT Imaging

    SciTech Connect

    Bai, T; Yan, H; Shi, F; Jia, X; Jiang, Steve B.; Lou, Y; Xu, Q; Mou, X

    2014-06-15

    clinical application. A high z-resolution is preferred to stabilize statistical iterative reconstruction. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), the Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011), and the China Scholarship Council.

  2. Iterative 4D cardiac micro-CT image reconstruction using an adaptive spatio-temporal sparsity prior

    NASA Astrophysics Data System (ADS)

    Ritschl, Ludwig; Sawall, Stefan; Knaup, Michael; Hess, Andreas; Kachelrieß, Marc

    2012-03-01

    Temporal-correlated image reconstruction, also known as 4D CT image reconstruction, is a big challenge in computed tomography. The reasons for incorporating the temporal domain into the reconstruction are motions of the scanned object, which would otherwise lead to motion artifacts. The standard method for 4D CT image reconstruction is extracting single motion phases and reconstructing them separately. These reconstructions can suffer from undersampling artifacts due to the low number of used projections in each phase. There are different iterative methods which try to incorporate some a priori knowledge to compensate for these artifacts. In this paper we want to follow this strategy. The cost function we use is a higher dimensional cost function which accounts for the sparseness of the measured signal in the spatial and temporal directions. This leads to the definition of a higher dimensional total variation. The method is validated using in vivo cardiac micro-CT mouse data. Additionally, we compare the results to phase-correlated reconstructions using the FDK algorithm and a total variation constrained reconstruction, where the total variation term is only defined in the spatial domain. The reconstructed datasets show strong improvements in terms of artifact reduction and low-contrast resolution compared to other methods. Thereby the temporal resolution of the reconstructed signal is not affected.
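
    The higher-dimensional total variation described above treats spatial and temporal finite differences as components of one joint gradient norm; a minimal sketch for a 2D+time array, with hypothetical data illustrating why coherent motion is sparse in this domain while noise is not:

      import numpy as np

      def tv_spatiotemporal(f, w_t=1.0):
          # f[t, y, x]; spatial and temporal finite differences are cropped
          # to a common shape and combined in one joint gradient norm
          dx = np.diff(f, axis=2)[:-1, :-1, :]
          dy = np.diff(f, axis=1)[:-1, :, :-1]
          dt = np.diff(f, axis=0)[:, :-1, :-1]
          return np.sqrt(dx ** 2 + dy ** 2 + (w_t * dt) ** 2).sum()

      rng = np.random.default_rng(7)
      moving = np.stack([np.roll(np.eye(16), k, axis=1) for k in range(4)])
      print("coherent motion:", tv_spatiotemporal(moving))
      print("random frames :", tv_spatiotemporal(rng.standard_normal((4, 16, 16))))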

  3. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels

    PubMed Central

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J.

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378

  4. Data-driven and adaptive statistical residual evaluation for fault detection with an automotive application

    NASA Astrophysics Data System (ADS)

    Svärd, Carl; Nyberg, Mattias; Frisk, Erik; Krysander, Mattias

    2014-03-01

    An important step in model-based fault detection is residual evaluation, where residuals are evaluated with the aim to detect changes in their behavior caused by faults. To handle residuals subject to time-varying uncertainties and disturbances, which indeed are present in practice, a novel statistical residual evaluation approach is presented. The main contribution is to base the residual evaluation on an explicit comparison of the probability distribution of the residual, estimated online using current data, with a no-fault residual distribution. The no-fault distribution is based on a set of a priori known no-fault residual distributions, and is continuously adapted to the current situation. As a second contribution, a method is proposed for estimating the required set of no-fault residual distributions off-line from no-fault training data. The proposed residual evaluation approach is evaluated with measurement data on a residual for fault detection in the gas-flow system of a Scania truck diesel engine. Results show that small faults can be reliably detected with the proposed approach in cases where regular methods fail.
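
    A sketch of the comparison step under a convenient Gaussian assumption: the online residual distribution is summarized by its mean and variance and compared with the no-fault distribution through a closed-form Kullback-Leibler divergence (the paper's online density estimation and adaptation of the no-fault set are not reproduced).

      import numpy as np

      def kl_to_no_fault(res, mu0=0.0, var0=1.0):
          # KL( N(mean, var) || N(mu0, var0) ), closed form for Gaussians;
          # a persistent increase over a threshold would signal a fault
          mu, var = res.mean(), res.var()
          return 0.5 * (var / var0 + (mu - mu0) ** 2 / var0
                        - 1.0 + np.log(var0 / var))

      rng = np.random.default_rng(8)
      print("no fault  : %.3f" % kl_to_no_fault(rng.normal(0.0, 1.0, 400)))
      print("bias fault: %.3f" % kl_to_no_fault(rng.normal(0.8, 1.0, 400)))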

  5. Performances of the fractal iterative method with an internal model control law on the ESO end-to-end ELT adaptive optics simulator

    NASA Astrophysics Data System (ADS)

    Béchet, C.; Le Louarn, M.; Tallon, M.; Thiébaut, É.

    2008-07-01

    Adaptive Optics systems under study for the Extremely Large Telescopes gave rise to a new generation of algorithms for both wavefront reconstruction and the control law. In the first place, the large number of controlled actuators imposes the use of computationally efficient methods. Secondly, the performance criterion is no longer solely based on nulling residual measurements. Priors on turbulence must be inserted. In order to satisfy these two requirements, we suggested associating the Fractal Iterative Method for the estimation step with an Internal Model Control. This combination has now been tested on an end-to-end adaptive optics numerical simulator at ESO, named Octopus. Results are presented here and the performance of our method is compared to the classical Matrix-Vector Multiplication combined with a pure integrator. In the light of a theoretical analysis of our control algorithm, we investigate the influence of several error contributions on our simulations. The reconstruction error varies with the signal-to-noise ratio but is limited by the use of priors. The ratio between the system loop delay and the wavefront coherence time also impacts the reachable Strehl ratio. Whereas no instabilities are observed, correction quality is obviously affected at low flux, when subaperture extinctions are frequent. Last but not least, the simulations have demonstrated the robustness of the method with respect to sensor modeling errors and actuator misalignments.

  6. Image Restoration Using the Damped Richardson-Lucy Iteration

    NASA Astrophysics Data System (ADS)

    White, R. L.

    The most widely used image restoration technique for optical astronomical data is the Richardson-Lucy (RL) iteration. The RL method is well-suited to optical and ultraviolet data because it converges to the maximum likelihood solution for Poisson statistics in the data, which is appropriate for astronomical images taken with CCD or photon-counting detectors. Images restored using the RL iteration have good photometric linearity and can be used for quantitative analysis, and typical RL restorations require a manageable amount of computer time. Despite its advantages, the RL method has some serious shortcomings. Noise amplification is a problem, as for all maximum likelihood techniques. If one performs many RL iterations on an image containing an extended object such as a galaxy, the extended emission develops a ``speckled'' appearance. The speckles are the result of fitting the noise in the data too closely. The only limit on the amount of noise amplification in the RL method is the requirement that the image not become negative. The usual practical approach to limiting noise amplification is simply to stop the iteration when the restored image appears to become too noisy. However, in most cases the number of iterations needed is different for different parts of the image. Hundreds of iterations may be required to get a good fit to the high signal-to-noise image of a bright star, while a smooth, extended object may be fitted well after only a few iterations. Thus, one would like to be able to slow or stop the iteration automatically in regions where a smooth model fits the data adequately, while continuing to iterate in regions where there are sharp features (edges or point sources). The need for a spatially adaptive convergence criterion is exacerbated when CCD readout noise is included in the RL algorithm (Snyder, Hammoud, & White, 1993, JOSA A, 10, 1014), because the rate of convergence is then slower for faint stars than for bright stars. This paper will
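
    The core (undamped) RL update, sketched for 1D deconvolution with a toy point spread function; the damped variant discussed above additionally attenuates this update wherever the model already fits the data to within the noise.

      import numpy as np

      def richardson_lucy(y, psf, n_iters=100, eps=1e-12):
          # RL update: x <- x * C^T( y / C(x) ), with C = convolution by psf;
          # iterates toward the Poisson maximum-likelihood solution
          x = np.full_like(y, y.mean())
          for _ in range(n_iters):
              blurred = np.convolve(x, psf, mode="same")
              ratio = y / np.maximum(blurred, eps)
              x = x * np.convolve(ratio, psf[::-1], mode="same")
          return x

      psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
      truth = np.zeros(64)
      truth[20], truth[40] = 100.0, 60.0            # two point sources
      y = np.convolve(truth, psf, mode="same")
      print(np.round(richardson_lucy(y, psf)[[20, 40]], 1))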

  7. ITER's woes

    NASA Astrophysics Data System (ADS)

    jjeherrera; Duffield, John; ZoloftNotWorking; esromac; protogonus; mleconte; cmfluteguy; adivita

    2014-07-01

    In reply to the physicsworld.com news story “US sanctions on Russia hit ITER council” (20 May, http://ow.ly/xF7oc and also June p8), about how a meeting of the fusion experiment's council had to be moved from St Petersburg and the US Congress's call for ITER boss Osamu Motojima to step down.

  8. The Impact of Different Levels of Adaptive Iterative Dose Reduction 3D on Image Quality of 320-Row Coronary CT Angiography: A Clinical Trial

    PubMed Central

    Feger, Sarah; Rief, Matthias; Zimmermann, Elke; Martus, Peter; Schuijf, Joanne Désirée; Blobel, Jörg; Richter, Felicitas; Dewey, Marc

    2015-01-01

    Purpose The aim of this study was the systematic image quality evaluation of coronary CT angiography (CTA), reconstructed with the 3 different levels of adaptive iterative dose reduction (AIDR 3D) and compared to filtered back projection (FBP) with quantum denoising software (QDS). Methods Standard-dose CTA raw data of 30 patients with mean radiation dose of 3.2 ± 2.6 mSv were reconstructed using AIDR 3D mild, standard, strong and compared to FBP/QDS. Objective image quality comparison (signal, noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), contour sharpness) was performed using 21 measurement points per patient, including measurements in each coronary artery from proximal to distal. Results Objective image quality parameters improved with increasing levels of AIDR 3D. Noise was lowest in AIDR 3D strong (p≤0.001 at 20/21 measurement points; compared with FBP/QDS). Signal and contour sharpness analysis showed no significant difference between the reconstruction algorithms for most measurement points. Best coronary SNR and CNR were achieved with AIDR 3D strong. No loss of SNR or CNR in distal segments was seen with AIDR 3D as compared to FBP. Conclusions On standard-dose coronary CTA images, AIDR 3D strong showed higher objective image quality than FBP/QDS without reducing contour sharpness. Trial Registration Clinicaltrials.gov NCT00967876 PMID:25945924

  9. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. This method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms have good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
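
    A sketch of the variable step-size idea on the simplest member of the family, NLMS: the step size shrinks as the error power falls, trading fast initial convergence for low steady-state MSE. The heuristic step-size rule below stands in for the MSD-optimal step-size vector derived in the paper (all values hypothetical).

      import numpy as np

      rng = np.random.default_rng(9)
      h = np.array([0.8, -0.4, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])  # unknown system
      x = rng.standard_normal(4000)
      d = np.convolve(x, h)[:4000] + 0.01 * rng.standard_normal(4000)

      order, mu_max, alpha = len(h), 1.0, 0.95
      w = np.zeros(order)
      p_err = 1.0
      for n in range(order, len(x)):
          u = x[n - order + 1:n + 1][::-1]      # regressor [x[n], ..., x[n-7]]
          e = d[n] - w @ u                      # a priori error
          p_err = alpha * p_err + (1 - alpha) * e * e
          mu = mu_max * p_err / (p_err + 1.0)   # step shrinks with error power
          w += mu * e * u / (u @ u + 1e-8)      # normalized LMS update
      print(np.round(w, 2))                     # should approach h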

  10. Improved Liver Lesion Conspicuity With Iterative Reconstruction in Computed Tomography Imaging.

    PubMed

    Jensen, Kristin; Andersen, Hilde Kjernlie; Tingberg, Anders; Reisse, Claudius; Fosse, Erik; Martinsen, Anne Catrine T

    2016-01-01

    Studies on iterative reconstruction techniques on computed tomographic (CT) scanners show reduced noise and changed image texture. The purpose of this study was to address the possibility of dose reduction and improved conspicuity of lesions in a liver phantom for different iterative reconstruction algorithms. An anthropomorphic upper abdomen phantom, specially designed for receiver operating characteristic (ROC) analysis, was scanned with 2 different CT models from the same vendor, GE CT750 HD and GE Lightspeed VCT. Images were obtained at 3 dose levels (5, 10, and 15 mGy) and reconstructed with filtered back projection (FBP) and 2 different iterative reconstruction algorithms: adaptive statistical iterative reconstruction and Veo. Overall, 5 interpreters evaluated the images and ROC analysis was performed. The standard deviation and the contrast-to-noise ratio were measured. Veo image reconstruction resulted in larger areas under the curve than adaptive statistical iterative reconstruction and FBP image reconstruction at given dose levels. For the CT750 HD, iterative reconstruction at the 10 mGy dose level resulted in larger or similar areas under the curve compared with FBP at the 15 mGy dose level (0.88-0.95 vs 0.90). This was not shown for the Lightspeed VCT (0.83-0.85 vs 0.92). The results in this study indicate that the possibility for radiation dose reduction using iterative reconstruction techniques depends on both the reconstruction technique and the CT scanner model used. PMID:26790606

  11. Dynamic Range Adaptation to Sound Level Statistics in the Auditory Nerve

    PubMed Central

    Wen, Bo; Wang, Grace I.; Dean, Isabel; Delgutte, Bertrand

    2009-01-01

    The auditory system operates over a vast range of sound pressure levels (100–120 dB) with nearly constant discrimination ability across most of the range, well exceeding the dynamic range of most auditory neurons (20–40 dB). Dean et al. (Nat. Neurosci. 8:1684, 2005) have reported that the dynamic range of midbrain auditory neurons adapts to the distribution of sound levels in a continuous, dynamic stimulus by shifting towards the most frequently occurring level. Here we show that dynamic range adaptation, distinct from classic firing rate adaptation, also occurs in primary auditory neurons in anesthetized cats for tone and noise stimuli. Specifically, the range of sound levels over which the firing rates of auditory-nerve (AN) fibers grow rapidly with level shifts nearly linearly with the most probable levels in a dynamic sound stimulus. This dynamic range adaptation was observed for fibers with all characteristic frequencies and spontaneous discharge rates. As in the midbrain, dynamic range adaptation improved the precision of level coding by the AN fiber population for the prevailing sound levels in the stimulus. However, dynamic range adaptation in the AN was weaker than in the midbrain, and not sufficient (0.25 dB/dB on average for broadband noise) to prevent a significant degradation of the precision of level coding by the AN population above 60 dB SPL. These findings suggest that adaptive processing of sound levels first occurs in the auditory periphery and is enhanced along the auditory pathway. PMID:19889991

  12. Cross-cultural adaptation of research instruments: language, setting, time and statistical considerations

    PubMed Central

    2010-01-01

    Background Research questionnaires are not always translated appropriately before they are used in new temporal, cultural or linguistic settings. The results based on such instruments may therefore not accurately reflect what they are supposed to measure. This paper aims to illustrate the process and required steps involved in the cross-cultural adaptation of a research instrument using the adaptation process of an attitudinal instrument as an example. Methods A questionnaire was needed for the implementation of a study in Norway in 2007. There were no appropriate instruments available in Norwegian, thus an Australian-English instrument was cross-culturally adapted. Results The adaptation process included investigation of conceptual and item equivalence. Two forward and two back-translations were synthesized and compared by an expert committee. Thereafter the instrument was pretested and adjusted accordingly. The final questionnaire was administered to opioid maintenance treatment staff (n=140) and harm reduction staff (n=180). The overall response rate was 84%. The original instrument failed confirmatory analysis. Instead a new two-factor scale was identified and found valid in the new setting. Conclusions The failure of the original scale highlights the importance of adapting instruments to current research settings. It also emphasizes the importance of ensuring that concepts within an instrument are equal between the original and target language, time and context. If the described stages in the cross-cultural adaptation process had been omitted, the findings would have been misleading, even if presented with apparent precision. Thus, it is important to consider possible barriers when making a direct comparison between different nations, cultures and times. PMID:20144247

  13. Adaptive nonlocal means-based regularization for statistical image reconstruction of low-dose X-ray CT

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Ma, Jianhua; Wang, Jing; Liu, Yan; Han, Hao; Li, Lihong; Moore, William; Liang, Zhengrong

    2015-03-01

    To reduce radiation dose in X-ray computed tomography (CT) imaging, one of the common strategies is to lower the milliampere-second (mAs) setting during projection data acquisition. However, this strategy would inevitably increase the projection data noise, and the resulting image by the filtered back-projection (FBP) method may suffer from excessive noise and streak artifacts. The edge-preserving nonlocal means (NLM) filtering can help to reduce the noise-induced artifacts in the FBP reconstructed image, but it sometimes cannot completely eliminate them, especially under very low-dose circumstances when the image is severely degraded. To deal with this situation, we proposed a statistical image reconstruction scheme using a NLM-based regularization, which can suppress the noise and streak artifacts more effectively. However, we noticed that using a uniform filtering parameter in the NLM-based regularization was rarely optimal for the entire image. Therefore, in this study, we further developed a novel approach for designing adaptive filtering parameters by considering local characteristics of the image, and the resulting regularization is referred to as adaptive NLM-based regularization. Experimental results with physical phantom and clinical patient data validated the superiority of using the proposed adaptive NLM-regularized statistical image reconstruction method for low-dose X-ray CT, in terms of noise/streak artifacts suppression and edge/detail/contrast/texture preservation.
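
    The following toy sketch shows plain nonlocal means filtering with a per-pixel smoothing parameter (h_map), which is the "adaptive filtering parameter" idea in isolation; in the paper the NLM weights enter a regularization term inside statistical reconstruction rather than a post-filter, and all names here are illustrative.

    ```python
    # Naive nonlocal means with a per-pixel smoothing parameter h_map (toy only).
    import numpy as np

    def nlm_adaptive(img, h_map, patch=3, search=7):
        pp, ps = patch // 2, search // 2
        pad = np.pad(img.astype(float), pp + ps, mode="reflect")
        out = np.zeros_like(img, dtype=float)
        H, W = img.shape
        for i in range(H):
            for j in range(W):
                ref = pad[i + ps:i + ps + patch, j + ps:j + ps + patch]
                acc = wsum = 0.0
                for di in range(-ps, ps + 1):        # scan the search window
                    for dj in range(-ps, ps + 1):
                        cand = pad[i + ps + di:i + ps + di + patch,
                                   j + ps + dj:j + ps + dj + patch]
                        # patch similarity sets the weight; h_map[i, j] is local
                        w = np.exp(-np.sum((ref - cand) ** 2) / h_map[i, j] ** 2)
                        acc += w * pad[i + pp + ps + di, j + pp + ps + dj]
                        wsum += w
                out[i, j] = acc / wsum               # weighted average of pixels
        return out
    ```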

  14. Research of adaptive threshold edge detection algorithm based on statistics canny operator

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Wang, Huaisuo; Huang, Hua

    2015-12-01

    The traditional Canny operator cannot obtain the optimal threshold in different scenes; to address this, an improved Canny edge detection algorithm based on an adaptive threshold is proposed. Experimental results indicate that the improved algorithm obtains a reasonable threshold and achieves better accuracy and precision in edge detection.
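
    One widely used way to make Canny thresholds scene-adaptive is the median-based heuristic sketched below; this is an assumption for illustration, not necessarily the statistics used by the authors, and the input file name is hypothetical.

    ```python
    # Median-based adaptive Canny thresholds (a common heuristic, assumed here).
    import cv2
    import numpy as np

    def auto_canny(gray, sigma=0.33):
        v = float(np.median(gray))               # scene-dependent statistic
        lower = int(max(0, (1.0 - sigma) * v))   # hysteresis low threshold
        upper = int(min(255, (1.0 + sigma) * v)) # hysteresis high threshold
        return cv2.Canny(gray, lower, upper)

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    edges = auto_canny(img)
    ```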

  15. When the leak is weak - how the first-passage statistics of a biased random walk can approximate the ISI statistics of an adapting neuron

    NASA Astrophysics Data System (ADS)

    Schwalger, T.; Miklody, D.; Lindner, B.

    2013-10-01

    Sequences of first-passage times can describe the interspike intervals (ISI) between subsequent action potentials of sensory neurons. Here, we consider the ISI statistics of a stochastic neuron model, a leaky integrate-and-fire neuron, which is driven by a strong mean input current, white Gaussian current noise, and a spike-frequency adaptation current. In previous studies, it has been shown that without a leak current, i.e. for a so-called perfect integrate-and-fire (PIF) neuron, the ISI density can be well approximated by an inverse Gaussian corresponding to the first-passage-time density of a biased random walk. Furthermore, the serial correlations between ISIs, which are induced by the adaptation current, can be described by a geometric series. By means of stochastic simulations, we inspect whether these results hold true in the presence of a modest leak current. Specifically, we measure the mean and variance of the ISI in the full model with leak and use the analytical results for the perfect IF model to relate these cumulants of the ISI to effective values of the mean input and noise intensity of an equivalent perfect IF model. This renormalization procedure yields semi-analytical approximations for the ISI density and the ISI serial correlation coefficient in the full model with leak. We find that both in the absence and the presence of an adaptation current, the ISI density can be well approximated in this way if the leak current constitutes only a weak modification of the dynamics. Moreover, the serial correlations of the model with leak are also well reproduced by the expressions for a PIF model with renormalized parameters. Our results explain why expressions derived for the rather special perfect integrate-and-fire model can nevertheless often be fitted well to experimental data.
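
    For reference, the first-passage-time density of a biased random walk invoked here is the inverse Gaussian. In one standard convention (an assumption; the paper's notation may differ), with drift mu, noise intensity D (so the free variance grows as 2Dt), and threshold distance L:

    ```latex
    p(T) \;=\; \frac{L}{\sqrt{4\pi D T^{3}}}\,
               \exp\!\left[-\frac{(L-\mu T)^{2}}{4 D T}\right], \qquad T > 0 .
    ```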

  16. A statistical channel model for adaptive HF communications via a severely disturbed ionosphere

    NASA Astrophysics Data System (ADS)

    Haines, D. M.

    1983-12-01

    Motivation for the resurgence of interest in improving HF communication is presented. This includes the continued widespread use of the HF band, and the new technology that now makes it feasible to vastly improve the historically poor quality of communications in this band. Non-conventional HF techniques or systems are classified into four general categories according to the technical specialties that spawned them. These categories are Adaptive Frequency Management, Digital Waveform Processing, Networking, and Adaptive Antennas. A 15-parameter channel model is presented which forms the basis for the ongoing RADC measurement program. These parameters address the dispersion and dynamics of time, frequency and spatial distortion imposed by the skywave channel. Next, measurement techniques are evaluated for characterization of these parameters, resulting in the selection of a six-station Arctic network of wideband pulse compression (matched filter) channel probes. A description of waveform generation, receiver signal processing and the program plans and schedule is presented.

  17. A Unifying Framework for Adaptive Radar Detection in Homogeneous Plus Structured Interference— Part I: On the Maximal Invariant Statistic

    NASA Astrophysics Data System (ADS)

    Ciuonzo, D.; De Maio, A.; Orlando, D.

    2016-06-01

    This paper deals with the problem of adaptive multidimensional/multichannel signal detection in homogeneous Gaussian disturbance with unknown covariance matrix and structured deterministic interference. The aforementioned problem corresponds to a generalization of the well-known Generalized Multivariate Analysis of Variance (GMANOVA). In this first part of the work, we formulate the considered problem in canonical form and, after identifying a desirable group of transformations for the considered hypothesis testing, we derive a Maximal Invariant Statistic (MIS) for the problem at hand. Furthermore, we provide the MIS distribution in the form of a stochastic representation. Finally, strong connections to the MIS obtained in the open literature in simpler scenarios are underlined.

  18. On- and off-axis statistical behavior of adaptive-optics-corrected short-exposure Strehl ratio.

    PubMed

    Fusco, Thierry; Conan, Jean-Marc

    2004-07-01

    Statistical behavior of the adaptive-optics- (AO-) corrected short-exposure point-spread function (PSF) is derived assuming a perfect correction of the phase's low spatial frequencies. Analytical expressions of the Strehl ratio (SR) fluctuations of on- and off-axis short-exposure PSFs are obtained. A theoretical expression of the short SR angular correlation is proposed and used to derive a definition of an anisoplanatic angle for AO-corrected images. Several applications of the analytical expressions are proposed: AO performance characterization, postprocessing imaging, light coupling into fiber, and exoplanet detection from a ground-based telescope. PMID:15260259

  19. Semi-automatic medical image segmentation with adaptive local statistics in Conditional Random Fields framework.

    PubMed

    Hu, Yu-Chi J; Grossberg, Michael D; Mageras, Gikas S

    2008-01-01

    Planning radiotherapy and surgical procedures usually require onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method to dramatically reduce the time and effort required of expert users. This is accomplished by giving a user an intuitive graphical interface to indicate samples of target and non-target tissue by loosely drawing a few brush strokes on the image. We use these brush strokes to provide the statistical input for a Conditional Random Field (CRF) based segmentation. Since we extract purely statistical information from the user input, we eliminate the need for assumptions on boundary contrast previously used by many other methods. A new feature of our method is that the statistics on one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data can be propagated through the entire 3D stack of images without using the geometric correspondence between images. In addition, the image segmentation from the CRF can be formulated as a minimum s-t graph cut problem which has a solution that is both globally optimal and fast. The combination of a fast segmentation and minimal user input that is reusable make this a powerful technique for the segmentation of medical images. PMID:19163362
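
    The minimum s-t cut formulation can be illustrated on a toy problem; in the sketch below the unary capacities stand in for the brush-stroke statistics of the paper (the numbers are made up), and networkx's max-flow routine plays the role of a dedicated graph-cut solver.

    ```python
    # Tiny binary segmentation as a minimum s-t cut (illustrative numbers).
    import networkx as nx

    pixels = [0, 1, 2, 3]                          # a tiny 1-D "image"
    fg_cost = {0: 0.1, 1: 0.3, 2: 2.0, 3: 2.5}     # unary cost of labeling FG
    bg_cost = {0: 2.2, 1: 1.8, 2: 0.2, 3: 0.1}     # unary cost of labeling BG
    smooth = 0.5                                   # pairwise smoothness penalty

    G = nx.DiGraph()
    for p in pixels:
        G.add_edge("src", p, capacity=bg_cost[p])  # paid if p lands on BG side
        G.add_edge(p, "sink", capacity=fg_cost[p]) # paid if p lands on FG side
    for p, q in zip(pixels, pixels[1:]):           # neighbors disagree => pay
        G.add_edge(p, q, capacity=smooth)
        G.add_edge(q, p, capacity=smooth)

    cut_value, (S, T) = nx.minimum_cut(G, "src", "sink")
    print({p: "FG" if p in S else "BG" for p in pixels})  # globally optimal
    ```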

  20. Statistical learning and adaptive decision-making underlie human response time variability in inhibitory control

    PubMed Central

    Ma, Ning; Yu, Angela J.

    2015-01-01

    Response time (RT) is an oft-reported behavioral measure in psychological and neurocognitive experiments, but the high level of observed trial-to-trial variability in this measure has often limited its usefulness. Here, we combine computational modeling and psychophysics to examine the hypothesis that fluctuations in this noisy measure reflect dynamic computations in human statistical learning and corresponding cognitive adjustments. We present data from the stop-signal task (SST), in which subjects respond to a go stimulus on each trial, unless instructed not to by a subsequent, infrequently presented stop signal. We model across-trial learning of stop signal frequency, P(stop), and stop-signal onset time, SSD (stop-signal delay), with a Bayesian hidden Markov model, and within-trial decision-making with an optimal stochastic control model. The combined model predicts that RT should increase with both expected P(stop) and SSD. The human behavioral data (n = 20) bear out this prediction, showing P(stop) and SSD both to be significant, independent predictors of RT, with P(stop) being a more prominent predictor in 75% of the subjects, and SSD being more prominent in the remaining 25%. The results demonstrate that humans indeed readily internalize environmental statistics and adjust their cognitive/behavioral strategy accordingly, and that subtle patterns in RT variability can serve as a valuable tool for validating models of statistical learning and decision-making. More broadly, the modeling tools presented in this work can be generalized to a large body of behavioral paradigms, in order to extract insights about cognitive and neural processing from apparently quite noisy behavioral measures. We also discuss how this behaviorally validated model can then be used to conduct model-based analysis of neural data, in order to help identify specific brain areas for representing and encoding key computational quantities in learning and decision-making. PMID:26321966
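
    As a much-simplified illustration of the across-trial learning component, the sketch below tracks P(stop) with Beta-Bernoulli updating plus exponential forgetting; the paper itself uses a Bayesian hidden Markov model, and all parameter values here are assumed.

    ```python
    # Trial-by-trial estimate of P(stop): Beta-Bernoulli with forgetting.
    # A stand-in for the paper's Bayesian hidden Markov model; values assumed.
    import numpy as np

    rng = np.random.default_rng(1)
    alpha, beta, decay = 1.0, 1.0, 0.95   # prior pseudo-counts and forgetting
    p_true = 0.25                          # generative stop-signal frequency
    for trial in range(200):
        stop = rng.random() < p_true       # 1 if a stop signal occurs
        alpha = decay * alpha + stop       # discount old evidence, add new
        beta = decay * beta + (1 - stop)
        p_stop_hat = alpha / (alpha + beta)  # posterior-mean estimate
    print(round(p_stop_hat, 3))
    ```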

  1. Adaptation of the simple or complex nature of V1 receptive fields to visual statistics.

    PubMed

    Fournier, Julien; Monier, Cyril; Pananceau, Marc; Frégnac, Yves

    2011-08-01

    Receptive fields in primary visual cortex (V1) are categorized as simple or complex, depending on their spatial selectivity to stimulus contrast polarity. We studied the dependence of this classification on visual context by comparing, in the same cell, the synaptic responses to three classical receptive field mapping protocols: sparse noise, ternary dense noise and flashed Gabor noise. Intracellular recordings revealed that the relative weights of simple-like and complex-like receptive field components were scaled so as to make the same receptive field more simple-like with dense noise stimulation and more complex-like with sparse or Gabor noise stimulations. However, once these context-dependent receptive fields were convolved with the corresponding stimulus, the balance between simple-like and complex-like contributions to the synaptic responses appeared to be invariant across input statistics. This normalization of the linear/nonlinear input ratio suggests a previously unknown form of homeostatic control of V1 functional properties, optimizing the network nonlinearities to the statistical structure of the visual input. PMID:21765424

  2. The Use of Statistical Process Control-Charts for Person-Fit Analysis on Computerized Adaptive Testing. LSAC Research Report Series.

    ERIC Educational Resources Information Center

    Meijer, Rob R.; van Krimpen-Stoop, Edith M. L. A.

    In this study a cumulative-sum (CUSUM) procedure from the theory of Statistical Process Control was modified and applied in the context of person-fit analysis in a computerized adaptive testing (CAT) environment. Six person-fit statistics were proposed using the CUSUM procedure, and three of them could be used to investigate the CAT in online test…
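
    For readers unfamiliar with the SPC building block, a generic two-sided CUSUM over standardized per-item residuals looks like the sketch below; the reference value k and decision limit h are assumed, and the report's person-fit statistics modify this basic recursion.

    ```python
    # Generic two-sided CUSUM over standardized residuals (k, h assumed).
    import numpy as np

    def cusum_flags(residuals, k=0.5, h=4.0):
        """Return the CUSUM paths and whether either side crosses h."""
        c_plus = c_minus = 0.0
        path = []
        for r in residuals:
            c_plus = max(0.0, c_plus + r - k)    # accumulates upward drift
            c_minus = max(0.0, c_minus - r - k)  # accumulates downward drift
            path.append((c_plus, c_minus))
        flagged = any(cp > h or cm > h for cp, cm in path)
        return path, flagged
    ```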

  3. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  4. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, and by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. PMID:25463325
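
    A minimal sketch of the leveling step follows, assuming a single global plane fitted by iteratively reweighted least squares with a Tukey biweight so that features and outliers are downweighted; the paper fits a smooth local-regression trend rather than a plane, so this is illustrative only.

    ```python
    # Robust plane leveling via IRLS with Tukey biweight (global-plane sketch).
    import numpy as np

    def robust_level(img, n_iter=10, c=4.685):
        H, W = img.shape
        yy, xx = np.mgrid[0:H, 0:W]
        A = np.column_stack([np.ones(H * W), xx.ravel(), yy.ravel()])
        z = img.ravel().astype(float)
        w = np.ones_like(z)                        # start unweighted
        for _ in range(n_iter):
            sw = np.sqrt(w)
            coef, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
            r = z - A @ coef                       # residuals from current trend
            s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # MAD scale
            u = np.clip(r / (c * s), -1.0, 1.0)
            w = (1.0 - u ** 2) ** 2                # Tukey biweight: outliers -> 0
        return (z - A @ coef).reshape(H, W)        # leveled image
    ```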

  5. FLAGS: A Flexible and Adaptive Association Test for Gene Sets Using Summary Statistics.

    PubMed

    Huang, Jianfei; Wang, Kai; Wei, Peng; Liu, Xiangtao; Liu, Xiaoming; Tan, Kai; Boerwinkle, Eric; Potash, James B; Han, Shizhong

    2016-03-01

    Genome-wide association studies (GWAS) have been widely used for identifying common variants associated with complex diseases. Despite remarkable success in uncovering many risk variants and providing novel insights into disease biology, genetic variants identified to date fail to explain the vast majority of the heritability for most complex diseases. One explanation is that there are still a large number of common variants that remain to be discovered, but their effect sizes are generally too small to be detected individually. Accordingly, gene set analysis of GWAS, which examines a group of functionally related genes, has been proposed as a complementary approach to single-marker analysis. Here, we propose a FLexible and Adaptive test for Gene Sets (FLAGS), using summary statistics. Extensive simulations showed that this method has an appropriate type I error rate and outperforms existing methods with increased power. As a proof of principle, through real data analyses of Crohn's disease GWAS data and bipolar disorder GWAS meta-analysis results, we demonstrated the superior performance of FLAGS over several state-of-the-art association tests for gene sets. Our method allows for the more powerful application of gene set analysis to complex diseases, which will have broad use given that GWAS summary results are increasingly publicly available. PMID:26773050

  6. Dual adaptive statistical approach for quantitative noise reduction in photon-counting medical imaging: application to nuclear medicine images

    NASA Astrophysics Data System (ADS)

    Hannequin, Pascal Paul

    2015-06-01

    Noise reduction in photon-counting images remains challenging, especially at low count levels. We have developed an original procedure which associates two complementary filters using a Wiener-derived approach. This approach combines two statistically adaptive filters into a dual-weighted (DW) filter. The first one, a statistically weighted adaptive (SWA) filter, replaces the central pixel of a sliding window with a statistically weighted sum of its neighbors. The second one, a statistical and heuristic noise extraction (extended) (SHINE-Ext) filter, performs a discrete cosine transformation (DCT) using sliding blocks. Each block is reconstructed using its significant components which are selected using tests derived from multiple linear regression (MLR). The two filters are weighted according to Wiener theory. This approach has been validated using a numerical phantom and a real planar Jaszczak phantom. It has also been illustrated using planar bone scintigraphy and myocardial single-photon emission computed tomography (SPECT) data. Performances of filters have been tested using the mean normalized absolute error (MNAE) between the filtered images and the reference noiseless or high-count images. Results show that the proposed filters quantitatively decrease the MNAE in the images and thus increase the signal-to-noise ratio (SNR). This allows one to work with lower count images. The SHINE-Ext filter is well suited to large images and low-variance areas. DW filtering is efficient for small images and in high-variance areas. The relative proportion of eliminated noise generally decreases when count level increases. In practice, SHINE filtering alone is recommended when pixel spacing is less than one-quarter of the effective resolution of the system and/or the size of the objects of interest. It can also be used when the practical interest of high frequencies is low. In any case, DW filtering will be preferable. The proposed filters have been applied to nuclear

  8. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored using the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  9. Truncated States Obtained by Iteration

    NASA Astrophysics Data System (ADS)

    Cardoso B., W.; Almeida G. de, N.

    2008-02-01

    We introduce the concept of truncated states obtained via iterative processes (TSI) and study its statistical features, making an analogy with dynamical systems theory (DST). As a specific example, we have studied TSI for the doubling and the logistic functions, which are standard functions in studying chaos. TSI for both the doubling and logistic functions exhibit certain similar patterns when their statistical features are compared from the point of view of DST.

  10. Statistical adaptation of ALADIN RCM outputs over the French alpine massifs - application to future climate and snow cover

    NASA Astrophysics Data System (ADS)

    Rousselot, M.; Durand, Y.; Giraud, G.; Mérindol, L.; Dombrowski-Etchevers, I.; Déqué, M.

    2012-01-01

    In this study, snowpack scenarios are modelled across the French Alps using dynamically downscaled variables from the ALADIN Regional Climate Model (RCM) for the control period (1961-1990) and three emission scenarios (SRES B1, A1B and A2) for the mid- and late 21st century (2021-2050 and 2071-2100). These variables are statistically adapted to the different elevations, aspects and slopes of the alpine massifs. For this purpose, we use a simple analogue criterion with ERA40 series as well as an existing detailed climatology of the French Alps (Durand et al., 2009a) that provides complete meteorological fields from the SAFRAN analysis model. The resulting scenarios of precipitation, temperature, wind, cloudiness, longwave and shortwave radiation, and humidity are used to run the physical snow model CROCUS and simulate snowpack evolution over the massifs studied. The seasonal and regional characteristics of the simulated climate and snow cover changes are explored, as is the influence of the scenarios on these changes. Preliminary results suggest that the Snow Water Equivalent (SWE) of the snowpack will decrease dramatically in the next century, especially in the Southern and Extreme Southern parts of the Alps. This decrease seems to result primarily from a general warming throughout the year, and possibly a deficit of precipitation in the autumn. The magnitude of the snow cover decline follows a marked altitudinal gradient, with the highest altitudes being less exposed to climate change. Scenario A2, with its high concentrations of greenhouse gases, results in a SWE reduction roughly twice as large as in the low-emission scenario B1 by the end of the century. This study needs to be completed using simulations from other RCMs, since a multi-model approach is essential for uncertainty analysis.

  11. Statistical adaptation of ALADIN RCM outputs over the French Alps - application to future climate and snow cover

    NASA Astrophysics Data System (ADS)

    Rousselot, M.; Durand, Y.; Giraud, G.; Mérindol, L.; Dombrowski-Etchevers, I.; Déqué, M.; Castebrunet, H.

    2012-07-01

    In this study, snowpack scenarios are modelled across the French Alps using dynamically downscaled variables from the ALADIN Regional Climate Model (RCM) for the control period (1961-1990) and three emission scenarios (SRES B1, A1B and A2) for the mid- and late 21st century (2021-2050 and 2071-2100). These variables are statistically adapted to the different elevations, aspects and slopes of the Alpine massifs. For this purpose, we use a simple analogue criterion with ERA40 series as well as an existing detailed climatology of the French Alps (Durand et al., 2009a) that provides complete meteorological fields from the SAFRAN analysis model. The resulting scenarios of precipitation, temperature, wind, cloudiness, longwave and shortwave radiation, and humidity are used to run the physical snow model CROCUS and simulate snowpack evolution over the massifs studied. The seasonal and regional characteristics of the simulated climate and snow cover changes are explored, as is the influence of the scenarios on these changes. Preliminary results suggest that the snow water equivalent (SWE) of the snowpack will decrease dramatically in the next century, especially in the Southern and Extreme Southern parts of the Alps. This decrease seems to result primarily from a general warming throughout the year, and possibly a deficit of precipitation in the autumn. The magnitude of the snow cover decline follows a marked altitudinal gradient, with the highest altitudes being less exposed to climate change. Scenario A2, with its high concentrations of greenhouse gases, results in a SWE reduction roughly twice as large as in the low-emission scenario B1 by the end of the century. This study needs to be completed using simulations from other RCMs, since a multi-model approach is essential for uncertainty analysis.

  12. Acceleration of iterative image restoration algorithms.

    PubMed

    Biggs, D S; Andrews, M

    1997-03-10

    A new technique for the acceleration of iterative image restoration algorithms is proposed. The method is based on the principles of vector extrapolation and does not require the minimization of a cost function. The algorithm is derived and its performance illustrated with Richardson-Lucy (R-L) and maximum entropy (ME) deconvolution algorithms and the Gerchberg-Saxton magnitude and phase retrieval algorithms. Considerable reduction in restoration times is achieved with little image distortion or computational overhead per iteration. The speedup achieved is shown to increase with the number of iterations performed and is easily adapted to suit different algorithms. An example R-L restoration achieves an average speedup of 40 times after 250 iterations and an ME method 20 times after only 50 iterations. An expression for estimating the acceleration factor is derived and confirmed experimentally. Comparisons with other acceleration techniques in the literature reveal significant improvements in speed and stability. PMID:18250863

  13. Online Phenotype Discovery in High-Content RNAi Screens using Gap Statistics

    NASA Astrophysics Data System (ADS)

    Yin, Zheng; Zhou, Xiaobo; Bakal, Chris; Li, Fuhai; Sun, Youxian; Perrimon, Norbert; Wong, Stephen T. C.

    2007-11-01

    Discovering and identifying novel phenotypes from images arriving online is a major challenge in high-content RNA interference (RNAi) screens. Discovered phenotypes should be visually distinct from existing ones and make biological sense. An online phenotype discovery method featuring adaptive phenotype modeling and iterative cluster merging using gap statistics is proposed. The method works well on discovering new phenotypes adaptively when applied to both synthetic data sets and RNAi high-content screen (HCS) images with ground truth labels.
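
    The gap statistic itself is straightforward to compute; the sketch below follows Tibshirani et al.'s definition with uniform reference data over the bounding box (settings assumed), which is the kind of criterion that drives the iterative cluster merging described above.

    ```python
    # Gap statistic for choosing the number of clusters (settings assumed).
    import numpy as np
    from sklearn.cluster import KMeans

    def gap_statistic(X, k, n_ref=10, seed=0):
        rng = np.random.default_rng(seed)
        log_wk = np.log(KMeans(n_clusters=k, n_init=10,
                               random_state=seed).fit(X).inertia_)
        lo, hi = X.min(axis=0), X.max(axis=0)       # bounding box of the data
        ref = [np.log(KMeans(n_clusters=k, n_init=10, random_state=seed)
                      .fit(rng.uniform(lo, hi, X.shape)).inertia_)
               for _ in range(n_ref)]               # uniform reference datasets
        return np.mean(ref) - log_wk                # larger gap => better k
    ```

    In use, one evaluates gap_statistic over a range of k and keeps the value maximizing the gap (or applies Tibshirani's one-standard-error rule).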

  14. US ITER Moving Forward

    ScienceCinema

    US ITER / ORNL

    2012-03-16

    US ITER Project Manager Ned Sauthoff, joined by Wayne Reiersen, Team Leader Magnet Systems, and Jan Berry, Team Leader Tokamak Cooling System, discuss the U.S.'s role in the ITER international collaboration.

  15. Imaging task-based optimal kV and mA selection for CT radiation dose reduction: from filtered backprojection (FBP) to statistical model based iterative reconstruction (MBIR)

    NASA Astrophysics Data System (ADS)

    Li, Ke; Gomez-Cardona, Daniel; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong

    2015-03-01

    Optimal selection of tube potential (kV) and tube current (mA) is essential in maximizing the diagnostic potential of a given CT technology while minimizing radiation dose. The use of a lower tube potential may improve image contrast, but may also require a significantly higher tube current to compensate for the rapid decrease of tube output at lower tube potentials. Therefore, the selection of kV and mA should take these constraints, as well as the specific diagnostic imaging task, into consideration. For conventional quasi-linear CT systems employing the linear filtered back-projection (FBP) image reconstruction algorithm, the optimization of kV-mA combinations is relatively straightforward, as neither spatial resolution nor noise texture has significant dependence on kV and mA settings. In these cases, zero-frequency analysis such as the contrast-to-noise ratio (CNR) or the CNR normalized by dose (CNRD) can be used for optimal kV-mA selection. The recently introduced statistical model-based iterative reconstruction (MBIR) method, however, has introduced new challenges to optimal kV and mA selection, as both spatial resolution and noise texture become closely correlated with kV and mA. In this work, a task-based approach based on modern signal detection theory and the corresponding frequency-dependent analysis has been proposed to perform the kV and mA optimization for both FBP and MBIR. By performing exhaustive measurements of a task-based detectability index through the technically accessible kV-mA parameter space, iso-detectability contours were generated and overlaid on top of iso-dose contours, from which the kV-mA pair that minimizes dose while still achieving the desired detectability level can be identified.
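
    The zero-frequency figures of merit mentioned here are simple to state in code; a minimal sketch, with hypothetical ROI arrays as inputs:

    ```python
    # Zero-frequency figures of merit: CNR and dose-normalized CNR (CNRD).
    import numpy as np

    def cnr(roi_signal, roi_background):
        return ((np.mean(roi_signal) - np.mean(roi_background))
                / np.std(roi_background))

    def cnrd(cnr_value, dose_mgy):
        # noise scales roughly as 1/sqrt(dose), so dividing CNR by sqrt(dose)
        # removes the trivial dose dependence when comparing kV-mA settings
        return cnr_value / np.sqrt(dose_mgy)
    ```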

  16. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR)

    SciTech Connect

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-11-15

    Purpose: Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule developments. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification with dose and slice thickness as additional variables. Methods: Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR; B: iNtuition), and analyzed for accuracy and precision. Results: Precision was found to be generally comparable between FBP and iterative reconstruction with no statistically significant difference noted for different dose levels, slice thickness, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. Conclusions: The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of

  17. The relative power of genome scans to detect local adaptation depends on sampling design and statistical method.

    PubMed

    Lotterhos, Katie E; Whitlock, Michael C

    2015-03-01

    Although genome scans have become a popular approach towards understanding the genetic basis of local adaptation, the field still does not have a firm grasp on how sampling design and demographic history affect the performance of genome scans on complex landscapes. To explore these issues, we compared 20 different sampling designs in equilibrium (i.e. island model and isolation by distance) and nonequilibrium (i.e. range expansion from one or two refugia) demographic histories in spatially heterogeneous environments. We simulated spatially complex landscapes, which allowed us to exploit local maxima and minima in the environment in 'pair' and 'transect' sampling strategies. We compared F(ST) outlier and genetic-environment association (GEA) methods for each of two approaches that control for population structure: with a covariance matrix or with latent factors. We show that while the relative power of two methods in the same category (F(ST) or GEA) depended largely on the number of individuals sampled, overall GEA tests had higher power in the island model and F(ST) had higher power under isolation by distance. In the refugia models, however, these methods varied in their power to detect local adaptation at weakly selected loci. At weakly selected loci, paired sampling designs had equal or higher power than transect or random designs to detect local adaptation. Our results can inform sampling designs for studies of local adaptation and have important implications for the interpretation of genome scans based on landscape data. PMID:25648189

  18. Bayesian Adaptive Exploration

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2004-04-01

    I describe a framework for adaptive scientific exploration based on iterating an Observation-Inference-Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data - measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object - show the approach can significantly improve observational efficiency in settings that have well-defined nonlinear models. I conclude with a list of open issues that must be addressed to make Bayesian adaptive exploration a practical and reliable tool for optimizing scientific exploration.
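
    In the Gaussian case, maximum entropy sampling reduces to observing where the predictive variance (hence predictive entropy) is largest; the sketch below does this for a toy Bayesian quadratic-regression model, with all settings assumed and unrelated to the paper's examples.

    ```python
    # Maximum entropy sampling for a toy Bayesian linear model (all assumed).
    import numpy as np

    rng = np.random.default_rng(3)
    candidates = np.linspace(0, 1, 101)            # candidate design points
    Phi = np.column_stack([np.ones_like(candidates), candidates, candidates**2])
    noise_var, prior_var = 0.05, 10.0
    A = np.eye(3) / prior_var                      # posterior precision matrix
    b = np.zeros(3)
    w_true = np.array([0.3, -1.0, 2.0])            # hidden "signal" parameters

    for step in range(5):
        cov = np.linalg.inv(A)
        pred_var = np.einsum("ij,jk,ik->i", Phi, cov, Phi) + noise_var
        i = int(np.argmax(pred_var))               # most informative candidate
        y = Phi[i] @ w_true + np.sqrt(noise_var) * rng.standard_normal()
        A += np.outer(Phi[i], Phi[i]) / noise_var  # Bayesian precision update
        b += Phi[i] * y / noise_var
        print(step, candidates[i])

    print("posterior mean:", np.linalg.inv(A) @ b)
    ```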

  19. Comparison of Iterative and Non-Iterative Strain-Gage Balance Load Calculation Methods

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2010-01-01

    The accuracy of iterative and non-iterative strain-gage balance load calculation methods was compared using data from the calibration of a force balance. Two iterative and one non-iterative method were investigated. In addition, transformations were applied to balance loads in order to process the calibration data in both direct read and force balance format. NASA's regression model optimization tool BALFIT was used to generate optimized regression models of the calibration data for each of the three load calculation methods. This approach made sure that the selected regression models met strict statistical quality requirements. The comparison of the standard deviation of the load residuals showed that the first iterative method may be applied to data in both the direct read and force balance format. The second iterative method, on the other hand, implicitly assumes that the primary gage sensitivities of all balance gages exist. Therefore, the second iterative method only works if the given balance data is processed in force balance format. The calibration data set was also processed using the non-iterative method. Standard deviations of the load residuals for the three load calculation methods were compared. Overall, the standard deviations show very good agreement. The load prediction accuracies of the three methods appear to be compatible as long as regression models used to analyze the calibration data meet strict statistical quality requirements. Recent improvements of the regression model optimization tool BALFIT are also discussed in the paper.

  20. ADAPTATION OF THE ADVANCED STATISTICAL TRAJECTORY REGIONAL AIR POLLUTION (ASTRAP) MODEL TO THE EPA VAX COMPUTER - MODIFICATIONS AND TESTING

    EPA Science Inventory

    The Advanced Statistical Trajectory Regional Air Pollution (ASTRAP) model simulates long-term transport and deposition of oxides of sulfur and nitrogen. It is a potential screening tool for assessing long-term effects on regional visibility from sulfur emission sources. However, a rigorou...

  1. Smooth statistical torsion angle potential derived from a large conformational database via adaptive kernel density estimation improves the quality of NMR protein structures

    PubMed Central

    Bermejo, Guillermo A; Clore, G Marius; Schwieters, Charles D

    2012-01-01

    Statistical potentials that embody torsion angle probability densities in databases of high-quality X-ray protein structures supplement the incomplete structural information of experimental nuclear magnetic resonance (NMR) datasets. By biasing the conformational search during the course of structure calculation toward highly populated regions in the database, the resulting protein structures display better validation criteria and accuracy. Here, a new statistical torsion angle potential is developed using adaptive kernel density estimation to extract probability densities from a large database of more than 10^6 quality-filtered amino acid residues. Incorporated into the Xplor-NIH software package, the new implementation clearly outperforms an older potential, widely used in NMR structure elucidation, in that it exhibits simultaneously smoother and sharper energy surfaces, and results in protein structures with improved conformation, nonbonded atomic interactions, and accuracy. PMID:23011872
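
    For the adaptive KDE ingredient, an Abramson-style 1-D sketch is given below: a fixed-bandwidth pilot density sets per-sample bandwidths so sparse regions get wider kernels. The paper's implementation works on periodic torsion-angle data, which this toy version ignores; all settings are assumed.

    ```python
    # Abramson-style adaptive kernel density estimate in 1-D (toy sketch).
    import numpy as np

    def adaptive_kde(samples, grid):
        n = len(samples)
        h0 = 1.06 * samples.std() * n ** (-0.2)         # Silverman pilot width
        gauss = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
        # pilot density at each sample, with the fixed bandwidth h0
        pilot = gauss((samples[:, None] - samples) / h0).mean(axis=1) / h0
        g = np.exp(np.log(pilot).mean())                # geometric mean density
        h_i = h0 * (pilot / g) ** -0.5                  # per-sample bandwidths
        # final estimate on the grid: wider kernels where the pilot is low
        return (gauss((grid[:, None] - samples) / h_i) / h_i).mean(axis=1)
    ```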

  2. Iterative and noniterative nonuniform quantisation techniques in digital holography

    NASA Astrophysics Data System (ADS)

    Shortt, Alison E.; Naughton, Thomas J.; Javidi, Bahram

    2006-04-01

    Compression is essential for efficient storage and transmission of three-dimensional (3D) digital holograms. The inherent speckle content in holographic data causes lossless compression techniques, such as Huffman and Burrows-Wheeler (BW), to perform poorly. Therefore, the combination of lossy quantisation followed by lossless compression is essential for effective compression of digital holograms. Our complex-valued digital holograms of 3D real-world objects were captured using phase-shift interferometry (PSI). Quantisation reduces the number of different real and imaginary values required to describe each hologram. Traditional data compression techniques can then be applied to the hologram to actually reduce its size. Since our data has a nonuniform distribution, the uniform quantisation technique does not perform optimally. We require nonuniform quantisation, since in a histogram representation our data is denser around the origin (low amplitudes), thus requiring more cluster centres, and sparser away from the origin (high amplitudes). By nonuniformly positioning the cluster centres to match the fact that there is a higher probability that the pixel will have a low amplitude value, the cluster centres can be used more efficiently. Nonuniform quantisation results in cluster centres that are adapted to the exact statistics of the input data. We analyse a number of iterative (k-means clustering, Kohonen competitive neural network, SOM, and annealed Hopfield neural network), and non-iterative (companding, histogram, and optimal) nonuniform quantisation techniques. We discuss the strengths and weaknesses of each technique and highlight important factors that must be considered when choosing between iterative and non-iterative nonuniform quantisation. We measure the degradation due to lossy quantisation in the reconstruction domain, using the normalised rms (NRMS) metric.
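
    As an illustration of the iterative nonuniform quantisation idea, the sketch below clusters complex values with k-means so that cluster centres concentrate near the origin where the data is dense; note it reports NRMS error in the data domain, whereas the study measures degradation in the reconstruction domain, and all settings here are assumed.

    ```python
    # k-means nonuniform quantisation of complex values (toy data, settings assumed).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    # synthetic speckle-like field: dense near the origin, sparse at high amplitude
    field = (rng.standard_normal(5000) + 1j * rng.standard_normal(5000)) \
            * rng.exponential(1.0, 5000)
    pairs = np.column_stack([field.real, field.imag])
    km = KMeans(n_clusters=32, n_init=10, random_state=0).fit(pairs)
    centers = km.cluster_centers_[km.labels_]        # quantised (re, im) pairs
    quantised = centers[:, 0] + 1j * centers[:, 1]
    nrms = np.linalg.norm(quantised - field) / np.linalg.norm(field)
    print(round(nrms, 4))                            # data-domain NRMS error
    ```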

  3. ITER EDA project status

    NASA Astrophysics Data System (ADS)

    Chuyanov, V. A.

    1996-10-01

    The status of the ITER design is as presented in the Interim Design Report accepted by the ITER council for consideration by the ITER parties. Physical and technical parameters of the machine, conditions of operation of the main nuclear systems, and the corresponding design and material choices are described, with conventional materials selected. To fully utilize the safety and economic potential of fusion, advanced materials are necessary. ITER shall and can be built with materials already available. The ITER project and advanced fusion material developments can proceed in parallel. The role of ITER is to establish (experimentally) the requirements for these materials and to provide a test bed for their final qualification in a fusion reactor environment. To achieve this goal, a first wall/blanket module test program is foreseen.

  4. Adaptive Management of Ecosystems

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management. As such, management may be treated as an experiment, with replication, or management may be conducted in an iterative manner. Although the concept has resonated with many...

  5. Energy Monitoring and Targeting as diagnosis; Applying work analysis to adapt a statistical change detection strategy using representation aiding

    NASA Astrophysics Data System (ADS)

    Hilliard, Antony

    Energy Monitoring and Targeting (M&T) is a well-established business process that develops information about utility energy consumption in a business or institution. While M&T has persisted as a worthwhile energy conservation support activity, it has not been widely adopted. This dissertation explains M&T challenges in terms of diagnosing and controlling energy consumption, informed by a naturalistic field study of M&T work. A Cognitive Work Analysis of M&T identifies structures that diagnosis can search, information flows unsupported by canonical tools, and opportunities to extend the most popular tool for M&T: Cumulative Sum of Residuals (CUSUM) charts. A design application outlines how CUSUM charts were augmented with a more contemporary statistical change detection strategy, Recursive Parameter Estimates, modified to better suit the M&T task using Representation Aiding principles. The design was experimentally evaluated in a controlled M&T synthetic task and was shown to significantly improve diagnosis performance.
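
    A minimal CUSUM-of-residuals chart of the kind discussed: fit a baseline model of consumption against a driver such as degree-days, then cumulate the residuals; a persistent slope in the cumulative sum indicates changed energy performance. All numbers below are invented for illustration.

    ```python
    # CUSUM-of-residuals chart for energy M&T (illustrative data).
    import numpy as np

    degree_days = np.array([310, 280, 150,  90,  40,  35,  80, 160, 250, 300])
    energy_kwh  = np.array([910, 830, 520, 360, 220, 260, 350, 545, 770, 980])

    slope, intercept = np.polyfit(degree_days, energy_kwh, 1)  # baseline model
    residuals = energy_kwh - (slope * degree_days + intercept)
    cusum = np.cumsum(residuals)   # persistent drift = changed performance
    print(np.round(cusum, 1))
    ```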

  6. Reducing the latency of the Fractal Iterative Method to half an iteration

    NASA Astrophysics Data System (ADS)

    Béchet, Clémentine; Tallon, Michel

    2013-12-01

    The fractal iterative method for atmospheric tomography (FRiM-3D) has been introduced to solve the wavefront reconstruction at the dimensions of an ELT with a low computational cost. Previous studies reported that only 3 iterations of the algorithm are required to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one is completed. Iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach to avoid iterations in the computation of the commands with FRiM-3D, thus allowing a low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of the "warm-start" strategy in adaptive optics. To our knowledge, this particular way to use the "warm-start" has not been reported before. Furthermore, by removing the requirement of iterating to compute the commands, the computational cost of the reconstruction with FRiM-3D can be simplified and reduced to no more than half the computational cost of a classical iteration. Thanks to simulations of both single-conjugate and multi-conjugate AO for the E-ELT, with FRiM-3D on the Octopus ESO simulator, we demonstrate the benefit of this approach. We finally demonstrate the robustness of this new implementation with respect to increasing measurement noise, wind speed and even modeling errors.

  7. Preconditioned Iterative Solver

    2002-08-01

    AztecOO contains a collection of preconditioned iterative methods for the solution of sparse linear systems of equations. In addition to providing many of the common algebraic preconditioners and basic iterative methods, AztecOO can be easily extended to interact with user-provided preconditioners and matrix operators.
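
    As a generic illustration of what "preconditioned iterative methods" means here, a Jacobi-preconditioned conjugate gradient sketch follows; this is plain Python for a symmetric positive-definite system, not AztecOO's actual API.

    ```python
    # Jacobi-preconditioned conjugate gradient for SPD systems (generic sketch).
    import numpy as np

    def pcg(A, b, tol=1e-10, max_iter=200):
        M_inv = 1.0 / np.diag(A)            # Jacobi (diagonal) preconditioner
        x = np.zeros_like(b)
        r = b - A @ x                       # initial residual
        z = M_inv * r                       # preconditioned residual
        p = z.copy()
        for _ in range(max_iter):
            Ap = A @ p
            alpha = (r @ z) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            z_new = M_inv * r_new
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
        return x
    ```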

  8. Iteration, Not Induction

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2009-01-01

    The main purpose of this note is to present and justify proof via iteration as an intuitive, creative and empowering method that is often available and preferable as an alternative to proofs via either mathematical induction or the well-ordering principle. The method of iteration depends only on the fact that any strictly decreasing sequence of…

  9. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445-style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.

  10. ITER nominates next leader

    NASA Astrophysics Data System (ADS)

    Clery, Daniel

    2015-01-01

    Bernard Bigot, chair of France’s Alternative Energies and Atomic Energy Commission (CEA), has been chosen as the next director-general of ITER - the experimental fusion reactor currently being built in Cadarache, France.

  11. ITER convertible blanket evaluation

    SciTech Connect

    Wong, C.P.C.; Cheng, E.

    1995-09-01

    Proposed International Thermonuclear Experimental Reactor (ITER) convertible blankets were reviewed. Key design difficulties were identified. A new particle filter concept is introduced and key performance parameters estimated. Results show that this particle filter concept can satisfy all of the convertible blanket design requirements except the generic issue of Be blanket lifetime. If the convertible blanket is an acceptable approach for ITER operation, this particle filter option should be a strong candidate.

  12. F-8C adaptive control law refinement and software development

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Stein, G.

    1981-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
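
    A toy sketch of the parallel-channel idea (a bank of Kalman filters at fixed points in parameter space, scored by the likelihood of their innovations; scalar dynamics and all numbers invented for illustration, not the F-8C design):

      import numpy as np

      rng = np.random.default_rng(1)
      a_true, Q, R = 0.95, 0.01, 0.04
      candidates = [0.80, 0.90, 0.95, 1.00]    # fixed locations in parameter space

      # Simulate the plant: x' = a*x + w, y = x + v.
      T, x = 200, 0.0
      ys = []
      for _ in range(T):
          x = a_true * x + rng.normal(0, np.sqrt(Q))
          ys.append(x + rng.normal(0, np.sqrt(R)))

      # One scalar Kalman filter per candidate, scored by innovation likelihood.
      m = len(candidates)
      xh, P, ll = np.zeros(m), np.ones(m), np.zeros(m)
      for y in ys:
          for i, a in enumerate(candidates):
              xp, Pp = a * xh[i], a * a * P[i] + Q          # predict
              S = Pp + R                                    # innovation variance
              e = y - xp
              ll[i] += -0.5 * (np.log(2 * np.pi * S) + e * e / S)
              K = Pp / S                                    # update
              xh[i], P[i] = xp + K * e, (1 - K) * Pp
      print("selected a =", candidates[int(np.argmax(ll))])  # typically 0.95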

  13. Iterative electro-optic matrix processor

    NASA Astrophysics Data System (ADS)

    Carlotto, M. J.

    An electro-optic vector matrix processor with electronic feedback is described. The iterative optical processor (IOP) is designed for the rapid solution of linear algebraic equations. The IOP and the iterative algorithm it realizes are analyzed and simulated. A version of the system was fabricated using advanced solid state light sources and detectors plus fiber optic technology, and its performance is evaluated. An extension of the system using wavelength multiplexing is developed and the basic system concepts demonstrated. Its use in the restoration of degraded images or signals (deconvolution) and the computation of matrix eigenvectors and eigenvalues and matrix inversion are demonstrated. The two major case studies pursued are: adaptive phased array radar processing and optimal control. In the former case, the system is used to compute the adaptive antenna weights for a radar system. In the latter case, the IOP solves the linear quadratic regulator and algebraic Riccati equations of modern control theory.
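
    The iterative algorithm realized by such a feedback processor is essentially a relaxation of the residual; a generic Richardson iteration (not the specific IOP hardware loop) captures the numerics:

      import numpy as np

      rng = np.random.default_rng(2)
      n = 8
      A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned test matrix
      b = rng.standard_normal(n)

      x = np.zeros(n)
      omega = 1.0          # converges when the spectral radius of I - omega*A is < 1
      for k in range(100):
          x = x + omega * (b - A @ x)   # each optical pass feeds the residual back
      print(np.linalg.norm(A @ x - b))  # ~1e-12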

  14. Impossible expectations: fMRI adaptation in the lateral occipital complex (LOC) is modulated by the statistical regularities of 3D structural information.

    PubMed

    Freud, Erez; Ganel, Tzvi; Avidan, Galia

    2015-11-15

    fMRI adaptation (fMRIa), the attenuation of fMRI signal which follows repeated presentation of a stimulus, is a well-documented phenomenon. Yet, the underlying neural mechanisms supporting this effect are not fully understood. Recently, short-term perceptual expectations, induced by specific experimental settings, were shown to play an important modulating role in fMRIa. Here we examined the role of long-term expectations, based on 3D structural statistical regularities, in the modulation of fMRIa. To this end, human participants underwent fMRI scanning while performing a same-different task on pairs of possible (regular, expected) objects and spatially impossible (irregular, unexpected) objects. We hypothesized that given the spatial irregularity of impossible objects in relation to real-world visual experience, the visual system would always generate a prediction which is biased to the possible version of the objects. Consistently, fMRIa effects in the lateral occipital cortex (LOC) were found for possible, but not for impossible objects. Additionally, in alternating trials the order of stimulus presentation modulated LOC activity. That is, reduced activation was observed in trials in which the impossible version of the object served as the prime object (i.e. first object) and was followed by the possible version compared to the reverse order. These results were also supported by the behavioral advantage observed for trials that were primed by possible objects. Together, these findings strongly emphasize the importance of perceptual expectations in object representation and provide novel evidence for the role of real-world statistical regularities in eliciting fMRIa. PMID:26254586

  15. ITER tokamak device

    NASA Astrophysics Data System (ADS)

    Doggett, J.; Salpietro, E.; Shatalov, G.

    1991-07-01

    The results of the Conceptual Design Activities for the International Thermonuclear Experimental Reactor (ITER) are summarized. These activities, carried out between April 1988 and December 1990, produced a consistent set of technical characteristics and preliminary plans for co-ordinated research and development support of ITER, a conceptual design, a description of design requirements and a preliminary construction schedule and cost estimate. After a description of the design basis, an overview is given of the tokamak device, its auxiliary systems, facility and maintenance. The interrelation and integration of the various subsystems that form the ITER tokamak concept are discussed. The 16 ITER equatorial port allocations, used for nuclear testing, diagnostics, fueling, maintenance, and heating and current drive, are given, as well as a layout of the reactor building. Finally, brief descriptions are given of the major ITER sub-systems, i.e., (1) magnet systems (toroidal and poloidal field coils and cryogenic systems), (2) containment structures (vacuum and cryostat vessels, machine gravity supports, attaching locks, passive loops and active coils), (3) first wall, (4) divertor plate (design and materials, performance and lifetime, a.o.), (5) blanket/shield system, (6) maintenance equipment, (7) current drive and heating, (8) fuel cycle system, and (9) diagnostics.

  16. Statistical Engineering in Air Traffic Management Research

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.

    2015-01-01

    NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.

  17. Robust iterative methods

    SciTech Connect

Saad, Y.

    1994-12-31

    In spite of the tremendous progress achieved in recent years in the general area of iterative solution techniques, there are still a few obstacles to the acceptance of iterative methods in a number of applications. These applications give rise to very indefinite or highly ill-conditioned non-Hermitian matrices. Trying to solve these systems with the simple-minded standard preconditioned Krylov subspace methods can be a frustrating experience. With the mathematical and physical models becoming more sophisticated, the typical linear systems which we encounter today are far more difficult to solve than those of just a few years ago. This trend is likely to accentuate. This workshop will discuss (1) these applications and the types of problems that they give rise to; and (2) recent progress in solving these problems with iterative methods. The workshop will end with a hopefully stimulating panel discussion with the speakers.

  18. Statistics of the sodium layer parameters at low geographic latitude and its impact on adaptive-optics sodium laser guide star characteristics

    NASA Astrophysics Data System (ADS)

    Moussaoui, N.; Clemesha, B. R.; Holzlöhner, R.; Simonich, D. M.; Bonaccini Calia, D.; Hackenberg, W.; Batista, P. P.

    2010-02-01

    Aims: To aid the design of laser guide star (LGS) assisted adaptive optics (AO) systems, we present an analysis of the statistics of the mesospheric sodium layer based on long-term observations (35 years). Methods: We analyze measurements of the Na-layer characteristics covering a long period (1973-2008), acquired at latitude 23° south, in São José dos Campos, São Paulo, Brazil. We note that Paranal (Chile) is located at latitude 24° south, approximately the same latitude as São Paulo. Results: This study allowed us to assess the availability of LGS-assisted AO systems depending on the sodium layer properties. We also present an analysis of the LGS spot elongation over the year, as well as the nocturnal and the seasonal variation in the mesospheric sodium layer parameters. Conclusions: The average values of the sodium layer parameters are 92.09 km for the centroid height, 11.37 km for the layer thickness, and 5×10¹³ m⁻² for the column abundance. Assuming a laser of sufficient power to produce an adequate photon return flux for an AO system with a column abundance of 4×10¹³ m⁻², a telescope could observe at low geographic latitudes with the sodium LGS more than 250 days per year. Increasing this power by 20%, we could observe throughout the entire year.

  19. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, produce modified schedules quickly, and exhibit 'anytime' behavior. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. We also show the anytime characteristics of the system. These experiments were performed within the domain of Space Shuttle ground processing.
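
    A minimal sketch of a constraint-based repair loop (a toy slot-capacity problem, not the Space Shuttle ground-processing model; the perturbation term mimics the paper's minimal-perturbation objective):

      import random

      random.seed(0)
      SLOTS, CAP = 10, 3
      initial = {t: random.randrange(SLOTS) for t in range(25)}   # flawed schedule
      schedule = dict(initial)

      def violations(sched):
          load = [0] * SLOTS
          for s in sched.values():
              load[s] += 1
          return [s for s, n in enumerate(load) if n > CAP]

      def cost(sched):
          over = sum(max(0, list(sched.values()).count(s) - CAP) for s in range(SLOTS))
          moved = sum(sched[t] != initial[t] for t in sched)      # perturbation penalty
          return over * 10 + moved

      for _ in range(1000):                                       # anytime loop
          bad = violations(schedule)
          if not bad:
              break
          slot = random.choice(bad)                               # pick a violated constraint
          task = random.choice([t for t, s in schedule.items() if s == slot])
          # move the task wherever combined cost (conflicts + perturbation) is least
          schedule[task] = min(range(SLOTS), key=lambda s: cost({**schedule, task: s}))
      print("conflict-free:", not violations(schedule))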

  1. Iterated multidimensional wave conversion

    NASA Astrophysics Data System (ADS)

    Brizard, A. J.; Tracy, E. R.; Johnston, D.; Kaufman, A. N.; Richardson, A. S.; Zobin, N.

    2011-12-01

    Mode conversion can occur repeatedly in a two-dimensional cavity (e.g., the poloidal cross section of an axisymmetric tokamak). We report on two novel concepts that allow for a complete and global visualization of the ray evolution under iterated conversions. First, iterated conversion is discussed in terms of ray-induced maps from the two-dimensional conversion surface to itself (which can be visualized in terms of three-dimensional rooms). Second, the two-dimensional conversion surface is shown to possess a symplectic structure derived from Dirac constraints associated with the two dispersion surfaces of the interacting waves.

  2. A holistic strategy for adaptive land management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Adaptive management is widely applied to natural resources management. Adaptive management can be generally defined as an iterative decision-making process that incorporates formulation of management objectives, actions designed to address these objectives, monitoring of results, and repeated adapta...

  3. Parallel iterative methods for sparse linear and nonlinear equations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    As three-dimensional models are gaining importance, iterative methods will become almost mandatory. Among these, preconditioned Krylov subspace methods have been viewed as the most efficient and reliable when solving linear as well as nonlinear systems of equations. Several different approaches have been taken to adapt iterative methods for supercomputers. Some of these approaches are discussed and the methods that deal more specifically with general unstructured sparse matrices, such as those arising from finite element methods, are emphasized.

  4. Iterative Vessel Segmentation of Fundus Images.

    PubMed

    Roychowdhury, Sohini; Koozekanani, Dara D; Parhi, Keshab K

    2015-07-01

    This paper presents a novel unsupervised iterative blood vessel segmentation algorithm using fundus images. First, a vessel enhanced image is generated by tophat reconstruction of the negative green plane image. An initial estimate of the segmented vasculature is extracted by global thresholding the vessel enhanced image. Next, new vessel pixels are identified iteratively by adaptive thresholding of the residual image generated by masking out the existing segmented vessel estimate from the vessel enhanced image. The new vessel pixels are, then, region grown into the existing vessel, thereby resulting in an iterative enhancement of the segmented vessel structure. As the iterations progress, the number of false edge pixels identified as new vessel pixels increases compared to the number of actual vessel pixels. A key contribution of this paper is a novel stopping criterion that terminates the iterative process leading to higher vessel segmentation accuracy. This iterative algorithm is robust to the rate of new vessel pixel addition since it achieves 93.2-95.35% vessel segmentation accuracy with 0.9577-0.9638 area under ROC curve (AUC) on abnormal retinal images from the STARE dataset. The proposed algorithm is computationally efficient and consistent in vessel segmentation performance for retinal images with variations due to pathology, uneven illumination, pigmentation, and fields of view since it achieves a vessel segmentation accuracy of about 95% in an average time of 2.45, 3.95, and 8 s on images from three public datasets DRIVE, STARE, and CHASE_DB1, respectively. Additionally, the proposed algorithm has more than 90% segmentation accuracy for segmenting peripapillary blood vessels in the images from the DRIVE and CHASE_DB1 datasets. PMID:25700436
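
    A schematic of the threshold-and-grow iteration with a crude stopping rule (synthetic data and arbitrary thresholds; the paper's tuned pipeline, tophat enhancement and exact criterion are not reproduced):

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(3)
      enhanced = rng.random((128, 128))              # stands in for the tophat-enhanced image
      enhanced[60:68, :] += 1.0                      # one bright synthetic "vessel"

      mask = enhanced > np.percentile(enhanced, 99)  # initial global-threshold estimate
      prev_new = np.inf
      for _ in range(20):
          residual = np.where(mask, 0.0, enhanced)   # mask out the current vessel estimate
          vals = residual[residual > 0]
          thr = vals.mean() + 2 * vals.std()         # adaptive threshold on the residual
          candidates = residual > thr
          # region-grow: keep only candidates touching the existing vessel mask
          grown = candidates & ndimage.binary_dilation(mask, iterations=2)
          n_new = int(grown.sum())
          if n_new == 0 or n_new > 1.5 * prev_new:   # crude stand-in for the stopping criterion
              break
          mask |= grown
          prev_new = n_new
      print("vessel pixels:", int(mask.sum()))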

  5. An Iterative Angle Trisection

    ERIC Educational Resources Information Center

    Muench, Donald L.

    2007-01-01

    The problem of angle trisection continues to fascinate people even though it has long been known that it can't be done with straightedge and compass alone. However, for practical purposes, a good iterative procedure can get you as close as you want. In this note, we present such a procedure. Using only straightedge and compass, our procedure…
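
    The note's construction is cut off above; one bisection-only scheme in the same spirit (an illustrative assumption, not necessarily Muench's procedure) iterates phi <- (theta - phi)/2, whose fixed point is theta/3, so each step needs only a single angle bisection:

      theta = 75.0            # degrees; any angle works
      phi = 0.0
      for k in range(1, 11):
          phi = (theta - phi) / 2.0          # one compass-and-straightedge bisection
          print(k, phi, abs(phi - theta / 3))
      # the error halves every step: since phi - theta/3 flips sign and
      # shrinks by a factor of 2, ten steps give ~3 decimal places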

  6. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user level sparse BLAS`; `Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.
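
    The archetypal user-level sparse BLAS kernel under discussion is the sparse matrix-vector product; in compressed sparse row (CSR) storage it reduces to a few lines (a plain-Python sketch of the kernel, not the toolkit interface):

      def csr_matvec(indptr, indices, data, x):
          """y = A @ x for A stored in CSR (row pointers, column indices, values)."""
          y = [0.0] * (len(indptr) - 1)
          for i in range(len(y)):
              for k in range(indptr[i], indptr[i + 1]):   # nonzeros of row i
                  y[i] += data[k] * x[indices[k]]
          return y

      # 2x2 example: [[2, 0], [1, 3]] @ [1, 1] -> [2.0, 4.0]
      print(csr_matvec([0, 1, 3], [0, 0, 1], [2.0, 1.0, 3.0], [1.0, 1.0]))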

  7. ITER Fusion Energy

    ScienceCinema

    Dr. Norbert Holtkamp

    2010-01-08

    ITER (Latin for "the way") is designed to demonstrate the scientific and technological feasibility of fusion energy. Fusion is the process by which two light atomic nuclei combine to form a heavier one and thus release energy. In the fusion process two isotopes of hydrogen, deuterium and tritium, fuse together to form a helium atom and a neutron. Thus fusion could provide large scale energy production without greenhouse effects; essentially limitless fuel would be available all over the world. The principal goals of ITER are to generate 500 megawatts of fusion power for periods of 300 to 500 seconds with a fusion power multiplication factor, Q, of at least 10: Q ≥ 10 (500 MW of fusion power from 50 MW of input power). The ITER Organization was officially established in Cadarache, France, on 24 October 2007. The seven members engaged in the project (China, the European Union, India, Japan, Korea, Russia and the United States) represent more than half the world's population. The costs for ITER are shared by the seven members. The cost for the construction will be approximately 5.5 billion Euros; a similar amount is foreseen for the twenty-year phase of operation and the subsequent decommissioning.

  8. Cosmic statistics of statistics

    NASA Astrophysics Data System (ADS)

    Szapudi, István; Colombi, Stéphane; Bernardeau, Francis

    1999-12-01

    The errors on statistics measured in finite galaxy catalogues are exhaustively investigated. The theory of errors on factorial moments by Szapudi & Colombi is applied to cumulants via a series expansion method. All results are subsequently extended to the weakly non-linear regime. Together with previous investigations this yields an analytic theory of the errors for moments and connected moments of counts in cells from highly non-linear to weakly non-linear scales. For non-linear functions of unbiased estimators, such as the cumulants, the phenomenon of cosmic bias is identified and computed. Since it is subdued by the cosmic errors in the range of applicability of the theory, correction for it is inconsequential. In addition, the method of Colombi, Szapudi & Szalay concerning sampling effects is generalized, adapting the theory for inhomogeneous galaxy catalogues. While previous work focused on the variance only, the present article calculates the cross-correlations between moments and connected moments as well for a statistically complete description. The final analytic formulae representing the full theory are explicit but somewhat complicated. Therefore we have made available a Fortran program capable of calculating the described quantities numerically (for further details e-mail SC at colombi@iap.fr). An important special case is the evaluation of the errors on the two-point correlation function, for which this should be more accurate than any method put forward previously. This tool will be immensely useful in the future for assessing the precision of measurements from existing catalogues, as well as aiding the design of new galaxy surveys. To illustrate the applicability of the results and to explore the numerical aspects of the theory qualitatively and quantitatively, the errors and cross-correlations are predicted under a wide range of assumptions for the future Sloan Digital Sky Survey. The principal results concerning the cumulants ξ, Q3 and Q4 are that

  9. The solution of radiative transfer problems in molecular bands without the LTE assumption by accelerated lambda iteration methods

    NASA Technical Reports Server (NTRS)

    Kutepov, A. A.; Kunze, D.; Hummer, D. G.; Rybicki, G. B.

    1991-01-01

    An iterative method based on the use of approximate transfer operators, which was designed initially to solve multilevel NLTE line formation problems in stellar atmospheres, is adapted and applied to the solution of the NLTE molecular band radiative transfer in planetary atmospheres. The matrices to be constructed and inverted are much smaller than those used in the traditional Curtis matrix technique, which makes possible the treatment of more realistic problems using relatively small computers. This technique converges much more rapidly than straightforward iteration between the transfer equation and the equations of statistical equilibrium. A test application of this new technique to the solution of NLTE radiative transfer problems for optically thick and thin bands (the 4.3 micron CO2 band in the Venusian atmosphere and the 4.7 and 2.3 micron CO bands in the earth's atmosphere) is described.
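
    A minimal numerical illustration of the accelerated lambda iteration idea for a two-level-atom-like model (toy operator, not the molecular-band code): the source function obeys S = (1 - eps) * Lambda(S) + eps * B, and the acceleration preconditions each update with the diagonal approximate operator Lambda*:

      import numpy as np

      n, eps = 100, 1e-3
      i = np.arange(n)
      Lam = np.exp(-np.abs(i[:, None] - i[None, :]) / 0.5)   # toy, strongly diagonal operator
      Lam /= Lam.sum(axis=1, keepdims=True) * 1.01            # row sums < 1 (photon escape)
      B = np.ones(n)                                          # scaled Planck function
      Lstar = np.diag(Lam).copy()                             # diagonal approximate operator

      def solve(accelerated, tol=1e-8, maxiter=20000):
          S = B.copy()
          for it in range(1, maxiter + 1):
              S_fs = (1 - eps) * (Lam @ S) + eps * B          # formal solution sweep
              dS = S_fs - S
              if accelerated:                                 # ALI: divide by 1 - (1-eps)*Lambda*
                  dS = dS / (1 - (1 - eps) * Lstar)
              S = S + dS
              if np.max(np.abs(dS)) < tol:
                  return it
          return maxiter

      print("plain lambda iteration:", solve(False), "iterations")
      print("accelerated (ALI)     :", solve(True), "iterations")   # far fewer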

  10. X-Ray Dose Reduction in Abdominal Computed Tomography Using Advanced Iterative Reconstruction Algorithms

    PubMed Central

    Ning, Peigang; Zhu, Shaocheng; Shi, Dapeng; Guo, Ying; Sun, Minghua

    2014-01-01

    Objective This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. Methods CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. Results At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At the identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9% respectively when compared with FBP. Conclusions Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively. PMID:24664174
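
    For reference, the contrast-to-noise ratio reported in such phantom studies is typically computed from an object and a background region of interest (a generic sketch; the paper's exact ROI recipe is not given in the abstract):

      import numpy as np

      def cnr(roi_object, roi_background):
          """CNR = |mean(object) - mean(background)| / std(background)."""
          return abs(np.mean(roi_object) - np.mean(roi_background)) / np.std(roi_background)

      print(cnr(np.array([110., 112., 108.]), np.array([100., 102., 98.])))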

  11. OBJECTIVE TASK-BASED ASSESSMENT OF LOW-CONTRAST DETECTABILITY IN ITERATIVE RECONSTRUCTION.

    PubMed

    Racine, Damien; Ott, Julien G; Ba, Alexandre; Ryckx, Nick; Bochud, François O; Verdun, Francis R

    2016-06-01

    Evaluating image quality by using receiver operating characteristic studies is time-consuming and difficult to implement. This work assesses a new iterative algorithm using a channelised Hotelling observer (CHO). For this purpose, an anthropomorphic abdomen phantom with spheres of various sizes and contrasts was scanned at 3 volume computed tomography dose index (CTDIvol) levels on a GE Revolution CT. Images were reconstructed using the iterative reconstruction method adaptive statistical iterative reconstruction-V (ASIR-V) at ASIR-V 0, 50 and 70 % and assessed by applying a CHO with dense difference-of-Gaussian channels and internal noise. Both CHO and human observers (HO) were compared based on a four-alternative forced-choice experiment, using the percentage correct as a figure of merit. The results showed good agreement between CHO and HO. Moreover, an improvement in the low-contrast detection was observed when switching from ASIR-V 0 to 50 %. The results underpin the finding that ASIR-V allows dose reduction. PMID:26922787
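
    A compact sketch of the CHO machinery (difference-of-Gaussian channels, a Hotelling template in channel space, and a 4-alternative forced-choice score; synthetic white-noise backgrounds and invented parameters, not the phantom data):

      import numpy as np

      rng = np.random.default_rng(4)
      N = 32                                   # image side
      yy, xx = np.mgrid[:N, :N] - N // 2
      r2 = xx**2 + yy**2

      # Difference-of-Gaussian channels (widths illustrative).
      sig = [1, 2, 4, 8]
      ch = np.array([np.exp(-r2/(2*(2*s)**2)) - np.exp(-r2/(2*s**2)) for s in sig])
      U = ch.reshape(len(sig), -1).T           # pixels x channels

      signal = 5.0 * np.exp(-r2 / (2 * 3.0**2)).ravel()
      def sample(n, with_signal):
          g = rng.normal(0, 10, (n, N * N))    # toy white-noise background
          return g + (signal if with_signal else 0)

      # Train the CHO: channelize, then Hotelling template in channel space.
      vs, vn = sample(500, True) @ U, sample(500, False) @ U
      S = 0.5 * (np.cov(vs.T) + np.cov(vn.T)) + 1e-6 * np.eye(len(sig))  # internal-noise-like ridge
      w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))

      # Test with a 4-alternative forced choice: pick the largest response.
      correct = 0
      for _ in range(1000):
          alts = [sample(1, k == 0)[0] @ U for k in range(4)]  # alternative 0 holds the signal
          correct += int(np.argmax([a @ w for a in alts]) == 0)
      print("percent correct:", correct / 10)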

  12. Synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON): A statistical model based iterative image reconstruction method to eliminate limited-view artifacts and to mitigate the temporal-average artifacts in time-resolved CT

    PubMed Central

    Chen, Guang-Hong; Li, Yinsheng

    2015-01-01

    Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial–temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial–temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial–temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm. Reconstruction errors and temporal fidelity
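
    Nuclear-norm regularization enters such solvers through its proximal map, singular value soft-thresholding; that single step is easy to isolate (generic SVT, not the full SMART-RECON algorithm):

      import numpy as np

      def svt(M, tau):
          """Proximal map of tau*||.||_* : soft-threshold the singular values of M."""
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return (U * np.maximum(s - tau, 0.0)) @ Vt

      rng = np.random.default_rng(5)
      X = rng.standard_normal((4096, 3)) @ rng.standard_normal((3, 20))  # rank-3 "dynamics"
      X += 0.1 * rng.standard_normal(X.shape)                            # plus noise
      low_rank = svt(X, tau=10.0)
      print(np.linalg.matrix_rank(low_rank))   # 3: noise directions are thresholded away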

  13. ITER breeding blanket design

    SciTech Connect

    Gohar, Y.; Cardella, A.; Ioki, K.; Lousteau, D.; Mohri, K.; Raffray, R.; Zolti, E.

    1995-12-31

    A breeding blanket design has been developed for ITER to provide the necessary tritium fuel to achieve the technical objectives of the Enhanced Performance Phase. It uses a ceramic breeder and water coolant for compatibility with the ITER machine design of the Basic Performance Phase. Lithium zirconate and lithium oxide are the selected ceramic breeders based on the current data base. Enriched lithium and a beryllium neutron multiplier are used for both breeders. Both forms of beryllium material, blocks and pebbles, are used at different blanket locations based on thermo-mechanical considerations and beryllium thickness requirements. Type 316LN austenitic steel is used as structural material, similar to the shielding blanket. Design issues and required R&D data are identified during the development of the design.

  14. Experimental investigation of iterative reconstruction techniques for high resolution mammography

    NASA Astrophysics Data System (ADS)

    Vengrinovich, Valery L.; Zolotarev, Sergei A.; Linev, Vladimir N.

    2014-02-01

    Further development of new iterative reconstruction algorithms to improve the quality of three-dimensional breast images restored from incomplete and noisy mammograms is presented. The algebraic reconstruction method with simultaneous iterations, the Simultaneous Algebraic Reconstruction Technique (SART), and the statistical Bayesian Iterative Reconstruction (BIR) method are considered here as the preferred iterative methods for improving image quality. For faster processing we use the Graphics Processing Unit (GPU). Total Variation (TV) minimization is used as a priori support to regularize the iteration process and to reduce the level of noise in the reconstructed image. Preliminary results with physical phantoms show that all examined methods are capable of reconstructing structures layer by layer and of separating layers whose images overlap in the Z-direction. It was found that traditional Shift-And-Add (SAA) tomosynthesis is inferior to the iterative SART and BIR methods in terms of suppression of anatomical noise and of image blurring between adjacent layers. Although the measured contrast-to-noise ratio in the presence of low-contrast internal structures is higher for SAA tomosynthesis than for the SART and BIR methods, its effectiveness in the presence of a structured background is rather poor. In our opinion the optimal results can be achieved using Bayesian iterative reconstruction (BIR).
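
    For orientation, the SART update named above has a compact algebraic form: back-project the row-normalized residual and normalize the correction per voxel (dense toy system; production codes use matched projector/backprojector pairs, often on the GPU):

      import numpy as np

      rng = np.random.default_rng(6)
      A = rng.random((60, 40))            # toy system matrix (rays x voxels)
      x_true = rng.random(40)
      b = A @ x_true                      # simulated projections

      row_sum = A @ np.ones(40)           # per-ray normalization
      col_sum = A.T @ np.ones(60)         # per-voxel normalization
      x = np.zeros(40)
      for _ in range(200):
          x += 0.5 * (A.T @ ((b - A @ x) / row_sum)) / col_sum   # SART sweep, relaxation 0.5
      print(np.linalg.norm(A @ x - b))    # residual shrinks toward 0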

  15. Neutron activation for ITER

    SciTech Connect

    Barnes, C.W.; Loughlin, M.J.; Nishitani, Takeo

    1996-04-29

    There are three primary goals for the Neutron Activation system for ITER: maintain a robust relative measure of fusion power with stability and high dynamic range (7 orders of magnitude); allow an absolute calibration of fusion power (energy); and provide a flexible and reliable system for materials testing. The nature of the activation technique is such that stability and high dynamic range can be intrinsic properties of the system. It has also been the technique that demonstrated (on JET and TFTR) the highest accuracy neutron measurements in DT operation. Since the gamma-ray detectors are not located on the tokamak and are therefore amenable to accurate characterization, and if material foils are placed very close to the ITER plasma with minimum scattering or attenuation, high overall accuracy in the fusion energy production (7-10%) should be achievable on ITER. In the paper, a conceptual design is presented. A system is shown to be capable of meeting these three goals, although detailed design issues remain to be solved.

  16. Linear iterative solvers for implicit ODE methods

    NASA Technical Reports Server (NTRS)

    Saylor, Paul E.; Skeel, Robert D.

    1990-01-01

    The numerical solution of stiff initial value problems, which leads to the problem of solving large systems of mildly nonlinear equations, is considered. For many problems derived from engineering and science, a solution is possible only with methods derived from iterative linear equation solvers. A common approach to solving the nonlinear equations is to employ an approximate solution obtained from an explicit method. The error is examined to determine how it is distributed among the stiff and non-stiff components, which bears on the choice of an iterative method. The conclusion is that the error is (roughly) uniformly distributed, a fact that suggests the Chebyshev method (and the accompanying Manteuffel adaptive parameter algorithm). This method is described, with comments also on Richardson's method and its advantages for large problems. Richardson's method and the Chebyshev method with the Manteuffel algorithm are applied to the solution of the nonlinear equations by Newton's method.

  17. High contrast laminography using iterative algorithms

    NASA Astrophysics Data System (ADS)

    Kroupa, M.; Jakubek, J.

    2011-01-01

    3D X-ray imaging of the internal structure of large flat objects is often complicated by limited access to all viewing angles or extremely high absorption in certain directions, so the standard method of computed tomography (CT) fails. This problem can be solved by the method of laminography. During a laminographic measurement the imaging detector is placed close to the sample while the X-ray source irradiates both sample and detector at different angles. The application of the state-of-the-art pixel detector Medipix in laminography, together with adapted tomographic iterative algorithms for 3D reconstruction of the sample structure, has been investigated. Iterative algorithms such as EM (Expectation Maximization) and OSEM (Ordered Subset Expectation Maximization) improve the quality of the reconstruction and allow the inclusion of more complex physical models. In this contribution, results and proposed future approaches which could be used for resolution enhancement are presented.
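
    The EM-family updates mentioned are multiplicative; a minimal OSEM sketch with two ordered subsets (toy dense geometry and noiseless data, not the Medipix laminography setup):

      import numpy as np

      rng = np.random.default_rng(7)
      A = rng.random((80, 30))                  # rays x voxels, nonnegative
      x_true = rng.random(30) + 0.1
      b = A @ x_true                            # noiseless projections

      subsets = [np.arange(0, 80, 2), np.arange(1, 80, 2)]   # two ordered subsets
      x = np.ones(30)                           # positive start
      for _ in range(50):
          for idx in subsets:                   # one multiplicative EM update per subset
              Asub = A[idx]
              x *= (Asub.T @ (b[idx] / (Asub @ x))) / (Asub.T @ np.ones(len(idx)))
      print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # small relative residual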

  18. Morphological representation of order-statistics filters.

    PubMed

    Charif-Chefchaouni, M; Schonfeld, D

    1995-01-01

    We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as bounds (lower and upper, respectively) on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as (tighter) bounds (lower and upper, respectively) on iterations of order-statistics filters. Simulations of the application of the results presented to image restoration are finally provided. PMID:18290034
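
    The flavor of such bounds is easy to probe numerically: for a flat window, erosion and dilation (minimum and maximum filters) always bracket the median, while the tighter opening/closing bounds hold under the paper's conditions; the check below is an empirical probe, not a proof:

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(8)
      img = rng.random((64, 64))
      size = 5                                          # flat 5x5 window

      med = ndimage.median_filter(img, size=size)
      ero = ndimage.minimum_filter(img, size=size)      # grey erosion, flat SE
      dil = ndimage.maximum_filter(img, size=size)      # grey dilation, flat SE
      print(bool(np.all(ero <= med) and np.all(med <= dil)))   # True always

      opn = ndimage.grey_opening(img, size=size)
      cls = ndimage.grey_closing(img, size=size)
      # fraction of pixels where the tighter open/close bounds hold on this image
      print(float(np.mean(opn <= med)), float(np.mean(med <= cls)))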

  19. Searching with iterated maps

    PubMed Central

    Elser, V.; Rankenburg, I.; Thibault, P.

    2007-01-01

    In many problems that require extensive searching, the solution can be described as satisfying two competing constraints, where satisfying each independently does not pose a challenge. As an alternative to tree-based and stochastic searching, for these problems we propose using an iterated map built from the projections to the two constraint sets. Algorithms of this kind have been the method of choice in a large variety of signal-processing applications; we show here that the scope of these algorithms is surprisingly broad, with applications as diverse as protein folding and Sudoku. PMID:17202267
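
    A toy instance of the two-constraint projection search (the beta = 1 difference map on a line and a circle; the paper's applications use far richer constraint sets):

      import numpy as np

      def P_circle(p, r=2.0):                  # projection onto the circle |p| = r
          n = np.linalg.norm(p)
          return p * (r / n) if n > 0 else np.array([r, 0.0])

      def P_line(p):                           # projection onto the line y = 1
          return np.array([p[0], 1.0])

      x = np.array([5.0, -3.0])                # arbitrary start
      for _ in range(100):
          # beta = 1 difference map: x <- x + P_A(2 P_B(x) - x) - P_B(x)
          pb = P_line(x)
          x = x + P_circle(2 * pb - x) - pb
      sol = P_line(x)                          # at a fixed point this lies in both sets
      print(sol, np.linalg.norm(sol))          # y ~ 1 and norm ~ 2, i.e. (sqrt(3), 1)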

  20. Iterative Magnetometer Calibration

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph

    2006-01-01

    This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.
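
    The attitude-independent part of such a calibration can be sketched compactly: matching measured to model field magnitudes makes the bias estimable by linear least squares once |b|^2 is carried as an auxiliary unknown (toy numbers; the scale factors and alignments of the full method are omitted):

      import numpy as np

      rng = np.random.default_rng(13)
      b_true = np.array([120.0, -80.0, 45.0])                 # hypothetical bias, nT
      B = 30000.0 * rng.standard_normal((200, 3))
      B *= (45000.0 / np.linalg.norm(B, axis=1))[:, None]     # field samples of known magnitude
      meas = B + b_true + 5.0 * rng.standard_normal((200, 3)) # TAM readings

      # |meas - b|^2 = |B_model|^2  =>  2*meas.b - c = |meas|^2 - |B_model|^2, with c = |b|^2
      A = np.hstack([2 * meas, -np.ones((200, 1))])
      rhs = (meas ** 2).sum(1) - (B ** 2).sum(1)              # model magnitudes assumed known
      sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
      print("estimated bias:", sol[:3], "(true:", b_true, ")")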

  1. Two-step iterative reconstruction of region-of-interest with truncated projection in computed tomography

    NASA Astrophysics Data System (ADS)

    Yamakawa, Keisuke; Kojima, Shinichi

    2014-03-01

    Iteratively reconstructing data only inside the region of interest (ROI) is widely used to acquire CT images in less computation time while maintaining high spatial resolution. A method that subtracts projected data outside the ROI from full-coverage measured data has been proposed. A serious problem with this method is that the accuracy of the measured data confined inside the ROI decreases according to the truncation error outside the ROI. We propose a two-step iterative method that reconstructs the image over the full coverage, in addition to the conventional iterative reconstruction inside the ROI, in order to reduce the truncation error in full-coverage images. Statistical information (e.g., quantum-noise distributions) acquired from detected X-ray photons is generally used in iterative methods as a photon weight to efficiently reduce image noise. Our proposed method applies one of two kinds of weights (photon or constant weights), chosen adaptively by taking into consideration the influence of the truncation error. The effectiveness of the proposed method compared with that of the conventional method was evaluated in terms of simulated CT values by using elliptical phantoms and an abdomen phantom. The standard deviation of error and the average absolute error of the proposed method on the profile curve were reduced from 3.4 to 0.4 HU and from 2.8 to 0.8 HU, respectively, compared with the conventional method. As a result, applying a suitable weight on the basis of the target object made it possible to effectively reduce the errors in CT images.

  2. ITER helium ash accumulation

    SciTech Connect

    Hogan, J.T.; Hillis, D.L.; Galambos, J.; Uckan, N.A. ); Dippel, K.H.; Finken, K.H. . Inst. fuer Plasmaphysik); Hulse, R.A.; Budny, R.V. . Plasma Physics Lab.)

    1990-01-01

    Many studies have shown the importance of the ratio υ_He/υ_E in determining the level of He ash accumulation in future reactor systems. Results of the first tokamak He removal experiments have been analysed, and a first estimate of the ratio υ_He/υ_E to be expected for future reactor systems has been made. The experiments were carried out for neutral beam heated plasmas in the TEXTOR tokamak, at KFA/Jülich. Helium was injected both as a short puff and continuously, and subsequently extracted with the Advanced Limiter Test-II pump limiter. The rate at which the He density decays has been determined with absolutely calibrated charge exchange spectroscopy, and compared with theoretical models, using the Multiple Impurity Species Transport (MIST) code. An analysis of energy confinement has been made with the PPPL TRANSP code, to distinguish beam from thermal confinement, especially for low density cases. The ALT-II pump limiter system is found to exhaust the He with a maximum exhaust efficiency (8 pumps) of ~8%. We find 1 < υ_He/υ_E < 3.3 for the database of cases analysed to date. Analysis with the ITER TETRA systems code shows that these values would be adequate to achieve the required He concentration with the present ITER divertor He extraction system.

  3. Improving IRT Item Bias Detection with Iterative Linking and Ability Scale Purification.

    ERIC Educational Resources Information Center

    Park, Dong-Gun; Lautenschlager, Gary J.

    1990-01-01

    The effectiveness of two iterative methods of item response theory (IRT) item bias detection was examined in a simulation study. A modified form of the iterative item parameter linking method of F. Drasgow and an adaptation of the test purification procedure of F. M. Lord were compared. (SLD)

  4. Application of Adaptive Design Methodology in Development of a Long-Acting Glucagon-Like Peptide-1 Analog (Dulaglutide): Statistical Design and Simulations

    PubMed Central

    Skrivanek, Zachary; Berry, Scott; Berry, Don; Chien, Jenny; Geiger, Mary Jane; Anderson, James H.; Gaydos, Brenda

    2012-01-01

    Background Dulaglutide (dula, LY2189265), a long-acting glucagon-like peptide-1 analog, is being developed to treat type 2 diabetes mellitus. Methods To foster the development of dula, we designed a two-stage adaptive, dose-finding, inferentially seamless phase 2/3 study. The Bayesian theoretical framework is used to adaptively randomize patients in stage 1 to 7 dula doses and, at the decision point, to either stop for futility or to select up to 2 dula doses for stage 2. After dose selection, patients continue to be randomized to the selected dula doses or comparator arms. Data from patients assigned the selected doses will be pooled across both stages and analyzed with an analysis of covariance model, using baseline hemoglobin A1c and country as covariates. The operating characteristics of the trial were assessed by extensive simulation studies. Results Simulations demonstrated that the adaptive design would identify the correct doses 88% of the time, compared to as low as 6% for a fixed-dose design (the latter value based on frequentist decision rules analogous to the Bayesian decision rules for adaptive design). Conclusions This article discusses the decision rules used to select the dula dose(s); the mathematical details of the adaptive algorithm—including a description of the clinical utility index used to mathematically quantify the desirability of a dose based on safety and efficacy measurements; and a description of the simulation process and results that quantify the operating characteristics of the design. PMID:23294775

  5. ECRH System For ITER

    SciTech Connect

    Darbos, C.; Henderson, M.; Gandini, F.; Albajar, F.; Bomcelli, T.; Heidinger, R.; Saibene, G.; Chavan, R.; Goodman, T.; Hogge, J. P.; Sauter, O.; Denisov, G.; Farina, D.; Kajiwara, K.; Kasugai, A.; Kobayashi, N.; Oda, Y.; Ramponi, G.

    2009-11-26

    A 26 MW Electron Cyclotron Heating and Current Drive (EC H&CD) system is to be installed for ITER. The main objectives are to provide start-up assist, central H&CD and control of MHD activity. These are achieved by a combination of two types of launchers, one located in an equatorial port and the second type in four upper ports. The physics applications are partitioned between the two launchers, based on the deposition location and driven current profiles. The equatorial launcher (EL) will access from the plasma axis to mid radius with a relatively broad profile useful for central heating and current drive applications, while the upper launchers (ULs) will access roughly the outer half of the plasma radius with a very narrow peaked profile for the control of the Neoclassical Tearing Modes (NTM) and sawtooth oscillations. The EC power can be switched between launchers on a time scale as needed by the immediate physics requirements. A revision of all injection angles of all launchers is under consideration for increased EC physics capabilities while relaxing the engineering constraints of both the EL and ULs. A series of design reviews are being planned with the five parties (EU, IN, JA, RF, US) procuring the EC system, the EC community and ITER Organization (IO). The review meetings qualify the design and provide an environment for enhancing performances while reducing costs, simplifying interfaces, predicting technology upgrades and commercial availability. In parallel, the test programs for critical components are being supported by IO and performed by the Domestic Agencies (DAs) for minimizing risks. The wide participation of the DAs provides a broad representation from the EC community, with the aim of collecting all expertise in guiding the EC system optimization. Still a strong relationship between IO and the DA is essential for optimizing the design of the EC system and for the installation and commissioning of all ex-vessel components when several

  6. ECRH System For ITER

    NASA Astrophysics Data System (ADS)

    Darbos, C.; Henderson, M.; Albajar, F.; Bigelow, T.; Bomcelli, T.; Chavan, R.; Denisov, G.; Farina, D.; Gandini, F.; Heidinger, R.; Goodman, T.; Hogge, J. P.; Kajiwara, K.; Kasugai, A.; Kern, S.; Kobayashi, N.; Oda, Y.; Ramponi, G.; Rao, S. L.; Rasmussen, D.; Rzesnicki, T.; Saibene, G.; Sakamoto, K.; Sauter, O.; Scherer, T.; Strauss, D.; Takahashi, K.; Zohm, H.

    2009-11-01

    A 26 MW Electron Cyclotron Heating and Current Drive (EC H&CD) system is to be installed for ITER. The main objectives are to provide start-up assist, central H&CD and control of MHD activity. These are achieved by a combination of two types of launchers, one located in an equatorial port and the second type in four upper ports. The physics applications are partitioned between the two launchers, based on the deposition location and driven current profiles. The equatorial launcher (EL) will access from the plasma axis to mid radius with a relatively broad profile useful for central heating and current drive applications, while the upper launchers (ULs) will access roughly the outer half of the plasma radius with a very narrow peaked profile for the control of the Neoclassical Tearing Modes (NTM) and sawtooth oscillations. The EC power can be switched between launchers on a time scale as needed by the immediate physics requirements. A revision of all injection angles of all launchers is under consideration for increased EC physics capabilities while relaxing the engineering constraints of both the EL and ULs. A series of design reviews are being planned with the five parties (EU, IN, JA, RF, US) procuring the EC system, the EC community and ITER Organization (IO). The review meetings qualify the design and provide an environment for enhancing performances while reducing costs, simplifying interfaces, predicting technology upgrades and commercial availability. In parallel, the test programs for critical components are being supported by IO and performed by the Domestic Agencies (DAs) for minimizing risks. The wide participation of the DAs provides a broad representation from the EC community, with the aim of collecting all expertise in guiding the EC system optimization. Still a strong relationship between IO and the DA is essential for optimizing the design of the EC system and for the installation and commissioning of all ex-vessel components when several teams

  7. Iterative modulo scheduling

    SciTech Connect

    Rau, B.R.

    1996-02-01

    Modulo scheduling is a framework within which algorithms for software pipelining innermost loops may be defined. The framework specifies a set of constraints that must be met in order to achieve a legal modulo schedule. A wide variety of algorithms and heuristics can be defined within this framework. Little work has been done to evaluate and compare alternative algorithms and heuristics for modulo scheduling from the viewpoints of schedule quality as well as computational complexity. This, along with a vague and unfounded perception that modulo scheduling is computationally expensive as well as difficult to implement, has inhibited its incorporation into product compilers. This paper presents iterative modulo scheduling, a practical algorithm that is capable of dealing with realistic machine models. The paper also characterizes the algorithm in terms of the quality of the generated schedules as well as the computational cost incurred.

  8. A bounded iterative closest point method for minimally invasive registration of the femur.

    PubMed

    Rodriguez y Baena, Ferdinando; Hawke, Trevor; Jakopec, Matjaz

    2013-10-01

    This article describes a novel method for image-based, minimally invasive registration of the femur, for application to computer-assisted unicompartmental knee arthroplasty. The method is adapted from the well-known iterative closest point algorithm. By utilising an estimate of the hip centre on both the preoperative model and intraoperative patient anatomy, the proposed 'bounded' iterative closest point algorithm robustly produces accurate varus-valgus and anterior-posterior femoral alignment with minimal distal access requirements. Similar to the original iterative closest point implementation, the bounded iterative closest point algorithm converges monotonically to the closest minimum, and the presented case includes a common method for global minimum identification. The bounded iterative closest point method has shown to have exceptional resistance to noise during feature acquisition through simulations and in vitro plastic bone trials, where its performance is compared to a standard form of the iterative closest point algorithm. PMID:23959859
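
    For context, the underlying iterative closest point loop alternates nearest-neighbour correspondence with a closed-form rigid fit (standard Kabsch-based ICP; the paper's bounded variant additionally constrains an estimated hip centre, which is not reproduced here):

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid(P, Q):
          """Least-squares rotation/translation mapping P onto Q (Kabsch)."""
          cp, cq = P.mean(0), Q.mean(0)
          U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
          D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])   # keep a proper rotation
          R = Vt.T @ D @ U.T
          return R, cq - R @ cp

      def icp(src, dst, iters=30):
          tree = cKDTree(dst)
          cur = src.copy()
          for _ in range(iters):
              _, idx = tree.query(cur)             # closest-point correspondences
              R, t = best_rigid(cur, dst[idx])
              cur = cur @ R.T + t                  # apply the incremental transform
          return cur

      rng = np.random.default_rng(9)
      model = rng.random((300, 3))
      a = 0.2
      Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
      probe = model @ Rz.T + np.array([0.05, -0.02, 0.01])   # displaced copy of the model
      print(np.abs(icp(probe, model) - model).max())          # ~0 after convergence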

  9. ITER Diagnostic First Wall

    SciTech Connect

G. Douglas Loesser, et al.

    2012-09-21

    The ITER Diagnostic Division is responsible for designing and procuring the First Wall Blankets that are mounted on the vacuum vessel port plugs at both the upper and equatorial levels. This paper discusses the effects of the diagnostic aperture shape and configuration on the coolant circuit design. The diagnostic first wall (DFW) design is driven in large part by the need to conform the coolant arrangement to a wide variety of diagnostic apertures, combined with the more severe heating conditions at the surface facing the plasma, the first wall. At the first wall, a radiant heat flux of 35 W/cm² combines with approximate peak volumetric heating rates of 8 W/cm³ (equatorial ports) and 5 W/cm³ (upper ports). Here at the FW, a fast thermal response is desirable and leads to a thin element between the heat flux and the coolant. This requirement is opposed by the wish for a thicker FW element to accommodate surface erosion and other off-normal plasma events.

  10. Mode conversion in ITER

    NASA Astrophysics Data System (ADS)

    Jaeger, E. F.; Berry, L. A.; Myra, J. R.

    2006-10-01

    Fast magnetosonic waves in the ion cyclotron range of frequencies (ICRF) can convert to much shorter wavelength modes such as ion Bernstein waves (IBW) and ion cyclotron waves (ICW) [1]. These modes are potentially useful for plasma control through the generation of localized currents and sheared flows. As part of the SciDAC Center for Simulation of Wave-Plasma Interactions project, the AORSA global-wave solver [2] has been ported to the new, dual-core Cray XT-3 (Jaguar) at ORNL where it demonstrates excellent scaling with the number of processors. Preliminary calculations using 4096 processors have allowed the first full-wave simulations of mode conversion in ITER. Mode conversion from the fast wave to the ICW is observed in mixtures of deuterium, tritium and helium-3 at 53 MHz. The resulting flow velocity and electric field shear will be calculated. [1] F.W. Perkins, Nucl. Fusion 17, 1197 (1977). [2] E.F. Jaeger, L.A. Berry, J.R. Myra, et al., Phys. Rev. Lett. 90, 195001-1 (2003).

  11. Iterative denoising of ghost imaging.

    PubMed

    Yao, Xu-Ri; Yu, Wen-Kai; Liu, Xue-Feng; Li, Long-Zhen; Li, Ming-Fei; Wu, Ling-An; Zhai, Guang-Jie

    2014-10-01

    We present a new technique to denoise ghost imaging (GI) in which conventional intensity correlation GI and an iteration process have been combined to give an accurate estimate of the actual noise affecting image quality. The blurring influence of the speckle areas in the beam is reduced in the iteration by setting a threshold. It is shown that with an appropriate choice of threshold value, the quality of the iterative GI reconstructed image is much better than that of differential GI for the same number of measurements. This denoising method thus offers a very effective approach to promote the implementation of GI in real applications. PMID:25322001
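
    The conventional intensity-correlation reconstruction that the iteration then refines is itself only a correlation between bucket signals and speckle patterns (synthetic sketch; the paper's thresholded iteration is not reproduced):

      import numpy as np

      rng = np.random.default_rng(10)
      N, M = 32, 4000
      obj = np.zeros((N, N)); obj[10:22, 14:18] = 1.0       # transmissive slit (toy object)

      speckle = rng.random((M, N, N))                       # M random illumination patterns
      bucket = (speckle * obj).sum(axis=(1, 2))             # single-pixel detector signals

      # Conventional correlation GI: G(x) = <I S(x)> - <I><S(x)>
      G = (bucket[:, None, None] * speckle).mean(0) - bucket.mean() * speckle.mean(0)
      print(G[obj > 0].mean(), G[obj == 0].mean())          # object pixels stand out from ~0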

  12. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-01

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/. PMID:25083512
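
    A simplified sketch of the weighted-binary-matrix-sampling loop (ordinary least squares with a holdout split standing in for the paper's calibration models; all parameters illustrative):

      import numpy as np

      rng = np.random.default_rng(11)
      n, p = 200, 30
      X = rng.standard_normal((n, p))
      y = X[:, :5] @ np.array([3., -2., 1.5, 2., -1.]) + 0.3 * rng.standard_normal(n)
      Xtr, ytr, Xva, yva = X[:120], y[:120], X[120:], y[120:]

      def val_error(mask):
          if mask.sum() == 0:
              return np.inf
          coef, *_ = np.linalg.lstsq(Xtr[:, mask], ytr, rcond=None)
          return np.mean((Xva[:, mask] @ coef - yva) ** 2)

      w = np.full(p, 0.5)                                    # inclusion weights
      for _ in range(15):
          masks = rng.random((500, p)) < w                   # weighted binary matrix sampling
          errs = np.array([val_error(m) for m in masks])
          best = masks[np.argsort(errs)[:50]]                # keep the best 10% of sub-models
          w = best.mean(0)                                   # update weights: the space shrinks
      print("selected variables:", np.flatnonzero(w > 0.8))  # expect roughly [0 1 2 3 4]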

  13. Channeled spectropolarimetry using iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Dennis J.; LaCasse, Charles F.; Craven, Julia M.

    2016-05-01

    Channeled spectropolarimeters (CSP) measure the polarization state of light as a function of wavelength. Conventional Fourier reconstruction suffers from noise, assumes the channels are band-limited, and requires uniformly spaced samples. To address these problems, we propose an iterative reconstruction algorithm. We develop a mathematical model of CSP measurements and minimize a cost function based on this model. We simulate a measured spectrum using example Stokes parameters, from which we compare conventional Fourier reconstruction and iterative reconstruction. Importantly, our iterative approach can reconstruct signals that contain more bandwidth, an advancement over Fourier reconstruction. Our results also show that iterative reconstruction mitigates noise effects, processes non-uniformly spaced samples without interpolation, and more faithfully recovers the ground truth Stokes parameters. This work offers a significant improvement to Fourier reconstruction for channeled spectropolarimetry.

  14. The ITER project construction status

    NASA Astrophysics Data System (ADS)

    Motojima, O.

    2015-10-01

    The pace of the ITER project in St Paul-lez-Durance, France is accelerating rapidly into its peak construction phase. With the completion of the B2 slab in August 2014, which will support about 400 000 metric tons of the tokamak complex structures and components, the construction is advancing on a daily basis. Magnet, vacuum vessel, cryostat, thermal shield, first wall and divertor structures are under construction or in prototype phase in the ITER member states of China, Europe, India, Japan, Korea, Russia, and the United States. Each of these member states has its own domestic agency (DA) to manage their procurements of components for ITER. Plant systems engineering is being transformed to fully integrate the tokamak and its auxiliary systems in preparation for the assembly and operations phase. CODAC, diagnostics, and the three main heating and current drive systems are also progressing, including the construction of the neutral beam test facility building in Padua, Italy. The conceptual design of the Chinese test blanket module system for ITER has been completed and those of the EU are well under way. Significant progress has been made addressing several outstanding physics issues including disruption load characterization, prediction, avoidance, and mitigation, first wall and divertor shaping, edge pedestal and SOL plasma stability, fuelling and plasma behaviour during confinement transients and W impurity transport. Further development of the ITER Research Plan has included a definition of the required plant configuration for 1st plasma and subsequent phases of ITER operation, as well as the major plasma commissioning activities and the needs of the R&D program accompanying ITER construction by the ITER parties.

  15. Iterative reconstruction of detector response of an Anger gamma camera.

    PubMed

    Morozov, A; Solovov, V; Alves, F; Domingos, V; Martins, R; Neves, F; Chepel, V

    2015-05-21

    Statistical event reconstruction techniques can give better results for gamma cameras than the traditional centroid method. However, implementation of such techniques requires detailed knowledge of the photomultiplier tube light-response functions. Here we describe an iterative method which allows one to obtain the response functions from flood irradiation data without imposing strict requirements on the spatial uniformity of the event distribution. A successful application of the method for medical gamma cameras is demonstrated using both simulated and experimental data. An implementation of the iterative reconstruction technique capable of operating in real time is presented. We show that this technique can also be used for monitoring photomultiplier gain variations. PMID:25951792

  16. Iterative reconstruction of detector response of an Anger gamma camera

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Solovov, V.; Alves, F.; Domingos, V.; Martins, R.; Neves, F.; Chepel, V.

    2015-05-01

    Statistical event reconstruction techniques can give better results for gamma cameras than the traditional centroid method. However, implementation of such techniques requires detailed knowledge of the photomultiplier tube light-response functions. Here we describe an iterative method which allows one to obtain the response functions from flood irradiation data without imposing strict requirements on the spatial uniformity of the event distribution. A successful application of the method for medical gamma cameras is demonstrated using both simulated and experimental data. An implementation of the iterative reconstruction technique capable of operating in real time is presented. We show that this technique can also be used for monitoring photomultiplier gain variations.

  17. Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report

    SciTech Connect

    Saad, Yousef

    2014-01-16

    The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization, when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, pARMS (version 3). As part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the
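
    A minimal sketch of ILU-preconditioned GMRES of the kind the report studies, using standard SciPy building blocks (the test matrix, drop tolerance, and fill factor are illustrative assumptions):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Hypothetical test system: a 2D Poisson-type matrix; the report's
        # Helmholtz and discontinuous-coefficient problems would be set up
        # analogously, with shifted/complex variants of the ILU factorization.
        n = 50
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
        b = np.ones(A.shape[0])

        ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU
        M = spla.LinearOperator(A.shape, ilu.solve)          # preconditioner

        x, info = spla.gmres(A, b, M=M, restart=50)
        print("info:", info, " residual:", np.linalg.norm(b - A @ x))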

  18. A randomised trial of adaptive pacing therapy, cognitive behaviour therapy, graded exercise, and specialist medical care for chronic fatigue syndrome (PACE): statistical analysis plan

    PubMed Central

    2013-01-01

    Background The publication of protocols by medical journals is increasingly becoming an accepted means for promoting good quality research and maximising transparency. Recently, Finfer and Bellomo have suggested the publication of statistical analysis plans (SAPs). The aim of this paper is to make public and to report in detail the planned analyses that were approved by the Trial Steering Committee in May 2010 for the principal papers of the PACE (Pacing, graded Activity, and Cognitive behaviour therapy: a randomised Evaluation) trial, a treatment trial for chronic fatigue syndrome. It illustrates planned analyses of a complex intervention trial that allows for the impact of clustering by care providers, where multiple care-providers are present for each patient in some but not all arms of the trial. Results The trial design, objectives and data collection are reported. Considerations relating to blinding, samples, adherence to the protocol, stratification, centre and other clustering effects, missing data, multiplicity and compliance are described. Descriptive, interim and final analyses of the primary and secondary outcomes are then outlined. Conclusions This SAP maximises transparency, providing a record of all planned analyses, and it may be a resource for those who are developing SAPs, acting as an illustrative example for teaching and methodological research. It is not the sum of the statistical analysis sections of the principal papers, being completed well before individual papers were drafted. Trial registration ISRCTN54285094 assigned 22 May 2003; First participant was randomised on 18 March 2005. PMID:24225069

  19. Comments on the iterated knapsack attack

    SciTech Connect

    Brickell, E.F.

    1983-01-01

    L. Adleman has proposed a three step method for breaking the iterated knapsack that runs in polynomial time and is linear in the number of iterations. In this paper, we show that the first step is possibly exponential in the number of iterations, and that the second and third steps are exponential even for only three iterations.

  20. Solving Upwind-Biased Discretizations: Defect-Correction Iterations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    1999-01-01

    This paper considers defect-correction solvers for a second order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in defect-correction iterations have different approximation order, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second order target operator and a first order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both the operators have the second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after three iterations at most. The same three iterations are required to converge algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which, by the way, can take into account the influence of discretized outflow boundary conditions as well) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
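
    A minimal sketch of a defect-correction iteration in the spirit of feature (2), on a 1D convection model problem with a first-order upwind driver and a second-order upwind-biased target (the discretization details and boundary closure are illustrative assumptions); for this operator pair the defect norm contracts by roughly the 0.5 factor per iteration quoted above:

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Model problem u'(x) = f(x), u(0) = 0, unknowns u(x_1)..u(x_n)
        n, h = 200, 1.0 / 200
        x = np.linspace(h, 1.0, n)
        b = np.cos(2 * np.pi * x)

        # Driver A1, first-order upwind: (u_i - u_{i-1}) / h
        A1 = sp.diags([1/h, -1/h], [0, -1], shape=(n, n), format='csc')
        # Target A2, second-order upwind-biased: (3u_i - 4u_{i-1} + u_{i-2}) / (2h)
        A2 = sp.diags([1.5/h, -2.0/h, 0.5/h], [0, -1, -2], shape=(n, n),
                      format='lil')
        A2[0, 0] = 1/h              # first-order closure in the first row
        A2 = A2.tocsc()

        solve_driver = spla.factorized(A1)   # LU of the driver, reused each sweep
        u = np.zeros(n)
        for k in range(8):
            defect = b - A2 @ u
            u += solve_driver(defect)
            print(f"iter {k+1}: ||defect|| = {np.linalg.norm(defect):.3e}")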

  1. Use of Statistics by Librarians.

    ERIC Educational Resources Information Center

    Christensen, John O.

    1988-01-01

    Description of common errors found in the statistical methodologies of research carried out by librarians, focuses on sampling and generalizability. The discussion covers the need to either adapt library research to the statistical abilities of librarians or to educate librarians in the proper use of statistics. (15 references) (CLB)

  2. On pre-image iterations for speech enhancement.

    PubMed

    Leitner, Christina; Pernkopf, Franz

    2015-01-01

    In this paper, we apply kernel PCA for speech enhancement and derive pre-image iterations. Both methods make use of a Gaussian kernel. The kernel variance serves as a tuning parameter that has to be adapted according to the SNR and the desired degree of de-noising. We develop a method to derive a suitable value for the kernel variance from a noise estimate to adapt pre-image iterations to arbitrary SNRs. In experiments, we compare the performance of kernel PCA and pre-image iterations in terms of objective speech quality measures and automatic speech recognition. The speech data is corrupted by white and colored noise at 0, 5, 10, and 15 dB SNR. As a benchmark, we provide results of the generalized subspace method, of spectral subtraction, and of the minimum mean-square error log-spectral amplitude estimator. In terms of the scores of the PEASS (Perceptual Evaluation Methods for Audio Source Separation) toolbox, the proposed methods achieve a similar performance as the reference methods. The speech recognition experiments show that the utterances processed by pre-image iterations achieve a consistently better word recognition accuracy than the unprocessed noisy utterances and than the utterances processed by the generalized subspace method. PMID:26085973
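
    The pre-image step at the core of the method can be sketched for a Gaussian kernel as follows, on a toy 2D dataset and omitting kernel centering for brevity; the data, kernel variance, and component count are illustrative assumptions, not the authors' speech setup:

        import numpy as np

        rng = np.random.default_rng(2)
        # Toy data: points on a circle; denoise a perturbed observation z0
        t = rng.uniform(0, 2 * np.pi, 200)
        X = np.column_stack([np.cos(t), np.sin(t)])
        z0 = np.array([1.4, 0.2])

        s2 = 0.5                                  # kernel variance (the tuning knob)
        def kern(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * s2))

        # Uncentered kernel PCA on X: keep the top q components
        q = 8
        K = kern(X, X)
        w, V = np.linalg.eigh(K)
        alpha = V[:, -q:] / np.sqrt(w[-q:])       # normalized expansion weights

        # Coefficients gamma of the projection P phi(z0) = sum_i gamma_i phi(x_i)
        beta = alpha.T @ kern(z0[None, :], X).ravel()
        gamma = alpha @ beta

        # Fixed-point pre-image iteration for the Gaussian kernel
        z = z0.copy()
        for _ in range(100):
            wi = gamma * kern(z[None, :], X).ravel()
            z = (wi @ X) / wi.sum()

        print("denoised:", z, " |z| =", np.linalg.norm(z))  # near radius 1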

  3. ITER plant layout and site services

    NASA Astrophysics Data System (ADS)

    Chuyanov, V. A.

    2000-03-01

    The ITER site has not yet been determined. Nevertheless, to develop a construction plan and a cost estimate, it is necessary to have a detailed layout of the buildings, structures and outdoor equipment integrated with the balance of plant service systems prototypical of large fusion power plants. These services include electrical power for magnet feeds and plasma heating systems, cryogenic and conventional cooling systems, compressed air, gas supplies, demineralized water, steam and drainage. Nuclear grade facilities are provided to handle tritium fuel and activated waste, as well as to prevent radiation exposure of workers and the public. To prevent interference between services of different types and for efficient arrangement of buildings, structures and equipment within the site area, a plan was developed which segregated different classes of services to four quadrants surrounding the tokamak building, placed at the approximate geographical centre of the site. The locations of the buildings on the generic site were selected to meet all design requirements at minimum total project cost. A similar approach was used to determine the locations of services above, at and below grade. The generic site plan can be adapted to the site selected for ITER without significant changes to the buildings or equipment. Some rearrangements may be required by site topography, resulting primarily in changes to the length of services that link the buildings and equipment.

  4. Ordinal neural networks without iterative tuning.

    PubMed

    Fernández-Navarro, Francisco; Riccardi, Annalisa; Carloni, Sante

    2014-11-01

    Ordinal regression (OR) is an important branch of supervised learning in between the multiclass classification and regression. In this paper, the traditional classification scheme of neural network is adapted to learn ordinal ranks. The model proposed imposes monotonicity constraints on the weights connecting the hidden layer with the output layer. To do so, the weights are transcribed using padding variables. This reformulation leads to the so-called inequality constrained least squares (ICLS) problem. Its numerical solution can be obtained by several iterative methods, for example, trust region or line search algorithms. In this proposal, the optimum is determined analytically according to the closed-form solution of the ICLS problem estimated from the Karush-Kuhn-Tucker conditions. Furthermore, following the guidelines of the extreme learning machine framework, the weights connecting the input and the hidden layers are randomly generated, so the final model estimates all its parameters without iterative tuning. The model proposed achieves competitive performance compared with the state-of-the-art neural networks methods for OR. PMID:25330430
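
    The extreme-learning-machine step the model builds on (random, untuned input weights plus a single closed-form least-squares solve) can be sketched as below; the monotonicity/ICLS constraints of the actual proposal are omitted, and the data and layer sizes are illustrative:

        import numpy as np

        rng = np.random.default_rng(3)
        # Toy regression-style data standing in for ordinal targets
        X = rng.normal(size=(500, 4))
        y = X.sum(axis=1) + 0.1 * rng.normal(size=500)

        n_hidden = 50
        W = rng.normal(size=(4, n_hidden))      # random input-to-hidden weights
        bias = rng.normal(size=n_hidden)        # never tuned iteratively
        H = np.tanh(X @ W + bias)               # hidden-layer activations

        beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # one closed-form solve
        print("train RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))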

  5. ITER Construction--Plant System Integration

    SciTech Connect

    Tada, E.; Matsuda, S.

    2009-02-19

    This brief paper introduces how ITER will be built through international collaboration. The ITER Organization plays a central role in constructing ITER and leading it into operation. Since most of the ITER components are to be provided in-kind by the member countries, integrated project management must be scoped out in advance of the actual work. This includes design, procurement, system assembly, testing, licensing and commissioning of ITER.

  6. Flight data processing with the F-8 adaptive algorithm

    NASA Technical Reports Server (NTRS)

    Hartmann, G.; Stein, G.; Petersen, K.

    1977-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer and surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.
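
    The parallel-filter idea can be sketched on a scalar toy system: a bank of Kalman filters at fixed parameter values accumulates innovation log-likelihoods, and the maximum picks the parameter estimate without iterative calculations (the system, grid, and noise levels are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(4)
        # Scalar toy system x_{k+1} = a*x_k + w,  y_k = x_k + v  (true a unknown)
        a_true, Q, R = 0.85, 0.05, 0.1
        x, ys = 1.0, []
        for _ in range(300):
            x = a_true * x + rng.normal(0, np.sqrt(Q))
            ys.append(x + rng.normal(0, np.sqrt(R)))

        a_grid = np.linspace(0.5, 1.0, 11)   # fixed locations in parameter space
        xh = np.zeros(len(a_grid))           # per-filter state estimates
        P = np.ones(len(a_grid))             # per-filter variances
        loglik = np.zeros(len(a_grid))
        for y in ys:
            xp = a_grid * xh                 # predict
            Pp = a_grid ** 2 * P + Q
            S = Pp + R                       # innovation variance
            nu = y - xp                      # innovation
            loglik += -0.5 * (np.log(2 * np.pi * S) + nu ** 2 / S)
            K = Pp / S                       # update
            xh = xp + K * nu
            P = (1 - K) * Pp

        print("ML parameter estimate:", a_grid[np.argmax(loglik)])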

  7. Adaptive Image Denoising by Mixture Adaptation.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms. PMID:27416593

  8. ITER Disruption Mitigation System Design

    NASA Astrophysics Data System (ADS)

    Rasmussen, David; Lyttle, M. S.; Baylor, L. R.; Carmichael, J. R.; Caughman, J. B. O.; Combs, S. K.; Ericson, N. M.; Bull-Ezell, N. D.; Fehling, D. T.; Fisher, P. W.; Foust, C. R.; Ha, T.; Meitner, S. J.; Nycz, A.; Shoulders, J. M.; Smith, S. F.; Warmack, R. J.; Coburn, J. D.; Gebhart, T. E.; Fisher, J. T.; Reed, J. R.; Younkin, T. R.

    2015-11-01

    The disruption mitigation system for ITER is under design and will require injection of up to 10 kPa-m3 of deuterium, helium, neon, or argon material for thermal mitigation and up to 100 kPa-m3 of material for suppression of runaway electrons. A hybrid unit compatible with the ITER nuclear, thermal and magnetic field environment is being developed. The unit incorporates a fast gas valve for massive gas injection (MGI) and a shattered pellet injector (SPI) to inject a massive spray of small particles, and can be operated as an SPI with a frozen pellet or an MGI without a pellet. Three ITER upper port locations will have three SPI/MGI units with a common delivery tube. One equatorial port location has space for sixteen similar SPI/MGI units. Supported by US DOE under DE-AC05-00OR22725.

  9. Error Field Correction in ITER

    SciTech Connect

    Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.

    2008-05-22

    A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.

  10. Construction Safety Forecast for ITER

    SciTech Connect

    Cadwallader, Lee Charles

    2006-11-01

    The International Thermonuclear Experimental Reactor (ITER) project is poised to begin its construction activity. This paper gives an estimate of construction safety as if the experiment was being built in the United States. This estimate of construction injuries and potential fatalities serves as a useful forecast of what can be expected for construction of such a major facility in any country. These data should be considered by the ITER International Team as it plans for safety during the construction phase. Based on average U.S. construction rates, ITER may expect a lost workday case rate of < 4.0 and a fatality count of 0.5 to 0.9 persons per year.

  11. ITER EDA design confinement capability

    NASA Astrophysics Data System (ADS)

    Uckan, N. A.

    Major device parameters for the ITER EDA and CDA are given in this paper. The ignition capability of the EDA (and CDA) operational scenarios is evaluated using both 1-1/2-D time-dependent transport simulations and 0-D global models under different confinement assumptions (critical-electron-temperature-gradient transport models, χ(∇T_e,crit); empirical global energy confinement scalings, χ_empirical; etc.). Results from some of these transport simulations and confinement assessments are summarized and compared with the ITER CDA results.

  12. ITER LHe Plants Parallel Operation

    NASA Astrophysics Data System (ADS)

    Fauve, E.; Bonneton, M.; Chalifour, M.; Chang, H.-S.; Chodimella, C.; Monneret, E.; Vincent, G.; Flavien, G.; Fabre, Y.; Grillot, D.

    The ITER Cryogenic System includes three identical liquid helium (LHe) plants, with a total average cooling capacity equivalent to 75 kW at 4.5 K. The LHe plants provide the 4.5 K cooling power to the magnets and cryopumps. They are designed to operate in parallel and to handle heavy load variations. In this proceeding we will describe the present status of the ITER LHe plants with emphasis on i) the project schedule, ii) the plants' characteristics/layout and iii) the basic principles and control strategies for a stable operation of the three LHe plants in parallel.

  13. Parallel inverse iteration with reorthogonalization

    SciTech Connect

    Fann, G.I.; Littlefield, R.J.

    1993-03-01

    A parallel method for finding orthogonal eigenvectors of real symmetric tridiagonal matrices is described. The method uses inverse iteration with repeated Modified Gram-Schmidt (MGS) reorthogonalization of the unconverged iterates for clustered eigenvalues. This approach is more parallelizable than reorthogonalizing against fully converged eigenvectors, as is done by LAPACK's current DSTEIN routine. The new method is found to provide accuracy and speed comparable to DSTEIN's and to have good parallel scalability even for matrices with large clusters of eigenvalues. We present results for residual and orthogonality tests, plus timings on IBM RS/6000 (sequential) and Intel Touchstone DELTA (parallel) computers.
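
    A simplified sequential sketch of inverse iteration with Gram-Schmidt reorthogonalization is below; note the paper reorthogonalizes the unconverged iterates of a cluster against each other in parallel, whereas this toy version orthogonalizes each new iterate against the vectors already computed:

        import numpy as np
        from scipy.linalg import eigh_tridiagonal, lu_factor, lu_solve

        rng = np.random.default_rng(5)
        n = 200
        d = rng.normal(size=n)                  # diagonal
        e = rng.normal(size=n - 1)              # off-diagonal
        A = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

        evals = eigh_tridiagonal(d, e, eigvals_only=True)
        group = evals[:5]                       # treat 5 smallest as one group

        V = []
        for lam in group:
            lu = lu_factor(A - (lam + 1e-10) * np.eye(n))  # shifted factorization
            v = rng.normal(size=n)
            for _ in range(3):                  # a few inverse-iteration steps
                v = lu_solve(lu, v)
                for u in V:                     # Gram-Schmidt vs. the group
                    v -= (u @ v) * u
                v /= np.linalg.norm(v)
            V.append(v)

        V = np.array(V)
        print("max residual:",
              max(np.linalg.norm(A @ v - l * v) for v, l in zip(V, group)))
        print("orthogonality:", np.abs(V @ V.T - np.eye(len(group))).max())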

  15. Iterated binomial sums and their associated iterated integrals

    NASA Astrophysics Data System (ADS)

    Ablinger, J.; Blümlein, J.; Raab, C. G.; Schneider, C.

    2014-11-01

    We consider finite iterated generalized harmonic sums weighted by the central binomial coefficient $\binom{2k}{k}$ in numerators and denominators. A large class of these functions emerges in the calculation of massive Feynman diagrams with local operator insertions starting at 3-loop order in the coupling constant and extends the classes of the nested harmonic, generalized harmonic, and cyclotomic sums. The binomially weighted sums are associated by the Mellin transform to iterated integrals over square-root valued alphabets. The values of the sums for N → ∞ and the iterated integrals at x = 1 lead to new constants, extending the set of special numbers given by the multiple zeta values, the cyclotomic zeta values and special constants which emerge in the limit N → ∞ of generalized harmonic sums. We develop algorithms to obtain the Mellin representations of these sums in a systematic way. They are of importance for the derivation of the asymptotic expansion of these sums and their analytic continuation to N ∈ ℂ. The associated convolution relations are derived for real parameters and can therefore be used in a wider context, as, e.g., for multi-scale processes. We also derive algorithms to transform iterated integrals over root-valued alphabets into binomial sums. Using generating functions we study a few aspects of infinite (inverse) binomial sums.

  16. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.

  17. A Pleiotropy-Informed Bayesian False Discovery Rate Adapted to a Shared Control Design Finds New Disease Associations From GWAS Summary Statistics

    PubMed Central

    Liley, James; Wallace, Chris

    2015-01-01

    Genome-wide association studies (GWAS) have been successful in identifying single nucleotide polymorphisms (SNPs) associated with many traits and diseases. However, at existing sample sizes, these variants explain only part of the estimated heritability. Leverage of GWAS results from related phenotypes may improve detection without the need for larger datasets. The Bayesian conditional false discovery rate (cFDR) constitutes an upper bound on the expected false discovery rate (FDR) across a set of SNPs whose p values for two diseases are both less than two disease-specific thresholds. Calculation of the cFDR requires only summary statistics and has several advantages over traditional GWAS analysis. However, existing methods require distinct control samples between studies. Here, we extend the technique to allow for some or all controls to be shared, increasing applicability. Several different SNP sets can be defined with the same cFDR value, and we show that the expected FDR across the union of these sets may exceed expected FDR in any single set. We describe a procedure to establish an upper bound for the expected FDR among the union of such sets of SNPs. We apply our technique to pairwise analysis of p values from ten autoimmune diseases with variable sharing of controls, enabling discovery of 59 SNP-disease associations which do not reach GWAS significance after genomic control in individual datasets. Most of the SNPs we highlight have previously been confirmed using replication studies or larger GWAS, a useful validation of our technique; we report eight SNP-disease associations across five diseases not previously declared. Our technique extends and strengthens the previous algorithm, and establishes robust limits on the expected FDR. This approach can improve SNP detection in GWAS, and give insight into shared aetiology between phenotypically related conditions. PMID:25658688

  18. Learning to improve iterative repair scheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene

    1992-01-01

    This paper presents a general learning method for dynamically selecting between repair heuristics in an iterative repair scheduling system. The system employs a version of explanation-based learning called Plausible Explanation-Based Learning (PEBL) that uses multiple examples to confirm conjectured explanations. The basic approach is to conjecture contradictions between a heuristic and statistics that measure the quality of the heuristic. When these contradictions are confirmed, a different heuristic is selected. To motivate the utility of this approach we present an empirical evaluation of the performance of a scheduling system with respect to two different repair strategies. We show that the scheduler that learns to choose between the heuristics outperforms the same scheduler with any one of two heuristics alone.

  19. Liver recognition based on statistical shape model in CT images

    NASA Astrophysics Data System (ADS)

    Xiang, Dehui; Jiang, Xueqing; Shi, Fei; Zhu, Weifang; Chen, Xinjian

    2016-03-01

    In this paper, an automatic method is proposed to recognize the liver on clinical 3D CT images. The proposed method effectively uses a statistical shape model of the liver. Our approach consists of three main parts: (1) model training, in which shape variability is detected using principal component analysis from the manual annotation; (2) model localization, in which a fast Euclidean distance transformation based method is able to localize the liver in CT images; (3) liver recognition, in which the initial mesh is locally and iteratively adapted to the liver boundary, which is constrained with the trained shape model. We validate our algorithm on a dataset which consists of 20 3D CT images obtained from different patients. The average ARVD was 8.99%, the average ASSD was 2.69 mm, the average RMSD was 4.92 mm, the average MSD was 28.841 mm, and the average MSD was 13.31%.

  20. ODE System Solver W. Krylov Iteration & Rootfinding

    SciTech Connect

    Hindmarsh, Alan C.

    1991-09-09

    LSODKR is a new initial value ODE solver for stiff and nonstiff systems. It is a variant of the LSODPK and LSODE solvers, intended mainly for large stiff systems. The main differences between LSODKR and LSODE are the following: (a) for stiff systems, LSODKR uses a corrector iteration composed of Newton iteration and one of four preconditioned Krylov subspace iteration methods. The user must supply routines for the preconditioning operations, (b) Within the corrector iteration, LSODKR does automatic switching between functional (fixpoint) iteration and modified Newton iteration, (c) LSODKR includes the ability to find roots of given functions of the solution during the integration.
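
    LSODKR itself is a Fortran solver; a loosely analogous modern workflow (an assumption for illustration, since SciPy's BDF uses direct linear solves rather than preconditioned Krylov iteration) combines a stiff integrator with rootfinding via event functions:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Robertson's stiff chemical kinetics problem with a root function.
        def robertson(t, y):
            y1, y2, y3 = y
            return [-0.04 * y1 + 1e4 * y2 * y3,
                    0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
                    3e7 * y2 ** 2]

        def half_consumed(t, y):        # root: first species reaches 1/2
            return y[0] - 0.5
        half_consumed.terminal = True   # stop the integration at the root

        sol = solve_ivp(robertson, (0.0, 1e6), [1.0, 0.0, 0.0], method='BDF',
                        events=half_consumed, rtol=1e-6, atol=1e-10)
        print("root located at t =", sol.t_events[0])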

  2. Delayed Over-Relaxation for iterative methods

    NASA Astrophysics Data System (ADS)

    Antuono, M.; Colicchio, G.

    2016-09-01

    We propose a variant of the relaxation step used in the most widespread iterative methods (e.g. Jacobi Over-Relaxation, Successive Over-Relaxation) which combines the iteration at the predicted step, namely (n + 1), with the iteration at step (n - 1). We provide a theoretical analysis of the proposed algorithm by applying such a delayed relaxation step to a generic (convergent) iterative scheme. We prove that, under proper assumptions, this significantly improves the convergence rate of the initial iterative method. As a relevant example, we apply the proposed algorithm to the solution of the Poisson equation, highlighting the advantages in comparison with classical iterative models.
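
    One plausible reading of the delayed relaxation step, sketched on Jacobi for a 1D Poisson problem; the specific combination and the weight beta are illustrative assumptions rather than the paper's derived scheme:

        import numpy as np
        import scipy.sparse as sp

        # Jacobi for 1D Poisson -u'' = f with a delayed relaxation step that
        # mixes the predicted (n+1) iterate with the (n-1) iterate.
        n = 100
        h = 1.0 / (n + 1)
        A = sp.diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(n, n)) / h ** 2
        b = np.ones(n)
        D_inv = h ** 2 / 2.0                 # inverse of Jacobi's diagonal

        beta = 1.5                           # illustrative mixing weight
        u_prev = np.zeros(n)                 # iterate (n - 1)
        u = np.zeros(n)                      # iterate (n)
        for k in range(3001):
            predicted = u + D_inv * (b - A @ u)               # plain Jacobi step
            u_prev, u = u, beta * predicted + (1 - beta) * u_prev
            if k % 1000 == 0:
                print(k, np.linalg.norm(b - A @ u))           # residual decay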

  3. Statistical-information-based performance criteria for Richardson-Lucy image deblurring.

    PubMed

    Prasad, Sudhakar

    2002-07-01

    Iterative image deconvolution algorithms generally lack objective criteria for deciding when to terminate iterations, often relying on ad hoc metrics for determining optimal performance. A statistical-information-based analysis of the popular Richardson-Lucy iterative deblurring algorithm is presented after clarification of the detailed nature of noise amplification and resolution recovery as the algorithm iterates. Monitoring the information content of the reconstructed image furnishes an alternative criterion for assessing and stopping such an iterative algorithm. It is straightforward to implement prior knowledge and other conditioning tools in this statistical approach. PMID:12095196
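
    The Richardson-Lucy iteration being analyzed is the standard multiplicative update below; the signal, PSF, and iteration count are illustrative, with the iteration count playing the role of the stopping knob the information-based criterion is meant to set:

        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(6)
        # Toy 1D scene: a plateau and a point source, blurred and Poisson-noisy
        x_true = np.zeros(256); x_true[100:110] = 1.0; x_true[180] = 3.0
        psf = np.exp(-np.linspace(-3, 3, 25) ** 2); psf /= psf.sum()
        y = rng.poisson(200 * fftconvolve(x_true, psf, mode='same')) / 200.0

        x = np.full_like(y, y.mean())             # flat positive start
        psf_flip = psf[::-1]
        for k in range(100):                      # stopping point = quality knob
            est = fftconvolve(x, psf, mode='same')
            ratio = y / np.maximum(est, 1e-12)
            x *= fftconvolve(ratio, psf_flip, mode='same')

        print("reconstruction error:", np.linalg.norm(x - x_true))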

  4. Networking Theories by Iterative Unpacking

    ERIC Educational Resources Information Center

    Koichu, Boris

    2014-01-01

    An iterative unpacking strategy consists of sequencing empirically-based theoretical developments so that at each step of theorizing one theory serves as an overarching conceptual framework, in which another theory, either existing or emerging, is embedded in order to elaborate on the chosen element(s) of the overarching theory. The strategy is…

  5. Prospects of ITER Instability Control

    NASA Astrophysics Data System (ADS)

    Kolemen, Egemen

    2015-11-01

    Prospects for real-time MHD stability analysis, plasma response calculations, and their use in ELM, NTM, RWM control and EFC will be discussed. ITER will need various controls to work together in order to achieve the stated goal of Q >= 10 for multiple minutes. These systems will allow operating at high beta while avoiding disruptions that may lead to damage to the reactor. However, it has not yet been demonstrated whether the combined real-time feedback control aim is feasible given the spectrum of plasma instabilities, the quality of the real-time diagnostic measurement/analysis, and the actuator set at ITER. We will explain challenges of instability control for ITER based on experimental and simulation results. We will demonstrate that it will not be possible to parameterize all possible disruption avoidance and ramp down scenarios that ITER may encounter. An alternative approach based on real-time MHD stability analysis and plasma response calculations, and its use in ELM, NTM, RWM control and EFC, will be demonstrated. Supported by the US DOE under DE-AC02-09CH11466.

  6. Energetic ions in ITER plasmas

    SciTech Connect

    Pinches, S. D.; Chapman, I. T.; Sharapov, S. E.; Lauber, Ph. W.; Oliver, H. J. C.; Shinohara, K.; Tani, K.

    2015-02-15

    This paper discusses the behaviour and consequences of the expected populations of energetic ions in ITER plasmas. It begins with a careful analytic and numerical consideration of the stability of Alfvén Eigenmodes in the ITER 15 MA baseline scenario. The stability threshold is determined by balancing the energetic ion drive against the dominant damping mechanisms and it is found that only in the outer half of the plasma (r/a>0.5) can the fast ions overcome the thermal ion Landau damping. This is in spite of the reduced numbers of alpha-particles and beam ions in this region but means that any Alfvén Eigenmode-induced redistribution is not expected to influence the fusion burn process. The influence of energetic ions upon the main global MHD phenomena expected in ITER's primary operating scenarios, including sawteeth, neoclassical tearing modes and Resistive Wall Modes, is also reviewed. Fast ion losses due to the non-axisymmetric fields arising from the finite number of toroidal field coils, the inclusion of ferromagnetic inserts, the presence of test blanket modules containing ferromagnetic material, and the fields created by the Edge Localised Mode (ELM) control coils in ITER are discussed. The greatest losses and associated heat loads onto the plasma facing components arise due to the use of the ELM control coils and come from neutral beam ions that are ionised in the plasma edge.

  7. Neural Network Aided Adaptive Extended Kalman Filtering Approach for DGPS Positioning

    NASA Astrophysics Data System (ADS)

    Jwo, Dah-Jing; Huang, Hung-Chih

    2004-09-01

    The extended Kalman filter, when employed in the GPS receiver as the navigation state estimator, provides optimal solutions if the noise statistics for the measurement and system are completely known. In practice, the noise varies with time, which results in performance degradation. The covariance matching method is a conventional adaptive approach for estimation of noise covariance matrices. The technique attempts to make the actual filter residuals consistent with their theoretical covariance. However, this innovation-based adaptive estimation shows very noisy results if the window size is small. To resolve the problem, a multilayered neural network is trained to identify the measurement noise covariance matrix, in which the back-propagation algorithm is employed to iteratively adjust the link weights using the steepest descent technique. Numerical simulations show that based on the proposed approach the adaptation performance is substantially enhanced and the positioning accuracy is substantially improved.
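
    The conventional covariance-matching baseline the abstract contrasts with its neural-network estimator can be sketched for a scalar filter as follows (system parameters and window size are illustrative); the noisiness of the estimate for small windows is what motivates the network:

        import numpy as np

        rng = np.random.default_rng(7)
        R_true, Q, a = 0.4, 0.05, 0.95
        x, meas = 0.0, []
        for _ in range(2000):
            x = a * x + rng.normal(0, np.sqrt(Q))
            meas.append(x + rng.normal(0, np.sqrt(R_true)))

        N = 30                              # moving window size
        xh, P, R = 0.0, 1.0, 1.0            # initial R guess, adapted online
        innovations, R_hat = [], []
        for y in meas:
            xp, Pp = a * xh, a * a * P + Q
            nu = y - xp
            innovations.append(nu)
            if len(innovations) >= N:
                C = np.mean(np.array(innovations[-N:]) ** 2)  # innovation cov.
                R = max(C - Pp, 1e-6)       # matching: C ~ H P H' + R
                R_hat.append(R)
            K = Pp / (Pp + R)
            xh = xp + K * nu
            P = (1 - K) * Pp

        print("estimated R:", np.mean(R_hat[-500:]), " true R:", R_true)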

  8. A simple and flexible graphical approach for adaptive group-sequential clinical trials.

    PubMed

    Sugitani, Toshifumi; Bretz, Frank; Maurer, Willi

    2016-01-01

    In this article, we introduce a graphical approach to testing multiple hypotheses in group-sequential clinical trials allowing for midterm design modifications. It is intended for structured study objectives in adaptive clinical trials and extends the graphical group-sequential designs from Maurer and Bretz (Statistics in Biopharmaceutical Research 2013; 5: 311-320) to adaptive trial designs. The resulting test strategies can be visualized graphically and performed iteratively. We illustrate the methodology with two examples from our clinical trial practice. First, we consider a three-armed gold-standard trial with the option to reallocate patients to either the test drug or the active control group, while stopping the recruitment of patients to placebo, after having demonstrated superiority of the test drug over placebo at an interim analysis. Second, we consider a confirmatory two-stage adaptive design with treatment selection at interim. PMID:25372071

  9. Descriptive statistics.

    PubMed

    Shi, Runhua; McLarty, Jerry W

    2009-10-01

    In this article, we introduced basic concepts of statistics, type of distributions, and descriptive statistics. A few examples were also provided. The basic concepts presented herein are only a fraction of the concepts related to descriptive statistics. Also, there are many commonly used distributions not presented herein, such as Poisson distributions for rare events and exponential distributions, F distributions, and logistic distributions. More information can be found in many statistics books and publications. PMID:19891281

  10. Statistical Diversions

    ERIC Educational Resources Information Center

    Petocz, Peter; Sowey, Eric

    2008-01-01

    As a branch of knowledge, Statistics is ubiquitous and its applications can be found in (almost) every field of human endeavour. In this article, the authors track down the possible source of the link between the "Siren song" and applications of Statistics. Answers to their previous five questions and five new questions on Statistics are presented.

  11. Statistical Software.

    ERIC Educational Resources Information Center

    Callamaras, Peter

    1983-01-01

    This buyer's guide to seven major types of statistics software packages for microcomputers reviews Edu-Ware Statistics 3.0; Financial Planning; Speed Stat; Statistics with DAISY; Human Systems Dynamics package of Stats Plus, ANOVA II, and REGRESS II; Maxistat; and Moore-Barnes' MBC Test Construction and MBC Correlation. (MBR)

  12. Bayesian Statistics.

    ERIC Educational Resources Information Center

    Meyer, Donald L.

    Bayesian statistical methodology and its possible uses in the behavioral sciences are discussed in relation to the solution of problems in both the use and teaching of fundamental statistical methods, including confidence intervals, significance tests, and sampling. The Bayesian model explains these statistical methods and offers a consistent…

  13. An Iterative Reweighted Method for Tucker Decomposition of Incomplete Tensors

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Li, Hongbin; Zeng, Bing

    2016-09-01

    We consider the problem of low-rank decomposition of incomplete multiway tensors. Since many real-world data lie on an intrinsically low dimensional subspace, tensor low-rank decomposition with missing entries has applications in many data analysis problems such as recommender systems and image inpainting. In this paper, we focus on Tucker decomposition which represents an Nth-order tensor in terms of N factor matrices and a core tensor via multilinear operations. To exploit the underlying multilinear low-rank structure in high-dimensional datasets, we propose a group-based log-sum penalty functional to place structural sparsity over the core tensor, which leads to a compact representation with the smallest core tensor. The method for Tucker decomposition is developed by iteratively minimizing a surrogate function that majorizes the original objective function, which results in an iterative reweighted process. In addition, to reduce the computational complexity, an over-relaxed monotone fast iterative shrinkage-thresholding technique is adapted and embedded in the iterative reweighted process. The proposed method is able to determine the model complexity (i.e. multilinear rank) in an automatic way. Simulation results show that the proposed algorithm offers competitive performance compared with other existing algorithms.
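
    The core of the iterative reweighting can be illustrated apart from the tensor machinery: majorize a log-sum penalty at the current iterate and solve the resulting weighted ridge problem, repeating until the weights stabilize (shown on a plain sparse linear model with illustrative sizes, not the full Tucker setting):

        import numpy as np

        rng = np.random.default_rng(8)
        n, m, k = 100, 40, 8
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
        A = rng.normal(size=(m, n))
        b = A @ x_true

        lam, eps = 1e-3, 1e-6
        x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-norm start
        for it in range(30):
            w = 1.0 / (x ** 2 + eps)               # MM weights for sum log(x^2+eps)
            # weighted ridge: min ||Ax - b||^2 + lam * sum w_i x_i^2
            x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)

        print("relative error:",
              np.linalg.norm(x - x_true) / np.linalg.norm(x_true))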

  14. Multimodal and Adaptive Learning Management: An Iterative Design

    ERIC Educational Resources Information Center

    Squires, David R.; Orey, Michael A.

    2015-01-01

    The purpose of this study is to measure the outcome of a comprehensive learning management system implemented at a Spinal Cord Injury (SCI) hospital in the Southeast United States. Specifically this SCI hospital has been experiencing an evident volume of patients returning seeking more information about the nature of their injuries. Recognizing…

  15. Correctness properties for iterated hardware structures

    NASA Technical Reports Server (NTRS)

    Windley, Phillip J.

    1993-01-01

    Iterated structures occur frequently in hardware. This paper describes properties required of mathematical relations that can be implemented iteratively and demonstrates the use of these properties on a generalized class of adders. This work provides a theoretical basis for the correct synthesis of iterated arithmetic structures.

  16. Bioinspired iterative synthesis of polyketides

    PubMed Central

    Zheng, Kuan; Xie, Changmin; Hong, Ran

    2015-01-01

    A diverse array of biopolymers and secondary metabolites (particularly polyketide natural products) has been manufactured in nature through an enzymatic iterative assembly of simple building blocks. Inspired by this strategy, molecules with inherent modularity can be efficiently synthesized by repeated succession of similar reaction sequences. This privileged strategy has been widely adopted in synthetic supramolecular chemistry. Its value has also been recognized in natural product synthesis. A brief overview of this approach is given with a particular emphasis on the total synthesis of polyol-embedded polyketides, a class of vastly diverse structures and biologically significant natural products. This viewpoint also illustrates the limits of known individual modules in terms of diastereoselectivity and enantioselectivity. More efficient and practical iterative strategies are anticipated to emerge in future development. PMID:26052510

  17. Projection Classification Based Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ruiqiu; Li, Chen; Gao, Wenhua

    2015-05-01

    Iterative algorithms perform well in the 3D image reconstruction area because they do not need complete projection data. They can be applied to the inspection of BGA-based solder joints, but with low convergence speed, and they are usually used with x-ray laminography, which yields worse reconstructed images than conventional tomography. This paper explores a projection classification based method which tries to separate the object into three parts, i.e. solute, solution and air, and supposes that the reconstruction speed decreases linearly from the solution to the two other parts on both sides. The SART and CAV algorithms are then improved under the proposed idea. Simulation experiment results with incomplete projection images indicate the fast convergence speed of the improved iterative algorithms and the effectiveness of the proposed method; the fewer the projection images, the greater the superiority.
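
    For reference, the baseline simultaneous SART update that the proposed classification idea accelerates can be sketched as follows (the toy system matrix and relaxation factor are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(9)
        # Toy system: 40 projections of a 64-pixel object
        n_pix, n_proj = 64, 40
        x_true = np.zeros(n_pix); x_true[20:40] = 1.0
        A = rng.uniform(0, 1, (n_proj, n_pix))   # stand-in system matrix
        b = A @ x_true

        row_sum = A.sum(axis=1)                  # per-ray normalization
        col_sum = A.sum(axis=0)                  # per-pixel normalization
        x, lam = np.zeros(n_pix), 0.5
        for k in range(200):
            resid = (b - A @ x) / row_sum
            x += lam * (A.T @ resid) / col_sum   # SART step

        print("residual:", np.linalg.norm(b - A @ x))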

  18. Overhead Image Statistics

    SciTech Connect

    Vijayaraj, Veeraraghavan; Cheriyadat, Anil M; Bhaduri, Budhendra L; Vatsavai, Raju; Bright, Eddie A

    2008-01-01

    Statistical properties of high-resolution overhead images representing different land use categories are analyzed using various local and global statistical image properties based on the shape of the power spectrum, image gradient distributions, edge co-occurrence, and inter-scale wavelet coefficient distributions. The analysis was performed on a database of high-resolution (1 meter) overhead images representing a multitude of different downtown, suburban, commercial, agricultural and wooded exemplars. Various statistical properties relating to these image categories and their relationship are discussed. The categorical variations in power spectrum contour shapes, the unique gradient distribution characteristics of wooded categories, the similarity in edge co-occurrence statistics for overhead and natural images, and the unique edge co-occurrence statistics of downtown categories are presented in this work. Though previous work on natural image statistics has shown some of the unique characteristics for different categories, the relationships for overhead images are not well understood. The statistical properties of natural images were used in previous studies to develop prior image models, to predict and index objects in a scene and to improve computer vision models. Our research findings can be used to augment and adapt computer vision algorithms that rely on prior image statistics to process overhead images, calibrate the performance of overhead image analysis algorithms, and derive features for better discrimination of overhead image categories.

  19. US ITER limiter module design

    SciTech Connect

    Mattas, R.F.; Billone, M.; Hassanein, A.

    1996-08-01

    The recent U.S. effort on the ITER (International Thermonuclear Experimental Reactor) shield has been focused on the limiter module design. This is a multi-disciplinary effort that covers design layout, fabrication, thermal hydraulics, materials evaluation, thermo-mechanical response, and predicted response during off-normal events. The results of design analyses are presented. Conclusions and recommendations are also presented concerning the capability of the limiter modules to meet performance goals and to be fabricated within design specifications using existing technology.

  20. ITER Plasma Control System Development

    NASA Astrophysics Data System (ADS)

    Snipes, Joseph; ITER PCS Design Team

    2015-11-01

    The development of the ITER Plasma Control System (PCS) continues with the preliminary design phase for 1st plasma and early plasma operation in H/He up to Ip = 15 MA in L-mode. The design is being developed through a contract between the ITER Organization and a consortium of plasma control experts from EU and US fusion laboratories, which is expected to be completed in time for a design review at the end of 2016. This design phase concentrates on breakdown including early ECH power and magnetic control of the poloidal field null, plasma current, shape, and position. Basic kinetic control of the heating (ECH, ICH, NBI) and fueling systems is also included. Disruption prediction, mitigation, and maintaining stable operation are also included because of the high magnetic and kinetic stored energy present already for early plasma operation. Support functions for error field topology and equilibrium reconstruction are also required. All of the control functions also must be integrated into an architecture that will be capable of the required complexity of all ITER scenarios. A database is also being developed to collect and manage PCS functional requirements from operational scenarios that were defined in the Conceptual Design with links to proposed event handling strategies and control algorithms for initial basic control functions. A brief status of the PCS development will be presented together with a proposed schedule for design phases up to DT operation.

  1. ITER EDA Newsletter. Volume 3, no. 2

    NASA Astrophysics Data System (ADS)

    1994-02-01

    This issue of the ITER EDA (Engineering Design Activities) Newsletter contains reports on the Fifth ITER Council Meeting held in Garching, Germany, January 27-28, 1994, a visit (January 28, 1994) of an international group of Harvard Fellows to the San Diego Joint Work Site, the Inauguration Ceremony of the EC-hosted ITER joint work site in Garching (January 28, 1994), on an ITER Technical Meeting on Assembly and Maintenance held in Garching, Germany, January 19-26, 1994, and a report on a Technical Committee Meeting on radiation effects on in-vessel components held in Garching, Germany, November 15-19, 1993, as well as an ITER Status Report.

  2. Iterative methods for mixed finite element equations

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.

    1985-01-01

    Iterative strategies for the solution of indefinite systems of equations arising from the mixed finite element method are investigated in this paper with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented. These are: constant metric iterations, which do not involve updating the preconditioner; and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.

  3. Arc detection for the ICRF system on ITER

    NASA Astrophysics Data System (ADS)

    D'Inca, R.

    2011-12-01

    The ICRF system for ITER is designed to respect the high voltage breakdown limits. However, arcs can still statistically happen and must be quickly detected and suppressed by shutting the RF power down. For the conception of a reliable and efficient detector, the analysis of the mechanism of arcs is necessary to find their unique signature. Numerous systems have been conceived to address the issues of arc detection: VSWR-based detectors, RF noise detectors, sound detectors, optical detectors, and S-matrix based detectors. Until now, none of them has succeeded in demonstrating the fulfillment of all requirements, and the studies for ITER now follow three directions: improvement of the existing concepts to fix their flaws, development of new theoretically fully compliant detectors (like the GUIDAR), and combination of several detectors to benefit from the advantages of each of them. Together with the physical and engineering challenges, the development of an arc detection system for ITER raises methodological concerns about extrapolating the results from basic experiments and present machines to the ITER-scale ICRF system and about conducting a relevant risk analysis.

  4. ITER on the road to fusion energy

    NASA Astrophysics Data System (ADS)

    Ikeda, Kaname

    2010-01-01

    On 21 November 2006, the government representatives of China, the European Union, India, Japan, Korea, Russia and the United States firmly committed to building the International Thermonuclear Experimental Reactor (ITER) [1] by signing the ITER Agreement. The ITER Organization, which was formally established on 24 October 2007 after ratification of the ITER Agreement in each Member country, is the outcome of a two-decade-long collaborative effort aimed at demonstrating the scientific and technical feasibility of fusion energy. Each ITER partner has established a Domestic Agency (DA) for the construction of ITER, and the ITER Organization, based in Cadarache, in Southern France, is growing at a steady pace. The total number of staff reached 398 people from more than 20 nations by the end of September 2009. ITER will be built largely (90%) through in-kind contribution by the seven Members. On site, the levelling of the 40 ha platform has been completed. The roadworks necessary for delivering the ITER components from Fos harbour, close to Marseille, to the site are in the final stage of completion. With the aim of obtaining First Plasma in 2018, a new reference schedule has been developed by the ITER Organization and the DAs. Rapid attainment of the ITER goals is critical to accelerate fusion development—a crucial issue today in a world of increasing competition for scarce resources.

  5. Statistical x-ray computed tomography imaging from photon-starved measurements

    NASA Astrophysics Data System (ADS)

    Chang, Zhiqian; Zhang, Ruoqiao; Thibault, Jean-Baptiste; Sauer, Ken; Bouman, Charles

    2013-03-01

    Dose reduction in clinical X-ray computed tomography (CT) causes low signal-to-noise ratio (SNR) in photon-sparse situations. Statistical iterative reconstruction algorithms have the advantage of retaining image quality while reducing input dosage, but they meet their limits of practicality when significant portions of the sinogram approach photon starvation. Corruption by electronic noise leads to measured photon counts taking on negative values, posing a problem for the log() operation in preprocessing of the data. In this paper, we propose two categories of projection correction methods: an adaptive denoising filter and Bayesian inference. The denoising filter is easy to implement and preserves local statistics, but it introduces correlation between channels and may affect image resolution. Bayesian inference is a point-wise estimation based on measurements and prior information. Both approaches help improve diagnostic image quality at dramatically reduced dosage.
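
    A point-wise Bayesian correction of the kind described can be sketched as follows: treat the recorded value as a true count corrupted by Gaussian electronic noise and report the posterior mean, which is strictly positive and therefore safe to log; the noise level and the Poisson prior mean are illustrative assumptions:

        import numpy as np
        from scipy.stats import norm, poisson

        sigma_e = 2.0                 # electronic noise std (assumed)
        prior_mean = 3.0              # expected counts in this channel (assumed)
        n_grid = np.arange(0, 60)     # support of the true photon count

        def posterior_mean_counts(m):
            like = norm.pdf(m, loc=n_grid, scale=sigma_e)   # p(m | n)
            prior = poisson.pmf(n_grid, prior_mean)         # p(n)
            w = like * prior
            return (n_grid * w).sum() / w.sum()             # E[n | m] > 0

        for m in [-3.0, 0.0, 2.5, 10.0]:
            n_hat = posterior_mean_counts(m)
            print(f"measured {m:5.1f} -> corrected {n_hat:.2f}, "
                  f"log ok: {np.log(n_hat):.2f}")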

  6. Adaptive WMMR filters for edge enhancement

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Longbotham, Harold G.

    1993-05-01

    In this paper, an adaptive WMMR filter is introduced, which adaptively changes its window size to accommodate edge width variations. We prove that, for any given one-dimensional input signal, iterative application of the adaptive WMMR filter converges to fixed points that are PICO (piecewise constant). An application of the filters to one-D data (non-PICO) and to images of printed circuit boards is then provided. Application to images in general is discussed.

  7. Statistical databases

    SciTech Connect

    Kogalovskii, M.R.

    1995-03-01

    This paper presents a review of problems related to statistical database systems, which are wide-spread in various fields of activity. Statistical databases (SDB) are referred to as databases that consist of data and are used for statistical analysis. Topics under consideration are: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored data compression techniques, and statistical data representation means. Also examined is whether the present Database Management Systems (DBMS) satisfy the SDB requirements. Some actual research directions in SDB systems are considered.

  8. Morbidity statistics

    PubMed Central

    Smith, Alwyn

    1969-01-01

    This paper is based on an analysis of questionnaires sent to the health ministries of Member States of WHO asking for information about the extent, nature, and scope of morbidity statistical information. It is clear that most countries collect some statistics of morbidity and many countries collect extensive data. However, few countries relate their collection to the needs of health administrators for information, and many countries collect statistics principally for publication in annual volumes which may appear anything up to 3 years after the year to which they refer. The desiderata of morbidity statistics may be summarized as reliability, representativeness, and relevance to current health problems. PMID:5306722

  9. Statistical Diversions

    ERIC Educational Resources Information Center

    Petocz, Peter; Sowey, Eric

    2008-01-01

    In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…

  10. Adaptive management of watersheds and related resources

    USGS Publications Warehouse

    Williams, Byron K.

    2009-01-01

    The concept of learning about natural resources through the practice of management has been around for several decades and by now is associated with the term adaptive management. The objectives of this paper are to offer a framework for adaptive management that includes an operational definition, a description of conditions in which it can be usefully applied, and a systematic approach to its application. Adaptive decisionmaking is described as iterative, learning-based management in two phases, each with its own mechanisms for feedback and adaptation. The linkages between traditional experimental science and adaptive management are discussed.

  11. The physics role of ITER

    SciTech Connect

    Rutherford, P.H.

    1997-04-01

    Experimental research on the International Thermonuclear Experimental Reactor (ITER) will go far beyond what is possible on present-day tokamaks to address new and challenging issues in the physics of reactor-like plasmas. First and foremost, experiments in ITER will explore the physics issues of burning plasmas--plasmas that are dominantly self-heated by alpha-particles created by the fusion reactions themselves. Such issues will include (i) new plasma-physical effects introduced by the presence within the plasma of an intense population of energetic alpha particles; (ii) the physics of magnetic confinement for a burning plasma, which will involve a complex interplay of transport, stability and an internal self-generated heat source; and (iii) the physics of very-long-pulse/steady-state burning plasmas, in which much of the plasma current is also self-generated and which will require effective control of plasma purity and plasma-wall interactions. Achieving and sustaining burning plasma regimes in a tokamak necessarily requires plasmas that are larger than those in present experiments and have higher energy content and power flow, as well as much longer pulse length. Accordingly, the experimental program on ITER will embrace the study of issues of plasma physics and plasma-materials interactions that are specific to a reactor-scale fusion experiment. Such issues will include (i) confinement physics for a tokamak in which, for the first time, the core-plasma and the edge-plasma are simultaneously in a reactor-like regime; (ii) phenomena arising during plasma transients, including so-called disruptions, in regimes of high plasma current and thermal energy; and (iii) physics of a radiative divertor designed for handling high power flow for long pulses, including novel plasma and atomic-physics effects as well as materials science of surfaces subject to intense plasma interaction. Experiments on ITER will be conducted by researchers in control rooms situated at major

  12. Iterates of maps with symmetry

    NASA Technical Reports Server (NTRS)

    Chossat, Pascal; Golubitsky, Martin

    1988-01-01

    Fixed-point bifurcation, period doubling, and Hopf bifurcation (HB) for iterates of equivariant mappings are investigated analytically, with a focus on HB in the presence of symmetry. An algebraic formulation for the hypotheses of the theorem of Ruelle (1973) is derived, and the case of standing waves in a system of ordinary differential equations with O(2) symmetry is considered in detail. In this case, it is shown that HB can lead directly to motion on an invariant 3-torus, with an unexpected third frequency due to drift of standing waves along the torus.

  13. Iterative Sparse Approximation of the Gravitational Potential

    NASA Astrophysics Data System (ADS)

    Telschow, R.

    2012-04-01

    In recent applications in the approximation of gravitational potential fields, several new challenges arise. We are concerned with a huge quantity of data (e.g., in the case of the Earth) or strongly irregularly distributed data points (e.g., in the case of the Juno mission to Jupiter), and both of these problems bring the established approximation methods to their limits. Our novel method, a matching pursuit, iteratively chooses a best basis out of a large redundant family of trial functions to reconstruct the signal. It is independent of the data points, which makes it possible to take into account a much larger amount of data and, furthermore, to handle irregularly distributed data, since the algorithm is able to combine arbitrary spherical basis functions, i.e., global as well as local trial functions. This additionally results in a solution which is sparse in the sense that it features more basis functions where the signal has a higher local detail density. Summarizing, we obtain a method which reconstructs large quantities of data with a preferably low number of basis functions, combining global as well as several localizing functions into a sparse basis, and a solution which is locally adapted to the data density as well as to the detail density of the signal.
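
    The core of such a matching pursuit is a simple greedy loop. Below is a generic sketch (assuming a dictionary matrix D with unit-norm columns), not the authors' spherical-basis implementation:

    ```python
    import numpy as np

    def matching_pursuit(y, D, n_iter=50, tol=1e-6):
        """Plain matching pursuit over a redundant dictionary D whose
        columns are unit-norm trial functions. A generic sketch of the
        algorithm class the abstract describes."""
        residual = y.astype(float).copy()
        coeffs = np.zeros(D.shape[1])
        for _ in range(n_iter):
            inner = D.T @ residual             # correlate residual with atoms
            k = int(np.argmax(np.abs(inner)))  # greedily pick the best atom
            coeffs[k] += inner[k]
            residual -= inner[k] * D[:, k]
            if np.linalg.norm(residual) < tol:
                break
        return coeffs, residual
    ```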

  14. Iterative LQG Controller Design Through Closed-Loop Identification

    NASA Technical Reports Server (NTRS)

    Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.

    1996-01-01

    This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed in the previous cycle. The identified open-loop model is then used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.
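
    A single redesign step of such a cycle can be sketched with standard Riccati solvers. The helper below is hypothetical and omits the closed-loop identification step, which would supply the model (A, B, C) and the noise covariances from test data:

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    def lqg_redesign(A, B, C, Q, R, W, V):
        """Controller-redesign step of one cycle: from the currently
        identified discrete-time model (A, B, C) and noise covariances
        (W, V), compute the LQR state-feedback gain K and the
        steady-state Kalman gain L. Names are illustrative only."""
        P = solve_discrete_are(A, B, Q, R)        # control Riccati equation
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        S = solve_discrete_are(A.T, C.T, W, V)    # filter Riccati equation
        L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)
        return K, L
    ```

    Each cycle would then close the loop with (K, L), collect new input/output data, re-identify the model, and repeat until the gains converge.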

  15. Iterative reconstruction methods for high-throughput PET tomographs.

    PubMed

    Hamill, James; Bruckbauer, Thomas

    2002-08-01

    A fast iterative method is described for processing clinical PET scans acquired in three dimensions, that is, with no inter-plane septa, using standard computers to replace dedicated processors used until the late 1990s. The method is based on sinogram resampling, Fourier rebinning, Monte Carlo scatter simulation and iterative reconstruction using the attenuation-weighted OSEM method and a projector based on a Gaussian pixel model. Resampling of measured sinogram values occurs before Fourier rebinning, to minimize parallax and geometric distortions due to the circular geometry, and also to reduce the size of the sinogram. We analyse the geometrical and statistical effects of resampling, showing that the lines of response are positioned correctly and that resampling is equivalent to about 4 mm of post-reconstruction filtering. We also present phantom and patient results. In this approach, multi-bed clinical oncology scans can be ready for diagnosis within minutes. PMID:12200928
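
    For orientation, the attenuation-weighted OSEM update used here reduces, without weighting and with a single subset, to the classical MLEM multiplicative update, sketched below. This is a toy version; the paper's resampling, Fourier rebinning, and scatter-simulation steps are omitted.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=20):
        """Classical MLEM update for emission tomography, with system
        matrix A and measured sinogram y. OSEM applies the same update
        over ordered subsets of the rows of A."""
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])       # sensitivity image, A^T 1
        for _ in range(n_iter):
            proj = np.maximum(A @ x, 1e-12)    # forward projection, guarded
            x *= (A.T @ (y / proj)) / sens
        return x
    ```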

  16. Statistics Clinic

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  17. Experimental Evidence on Iterated Reasoning in Games

    PubMed Central

    Grehl, Sascha; Tutić, Andreas

    2015-01-01

    We present experimental evidence on two forms of iterated reasoning in games, i.e. backward induction and interactive knowledge. Besides reliable estimates of the cognitive skills of the subjects, our design allows us to disentangle two possible explanations for the observed limits in performed iterated reasoning: Restrictions in subjects’ cognitive abilities and their beliefs concerning the rationality of co-players. In comparison to previous literature, our estimates regarding subjects’ skills in iterated reasoning are quite pessimistic. Also, we find that beliefs concerning the rationality of co-players are completely irrelevant in explaining the observed limited amount of iterated reasoning in the dirty faces game. In addition, it is demonstrated that skills in backward induction are a solid predictor for skills in iterated knowledge, which points to some generalized ability of the subjects in iterated reasoning. PMID:26312486

  18. ITER Port Interspace Pressure Calculations

    SciTech Connect

    Carbajo, Juan J; Van Hove, Walter A

    2016-01-01

    The ITER Vacuum Vessel (VV) is equipped with 54 access ports. Each of these ports has an opening in the bioshield that communicates with a dedicated port cell. During Tokamak operation, the bioshield opening must be closed with a concrete plug to shield the radiation coming from the plasma. This port plug divides the space into a Port Interspace (between the VV closure lid and the port plug) on the inner side and the Port Cell on the outer side. This paper presents calculations of pressures and temperatures in the ITER (Ref. 1) Port Interspace after a double-ended guillotine break (DEGB) of a pipe of the Tokamak Cooling Water System (TCWS) carrying high-temperature water. It is assumed that this DEGB occurs under the worst possible conditions, namely during water baking operation, with water at a temperature of 523 K (250 C) and a pressure of 4.4 MPa. These conditions are more severe than those during normal Tokamak operation, with the water at 398 K (125 C) and 2 MPa. Two computer codes are employed in these calculations: RELAP5-3D Version 4.2.1 (Ref. 2) to calculate the blowdown releases from the pipe break, and MELCOR Version 1.8.6 (Ref. 3) to calculate the pressures and temperatures in the Port Interspace. A sensitivity study has been performed to optimize some flow areas.

  19. Challenges for Cryogenics at Iter

    NASA Astrophysics Data System (ADS)

    Serio, L.

    2010-04-01

    Nuclear fusion of light nuclei is a promising option to provide clean, safe and cost competitive energy in the future. The ITER experimental reactor being designed by seven partners representing more than half of the world population will be assembled at Cadarache, South of France in the next decade. It is a thermonuclear fusion Tokamak that requires high magnetic fields to confine and stabilize the plasma. Cryogenic technology is extensively employed to achieve low-temperature conditions for the magnet and vacuum pumping systems. Efficient and reliable continuous operation shall be achieved despite unprecedented dynamic heat loads due to magnetic field variations and neutron production from the fusion reaction. Constraints and requirements of the largest superconducting Tokamak machine have been analyzed. Safety and technical risks have been initially assessed and proposals to mitigate the consequences analyzed. Industrial standards and components are being investigated to anticipate the requirements of reliable and efficient large scale energy production. After describing the basic features of ITER and its cryogenic system, we shall present the key design requirements, improvements, optimizations and challenges.

  20. Status of US ITER Diagnostics

    NASA Astrophysics Data System (ADS)

    Stratton, B.; Delgado-Aparicio, L.; Hill, K.; Johnson, D.; Pablant, N.; Barnsley, R.; Bertschinger, G.; de Bock, M. F. M.; Reichle, R.; Udintsev, V. S.; Watts, C.; Austin, M.; Phillips, P.; Beiersdorfer, P.; Biewer, T. M.; Hanson, G.; Klepper, C. C.; Carlstrom, T.; van Zeeland, M. A.; Brower, D.; Doyle, E.; Peebles, A.; Ellis, R.; Levinton, F.; Yuh, H.

    2013-10-01

    The US is providing 7 diagnostics to ITER: the Upper Visible/IR cameras, the Low Field Side Reflectometer, the Motional Stark Effect diagnostic, the Electron Cyclotron Emission diagnostic, the Toroidal Interferometer/Polarimeter, the Core Imaging X-Ray Spectrometer, and the Diagnostic Residual Gas Analyzer. The front-end components of these systems must operate with high reliability in conditions of long pulse operation, high neutron and gamma fluxes, very high neutron fluence, significant neutron heating (up to 7 MW/m3), large radiant and charge exchange heat flux (0.35 MW/m2), and high electromagnetic loads. Opportunities for repair and maintenance of these components will be limited. These conditions lead to significant challenges for the design of the diagnostics. Space constraints, provision of adequate radiation shielding, and development of repair and maintenance strategies are challenges for diagnostic integration into the port plugs that also affect diagnostic design. The current status of design of the US ITER diagnostics is presented and R&D needs are identified. Supported by DOE contracts DE-AC02-09CH11466 (PPPL) and DE-AC05-00OR22725 (UT-Battelle, LLC).

  1. ETR/ITER systems code

    SciTech Connect

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  2. Preconditioned iterations to calculate extreme eigenvalues

    SciTech Connect

    Brand, C.W.; Petrova, S.

    1994-12-31

    Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
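
    For reference, the baseline against which such preconditioned schemes are measured is the plain power iteration, sketched below; its convergence degrades exactly when the extreme eigenvalue is poorly separated, which is the situation preconditioning targets.

    ```python
    import numpy as np

    def power_iteration(A, n_iter=200, tol=1e-10):
        """Power iteration for the dominant eigenpair of A. Convergence
        rate is governed by the eigenvalue ratio |lambda_2/lambda_1|,
        i.e., by the separation the abstract refers to."""
        x = np.random.default_rng(0).standard_normal(A.shape[0])
        x /= np.linalg.norm(x)
        lam = 0.0
        for _ in range(n_iter):
            y = A @ x
            lam_new = x @ y                # Rayleigh quotient estimate
            x = y / np.linalg.norm(y)
            if abs(lam_new - lam) < tol:
                break
            lam = lam_new
        return lam, x
    ```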

  3. SEER Statistics

    Cancer.gov

    The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute works to provide information on cancer statistics in an effort to reduce the burden of cancer among the U.S. population.

  4. Cancer Statistics

    MedlinePlus

    ... cancer statistics across the world. U.S. Cancer Mortality Trends The best indicator of progress against cancer is ... the number of cancer survivors has increased. These trends show that progress is being made against the ...

  5. Statistical Physics

    NASA Astrophysics Data System (ADS)

    Hermann, Claudine

    Statistical physics bridges the properties of a macroscopic system and the microscopic behavior of its constituent particles, a connection that would otherwise be impossible to establish because of the enormous magnitude of Avogadro's number. Numerous systems of today's key technologies - such as semiconductors or lasers - are macroscopic quantum objects; only statistical physics allows for understanding their fundamentals. Therefore, this graduate text also focuses on particular applications, such as the properties of electrons in solids, radiation thermodynamics, and the greenhouse effect.

  6. Performance evaluation of iterative reconstruction algorithms for achieving CT radiation dose reduction - a phantom study.

    PubMed

    Dodge, Cristina T; Tamm, Eric P; Cody, Dianna D; Liu, Xinming; Jensen, Corey T; Wei, Wei; Kundra, Vikas; Rong, John

    2016-01-01

    The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR), over a range of typical to low-dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat-equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back-projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low-contrast detectability were evaluated from noise and contrast-to-noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five-fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high-contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR
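
    The noise and CNR figures quoted above come from simple region-of-interest statistics. A minimal sketch of such a measurement follows; the ROI placement and this particular CNR definition are assumptions, since published studies differ in both.

    ```python
    import numpy as np

    def noise_and_cnr(image, obj_roi, bg_roi):
        """Noise and contrast-to-noise ratio from two rectangular ROIs,
        the basic measurements behind the phantom comparison above."""
        obj = image[obj_roi]            # e.g. image[100:120, 100:120]
        bg = image[bg_roi]
        noise = bg.std()                # HU standard deviation in background
        cnr = (obj.mean() - bg.mean()) / noise
        return noise, cnr

    # Example: noise_and_cnr(ct_slice, np.s_[100:120, 100:120],
    #                                  np.s_[10:30, 10:30])
    ```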

  7. Sequence analysis by iterated maps, a review.

    PubMed

    Almeida, Jonas S

    2014-05-01

    Among alignment-free methods, Iterated Maps (IMs) are on a particular extreme: they are also scale free (order free). The use of IMs for sequence analysis is also distinct from other alignment-free methodologies in being rooted in statistical mechanics instead of computational linguistics. Both of these roots go back over two decades to the use of fractal geometry in the characterization of phase-space representations. The time series analysis origin of the field is betrayed by the title of the manuscript that started this alignment-free subdomain in 1990, 'Chaos Game Representation'. The clash between the analysis of sequences as continuous series and the better established use of Markovian approaches to discrete series was almost immediate, with a defining critique published in same journal 2 years later. The rest of that decade would go by before the scale-free nature of the IM space was uncovered. The ensuing decade saw this scalability generalized for non-genomic alphabets as well as an interest in its use for graphic representation of biological sequences. Finally, in the past couple of years, in step with the emergence of BigData and MapReduce as a new computational paradigm, there is a surprising third act in the IM story. Multiple reports have described gains in computational efficiency of multiple orders of magnitude over more conventional sequence analysis methodologies. The stage appears to be now set for a recasting of IMs with a central role in processing nextgen sequencing results. PMID:24162172
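
    The iterated map at the origin of this field, the Chaos Game Representation, takes only a few lines of code. A minimal sketch for DNA sequences, assuming a pure A/C/G/T string and the usual corner assignment:

    ```python
    import numpy as np

    def chaos_game_representation(seq):
        """Chaos Game Representation: each successive base pulls the
        current point halfway toward its corner of the unit square. The
        resulting point cloud is the order-free (scale-free) signature
        of the sequence discussed in the review."""
        corners = {'A': (0.0, 0.0), 'C': (0.0, 1.0),
                   'G': (1.0, 1.0), 'T': (1.0, 0.0)}
        pts = np.empty((len(seq), 2))
        x, y = 0.5, 0.5                    # conventional starting point
        for i, base in enumerate(seq.upper()):
            cx, cy = corners[base]
            x, y = (x + cx) / 2.0, (y + cy) / 2.0
            pts[i] = (x, y)
        return pts
    ```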

  8. Mixed Confidence Estimation for Iterative CT Reconstruction.

    PubMed

    Perlmutter, David S; Kim, Soo Mee; Kinahan, Paul E; Alessio, Adam M

    2016-09-01

    Dynamic (4D) CT imaging is used in a variety of applications, but the two major drawbacks of the technique are its increased radiation dose and longer reconstruction time. Here we present a statistical analysis of our previously proposed Mixed Confidence Estimation (MCE) method that addresses both these issues. This method, where framed iterative reconstruction is only performed on the dynamic regions of each frame while static regions are fixed across frames to a composite image, was proposed to reduce computation time. In this work, we generalize the previous method to describe any application where a portion of the image is known with higher confidence (static, composite, lower-frequency content, etc.) and a portion of the image is known with lower confidence (dynamic, targeted, etc.). We show that by splitting the image space into higher- and lower-confidence components, MCE can lower the estimator variance in both regions compared to conventional reconstruction. We present a theoretical argument for this reduction in estimator variance and verify this argument with proof-of-principle simulations. We also propose a fast approximation of the variance of images reconstructed with MCE and confirm that this approximation is accurate compared to analytic calculations of image variance and to multi-realization image variance. This MCE method requires less computation time and provides reduced image variance for imaging scenarios where portions of the image are known with more certainty than others, allowing for potentially reduced radiation dose and/or improved dynamic imaging.

  9. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    NASA Astrophysics Data System (ADS)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-01

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies are discussed, including the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements.

  10. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    SciTech Connect

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-21

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies are discussed, including the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements.

  11. Iterative reconstruction for pet scanners with continuous scintillators.

    PubMed

    Iriarte, Ana; Caffarena, Gabriel; Lopez-Fernandez, Mariano; Garcia-Carmona, Rodrigo; Otero, Abraham; Sorzano, Carlos O S; Marabini, Roberto

    2015-08-01

    Several technical developments have led to a comeback of continuous scintillators in positron emission tomography (PET). Important differences exist between the resurgent continuous scintillators and the prevailing pixelated devices, which can translate into certain advantages of the former over the latter. However, if the peculiarities of the continuous scintillators are not considered in the iterative reconstruction by which the measured data are converted to images, these advantages will not be fully exploited. In this paper, we review what those peculiarities are and how they have been considered in the literature on PET reconstruction. In light of this review, we propose a new method to compute one of the key elements of the iterative schemes, the system matrix. Specifically, we substitute the traditional Gaussian approach to the so-called uncertainty term with a more general Monte Carlo estimation, and account for the effect of the optical photons, which cannot be neglected in continuous-scintillator devices. Finally, we gather in a single scheme all the elements of the iterative reconstruction that have been individually reformulated, in this or previous works, for continuous scintillators, providing the first reconstruction framework fully adapted to this type of detector. The preliminary images obtained for a commercially available PET scanner show the benefits of adjusting the reconstruction to the nature of the scintillators.

  12. Iterative reconstruction methods in X-ray CT.

    PubMed

    Beister, Marcel; Kolditz, Daniel; Kalender, Willi A

    2012-04-01

    Iterative reconstruction (IR) methods have recently re-emerged in transmission x-ray computed tomography (CT). They were successfully used in the early years of CT, but given up when the amount of measured data increased because of the higher computational demands of IR compared to analytical methods. The availability of large computational capacities in normal workstations and the ongoing efforts towards lower doses in CT have changed the situation; IR has become a hot topic for all major vendors of clinical CT systems in the past 5 years. This review strives to provide information on IR methods and aims at interested physicists and physicians already active in the field of CT. We give an overview on the terminology used and an introduction to the most important algorithmic concepts including references for further reading. As a practical example, details on a model-based iterative reconstruction algorithm implemented on a modern graphics adapter (GPU) are presented, followed by application examples for several dedicated CT scanners in order to demonstrate the performance and potential of iterative reconstruction methods. Finally, some general thoughts regarding the advantages and disadvantages of IR methods as well as open points for research in this field are discussed. PMID:22316498

  13. An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction

    PubMed Central

    Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua

    2015-01-01

    Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criterion can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing algorithms related to SIR unavoidably suffer from a heavy computational load and slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the above-mentioned issues of general SIR algorithms, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, termed “ALM-ANAD”. The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted by using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm can achieve noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index metrics.
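
    For context, the penalized weighted least-squares objective such algorithms minimize is 0.5*(y - Ax)^T W (y - Ax) + beta*R(x). The sketch below takes one plain gradient step on a 1D toy version with a quadratic roughness penalty; it is only meant to make the objective concrete, not to reproduce ALM-ANAD or an edge-preserving penalty.

    ```python
    import numpy as np

    def pwls_gradient_step(x, A, y, w, beta, step):
        """One gradient step on the PWLS objective with W = diag(w) and a
        quadratic roughness penalty R(x) = sum_i (x_i - x_{i-1})^2 on a
        periodic 1D lattice (np.roll), kept quadratic for brevity."""
        grad_fit = A.T @ (w * (A @ x - y))                 # data-fidelity term
        grad_reg = 2.0 * (2.0 * x - np.roll(x, 1) - np.roll(x, -1))
        return x - step * (grad_fit + beta * grad_reg)
    ```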

  14. Adaptive Algebraic Multigrid Methods

    SciTech Connect

    Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J

    2004-04-09

    Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.

  15. Benchmarking ICRF simulations for ITER

    SciTech Connect

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase, with the bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  16. Fixed Point Transformations Based Iterative Control of a Polymerization Reaction

    NASA Astrophysics Data System (ADS)

    Tar, József K.; Rudas, Imre J.

    As a paradigm of strongly coupled non-linear multi-variable dynamic systems, the mathematical model of the free-radical polymerization of methyl methacrylate with azobis(isobutyronitrile) as an initiator and toluene as a solvent, taking place in a jacketed Continuous Stirred Tank Reactor (CSTR), is considered. In the adaptive control of this system only a single input variable is used as the control signal (the process input, i.e. the dimensionless volumetric flow rate of the initiator), and a single output variable is observed (the process output, i.e. the number-average molecular weight of the polymer). Simulation examples illustrate that, on the basis of a very rough and primitive model consisting of two scalar variables, various convergent iterations based on fixed-point transformations result in a novel, sophisticated adaptive control.

  17. Climate change report calls for iterative risk management framework

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2011-05-01

    Climate change is occurring, is very likely caused by human activities, and poses significant risks for a broad range of human and natural systems, a 12 May report by the U.S. National Research Council (NRC) reaffirms. The report includes a series of recommended steps to respond to those risks. In addition, the report urges an iterative risk management framework that can adapt to new information and changing circumstances and concerns about the risks. “Each additional ton of greenhouse gases emitted commits us to further change and greater risks. In the judgment of the [NRC] Committee on America's Climate Choices, the environmental, economic, and humanitarian risks of climate change indicate a pressing need for substantial action to limit the magnitude of climate change and to prepare to adapt to its impacts,” the report states.

  18. Statistical optics

    NASA Astrophysics Data System (ADS)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
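
    One of the book's recurring building blocks, the random phasor sum, is easy to reproduce numerically. The snippet below checks the classical Rayleigh-amplitude result by simulation; it is a textbook illustration, not material from the book itself.

    ```python
    import numpy as np

    # Sum N unit-amplitude phasors with independent uniform random phases
    # and examine the normalized amplitude: for large N it follows
    # Rayleigh statistics, the classical result behind laser speckle.
    rng = np.random.default_rng(1)
    N, trials = 100, 50_000
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(trials, N))
    amplitude = np.abs(np.exp(1j * phases).sum(axis=1)) / np.sqrt(N)
    print(f"mean amplitude = {amplitude.mean():.3f}, "
          f"Rayleigh prediction sqrt(pi)/2 = {np.sqrt(np.pi) / 2:.3f}")
    ```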

  19. New concurrent iterative methods with monotonic convergence

    SciTech Connect

    Yao, Qingchuan

    1996-12-31

    This paper proposes new concurrent iterative methods that do not use any derivatives for finding all zeros of polynomials simultaneously. The new methods converge monotonically for both simple and multiple real zeros of polynomials and are quadratically convergent. The corresponding accelerated concurrent iterative methods are obtained as well. The new methods are good candidates for application in solving symmetric eigenproblems.
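
    A well-known member of this family of concurrent (simultaneous) zero finders is the Durand-Kerner, or Weierstrass, iteration, sketched below for illustration; it is not the derivative-free monotone method proposed in the paper.

    ```python
    import numpy as np

    def durand_kerner(coeffs, n_iter=100, tol=1e-12):
        """Durand-Kerner (Weierstrass) iteration: refine approximations
        to all zeros of a polynomial concurrently."""
        c = np.asarray(coeffs, dtype=complex)
        c = c / c[0]                       # make the polynomial monic
        p = np.poly1d(c)
        n = len(c) - 1
        # Distinct, non-real starting points avoid symmetric stagnation.
        z = (0.4 + 0.9j) ** np.arange(1, n + 1)
        for _ in range(n_iter):
            delta = np.array([p(z[i]) / np.prod(z[i] - np.delete(z, i))
                              for i in range(n)])
            z = z - delta
            if np.max(np.abs(delta)) < tol:
                break
        return z

    # durand_kerner([1, -6, 11, -6]) converges to the roots 1, 2, 3.
    ```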

  20. An accelerated subspace iteration for eigenvector derivatives

    NASA Technical Reports Server (NTRS)

    Ting, Tienko

    1991-01-01

    An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.

  1. Iterative methods for weighted least-squares

    SciTech Connect

    Bobrovnikova, E.Y.; Vavasis, S.A.

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.
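
    The problem setting is easy to state in code. The sketch below solves the weighted problem by a direct factorization of the row-scaled system; this is the stable reference that an iterative method with reorthogonalization aims to match, not the authors' algorithm.

    ```python
    import numpy as np

    def weighted_lsq(A, b, w):
        """Solve min_x || W^(1/2) (A x - b) ||_2 with W = diag(w) via a
        direct least-squares solve on the row-scaled system. With
        severely ill-conditioned weights, conjugate gradients on the same
        system loses accuracy to round-off, the failure mode the paper
        addresses."""
        sw = np.sqrt(w)
        return np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    ```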

  2. DSC -- Disruption Simulation Code for Tokamaks and ITER applications

    NASA Astrophysics Data System (ADS)

    Galkin, S. A.; Grubert, J. E.; Zakharov, L. E.

    2010-11-01

    Arguably the most important issue facing the further development of magnetic fusion via advanced tokamaks is to predict, avoid, or mitigate disruptions. This has recently become one of the most challenging topics in fusion research because of several potentially damaging effects that could impact the ITER device. To address this issue, two versions of a new 3D adaptive Disruption Simulation Code (DSC) will be developed. The first version will solve the ideal reduced 3D MHD model in the real geometry with a thin conducting wall structure, utilizing an adaptive meshless technique. The second version will solve the resistive reduced 3D MHD model in the real geometry of the conducting structure of the tokamak vessel and will finally be parallelized. The DSC will be calibrated against the JET disruption data and will be capable of predicting disruption effects in ITER, as well as contributing to the development of the disruption mitigation scheme and the suppression of runaway electron (RE) generation. Progress on the first version of the 3D DSC development will be presented.

  3. Statistics Revelations

    ERIC Educational Resources Information Center

    Chicot, Katie; Holmes, Hilary

    2012-01-01

    The use, and misuse, of statistics is commonplace, yet in the printed format data representations can be either over simplified, supposedly for impact, or so complex as to lead to boredom, supposedly for completeness and accuracy. In this article the link to the video clip shows how dynamic visual representations can enliven and enhance the…

  4. Statistical Fun

    ERIC Educational Resources Information Center

    Catley, Alan

    2007-01-01

    Following the announcement last year that there will be no more math coursework assessment at General Certificate of Secondary Education (GCSE), teachers will in the future be able to devote more time to preparing learners for formal examinations. One of the key things that the author has learned when teaching statistics is that it makes for far…

  5. Adaptive wavelets and relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hirschmann, Eric; Neilsen, David; Anderson, Matthe; Debuhr, Jackson; Zhang, Bo

    2016-03-01

    We present a method for integrating the relativistic magnetohydrodynamics equations using iterated interpolating wavelets. These wavelets provide an adaptive implementation for simulations in multiple dimensions. A measure of the local approximation error for the solution is provided by the wavelet coefficients. They place collocation points in locations naturally adapted to the flow while providing the expected conservation. We present demanding 1D and 2D tests, including the Kelvin-Helmholtz instability and the Rayleigh-Taylor instability. Finally, we consider an outgoing blast wave that models a GRB outflow.

  6. Adaptive ILC algorithms of nonlinear continuous systems with non-parametric uncertainties for non-repetitive trajectory tracking

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Dong; Lv, Mang-Mang; Ho, John K. L.

    2016-07-01

    In this article, two adaptive iterative learning control (ILC) algorithms are presented for nonlinear continuous systems with non-parametric uncertainties. Unlike general ILC techniques, the proposed adaptive ILC algorithms allow both the initial error at each iteration and the reference trajectory to be iteration-varying in the ILC process, and can achieve non-repetitive trajectory tracking beyond a small initial time interval. Compared to neural network or fuzzy system-based adaptive ILC schemes and classical ILC methods, in which the number of iterative variables is generally larger than or equal to the number of control inputs, the first adaptive ILC algorithm proposed in this paper uses just two iterative variables, while the second uses even a single iterative variable, provided that some bound information on the system dynamics is known. As a result, the memory space required in real-time ILC implementations is greatly reduced.
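
    For readers new to ILC, the baseline these algorithms extend is the fixed-gain P-type update u_{k+1}(t) = u_k(t) + gain * e_k(t), applied trial by trial. A minimal sketch follows; the plant callable and the learning gain are assumptions for illustration, and the adaptive algorithms above replace the fixed gain with iteratively tuned variables.

    ```python
    import numpy as np

    def ilc_trial(u, plant, y_ref, gain=0.5):
        """One trial of a fixed-gain P-type iterative learning update.
        `plant` runs (or simulates) the system over the trial horizon
        and returns the sampled output trajectory."""
        y = plant(u)               # system response for this trial's input
        e = y_ref - y              # tracking error along the trajectory
        return u + gain * e, e

    # Repeated trials drive the error down for a repeating reference:
    #   for k in range(50):
    #       u, e = ilc_trial(u, plant, y_ref)
    ```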

  7. On the interplay between inner and outer iterations for a class of iterative methods

    SciTech Connect

    Giladi, E.

    1994-12-31

    Iterative algorithms for solving linear systems of equations often involve the solution of a subproblem at each step. This subproblem is usually another linear system of equations. For example, a preconditioned iteration involves the solution of a preconditioner at each step. In this paper, the author considers algorithms for which the subproblem is also solved iteratively. The subproblem is then said to be solved by "inner iterations", while the term "outer iteration" refers to a step of the basic algorithm. The cost of performing an outer iteration is dominated by the solution of the subproblem, and can be measured by the number of inner iterations. A good measure of the total amount of work needed to solve the original problem to a given accuracy is then the total number of inner iterations. To lower the amount of work, one can consider solving the subproblems "inexactly", i.e. not to full accuracy. Although this diminishes the cost of solving each subproblem, it usually slows down the convergence of the outer iteration. It is therefore interesting to study the effect of solving each subproblem inexactly on the total amount of work. Specifically, the author considers strategies in which the accuracy to which the inner problem is solved changes from one outer iteration to the next. The author seeks the "optimal strategy", that is, the one that yields the lowest possible cost. Here, the author develops a methodology to find the optimal strategy, from the set of slowly varying strategies, for some iterative algorithms. This methodology is applied to the Chebychev iteration, and it is shown that for Chebychev iteration a strategy in which the inner tolerance remains constant is optimal. The author also estimates this optimal constant. Generalizations to other iterative procedures are then discussed.
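
    The setting is easy to reproduce: an outer stationary iteration whose preconditioner solve is itself iterative. The sketch below uses a Richardson outer loop with an inner conjugate-gradient solve to a fixed relative tolerance, mirroring the constant-tolerance strategy found optimal for the Chebychev iteration; M is assumed symmetric positive definite.

    ```python
    import numpy as np
    from scipy.sparse.linalg import cg

    def inexact_richardson(A, M, b, outer_tol=1e-8, inner_tol=1e-2,
                           max_outer=200):
        """Outer Richardson iteration preconditioned by M, with every
        preconditioner solve M z = r done by an inner CG run to a fixed
        relative tolerance."""
        x = np.zeros_like(b, dtype=float)
        for k in range(max_outer):
            r = b - A @ x
            if np.linalg.norm(r) <= outer_tol * np.linalg.norm(b):
                break
            # Inner iterations: approximate solve of the preconditioner.
            # (`rtol` in recent SciPy; older releases call this `tol`.)
            z, _ = cg(M, r, rtol=inner_tol)
            x = x + z
        return x, k
    ```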

  8. Hydropower, adaptive management, and Biodiversity

    NASA Astrophysics Data System (ADS)

    Wieringa, Mark J.; Morton, Anthony G.

    1996-11-01

    Adaptive management is a policy framework within which an iterative process of decision making is followed based on the observed responses to and effectiveness of previous decisions. The use of adaptive management allows science-based research and monitoring of natural resource and ecological community responses, in conjunction with societal values and goals, to guide decisions concerning man's activities. The adaptive management process has been proposed for application to hydropower operations at Glen Canyon Dam on the Colorado River, a situation that requires complex balancing of natural resources requirements and competing human uses. This example is representative of the general increase in public interest in the operation of hydropower facilities and possible effects on downstream natural resources and of the growing conflicts between uses and users of river-based resources. This paper describes the adaptive management process, using the Glen Canyon Dam example, and discusses ways to make the process work effectively in managing downstream natural resources and biodiversity.

  9. A component analysis based on serial results analyzing performance of parallel iterative programs

    SciTech Connect

    Richman, S.C.

    1994-12-31

    This research is concerned with the parallel performance of iterative methods for solving large, sparse, nonsymmetric linear systems. Most of the iterative methods are first presented with their time costs and convergence rates examined intensively on sequential machines, and then adapted to parallel machines. The analysis of parallel iterative performance is more complicated than that of serial performance, since the former can be affected by many new factors, such as data communication schemes, the number of processors used, and ordering and mapping techniques. Although the author is able to summarize results from data obtained after examining certain cases by experiment, two questions remain: (1) How to explain the results obtained? (2) How to extend the results from the certain cases to general cases? To answer these two questions quantitatively, the author introduces a tool called component analysis based on serial results. This component analysis is introduced because the iterative methods consist mainly of several basic functions, such as linked triads, inner products, and triangular solves, which have different intrinsic parallelisms and are suitable for different parallel techniques. The parallel performance of each iterative method is first expressed as a weighted sum of the parallel performance of the basic functions that are the components of the method. Then, one separately examines the performance of the basic functions and the weighting distributions of the iterative methods, from which two independent sets of information are obtained when solving a given problem. In this component approach, all the weightings require only serial costs, not parallel costs, and each iterative method for solving a given problem is represented by its unique weighting distribution. The information given by the basic functions is independent of the iterative method, while that given by the weightings is independent of parallel technique, parallel machine, and number of processors.

  10. Statistical Optics

    NASA Astrophysics Data System (ADS)

    Goodman, Joseph W.

    2000-07-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I RIchard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research

  11. High-Resolution Iterative Frequency Identification for NMR as a General Strategy for Multidimensional Data Collection

    PubMed Central

    Bahrami, Arash; Tonelli, Marco; Hallenga, Klaas; Markley, John L.

    2015-01-01

    We describe a novel approach to the rapid collection and processing of multidimensional NMR data: “high-resolution iterative frequency identification for NMR” (HIFI–NMR). As with other reduced dimensionality approaches, HIFI–NMR collects n-dimensional data as a set of two-dimensional (2D) planes. The HIFI–NMR algorithm incorporates several innovative features. (1) Following the initial collection of two orthogonal 2D planes, tilted planes are selected adaptively, one-by-one. (2) Spectral space is analyzed in a rigorous statistical manner. (3) An online algorithm maintains a model that provides a probabilistic representation of the three-dimensional (3D) peak positions, derives the optimal angle for the next plane to be collected, and stops data collection when the addition of another plane would not improve the data model. (4) A robust statistical algorithm extracts information from the plane projections and is used to drive data collection. (5) Peak lists with associated probabilities are generated directly, without total reconstruction of the 3D spectrum; these are ready for use in subsequent assignment or structure determination steps. As a proof of principle, we have tested the approach with 3D triple-resonance experiments of the kind used to assign protein backbone and side-chain resonances. Peaks extracted automatically by HIFI–NMR, for both small and larger proteins, included ~98% of real peaks obtained from control experiments in which data were collected by conventional 3D methods. HIFI–NMR required about one-tenth the time for data collection and avoided subsequent data processing and peak-picking. The approach can be implemented on commercial NMR spectrometers and is extensible to higher-dimensional NMR. PMID:16144400

  12. High-resolution iterative frequency identification for NMR as a general strategy for multidimensional data collection.

    PubMed

    Eghbalnia, Hamid R; Bahrami, Arash; Tonelli, Marco; Hallenga, Klaas; Markley, John L

    2005-09-14

    We describe a novel approach to the rapid collection and processing of multidimensional NMR data: "high-resolution iterative frequency identification for NMR" (HIFI-NMR). As with other reduced dimensionality approaches, HIFI-NMR collects n-dimensional data as a set of two-dimensional (2D) planes. The HIFI-NMR algorithm incorporates several innovative features. (1) Following the initial collection of two orthogonal 2D planes, tilted planes are selected adaptively, one-by-one. (2) Spectral space is analyzed in a rigorous statistical manner. (3) An online algorithm maintains a model that provides a probabilistic representation of the three-dimensional (3D) peak positions, derives the optimal angle for the next plane to be collected, and stops data collection when the addition of another plane would not improve the data model. (4) A robust statistical algorithm extracts information from the plane projections and is used to drive data collection. (5) Peak lists with associated probabilities are generated directly, without total reconstruction of the 3D spectrum; these are ready for use in subsequent assignment or structure determination steps. As a proof of principle, we have tested the approach with 3D triple-resonance experiments of the kind used to assign protein backbone and side-chain resonances. Peaks extracted automatically by HIFI-NMR, for both small and larger proteins, included approximately 98% of real peaks obtained from control experiments in which data were collected by conventional 3D methods. HIFI-NMR required about one-tenth the time for data collection and avoided subsequent data processing and peak-picking. The approach can be implemented on commercial NMR spectrometers and is extensible to higher-dimensional NMR. PMID:16144400

  13. Radiation Dose Reduction in Pediatric Body CT Using Iterative Reconstruction and a Novel Image-Based Denoising Method

    PubMed Central

    Yu, Lifeng; Fletcher, Joel G.; Shiung, Maria; Thomas, Kristen B.; Matsumoto, Jane M.; Zingula, Shannon N.; McCollough, Cynthia H.

    2016-01-01

    OBJECTIVE The objective of this study was to evaluate the radiation dose reduction potential of a novel image-based denoising technique in pediatric abdominopelvic and chest CT examinations and compare it with a commercial iterative reconstruction method. MATERIALS AND METHODS Data were retrospectively collected from 50 (25 abdominopelvic and 25 chest) clinically indicated pediatric CT examinations. For each examination, a validated noise-insertion tool was used to simulate half-dose data, which were reconstructed using filtered back-projection (FBP) and sinogram-affirmed iterative reconstruction (SAFIRE) methods. A newly developed denoising technique, adaptive nonlocal means (aNLM), was also applied. For each of the 50 patients, three pediatric radiologists evaluated four datasets: full dose plus FBP, half dose plus FBP, half dose plus SAFIRE, and half dose plus aNLM. For each examination, the order of preference for the four datasets was ranked. The organ-specific diagnosis and diagnostic confidence for five primary organs were recorded. RESULTS The mean (± SD) volume CT dose index for the full-dose scan was 5.3 ± 2.1 mGy for abdominopelvic examinations and 2.4 ± 1.1 mGy for chest examinations. For abdominopelvic examinations, there was no statistically significant difference between the half dose plus aNLM dataset and the full dose plus FBP dataset (3.6 ± 1.0 vs 3.6 ± 0.9, respectively; p = 0.52), and aNLM performed better than SAFIRE. For chest examinations, there was no statistically significant difference between the half dose plus SAFIRE and the full dose plus FBP (4.1 ± 0.6 vs 4.2 ± 0.6, respectively; p = 0.67), and SAFIRE performed better than aNLM. For all organs, there was more than 85% agreement in organ-specific diagnosis among the three half-dose configurations and the full dose plus FBP configuration. CONCLUSION Although a novel image-based denoising technique performed better than a commercial iterative reconstruction method in pediatric

  14. ITER Ion Cyclotron Heating and Fueling Systems

    SciTech Connect

    Rasmussen, D.A.; Baylor, L.R.; Combs, S.K.; Fredd, E.; Goulding, R.H.; Hosea, J.; Swain, D.W.

    2005-04-15

    The ITER burning plasma and advanced operating regimes require robust and reliable heating and current drive and fueling systems. The ITER design documents describe the requirements and reference designs for the ion cyclotron and pellet fueling systems. Development and testing programs are required to optimize, validate and qualify these systems for installation on ITER. The ITER ion cyclotron system offers significant technology challenges. The antenna must operate in a nuclear environment and withstand heat loads and disruption forces beyond present-day designs. It must operate for long pulse lengths and be highly reliable, delivering power to a plasma load with properties that will change throughout the discharge. The ITER ion cyclotron system consists of one eight-strap antenna, eight rf sources (20 MW, 35-65 MHz), associated high-voltage DC power supplies, transmission lines and matching and decoupling components. The ITER fueling system consists of a gas injection system and multiple pellet injectors for edge fueling and deep core fueling. Pellet injection will be the primary ITER fuel delivery system. The fueling requirements will require significant extensions in pellet injector pulse length (~3000 s), throughput (400 torr-L/s), and reliability. The proposed design is based on a centrifuge accelerator fed by a continuous screw extruder. Inner wall pellet injection with the use of curved guide tubes will be utilized for deep fueling.

  15. Progress on ITER Diagnostic Integration

    NASA Astrophysics Data System (ADS)

    Johnson, David; Feder, Russ; Klabacha, Jonathan; Loesser, Doug; Messineo, Mike; Stratton, Brentley; Wood, Rick; Zhai, Yuhu; Andrew, Phillip; Barnsley, Robin; Bertschinger, Guenter; Debock, Maarten; Reichle, Roger; Udintsev, Victor; Vayakis, George; Watts, Christopher; Walsh, Michael

    2013-10-01

    On ITER, front-end components must operate reliably in a hostile environment. Many will be housed in massive port plugs, which also shield the machine from radiation. Multiple diagnostics reside in a single plug, presenting new challenges for developers. Front-end components must tolerate thermally-induced stresses, disruption-induced mechanical loads, stray ECH radiation, displacement damage, and degradation due to plasma-induced coatings. The impact of failures is amplified due to the difficulty in performing robotic maintenance on these large structures. Motivated by needs to minimize disruption loads on the plugs, standardize the handling of shield modules, and decouple the parallel efforts of the many parties, the packaging strategy for diagnostics has recently focused on the use of 3 vertical shield modules inserted from the plasma side into each equatorial plug structure. At the front of each is a detachable first wall element with customized apertures. Progress on US equatorial and upper plugs will be used as examples, including the layout of components in the interspace and port cell regions. Supported by PPPL under contract DE-AC02-09CH11466 and UT-Battelle, LLC under contract DE-AC05-00OR22725 with the U.S. DOE.

  16. Iterants, Fermions and Majorana Operators

    NASA Astrophysics Data System (ADS)

    Kauffman, Louis H.

    Beginning with an elementary, oscillatory discrete dynamical system associated with the square root of minus one, we study both the foundations of mathematics and physics. Position and momentum do not commute in our discrete physics. Their commutator is related to the diffusion constant for a Brownian process and to the Heisenberg commutator in quantum mechanics. We take John Wheeler's idea of It from Bit as an essential clue and we rework the structure of that bit to a logical particle that is its own anti-particle, a logical Majorana particle. This is our key example of the amphibian nature of mathematics and the external world. We show how the dynamical system for the square root of minus one is essentially the dynamics of a distinction whose self-reference leads to both the fusion algebra and the operator algebra for the Majorana Fermion. In the course of this, we develop an iterant algebra that supports all of matrix algebra and we end the essay with a discussion of the Dirac equation based on these principles.

  17. Multichannel blind iterative image restoration.

    PubMed

    Sroubek, Filip; Flusser, Jan

    2003-01-01

    Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately, in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for a multichannel framework which perfectly determines the convolution masks in a noise-free environment if a channel disparity condition, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization together with a cell-centered finite difference discretization scheme is used in the algorithm and provides a unified approach to the solution of the total variation and Mumford-Shah formulations. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate the capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun. PMID:18237981

  18. Sequence analysis by iterated maps, a review

    PubMed Central

    2014-01-01

    Among alignment-free methods, Iterated Maps (IMs) are on a particular extreme: they are also scale free (order free). The use of IMs for sequence analysis is also distinct from other alignment-free methodologies in being rooted in statistical mechanics instead of computational linguistics. Both of these roots go back over two decades to the use of fractal geometry in the characterization of phase-space representations. The time series analysis origin of the field is betrayed by the title of the manuscript that started this alignment-free subdomain in 1990, ‘Chaos Game Representation’. The clash between the analysis of sequences as continuous series and the better established use of Markovian approaches to discrete series was almost immediate, with a defining critique published in the same journal two years later. The rest of that decade would go by before the scale-free nature of the IM space was uncovered. The ensuing decade saw this scalability generalized for non-genomic alphabets as well as an interest in its use for graphic representation of biological sequences. Finally, in the past couple of years, in step with the emergence of BigData and MapReduce as a new computational paradigm, there is a surprising third act in the IM story. Multiple reports have described gains in computational efficiency of multiple orders of magnitude over more conventional sequence analysis methodologies. The stage appears to be now set for a recasting of IMs with a central role in processing next-generation sequencing results. PMID:24162172
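
    The iterated map behind the Chaos Game Representation is simple enough to state in a few lines: each successive symbol pulls the current point halfway toward that symbol's corner of the unit square. The sketch below, with an invented toy sequence and the common A/C/G/T corner assignment, is an illustrative reading of that map, not code from the review.

```python
import numpy as np

# Corner assignment for the four DNA bases (a common convention; the review
# itself does not prescribe one).
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr(sequence):
    """Chaos Game Representation: x_{k+1} = (x_k + corner(s_k)) / 2."""
    points = np.empty((len(sequence), 2))
    x = np.array([0.5, 0.5])                      # start at the centre of the unit square
    for k, base in enumerate(sequence):
        x = (x + np.asarray(CORNERS[base])) / 2.0  # midpoint move toward the base's corner
        points[k] = x
    return points

pts = cgr("ACGTACGGTACGT" * 10)                   # toy sequence, purely illustrative
print(pts[:3])   # each point encodes the whole suffix of symbols seen so far
```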

  19. Computer implementations of iterative and non-iterative crystal plasticity solvers on high performance graphics hardware

    NASA Astrophysics Data System (ADS)

    Savage, Daniel J.; Knezevic, Marko

    2015-10-01

    We present parallel implementations of Newton-Raphson iterative and spectral-based non-iterative solvers for single-crystal visco-plasticity models on specialized computer hardware integrating a graphics-processing unit (GPU). We explore two implementations for the iterative solver on GPU multiprocessors: one based on a thread-per-crystal parallelization on local memory and another based on multiple threads per crystal on shared memory. The non-iterative solver implementation on the GPU hardware is based on a divide-and-conquer approach for matrix operations. The reduction of computational time for the iterative scheme was found to approach one order of magnitude. From detailed performance comparisons of the developed GPU iterative and non-iterative implementations, we conclude that the spectral non-iterative solver programmed on a GPU platform is superior to the iterative implementation in terms of runtime as well as ease of implementation. It provides remarkable speedup factors exceeding three orders of magnitude over the iterative scalar version of the solver.

  20. Shuffling Adaptive Clinical Trials.

    PubMed

    Gokhale, Sanjay G; Gokhale, Sankalp

    2016-01-01

    Clinical trials are interventional studies on human beings, designed to test hypotheses about diagnostic techniques, treatments, and disease preventions. Any novel medical technology should be evaluated for its efficacy and safety by clinical trials. The costs associated with developing drugs have increased dramatically over the past decade, and fewer drugs are obtaining regulatory approval. Because of this, the pharmaceutical industry is continually exploring new ways of improving drug development, and one area of focus is adaptive clinical trial designs. Adaptive designs, which allow for some types of prospectively planned mid-study changes, can improve the efficiency of a trial and maximize the chance of success without undermining the validity and integrity of the trial. However, because adaptive trials make changes based on accrued data, the actual patient population after the adaptations could deviate from the originally targeted patient population; to overcome this drawback, special methods such as Bayesian statistics and predictive probability are used in the data analysis. In this study, a mathematical model of a new adaptive design (the shuffling adaptive trial) is suggested that uses real-time data; because there is no gap between expected and observed data, statistical modifications are not needed, and the results are directly clinically relevant. PMID:23751329

  1. [Statistical materials].

    PubMed

    1986-01-01

    Official population data for the USSR are presented for 1985 and 1986. Part 1 (pp. 65-72) contains data on capitals of union republics and cities with over one million inhabitants, including population estimates for 1986 and vital statistics for 1985. Part 2 (p. 72) presents population estimates by sex and union republic, 1986. Part 3 (pp. 73-6) presents data on population growth, including birth, death, and natural increase rates, 1984-1985; seasonal distribution of births and deaths; birth order; age-specific birth rates in urban and rural areas and by union republic; marriages; age at marriage; and divorces. PMID:12178831

  2. Adaptive Management for Urban Watersheds: The Slavic Village Pilot Project

    EPA Science Inventory

    Adaptive management is an environmental management strategy that uses an iterative process of decision-making to reduce the uncertainty in environmental management via system monitoring. A central tenet of adaptive management is that management involves a learning process that ca...

  3. Three-dimensional stellarator equilibria by iteration

    SciTech Connect

    Boozer, A.H.

    1983-02-01

    The iterative method of evaluating plasma equilibria is especially simple in a magnetic coordinate representation. This method is particularly useful for clarifying the subtle constraints of three-dimensional equilibria and studying magnetic surface breakup at high plasma beta.

  4. Anderson Acceleration for Fixed-Point Iterations

    SciTech Connect

    Walker, Homer F.

    2015-08-31

    The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
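
    The record above names the method without stating it; for orientation, here is a minimal sketch of (Type-II) Anderson acceleration for a fixed-point iteration x = g(x). The history depth m, the small ridge regularization, and the cos(x) test problem are all illustrative choices of ours, not from the grant report.

```python
import numpy as np

def anderson(g, x0, m=5, iters=100, tol=1e-12):
    """Type-II Anderson acceleration for the fixed-point iteration x = g(x)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, R = [], []                          # histories of g(x_i) and residuals r_i
    for k in range(iters):
        gx = np.atleast_1d(g(x))
        r = gx - x
        if np.linalg.norm(r) < tol:
            return x, k
        X.append(gx); R.append(r)
        X, R = X[-m:], R[-m:]              # keep at most m history pairs
        G = np.column_stack(R)
        # coefficients minimizing ||G c|| with sum(c) = 1, solved via the
        # normal equations plus a tiny ridge, then renormalized
        alpha, *_ = np.linalg.lstsq(G.T @ G + 1e-12 * np.eye(len(R)),
                                    np.ones(len(R)), rcond=None)
        c = alpha / alpha.sum()
        x = sum(ci * xi for ci, xi in zip(c, X))
    return x, iters

# classic slowly converging fixed point: x = cos(x)
x, k = anderson(np.cos, 1.0)
print(f"x = {x[0]:.12f} after {k} iterations")   # far fewer steps than plain iteration
```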

  5. On the safety of ITER accelerators.

    PubMed

    Li, Ge

    2013-01-01

    Three 1 MV/40 A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. The accelerators will generate 1 h long-pulse ion beams at -1 MV, to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, snubbers are used to limit the fault arc current and improve ITER safety. However, recent analyses of its reference design have raised concerns. A general nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER. PMID:24008267

  6. US sanctions on Russia hit ITER council

    NASA Astrophysics Data System (ADS)

    Clery, Daniel

    2014-06-01

    The ITER fusion experiment has had to bow to the impact of US sanctions against Russia and move the venue of its council meeting, scheduled for 18-19 June, from St Petersburg to the project headquarters in Cadarache, France.

  7. Budget woes continue to hamper ITER

    NASA Astrophysics Data System (ADS)

    Starckx, Senne

    2011-02-01

    A financial rescue package for ITER - the experimental nuclear-fusion reactor that is currently being built in Cadarache, France - has been refused by the European Parliament and the European Council.

  8. Archimedes' Pi--An Introduction to Iteration.

    ERIC Educational Resources Information Center

    Lotspeich, Richard

    1988-01-01

    One method (attributed to Archimedes) of approximating pi offers a simple yet interesting introduction to one of the basic ideas of numerical analysis, an iteration sequence. The method is described and elaborated. (PK)
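
    The iteration the note refers to can be written down directly: the perimeter-to-diameter ratios of circumscribed and inscribed regular polygons bracket pi, and doubling the number of sides updates them by a harmonic and a geometric mean. A minimal sketch, starting from hexagons as Archimedes did:

```python
from math import sqrt

# a = perimeter/diameter of the circumscribed regular n-gon, b of the
# inscribed one; the doubling rules are a harmonic and a geometric mean.
a = 2 * sqrt(3.0)    # circumscribed hexagon: 6 * tan(pi/6)
b = 3.0              # inscribed hexagon:     6 * sin(pi/6)

for n in (12, 24, 48, 96):
    a = 2 * a * b / (a + b)    # a_{2n}: harmonic mean of a_n and b_n
    b = sqrt(a * b)            # b_{2n}: geometric mean of a_{2n} and b_n
    print(f"{n:3d} sides: {b:.6f} < pi < {a:.6f}")
```

    At 96 sides the printed bracket reproduces Archimedes' classical bounds 223/71 < pi < 22/7.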

  9. ITER Magnet Feeder: Design, Manufacturing and Integration

    NASA Astrophysics Data System (ADS)

    CHEN, Yonghua; ILIN, Y.; M., SU; C., NICHOLAS; BAUER, P.; JAROMIR, F.; LU, Kun; CHENG, Yong; SONG, Yuntao; LIU, Chen; HUANG, Xiongyi; ZHOU, Tingzhi; SHEN, Guang; WANG, Zhongwei; FENG, Hansheng; SHEN, Junsong

    2015-03-01

    The International Thermonuclear Experimental Reactor (ITER) feeder procurement is now well underway. The feeder design has been improved by the feeder teams at the ITER Organization (IO) and the Institute of Plasma Physics, Chinese Academy of Sciences (ASIPP) in the last 2 years along with analyses and qualification activities. The feeder design is being progressively finalized. In addition, the preparation of qualification and manufacturing are well scheduled at ASIPP. This paper mainly presents the design, the overview of manufacturing and the status of integration of the ITER magnet feeders. Supported by the National Special Support for R&D on Science and Technology for ITER (Ministry of Public Security of the People's Republic of China-MPS) (No. 2008GB102000)

  10. The Physics Basis of ITER Confinement

    SciTech Connect

    Wagner, F.

    2009-02-19

    ITER will be the first fusion reactor, and the 50-year-old dream of fusion scientists will become reality. The quality of magnetic confinement will determine the success of ITER, directly in the form of the confinement time and indirectly because it determines the plasma parameters and the fluxes which cross the separatrix and have to be handled externally by technical means. This lecture portrays some of the basic principles which govern plasma confinement, uses dimensionless scaling to set the limits for the predictions for ITER, an approach which also shows the limitations of the predictions, and describes briefly the major characteristics and physics behind the H-mode--the preferred confinement regime of ITER.

  11. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles

    PubMed Central

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-01-01

    In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping problem (SLAM), which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. While keeping the scalability advantage, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF, as well. PMID:26287194

  13. An efficient iterative algorithm for computation of scattering from dielectric objects.

    SciTech Connect

    Liao, L.; Gopalsami, N.; Venugopal, A.; Heifetz, A.; Raptis, A. C.

    2011-02-14

    We have developed an efficient iterative algorithm for electromagnetic scattering from arbitrary but relatively smooth dielectric objects. The algorithm iteratively adapts the equivalent surface currents until the electromagnetic fields inside and outside the dielectric objects match the boundary conditions. Theoretical convergence is analyzed for two examples that solve scattering of plane waves incident upon air/dielectric slabs of semi-infinite and finite thicknesses. We applied the iterative algorithm to simulate a dielectric slab with a sinusoidal perturbation on one side, and the method converged even for such non-smooth surfaces. We next simulated the shift in radiation pattern of a 6-inch dielectric lens for different offsets of the feed antenna on the focal plane. The result is compared to that of geometrical optics (GO).

  14. Novel aspects of plasma control in ITER

    SciTech Connect

    Humphreys, D.; Jackson, G.; Walker, M.; Welander, A.; Ambrosino, G.; Pironti, A.; Felici, F.; Kallenbach, A.; Raupp, G.; Treutterer, W.; Kolemen, E.; Lister, J.; Sauter, O.; Moreau, D.; Schuster, E.

    2015-02-15

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  15. An Iterative Soft-Decision Decoding Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Koumoto, Takuya; Takata, Toyoo; Kasami, Tadao

    1996-01-01

    This paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. Simulation results for the RM(64,22), EBCH(64,24), RM(64,42) and EBCH(64,45) codes show that the proposed decoding algorithm achieves practically (or near) optimal error performance with significant reduction in decoding computational complexity. The average number of search iterations is also small even for low signal-to-noise ratio.

  16. Novel aspects of plasma control in ITER

    NASA Astrophysics Data System (ADS)

    Humphreys, D.; Ambrosino, G.; de Vries, P.; Felici, F.; Kim, S. H.; Jackson, G.; Kallenbach, A.; Kolemen, E.; Lister, J.; Moreau, D.; Pironti, A.; Raupp, G.; Sauter, O.; Schuster, E.; Snipes, J.; Treutterer, W.; Walker, M.; Welander, A.; Winter, A.; Zabeo, L.

    2015-02-01

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  17. Gyrokinetic Simulations of the ITER Pedestal

    NASA Astrophysics Data System (ADS)

    Kotschenreuther, Mike

    2015-11-01

    It has been reported that low collisionality pedestals for JET parameters are strongly stable to Kinetic Ballooning Modes (KBM), and it is, as simulations with GENE show, the drift-tearing modes that produce the pedestal transport. It would seem, then, that gyrokinetic simulations may be a powerful, perhaps indispensable, tool for probing the characteristics of the H-mode pedestal in ITER, especially since projected ITER pedestals have the normalized gyroradius ρ* smaller than the range of present experimental investigation; they do lie, however, within the regime of validity of gyrokinetics. Since ExB shear becomes small as ρ* approaches zero, strong drift turbulence will eventually be excited. Finding an answer to the question of whether the ITER ρ* is small enough to place it in the high turbulence regime compels serious investigation. We begin with MHD equilibria (including pedestal bootstrap current) constructed using VMEC. Plasma profile shapes, very close to JET experimental profiles, are scaled to values expected on ITER (e.g., a 4 keV pedestal). The equilibrium ExB shear is computed using a neoclassical formula for the radial electric field. As with JET, the ITER pedestal is found to be strongly stable to KBM. Preliminary nonlinear simulations with GENE show that the turbulent drift transport is strong for ITER; the electrostatic transport has a highly unfavorable scaling from JET to ITER, going from being highly sub-dominant to electromagnetic transport on JET, to dominant on ITER. At burning plasma parameters, pedestals in spherical tokamak H-modes may have much stronger velocity shear, and hence more favorable transport; preliminary investigations will be reported. This research is supported by U.S. Department of Energy, Office of Fusion Energy Science: Grant No. DE-FG02-04ER-54742.

  18. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  19. Students' attitudes towards learning statistics

    NASA Astrophysics Data System (ADS)

    Ghulami, Hassan Rahnaward; Hamid, Mohd Rashid Ab; Zakaria, Roslinazairimah

    2015-05-01

    A positive attitude towards learning is vital in order to master the core content of the subject matter under study. This is no exception when learning statistics, especially at the university level. Therefore, this study investigates students' attitudes towards learning statistics. Six variables or constructs were identified: affect, cognitive competence, value, difficulty, interest, and effort. The instrument used was a questionnaire adopted and adapted from the reliable Survey of Attitudes towards Statistics (SATS©). The study was conducted among engineering undergraduate students at a university on the East Coast of Malaysia. The respondents were students from different faculties who were taking the applied statistics course. The results are analysed descriptively and contribute to an understanding of students' attitudes towards the teaching and learning process of statistics.

  20. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  1. Bayesian classification of polarimetric SAR images using adaptive a priori probabilities

    NASA Technical Reports Server (NTRS)

    Van Zyl, J. J.; Burnette, C. F.

    1992-01-01

    The problem of classifying earth terrain by observed polarimetric scattering properties is tackled with an iterative Bayesian scheme using a priori probabilities adaptively. The first classification is based on the use of fixed and not necessarily equal a priori probabilities, and successive iterations change the a priori probabilities adaptively. The approach is applied to an SAR image in which a single water body covers 10 percent of the image area. The classification accuracies for ocean, urban, vegetated, and total areas increase, and the percentage of reclassified pixels decreases greatly as the iteration number increases. The iterative scheme is found to improve the a posteriori classification accuracy of maximum likelihood classifiers by iteratively using the local homogeneity in polarimetric SAR images. A few iterations can improve the classification accuracy significantly without sacrificing key high-frequency detail or edges in the image.
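
    A toy version of the adaptive-prior idea is easy to sketch: classify once with fixed priors, then re-estimate each pixel's class priors from the label frequencies in its neighborhood and reclassify. The two-class Gaussian image, window size, and clipping below are invented stand-ins for the polarimetric SAR setting.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)

# Synthetic two-class image standing in for polarimetric features.
H, W = 64, 64
truth = np.zeros((H, W), dtype=int); truth[:, W // 2:] = 1
means, sigma = np.array([0.0, 1.0]), 0.8
img = means[truth] + sigma * rng.standard_normal((H, W))

prior = np.full((2, H, W), 0.5)                     # fixed, equal priors at first
for it in range(4):
    loglike = np.stack([-0.5 * ((img - m) / sigma) ** 2 for m in means])
    labels = (np.log(prior) + loglike).argmax(axis=0)   # MAP classification
    # adapt the priors: class frequency in each pixel's 5x5 neighborhood
    for k in (0, 1):
        prior[k] = uniform_filter((labels == k).astype(float), size=5)
    prior = np.clip(prior, 0.01, 0.99)
    prior /= prior.sum(axis=0, keepdims=True)       # renormalize across classes
    print(f"iteration {it}: accuracy = {(labels == truth).mean():.3f}")
```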

  2. EDITORIAL: ECRH physics and technology in ITER

    NASA Astrophysics Data System (ADS)

    Luce, T. C.

    2008-05-01

    It is a great pleasure to introduce you to this special issue containing papers from the 4th IAEA Technical Meeting on ECRH Physics and Technology in ITER, which was held 6-8 June 2007 at the IAEA Headquarters in Vienna, Austria. The meeting was attended by more than 40 ECRH experts representing 13 countries and the IAEA. Presentations given at the meeting were placed into five separate categories: (1) EC wave physics: current understanding and extrapolation to ITER; (2) application of EC waves to confinement and stability studies, including active control techniques for ITER; (3) transmission systems/launchers: state of the art and ITER-relevant techniques; (4) gyrotron development towards ITER needs; and (5) system integration and optimisation for ITER. It is notable that the participants took seriously the focal point of ITER, rather than simply contributing presentations on general EC physics and technology. The application of EC waves to ITER presents new challenges not faced in the current generation of experiments from both the physics and technology viewpoints. High electron temperatures and the nuclear environment have a significant impact on the application of EC waves. The needs of ITER have also strongly motivated source and launcher development. Finally, the demonstrated ability for precision control of instabilities or non-inductive current drive in addition to bulk heating to fusion burn has secured a key role for EC wave systems in ITER. All of the participants were encouraged to submit their contributions to this special issue, subject to the normal publication and technical merit standards of Nuclear Fusion. Almost half of the participants chose to do so; many of the others had been published in other publications and therefore could not be included in this special issue. The papers included here are a representative sample of the meeting. The International Advisory Committee also asked the three summary speakers from the meeting to supply brief written summaries (O. Sauter

  3. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing approaches. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating
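
    The core loop of least squares policy iteration is compact enough to sketch. The toy below uses one-hot (state, action) features on a five-state chain in place of the paper's Mercer-kernel features and ALD sparsification, so it shows only the LSTD-Q evaluation plus greedy improvement structure, under our own invented reward and sampling choices.

```python
import numpy as np

rng = np.random.default_rng(4)

nS, nA, gamma = 5, 2, 0.9            # five-state chain; action 1 moves right

def step(s, a):
    s2 = min(nS - 1, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == nS - 1 else 0.0)   # reward only at the right end

def phi(s, a):
    f = np.zeros(nS * nA)
    f[s * nA + a] = 1.0                         # one-hot stand-in for kernel features
    return f

# transitions collected under a uniformly random behaviour policy
D = [(s, a, *step(s, a))
     for s, a in zip(rng.integers(nS, size=2000), rng.integers(nA, size=2000))]

policy = np.zeros(nS, dtype=int)                # initial policy: always move left
for it in range(10):
    A = 1e-6 * np.eye(nS * nA)                  # small ridge keeps A invertible
    b = np.zeros(nS * nA)
    for s, a, s2, r in D:                       # LSTD-Q policy evaluation
        f, f2 = phi(s, a), phi(s2, policy[s2])
        A += np.outer(f, f - gamma * f2)
        b += r * f
    w = np.linalg.solve(A, b)
    new_policy = np.array([np.argmax([w @ phi(s, a) for a in range(nA)]) for s in range(nS)])
    if np.array_equal(new_policy, policy):      # greedy improvement converged
        break
    policy = new_policy
print("greedy policy (1 = move right):", policy)
```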

  4. Newton iterative methods for large scale nonlinear systems

    SciTech Connect

    Walker, H.F.; Turner, K.

    1993-01-01

    Objective is to develop robust, efficient Newton iterative methods for general large scale problems well suited for discretizations of partial differential equations, integral equations, and other continuous problems. A concomitant objective is to develop improved iterative linear algebra methods. We first outline research on Newton iterative methods and then review work on iterative linear algebra methods. (DLC)
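
    One standard realization of the "Newton iterative" idea is the Jacobian-free Newton-Krylov method: each Newton correction is obtained from an iterative linear solver, with Jacobian-vector products approximated by finite differences. A minimal sketch, with the test problem and tolerances being our own choices rather than anything from the report:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, u0, newton_iters=20, tol=1e-10, eps=1e-7):
    """Solve F(u) = 0 with Newton steps whose linear systems are solved by GMRES."""
    u = np.asarray(u0, dtype=float)
    for _ in range(newton_iters):
        Fu = F(u)
        if np.linalg.norm(Fu) < tol:
            break
        # matrix-free Jacobian action: J(u) v ~ (F(u + eps v) - F(u)) / eps
        J = LinearOperator((u.size, u.size), matvec=lambda v: (F(u + eps * v) - Fu) / eps)
        du, info = gmres(J, -Fu)        # inexact inner solve at the default tolerance
        u = u + du
    return u

# toy nonlinear system: tridiagonal diffusion operator plus a cubic term
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
F = lambda u: A @ u + u ** 3 - 1.0
u = jfnk(F, np.zeros(n))
print("residual norm:", np.linalg.norm(F(u)))   # should end up near the 1e-10 tolerance
```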

  5. Iterative minimization algorithm for efficient calculations of transition states

    NASA Astrophysics Data System (ADS)

    Gao, Weiguo; Leng, Jing; Zhou, Xiang

    2016-03-01

    This paper presents an efficient algorithmic implementation of the iterative minimization formulation (IMF) for fast local search of transition states on a potential energy surface. The IMF is a second order iterative scheme providing a general and rigorous description for the eigenvector-following (min-mode following) methodology. We offer a unified interpretation in numerics via the IMF for existing eigenvector-following methods, such as the gentlest ascent dynamics, the dimer method and many other variants. We then propose our new algorithm based on the IMF. The main feature of our algorithm is that the translation step is replaced by solving an optimization subproblem associated with an auxiliary objective function which is constructed from the min-mode information. We show that using an efficient scheme for the inexact solver and enforcing an adaptive stopping criterion for this subproblem, the overall computational cost will be effectively reduced and a super-linear rate between the accuracy and the computational cost can be achieved. A series of numerical tests demonstrate the significant improvement in the computational efficiency for the new algorithm.
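
    The eigenvector-following (min-mode following) idea that the IMF formalizes can be illustrated on a toy surface: compute the lowest Hessian eigenvector and follow an effective force whose component along that mode is reversed, so the walker climbs toward the saddle while relaxing in all other directions. The potential, step size, and finite-difference derivatives below are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def V(p):
    x, y = p
    return (x**2 - 1.0)**2 + 2.0 * y**2     # minima at (+-1, 0), saddle at (0, 0)

def grad(p, h=1e-5):
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2); e[i] = h
        g[i] = (V(p + e) - V(p - e)) / (2 * h)
    return g

def hessian(p, h=1e-4):
    H = np.zeros((2, 2))
    for i in range(2):
        e = np.zeros(2); e[i] = h
        H[:, i] = (grad(p + e) - grad(p - e)) / (2 * h)
    return 0.5 * (H + H.T)

p = np.array([0.6, 0.3])                    # start inside one minimum's basin
for k in range(200):
    g = grad(p)
    if np.linalg.norm(g) < 1e-8:
        break
    w, U = np.linalg.eigh(hessian(p))
    v = U[:, 0]                             # min-mode (lowest-curvature) direction
    f_eff = -g + 2.0 * np.dot(g, v) * v     # reverse the force along the min-mode
    p = p + 0.05 * f_eff
print("approximate saddle:", p)             # converges to the saddle near (0, 0)
```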

  6. A holistic strategy for adaptive land management

    USGS Publications Warehouse

    Herrick, Jeffrey E.; Duniway, Michael C.; Pyke, David A.; Bestelmeyer, Brandon T.; Wills, Skye A.; Brown, Joel R.; Karl, Jason W.; Havstad, Kris M.

    2012-01-01

    Adaptive management is widely applied to natural resources management (Holling 1973; Walters and Holling 1990). Adaptive management can be generally defined as an iterative decision-making process that incorporates formulation of management objectives, actions designed to address these objectives, monitoring of results, and repeated adaptation of management until desired results are achieved (Brown and MacLeod 1996; Savory and Butterfield 1999). However, adaptive management is often criticized because very few projects ever complete more than one cycle, resulting in little adaptation and little knowledge gain (Lee 1999; Walters 2007). One significant criticism is that adaptive management is often used as a justification for undertaking actions with uncertain outcomes or as a surrogate for the development of specific, measurable indicators and monitoring programs (Lee 1999; Ruhl 2007).

  7. A Predictive Analysis Approach to Adaptive Testing.

    ERIC Educational Resources Information Center

    Kirisci, Levent; Hsu, Tse-Chi

    The predictive analysis approach to adaptive testing originated in the idea of statistical predictive analysis suggested by J. Aitchison and I.R. Dunsmore (1975). The adaptive testing model proposed is based on a parameter-free predictive distribution. Aitchison and Dunsmore define statistical prediction analysis as the use of data obtained from an…

  8. RESEARCH NOTE FROM COLLABORATION: Adaptive vertex fitting

    NASA Astrophysics Data System (ADS)

    Waltenberger, Wolfgang; Frühwirth, Rudolf; Vanlaer, Pascal

    2007-12-01

    Vertex fitting frequently has to deal with both mis-associated tracks and mis-measured track errors. A robust, adaptive method is presented that is able to cope with contaminated data. The method is formulated as an iterative re-weighted Kalman filter. Annealing is introduced to avoid local minima in the optimization. For the initialization of the adaptive filter a robust algorithm is presented that turns out to perform well in a wide range of applications. The tuning of the annealing schedule and of the cut-off parameter is described using simulated data from the CMS experiment. Finally, the adaptive property of the method is illustrated in two examples.
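
    The iterative re-weighted Kalman filter with annealing can be illustrated, in a stripped-down form, as re-weighted least squares: each track's weight is a soft (Fermi-function) assignment probability that sharpens as the annealing temperature drops, so mis-associated tracks are progressively switched off. The 2D straight-line "tracks", chi-square cutoff, and schedule below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

true_vertex = np.array([1.0, 2.0])
n_tracks, sigma = 12, 0.02
ang = rng.uniform(0.0, np.pi, n_tracks)
d = np.stack([np.cos(ang), np.sin(ang)], axis=1)               # unit track directions
p0 = true_vertex + sigma * rng.standard_normal((n_tracks, 2))  # reference points on tracks
p0[:3] += 0.3                                                  # three mis-associated tracks

def chi2(v):
    r = p0 - v
    d_perp = r - (r * d).sum(axis=1, keepdims=True) * d        # residual normal to each track
    return (d_perp ** 2).sum(axis=1) / sigma ** 2

v = p0.mean(axis=0)                                            # crude initial vertex
chi2_cut = 9.0
for T in (64.0, 16.0, 4.0, 1.0):                               # annealing schedule
    for _ in range(5):
        w = 1.0 / (1.0 + np.exp((chi2(v) - chi2_cut) / (2.0 * T)))   # soft track weights
        A = np.zeros((2, 2)); b = np.zeros(2)
        for wi, di, pi in zip(w, d, p0):
            P = np.eye(2) - np.outer(di, di)                   # projector normal to track i
            A += wi * P; b += wi * (P @ pi)
        v = np.linalg.solve(A, b)                              # weighted LS vertex
print("fitted vertex:", v)
print("final weights:", np.round(1.0 / (1.0 + np.exp((chi2(v) - chi2_cut) / 2.0)), 2))
```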

  9. PREFACE: Progress in the ITER Physics Basis

    NASA Astrophysics Data System (ADS)

    Ikeda, K.

    2007-06-01

    I would firstly like to congratulate all who have contributed to the preparation of the `Progress in the ITER Physics Basis' (PIPB) on its publication and express my deep appreciation of the hard work and commitment of the many scientists involved. With the signing of the ITER Joint Implementing Agreement in November 2006, the ITER Members have now established the framework for construction of the project, and the ITER Organization has begun work at Cadarache. The review of recent progress in the physics basis for burning plasma experiments encompassed by the PIPB will be a valuable resource for the project and, in particular, for the current Design Review. The ITER design has been derived from a physics basis developed through experimental, modelling and theoretical work on the properties of tokamak plasmas and, in particular, on studies of burning plasma physics. The `ITER Physics Basis' (IPB), published in 1999, has been the reference for the projection methodologies for the design of ITER, but the IPB also highlighted several key issues which needed to be resolved to provide a robust basis for ITER operation. In the intervening period scientists of the ITER Participant Teams have addressed these issues intensively. The International Tokamak Physics Activity (ITPA) has provided an excellent forum for scientists involved in these studies, focusing their work on the high priority physics issues for ITER. Significant progress has been made in many of the issues identified in the IPB and this progress is discussed in depth in the PIPB. In this respect, the publication of the PIPB symbolizes the strong interest and enthusiasm of the plasma physics community for the success of the ITER project, which we all recognize as one of the great scientific challenges of the 21st century. I wish to emphasize my appreciation of the work of the ITPA Coordinating Committee members, who are listed below. Their support and encouragement for the preparation of the PIPB were

  10. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace

    PubMed Central

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B. Montgomery

    2016-01-01

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool to produce free energy differences with minimal error. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely-related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution. PMID:27453632
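
    The acceleration scheme named in the abstract, direct inversion in the iterative subspace (DIIS), extrapolates over the last few iterates using coefficients that minimize the combined residual subject to summing to one. The sketch below applies it to a generic damped linear fixed-point map standing in for the WHAM self-consistency sweep; the map, history depth, and tolerances are our own choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def diis(g, x0, m=6, iters=500, tol=1e-12):
    """DIIS-accelerated self-consistent iteration x = g(x) (a sketch)."""
    x = np.asarray(x0, dtype=float)
    xs, rs = [], []
    for k in range(iters):
        gx = g(x)
        r = gx - x                                   # fixed-point residual
        if np.linalg.norm(r) < tol:
            return x, k
        xs.append(gx); rs.append(r)
        xs, rs = xs[-m:], rs[-m:]
        n = len(rs)
        # solve min ||sum_i c_i r_i|| s.t. sum_i c_i = 1 via the Lagrangian system
        M = np.zeros((n + 1, n + 1))
        M[:n, :n] = np.array([[ri @ rj for rj in rs] for ri in rs])
        M[n, :n] = M[:n, n] = 1.0
        rhs = np.zeros(n + 1); rhs[n] = 1.0
        c = np.linalg.lstsq(M, rhs, rcond=None)[0][:n]
        x = sum(ci * xi for ci, xi in zip(c, xs))    # extrapolated iterate
    return x, iters

B = rng.standard_normal((20, 20))
B *= 0.9 / np.linalg.norm(B, 2)                      # contraction with rate ~0.9
b = rng.standard_normal(20)
x, k = diis(lambda x: B @ x + b, np.zeros(20))
print("DIIS iterations:", k)                         # far fewer than the ~260 plain sweeps
print("residual:", np.linalg.norm(B @ x + b - x))    # this contraction rate would need
```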

  11. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    SciTech Connect

    Li Yupeng Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
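
    In its simplest bivariate form, the iterative proportional fitting step alternates between rescaling the rows and the columns of a probability table until the imposed marginals are matched. A minimal sketch with invented marginals:

```python
import numpy as np

P = np.array([[0.10, 0.10, 0.30],
              [0.20, 0.20, 0.10]])             # initial estimate of the joint probability
row_target = np.array([0.7, 0.3])              # marginal constraints, e.g. inferred
col_target = np.array([0.2, 0.5, 0.3])         # from proportions along drill holes

for sweep in range(100):
    P *= (row_target / P.sum(axis=1))[:, None]  # impose the row marginals
    P *= (col_target / P.sum(axis=0))[None, :]  # impose the column marginals
    if np.allclose(P.sum(axis=1), row_target) and np.allclose(P.sum(axis=0), col_target):
        break
print(f"converged after {sweep + 1} sweeps")
print(P)
```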

  12. Adaptive independent component analysis to analyze electrocardiograms

    NASA Astrophysics Data System (ADS)

    Yim, Seong-Bin; Szu, Harold H.

    2001-03-01

    In this work, we apply an adaptive version of independent component analysis (ICA) to the nonlinear measurement of electro-cardio-graphic (ECG) signals for potential detection of abnormal conditions in the heart. In principle, unsupervised ICA neural networks can demix the components of measured ECG signals. However, the nonlinear pre-amplification and post-measurement processing make the linear ICA model no longer valid. This is made possible by a proposed adaptive rectification pre-processing step that linearizes the preamplifier of the ECG; linear ICA is then applied iteratively until the outputs have stable kurtosis. We call such a new approach adaptive ICA. Each component may correspond to an individual heart function, either normal or abnormal. Adaptive ICA neural networks have the potential to make abnormal components more apparent, even when they are masked by normal components in the original measured signals. This is particularly important for diagnosis well in advance of the actual onset of heart attack, in which abnormalities in the original measured ECG signals may be difficult to detect. This is the first known work that applies adaptive ICA to ECG signals beyond noise extraction, to the detection of abnormal heart function.

  13. Current status of the ITER MSE diagnostic

    NASA Astrophysics Data System (ADS)

    Yuh, Howard; Levinton, F.; La Fleur, H.; Foley, E.; Feder, R.; Zakharov, L.

    2013-10-01

    The U.S. is providing ITER with a Motional Stark Effect (MSE) diagnostic to provide a measurement to guide reconstructions of the plasma q-profile. The diagnostic design has gone through many iterations, driven primarily by the evolution of the ITER port plug design and the steering of the heating beams. The present two-port, three-view design, viewing both heating beams and the DNB, has recently passed a conceptual design review at the IO. The traditional line polarization (MSE-LP) technique employed on many devices around the world faces many challenges in ITER, including strong background light and mirror degradation. To mitigate these effects, a multi-wavelength polarimeter and high resolution spectrometer will be used to subtract polarized background, while retroreflecting polarizers will provide mirror calibration concurrent with MSE-LP measurements. However, without a proven plasma-facing mirror cleaning technique, inherent risks to MSE-LP remain. The high field and high beam energy on ITER offer optimal conditions for a spectroscopic measurement of the electric field using line splitting (MSE-LS), a technique which does not depend on mirror polarization properties. The current design is presented with a roadmap of the R&D needed to address remaining challenges. This work is supported by DOE contracts S009627-R and S012380-F.

  14. Preliminary Master Logic Diagram for ITER operation

    SciTech Connect

    Cadwallader, L.C.; Taylor, N.P.; Poucet, A.E.

    1998-04-01

    This paper describes the work performed to develop a Master Logic Diagram (MLD) for the operations phase of the International Thermonuclear Experimental Reactor (ITER). The MLD is a probabilistic risk assessment tool used to identify the broad set of potential initiating events that could lead to an offsite radioactive or toxic chemical release from the facility under study. The MLD described here is complementary to the failure modes and effects analyses (FMEAs) that have been performed for ITER's major plant systems in the engineering evaluation of the facility design. While the FMEAs are a bottom-up or component level approach, the MLD is a top-down or facility level approach to identifying the broad spectrum of potential events. Strengths of the MLD are that it analyzes the entire plant, depicts completeness in the accident initiator process, provides an independent method for identification, and can also identify potential system interactions. MLDs have been used successfully as a hazard analysis tool. This paper describes the process used for the ITER MLD to treat the variety of radiological and toxicological source terms present in the ITER design. One subtree of the nineteen-page MLD is shown to illustrate the levels of the diagram.

  15. Iterative contextual CV model for liver segmentation

    NASA Astrophysics Data System (ADS)

    Ji, Hongwei; He, Jiangping; Yang, Xin

    2014-01-01

    In this paper, we propose a novel iterative active contour algorithm, i.e. Iterative Contextual CV Model (ICCV), and apply it to automatic liver segmentation from 3D CT images. ICCV is a learning-based method and can be divided into two stages. At the first stage, i.e. the training stage, given a set of abdominal CT training images and the corresponding manual liver labels, our task is to construct a series of self-correcting classifiers by learning a mapping between automatic segmentations (in each round) and manual reference segmentations via context features. At the second stage, i.e. the segmentation stage, first the basic CV model is used to segment the image and subsequently Contextual CV Model (CCV), which combines the image information and the current shape model, is iteratively performed to improve the segmentation result. The current shape model is obtained by inputting the previous automatic segmentation result into the corresponding self-correcting classifier. The proposed method is evaluated on the datasets of MICCAI 2007 liver segmentation challenge. The experimental results show that we obtain more and more accurate segmentation results by the iterative steps and satisfying results are obtained after about six iterations. Also, our method is comparable to the state-of-the-art work on liver segmentation.

  16. U.S. Contributions to ITER

    SciTech Connect

    Ned R. Sauthoff

    2005-05-13

    The United States participates in the ITER project and program to enable the study of the science and technology of burning plasmas, a key programmatic element missing from the world fusion program. The 2003 U.S. decision to enter the ITER negotiations followed an extensive series of community and governmental reviews of the benefits, readiness, and approaches to the study of burning plasmas. This paper describes both the technical and the organizational preparations and plans for U.S. participation in the ITER construction activity: in-kind contributions, staff contributions, and cash contributions as well as supporting physics and technology research. Near-term technical activities focus on the completion of R&D and design and mitigation of risks in the areas of the central solenoid magnet, shield/blanket, diagnostics, ion cyclotron system, electron cyclotron system, pellet fueling system, vacuum system, tritium processing system, and conventional systems. Outside the project, the U.S. is engaged in preparations for the test blanket module program. Organizational activities focus on preparations of the project management arrangements to maximize the overall success of the ITER Project; elements include refinement of U.S. directions on the international arrangements, the establishment of the U.S. Domestic Agency, progress along the path of the U.S. Department of Energy's Project Management Order, and overall preparations for commencement of the fabrication of major items of equipment and for provision of staff and cash as specified in the upcoming ITER agreement.

  17. The Impact of Iterative Reconstruction on Computed Tomography Radiation Dosimetry: Evaluation in a Routine Clinical Setting

    PubMed Central

    Moorin, Rachael E.; Gibson, David A. J.; Forsyth, Rene K.; Fox, Richard

    2015-01-01

    Purpose To evaluate the effect of introduction of iterative reconstruction as a mandated software upgrade on radiation dosimetry in routine clinical practice over a range of computed tomography examinations. Methods Random samples of scanning data were extracted from a centralised Picture Archiving Communication System pertaining to 10 commonly performed computed tomography examination types undertaken at two hospitals in Western Australia, before and after the introduction of iterative reconstruction. Changes in the mean dose length product and effective dose were evaluated along with estimations of associated changes to annual cancer incidence. Results We observed statistically significant reductions in the effective radiation dose for head computed tomography (22–27%) consistent with those reported in the literature. In contrast, reductions of 37–47% were observed for non-contrast chest, 28% for chest pulmonary embolism study, 16% for chest/abdominal/pelvic study, and 39% for thoracic spine computed tomography. Statistically significant reductions in radiation dose were not identified in angiographic computed tomography. Dose reductions translated to substantial lowering of the lifetime attributable risk, especially for younger females, and estimated numbers of incident cancers. Conclusion Reduction of CT dose is a priority. Iterative reconstruction algorithms have the potential to significantly assist with dose reduction across a range of protocols. However, this reduction in dose is achieved via reductions in image noise. Fully realising the potential dose reduction of iterative reconstruction requires the adjustment of image factors and forgoing the noise reduction potential of the iterative algorithm. Our study has demonstrated a reduction in radiation dose for some scanning protocols, but not to the extent experimental studies had previously shown or in all protocols expected, raising questions about the extent to which iterative reconstruction achieves dose

  18. A fast poly-energetic iterative FBP algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Samei, Ehsan

    2014-04-01

    The beam hardening (BH) effect can influence medical interpretations in two notable ways. First, high attenuation materials, such as bones, can induce strong artifacts, which severely deteriorate the image quality. Second, voxel values can significantly deviate from the real values, which can lead to unreliable quantitative evaluation results. Some iterative methods have been proposed to eliminate the BH effect, but they cannot be widely applied for clinical practice because of the slow computational speed. The purpose of this study was to develop a new fast and practical poly-energetic iterative filtered backward projection algorithm (piFBP). The piFBP is composed of a novel poly-energetic forward projection process and a robust FBP-type backward updating process. In the forward projection process, an adaptive base material decomposition method is presented, based on which diverse body tissues (e.g., lung, fat, breast, soft tissue, and bone) and metal implants can be incorporated to accurately evaluate poly-energetic forward projections. In the backward updating process, one robust and fast FBP-type backward updating equation with a smoothing kernel is introduced to avoid the noise accumulation in the iteration process and to improve the convergence properties. Two phantoms were designed to quantitatively validate our piFBP algorithm in terms of the beam hardening index (BIdx) and the noise index (NIdx). The simulation results showed that piFBP possessed fast convergence speed, as the images could be reconstructed within four iterations. The variation range of the BIdx's of various tissues across phantom size and spectrum were reduced from [-7.5, 17.5] for FBP to [-0.1, 0.1] for piFBP while the NIdx's were maintained in the same low level (about [0.3, 1.7]). When a metal implant presented in a complex phantom, piFBP still had excellent reconstruction performance, as the variation range of the BIdx's of body tissues were reduced from [-2.9, 15.9] for FBP to [-0
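
    The nonlinearity that piFBP iterates against is easy to exhibit: for a poly-energetic spectrum, the log-attenuation -ln(I/I0) grows sublinearly with path length because the softer spectral component is absorbed preferentially. The two-energy spectrum and attenuation coefficients below are invented for illustration and are not the paper's forward model.

```python
import numpy as np

E_w = np.array([0.6, 0.4])            # spectral weights (sum to 1)
mu = np.array([0.30, 0.18])           # water-like mu(E) in 1/cm at the two energies

t = np.linspace(0.0, 20.0, 11)        # path length through the material, cm
I_ratio = (E_w * np.exp(-np.outer(t, mu))).sum(axis=1)
p_poly = -np.log(I_ratio)             # measured poly-energetic projection
p_mono = (mu @ E_w) * t               # what a mono-energetic ray would give

for ti, pp, pm in zip(t, p_poly, p_mono):
    print(f"t={ti:5.1f} cm  poly={pp:6.3f}  mono={pm:6.3f}  BH deficit={pm - pp:6.3f}")
```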

  19. The PDZ Domain as a Complex Adaptive System

    PubMed Central

    Kurakin, Alexei; Swistowski, Andrzej; Wu, Susan C.; Bredesen, Dale E.

    2007-01-01

    Specific protein associations define the wiring of protein interaction networks and thus control the organization and functioning of the cell as a whole. Peptide recognition by PDZ and other protein interaction domains represents one of the best-studied classes of specific protein associations. However, a mechanistic understanding of the relationship between selectivity and promiscuity commonly observed in the interactions mediated by peptide recognition modules as well as its functional meaning remain elusive. To address these questions in a comprehensive manner, two large populations of artificial and natural peptide ligands of six archetypal PDZ domains from the synaptic proteins PSD95 and SAP97 were generated by target-assisted iterative screening (TAIS) of combinatorial peptide libraries and by synthesis of proteomic fragments, correspondingly. A comparative statistical analysis of affinity-ranked artificial and natural ligands yielded a comprehensive picture of known and novel PDZ ligand specificity determinants, revealing a hitherto unappreciated combination of specificity and adaptive plasticity inherent to PDZ domain recognition. We propose a reconceptualization of the PDZ domain in terms of a complex adaptive system representing a flexible compromise between the rigid order of exquisite specificity and the chaos of unselective promiscuity, which has evolved to mediate two mutually contradictory properties required of such higher order sub-cellular organizations as synapses, cell junctions, and others – organizational structure and organizational plasticity/adaptability. The generalization of this reconceptualization in regard to other protein interaction modules and specific protein associations is consistent with the image of the cell as a complex adaptive macromolecular system as opposed to clockwork. PMID:17895993

  20. Climate Change Assessment and Adaptation Planning for the Southeast US

    NASA Astrophysics Data System (ADS)

    Georgakakos, A. P.; Yao, H.; Zhang, F.

    2012-12-01

    A climate change assessment is carried out for the Apalachicola-Chattahoochee-Flint River Basin in the southeast US following an integrated water resources assessment and planning framework. The assessment process begins with the development/selection of consistent climate, demographic, socio-economic, and land use/cover scenarios. Historical scenarios and responses are analyzed first to establish baseline conditions. Future climate scenarios are based on GCMs available through the IPCC. Statistical and/or dynamic downscaling of GCM outputs is applied to generate high resolution (12x12 km) atmospheric forcing, such as rainfall, temperature, and ET demand, over the ACF River Basin watersheds. Physically based watershed, aquifer, and estuary models (lumped and distributed) are used to quantify the hydrologic and water quality river basin response to alternative climate and land use/cover scenarios. Demand assessments are carried out for each water sector, for example, water supply for urban, agricultural, and industrial users; hydro-thermal facilities; navigation reaches; and environmental/ecological flow and lake level requirements, aiming to establish aspirational water use targets, performance metrics, and management/adaptation options. Response models for the interconnected river-reservoir-aquifer-estuary system are employed next to assess actual water use levels and other sector outputs under a specific set of hydrologic inputs, demand targets, and management/adaptation options. Adaptive optimization methods are used to generate system-wide management policies conditional on inflow forecasts. The generated information is used to inform stakeholder planning and decision processes aiming to develop consensus on adaptation measures, management strategies, and performance monitoring indicators. The assessment and planning process is driven by stakeholder input and is inherently iterative and sequential.

  1. Mad Libs Statistics: A "Happy" Activity

    ERIC Educational Resources Information Center

    Trumpower, David

    2010-01-01

    This article describes a fun activity that can be used to help students make links between statistical analyses and their real-world implications. Although an illustrative example is provided using analysis of variance, the activity may be adapted for use with other statistical techniques.

  2. Adaptive SPECT

    PubMed Central

    Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.

    2008-01-01

    Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
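
    The ideal linear observer mentioned above has a closed form: the Hotelling template is w = K^{-1}(s1 - s0) and the task performance is SNR^2 = (s1 - s0)^T K^{-1} (s1 - s0). A small synthetic-data sketch; the signal profile, covariance, and sample counts are our own choices.

```python
import numpy as np

rng = np.random.default_rng(2)

n_pix, n_samples = 100, 5000
signal = np.zeros(n_pix); signal[45:55] = 0.4            # known signal profile, s1 - s0
L = rng.standard_normal((n_pix, n_pix)) * 0.05 + np.eye(n_pix)
K = L @ L.T                                              # background covariance

w = np.linalg.solve(K, signal)                           # Hotelling template
print("Hotelling SNR:", np.sqrt(signal @ w))

# score synthetic images with and without the signal present
bg = rng.standard_normal((n_samples, n_pix)) @ L.T       # signal-absent images
t0 = bg @ w
t1 = (bg + signal) @ w
print("empirical separation:", (t1.mean() - t0.mean()) / t0.std())   # ~ the SNR above
```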

  3. Illustrating the practice of statistics

    SciTech Connect

    Hamada, Christina A; Hamada, Michael S

    2009-01-01

    The practice of statistics involves analyzing data and planning data collection schemes to answer scientific questions. Issues often arise with the data that must be dealt with and can lead to new procedures. In analyzing data, these issues can sometimes be addressed through the statistical models that are developed. Simulation can also be helpful in evaluating a new procedure. Moreover, simulation coupled with optimization can be used to plan a data collection scheme. The practice of statistics as just described is much more than just using a statistical package. In analyzing the data, it involves understanding the scientific problem and incorporating the scientist's knowledge. In modeling the data, it involves understanding how the data were collected and accounting for limitations of the data where possible. Moreover, the modeling is likely to be iterative by considering a series of models and evaluating the fit of these models. Designing a data collection scheme involves understanding the scientist's goal and staying within his/her budget in terms of time and the available resources. Consequently, a practicing statistician is faced with such tasks and requires skills and tools to do them quickly. We have written this article for students to provide a glimpse of the practice of statistics. To illustrate the practice of statistics, we consider a problem motivated by some precipitation data that our relative, Masaru Hamada, collected some years ago. We describe his rain gauge observational study in Section 2. We describe modeling and an initial analysis of the precipitation data in Section 3. In Section 4, we consider alternative analyses that address potential issues with the precipitation data. In Section 5, we consider the impact of incorporating additional information. We design a data collection scheme to illustrate the use of simulation and optimization in Section 6. We conclude this article in Section 7 with a discussion.

  4. Iterative reconstruction techniques for industrial CT: application and performance

    SciTech Connect

    Arrowood, Lloyd; Gregor, Jens; Bingham, Philip R

    2008-01-01

    BWXT Y-12, Oak Ridge National Laboratory, and the University of Tennessee have been working toward improved high-resolution X-ray computed tomography for non-destructive testing of manufactured objects. The emphasis of this work has been on iterative reconstruction, calibration, and performance testing. Algebraic reconstruction algorithms for CT have been developed that are more robust in handling incomplete and noisy data and permit high-resolution volumetric imaging on metallic part assemblies. A key source of artifacts in reconstructed CT images for industrial components is poor image statistics due to areas of high attenuation. This loss of information in the captured projections not only affects reconstruction of those areas, but also the surrounding regions. To overcome numerical instabilities arising from the ill-posed nature of inverse problems, standard regularization techniques can be applied as can Bayesian reconstruction techniques using prior data such as CAD information to improve image quality. To accelerate the reconstruction of certain regions of interest and reduce memory requirements, subvolume reconstruction has been implemented and tested. A computational framework has been implemented that facilitates the use of sophisticated iterative algorithms for reconstruction of three-dimensional images from high-resolution X-ray cone-beam projection data. The code supports parallel computing at two levels: message passing is used to farm the computation out across a network of computers while threads allow all processors available on any one computer to be used.
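
    A rough sketch of the algebraic reconstruction idea described above, using plain Kaczmarz sweeps over the ray equations. The function name, relaxation factor, and dense system matrix are illustrative assumptions; the regularized, Bayesian, and parallel variants discussed in the abstract are not reproduced.

    ```python
    import numpy as np

    def art(A, b, n_sweeps=10, relax=0.5):
        """Algebraic reconstruction (Kaczmarz sweeps): project the current image
        estimate onto each ray equation a_i . x = b_i in turn. A holds the ray
        weights (one row per measured projection value), b the projections."""
        x = np.zeros(A.shape[1])
        row_norms = np.einsum("ij,ij->i", A, A)   # ||a_i||^2 for each row
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if row_norms[i] > 0.0:
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x
    ```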

  5. Series Supply of Cryogenic Venturi Flowmeters for the ITER Project

    NASA Astrophysics Data System (ADS)

    André, J.; Poncet, J. M.; Ercolani, E.; Clayton, N.; Journeaux, J. Y.

    2015-12-01

    In the framework of the ITER project, the CEA-SBT has been contracted to supply 277 venturi tube flowmeters to measure the distribution of helium in the superconducting magnets of the ITER tokamak. Six sizes of venturi tube have been designed so as to span a measurable helium flowrate range from 0.1 g/s to 400 g/s. They operate, in nominal conditions, either at 4 K or at 300 K, and in a nuclear and magnetic environment. Due to the cryogenic conditions and the large number of venturi tubes to be supplied, an individual calibration of each venturi tube would be too expensive and time consuming. Studies have been performed to produce a design which will offer high repeatability in manufacture, reduce the geometrical uncertainties and improve the final helium flowrate measurement accuracy. On the instrumentation side, technologies for differential and absolute pressure transducers able to operate in applied magnetic fields need to be identified and validated. The complete helium mass flow measurement chain will be qualified in four test benches:

    - A helium loop at room temperature to ensure the qualification of a statistically relevant number of venturi tubes operating at 300 K.
    - A supercritical helium loop for the qualification of venturi tubes operating at cryogenic temperature (a modification to the HELIOS test bench).
    - A dedicated vacuum vessel to check the helium leak tightness of all the venturi tubes.
    - A magnetic test bench to qualify different technologies of pressure transducer in applied magnetic fields up to 100 mT.

  6. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. Emphasis is placed on nonlinear iterative algorithms that satisfy the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without any assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas while iterative methods clearly show their efficacy in these examples.

  7. Iterative Reconstruction of Coded Source Neutron Radiographs

    SciTech Connect

    Santos-Villalobos, Hector J; Bingham, Philip R; Gregor, Jens

    2013-01-01

    Use of a coded source facilitates high-resolution neutron imaging through magnification but requires that the radiographic data be deconvolved. A comparison of direct deconvolution with two different iterative algorithms has been performed. One iterative algorithm is based on a maximum likelihood estimation (MLE)-like framework and the second is based on a geometric model of the neutron beam within a least squares formulation of the inverse imaging problem. Simulated data for both uniform and Gaussian-shaped source distributions were used for testing to understand the impact of non-uniformities present in neutron beam distributions on the reconstructed images. Results indicate that the model-based reconstruction method will match resolution and improve on contrast over convolution methods in the presence of non-uniform sources. Additionally, the model-based iterative algorithm provides direct calculation of quantitative transmission values, while the convolution-based methods must be normalized based on known values.

  8. ITER Experts' meeting on density limits

    SciTech Connect

    Borrass, K.; Igitkhanov, Y.L.; Uckan, N.A.

    1989-12-01

    The necessity of achieving a prescribed wall load or fusion power essentially determines the plasma pressure in a device like ITER. The range of operation densities and temperatures compatible with this condition is constrained by the problems of power exhaust and the disruptive density limit. The maximum allowable heat loads on the divertor plates and the maximum allowable sheath edge temperature practically impose a lower limit on the operating densities, whereas the disruptive density limit imposes an upper limit. For most of the density limit scalings proposed in the past an overlap of the two constraints or at best a very narrow accessible density range is predicted for ITER. Improved understanding of the underlying mechanisms is therefore a crucial issue in order to provide a more reliable basis for extrapolation to ITER and to identify possible ways of alleviating the problem.

  9. Re-starting an Arnoldi iteration

    SciTech Connect

    Lehoucq, R.B.

    1996-12-31

    The Arnoldi iteration is an efficient procedure for approximating a subset of the eigensystem of a large sparse n x n matrix A. The iteration produces a partial orthogonal reduction of A into an upper Hessenberg matrix H_m of order m. The eigenvalues of this small matrix H_m are used to approximate a subset of the eigenvalues of the large matrix A. The eigenvalues of H_m improve as estimates to those of A as m increases. Unfortunately, so does the cost and storage of the reduction. The idea of re-starting the Arnoldi iteration is motivated by the prohibitive cost associated with building a large factorization.
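
    A minimal numpy sketch of the basic (un-restarted) Arnoldi process described above; the restarting strategy that motivates the paper is not shown, and the function name and breakdown tolerance are illustrative.

    ```python
    import numpy as np

    def arnoldi(A, v0, m):
        """Build an orthonormal basis V and upper Hessenberg H with A V_m = V_{m+1} H."""
        n = A.shape[0]
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = v0 / np.linalg.norm(v0)
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):            # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:           # invariant subspace found
                return V[:, :j + 1], H[:j + 1, :j + 1]
            V[:, j + 1] = w / H[j + 1, j]
        return V, H

    # Ritz values: eigenvalues of the leading m x m block approximate those of A
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200))
    V, H = arnoldi(A, rng.standard_normal(200), m=30)
    ritz = np.linalg.eigvals(H[:30, :30])
    ```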

  10. Safety and Environmental Activities for ITER

    NASA Astrophysics Data System (ADS)

    Saji, G.; Aymar, R.; Bartels, H.-W.; Gordon, C. W.; Gulden, W.; Holl, D. H.; Iida, H.; Inabe, T.; Iseli, M.; Kashirski, A. V.; Kolbasov, B. N.; Krivosheev, M.; McCarthy, K. A.; Marbach, G.; Morozov, S. I.; Natalizio, A.; Petti, D. A.; Piet, S. J.; Poucet, A. E.; Raeder, J.; Seki, Y.; Topilski, L. N.

    1997-09-01

    This paper will summarize highlights of the safety approach and discuss the ITER EDA safety activities. The ITER safety approach is driven by three major objectives: (1) Enhancement or improvement of fusion's intrinsic safety characteristics to the maximum extent feasible, which includes a minimization of the dependence on dedicated “safety systems”; (2) Selection of conservative design parameters and development of a robust design to accommodate uncertainties in plasma physics as well as the lack of operational experience and data; and (3) Integration of engineered mitigation systems to enhance the safety assurance against potentially hazardous inventories in the device by deploying well-established “nuclear safety” approaches and methodologies tailored as appropriate for ITER.

  11. US solid breeder blanket design for ITER

    SciTech Connect

    Gohar, Y.; Attaya, H.; Billone, M.; Lin, C.; Johnson, C.; Majumdar, S.; Smith, D. ); Goranson, P.; Nelson, B.; Williamson, D.; Baker, C. ); Raffray, A.; Badawi, A.; Gorbis, Z.; Ying, A.; Abdou, M. ); Sviatoslavsky, I.; Blanchard, J.; Mogahed, E.; Sawan, M.; Kulcinski, G. )

    1990-09-01

    The US blanket design activity has focused on the developments and the analyses of a solid breeder blanket concept for ITER. The main function of this blanket is to produce the necessary tritium required for the ITER operation and the test program. Safety, power reactor relevance, low tritium inventory, and design flexibility are the main reasons for the blanket selection. The blanket is designed to operate satisfactorily in the physics and the technology phases of ITER without the need for hardware changes. Mechanical simplicity, predictability, performance, minimum cost, and minimum R&D requirements are the other criteria used to guide the design process. The design aspects of the blanket are summarized in this paper. 2 refs., 7 figs., 3 tabs.

  12. Accelerating an iterative process by explicit annihilation

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.; Buning, P. G.

    1983-01-01

    A slowly convergent stationary iterative process can be accelerated by explicitly annihilating (i.e., eliminating) the dominant eigenvector component of the error. The dominant eigenvalue or complex pair of eigenvalues can be estimated from the solution during the iteration. The corresponding eigenvector or complex pair of eigenvectors can then be annihilated by applying an explicit Richardson process over the basic iterative method. This can be done entirely in real arithmetic by analytically combining the complex conjugate annihilation steps. The technique is applied to an implicit algorithm for the calculation of two dimensional steady transonic flow over a circular cylinder using the equations of compressible inviscid gas dynamics. This demonstrates the use of explicit annihilation on a nonlinear problem.
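
    The sketch below illustrates the idea for a single real, contracting dominant eigenvalue: difference vectors of a stationary iteration satisfy d_{k+1} = G d_k, so a Rayleigh quotient on successive differences estimates the dominant eigenvalue of the iteration matrix G, and one extrapolation step annihilates that error component. The complex-conjugate-pair case treated in the paper is not shown; the names and loop counts are illustrative.

    ```python
    import numpy as np

    def accelerated_iteration(step, x, sweeps=5, inner=20):
        """Run x <- step(x); after each sweep, annihilate the dominant
        (assumed real) eigenvector component of the error."""
        for _ in range(sweeps):
            for _ in range(inner):
                x = step(x)
            x1 = step(x)
            x2 = step(x1)
            d1, d2 = x1 - x, x2 - x1           # differences obey d2 ~ lam * d1
            lam = (d2 @ d1) / (d1 @ d1)        # dominant-eigenvalue estimate
            x = x2 + lam / (1.0 - lam) * d2    # Richardson extrapolation step
        return x

    # usage: accelerate Jacobi iteration on a diagonally dominant system
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50)) + 50.0 * np.eye(50)
    b = rng.standard_normal(50)
    jacobi = lambda x: x + (b - A @ x) / np.diag(A)
    x = accelerated_iteration(jacobi, np.zeros(50))
    ```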

  13. Accelerating an iterative process by explicit annihilation

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.; Buning, P. G.

    1985-01-01

    A slowly convergent stationary iterative process can be accelerated by explicitly annihilating (i.e., eliminating) the dominant eigenvector component of the error. The dominant eigenvalue or complex pair of eigenvalues can be estimated from the solution during the iteration. The corresponding eigenvector or complex pair of eigenvectors can then be annihilated by applying an explicit Richardson process over the basic iterative method. This can be done entirely in real arithmetic by analytically combining the complex conjugate annihilation steps. The technique is applied to an implicit algorithm for the calculation of two dimensional steady transonic flow over a circular cylinder using the equations of compressible inviscid gas dynamics. This demonstrates the use of explicit annihilation on a nonlinear problem.

  14. Development of pellet injection systems for ITER

    SciTech Connect

    Combs, S.K.; Gouge, M.J.; Baylor, L.R.

    1995-12-31

    Oak Ridge National Laboratory (ORNL) has been developing innovative pellet injection systems for plasma fueling experiments on magnetic fusion confinement devices for about 20 years. Recently, the ORNL development has focused on meeting the complex fueling needs of the International Thermonuclear Experimental Reactor (ITER). In this paper, we describe the ongoing research and development activities that will lead to an ITER prototype pellet injector test stand. The present effort addresses three main areas: (1) an improved pellet feed and delivery system for centrifuge injectors, (2) a long-pulse (up to steady-state) hydrogen extruder system, and (3) tritium extruder technology. The final prototype system must be fully tritium compatible and will be used to demonstrate the operating parameters and the reliability required for the ITER fueling application.

  15. The ITER in-vessel system

    SciTech Connect

    Lousteau, D.C.

    1994-09-01

    The overall programmatic objective, as defined in the ITER Engineering Design Activities (EDA) Agreement, is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER EDA Phase, due to last until July 1998, will encompass the design of the device and its auxiliary systems and facilities, including the preparation of engineering drawings. The EDA also incorporates validating research and development (R&D) work, including the development and testing of key components. The purpose of this paper is to review the status of the design, as it has been developed so far, emphasizing the design and integration of those components contained within the vacuum vessel of the ITER device. The components included in the in-vessel systems are divertor and first wall; blanket and shield; plasma heating, fueling, and vacuum pumping equipment; and remote handling equipment.

  16. Low-memory iterative density fitting.

    PubMed

    Grajciar, Lukáš

    2015-07-30

    A new low-memory modification of the density fitting approximation based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver is presented. The iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix that decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with linear-scaling memory requirements only. Compared with the standard density fitting implementation, up to a 15-fold reduction in memory requirements is achieved for the most efficient preconditioner at a cost of only a 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for a zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. PMID:26058451
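
    A schematic of the preconditioning idea described above, assuming a dense symmetric positive-definite stand-in for the Coulomb metric matrix: diagonal blocks are inverted once and applied as a block-Jacobi preconditioner inside SciPy's conjugate gradient solver. The CFMM matrix-vector products of the actual method are not reproduced.

    ```python
    import numpy as np
    from scipy.sparse.linalg import cg, LinearOperator

    def block_jacobi(J, block_size):
        """Preconditioner from the diagonal blocks of J: apply each block's
        inverse to the matching slice of the residual."""
        n = J.shape[0]
        blocks = []
        for s in range(0, n, block_size):
            e = min(s + block_size, n)
            blocks.append((s, e, np.linalg.inv(J[s:e, s:e])))
        def apply(r):
            z = np.empty_like(r)
            for s, e, inv in blocks:
                z[s:e] = inv @ r[s:e]
            return z
        return LinearOperator((n, n), matvec=apply)

    rng = np.random.default_rng(1)
    B = rng.standard_normal((300, 300))
    J = B @ B.T + 300.0 * np.eye(300)        # toy SPD "metric" matrix
    rhs = rng.standard_normal(300)
    x, info = cg(J, rhs, M=block_jacobi(J, 50))
    ```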

  17. Global Asymptotic Behavior of Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that aside from exhibiting spurious asymptotes, all of the four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard nonLMM explicit methods. The simple iteration procedure exhibits behavior which is similar to standard nonLMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data using the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.

  18. On an iterative ensemble smoother and its application to a reservoir facies estimation problem

    NASA Astrophysics Data System (ADS)

    Luo, Xiaodong; Chen, Yan; Valestrand, Randi; Stordal, Andreas; Lorentzen, Rolf; Nævdal, Geir

    2014-05-01

    For data assimilation problems there are different ways of utilizing the available observations. While certain data assimilation algorithms, for instance, the ensemble Kalman filter (EnKF, see, for examples, Aanonsen et al., 2009; Evensen, 2006) assimilate the observations sequentially in time, other data assimilation algorithms may instead collect the observations at different time instants and assimilate them simultaneously. In general such algorithms can be classified as smoothers. In this aspect, the ensemble smoother (ES, see, for example, Evensen and van Leeuwen, 2000) can be considered a smoother counterpart of the EnKF. The EnKF has been widely used for reservoir data assimilation (history matching) problems since its introduction to the community of petroleum engineering (Nævdal et al., 2002). The applications of the ES to reservoir data assimilation problems have also been investigated recently (see, for example, Skjervheim and Evensen, 2011). Compared to the EnKF, the ES has certain technical advantages, including, for instance, avoiding the restarts associated with each update step in the EnKF and also having fewer variables to update, which may result in a significant reduction in simulation time, while providing similar assimilation results to those obtained by the EnKF (Skjervheim and Evensen, 2011). To further improve the performance of the ES, some iterative ensemble smoothers are suggested in the literature, in which the iterations are carried out in the forms of certain iterative optimization algorithms, e.g., the Gauss-Newton (Chen and Oliver, 2012) or the Levenberg-Marquardt method (Chen and Oliver, 2013; Emerick and Reynolds, 2012), or in the context of adaptive Gaussian mixture (AGM, see Stordal and Lorentzen, 2013). In Emerick and Reynolds (2012) the iteration formula is derived based on the idea that, for linear observations, the final results of the iterative ES should be equal to the estimate of the EnKF. In Chen and Oliver (2013), the
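
    For orientation, a minimal sketch of a single (non-iterative) ensemble smoother update with perturbed observations; the iterative variants cited above repeat an update of this general form with damping or inflation. All names and the toy covariance handling are assumptions.

    ```python
    import numpy as np

    def es_update(M, D, d_obs, R, rng=np.random.default_rng(0)):
        """M: (n_params, n_ens) parameter ensemble; D: (n_obs, n_ens) simulated
        observations; R: (n_obs, n_obs) observation-error covariance."""
        ne = M.shape[1]
        Am = M - M.mean(axis=1, keepdims=True)        # parameter anomalies
        Ad = D - D.mean(axis=1, keepdims=True)        # predicted-data anomalies
        C_md = Am @ Ad.T / (ne - 1)                   # cross-covariance
        C_dd = Ad @ Ad.T / (ne - 1)                   # data covariance
        pert = d_obs[:, None] + rng.multivariate_normal(
            np.zeros(len(d_obs)), R, size=ne).T       # perturbed observations
        K = C_md @ np.linalg.inv(C_dd + R)            # Kalman-type gain
        return M + K @ (pert - D)
    ```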

  19. Iterative method for generating correlated binary sequences

    NASA Astrophysics Data System (ADS)

    Usatenko, O. V.; Melnik, S. S.; Apostolov, S. S.; Makarov, N. M.; Krokhin, A. A.

    2014-11-01

    We propose an efficient iterative method for generating random correlated binary sequences with a prescribed correlation function. The method is based on consecutive linear modulations of an initially uncorrelated sequence into a correlated one. Each step of modulation increases the correlations until the desired level has been reached. The robustness and efficiency of the proposed algorithm are tested by generating sequences with inverse power-law correlations. The substantial increase in the strength of correlation in the iterative method with respect to single-step filtering generation is shown for all studied correlation functions. Our results can be used for design of disordered superlattices, waveguides, and surfaces with selective transport properties.

  20. Challenges and status of ITER conductor production

    NASA Astrophysics Data System (ADS)

    Devred, A.; Backbier, I.; Bessette, D.; Bevillard, G.; Gardner, M.; Jong, C.; Lillaz, F.; Mitchell, N.; Romano, G.; Vostner, A.

    2014-04-01

    Taking over from the Large Hadron Collider (LHC) at CERN, ITER has become the largest project in applied superconductivity. In addition to its technical complexity, ITER is also a management challenge as it relies on an unprecedented collaboration of seven partners, representing more than half of the world population, who provide 90% of the components as in-kind contributions. The ITER magnet system is one of the most sophisticated superconducting magnet systems ever designed, with an enormous stored energy of 51 GJ. It involves six of the ITER partners. The coils are wound from cable-in-conduit conductors (CICCs) made up of superconducting and copper strands assembled into a multistage cable, inserted into a conduit of butt-welded austenitic steel tubes. The conductors for the toroidal field (TF) and central solenoid (CS) coils require about 600 t of Nb3Sn strands while the poloidal field (PF) and correction coil (CC) and busbar conductors need around 275 t of Nb-Ti strands. The required amount of Nb3Sn strands far exceeds pre-existing industrial capacity and has called for a significant worldwide production scale up. The TF conductors are the first ITER components to be mass produced and are more than 50% complete. During its life time, the CS coil will have to sustain several tens of thousands of electromagnetic (EM) cycles to high current and field conditions, way beyond anything a large Nb3Sn coil has ever experienced. Following a comprehensive R&D program, a technical solution has been found for the CS conductor, which ensures stable performance versus EM and thermal cycling. Productions of PF, CC and busbar conductors are also underway. After an introduction to the ITER project and magnet system, we describe the ITER conductor procurements and the quality assurance/quality control programs that have been implemented to ensure production uniformity across numerous suppliers. Then, we provide examples of technical challenges that have been encountered and

  1. Scheduling and rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper describes the GERRY scheduling and rescheduling system being applied to coordinate Space Shuttle Ground Processing. The system uses constraint-based iterative repair, a technique that starts with a complete but possibly flawed schedule and iteratively improves it by using constraint knowledge within repair heuristics. In this paper we explore the tradeoff between the informedness and the computational cost of several repair heuristics. We show empirically that some knowledge can greatly improve the convergence speed of a repair-based system, but that too much knowledge, such as the knowledge embodied within the MIN-CONFLICTS lookahead heuristic, can overwhelm a system and result in degraded performance.
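
    A generic sketch of constraint-based iterative repair in the min-conflicts style discussed above; the schedule representation and the callbacks n_conflicts and domain are hypothetical, and GERRY's constraint knowledge and lookahead heuristics are much richer than this loop.

    ```python
    import random

    def iterative_repair(schedule, domain, n_conflicts, max_steps=10_000):
        """Start from a complete but possibly flawed schedule (a dict mapping
        task -> assigned value) and repeatedly move one conflicted task to the
        value that minimizes its remaining constraint violations."""
        for _ in range(max_steps):
            conflicted = [t for t in schedule
                          if n_conflicts(t, schedule[t], schedule) > 0]
            if not conflicted:
                return schedule                     # no violations remain
            task = random.choice(conflicted)        # random repair target
            schedule[task] = min(domain(task),
                                 key=lambda v: n_conflicts(task, v, schedule))
        return schedule
    ```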

  2. Modified Iterative Extended Hueckel. 1: Theory

    NASA Technical Reports Server (NTRS)

    Aronowitz, S.

    1980-01-01

    Iterative Extended Hueckel is modified by inclusion of explicit effective internuclear and electronic interactions. The one-electron energies are shown to obey a variational principle because of the form of the effective electronic interactions. The modifications permit mimicking of aspects of valence bond theory, with the additional feature that the energies associated with valence-bond-type structures are explicitly calculated. In turn, a hybrid molecular-orbital/valence-bond scheme is introduced which incorporates variant total molecular electronic density distributions, similar to the way that Iterative Extended Hueckel incorporates atoms.

  3. Iterative instructions in the Manchester dataflow computer

    SciTech Connect

    Bohm, A.P.; Gurd, J.R. )

    1990-04-01

    Compilation techniques for dataflow computers, particularly techniques associated with optimized code generation, have led to the introduction of iterative instructions, which produce a sequence of outputs when presented with a single set of inputs. Although these are beneficial in reducing program execution times, they exhibit distinctive, coarse-grain characteristics that affect the normal, fine-grain operation of a dataflow computer. This paper investigates the nature and extent of the benefits and adverse effects of iterative instructions in the prototype Manchester dataflow computer.

  4. The ITER bolometer diagnostic: status and plans.

    PubMed

    Meister, H; Giannone, L; Horton, L D; Raupp, G; Zeidner, W; Grunda, G; Kalvin, S; Fischer, U; Serikov, A; Stickel, S; Reichle, R

    2008-10-01

    A consortium consisting of four EURATOM Associations has been set up to develop the project plan for the full development of the ITER bolometer diagnostic and to continue urgent R&D activities. An overview of the current status is given, including detector development, line-of-sight optimization, performance analysis as well as the design of the diagnostic components and their integration in ITER. This is complemented by the presentation of plans for future activities required to successfully implement the bolometer diagnostic, ranging from detector development, through diagnostic design and prototype testing, to remote handling (RH) tools for calibration. PMID:19044656

  5. Particle migration analysis in iterative classification of cryo-EM single-particle data

    PubMed Central

    Chen, Bo; Shen, Bingxin; Frank, Joachim

    2014-01-01

    Recently developed classification methods have enabled resolving multiple biological structures from cryo-EM data collected on heterogeneous biological samples. However, there remains the problem of how to base the decisions in the classification on the statistics of the cryo-EM data, to reduce the subjectivity in the process. Here, we propose a quantitative analysis to determine the iteration of convergence and the number of distinguishable classes, based on the statistics of the single particles in an iterative classification scheme. We start the classification with a larger number of classes than anticipated from prior knowledge, and then combine the classes that yield similar reconstructions. The classes yielding similar reconstructions can be identified from the migrating particles (jumpers) during consecutive iterations after the iteration of convergence. We therefore termed the method “jumper analysis”, and applied it to the output of RELION 3D classification of a benchmark experimental dataset. This work is a step forward toward fully automated single-particle reconstruction and classification of cryo-EM data. PMID:25449317

  6. Iterative Monte Carlo analysis of spin-dependent parton distributions

    NASA Astrophysics Data System (ADS)

    Sato, Nobuo; Melnitchouk, W.; Kuhn, S. E.; Ethier, J. J.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration

    2016-04-01

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳0.1 . The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  7. Downscaling climate variability associated with quasi-periodic climate signals: A new statistical approach using MSSA

    NASA Astrophysics Data System (ADS)

    Cañón, Julio; Domínguez, Francina; Valdés, Juan B.

    2011-02-01

    A statistical method is introduced to downscale hydroclimatic variables while incorporating the variability associated with quasi-periodic global climate signals. The method extracts statistical information of distributed variables from historic time series available at high resolution and uses Multichannel Singular Spectrum Analysis (MSSA) to reconstruct, on a cell-by-cell basis, specific frequency signatures associated with both the variable at a coarse scale and the global climate signals. Historical information is divided in two sets: a reconstruction set to identify the dominant modes of variability of the series for each cell and a validation set to compare the downscaling relative to the observed patterns. After validation, the coarse projections from Global Climate Models (GCMs) are disaggregated to higher spatial resolutions by using an iterative gap-filling MSSA algorithm to downscale the projected values of the variable, using the distributed series statistics and the MSSA analysis. The method is data adaptive and useful for downscaling short-term forecasts as well as long-term climate projections. The method is applied to the downscaling of temperature and precipitation from observed records and GCM projections over a region located in the US Southwest, taking into account the seasonal variability associated with ENSO.

  8. Statistical computation of tolerance limits

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1993-01-01

    Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances for the one-sided and two-sided cases of the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k values for varying proportions and sample sizes at each given probability, showing the accuracy obtained for small sample sizes.
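
    For the one-sided case, the definition of k through the noncentral t-distribution corresponds to the standard closed form sketched below; this SciPy illustration is an assumption for concreteness, not the paper's codes.

    ```python
    import numpy as np
    from scipy import stats

    def k_one_sided(n, p=0.95, gamma=0.95):
        """One-sided normal tolerance factor k: x_bar + k*s bounds a proportion
        p of the population with confidence gamma, via the noncentral t."""
        delta = stats.norm.ppf(p) * np.sqrt(n)        # noncentrality parameter
        return stats.nct.ppf(gamma, df=n - 1, nc=delta) / np.sqrt(n)

    print(k_one_sided(10))   # about 2.911 for n = 10, p = gamma = 0.95
    ```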

  9. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots of error versus filter length are produced, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian-distributed noise are added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
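
    The inverse-filter step described above fits in a few lines of numpy; this sketch covers only the deconvolution component, not Morrison's smoothing itself, and it assumes a response whose DFT has no zero coefficients.

    ```python
    import numpy as np

    def inverse_filter_deconvolve(data, response):
        """Convolve the data with the inverse filter: the inverse DFT of the
        reciprocal of the DFT of the system response (circular convolution)."""
        R = np.fft.fft(response, n=len(data))
        inv_filter = np.fft.ifft(1.0 / R)
        D = np.fft.fft(data) * np.fft.fft(inv_filter)
        return np.real(np.fft.ifft(D))
    ```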

  10. Cosmetic Plastic Surgery Statistics

    MedlinePlus

    2014 Cosmetic Plastic Surgery Statistics Cosmetic Procedure Trends 2014 Plastic Surgery Statistics Report Please credit the AMERICAN SOCIETY OF PLASTIC SURGEONS when citing statistical data or using ...

  11. Intelligent control and adaptive systems; Proceedings of the Meeting, Philadelphia, PA, Nov. 7, 8, 1989

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Editor)

    1990-01-01

    Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.

  12. Quality metric in matched Laplacian of Gaussian response domain for blind adaptive optics image deconvolution

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Yang, Yikang; Xu, Rong; Liu, Changhai; Li, Jisheng

    2016-04-01

    Adaptive optics (AO) in conjunction with subsequent postprocessing techniques has markedly improved the resolution of turbulence-degraded images in ground-based astronomical observations or artificial space object detection and identification. However, important tasks involved in AO image postprocessing, such as frame selection, stopping iterative deconvolution, and algorithm comparison, commonly need manual intervention and cannot be performed automatically due to a lack of widely agreed-upon image quality metrics. In this work, based on the Laplacian of Gaussian (LoG) local contrast feature detection operator, we propose a LoG-domain matching operation to extract effective and universal image quality statistics. Further, we extract two no-reference quality assessment indices in the matched LoG domain that can be used for a variety of postprocessing tasks. Three typical space object images with distinct structural features are tested to verify the consistency of the proposed metric with perceptual image quality through subjective evaluation.

  13. Adaptive optics image deconvolution based on a modified Richardson-Lucy algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Geng, Ze-xun; Yan, Xiao-dong; Yang, Yang; Sui, Xue-lian; Zhao, Zhen-lei

    2007-12-01

    Adaptive optics (AO) systems provide real-time compensation for atmospheric turbulence. However, the correction is often only partial, and deconvolution is required to reach the diffraction limit. The Richardson-Lucy (R-L) algorithm is the technique most widely used for AO image deconvolution, but the standard R-L algorithm (SRLA) is often hampered by speckling, wraparound artifacts, and noise. A modified R-L algorithm (MRLA) for AO image deconvolution is presented. This novel algorithm applies Magain's correct-sampling approach and incorporates noise statistics into the standard R-L algorithm. An alternating iterative method is applied to estimate the PSF and the object in the novel algorithm. Comparative experiments on indoor data and AO images are performed with the SRLA and the MRLA in this paper. Experimental results show that the novel MRLA outperforms the SRLA.
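
    For reference, a compact sketch of the standard R-L iteration (the SRLA baseline); Magain's correct-sampling step and the MRLA noise statistics are not reproduced. Inputs are assumed to be non-negative float arrays with a PSF that sums to one.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=30):
        """Standard Richardson-Lucy deconvolution (multiplicative updates)."""
        est = np.full_like(image, image.mean())
        psf_flip = psf[::-1, ::-1]                         # adjoint of the blur
        for _ in range(n_iter):
            blurred = fftconvolve(est, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)     # avoid divide-by-zero
            est *= fftconvolve(ratio, psf_flip, mode="same")
        return est
    ```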

  14. Towards plasma cleaning of ITER first mirrors

    NASA Astrophysics Data System (ADS)

    Moser, L.; Marot, L.; Eren, B.; Steiner, R.; Mathys, D.; Leipold, F.; Reichle, R.; Meyer, E.

    2015-06-01

    To avoid reflectivity losses in ITER's optical diagnostic systems, on-site cleaning of metallic first mirrors via plasma sputtering is foreseen to remove deposit build-ups migrating from the main wall. In this work, the influence of aluminium and tungsten deposits on the reflectivity of molybdenum mirrors as well as the possibility to clean them with plasma exposure is investigated. Porous ITER-like deposits are grown to mimic the edge conditions expected in ITER, and a severe degradation in the specular reflectivity is observed as these deposits build up on the mirror surface. In addition, dense oxide films are produced for comparisons with porous films. The composition, morphology and crystal structure of several films were characterized by means of scanning electron microscopy, x-ray photoelectron spectroscopy, x-ray diffraction and secondary ion mass spectrometry. The cleaning of the deposits and the restoration of the mirrors' optical properties are possible either with a Kaufman source or radio frequency directly applied to the mirror (or radio frequency plasma generated directly around the mirror surface). Accelerating ions of an external plasma source through a direct current applied onto the mirror does not remove deposits composed of oxides. A possible implementation of plasma cleaning in ITER is addressed.

  15. Iteration of Complex Functions and Newton's Method

    ERIC Educational Resources Information Center

    Dwyer, Jerry; Barnard, Roger; Cook, David; Corte, Jennifer

    2009-01-01

    This paper discusses some common iterations of complex functions. The presentation is such that similar processes can easily be implemented and understood by undergraduate students. The aim is to illustrate some of the beauty of complex dynamics in an informal setting, while providing a couple of results that are not otherwise readily available in…

  16. Nuclear analyses for the ITER ECRH launcher

    NASA Astrophysics Data System (ADS)

    Serikov, A.; Fischer, U.; Heidinger, R.; Spaeh, P.; Stickel, S.; Tsige-Tamirat, H.

    2008-05-01

    Computational results of the nuclear analyses for the ECRH launcher integrated into the ITER upper port are presented. The purpose of the analyses was to provide proof for the launcher design that the nuclear requirements specified in the ITER project can be met. The aim was achieved on the basis of 3D neutronics radiation transport calculations using the Monte Carlo code MCNP. In the course of the analyses an adequate shielding configuration against neutron and gamma radiation was developed, keeping the necessary empty space for mm-wave propagation in accordance with the ECRH physics guidelines. Different variants of the shielding configuration for the extended performance front steering launcher (EPL) were compared in terms of nuclear response functions in the critical positions. Neutron damage (dpa), nuclear heating, helium production rate, and neutron and gamma fluxes have been calculated under the conditions of ITER operation. It has been shown that the radiation shielding criteria are satisfied and the predicted shutdown dose rates are below the ITER nuclear design limits.

  17. Iteration and Anxiety in Mathematical Literature

    ERIC Educational Resources Information Center

    Capezzi, Rita; Kinsey, L. Christine

    2016-01-01

    We describe our experiences in team-teaching an honors seminar on mathematics and literature. We focus particularly on two of the texts we read: Georges Perec's "How to Ask Your Boss for a Raise" and Alain Robbe-Grillet's "Jealousy," both of which make use of iterative structures.

  18. Spectral resolvability of iterated rippled noise

    NASA Astrophysics Data System (ADS)

    Yost, William A.

    2005-04-01

    A forward-masking experiment was used to estimate the spectral ripple of iterated rippled noise (IRN) that is possibly resolved by the auditory system. Tonal signals were placed at spectral peaks and valleys of IRN maskers for a wide variety of IRN conditions that included different delays, number of iterations, and stimulus durations. The differences in the forward-masked thresholds of tones at spectral peaks and valleys were used to estimate spectral resolvability, and these results were compared to estimates obtained from a gamma-tone filter bank. The IRN spectrum has spectral peaks that are harmonics of the reciprocal of the delay used to generate IRN stimuli. As the number of iterations in the generation of IRN stimuli increases, so does the spectral peak-to-valley ratio. For high numbers of iterations, long delays, and long durations, evidence for spectral resolvability existed up to the 6th harmonic. For all other conditions spectral resolvability appeared to disappear at harmonics lower than the 6th, or was not measurable at all. These data will be discussed in terms of the role spectral resolvability might play in processing the pitch, pitch strength, and timbre of IRN stimuli. [Work supported by a grant from NIDCD.]

  19. ITER faces further five-year delay

    NASA Astrophysics Data System (ADS)

    Clery, Daniel

    2016-06-01

    The €14bn ITER fusion reactor currently under construction in Cadarache, France, will require an additional cash injection of €4.6bn if it is to start up in 2025 – a target date that is itself five years later than previously scheduled.

  20. Constructing Easily Iterated Functions with Interesting Properties

    ERIC Educational Resources Information Center

    Sprows, David J.

    2009-01-01

    A number of schools have recently introduced new courses dealing with various aspects of iteration theory or at least have found ways of including topics such as chaos and fractals in existing courses. In this note, we will consider a family of functions whose members are especially well suited to illustrate many of the concepts involved in these…

  1. On the safety of ITER accelerators

    PubMed Central

    Li, Ge

    2013-01-01

    Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate −1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER. PMID:24008267

  2. Solving Differential Equations Using Modified Picard Iteration

    ERIC Educational Resources Information Center

    Robin, W. A.

    2010-01-01

    Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…

  3. The determination of orbits using Picard iteration

    NASA Technical Reports Server (NTRS)

    Mikkilineni, R. P.; Feagin, T.

    1975-01-01

    The determination of orbits by using Picard iteration is reported. This is a direct extension of the classical method of Picard that has been used in finding approximate solutions of nonlinear differential equations for a variety of problems. The application of the Picard method of successive approximations to the initial value and the two point boundary value problems is given.
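
    A minimal sketch of Picard's successive approximations for the initial value problem x' = f(t, x), x(t[0]) = x0, using trapezoidal quadrature on a fixed grid; the orbit determination and two-point boundary value extensions of the paper are not shown.

    ```python
    import numpy as np

    def picard(f, t, x0, n_iter=8):
        """Iterate x_{k+1}(t) = x0 + integral of f(s, x_k(s)) from t[0] to t."""
        x = np.full_like(t, x0)
        for _ in range(n_iter):
            g = f(t, x)
            steps = 0.5 * (g[1:] + g[:-1]) * np.diff(t)   # trapezoid increments
            x = x0 + np.concatenate(([0.0], np.cumsum(steps)))
        return x

    # example: x' = x, x(0) = 1 converges toward exp(t)
    t = np.linspace(0.0, 1.0, 101)
    approx = picard(lambda t, x: x, t, 1.0)
    ```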

  4. Symbolic Computational Algebra Applied to Picard Iteration.

    ERIC Educational Resources Information Center

    Mathews, John

    1989-01-01

    Uses muMATH to illustrate the step-by-step process in translating mathematical theory into the symbolic manipulation setting. Shows an application of a Picard iteration which uses a computer to generate a sequence of functions which converge to a solution. (MVL)

  5. First mirrors for diagnostic systems of ITER

    NASA Astrophysics Data System (ADS)

    Litnovsky, A.; Voitsenya, V. S.; Costley, A.; Donné, A. J. H.; SWG on First Mirrors of the ITPA Topical Group on Diagnostics

    2007-08-01

    The majority of optical diagnostics presently foreseen for ITER will implement in-vessel metallic mirrors as plasma-viewing components. Mirrors are used for the observation of the plasma radiation in a very wide wavelength range: from about 1 nm up to a few mm. In the hostile ITER environment, mirrors are subject to erosion, deposition, particle implantation and other adverse effects which will change their optical properties, affecting the entire performance of the respective diagnostic systems. The Specialists Working Group (SWG) on first mirrors was established under the wings of the International Tokamak Physics Activity (ITPA) Topical Group (TG) on Diagnostics to coordinate and guide the investigations on diagnostic mirrors towards the development of optimal, robust and durable solutions for ITER diagnostic systems. The results of tests of various ITER-candidate mirror materials, performed in Tore-Supra, TEXTOR, DIII-D, TCV, T-10, TRIAM-1M and LHD under various plasma conditions, as well as an overview of laboratory investigations of mirror performance and mirror cleaning techniques are presented in the paper. The current tasks in the R&D of diagnostic mirrors will be addressed.

  6. Iterative solution of the Helmholtz equation

    SciTech Connect

    Larsson, E.; Otto, K.

    1996-12-31

    We have shown that the numerical solution of the two-dimensional Helmholtz equation can be obtained in a very efficient way by using a preconditioned iterative method. We discretize the equation with second-order accurate finite difference operators and take special care to obtain non-reflecting boundary conditions. We solve the large, sparse system of equations that arises with the preconditioned restarted GMRES iteration. The preconditioner is of "fast Poisson type", and is derived as a direct solver for a modified PDE problem. The arithmetic complexity for the preconditioner is O(n log2 n), where n is the number of grid points. As a test problem we use the propagation of sound waves in water in a duct with curved bottom. Numerical experiments show that the preconditioned iterative method is very efficient for this type of problem. The convergence rate does not decrease dramatically when the frequency increases. Compared to banded Gaussian elimination, which is a standard solution method for this type of problems, the iterative method shows significant gain in both storage requirement and arithmetic complexity. Furthermore, the relative gain increases when the frequency increases.
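
    A toy illustration of the solver structure described above, assuming a 1-D Helmholtz stencil with Dirichlet ends and using a sparse LU solve of the k = 0 (Poisson-like) operator as a stand-in for the paper's "fast Poisson type" preconditioner.

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres, splu, LinearOperator

    n, k = 400, 60.0
    h = 1.0 / (n + 1)
    # second-order difference discretization of u'' + k^2 u = f
    A = diags([1.0 / h**2, -2.0 / h**2 + k**2, 1.0 / h**2],
              [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # preconditioner: exact solve of the k = 0 operator (Poisson-like stand-in)
    P = diags([1.0 / h**2, -2.0 / h**2, 1.0 / h**2],
              [-1, 0, 1], shape=(n, n), format="csc")
    M = LinearOperator((n, n), matvec=splu(P).solve)

    x, info = gmres(A, b, M=M, restart=30)   # restarted, preconditioned GMRES
    ```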

  7. Testing Short Samples of ITER Conductors and Projection of Their Performance in ITER Magnets

    SciTech Connect

    Martovetsky, N N

    2007-08-20

    Qualification of the ITER conductor is absolutely necessary. Testing large-scale conductors is expensive and time consuming. Testing straight 3-4 m long samples in the bore of a split solenoid is relatively economical in comparison with fabricating a coil to be tested in the bore of a background-field solenoid. However, testing short samples may give ambiguous results due to different constraints on current redistribution in the cable or other end effects that are not present in the large magnet. This paper discusses processes taking place in the ITER conductor, conditions under which conductor performance could be distorted, and possible signal processing to deduce the behavior of ITER conductors in ITER magnets from the test data.

  8. Linearly-Constrained Adaptive Signal Processing Methods

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.

    1988-01-01

    In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), ..., xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term "mean-square difference" is a quadratic measure such as statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which will guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest and in which time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single-channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, which is an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative
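
    For concreteness, a minimal sketch of the LMS gradient-approximation update mentioned above (not the P-vector variant analyzed in the paper); the step size and zero initialization are illustrative.

    ```python
    import numpy as np

    def lms(X, d, mu=0.01):
        """LMS adaptation: for each data vector X[n] and desired sample d[n],
        move the weight vector W along the instantaneous error gradient."""
        n_samples, L = X.shape
        W = np.zeros(L)
        for n in range(n_samples):
            e = d[n] - W @ X[n]      # estimation error
            W += mu * e * X[n]       # stochastic gradient step
        return W
    ```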

  9. MHD stability of ITER H-mode confinement with pedestal bootstrap current effects taken into account

    NASA Astrophysics Data System (ADS)

    Zheng, L. J.; Kotschenreuther, M. T.; Valanju, P.; Mahajan, S. M.; Hatch, D.; Liu, X.

    2015-11-01

    We have shown that the bootstrap current can have significant effects on both tokamak equilibrium and stability (Nucl. Fusion 53, 063009 (2013)). For ITER H-mode discharges, the pedestal density is low and consequently the bootstrap current is large. We numerically reconstruct ITER equilibria with the bootstrap current taken into account. In particular, we have considered a more realistic scenario in which the density and temperature profiles can differ. The direct consequence of bootstrap current effects on equilibrium is the modification of the local safety-factor profile at the pedestal. This results in a dramatic change in MHD mode behavior. The stability of the ITER numerical equilibria is investigated with the AEGIS code. Both low-n and peeling-ballooning modes are investigated. Note that the pressure gradient at the pedestal is steep, so high-resolution computation is needed. Since AEGIS is an adaptive code, it can handle this problem well. Also, the analytical continuation technique based on the Cauchy-Riemann condition of the dispersion relation is applied, so that the marginal stability conditions can be determined. Both the numerical scheme and results will be presented. The effects of different density and temperature profiles on ITER H-mode discharges will be discussed. This research is supported by U.S. Department of Energy, Office of Fusion Energy Science: Grant No. DE-FG02-04ER-54742.

  10. Helicopter trim analysis by shooting and finite element methods with optimally damped Newton iterations

    NASA Technical Reports Server (NTRS)

    Achar, N. S.; Gaonkar, G. H.

    1993-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used, and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
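
    A bare-bones sketch of a damped Newton iteration with a fixed damping parameter lam; the paper selects the damping optimally and embeds the iteration in shooting and finite element trim formulations, which are not reproduced here.

    ```python
    import numpy as np

    def damped_newton(F, J, x, lam=0.5, tol=1e-10, max_iter=100):
        """Solve F(x) = 0 by x <- x - lam * J(x)^{-1} F(x)."""
        for _ in range(max_iter):
            dx = np.linalg.solve(J(x), F(x))   # undamped Newton step
            x = x - lam * dx                   # damped update
            if np.linalg.norm(dx) < tol:
                break
        return x
    ```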

  11. Design studies for ITER x-ray diagnostics

    SciTech Connect

    Hill, K.W.; Bitter, M.; von Goeler, S.; Hsuan, H.

    1995-01-01

    Concepts for adapting conventional tokamak x-ray diagnostics to the harsh radiation environment of ITER include use of grazing-incidence (GI) x-ray mirrors or man-made Bragg multilayer (ML) elements to remove the x-ray beam from the neutron beam, or use of bundles of glass-capillary x-ray "light pipes" embedded in radiation shields to reduce the neutron/gamma-ray fluxes onto the detectors while maintaining usable x-ray throughput. The x-ray optical element with the broadest bandwidth and highest throughput, the GI mirror, can provide adequate lateral deflection (10 cm for a deflected-path length of 8 m) at x-ray energies up to 12, 22, or 30 keV for one, two, or three deflections, respectively. This element can be used with the broad band, high intensity x-ray imaging system (XIS), the pulseheight analysis (PHA) survey spectrometer, or the high resolution Johann x-ray crystal spectrometer (XCS), which is used for ion-temperature measurement. The ML mirrors can isolate the detector from the neutron beam with a single deflection for energies up to 50 keV, but have much narrower bandwidth and lower x-ray power throughput than do the GI mirrors; they are unsuitable for use with the XIS or PHA, but they could be used with the XCS; in particular, these deflectors could be used between ITER and the biological shield to avoid direct plasma neutron streaming through the biological shield. Graded-d ML mirrors have good reflectivity from 20 to 70 keV, but still at grazing angles (<3 mrad). The efficiency at 70 keV for double reflection (10 percent), as required for adequate separation of the x-ray and neutron beams, is high enough for PHA requirements, but not for the XIS. Further optimization may be possible.

  12. Assessment of the dose reduction potential of a model-based iterative reconstruction algorithm using a task-based performance metrology

    SciTech Connect

    Samei, Ehsan; Richard, Samuel

    2015-01-15

    Purpose: Different computed tomography (CT) reconstruction techniques offer different image quality attributes of resolution and noise, challenging the ability to compare their dose reduction potential against each other. The purpose of this study was to evaluate and compare the task-based imaging performance of CT systems to enable the assessment of the dose performance of a model-based iterative reconstruction (MBIR) against that of an adaptive statistical iterative reconstruction (ASIR) and a filtered back projection (FBP) technique. Methods: The ACR CT phantom (model 464) was imaged across a wide range of mA settings on a 64-slice CT scanner (GE Discovery CT750 HD, Waukesha, WI). Based on previous work, the resolution was evaluated in terms of a task-based modulation transfer function (MTF) using a circular-edge technique and images from the contrast inserts located in the ACR phantom. Noise performance was assessed in terms of the noise-power spectrum (NPS) measured from the uniform section of the phantom. The task-based MTF and NPS were combined with a task function to yield a task-based estimate of imaging performance, the detectability index (d′). The detectability index was computed as a function of dose for two imaging tasks corresponding to the detection of a relatively small and a relatively large feature (1.5 and 25 mm, respectively). The performance of MBIR in terms of the d′ was compared with that of ASIR and FBP to assess its dose reduction potential. Results: Results indicated that MBIR exhibits variable spatial resolution with respect to object contrast and noise while significantly reducing image noise. The NPS measurements for MBIR indicated a noise texture with a low-pass quality compared to the typical midpass noise found in FBP-based CT images. At comparable dose, the d′ for MBIR was higher than those of FBP and ASIR by at least 61% and 19% for the small feature and the large feature tasks, respectively. Compared to FBP and ASIR, MBIR
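
    As an illustration of how the abstract's quantities combine, the sketch below evaluates the common non-prewhitening form of the detectability index, d'^2 = [sum(W^2 MTF^2) df]^2 / sum(W^2 MTF^2 NPS) df, on a discrete frequency grid. The formula is assumed from the model-observer literature rather than taken from the paper, and the Gaussian MTF/NPS shapes are synthetic stand-ins for measured ones.

```python
import numpy as np

def detectability_npw(W2, MTF2, NPS, df):
    """Non-prewhitening detectability index d' on a 2D frequency grid.
    W2 = |W(u,v)|^2 (task function squared), MTF2 = MTF^2, NPS = noise-power
    spectrum, df = frequency bin area. Formula assumed from the NPW model
    observer literature, not from the paper itself."""
    num = (np.sum(W2 * MTF2) * df) ** 2
    den = np.sum(W2 * MTF2 * NPS) * df
    return np.sqrt(num / den)

# synthetic example on a 256x256 grid with 0.5 mm pixels
u = np.fft.fftfreq(256, d=0.5)
U, V = np.meshgrid(u, u)
f = np.hypot(U, V)
mtf2 = np.exp(-(f / 0.35) ** 2)        # assumed Gaussian MTF^2
nps = 1e-3 * (f / f.max() + 0.05)      # assumed roughly mid-pass NPS
w2 = np.exp(-(f / 0.1) ** 2)           # task: detect a large, smooth feature
df = (u[1] - u[0]) ** 2
print("d' =", detectability_npw(w2, mtf2, nps, df))
```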

  13. Adaptive management of natural resources-framework and issues

    USGS Publications Warehouse

    Williams, B.K.

    2011-01-01

    Adaptive management, an approach for simultaneously managing and learning about natural resources, has been around for several decades. Interest in adaptive decision making has grown steadily over that time, and by now many in natural resources conservation claim that adaptive management is the approach they use in meeting their resource management responsibilities. Yet there remains considerable ambiguity about what adaptive management actually is, and how it is to be implemented by practitioners. The objective of this paper is to present a framework and conditions for adaptive decision making, and discuss some important challenges in its application. Adaptive management is described as a two-phase process of deliberative and iterative phases, which are implemented sequentially over the timeframe of an application. Key elements, processes, and issues in adaptive decision making are highlighted in terms of this framework. Special emphasis is given to the question of geographic scale, the difficulties presented by non-stationarity, and organizational challenges in implementing adaptive management. ?? 2010.

  14. An Automatic Optical and SAR Image Registration Method Using Iterative Multi-Level and Refinement Model

    NASA Astrophysics Data System (ADS)

    Xu, C.; Sui, H. G.; Li, D. R.; Sun, K. M.; Liu, J. Y.

    2016-06-01

    Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is to propose an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level strategy of coarse-to-fine registration is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and after that sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and a Voronoi diagram is introduced into Spectral Point Matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.

  15. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    NASA Astrophysics Data System (ADS)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real-time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors, both in terms of quality and speed performance, in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
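
    Of the three reconstructors, the Kaczmarz method is the easiest to show generically: it sweeps through the rows of a linear system A x = b, projecting the iterate onto each row's hyperplane in turn. The sketch below is a textbook version, not the authors' AO implementation, which operates on wavefront-sensor data and layered turbulence.

```python
import numpy as np

def kaczmarz(A, b, sweeps=50, relax=1.0, x0=None):
    """Row-action Kaczmarz iteration for A x = b.
    relax is a relaxation parameter in (0, 2]; one 'sweep' visits
    every row once. Illustrative only."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    row_norm2 = np.einsum('ij,ij->i', A, A)   # squared row norms
    for _ in range(sweeps):
        for i in range(m):
            resid = b[i] - A[i] @ x
            x += relax * (resid / row_norm2[i]) * A[i]
    return x

A = np.random.default_rng(0).normal(size=(100, 40))
x_true = np.ones(40)
b = A @ x_true                                # consistent system
print(np.linalg.norm(kaczmarz(A, b) - x_true))  # small residual error
```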

  16. Iterative image-domain decomposition for dual-energy CT

    SciTech Connect

    Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the
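
    A hedged sketch of the flavor of this formulation follows: penalized weighted least-squares smoothing of a direct-inversion decomposition, with an inverse-variance fidelity weight and an edge-weighted smoothness penalty, minimized here by plain gradient descent on a toy 1D profile. The paper works on 2D decomposed images, uses the full variance-covariance matrix, and solves with conjugate gradient; all parameters below are illustrative.

```python
import numpy as np

def decompose_pwls(x_direct, cov_inv, edge_w, beta=0.1, iters=200, step=0.1):
    """Toy 1D penalized weighted least squares: minimize
    (x - x_direct)^T diag(cov_inv) (x - x_direct) + beta * sum_i w_i (x_{i+1}-x_i)^2.
    cov_inv : per-pixel inverse noise variance (stand-in for the paper's
              full variance-covariance matrix of the decomposed images)
    edge_w  : per-difference weights, small at detected edges so that
              boundaries are not smoothed away."""
    x = x_direct.copy()
    for _ in range(iters):
        grad_fid = cov_inv * (x - x_direct)
        d = np.diff(x)
        grad_reg = np.zeros_like(x)
        grad_reg[:-1] -= 2 * edge_w * d
        grad_reg[1:] += 2 * edge_w * d
        x -= step * (grad_fid + beta * grad_reg)
    return x

noisy = np.r_[np.zeros(50), np.ones(50)] \
        + 0.2 * np.random.default_rng(1).normal(size=100)
w = np.ones(99); w[49] = 0.0          # zero weight keeps the step edge sharp
print(decompose_pwls(noisy, np.ones(100), w)[45:55].round(2))
```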

  17. New iterative solvers for the NAG Libraries

    SciTech Connect

    Salvini, S.; Shaw, G.

    1996-12-31

    The purpose of this paper is to introduce the work which has been carried out at NAG Ltd to update the iterative solvers for sparse systems of linear equations, both symmetric and unsymmetric, in the NAG Fortran 77 Library. Our plans to extend this work and include it in the other numerical libraries in our range are also briefly mentioned. We have added to the Library the new Chapter F11, entirely dedicated to sparse linear algebra. At Mark 17, the F11 Chapter includes sparse iterative solvers, preconditioners, utilities and black-box routines for sparse symmetric (both positive-definite and indefinite) linear systems. Mark 18 will add solvers, preconditioners, utilities and black-box routines for sparse unsymmetric systems: the development of these has already been completed.

  18. High resolution non-iterative aperture synthesis.

    PubMed

    Kraczek, Jeffrey R; McManamon, Paul F; Watson, Edward A

    2016-03-21

    The maximum resolution of a multiple-input multiple-output (MIMO) imaging system is determined by the size of the synthetic aperture. The synthetic aperture is determined by a coordinate shift using the relative positions of the illuminators and receive apertures. Previous methods have shown non-iterative phasing for multiple illuminators with a single receive aperture for intra-aperture synthesis. This work shows non-iterative phasing with both multiple illuminators and multiple receive apertures for inter-aperture synthesis. Simulated results show that piston, tip, and tilt can be calculated using inter-aperture phasing after intra-aperture phasing has been performed. Use of a fourth illuminator for increased resolution is shown. The modulation transfer function (MTF) is used to quantitatively judge increased resolution. PMID:27136816

  19. ITER Shape Controller and Transport Simulations

    SciTech Connect

    Casper, T A; Meyer, W H; Pearlstein, L D; Portone, A

    2007-05-31

    We currently use the CORSICA integrated modeling code for scenario studies for both the DIII-D and ITER experiments. In these simulations, free- or fixed-boundary equilibria are simultaneously converged with thermal evolution determined from transport models providing temperature and current density profiles, using a combination of fixed-boundary evolution followed by a free-boundary calculation to determine the separatrix and coil currents. In the free-boundary calculation, we use the state-space controller representation with transport simulations to provide feedback modeling of shape, vertical stability and profile control. In addition to a tightly coupled calculation with the simulator and controller embedded inside CORSICA, we also use a remote procedure call interface to couple the CORSICA non-linear plasma simulations to the controller environments developed within the Mathworks Matlab/Simulink environment. We present transport simulations using full shape and vertical stability control with evolution of the temperature profiles to provide simulations of the ITER controller and plasma response.

  20. Iterative optimization calibration method for stereo deflectometry.

    PubMed

    Ren, Hongyu; Gao, Feng; Jiang, Xiangqian

    2015-08-24

    An accurate system calibration method is presented in this paper to calibrate stereo deflectometry. A corresponding iterative optimization algorithm is also proposed to improve the system calibration accuracy. The algorithm merges the CCD parameters and the geometrical relation between the CCDs and the LCD into one cost function. In this calibration technique, an optical flat acts as a reference mirror and simultaneously reflects sinusoidal fringe patterns into the two CCDs. The normal vector of the reference mirror is used as an intermediate variable in the iterative optimization until the root mean square of the reprojection errors converges to a minimum. The experiment demonstrates that this method can optimize all the calibration parameters and effectively reduce the reprojection error, which correspondingly improves the final reconstruction accuracy. PMID:26368180

  1. Main challenges for ITER optical diagnostics

    NASA Astrophysics Data System (ADS)

    Vukolov, K. Yu.; Orlovskiy, I. I.; Alekseev, A. G.; Borisov, A. A.; Andreenko, E. N.; Kukushkin, A. B.; Lisitsa, V. S.; Neverov, V. S.

    2014-08-01

    A review is given of the problems of ITER optical diagnostics. Most of these problems will be related to the intensive neutron radiation from the hot plasma. At high radiation loads, most materials gradually change their properties. This effect is most critical for optical diagnostics because of the degradation of optical glasses and mirrors. The degradation of the mirrors that collect light from the plasma will be induced mainly by impurity deposition and/or sputtering by charge-exchange atoms. Main attention is paid to the search for glasses for vacuum windows and achromatic lenses that are stable under ITER irradiation conditions. The latest results of irradiation tests in a nuclear reactor of the candidate silica glasses KU-1, KS-4V and TF 200 are presented. An additional problem is discussed: stray light produced when the intense light emitted in the divertor plasma is multiply reflected from the first wall.

  2. Iterative most likely oriented point registration.

    PubMed

    Billings, Seth; Taylor, Russell

    2014-01-01

    A new algorithm for model based registration is presented that optimizes both position and surface normal information of the shapes being registered. This algorithm extends the popular Iterative Closest Point (ICP) algorithm by incorporating the surface orientation at each point into both the correspondence and registration phases of the algorithm. For the correspondence phase an efficient search strategy is derived which computes the most probable correspondences considering both position and orientation differences in the match. For the registration phase an efficient, closed-form solution provides the maximum likelihood rigid body alignment between the oriented point matches. Experiments by simulation using human femur data demonstrate that the proposed Iterative Most Likely Oriented Point (IMLOP) algorithm has a strong accuracy advantage over ICP and has increased ability to robustly identify a successful registration result. PMID:25333116
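
    A hedged sketch of the idea, not the authors' IMLOP code: the correspondence step scores candidate matches by a weighted sum of positional and orientation discrepancy, and the registration step solves the rigid alignment from the matched points. For brevity the rigid solve below uses the standard SVD (Kabsch) solution on positions only, whereas IMLOP's closed-form maximum-likelihood step also uses the normals; the weight w_normal is an illustrative knob, not a parameter from the paper.

```python
import numpy as np

def match_oriented(src_p, src_n, mdl_p, mdl_n, w_normal=0.5):
    """For each source point, pick the model point minimizing
    squared position distance + w_normal * (1 - n_src . n_mdl)."""
    idx = []
    for p, n in zip(src_p, src_n):
        cost = np.sum((mdl_p - p) ** 2, axis=1) + w_normal * (1.0 - mdl_n @ n)
        idx.append(np.argmin(cost))
    return np.array(idx)

def rigid_fit(src, dst):
    """Kabsch: R, t minimizing ||R src + t - dst|| over rigid motions."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def imlop_like(src_p, src_n, mdl_p, mdl_n, iters=20):
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        p, n = src_p @ R.T + t, src_n @ R.T     # apply current pose
        j = match_oriented(p, n, mdl_p, mdl_n)  # oriented correspondences
        R, t = rigid_fit(src_p, mdl_p[j])       # re-solve the alignment
    return R, t
```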

  3. Iterative image restoration using approximate inverse preconditioning.

    PubMed

    Nagy, J G; Plemmons, R J; Torgersen, T C

    1996-01-01

    Removing a linear shift-invariant blur from a signal or image can be accomplished by inverse or Wiener filtering, or by an iterative least-squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, filtering methods often yield poor results. On the other hand, iterative methods often suffer from slow convergence at high spatial frequencies. This paper concerns solving deconvolution problems for atmospherically blurred images by the preconditioned conjugate gradient algorithm, where a new approximate inverse preconditioner is used to increase the rate of convergence. Theoretical results are established to show that fast convergence can be expected, and test results are reported for a ground-based astronomical imaging problem. PMID:18285203
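
    The role of the approximate-inverse preconditioner is easiest to see in the periodic (circulant) case, where both the blur and the preconditioner are diagonalized by the FFT. The sketch below, my own illustration under a periodic-boundary assumption, runs preconditioned conjugate gradient on the normal equations H^T H x = H^T b with M^{-1} applied in Fourier space. In this fully circulant setting the preconditioner is nearly exact, so CG converges in a few steps; for realistic non-periodic or spatially variant blurs, as in the paper, it is only approximate.

```python
import numpy as np

def pcg_deblur(blurred, psf, eps=1e-2, iters=30):
    """CG on H^T H x = H^T b for a periodic convolution H, with the
    circulant approximate-inverse preconditioner 1/(|H_f|^2 + eps).
    psf must have the same shape as the image and be centered."""
    Hf = np.fft.fft2(np.fft.ifftshift(psf))
    A = lambda x: np.real(np.fft.ifft2(np.abs(Hf) ** 2 * np.fft.fft2(x)))
    Minv = lambda r: np.real(np.fft.ifft2(np.fft.fft2(r) / (np.abs(Hf) ** 2 + eps)))
    b = np.real(np.fft.ifft2(np.conj(Hf) * np.fft.fft2(blurred)))
    x = np.zeros_like(b); r = b - A(x); z = Minv(r); p = z.copy()
    for _ in range(iters):
        Ap = A(p)
        alpha = np.vdot(r, z) / np.vdot(p, Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        z_new = Minv(r_new)
        beta = np.vdot(r_new, z_new) / np.vdot(r, z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

rng = np.random.default_rng(0)
img = rng.random((64, 64))
psf = np.zeros((64, 64)); psf[28:37, 28:37] = 1 / 81   # centered 9x9 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(psf)) * np.fft.fft2(img)))
print(np.linalg.norm(pcg_deblur(blurred, psf) - img))
```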

  4. Thermomechanical analysis of the ITER breeding blanket

    SciTech Connect

    Majumdar, S.; Gruhn, H.; Gohar, Y.; Giegerich, M.

    1997-03-01

    Thermomechanical performance of the ITER breeding blanket is an important design issue because it requires, first, that the thermal expansion mismatch between the blanket structure and the blanket's internals (such as the beryllium multiplier and tritium breeders) be accommodated without creating high stresses and, second, that the thermomechanical deformation of various interfaces within the blanket not create high resistance to heat flow and consequent unacceptably high temperatures in the blanket materials. Thermomechanical analysis of a single beryllium block sandwiched between two stainless steel plates was carried out using the finite element code ABAQUS to illustrate the importance of elastic deformation on the temperature distributions. Such an analysis for the whole ITER blanket needs to be conducted in the future. Uncertainties in the thermomechanical contact analysis can be reduced by bonding the beryllium blocks to the stainless steel plates by a thin soft interfacial layer.

  5. Iterative Reconstruction of Coded Source Neutron Radiographs

    SciTech Connect

    Santos-Villalobos, Hector J; Bingham, Philip R; Gregor, Jens

    2012-01-01

    Use of a coded source facilitates high-resolution neutron imaging but requires that the radiographic data be deconvolved. In this paper, we compare direct deconvolution with two different iterative algorithms, namely, one based on direct deconvolution embedded in an MLE-like framework and one based on a geometric model of the neutron beam and a least squares formulation of the inverse imaging problem.

  6. Iterative solution of high order compact systems

    SciTech Connect

    Spotz, W.F.; Carey, G.F.

    1996-12-31

    We have recently developed a class of finite difference methods which provide higher accuracy and greater stability than standard central or upwind difference methods, but still reside on a compact patch of grid cells. In the present study we investigate the performance of several gradient-type iterative methods for solving the associated sparse systems. Both serial and parallel performance studies have been made. Representative examples are taken from elliptic PDEs for diffusion, convection-diffusion, and viscous flow applications.

  7. Fourier analysis of the SOR iteration

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Trefethen, L. N.

    1986-01-01

    The SOR iteration for solving linear systems of equations depends upon an overrelaxation factor omega. It is shown that for the standard model problem of Poisson's equation on a rectangle, the optimal omega and corresponding convergence rate can be rigorously obtained by Fourier analysis. The trick is to tilt the space-time grid so that the SOR stencil becomes symmetrical. The tilted grid also gives insight into the relation between convergence rates of several variants.
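
    For this model problem the analysis yields the classical closed-form optimum omega_opt = 2/(1 + sin(pi*h)) on an n-by-n interior grid with spacing h = 1/(n+1). The toy sweep below, a standard textbook check rather than the paper's code, confirms that SOR iteration counts are minimized near that value.

```python
import numpy as np

def sor_poisson(n=32, omega=1.5, tol=1e-8, max_sweeps=10000):
    """SOR sweeps for -Laplace(u) = f on the unit square, zero boundary, f = 1.
    Returns the number of sweeps until the largest update falls below tol."""
    h = 1.0 / (n + 1)
    u = np.zeros((n + 2, n + 2))
    for sweep in range(1, max_sweeps + 1):
        diff = 0.0
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                             + h * h)                 # Gauss-Seidel value
                new = (1 - omega) * u[i, j] + omega * gs
                diff = max(diff, abs(new - u[i, j]))
                u[i, j] = new
        if diff < tol:
            return sweep
    return max_sweeps

n = 32
omega_opt = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))     # ~1.83 for n = 32
for w in (1.0, 1.5, round(omega_opt, 3), 1.95):
    print(f"omega={w}: {sor_poisson(n, w)} sweeps")
```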

  8. Predict! Teaching Statistics Using Informational Statistical Inference

    ERIC Educational Resources Information Center

    Makar, Katie

    2013-01-01

    Statistics is one of the most widely used topics for everyday life in the school mathematics curriculum. Unfortunately, the statistics taught in schools focuses on calculations and procedures before students have a chance to see it as a useful and powerful tool. Researchers have found that a dominant view of statistics is as an assortment of tools…

  9. Statistics Poker: Reinforcing Basic Statistical Concepts

    ERIC Educational Resources Information Center

    Leech, Nancy L.

    2008-01-01

    Learning basic statistical concepts does not need to be tedious or dry; it can be fun and interesting through cooperative learning in the small-group activity of Statistics Poker. This article describes a teaching approach for reinforcing basic statistical concepts that can help students who have high anxiety and makes learning and reinforcing…

  10. Iterative pass optimization of sequence data

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed.

  11. Cyclic Game Dynamics Driven by Iterated Reasoning

    PubMed Central

    Frey, Seth; Goldstone, Robert L.

    2013-01-01

    Recent theories from complexity science argue that complex dynamics are ubiquitous in social and economic systems. These claims emerge from the analysis of individually simple agents whose collective behavior is surprisingly complicated. However, economists have argued that iterated reasoning–what you think I think you think–will suppress complex dynamics by stabilizing or accelerating convergence to Nash equilibrium. We report stable and efficient periodic behavior in human groups playing the Mod Game, a multi-player game similar to Rock-Paper-Scissors. The game rewards subjects for thinking exactly one step ahead of others in their group. Groups that play this game exhibit cycles that are inconsistent with any fixed-point solution concept. These cycles are driven by a “hopping” behavior that is consistent with other accounts of iterated reasoning: agents are constrained to about two steps of iterated reasoning and learn an additional one-half step with each session. If higher-order reasoning can be complicit in complex emergent dynamics, then cyclic and chaotic patterns may be endogenous features of real-world social and economic systems. PMID:23441191

  12. Iterative pass optimization of sequence data.

    PubMed

    Wheeler, Ward C

    2003-06-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. PMID:12901382

  13. ITER Creation Safety File Expertise Results

    NASA Astrophysics Data System (ADS)

    Perrault, D.

    2013-06-01

    In March 2010, the ITER operator delivered the facility safety file to the French "Autorité de Sûreté Nucléaire" (ASN) as part of its request for the creation decree, legally required before building work can begin on the site. The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support of the ASN, recently completed its assessment of the safety measures proposed for ITER, on the basis of this file and of additional technical documents from the operator. This paper presents the IRSN's main conclusions. In particular, they focus on the radioactive materials involved, the safety and radiation protection demonstration (suitability of risk management measures…), foreseeable accidents, the design of buildings and safety-important components and, finally, the wastes and effluents to be produced. This assessment was just the first legally required step in the ongoing safety monitoring of the ITER project, which will include further complete regulatory re-evaluations.

  14. Conformal mapping and convergence of Krylov iterations

    SciTech Connect

    Driscoll, T.A.; Trefethen, L.N.

    1994-12-31

    Connections between conformal mapping and matrix iterations have been known for many years. The idea underlying these connections is as follows. Suppose the spectrum of a matrix or operator A is contained in a Jordan region E in the complex plane with 0 ∉ E. Let φ(z) denote a conformal map of the exterior of E onto the exterior of the unit disk, with φ(∞) = ∞. Then 1/|φ(0)| is an upper bound for the optimal asymptotic convergence factor of any Krylov subspace iteration. This idea can be made precise in various ways, depending on the matrix iterations, on whether A is finite or infinite dimensional, and on what bounds are assumed on the non-normality of A. This paper explores these connections for a variety of matrix examples, making use of a new MATLAB Schwarz-Christoffel Mapping Toolbox developed by the first author. Unlike the earlier Fortran Schwarz-Christoffel package SCPACK, the new toolbox computes exterior as well as interior Schwarz-Christoffel maps, making it easy to experiment with spectra that are not necessarily symmetric about an axis.

  15. Recent ADI iteration analysis and results

    SciTech Connect

    Wachspress, E.L.

    1994-12-31

    Some recent ADI iteration analysis and results are discussed. Discovery that the Lyapunov and Sylvester matrix equations are model ADI problems stimulated much research on ADI iteration with complex spectra. The ADI rational Chebyshev analysis parallels the classical linear Chebyshev theory. Two distinct approaches have been applied to these problems. First, parameters which were optimal for real spectra were shown to be nearly optimal for certain families of complex spectra. In the linear case these were spectra bounded by ellipses in the complex plane. In the ADI rational case these were spectra bounded by "elliptic-function regions". The logarithms of the latter appear like ellipses, and the logarithms of the optimal ADI parameters for these regions are similar to the optimal parameters for linear Chebyshev approximation over superimposed ellipses. W.B. Jordan's bilinear transformation of real variables to reduce the two-variable problem to one variable was generalized into the complex plane. This was needed for ADI iterative solution of the Sylvester equation.

  16. ITER (International Thermonuclear Experimental Reactor) in perspective

    SciTech Connect

    Henning, C.D.

    1989-10-20

    The International Thermonuclear Experimental Reactor (ITER) team is completing the second year of a three-year conceptual design phase. The purpose of ITER is to demonstrate the scientific and technological feasibility of fusion power. It is to demonstrate plasma ignition and extended burn with steady state as the ultimate goal. In so doing, it is to provide the physics data base needed for a demonstration tokamak power reactor and to demonstrate reactor-relevant technologies, such as high-heat-flux and nuclear components for fusion power. To meet these objectives, many design compromises had to be reached by the participants following a careful review of the physics and technology base for fusion. The current ITER design features a 6-m major radius, a 2.15-m minor radius and a 22-MA plasma current. About 330 volt-seconds in the poloidal field system inductively drive the current for hundreds of seconds. Moreover, about 125 MW of neutral-beam, lower-hybrid, and electron-cyclotron power are provided for steady-state current drive and heating; all these systems are discussed in this paper. 3 refs., 6 figs., 7 tabs.

  17. The dynamics of iterated transportation simulations

    SciTech Connect

    Nagel, K.; Rickert, M.; Simon, P.M.

    1998-12-01

    Transportation-related decisions of people often depend on what everybody else is doing. For example, decisions about mode choice, route choice, activity scheduling, etc., can depend on congestion, caused by the aggregated behavior of others. From a conceptual viewpoint, this consistency problem causes a deadlock, since nobody can start planning because they do not know what everybody else is doing. It is the process of iterations that is examined in this paper as a method for solving the problem. The authors concentrate on the aspect of the iterative process that is probably the most important one from a practical viewpoint: the "uniqueness" or "robustness" of the results. They define robustness more in terms of common sense than in terms of a mathematical formalism: they not only want a single iterative process to converge, but also want the result to be independent of any particular implementation. The authors run many computational experiments, sometimes with variations of the same code, sometimes with totally different code, in order to see if any of the results are robust against these changes.
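
    The consistency deadlock and its iterative resolution can be shown in miniature with a two-route, day-to-day adjustment: each "day", a fraction of travellers re-plans against yesterday's congested travel times. This toy model is my own illustration of the iteration idea, not the large-scale microsimulations of the paper; the cost functions and re-planning rate are arbitrary.

```python
# Two parallel routes with linear congestion costs; each iteration ("day"),
# 10% of the 1000 travellers drift toward whichever route was faster
# yesterday. The share settles near the user-equilibrium split (~0.5 here)
# regardless of the starting point -- the kind of robustness the paper
# examines for full transportation microsimulations.
def iterate_routes(share_a=1.0, demand=1000, days=100, relearn=0.1):
    t_a = lambda x: 10 + 0.02 * x        # route A: free-flow 10 min + congestion
    t_b = lambda x: 15 + 0.01 * x        # route B: free-flow 15 min + congestion
    for _ in range(days):
        na = share_a * demand
        target = 1.0 if t_a(na) < t_b(demand - na) else 0.0
        share_a += relearn * (target - share_a)
    return share_a

print(iterate_routes(1.0), iterate_routes(0.0))  # both hover near 0.5
```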

  18. Performance assessment of the ITER ICRF antenna

    NASA Astrophysics Data System (ADS)

    Durodié, F.; Vrancken, M.; Bamber, R.; Colas, L.; Dumortier, P.; Hancock, D.; Huygen, S.; Lockley, D.; Louche, F.; Maggiora, R.; Milanesio, D.; Messiaen, A.; Nightingale, M. P. S.; Shannon, M.; Tigwell, P.; van Schoor, M.; Wilson, D.; Winkler, K.; Cycle Team

    2014-02-01

    ITER's Ion Cyclotron Range of Frequencies (ICRF) system [1] comprises two antenna launchers designed by CYCLE (a consortium of European associations listed in the author affiliations above) on behalf of F4E for the ITER Organisation (IO), each inserted as a Port Plug (PP) into one of ITER's Vacuum Vessel (VV) ports. Each launcher is an array of 4 toroidal by 6 poloidal RF current straps specified to couple up to 20 MW in total to the plasma in the frequency range of 40 to 55 MHz, but limited to a maximum system voltage of 45 kV and to limits on RF electric fields that depend on their location and direction with respect to the torus vacuum and the toroidal magnetic field, respectively. A crucial aspect of coupling ICRF power to plasmas is the knowledge of the plasma density profiles in the Scrape-Off Layer (SOL) and the location of the RF current straps with respect to the SOL. The launcher layout and details were optimized and its performance estimated for a worst-case SOL provided by the IO. The paper summarizes the estimated performance obtained within the operational parameter space specified by the IO. Aspects of the RF grounding of the whole antenna PP to the VV port and the effect of the voids between the PP and the Blanket Shielding Modules (BSM) surrounding the antenna front are discussed.

  19. Iterative solution of the semiconductor device equations

    SciTech Connect

    Bova, S.W.; Carey, G.F.

    1996-12-31

    Most semiconductor device models can be described by a nonlinear Poisson equation for the electrostatic potential coupled to a system of convection-reaction-diffusion equations for the transport of charge and energy. These equations are typically solved in a decoupled fashion and e.g. Newton's method is used to obtain the resulting sequences of linear systems. The Poisson problem leads to a symmetric, positive definite system which we solve iteratively using conjugate gradient. The transport equations lead to nonsymmetric, indefinite systems, thereby complicating the selection of an appropriate iterative method. Moreover, their solutions exhibit steep layers and are subject to numerical oscillations and instabilities if standard Galerkin-type discretization strategies are used. In the present study, we use an upwind finite element technique for the transport equations. We also evaluate the performance of different iterative methods for the transport equations and investigate various preconditioners for a few generalized gradient methods. Numerical examples are given for a representative two-dimensional depletion MOSFET.

  20. Fine-granularity and spatially-adaptive regularization for projection-based image deblurring.

    PubMed

    Li, Xin

    2011-04-01

    This paper studies two classes of regularization strategies to achieve an improved tradeoff between image recovery and noise suppression in projection-based image deblurring. The first is based on a simple fact that r-times Landweber iteration leads to a fixed level of regularization, which allows us to achieve fine-granularity control of projection-based iterative deblurring by varying the value of r. The regularization behavior is explained by using the theory of Lagrangian multiplier for variational schemes. The second class of regularization strategy is based on the observation that various regularized filters can be viewed as nonexpansive mappings in the metric space. A deeper understanding about different regularization filters can be gained by probing into their asymptotic behavior--the fixed point of nonexpansive mappings. By making an analogy to the states of matter in statistical physics, we can observe that different image structures (smooth regions, regular edges and textures) correspond to different fixed points of nonexpansive mappings when the temperature (regularization) parameter varies. Such an analogy motivates us to propose a deterministic annealing based approach toward spatial adaptation in projection-based image deblurring. Significant performance improvements over the current state-of-the-art schemes have been observed in our experiments, which substantiates the effectiveness of the proposed regularization strategies. PMID:20876018
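
    The first strategy is easy to make concrete: r steps of the Landweber iteration x_{k+1} = x_k + tau * H^T (b - H x_k) act as a regularized approximate inverse whose strength is controlled by r. The generic sketch below (mine, for a small matrix blur H rather than the paper's image-domain projections) shows the characteristic semiconvergence: the error dips and then rises as r grows.

```python
import numpy as np

def landweber(H, b, r, tau=None):
    """r-times Landweber iteration for H x = b.
    Small r = strong regularization; large r approaches the least-squares
    solution and increasingly amplifies noise."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(H, 2) ** 2   # step size ensuring convergence
    x = np.zeros(H.shape[1])
    for _ in range(r):
        x = x + tau * H.T @ (b - H @ x)
    return x

rng = np.random.default_rng(0)
H = np.tril(np.ones((60, 60))) / 60            # ill-conditioned "blur"
x_true = np.sin(np.linspace(0, 3, 60))
b = H @ x_true + 0.01 * rng.normal(size=60)    # noisy data
for r in (5, 50, 500, 5000):
    err = np.linalg.norm(landweber(H, b, r) - x_true)
    print(f"r={r:5d}  error={err:.3f}")        # error dips, then rises
```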

  1. Iterative procedure for in-situ EUV optical testing with an incoherent source

    SciTech Connect

    Miyawaka, Ryan; Naulleau, Patrick; Zakhor, Avideh

    2009-12-01

    We propose an iterative method for in-situ optical testing under partially coherent illumination that relies on the rapid computation of aerial images. In this method a known pattern is imaged with the test optic at several planes through focus. A model is created that iterates through possible aberration maps until the through-focus series of aerial images matches the experimental result. The computation time of calculating the through-focus series is significantly reduced by a-SOCS, an adapted form of the Sum Of Coherent Systems (SOCS) decomposition. In this method, the Hopkins formulation is described by an operator S which maps the space of pupil aberrations to the space of aerial images. This operator is well approximated by a truncated sum of its spectral components.

  2. An iterative particle filter approach for respiratory motion estimation in nuclear medicine imaging

    NASA Astrophysics Data System (ADS)

    Abd. Rahni, Ashrani Aizzuddin; Wells, Kevin; Lewis, Emma; Guy, Matthew; Goswami, Budhaditya

    2011-03-01

    The continual improvement in the spatial resolution of Nuclear Medicine (NM) scanners has made accurate compensation of patient motion increasingly important. A major source of corrupting motion in NM acquisition is respiration. A particle filter (PF) approach has therefore been proposed as a powerful method for motion correction in NM. The probabilistic view of the system in the PF is seen as an advantage, given the complexity and uncertainties in estimating respiratory motion. Previous tests using XCAT have shown the possibility of estimating unseen organ configurations using training data consisting of only a single respiratory cycle. This paper augments the application-specific adaptation methods previously implemented for better PF estimates with an iterative model-update step. Results show that errors are further reduced within a small number of iterations, and such improvements will be advantageous for the PF in coping with more realistic and complex applications.
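
    A minimal bootstrap particle filter conveys the predict-weight-resample cycle the paper builds on. This generic scalar example is my own; the paper's state is an organ-configuration model driven by a respiratory signal, with the iterative model-update step added on top.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, proc_std=0.3, obs_std=0.5):
    """Bootstrap particle filter for a random-walk state x_k = x_{k-1} + noise
    observed as y_k = x_k + noise. Returns posterior-mean estimates."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        particles += rng.normal(0.0, proc_std, n_particles)     # predict
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2) + 1e-12  # weight
        w /= w.sum()
        estimates.append(np.dot(w, particles))
        idx = rng.choice(n_particles, n_particles, p=w)          # resample
        particles = particles[idx]
    return np.array(estimates)

# pseudo-respiratory trace: noisy sine
t = np.linspace(0, 4 * np.pi, 100)
y = np.sin(t) + rng.normal(0, 0.5, t.size)
est = particle_filter(y)
print(np.mean((est - np.sin(t)) ** 2))   # well below the observation variance
```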

  3. Stability of resistive wall modes with plasma rotation and thick wall in ITER scenario

    NASA Astrophysics Data System (ADS)

    Zheng, L. J.; Kotschenreuther, M.; Chu, M.; Chance, M.; Turnbull, A.

    2004-11-01

    The rotation effect on resistive wall modes (RWMs) is examined for realistically shaped, high-beta tokamak equilibria, including reactor-relevant cases with low Mach number M and realistic thick walls. For low M, stabilization of RWMs arises from unusually thin inertial layers. The investigation employs the newly developed adaptive eigenvalue code (AEGIS: Adaptive EiGenfunction Independent Solution), which describes both low- and high-n modes and is in good agreement with GATO in benchmark studies. AEGIS is unique in using adaptive methods to resolve such inertial layers with low-Mach-number rotation. This feature is even more desirable for transport-barrier cases. Additionally, ITER and reactors have thick conducting walls (~0.5-1 m) which are not well modeled as a thin shell. Such thick walls are considered here, including semi-analytical approximations to account for the toroidally segmented nature of real walls.

  4. Overview of International Thermonuclear Experimental Reactor (ITER) engineering design activities*

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.

    1994-05-01

    The International Thermonuclear Experimental Reactor (ITER) [International Thermonuclear Experimental Reactor (ITER) (International Atomic Energy Agency, Vienna, 1988), ITER Documentation Series, No. 1] project is a multiphased project, presently proceeding under the auspices of the International Atomic Energy Agency according to the terms of a four-party agreement among the European Atomic Energy Community (EC), the Government of Japan (JA), the Government of the Russian Federation (RF), and the Government of the United States (US), "the Parties." The ITER project is based on the tokamak, a Russian invention that has since been brought to a high level of development in all major fusion programs in the world. The objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER design is being developed by the Joint Central Team, with support from the Parties' four Home Teams. An overview of ITER design activities is presented.

  5. Adaptive Development

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for the following reasons: current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g. distributed vectored, hybrid and electric drive propulsion concepts).

  6. Evaluation of ITER MSE Viewing Optics

    SciTech Connect

    Allen, S; Lerner, S; Morris, K; Jayakumar, J; Holcomb, C; Makowski, M; Latkowski, J; Chipman, R

    2007-03-26

    The Motional Stark Effect (MSE) diagnostic on ITER determines the local plasma current density by measuring the polarization angle of light resulting from the interaction of a high energy neutral heating beam and the tokamak plasma. This light signal has to be transmitted from the edge and core of the plasma to a polarization analyzer located in the port plug. The optical system should either preserve the polarization information, or it should be possible to reliably calibrate any changes induced by the optics. This LLNL Work for Others project for the US ITER Project Office (USIPO) is focused on the design of the viewing optics for both the edge and core MSE systems. Several design constraints were considered, including: image quality, lack of polarization aberrations, ease of construction and cost of mirrors, neutron shielding, and geometric layout in the equatorial port plugs. The edge MSE optics are located in ITER equatorial port 3 and view Heating Beam 5, and the core system is located in equatorial port 1 viewing heating beam 4. The current work is an extension of previous preliminary design work completed by the ITER central team (ITER resources were not available to complete a detailed optimization of this system, and then the MSE was assigned to the US). The optimization of the optical systems at this level was done with the ZEMAX optical ray tracing code. The final LLNL designs decreased the "blur" in the optical system by nearly an order of magnitude, and the polarization blur was reduced by a factor of 3. The mirror sizes were reduced with an estimated cost savings of a factor of 3. The throughput of the system was greater than or equal to the previous ITER design. It was found that optical ray tracing was necessary to accurately measure the throughput. Metal mirrors, while they can introduce polarization aberrations, were used close to the plasma because of the anticipated high heat, particle, and neutron loads. These mirrors formed an intermediate

  7. Hydropower, adaptive management, and biodiversity

    SciTech Connect

    Wieringa, M.J.; Morton, A.G.

    1996-11-01

    Adaptive management is a policy framework within which an iterative process of decision making is allowed based on the observed responses to and effectiveness of previous decisions. The use of adaptive management allows science-based research and monitoring of natural resource and ecological community responses, in conjunction with societal values and goals, to guide decisions concerning man's activities. The adaptive management process has been proposed for application to hydropower operations at Glen Canyon Dam on the Colorado River, a situation that requires complex balancing of natural resources requirements and competing human uses. This example is representative of the general increase in public interest in the operation of hydropower facilities and possible effects on downstream natural resources and of the growing conflicts between uses and users of river-based resources. This paper describes the adaptive management process, using the Glen Canyon Dam example, and discusses ways to make the process work effectively in managing downstream natural resources and biodiversity. 10 refs., 2 figs.

  8. Advanced Statistical Properties of Dispersing Billiards

    NASA Astrophysics Data System (ADS)

    Chernov, N.

    2006-03-01

    A new approach to statistical properties of hyperbolic dynamical systems emerged recently; it was introduced by L.-S. Young and modified by D. Dolgopyat. It is based on a coupling method borrowed from probability theory. We apply it here to one of the most physically interesting models—Sinai billiards. It allows us to derive a series of new results, as well as make significant improvements in the existing results. First we establish sharp bounds on correlations (including multiple correlations). Then we use our correlation bounds to obtain the central limit theorem (CLT), the almost sure invariance principle (ASIP), the law of the iterated logarithm, and integral tests.

  9. Parallel adaptive mesh refinement for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1996-12-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.

  10. The Role of Bridging Organizations in Enhancing Ecosystem Services and Facilitating Adaptive Management of Social-Ecological Systems

    EPA Science Inventory

    Adaptive management is an approach for monitoring the response of ecological systems to different policies and practices and attempts to reduce the inherent uncertainty in ecological systems via system monitoring and iterative decision making and experimentation (Holling 1978). M...

  11. Holographic imaging through a scattering medium by diffuser-assisted statistical averaging

    NASA Astrophysics Data System (ADS)

    Purcell, Michael J.; Kumar, Manish; Rand, Stephen C.

    2016-03-01

    The ability to image through a scattering or diffusive medium such as tissue or hazy atmosphere is a goal which has garnered extensive attention from the scientific community. Existing imaging methods in this field make use of phase conjugation, time of flight, iterative wave-front shaping or statistical averaging approaches, which tend to be either time consuming or complicated to implement. We introduce a novel and practical way of statistical averaging which makes use of a rotating ground glass diffuser to nullify the adverse effects caused by speckle introduced by a first static diffuser / aberrator. This is a Fourier transform-based, holographic approach which demonstrates the ability to recover detailed images and shows promise for further remarkable improvement. The present experiments were performed with 2D flat images, but this method could be easily adapted for recovery of 3D extended object information. The simplicity of the approach makes it fast, reliable, and potentially scalable as a portable technology. Since imaging through a diffuser has direct applications in biomedicine and defense technologies this method may augment advanced imaging capabilities in many fields.

  12. Using Action Research to Develop a Course in Statistical Inference for Workplace-Based Adults

    ERIC Educational Resources Information Center

    Forbes, Sharleen

    2014-01-01

    Many adults who need an understanding of statistical concepts have limited mathematical skills. They need a teaching approach that includes as little mathematical context as possible. Iterative participatory qualitative research (action research) was used to develop a statistical literacy course for adult learners informed by teaching in…

  13. Neuroendocrine Tumor: Statistics

    MedlinePlus

    Approved by the Cancer.Net Editorial Board, 04/ ... It is important to remember that statistics on how many people survive this type of ...

  14. Noise characterization of block-iterative reconstruction algorithms: II. Monte Carlo simulations.

    PubMed

    Soares, Edward J; Glick, Stephen J; Hoppin, John W

    2005-01-01

    In Soares et al. (2000), the ensemble statistical properties of the rescaled block-iterative expectation-maximization (RBI-EM) reconstruction algorithm and rescaled block-iterative simultaneous multiplicative algebraic reconstruction technique (RBI-SMART) were derived. Included in this analysis were the special cases of RBI-EM, maximum-likelihood EM (ML-EM) and ordered-subset EM (OS-EM), and the special case of RBI-SMART, SMART. Explicit expressions were found for the ensemble mean, covariance matrix, and probability density function of RBI reconstructed images, as a function of iteration number. The theoretical formulations relied on one approximation, namely that the noise in the reconstructed image was small compared to the mean image. In this paper, we evaluate the predictions of the theory by using Monte Carlo methods to calculate the sample statistical properties of each algorithm and then compare the results with the theoretical formulations. In addition, the validity of the approximation will be justified. PMID:15638190
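
    For reference, the best-known special case analyzed there, ML-EM, has the familiar multiplicative update x <- (x / A^T 1) * A^T (y / (A x)); OS-EM and the RBI variants apply rescaled versions of it over subsets of the data. A generic hedged sketch, not the paper's code:

```python
import numpy as np

def ml_em(A, y, iters=100):
    """Maximum-likelihood EM for Poisson data y ~ Poisson(A x), x >= 0."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)  # data / current projection
        x *= (A.T @ ratio) / sens             # multiplicative update
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (200, 50))          # toy system matrix
x_true = rng.uniform(1.0, 5.0, 50)
y = rng.poisson(A @ x_true)                   # Poisson measurements
x_hat = ml_em(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```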

  15. Adapting Animals.

    ERIC Educational Resources Information Center

    Wedman, John; Wedman, Judy

    1985-01-01

    The "Animals" program found on the Apple II and IIe system master disk can be adapted for use in the mathematics classroom. Instructions for making the necessary changes and suggestions for using it in lessons related to geometric shapes are provided. (JN)

  16. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  17. Adaptive homeostasis.

    PubMed

    Davies, Kelvin J A

    2016-06-01

    Homeostasis is a central pillar of modern Physiology. The term homeostasis was invented by Walter Bradford Cannon in an attempt to extend and codify the principle of 'milieu intérieur,' or a constant interior bodily environment, that had previously been postulated by Claude Bernard. Clearly, 'milieu intérieur' and homeostasis have served us well for over a century. Nevertheless, research on signal transduction systems that regulate gene expression, or that cause biochemical alterations to existing enzymes, in response to external and internal stimuli, makes it clear that biological systems are continuously making short-term adaptations both to set-points, and to the range of 'normal' capacity. These transient adaptations typically occur in response to relatively mild changes in conditions, to programs of exercise training, or to sub-toxic, non-damaging levels of chemical agents; thus, the terms hormesis, heterostasis, and allostasis are not accurate descriptors. Therefore, an operational adjustment to our understanding of homeostasis suggests that the modified term, Adaptive Homeostasis, may be useful especially in studies of stress, toxicology, disease, and aging. Adaptive Homeostasis may be defined as follows: 'The transient expansion or contraction of the homeostatic range in response to exposure to sub-toxic, non-damaging, signaling molecules or events, or the removal or cessation of such molecules or events.' PMID:27112802

  18. Statistical Symbolic Execution with Informed Sampling

    NASA Technical Reports Server (NTRS)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

    Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
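
    The Bayesian side of the estimation is simple to sketch: with a Beta prior over the probability of hitting the target event, Monte Carlo path outcomes update the posterior, and an exact analysis of pruned paths removes their probability mass from the sampled remainder. The toy version below is my own reading of that combination, not the Symbolic PathFinder implementation; all names and numbers are illustrative.

```python
from math import sqrt

def posterior_estimate(hits, trials, exact_mass=0.0, exact_hit=0.0,
                       a0=1.0, b0=1.0):
    """Beta-Bernoulli estimate of the probability of reaching a target event.
    exact_mass : total path probability already resolved exactly (pruned
                 paths), of which exact_hit reaches the target; sampling
                 covers only the remaining 1 - exact_mass.
    Toy model: assumes samples are drawn from the unresolved mass."""
    a, b = a0 + hits, b0 + (trials - hits)
    mean = a / (a + b)                            # posterior mean in remainder
    var = a * b / ((a + b) ** 2 * (a + b + 1))    # posterior variance
    p = exact_hit + (1 - exact_mass) * mean
    return p, (1 - exact_mass) * sqrt(var)        # estimate, ~std deviation

# 10,000 sampled paths, 37 reach the assert; 30% of the path space was
# pruned and analyzed exactly, contributing 0.001 of hit probability.
print(posterior_estimate(37, 10_000, exact_mass=0.30, exact_hit=0.001))
```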

  19. Test Strategy for the European HCPB Test Blanket Module in ITER

    SciTech Connect

    Boccaccini, L.V.; Meyder, R.; Fischer, U.

    2005-05-15

    According to the European Blanket Programme, two blanket concepts, the Helium Cooled Pebble Bed (HCPB) and the Helium Cooled Lithium Lead (HCLL), will be tested in ITER. During 2004 the test blanket modules (TBM) of both concepts were redesigned with the goal of using, as much as possible, similar design options and fabrication techniques for both types in order to reduce the European effort for TBM development. The result is a robust TBM box able to withstand 8 MPa internal pressure in case of an in-box LOCA; the TBM box consists of the first wall (FW), caps, stiffening grid and manifolds. The box is filled with typically 18 and 24 breeding units (BU) for the HCPB and HCLL, respectively. A breeding unit measures about 200 mm in the poloidal and toroidal directions and about 400 mm in the radial direction; the design is adapted to contain and cool ceramic breeder/beryllium pebble beds for the HCPB and eutectic lithium-lead for the HCLL. The use of a new material, EUROFER, and the innovative design of these helium-cooled components call for a large qualification programme before installation in ITER; the availability and safety of ITER should not be jeopardised by a failure of these components. Fabrication technologies, especially the welding processes (diffusion welding, EB, TIG, LASER), need to be tested in the manufacturing of large mock-ups; an extensive out-of-pile programme in a helium facility should be foreseen for the verification of the concept, from basic helium cooling functions (uniformity of flow in parallel channels, heat transfer coefficient in the FW, etc.) up to the verification of large portions of the TBM design under relevant ITER loading. In ITER the TBM will have the main objective of collecting information that will contribute to the final design of DEMO blankets. A strategy proposed in 2001 leads to testing four different Test Blanket Module (TBM) types in ITER during the first 10 years of ITER operation. For the new HCPB design this strategy is confirmed with

  20. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    SciTech Connect

    Zheng Guoyan

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: Iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark
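
    The "statistical instantiation" stage can be sketched generically: a statistical shape model expresses any instance as the mean shape plus a weighted sum of principal modes, and the mode weights are fit from sparse constraints with a Tikhonov penalty pulling the solution toward the mean. The code below is a hedged, generic SSM fit of this kind, not the paper's 2D/3D pipeline; the regularization form is an assumption standing in for the paper's regularized shape deformation.

```python
import numpy as np

def build_ssm(shapes):
    """PCA shape model; `shapes` holds one flattened shape vector per row."""
    mean = shapes.mean(axis=0)
    _, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)          # variance of each mode
    return mean, Vt, var

def instantiate(mean, Vt, var, obs_idx, obs_vals, reg=1.0):
    """Fit mode weights b so that (mean + Vt.T b)[obs_idx] ~ obs_vals,
    with a Tikhonov penalty reg * sum_k b_k^2 / var_k pulling b toward
    the mean shape. Returns the full reconstructed shape vector."""
    P = Vt[:, obs_idx].T                      # (n_obs, n_modes)
    r = obs_vals - mean[obs_idx]
    A = P.T @ P + reg * np.diag(1.0 / var)
    b = np.linalg.solve(A, P.T @ r)
    return mean + Vt.T @ b

# toy usage: 20 training "shapes", reconstruct one from 6 sparse coordinates
rng = np.random.default_rng(0)
train = rng.normal(size=(20, 30))
mean, Vt, var = build_ssm(train)
idx = np.arange(0, 30, 5)
rec = instantiate(mean, Vt, var, idx, train[0, idx])
print(np.abs(rec[idx] - train[0, idx]).max())  # observed coords nearly matched
```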

  1. Final Report on ITER Task Agreement 81-08

    SciTech Connect

    Richard L. Moore

    2008-03-01

    As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.

  2. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work: estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well-separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods have proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described
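
    As a concrete illustration of one of the techniques named above, the sketch below runs parallel tempering on a toy one-dimensional double-well density, with replica swaps accepted by the Metropolis criterion. The potential, temperature ladder and step sizes are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
U = lambda x: x**4 - 4.0*x**2          # double well with minima at +/- sqrt(2)
betas = 1.0 / np.array([1.0, 2.0, 4.0, 8.0])   # temperature ladder
x = rng.normal(size=betas.size)        # one walker per temperature
samples = []

for sweep in range(20000):
    # Metropolis update within each replica
    prop = x + rng.normal(scale=0.5, size=x.size)
    accept = rng.random(x.size) < np.exp(np.minimum(0, -betas*(U(prop) - U(x))))
    x = np.where(accept, prop, x)
    # attempt a swap between one random pair of adjacent temperatures
    i = rng.integers(betas.size - 1)
    if np.log(rng.random()) < (betas[i] - betas[i+1]) * (U(x[i]) - U(x[i+1])):
        x[i], x[i+1] = x[i+1], x[i]
    samples.append(x[0])               # keep only the target (beta = 1) chain

samples = np.asarray(samples[2000:])   # discard burn-in
print("fraction in right-hand well:", np.mean(samples > 0))  # close to 0.5
```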

  3. Research on JET in view of ITER

    NASA Astrophysics Data System (ADS)

    Pamela, Jerome; Ongena, Jef; Watkins, Michael

    2004-11-01

    Research on JET is focused on further development of the two ITER reference plasma scenarios. The ELMy H-mode has been extended to lower rho* at high beta and q_95 = 3, simultaneously with H_98 = 0.9 and f_GW = 0.9 at I_p = 3.5 MA. The dependence of confinement on beta and rho* has been found to be more favorable than given by the IPB98(y,2) scaling. Highlights in the development of Advanced Regimes with Internal Transport Barriers (ITBs) and strong reversed shear (q_0 = 2-3, q_min = 1.5-2.5) are: (i) operation at a core density close to the Greenwald limit; (ii) full current drive in 3 T/1.8 MA ITB plasmas extended to 20 seconds with a JET record injected energy of E ≈ 330 MJ; (iii) ITB plasmas with Te ≈ Ti ≈ 7 keV at low toroidal rotation; and (iv) wide-radius ITBs (r/a = 0.6). Furthermore, emphasis in JET is placed on (i) mitigating the impact of ELMs, (ii) understanding the phenomena leading to tritium retention and (iii) preparing burning plasma physics. Recent developments on JET in view of ITER are: (i) real-time control in both ELMy H-mode and ITB plasmas and (ii) an upgrade of JET with (a) increased NBI power, (b) a new ELM-resilient ITER-like ICRH antenna (7 MW) to be tested in 2006 and (c) 16 new and upgraded diagnostics.

  4. Corneal topography matching by iterative registration.

    PubMed

    Wang, Junjie; Elsheikh, Ahmed; Davey, Pinakin G; Wang, Weizhuo; Bao, Fangjun; Mottershead, John E

    2014-11-01

    Videokeratography is used for the measurement of corneal topography in overlapping portions (or maps) which must later be joined together to form the overall topography of the cornea. The separate portions are measured from different viewpoints and therefore must be brought together by registration of measurement points in the regions of overlap. The central map is generally the most accurate, but all maps are measured with uncertainty that increases towards the periphery. It becomes the reference (or static) map, and the peripheral (or dynamic) maps must then be transformed by rotation and translation so that the overlapping portions are matched. The process of determining the necessary transformation, known as registration, is a well-understood procedure in image analysis and has been applied in several areas of science and engineering. In this article, direct search optimisation using the Nelder-Mead algorithm and several variants of the iterative closest/corresponding point routine are explained and applied to simulated and real clinical data. The measurement points on the static and dynamic maps are generally different, so it becomes necessary to interpolate, which is done using a truncated series of Zernike polynomials. The point-to-plane iterative closest/corresponding point variant has the advantage of releasing certain optimisation constraints that lead to persistent registration and alignment errors when other approaches are used. The point-to-plane iterative closest/corresponding point routine is found to be robust to measurement noise, insensitive to starting values of the transformation parameters and produces high-quality results when using real clinical data. PMID:25500860
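
    A minimal sketch of the direct-search variant on invented 2-D data: Nelder-Mead searches over a rigid transform (angle and translation) that minimizes the summed squared nearest-neighbour distance between the overlapping maps. The clinical pipeline operates on 3-D corneal maps and interpolates with Zernike polynomials; neither is reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
static = rng.uniform(-1, 1, size=(300, 2))        # reference (static) map
angle, shift = 0.1, np.array([0.05, -0.02])       # true mis-alignment
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
dynamic = static @ R.T + shift + rng.normal(scale=0.003, size=(300, 2))

tree = cKDTree(static)

def cost(p):
    """Sum of squared nearest-neighbour distances after undoing (a, tx, ty)."""
    a, tx, ty = p
    Ra = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    moved = (dynamic - [tx, ty]) @ Ra             # inverse rigid transform
    return np.sum(tree.query(moved)[0] ** 2)

res = minimize(cost, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print(res.x)   # close to [0.1, 0.05, -0.02]
```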

  5. Cryogenic High Voltage Insulation Breaks for ITER

    NASA Astrophysics Data System (ADS)

    Kovalchuk, O. A.; Safonov, A. V.; Rodin, I. Yu.; Mednikov, A. A.; Lancetov, A. A.; Klimchenko, Yu. A.; Grinchenko, V. A.; Voronin, N. M.; Smorodina, N. V.; Bursikov, A. S.

    High voltage insulation breaks are used in cryogenic lines carrying gas or liquid (helium, hydrogen, nitrogen, etc.) in the temperature range of 4.2-300 K at pressures up to 30 MPa to insulate parts of an electrophysical facility at different electrical potentials. In 2013 JSC "NIIEFA" delivered 95 high voltage insulation breaks to the IO ITER, i.e. 65 breaks with spiral channels and 30 breaks with uniflow channels. These high voltage insulation breaks were designed, manufactured and tested in accordance with the ITER Technical Specification "Axial Insulating Breaks for the Qualification Phase of ITER Coils and Feeders". Each high voltage insulation break consists of a glass-reinforced plastic cylinder equipped with channels for the cryoagent and stainless steel end fittings. The operating voltage is 30 kV for the breaks with spiral channels (30 kV HV IBs) and 4 kV for the breaks with uniflow channels (4 kV HV IBs). The main design feature of the 30 kV HV IBs is the use of spiral channels instead of a linear one. This approach has enabled us to increase the breakdown voltage and decrease the overall dimensions of the high voltage insulation breaks. In 2013 the manufacturing technique was developed to produce the high voltage insulation breaks with the spiral and uniflow channels, which made it possible to proceed to serial production. A special test facility was prepared for the acceptance tests of the breaks. Helium tightness tests at 10^-11 m^3 Pa/s under pressures up to 10 MPa, high voltage tests up to 135 kV and different types of mechanical tests were carried out at room and liquid nitrogen temperatures.

  6. Fast iterative reconstructions for animal CT

    NASA Astrophysics Data System (ADS)

    Huang, H.-M.; Hsiao, I.-T.; Jan, M.-L.

    2009-06-01

    For iterative x-ray computed tomography (CT) reconstruction, the convex algorithm combined with ordered subsets (OSC) [1] is a relatively fast algorithm and has shown its potential for low-dose situations, but it needs one forward projection and two backprojections per iteration. Unlike the convex algorithm, the gradient algorithm requires only one forward projection and one backprojection per iteration. Here, we applied ordered subsets of projection data to a modified gradient algorithm. To further reduce computation time, the new algorithm, the ordered subset gradient (OSG) algorithm, can be adjusted with a step size. We also implemented another OS-type algorithm called OSTR. The OSG algorithm is compared with the OSC and OSTR algorithms using three-dimensional simulated helical cone-beam CT data. The performance is evaluated in terms of log-likelihood, contrast recovery, and bias-variance studies. Results show that OSG images have visual quality comparable to those of OSC and OSTR, but in the resolution and bias-variance studies OSG seems to reach stable values more quickly. In particular, OSTR has better recovery in smooth regions, whereas both OSG and OSC have better recovery in high-frequency regions. Moreover, in terms of log-likelihood versus computation time, OSG converges faster than OSC and at a rate similar to OSTR. We conclude that OSG has the potential to provide comparable image quality while being more computationally efficient, and thus could be suitable for low-dose, helical cone-beam CT image reconstruction.
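
    The ordered-subset idea itself is easy to show in miniature. The sketch below runs a plain ordered-subsets gradient descent on a toy linear least-squares problem standing in for the CT forward model; the matrix, subset count and step size are illustrative and do not follow the OSG algorithm's exact update.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_pix, n_subsets = 240, 64, 8
A = rng.random((n_rays, n_pix))        # toy stand-in for the system matrix
x_true = rng.random(n_pix)
b = A @ x_true                         # noiseless projections

subsets = np.array_split(rng.permutation(n_rays), n_subsets)
x = np.zeros(n_pix)
for it in range(50):
    for s in subsets:                  # one cheap sub-iteration per subset
        g = A[s].T @ (A[s] @ x - b[s])          # gradient from this subset
        x -= 2e-4 * n_subsets * g               # scaled as if all data seen
    if it % 10 == 0:
        print(it, np.linalg.norm(A @ x - b))    # residual keeps dropping
```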

  7. Generalized iterative deconvolution for receiver function estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yinzhi; Pavlis, Gary L.

    2016-02-01

    This paper describes a generalization of the iterative deconvolution method commonly used as a component of passive array wavefield imaging. We show that the iterative method should be thought of as a sparse output deconvolution method with the number of terms retained dependent on the convergence criteria. The generalized method we introduce uses an inverse operator to shape the assumed wavelet into a function peaked at zero lag. We show that the conventional method is equivalent to using a damped least-squares spiking filter with extremely large damping and proper scaling; in that case, the inverse operator used in the generalized method reduces to the cross-correlation operator. The theoretical insight that the output is a sparse series provides a basis for the second important addition of the generalized method: an output shaping wavelet. A constant output shaping wavelet is a critical component in scattered wave imaging to avoid mixing data of variable bandwidth. We demonstrate that the new approach can improve resolution by using an inverse operator tuned to maximize resolution. We also show that the signal-to-noise ratio of the result can be improved by applying a different convergence criterion than the standard method, which measures the energy left after each iteration. The efficacy of the approach was evaluated with synthetic experiments under various signal and noise conditions. We further validated the approach with real data from the USArray. We compared our results with data from the EarthScope Automated Receiver Survey and found that our results show modest improvements in consistency, measured by correlation coefficients with station stacks, and a reduced number of outliers.
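
    The conventional method being generalized can be sketched compactly: greedy sparse-spike deconvolution that repeatedly cross-correlates the residual with the assumed wavelet, places a spike at the best lag, and subtracts the fit. The wavelet and spike series below are invented toy data.

```python
import numpy as np

def iterative_deconvolution(d, w, n_spikes=10):
    """Greedy sparse deconvolution: find the lag where wavelet w best
    matches the residual, add a spike there, subtract the fit, repeat."""
    spikes = np.zeros(len(d) - len(w) + 1)    # valid spike positions
    r = d.copy()
    for _ in range(n_spikes):
        c = np.correlate(r, w, mode="valid")  # cross-correlation with wavelet
        k = np.argmax(np.abs(c))
        amp = c[k] / np.dot(w, w)             # least-squares spike amplitude
        spikes[k] += amp
        r[k:k + len(w)] -= amp * w            # remove this arrival
    return spikes, r

# toy demo: two arrivals observed through a Gaussian wavelet
t = np.arange(-10, 11)
w = np.exp(-0.5 * (t / 2.0) ** 2)
s_true = np.zeros(80); s_true[20], s_true[45] = 1.0, -0.6
d = np.convolve(s_true, w)
est, resid = iterative_deconvolution(d, w, n_spikes=2)
print(np.flatnonzero(np.round(est, 2)))       # -> [20 45]
```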

  8. Experimental studies of ITER demonstration discharges

    NASA Astrophysics Data System (ADS)

    Sips, A. C. C.; Casper, T. A.; Doyle, E. J.; Giruzzi, G.; Gribov, Y.; Hobirk, J.; Hogeweij, G. M. D.; Horton, L. D.; Hubbard, A. E.; Hutchinson, I.; Ide, S.; Isayama, A.; Imbeaux, F.; Jackson, G. L.; Kamada, Y.; Kessel, C.; Kochl, F.; Lomas, P.; Litaudon, X.; Luce, T. C.; Marmar, E.; Mattei, M.; Nunes, I.; Oyama, N.; Parail, V.; Portone, A.; Saibene, G.; Sartori, R.; Stober, J. K.; Suzuki, T.; Wolfe, S. M.; C-Mod Team; ASDEX Upgrade Team; DIII-D Team; JET EFDA Contributors

    2009-08-01

    Key parts of the ITER scenarios are determined by the capability of the proposed poloidal field (PF) coil set. They include the plasma breakdown at low loop voltage, the current rise phase, the performance during the flat top (FT) phase and a ramp down of the plasma. The ITER discharge evolution has been verified in dedicated experiments. New data are obtained from C-Mod, ASDEX Upgrade, DIII-D, JT-60U and JET. Results show that breakdown for E_axis < 0.23-0.33 V m^-1 is possible unassisted (ohmic) for large devices like JET and attainable in devices with a capability of using ECRH assist. For the current ramp up, good control of the plasma inductance is obtained using a full bore plasma shape with early X-point formation. This allows optimization of the flux usage from the PF set. Additional heating keeps li(3) < 0.85 during the ramp up to q_95 = 3. A rise phase with an H-mode transition is capable of achieving li(3) < 0.7 at the start of the FT. Operation of the H-mode reference scenario at q_95 ~ 3 and the hybrid scenario at q_95 = 4-4.5 during the FT phase is documented, providing data for the li(3) evolution after the H-mode transition and the li(3) evolution after a back-transition to L-mode. During the ITER ramp down it is important to remain diverted and to reduce the elongation. The inductance could be kept <= 1.2 during the first half of the current decay, using a slow I_p ramp down, but still consuming flux from the transformer. Alternatively, the discharges can be kept in H-mode during most of the ramp down, requiring significant amounts of additional heating.

  9. Enhancing multiple-point geostatistical modeling: 2. Iterative simulation and multiple distance function

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Sahimi, Muhammad

    2016-03-01

    This series addresses a fundamental issue in multiple-point statistical (MPS) simulation for the generation of realizations of large-scale porous media. Past methods suffer from the fact that they generate discontinuities and patchiness in the realizations that, in turn, affect their flow and transport properties. Part I of this series addressed certain aspects of this fundamental issue and proposed two ways of improving one such MPS method, namely, the cross correlation-based simulation (CCSIM) method previously proposed by the authors. In the present paper, a new algorithm is proposed to further improve the quality of the realizations. The method utilizes the realizations generated by the algorithm introduced in Part I, iteratively removes any possible remaining discontinuities in them, and addresses the problem of honoring hard (quantitative) data, using an error map. The map represents the differences between the patterns in the training image (TI) and the current iteration of a realization. The resulting iterative CCSIM, the iCCSIM algorithm, utilizes a random path and the error map to identify the locations in the current realization that need further "repairing," that is, those locations at which discontinuities may still exist. The computational time of the new iterative algorithm is considerably lower than that of one in which every cell of the simulation grid is visited in order to repair the discontinuities. Furthermore, several efficient distance functions are introduced by which one effectively extracts key information from the TIs. To increase the quality of the realizations and extract the maximum amount of information from the TIs, the distance functions can be used simultaneously. The performance of the iCCSIM algorithm is studied using very complex 2-D and 3-D examples, including those that are process-based. Comparison is made between the quality and accuracy of the results with those generated by the original CCSIM
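
    The error-map idea can be illustrated in a few lines: for every patch of a realization, measure the distance to its best-matching training-image patch, so that large values flag pattern discontinuities to revisit. The brute-force search, binary toy images and patch size below are purely illustrative and do not reproduce the paper's distance functions or random-path repair loop.

```python
import numpy as np

def patches(img, p):
    """All p-by-p patches of img, flattened to rows."""
    H, W = img.shape
    return np.array([img[i:i+p, j:j+p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)])

rng = np.random.default_rng(2)
ti = (rng.random((40, 40)) < 0.5).astype(float)      # toy binary training image
real = ti[5:30, 5:30].copy()                         # realization = exact crop
real[10:14, 10:14] = 1.0 - real[10:14, 10:14]        # implanted discontinuity

p = 5
ti_patches = patches(ti, p)
err = np.array([np.min(np.sum((ti_patches - q) ** 2, axis=1))
                for q in patches(real, p)])          # distance to best TI match
err_map = err.reshape(real.shape[0] - p + 1, real.shape[1] - p + 1)
# the largest error lands inside the implanted defect region
print("most suspect patch at", np.unravel_index(err_map.argmax(), err_map.shape))
```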

  10. Quantization noise in adaptive weighting networks

    NASA Astrophysics Data System (ADS)

    Davis, R. M.; Sher, P. J.-S.

    1984-09-01

    Adaptive weighting networks can be implemented using in-phase and quadrature, phase-phase, or phase-amplitude modulators. The statistical properties of the quantization error are derived for each modulator, and the quantization noise power produced by the modulators is compared at the output of an adaptive antenna. Other relevant characteristics of the three types of modulators are also discussed.
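
    A rough numerical illustration of such comparisons, assuming ideal uniform quantizers: the sketch quantizes unit-modulus complex weights with b-bit in-phase/quadrature rails versus a b-bit phase-only modulator, and checks the error powers against the familiar Delta^2/12 uniform-quantization variance. The bit width and weight distribution are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(3)
b = 4                                        # bits per quantizer
w = np.exp(1j * rng.uniform(0, 2*np.pi, 100000))     # unit-modulus weights

# I/Q modulator: quantize real and imaginary rails over [-1, 1]
delta = 2.0 / 2**b
wq_iq = (np.round(w.real/delta) + 1j*np.round(w.imag/delta)) * delta

# phase-only modulator: quantize the phase to 2^b levels
dphi = 2*np.pi / 2**b
wq_ph = np.exp(1j * np.round(np.angle(w)/dphi) * dphi)

print("I/Q error power     :", np.mean(np.abs(wq_iq - w)**2))
print("theory 2*delta^2/12 :", 2*delta**2/12)        # approximately matches
print("phase error power   :", np.mean(np.abs(wq_ph - w)**2))
print("theory dphi^2/12    :", dphi**2/12)           # small-angle approximation
```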

  11. Fuzzy logic components for iterative deconvolution systems

    NASA Astrophysics Data System (ADS)

    Northan, Brian M.

    2013-02-01

    Deconvolution systems rely heavily on expert knowledge and would benefit from approaches that capture this expert knowledge. Fuzzy logic is an approach that is used to capture expert knowledge rules and produce outputs that range in degree. This paper describes a fuzzy deconvolution system that integrates traditional Richardson-Lucy deconvolution with fuzzy components. The system is intended for the restoration of 3D widefield images taken under conditions of refractive index mismatch. The system uses a fuzzy rule set for calculating sample refractive index, a fuzzy median filter for inter-iteration noise reduction, and a fuzzy rule set for stopping criteria.
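
    The classical core of such a system can be sketched as follows, with the fuzzy components replaced by a plain median filter between iterations; the 1-D PSF, iteration counts and filter size are invented for the demo and do not reflect the paper's fuzzy rule sets.

```python
import numpy as np
from scipy.ndimage import median_filter

def rl_deconvolve(d, psf, n_iter=50, denoise_every=5):
    """Richardson-Lucy with a median filter applied between iterations."""
    psf = psf / psf.sum()
    psf_T = psf[::-1]                        # adjoint of convolution
    x = np.full_like(d, d.mean())
    for k in range(1, n_iter + 1):
        blur = np.convolve(x, psf, mode="same")
        x *= np.convolve(d / np.maximum(blur, 1e-12), psf_T, mode="same")
        if k % denoise_every == 0:
            x = median_filter(x, size=3)     # inter-iteration noise reduction
    return x

# toy demo: two bright points blurred by a Gaussian PSF plus noise
rng = np.random.default_rng(4)
x_true = np.zeros(128); x_true[40], x_true[80] = 5.0, 3.0
psf = np.exp(-0.5*(np.arange(-6, 7)/2.0)**2)
d = np.convolve(x_true, psf/psf.sum(), mode="same")
d = np.maximum(d + rng.normal(scale=0.01, size=d.size), 1e-6)
print(np.argsort(rl_deconvolve(d, psf))[-2:])   # peaks near 40 and 80
```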

  12. Iterative repair for scheduling and rescheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Deale, Michael

    1991-01-01

    An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill-climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results of applying the technique to the NASA Space Shuttle ground processing problem are also shown. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
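
    A minimal sketch of iterative repair with a Metropolis (simulated annealing) acceptance rule, on an invented toy problem: assign 30 unit tasks to 10 time slots of capacity 3, repairing one violating task per step. The cost counts capacity violations, and the cooling schedule and constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
n_tasks, n_slots, cap = 30, 10, 3
slot = rng.integers(n_slots, size=n_tasks)          # initial schedule

def cost(slot):
    counts = np.bincount(slot, minlength=n_slots)
    return int(np.sum(np.maximum(counts - cap, 0))) # total over-capacity

T = 2.0
while cost(slot) > 0:
    counts = np.bincount(slot, minlength=n_slots)
    bad = np.flatnonzero(counts[slot] > cap)        # tasks in violated slots
    i = rng.choice(bad)                             # repair one violation
    trial = slot.copy()
    trial[i] = rng.integers(n_slots)                # move it to a random slot
    dE = cost(trial) - cost(slot)
    if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
        slot = trial
    T *= 0.999                                      # cool slowly
print("conflict-free schedule:", np.bincount(slot, minlength=n_slots))
```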

  13. Deterministic convergence in iterative phase shifting

    SciTech Connect

    Luna, Esteban; Salas, Luis; Sohn, Erika; Ruiz, Elfego; Nunez, Juan M.; Herrera, Joel

    2009-03-10

    Previous implementations of the iterative phase shifting method, in which the phase of a test object is computed from measurements using a phase shifting interferometer with unknown positions of the reference, do not provide an accurate way of knowing when convergence has been attained. We present a new approach to this method that allows us to deterministically identify convergence. The method is tested with a home-built Fizeau interferometer that measures optical surfaces polished to λ/100 using the Hydra tool. The intrinsic quality of the measurements is better than 0.5 nm. Other possible applications for this technique include fringe projection or any problem where phase shifting is involved.
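
    One widely used iterative phase-shifting scheme with unknown shifts alternates two least-squares stages: solve for the per-pixel phase given the current shift estimates, then for the per-frame shifts given the phase. The sketch below implements that alternation on synthetic fringes with uniform background and modulation; it is not the convergence test proposed in this paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n_pix, n_frames = 2000, 5
phi = rng.uniform(-np.pi, np.pi, n_pix)             # unknown test phase
delta_true = np.array([0.0, 1.3, 2.2, 3.4, 4.4])    # unknown actual shifts
I = 1.0 + 0.7*np.cos(phi[None, :] - delta_true[:, None])   # fringe frames

delta = np.linspace(0, 2*np.pi, n_frames, endpoint=False)  # rough guess
for _ in range(30):
    # (a) per-pixel phase, treating the current shifts as known
    G = np.c_[np.ones(n_frames), np.cos(delta), np.sin(delta)]
    c = np.linalg.lstsq(G, I, rcond=None)[0]
    phi_est = np.arctan2(c[2], c[1])
    # (b) per-frame shifts, treating the current phase as known
    Gp = np.c_[np.ones(n_pix), np.cos(phi_est), np.sin(phi_est)]
    cf = np.linalg.lstsq(Gp, I.T, rcond=None)[0]
    delta = np.arctan2(cf[2], cf[1])

# shifts are recovered up to a global offset
print(np.round((delta - delta[0]) % (2*np.pi), 3))  # ~[0, 1.3, 2.2, 3.4, 4.4]
```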

  14. Nonlinear Burn Control and Operating Point Optimization in ITER

    NASA Astrophysics Data System (ADS)

    Boyer, Mark; Schuster, Eugenio

    2013-10-01

    Control of the fusion power through regulation of the plasma density and temperature will be essential for achieving and maintaining desired operating points in fusion reactors and burning plasma experiments like ITER. In this work, a volume averaged model for the evolution of the density of energy, deuterium and tritium fuel ions, alpha-particles, and impurity ions is used to synthesize a multi-input multi-output nonlinear feedback controller for stabilizing and modulating the burn condition. Adaptive control techniques are used to account for uncertainty in model parameters, including particle confinement times and recycling rates. The control approach makes use of the different possible methods for altering the fusion power, including adjusting the temperature through auxiliary heating, modulating the density and isotopic mix through fueling, and altering the impurity density through impurity injection. Furthermore, a model-based optimization scheme is proposed to drive the system as close as possible to desired fusion power and temperature references. Constraints are considered in the optimization scheme to ensure that, for example, density and beta limits are avoided, and that optimal operation is achieved even when actuators reach saturation. Supported by the NSF CAREER award program (ECCS-0645086).
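
    A drastically simplified 0-D illustration of the burn-control problem, not the authors' model: stored energy follows dE/dt = P_alpha + P_aux - E/tau_E with a crude P_alpha ~ k*E^2 (so the open loop can run away), and a saturated proportional controller on the auxiliary heating holds the operating point. All numbers are invented, not ITER parameters.

```python
import numpy as np

tau_E, k = 3.0, 0.004        # confinement time [s], crude alpha coefficient
E_ref, Kp = 60.0, 5.0        # target stored energy [MJ], feedback gain
P_max = 50.0                 # actuator saturation [MW]

E, dt = 20.0, 0.01
for step in range(int(60/dt)):
    P_aux = np.clip(Kp*(E_ref - E), 0.0, P_max)   # heating cannot be negative
    dE = k*E**2 + P_aux - E/tau_E                 # 0-D energy balance
    E += dt*dE                                    # explicit Euler step
print(round(E, 1))           # settles just below E_ref with these gains
```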

  15. Iterative learning control for the filling of wet clutches

    NASA Astrophysics Data System (ADS)

    Pinte, G.; Depraetere, B.; Symens, W.; Swevers, J.; Sas, P.

    2010-10-01

    This paper discusses the development of an advanced iterative learning control (ILC) scheme for the filling of wet clutches. In the presented scheme, the appropriate actuator signal for a new clutch engagement is learned automatically based on the quality of previous engagements, such that time-consuming and cumbersome calibrations can be avoided. First, an ILC controller, which uses the position of the piston as control input, is developed and tested on a non-rotating clutch under well-controlled conditions. Afterwards, a similar strategy is tested on a rotating set-up, where a pressure sensor is used as the input of the ILC controller. On a higher level, both the position and the pressure controller are extended with a second learning algorithm that adapts the reference position/pressure to account for environmental changes which cannot be learned by the low-level ILC controller. It is shown that a strong reduction of the transmitted torque level as well as a significant shortening of the engagement time can be achieved with the developed strategy, compared to traditional time-invariant control strategies.
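
    The core ILC update is compact enough to sketch: over repeated trials (engagements) the feedforward input is corrected with the previous trial's tracking error, u_{k+1}(t) = u_k(t) + gamma*e_k(t+1). The first-order plant and gains below merely stand in for the clutch hydraulics.

```python
import numpy as np

# toy stable first-order plant: y[t+1] = 0.9*y[t] + 0.5*u[t]
a_p, b_p, T = 0.9, 0.5, 50
ref = np.sin(np.linspace(0, np.pi, T))        # desired trajectory

def run_trial(u):
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = a_p*y[t] + b_p*u[t]
    return y

u = np.zeros(T)
gamma = 1.0                                   # converges while |1 - gamma*b_p| < 1
for k in range(30):
    e = ref - run_trial(u)
    u[:-1] += gamma * e[1:]                   # learn from the next-step error
    if k % 10 == 0:
        print(k, np.max(np.abs(e)))           # error shrinks over trials
```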

  16. Hessian Schatten-norm regularization for CBCT image reconstruction using fast iterative shrinkage-thresholding algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xinxin; Wang, Jiang; Tan, Shan

    2015-03-01

    Statistical iterative reconstruction in cone-beam computed tomography (CBCT) uses prior knowledge to form different kinds of regularization terms. Total variation (TV) regularization has shown state-of-the-art performance in suppressing noise and preserving edges. However, it produces the well-known staircase effect. In this paper, a method that involves second-order differential operators was employed to avoid the staircase effect. The ability to avoid the staircase effect lies in the fact that higher-order derivatives avoid over-sharpening regions of smooth intensity transitions. Meanwhile, a fast iterative shrinkage-thresholding algorithm was used for the corresponding optimization problem. The proposed Hessian Schatten norm-based regularization keeps many of the favorable properties of TV, such as translation and scale invariance, while avoiding the staircase effect that appears in TV-based reconstructions. The experiments demonstrated the advantage of the proposed algorithm over the TV method, especially in suppressing the staircase effect.
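
    The FISTA machinery referred to above is shown below on an l1-regularized toy problem, whose proximal map is plain soft-thresholding; the paper instead evaluates a Hessian Schatten-norm proximal step inside the same accelerated loop. Problem sizes and the regularization weight are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 80, 200
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
b = A @ x_true + 0.01*rng.normal(size=m)

lam = 0.02
L = np.linalg.norm(A, 2)**2          # Lipschitz constant of the gradient
soft = lambda v, t: np.sign(v)*np.maximum(np.abs(v) - t, 0.0)

x = z = np.zeros(n)
t_k = 1.0
for _ in range(300):
    x_new = soft(z - (A.T @ (A @ z - b))/L, lam/L)   # proximal gradient step
    t_new = 0.5*(1 + np.sqrt(1 + 4*t_k**2))
    z = x_new + ((t_k - 1)/t_new)*(x_new - x)        # Nesterov momentum
    x, t_k = x_new, t_new
print(np.flatnonzero(np.abs(x) > 0.05).size, "nonzeros recovered")
```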

  17. Connector adapter

    NASA Technical Reports Server (NTRS)

    Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)

    2007-01-01

    An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.

  18. Statistical Reference Datasets

    National Institute of Standards and Technology Data Gateway

    Statistical Reference Datasets (Web, free access). The Statistical Reference Datasets project is also supported by the Standard Reference Data Program. The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software.

  19. Adaptive sampler

    DOEpatents

    Watson, Bobby L.; Aeby, Ian

    1982-01-01

    An adaptive data compression device for compressing data having variable frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
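
    A software sketch of the idea, assuming block-wise processing in place of the patented filter bank and clock circuitry: each block's spectral content decides the rate at which its samples are kept. The cutoff frequency, block length and decimation factors are illustrative only.

```python
import numpy as np

def choose_decimation(block, fs, f_hi):
    """Keep the full rate only when the block has energy above f_hi."""
    spec = np.abs(np.fft.rfft(block))**2
    freqs = np.fft.rfftfreq(block.size, 1/fs)
    hi_frac = spec[freqs > f_hi].sum() / spec.sum()
    return 1 if hi_frac > 0.01 else 4

fs = 1000.0
t = np.arange(0, 2.0, 1/fs)
# signal switches from 5 Hz to 200 Hz halfway through
sig = np.where(t < 1.0, np.sin(2*np.pi*5*t), np.sin(2*np.pi*200*t))

stored = []
for start in range(0, sig.size, 256):         # analyze block by block
    blk = sig[start:start+256]
    d = choose_decimation(blk, fs, f_hi=50.0)
    stored.append(blk[::d])                   # store at 1/d of the input rate
print("compressed samples:", sum(len(b) for b in stored), "of", sig.size)
```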

  20. Adaptive sampler

    DOEpatents

    Watson, B.L.; Aeby, I.

    1980-08-26

    An adaptive data compression device for compressing data having variable frequency content is described, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.